The use of data reduction technologies such as compression and deduplication to reduce storage costs is nothing new. Tape drives have used compression for decades to increase backup data densities on tape, while many modern deduplicating backup appliances use compression and deduplication to shrink backup data stores. Even a select number of existing HDD-based storage arrays use data compression and deduplication to minimize the footprint of the large amounts of file data stored in archives or on network-attached file servers.
There is backup and then there is backup. To meet their backup and recovery needs, today’s organizations need to verify that the backup appliance they select includes the features required to protect their environment today and positions them to meet their needs into the foreseeable future. In this third installment of DCIG’s interview with STORServer President Bill Smoldt, he describes the new must-have features that backup appliances must offer.
One of the more difficult tasks for anyone deeply involved in technology is seeing the forest for the trees. Those responsible for supporting the technical components that make up today’s enterprise infrastructures often find it harder still to step back and recommend which technologies are the right choices for their organization going forward. While there is no one right answer that applies to all organizations, five (5) technologies, some new and some older technologies getting a refresh, merit prioritization by organizations in the coming months and years.
Anyone who is close to backup recognizes that some types of data deduplicate better than others. However, translating that understanding of the environment into meaningful backup policies is nearly impossible, since doing so is both complicated and time consuming. The new Sepaton VirtuoSO platform sidesteps this problem by choosing the best form of deduplication for each backup stream on the fly. In this third part of my interview series with Sepaton’s Director of Product Management, Peter Quirk, we discuss how the VirtuoSO platform detects the nature of incoming backup data and then automatically invokes the best deduplication method for that data.
A trend that DCIG sees among more of the new products entering the enterprise space is the tendency to take the best of what has been developed previously and combine it with new technologies that meet the emerging requirements of today’s organizations. The new VirtuoSO offering from Sepaton reflects this broader industry trend. In this second part of my interview series with Sepaton’s Director of Product Management, Peter Quirk, we discuss which features Sepaton brought forward from its existing S2100 product line and which new features its VirtuoSO platform introduces.
Ever since using disk as a preferred backup target gained momentum in the late 2000s, there have been those who opine that disk’s life in this role would be short-lived. But the providers who deliver disk-based backup solutions and are betting their futures on them see no slowdown in adoption. In this first interview with Sepaton’s Director of Product Management, Peter Quirk, we discuss how databases and virtual machines (VMs) are just beginning to take full advantage of the benefits that disk offers as a backup target.
DCIG is pleased to announce the availability of its DCIG 2013 Midrange Deduplicating Backup Appliance Buyer’s Guide. In this Buyer’s Guide, DCIG weights, scores and ranks 46 midrange deduplicating backup appliances from ten (10) different providers. Like all previous DCIG Buyer’s Guides, this Buyer’s Guide provides the critical information that organizations of all sizes need when selecting a midrange deduplicating backup appliance to help protect their fast-growing, data-intensive applications.
DCIG is pleased to announce the availability of its DCIG 2013 Midrange Deduplicating Backup Appliances Buyer’s Guides. In these two Buyer’s Guides, DCIG weights, scores and ranks 20 and 29 midrange deduplicating backup appliances respectively from nine (9) different providers.
As DCIG prepares to release a number of Buyer’s Guides on Midrange Deduplicating Backup Appliances in the next few weeks, we thought we would share some of the observations that came out of our evaluation of these products. As with all the Buyer’s Guides it prepares, DCIG did a comprehensive review of available deduplicating backup appliances in anticipation of releasing these Guides. In doing so, it found that deduplication has moved well beyond the breakthrough technology it was a decade or so ago to offer an assortment of features that leaves plenty for organizations to consider when buying one of these appliances.
It was not that long ago – no more than five (5) years – that a storage administrator who could configure a storage system to deliver average response times of around 2 milliseconds for any application was a hero to everyone he or she supported. Fast forward to today’s hybrid and all-flash memory systems, and 2 millisecond response times are the new “slow.” In this first installment of my interview series with Tegile Systems’ VP of Marketing, Rob Commins, we discuss how hybrid and all-flash memory systems are redefining the “Gold” standard for performance in storage systems.
This past week I received an email from someone asking for my help in their process of buying a backup appliance. This individual had just downloaded the DCIG 2012 Backup Appliance Buyer’s Guide but, due to the number of models included in the Buyer’s Guide (over 60), was looking for some recommendations from me as to which one to buy. While I sent this individual a list of backup appliances to look at more closely, it brought to my attention that there are five questions every organization should ask and answer before buying a backup appliance.
It’s no secret that ‘Big Data’ is becoming a ‘Big Problem’ for organizations from a data and storage management perspective. However, what organizations may fail to realize is that the best way to solve their Big Data problems is NOT to mindlessly throw more resources at them. Rather, it is to look at Big Data more strategically and then tackle the data management problems it creates in one fell swoop, using software like CommVault® Simpana® and its OnePass technology.
In this fourth and final part of our interview series with GreenBytes CEO Bob Petrocelli, we hear about a three-second failover between canisters used in Solidarity, a solid-state storage array solution. If you’re not looking, says Petrocelli, you could miss the failover.
In the first part of our interview series with GreenBytes CEO Bob Petrocelli, we got a glimpse into the company’s groundwork with solid-state drives (SSDs) that led to the development of Solidarity. It is a high-availability (HA), globally optimized SSD storage array solution receiving a great deal of attention because it does away with magnetic drives and delivers a massive 200,000-plus IOPS performance. Today I resume my interview with Petrocelli as he lays out the configurations and processes that make Solidarity hum.
Inline deduplication data storage solutions provider GreenBytes, Inc. recently released a new high-availability (HA), globally optimized solid-state drive (SSD) storage array solution called Solidarity that is garnering a lot of attention. Solidarity offers inline, real-time deduplication and compression via a dual-controller unit outfitted entirely with SSD storage. The buzz over Solidarity is in large part because of its 200,000-plus IOPS performance, with deduplication and compression enabled.
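For readers unfamiliar with the mechanics, the following toy sketch shows what “inline deduplication and compression” means in principle: each incoming chunk is fingerprinted and checked against an index before anything is written, so duplicate chunks are never stored, and unique chunks are compressed on the way in. The class, its fixed 4 KiB chunking, and its in-memory index are illustrative assumptions; a real array like Solidarity uses far more sophisticated chunking, indexing and metadata handling.

```python
import hashlib
import zlib

class InlineDedupStore:
    """Toy inline deduplication + compression store using fixed 4 KiB chunks."""

    CHUNK = 4096

    def __init__(self):
        self.chunks = {}    # fingerprint -> compressed chunk payload
        self.raw_bytes = 0  # logical bytes the client has written

    def write(self, data: bytes) -> list:
        """Split data into chunks; store each unique chunk once, compressed.

        Returns a 'recipe' of fingerprints from which the data can be rebuilt.
        """
        recipe = []
        for i in range(0, len(data), self.CHUNK):
            chunk = data[i:i + self.CHUNK]
            fp = hashlib.sha256(chunk).hexdigest()
            if fp not in self.chunks:            # inline check, before writing
                self.chunks[fp] = zlib.compress(chunk)
            recipe.append(fp)
        self.raw_bytes += len(data)
        return recipe

    def read(self, recipe: list) -> bytes:
        """Reassemble the original data from its chunk recipe."""
        return b"".join(zlib.decompress(self.chunks[fp]) for fp in recipe)
```

Doing this inline, rather than as a post-process, is what makes the 200,000-plus IOPS figure notable: the fingerprinting and compression work sits directly in the write path.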
Recently I have had a number of engaging conversations about how backup management is evolving. On the upside, many of the challenges associated with managing backup are definitely on the decline. But some aspects of managing backup are probably never going away, and organizations of every size need to be prepared to manage them indefinitely.
Companies are adopting server virtualization at an accelerating rate each year and, as they do, the need for performance from the back-end hardware grows right along with it. To accommodate this, enterprises need a way to increase the I/O throughput of their virtual machines (VMs). Today I continue my blog series with Virsto Software CEO Mark Davis, in which we discuss the VM I/O blender problem, what it is, and how Virsto boosts VM performance using a hypervisor plug-in that is up to ten times faster than what VM hypervisors natively provide.
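The I/O blender effect itself is easy to demonstrate. In the sketch below (an illustration of the general phenomenon, not of Virsto’s product; the round-robin merge and address ranges are assumptions for the example), each VM issues perfectly sequential reads, yet once the hypervisor multiplexes the streams onto shared storage, the combined stream looks completely random to the disk:

```python
import itertools

def interleave_vm_io(vm_streams):
    """Round-robin merge of each VM's I/O requests, roughly as a hypervisor
    multiplexes them onto shared storage."""
    merged = []
    for batch in itertools.zip_longest(*vm_streams):
        merged.extend(addr for addr in batch if addr is not None)
    return merged

def sequential_fraction(stream):
    """Fraction of requests whose block address immediately follows the
    previous request's address (1.0 = fully sequential)."""
    hits = sum(1 for a, b in zip(stream, stream[1:]) if b == a + 1)
    return hits / max(len(stream) - 1, 1)

# Three VMs, each issuing perfectly sequential reads in its own address range.
vm_streams = [list(range(base, base + 100)) for base in (0, 10_000, 20_000)]
blended = interleave_vm_io(vm_streams)
```

Each individual stream is 100% sequential, while the blended stream is 0% sequential: exactly the workload shift that turns cheap streaming I/O into expensive random I/O on the back end, and the problem a caching or log-structuring layer in the hypervisor aims to undo.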
A little over two (2) years ago I did an interview with ExaGrid Systems’ CEO Bill Andrews, at about the same time that EMC and NetApp were engaged in a bidding war over Data Domain. In that interview Andrews expressed concern about EMC winning the battle for Data Domain and how that might negatively impact ExaGrid. But as EMC and ExaGrid both announced overwhelmingly positive numbers this week, it turns out that EMC’s acquisition of Data Domain has served both companies well.
About a month ago I started putting some thought and research into what might emerge as the top trends of 2012, keeping a notebook next to my keyboard so I could jot down ideas as they struck me. Now, as I look at the four trends that made today’s short list, on the surface they ended up being ones that I hear, write and talk about every day.
Before DCIG announces its top three blog entries of 2011 tomorrow, we thought we would do something different this year and take a look at some other blog entries that garnered a great deal of attention throughout 2011 but not quite enough to reach the Top 10. That being the case, an honorable mention for these blog entries was in order. Further, what is notable about these entries is that, with one exception, they were all published in 2011.