There are two assumptions that IT professionals should be cautious about making when evaluating cloud data protection products. The first is that all products share some feature or features in common. The second is that one product possesses a feature or characteristic that no other product on the market offers. As DCIG's recent research into cloud data protection products shows, neither assumption holds, even for features such as deduplication, encryption, and replication that one might expect these products to adopt universally and in comparable ways.
If you assume that leading enterprise midrange all-flash arrays (AFAs) support deduplication, your assumption would be correct. But if you assume that these arrays implement and deliver deduplication in the same way, you would be mistaken. These differences should influence any all-flash array buying decision, as deduplication's implementation affects the array's total effective capacity, performance, usability and, ultimately, your bottom line.
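The capacity math behind that bottom-line point can be sketched in a few lines. The array cost, raw capacity, and deduplication ratios below are hypothetical placeholders, not vendor figures; the sketch only shows how a different dedup implementation changes cost per effective terabyte:

```python
# Hypothetical illustration: how a deduplication ratio changes an
# array's total effective capacity and cost per effective terabyte.
# All figures below are assumptions, not vendor specifications.

def effective_capacity_tb(raw_tb: float, dedup_ratio: float) -> float:
    """Effective (logical) capacity an array can present after dedup."""
    return raw_tb * dedup_ratio

def cost_per_effective_tb(array_cost: float, raw_tb: float,
                          dedup_ratio: float) -> float:
    """Dollars per terabyte of effective capacity."""
    return array_cost / effective_capacity_tb(raw_tb, dedup_ratio)

# Two arrays with identical raw flash but different dedup results:
print(cost_per_effective_tb(100_000, 50, 3.0))  # 3:1 -> ~$667/TB
print(cost_per_effective_tb(100_000, 50, 5.0))  # 5:1 -> $400/TB
```

The same raw capacity at a 5:1 rather than 3:1 reduction ratio cuts the cost per usable terabyte by roughly 40 percent, which is why implementation differences matter to the buying decision.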
Enterprises now demand higher levels of automation, integration, simplicity, and scalability from every component deployed into their IT infrastructure, and the integrated backup appliances covered in DCIG's forthcoming Buyer's Guide Editions are a clear outgrowth of those expectations. Intended for organizations that want to protect applications and data and keep them behind corporate firewalls, these backup appliances come fully equipped, from both hardware and software perspectives, to do so.
Hybrid storage arrays, which dynamically place data in storage pools that combine flash memory and HDDs, are rapidly expanding their market share in the enterprise space. These arrays use the latest generation of hardware, including multi-core CPUs and DRAM and flash caches, to offer high levels of performance and inline data optimization. Among them, Oracle's ZS4-4 stands out: its underlying architecture and its unique ability to integrate with Oracle Database 12c make it a superior storage platform for accelerating Oracle Database performance and reducing storage capacity requirements.
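The dynamic placement idea behind hybrid arrays can be sketched with a toy promotion policy. The threshold, tier size, and class names here are invented purely for illustration and bear no relation to the ZS4-4's actual caching algorithms:

```python
# Toy sketch of dynamic data placement in a hybrid pool: blocks that
# are read often get promoted to a small flash tier; everything else
# is served from HDD. Thresholds and tier sizes are made up.
from collections import Counter

FLASH_SLOTS = 2      # hypothetical flash-tier capacity, in blocks
PROMOTE_AFTER = 3    # hypothetical read count that marks data "hot"

class HybridPool:
    def __init__(self):
        self.reads = Counter()
        self.flash = set()

    def read(self, block: str) -> str:
        """Serve a read and report which tier satisfied it."""
        self.reads[block] += 1
        if (block not in self.flash
                and self.reads[block] >= PROMOTE_AFTER
                and len(self.flash) < FLASH_SLOTS):
            self.flash.add(block)    # promote the hot block to flash
        return "flash" if block in self.flash else "hdd"

pool = HybridPool()
for _ in range(3):
    pool.read("hot")                 # third read triggers promotion
print(pool.read("hot"), pool.read("cold"))  # prints: flash hdd
```

Real arrays make these decisions per cache line or block with far more sophisticated heuristics, but the effect is the same: frequently accessed data is served at flash latency while cold data stays on cheaper HDDs.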
Rarely does a day go by at DCIG when deduplication is not mentioned in some context. Instead of storing every chunk of data, deduplication removes redundant chunks and stores each unique chunk just once. Offering up to 20x reductions in data, deduplication directly equates to lower backup storage costs for almost any size data center, as less hardware is needed to store backups.
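A minimal sketch of that chunk-level idea, using fixed-size chunks and SHA-256 fingerprints (both illustrative choices; real appliances differ in chunking strategy and hash selection):

```python
# Minimal fixed-size chunk deduplication sketch: each chunk is
# fingerprinted, and only chunks with unseen fingerprints are stored.
import hashlib

CHUNK_SIZE = 4096  # illustrative chunk size

def dedup_store(data: bytes, store: dict) -> list:
    """Split data into chunks, store each unique chunk once, and
    return the list of fingerprints (the 'recipe' to rebuild data)."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        fp = hashlib.sha256(chunk).hexdigest()
        if fp not in store:          # redundant chunk: skip storing
            store[fp] = chunk
        recipe.append(fp)
    return recipe

store = {}
backup = b"A" * CHUNK_SIZE * 10      # ten identical chunks
recipe = dedup_store(backup, store)
print(len(recipe), len(store))       # prints: 10 1
```

Ten logical chunks collapse to one stored chunk plus ten small fingerprints, which is the mechanism behind the large reduction ratios these products advertise.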
DCIG is pleased to announce the availability of its DCIG 2014-15 Deduplicating Backup Appliance Buyer's Guide, which weights, scores and ranks over 100 features on 47 different deduplicating backup appliances from 10 different providers. This Buyer's Guide provides the critical information that organizations of all sizes need when selecting deduplicating backup appliances to protect environments ranging from remote offices to enterprise data centers.
Physical, purpose-built deduplicating backup appliances have found their way into many enterprise data centers as they expedite installation and simplify ongoing management of backup data. However, there is a growing business case for virtual appliances that offer the benefits of deduplication without the associated hardware costs. To determine when, and if, a virtual appliance is the correct choice for a specific office or environment, there are key factors that enterprises must evaluate.
The use of data reduction technologies such as compression and deduplication to reduce storage costs is nothing new. Tape drives have used compression for decades to increase backup data densities on tape, while many modern deduplicating backup appliances use compression and deduplication to also reduce backup data stores. Even a select number of existing HDD-based storage arrays use data compression and deduplication to minimize data stores for large amounts of file data stored in archives or on network-attached file servers.
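The principle is easy to demonstrate with a general-purpose compressor; Python's zlib stands in here for the specialized hardware and algorithms these drives and appliances actually use, and the sample data is deliberately repetitive:

```python
# Quick illustration of lossless compression on redundant data, the
# same principle tape drives and backup appliances apply at scale.
import zlib

data = b"backup record\n" * 10_000   # highly repetitive sample data
compressed = zlib.compress(data)
ratio = len(data) / len(compressed)
print(f"{len(data)} -> {len(compressed)} bytes ({ratio:.0f}:1)")
```

Backup streams are rarely this uniform, so real-world ratios are lower, but redundant data is exactly what backup workloads produce, which is why compression and deduplication pay off there.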
There is backup and then there is backup. To meet their backup and recovery needs, today's organizations must verify that a selected backup appliance includes the features needed to protect their environment today and positions them to meet their needs for the foreseeable future. In this third installment of DCIG's interview with STORServer President Bill Smoldt, he describes the new must-have features that backup appliances must offer.
It was not that long ago, no more than five years, that if as a storage administrator you could configure a storage system to provide average response times of around 2 milliseconds for any application, you were a hero to everyone you supported. Fast forward to today's hybrid and all-flash memory systems, and 2 millisecond response times are the new "slow." In this first installment of my interview series with Tegile Systems' VP of Marketing, Rob Commins, we discuss how hybrid and all-flash memory systems are redefining the "Gold" standard for storage performance.
Inline deduplication data storage solutions provider GreenBytes, Inc. recently released Solidarity, a new high-availability (HA), globally optimized solid-state drive (SSD) storage array that is garnering a lot of attention. Solidarity offers inline, real-time deduplication and compression via a dual-controller unit outfitted entirely with SSD storage. The buzz over Solidarity is in large part because of its 200,000-plus IOPS performance, achieved with deduplication and compression enabled.
While attending SNW last week and receiving a briefing from WhipTail Technologies CTO James Candelaria, Bob Farkaly, Director of Marketing for Exar's Storage System Products, was never far away. Because of the brevity of my meeting with Candelaria, I never really had a chance to talk formally with Farkaly at the show and connect all of the dots between Exar and WhipTail, other than to assume that some Exar component was integral to WhipTail's Racerunner solid state storage appliance.
Every week I talk to a lot of people within the storage industry: end users, other analysts, resellers, public relations professionals, CEOs, storage engineers, etc. While none of the news I pick up is necessarily enough to substantiate a blog entry on its own, when aggregated it becomes interesting and noteworthy. In fact, I was talking to Don Jennings at Lois Paul and Partners (LPP) about this yesterday, and he suggested that I post a weekly blog entry recapping what I hear and do. Since Fridays are typically slow days during the summer months, and anyone who is anyone is looking to cut out a little early on Fridays anyway, I thought I'd give everyone a reason to check out the DCIG website before they do.
I have made no secret about my skepticism of using dual controller architectures for inline deduplication, specifically at the enterprise level. My concern was that the workloads in enterprise backup environments would overwhelm the capacity of just two controllers and negatively impact backup jobs. However, a recent briefing I had with Data Domain's VP of Product Management, Brian Biles, has started to change my perspective on why inline deduplication using dual controller architectures is becoming a more viable option for enterprise environments.
The STN-6000 Series resides in the data path on corporate LANs between production servers and corporate file servers and compresses the data stored on those file servers. While it supports any file server that serves CIFS or NFS traffic (which is pretty much all file servers) and is available in models suitable for departments, organizations using enterprise network filers like the EMC Celerra, HP StorageWorks 9100 or NetApp FAS are likely to see the greatest benefit. The simple reason is that organizations need to generate enough savings in capacity to justify the cost of introducing the 6000 Series into their environment.
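That break-even logic can be sketched as simple arithmetic. Every figure below (appliance cost, filer capacity, compression ratio, cost per terabyte) is a hypothetical placeholder, not an STN-6000 price or specification:

```python
# Back-of-envelope break-even check for an inline compression
# appliance: it pays for itself when the filer capacity it frees up
# is worth more than the appliance costs. All figures hypothetical.

def breaks_even(appliance_cost: float, filer_tb: float,
                compression_ratio: float, cost_per_tb: float) -> bool:
    """True if the capacity freed by compression covers the cost."""
    freed_tb = filer_tb * (1 - 1 / compression_ratio)
    return freed_tb * cost_per_tb >= appliance_cost

print(breaks_even(50_000, 10, 2.0, 3_000))   # small department
print(breaks_even(50_000, 100, 2.0, 3_000))  # enterprise filer
```

At 2:1 compression, a 10 TB departmental filer frees only 5 TB of capacity, while a 100 TB enterprise filer frees 50 TB, which is why the larger environments justify the appliance far more easily.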
Israel's The Marker and Globes Online are reporting this morning that IBM has made it official that it is acquiring Diligent Technologies. Though the two sources differ as to the terms of the deal (Globes Online reports $200 million while The Marker reports $168 million), my sources in Israel's IT community tell me that the $168 million figure is the more accurate of the two. Under the terms of the deal, IBM will pay $160 million for Diligent's intellectual property, while the balance will be used to keep some of the existing employees onboard.
I have expressed skepticism in the past about Diligent Technologies' ProtecTIER™ in light of the fact that its primary go-to-market strategy targets enterprise open systems and mainframe environments. This strategy prompts me to exercise extreme diligence about their technology and architecture before endorsing it. The reason? The speeds and feeds that ProtecTIER is likely to encounter in enterprise shops are unlike what inline deduplication appliances experience in small and midsize businesses.