DCIG is pleased to announce the availability of the 2016-17 Hybrid Cloud Backup Appliance Buyer’s Guide developed from the backup appliance body of research. As core business processes become digitized, the ability to keep services online and to rapidly recover from any service interruption becomes a critical need. Given the growth and maturation of cloud services, many organizations are exploring the advantages of storing application data with cloud providers and even recovering applications in the cloud.
A storage decision that many small, midsize, and large enterprise organizations face is what type of array to host their production data on. This often comes down to a choice between an all-flash and a hybrid storage array. Since most organizations do not have the luxury of saying, “Money is no object,” the majority are, for now, selecting hybrid storage arrays to get flash-like performance for their most active application data while using disk to store the bulk of it. It is as organizations evaluate hybrid storage arrays that several key factors come into play.
An Omaha city employee recently gained unwanted public visibility after sending twelve filing cabinets containing a hundred years of irreplaceable original building permits from the basement of City Hall to the county dump. It turns out that the head of the permits and inspections division decided to get rid of the cabinets while cleaning out the division’s basement storage area, not realizing that other city employees regularly pulled the permits, which dated from the 1880s through the 1980s. The division was also apparently unaware that a local preservation group was developing a plan to move the permits to a new facility in order to make them more secure and accessible to the public.
Like Omaha’s City Hall, businesses often face what appear to be incompatible priorities. IT departments are expected to keep spending in check and know that only 10-20 percent of data is ever accessed again more than 60 days after its creation. But knowing which data to keep available and which data to delete or archive can be a challenge. This type of dilemma is one of many drivers in the development of a new group of storage systems: public cloud gateways.
At TechEd 2014 in Houston, TX this week, Microsoft made it clear that it is no longer content to just send customers to storage array vendors to meet their storage needs, especially when it comes to embracing a cloud-oriented approach to infrastructure. In the process of improving Windows storage technology, Microsoft is effectively delivering the benefits of–and addressing the barriers to–the adoption of server SAN technology.
DCIG has concluded its analysis of 41 hybrid storage arrays for the forthcoming DCIG 2014 Hybrid Storage Array Buyer’s Guide. As we reflected on the data we had collected, five features stood out as distinguishing hybrid storage arrays from one another, and from both all-flash arrays and traditional arrays.
Converged infrastructures are emerging as the next “Big Thing” in enterprise datacenters with servers, storage and networking delivered as a single SKU. Yet what providers are beginning to recognize – and what organizations should begin to expect – is that unprecedented jumps in application performance and resource optimization are now possible. The first examples of these jumps are seen in today’s ZS3 Storage Systems announcement from Oracle as it raises the bar in terms of how Oracle Database performance and resource utilization can be delivered while ushering in a new era of application-storage convergence.
As we were researching arrays for inclusion in the DCIG 2013 Flash Memory Storage Array Buyer’s Guide we kept encountering an intriguing group of companies that had designed–or were developing–storage arrays from the ground up to realize the performance benefits of an all flash array, but with storage capacities and price points that would bring the benefits of flash memory storage to a broader range of businesses. The resulting hybrid storage arrays achieve this balance of performance, capacity, and cost by intelligently combining flash memory with large capacity disk drives in a single storage system.
As we have been working on the development of a DCIG Buyer’s Guide for Hybrid Storage Arrays, it has been interesting to see the different approaches that the vendors are taking as they seek to leverage flash memory plus traditional hard drives to deliver previously unheard of IOPS and ultra-low latencies at a cost per GB that makes sense to a broad range of businesses. The “secret sauce” varies from vendor to vendor, but in every case it involves sophisticated caching and/or automated storage tiering software.
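To make the idea behind that “secret sauce” concrete, here is a deliberately simplified sketch of automated storage tiering. It is not any vendor’s actual implementation; the class name, thresholds, and promotion policy are all illustrative assumptions. The core idea is just this: track per-block access frequency, promote hot blocks to the flash tier, and demote cold ones when flash fills up.

```python
from collections import defaultdict

class TieringEngine:
    """Toy model of automated storage tiering. Blocks whose access
    count crosses a threshold are promoted from disk to flash; when
    flash is full, the coldest flash-resident block is demoted.
    All names and thresholds are illustrative assumptions."""

    def __init__(self, flash_capacity_blocks, promote_threshold=3):
        self.flash = set()                  # block IDs resident on flash
        self.capacity = flash_capacity_blocks
        self.threshold = promote_threshold
        self.access_counts = defaultdict(int)

    def read(self, block_id):
        self.access_counts[block_id] += 1
        if block_id in self.flash:
            return "flash"                  # low-latency path
        if self.access_counts[block_id] >= self.threshold:
            self._promote(block_id)         # future reads hit flash
        return "disk"                       # high-latency path

    def _promote(self, block_id):
        if len(self.flash) >= self.capacity:
            # demote the least-accessed block currently on flash
            coldest = min(self.flash, key=lambda b: self.access_counts[b])
            self.flash.remove(coldest)
        self.flash.add(block_id)

engine = TieringEngine(flash_capacity_blocks=2)
tiers = [engine.read("blk-7") for _ in range(4)]
# the first three reads are served from disk; the third access
# triggers promotion, so the fourth read is served from flash
```

Real arrays, of course, operate on multi-megabyte extents, weight recency as well as frequency, and move data asynchronously in the background, but the promote/demote loop above is the essence of what the tiering software automates.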
Over the years big data has crept into the everyday life of systems administrators. Attempts to solve the big data problem in both block and file storage emerged as data management software. While data management software struggled to get a footing, deduplication and compression took off, stunting data management software’s growth.
Deduplication and compression technologies have well-known capabilities in both the storage and information disciplines. However, they fall short in one significant way: these technologies do not ease the burden of information management.
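That distinction is easy to see in a minimal sketch of block-level deduplication (the `dedupe` function and chunk sizes here are illustrative assumptions, not any product’s design). Deduplication operates purely on byte patterns: identical chunks are stored once and referenced many times, which shrinks capacity but tells you nothing about what the data means, who owns it, or whether it should be retained — the questions information management has to answer.

```python
import hashlib

def dedupe(chunks):
    """Toy content-addressed store: each unique chunk is stored once,
    keyed by its SHA-256 digest; duplicate chunks add only a digest
    reference to the recipe. Illustrative sketch only."""
    store = {}    # digest -> unique chunk data (physical storage)
    recipe = []   # ordered digests to reconstruct the logical stream
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)   # store payload only once
        recipe.append(digest)
    return store, recipe

chunks = [b"alpha", b"beta", b"alpha", b"alpha"]
store, recipe = dedupe(chunks)
# four logical chunks, only two stored physically; the stream is
# still fully reconstructable from the recipe
assert b"".join(store[d] for d in recipe) == b"".join(chunks)
```

Notice that the store knows `b"alpha"` occurs three times, but nothing about whether those bytes are a contract, a log file, or junk — which is exactly why capacity optimization is not a substitute for information management.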
Here is the million-dollar and, in many cases, multi-million-dollar question that every enterprise of almost any size or consequence is asking or will be asking now or in the next few years: “Are Dell and HP serious about enterprise storage?” Or are they inclined to treat storage as they have in the past – as a bolt-on accessory to a server sale?
Automated storage tiering (AST) seems to be getting ever more attention as more organizations move from physical to virtualized environments and look to use networked storage systems with AST to support them. But AST carries its own set of baggage and can potentially create as many problems as it solves, not the least of which is that it may not be as automated for all application workloads as some vendors may lead you to believe.
Of all the topics that I thought I might be writing about after my first day in attendance at the fall Storage Networking World (SNW) conference in 2010, I did not think tape would be it. In fact, it was not even on my radar screen walking into the show. But after meeting with the Ultrium LTO team yesterday at SNW, it is clear that tape is back in the storage conversation and those arguing for its broader adoption and continued use have much more to talk about than its power savings, larger capacities and faster speeds.
Today DCIG is pleased to announce that through a special licensing agreement with Nexsan Technologies, the 2010 DCIG Midrange Array Buyer’s Guide is now available for a free download on Nexsan’s website for a limited time. This is a full copy of the 105 page Buyer’s Guide exactly as it was originally published by DCIG with no additions, deletions or edits.
This week I am spending a couple of days at Compellent’s annual C-Drive conference in Minneapolis, MN, where about 500 users, value added resellers (VARs) and Compellent sales reps are in attendance. Since a couple of years have passed since I attended the last one, I thought I would make the 6-hour drive from Omaha to Minneapolis to catch up on the latest goings-on at Compellent and gain some insight into how they plan to recover from their latest earnings stumble.
Last month I announced that DCIG is putting together its first annual Midrange Array Buyer’s Guide. Since then a lot has happened, and over the last two weeks responses to the questionnaires that I sent out to over 20 storage providers representing over 100 midrange array models have been pouring in. So while it is still too early to announce any winners and results are still being tabulated, I am prepared to share some preliminary findings in the areas of total storage capacity and cache sizes on midrange arrays.
One thing that struck me was that Compellent users really understand what a game-changing technology virtualization is. I sat through two or three presentations during the two days of the conference (May 7-8) and also met with a fair number of users (~10) between sessions, over meals and at the evening events, and all of them were pretty stoked about the capabilities that virtualization in general, and Compellent specifically, delivers.