In May 2010 DCIG released its first-ever Midrange Array Buyer’s Guide, in which we covered 70+ models from over 20 vendors. Fast forward just three short years and DCIG is on track to release not one, not two, not three, not even four Buyer’s Guides on enterprise midrange arrays, but five distinct Buyer’s Guides on this topic! So what has changed in just three years that DCIG feels the need to produce so many? To understand this requires a closer look at the forces that are driving the evolution and revolution in enterprise midrange arrays.
In 2010 when DCIG released its first Midrange Array Buyer’s Guide, the midrange array market was already very mature. There were multiple providers of storage arrays (over 30), multiple models from these providers (nearly 150 models) and an increasingly sophisticated set of software available on these arrays.
The storage management software (or firmware, as it is commonly called) was generally not as sophisticated as that found on larger enterprise arrays (the EMC VMAX or the HDS VSP). However, it certainly offered many advanced features. Even three years ago, automated storage tiering, snapshots, replication, thin provisioning and many others were commonly found on these arrays.
Despite the maturity of midrange arrays, a lot has changed in the last three years, enough that DCIG now sees it as both necessary and justifiable to produce five Buyer’s Guides in a single year on enterprise midrange arrays. In short, there are two specific forces driving midrange array segmentation. These are:
1. Unstructured Data Growth/Big Data. As an analyst I regularly run across statistics like 30%, 50%, 80%, and, in some extreme cases, even 400% data growth in some environments. However, organizations are feeling the impact of this data growth in real time and, they assure me, their storage budgets are growing nowhere near as fast as their data.
If they get single digit increases in their budgets year-over-year, they are thrilled. So their annual challenge is to stretch single-digit budget increases to cover double- and triple-digit percentage data growth.
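To make that squeeze concrete, here is a minimal sketch, using assumed, illustrative growth rates (a 5% budget increase against 50% data growth), of how far the effective cost per terabyte stored must fall for the budget to keep pace:

```python
# Illustrative figures only: a single-digit budget increase meeting
# double-digit data growth (both rates are assumptions, not DCIG data).
budget_growth = 0.05   # budget grows 5% year-over-year
data_growth = 0.50     # data under management grows 50%

# Next year's affordable cost per TB relative to this year's
cost_per_tb_ratio = (1 + budget_growth) / (1 + data_growth)
required_drop = 1 - cost_per_tb_ratio

print(f"Cost per TB must drop by about {required_drop:.0%}")
# prints: Cost per TB must drop by about 30%
```

Even in this modest scenario, the per-terabyte cost of storage has to fall by roughly a third every year, which is exactly the pressure pushing organizations toward the tiered, mixed-media architectures described below.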
One way in which they are doing so – especially small and midsized organizations – is by turning to Unified Storage Arrays (access a free download of the DCIG Buyer’s Guide on this topic here). These can be tuned to achieve high capacity, high performance or some combination of both. This is done by deploying a mix of high-performance storage capacity (flash memory/SSDs) and higher-capacity, lower-performing, more economical 3 and 4 TB SATA drives in a single array.
Then, so that any application can access these various types of storage capacity, these arrays make the storage accessible over any available storage networking protocol. These could be high-performance SAN protocols (8 Gb FC or 10 Gb Ethernet) or 1 Gb NAS protocols (CIFS or NFS). In this way, organizations can buy a single storage array, configure it with the type of storage and networking interfaces they need to accommodate their mixed needs of unstructured data growth and performance-hungry applications, and do so economically.
Enterprises are also turning to unified storage arrays but in these environments, they are often architected as scale-out storage arrays. In these configurations, organizations can add or even remove performance, capacity or both on an ad hoc basis with minimal effort and without increasing their ongoing management workload. More notably, these tend to scale to much higher capacities (into the petabytes) whereas other midrange arrays only scale into the hundreds of terabytes.
2. Performance Hungry Apps. Even as recently as a few years ago, if an array – any array – did read or write I/O in as little as a few milliseconds (around 5 ms) it was considered blazing fast. Today it seems 5 ms response times will barely get you in the performance conversation when discussing databases.
Further, as organizations virtualize more of their applications and put more VMs on fewer physical machines, they put a lot of pressure on storage arrays to keep up. Aggravating the situation, server and networking technologies have experienced ten-fold or greater increases in performance over the last few years while storage arrays have seen only incremental gains.
This has led to the emergence of two different types of midrange storage arrays – flash memory and hybrid – that deliver the 2 – 10x increases in performance these arrays have needed to keep up with application demands and improvements in other parts of the technology stack.
Both of these arrays use flash memory and/or solid state drives (SSDs) to accelerate performance. The main difference between the two is that flash memory storage arrays only offer flash memory as a storage option while hybrid storage arrays use both flash memory and spinning disk to store data. As a result, flash memory arrays are generally faster though more expensive than hybrid storage arrays.
Due to their cost and more limited capacities, the primary use cases for both of these arrays are specific high-performance application workloads. However, as their capacities increase, flash memory prices drop, and other technologies such as compression, deduplication and thin provisioning are implemented on these arrays, expect them to be used more widely for other applications.
The combination of these two forces has led to dramatic changes in the architecture of enterprise midrange arrays. While one can still get big boxes full of spinning disks connected via FC to servers, there are now many more options than were available in the past. They can be capacity focused. They can be performance focused. Storage can be delivered over a number of storage networking protocols. Combined, these options are leading to an evolution – and some would even say a revolution – in how midrange arrays are architected and what they will look like in the years to come.