10 Characteristics That Help to Define Today’s High End Storage Arrays

It has been said that everyone knows what “normal” is but that it is often easier to define “abnormal” than it is to define “normal.” To a certain degree that axiom also applies to defining “high end storage arrays.” Everyone just seems to automatically assume that a certain set of storage arrays are in the “high end” category but when push comes to shove, people can be hard-pressed to provide a working definition as to what constitutes a high end storage array in today’s crowded storage space.

Over the last few weeks the analysts at DCIG have certainly wrestled with some of those same issues regarding the definition of a high end storage array. Whereas the highest levels of availability, capacity and performance were once the defining attributes of these arrays, the providers of these arrays can no longer claim that they exclusively deliver these features. Many storage arrays classified as “enterprise midrange” or “midrange” offer similar or even higher levels of availability, capacity and performance than the storage arrays typically classified as “high end.”

This is not to imply that a high end class of arrays does not exist. Such arrays do exist and it is important that organizations and enterprises recognize these arrays for what they are. However the features or characteristics that make them “high end” may, in some cases, differ from even a few years ago. To shed some light on what makes these storage arrays “high end,” DCIG has come up with 10 characteristics that organizations should look for to distinguish between an array that is “high end” and one that is “midrange.”

  1. FICON connectivity to an IBM mainframe. In talking to a number of end users, VARs and vendors, FICON connectivity to IBM mainframes running z/OS is often where the difference between high end and midrange begins and ends. In short, if it does not offer FICON connectivity to a mainframe, it is not a high end storage array.
  2. Fibre Channel (FC) block-based storage connectivity. Absent FICON connectivity, the storage array must minimally offer block-based FC connectivity to even have a shot at being considered a high end storage array. While a number of storage arrays considered high end may support Ethernet block-based protocols such as iSCSI or FCoE (Fibre Channel over Ethernet), support for these protocols alone is not enough to bridge the midrange to high end gulf.
  3. Multiple Active-Active controller/blade/processor pairs. A number of midrange arrays offer an “Active-Active” controller configuration where a pair of controllers permits concurrent access to data on the same backend disk. What differentiates a high end array from a midrange array is the availability of multiple pairs of these Active-Active controllers (also called “blade pairs” or “processor pairs” on some arrays) on the same physical array that are all part of the same logical array configuration.
  4. High levels of cache and capacity. Despite the encroachment on this territory by multiple midrange arrays, high end storage arrays as a group still generally support far higher levels of cache and storage capacity than most midrange arrays. One should generally expect the amount of cache available on a high end storage array to scale into the hundreds if not thousands of GBs and provide support for PBs of storage capacity.
  5. Large number of multi-core processors. The multiple blade/controller/processor pairs in a high end storage array deliver much more than high availability. They also provide access to much higher levels of performance. This becomes critically important in environments that are handling mixed workloads that may include sequential reads, sequential writes and random-access, small-block transactions.
  6. Scale-out and scale-up configurations. Midrange array providers often tout the scale-out or scale-up capabilities of their arrays like they are the best thing since sliced bread. High end storage providers tend to yawn, stretch and say, “It is about time you offer those features on your array.” In other words, scale-out and scale-up are part and parcel to the configuration of every high end storage array.
  7. Detailed system analysis, performance monitoring and troubleshooting. High end storage arrays give organizations unparalleled flexibility to gather and analyze system data. This may then be used to quickly, accurately and confidently pinpoint where a performance bottleneck is occurring or what piece of hardware inside of the storage array is malfunctioning. Most midrange storage arrays do not offer this level of diagnostics or these capabilities to troubleshoot a performance or system issue.
  8. Tested, certified configurations. While midrange array providers also “certify” their arrays with certain OSes and applications, the certification process in my mind for midrange arrays has always been a little suspect. This concern stems from the large number of applications and operating systems for which midrange arrays must be certified and the diverse environments into which they are deployed. Due to the smaller number of application- and OS-specific environments into which high end storage arrays are deployed, enterprises may have a higher level of confidence in the quality and thoroughness of the interoperability testing and the quality of the features available.
  9. Starting list price of $250,000 or higher. All of these features, along with the high levels of capacity, performance and certification, come at a price. While these high end storage arrays may actually be price competitive on a per GB basis with some midrange arrays, you first need an environment that justifies the scale that these high end arrays bring to the table.
  10. Non-disruptive operations across two or more data centers. Many storage arrays offer one or more forms of replication. But what is arguably becoming a defining feature on high end arrays is their ability to deliver synchronous replication to at least two storage arrays and then sync the applications (think VMs) with the underlying replication activities so as to guarantee non-disruptive operation of applications. While this feature was initially designed to deliver disaster recovery, more enterprises are looking to leverage this capability for load balancing, non-disruptive failovers and failbacks and even to lower their data center operating costs.
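To illustrate the per-GB price point raised in item 9, here is a minimal sketch of the comparison. The dollar and capacity figures are hypothetical, chosen only to show how a high end array's large usable capacity can offset its much higher list price; real quotes vary widely by vendor and configuration.

```python
def cost_per_gb(list_price_usd, usable_capacity_gb):
    """Return the effective list price per usable GB."""
    return list_price_usd / usable_capacity_gb

# Hypothetical figures for illustration only -- not actual vendor pricing.
high_end = cost_per_gb(250_000, 1_000_000)  # $250K starting price, 1 PB usable
midrange = cost_per_gb(50_000, 150_000)     # $50K array, 150 TB usable

print(f"high end: ${high_end:.2f}/GB")   # $0.25/GB
print(f"midrange: ${midrange:.2f}/GB")   # $0.33/GB
```

At these assumed numbers the high end array is actually cheaper per GB, but only if the environment can consume capacity at that scale, which is the article's point.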
Jerome M. Wendt

About Jerome M. Wendt

Jerome Wendt is the President and Founder of DCIG, LLC, an independent storage analyst and consulting firm. Mr. Wendt founded the company in November 2007.
