As DCIG makes its final preparations for the release of its inaugural Purpose-Built Flash Memory Appliance Buyer’s Guide, we have had a number of conversations internally about what the criteria for product inclusion and exclusion in this Buyer’s Guide will be. As we do so, our conversation almost always turns to ways in which these purpose-built flash memory appliances will impact organizations and their decision making and buying habits.
Probably the worst-kept secret in the storage industry is that flash memory, in whatever form it takes, is almost always faster and quieter than comparable HDD-based storage systems, while consuming less power and producing less heat. These are the benefits that every flash memory provider on the planet banks on to displace today’s existing storage arrays.
Yet what is not so intuitive is how these purpose-built flash memory appliances change the storage conversation. Beyond improving application performance, they change the dynamics of how organizations buy, manage and even depreciate storage. Consider the following:
- Flash as cache or storage. One of the ways that traditional storage arrays compete against this emerging class of flash memory appliances is by deploying flash as a tier of cache or “Tier 0” storage inside their storage arrays. To get data onto this flash memory tier they use an automated storage tiering feature that many arrays now include. They argue (rightfully, I might add) that 5% or less of the data stored on the storage array is active and will actually benefit from the performance that flash memory offers.
The shortcoming of this argument lies in the question, “How does the storage array determine which of its data is active?” Yes, vendors have their algorithms and formulas, but do they work equally well for every application the array hosts? The answer to that is nebulous at best.
This highlights an important advantage that purpose-built flash memory appliances have over traditional storage arrays: they keep flash simple. In other words, if data is stored on a flash memory appliance, it is stored on flash. This eliminates the guesswork as to whether an application will perform better and makes the benefits of flash easier to understand, albeit potentially more costly to obtain.
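To make the tiering guesswork concrete, here is a minimal sketch of the kind of frequency-based promotion logic an automated tiering feature might use. It is a toy illustration, not any vendor's actual algorithm, and the block IDs and flash capacity are hypothetical:

```python
from collections import Counter

def hot_blocks(access_log, flash_capacity_blocks):
    """Toy automated-tiering heuristic: promote the most frequently
    accessed blocks to flash until the flash tier is full.

    access_log: iterable of block IDs, one entry per I/O.
    Returns the set of block IDs that would land on flash.
    """
    counts = Counter(access_log)
    # Rank blocks by access frequency. Real arrays also weigh recency,
    # I/O size, and sequential vs. random access patterns -- which is
    # exactly where the per-application guesswork comes in.
    ranked = [blk for blk, _ in counts.most_common()]
    return set(ranked[:flash_capacity_blocks])

# A workload where 2 of 7 blocks receive most of the I/O:
log = [1, 1, 1, 2, 2, 2, 2, 3, 4, 5, 1, 2, 6, 7, 1]
print(sorted(hot_blocks(log, 2)))  # -> [1, 2]
```

The heuristic works well when a small, stable hot set exists; it works poorly when the active set shifts faster than the tiering interval, which is one reason an all-flash appliance removes the question entirely.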
- Solid state disks (SSDs) or flash memory. In flash’s initial iteration it was “easy” to deploy. All you had to do was replace a hard disk drive (HDD) with an SSD in a storage system and, voila, you had flash. However, the costs, overhead and risk of this approach have since surfaced. SSDs cost more than HDDs, organizations still have to get data onto the SSDs once they are deployed, and the storage array has to be programmed to recover the data to another SSD should the primary SSD fail.
This is not to say storage manufacturers are ignoring these concerns about deploying SSDs in their storage arrays. But the decision to deploy SSDs in storage arrays is no longer as “easy” as it once was, as the costs, risks and overhead of SSDs have become better known.
Purpose-built flash memory arrays have their own set of challenges but the primary advantage they again offer is their simplicity. Aside from the assurance that all data is stored on flash, purpose-built flash memory appliances are specifically designed to manage flash and account for its idiosyncrasies (garbage collection, wear leveling, etc.) whereas their traditional storage array counterparts are not.
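To illustrate one of those idiosyncrasies, here is a deliberately simplified sketch of a wear-leveling write policy, the sort of housekeeping a purpose-built appliance handles in firmware. The block names and erase counts are invented for illustration; real flash translation layers are far more sophisticated:

```python
def pick_block(erase_counts):
    """Toy wear-leveling policy: direct the next write to the flash
    block with the fewest erase cycles, spreading wear evenly so no
    single block burns out early.

    erase_counts: dict mapping block ID -> erase count so far.
    """
    return min(erase_counts, key=erase_counts.get)

# Hypothetical erase counts for four flash blocks:
wear = {"blk0": 120, "blk1": 87, "blk2": 87, "blk3": 301}
print(pick_block(wear))  # -> blk1 (ties broken by first-seen order)
```

A traditional array that simply slots SSDs into HDD bays leaves this logic entirely to each drive; an appliance designed around flash can coordinate wear leveling and garbage collection across the whole system.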
- The future roles of storage engineers and architects. Almost every enterprise organization, as well as any storage reseller of size, relies upon storage engineers and architects to support and/or manage its storage infrastructure. But as purpose-built flash memory appliances are deployed, how does their role change? It is unlikely to make them obsolete, but it should seriously reduce the amount of time they spend troubleshooting application performance issues.
As such, what do they do now? Develop strategies that optimize where data is placed across different tiers of storage? Focus on data protection and disaster recovery? The answer will clearly vary by organization, but troubleshooting and managing application performance issues should fall off their list entirely, or at least drop to the bottom of it.
- The future of storage networks. FC and Ethernet storage networks have become almost inextricably linked with enterprise storage deployments. But with the performance that purpose-built flash memory appliances offer, FC and Ethernet switches start to become the performance bottleneck. So what happens next? My guess is that this will give organizations the impetus to deploy 10 Gb Ethernet or 16 Gb FC, but who knows? 40 Gb InfiniBand is already available, and some early flash memory providers like Nimbus Data Systems already tell me that their shipments of InfiniBand-equipped systems are on par with their Ethernet and FC shipments.
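Some back-of-the-envelope arithmetic shows why the network becomes the bottleneck. The appliance throughput and link-efficiency figures below are illustrative assumptions, not vendor specifications:

```python
import math

def links_needed(appliance_gbytes_per_s, link_gbits_per_s, efficiency=0.9):
    """Rough count of network links needed to carry an appliance's
    sustained throughput. Assumes ~90% usable link efficiency after
    protocol overhead -- an illustrative figure, not a measured one.
    """
    usable_gbytes = link_gbits_per_s * efficiency / 8  # GB/s per link
    return math.ceil(appliance_gbytes_per_s / usable_gbytes)

# A hypothetical appliance sustaining 4 GB/s of throughput:
for link in (8, 10, 16, 40):  # Gb FC / Ethernet / InfiniBand speeds
    print(f"{link} Gb links needed: {links_needed(4, link)}")
```

Even under these rough assumptions, a single flash appliance can saturate several 8 Gb FC links, while one 40 Gb link absorbs the same load, which is why faster fabrics start to look attractive.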
- Changing how one depreciates storage arrays. One of the recurring themes I hear regarding why many are reluctant to deploy flash is its high cost. However, what many fail to consider is how improvements in flash extend the viability of these systems beyond the normal three (3) year storage system life span to five (5) or even ten (10) years, with minimal or no decrease in performance during that period (we are talking microseconds of improvement in the years to come).
That being the case, why do organizations still depreciate these systems over a three (3) year period? It is probably time for organizations to re-evaluate the depreciation period assigned to these arrays and move to at least a four (4) if not a five (5) year depreciation cycle.
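The budget impact of a longer schedule is easy to quantify. Assuming simple straight-line depreciation and a hypothetical purchase price (the $500K figure is purely illustrative), the annual expense falls as the cycle lengthens:

```python
def straight_line(cost, years):
    """Annual expense under simple straight-line depreciation:
    purchase cost spread evenly over the depreciation period."""
    return cost / years

# Hypothetical flash appliance purchase price (illustrative only):
cost = 500_000
for years in (3, 4, 5):
    print(f"{years}-year schedule: ${straight_line(cost, years):,.0f}/year")
```

Moving the same purchase from a three (3) to a five (5) year schedule cuts the annual depreciation expense by 40%, which goes a long way toward offsetting flash's higher acquisition cost.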
Flash memory does a lot more than improve application performance. It forces an almost top-to-bottom change in how organizations need to think about their storage infrastructures, from how they deploy them to how they depreciate and manage them. However, the real trick for each organization will be to arrive at a flash memory strategy that is right for it, which may be the most difficult feat of all.