Almost any article published today about enterprise data storage touts the benefits of flash memory. However, while many organizations now use flash in their enterprise, most are only starting to deploy it at a scale where it hosts more than a handful of their applications. As organizations look to roll out flash more broadly across their enterprises, here are six best practices to keep in mind.
The exhibit halls at the annual National Association of Broadcasters (NAB) show in Las Vegas always contain eye-popping displays highlighting recent technological advances as well as what is coming down the path in the world of media and entertainment. But behind NAB’s glitz and glamour lurks a cold, hard reality: every word recorded, every picture taken, and every scene filmed must be stored somewhere, usually multiple times, and be available at a moment’s notice. It was in these exhibit halls at the NAB show that DCIG identified two start-ups with storage technologies poised to disrupt business as usual.
Non-volatile Memory Express (NVMe) has captured the fancy of the enterprise storage world. Implementing NVMe on all-flash arrays or hyper-converged infrastructure appliances carries with it the promise that companies can leverage these solutions to achieve sub-millisecond response times, drive millions of IOPS, and deliver real-time application analytics and transaction processing. But differences persist between what NVMe promises for these solutions and what it can deliver. Here is a practical look at what NVMe delivers on these solutions in early 2018.
Hybrid and all-disk arrays still have their place in enterprise data centers, but all-flash arrays are “where it’s at” when it comes to hosting and accelerating the performance of production applications. Once reserved only for applications that could cost-justify them, continuing price erosion in the underlying flash media, coupled with technologies such as compression and deduplication, has put these arrays at a price point within reach of almost any size of enterprise. As that occurs, all-flash arrays from Dell EMC XtremIO and Pure Storage often land on the buying short lists of many companies. Companies considering these two products can turn to a recent DCIG Pocket Analyst Report that compares them to help make an informed buying decision.
Today, more than ever, organizations are looking to move to software-defined data centers. Whether they adopt software-defined storage, networking, computing, servers, security, or all of the above as part of this initiative, they are starting to conclude that a software-defined world trumps the existing hardware-defined one. While I agree with this philosophy in principle, organizations need to carefully dip their toes into the software-defined waters rather than dive in head-first.
A few years ago, when all-flash arrays (AFAs) were still gaining momentum, newcomers like Nimbus Data appeared poised to take the storage world by storm. But as the big boys of storage (Dell, HDS, and HPE, among others) entered the AFA market, Nimbus opted to retrench and rethink the value proposition of its all-flash arrays. Its latest AFA model line, the ExaFlash D-Series, is one of the outcomes of that repositioning, as these arrays answer the call of today’s hosting providers. They deliver the high levels of availability, flexibility, performance, and storage density that these providers seek, backed by one of the lowest cost-per-GB price points on the market.
In today’s enterprise data centers, when one thinks performance, one thinks flash. That’s great. But that thought process can lead organizations to believe that all-flash arrays are their only option for delivering high levels of application performance. That thinking is now outdated. The latest server-based storage solution from Datrium illustrates how accelerating application performance just became insanely easy: click a button instead of upgrading hardware in the environment.
In the last couple of weeks, X-IO announced a number of improvements to its iglu line of storage arrays, namely flash-optimized controllers and stretch clustering. But what struck me in listening to X-IO present the new features of this array was how it kept referring to the iglu as “intelligent.” While that term may be accurate, when I look at the iglu’s architecture and data management features and consider them in light of what small and midsize enterprises need today, I see the iglu’s architecture as “thoughtful.”
A little over a decade ago, when I told people that I was managing three (3) storage arrays with eleven (11) TBs of storage under management, people looked at me with a mixture of shock and awe. Fast forward to 2015 and last week’s NAB conference in Las Vegas, NV, and it was hard to find many storage vendors who even wanted to have a conversation with a customer unless that customer had at least a petabyte of data under management.
On March 17, 2015, the Storage Performance Council (SPC) updated its “Top Ten” list of SPC-2 results, which includes performance metrics going back almost three (3) years to May 2012. Noteworthy in these updated results is that the three top-ranked storage arrays are, in order, a high-end, mainframe-centric monolithic storage array (the HP XP7, OEMed from Hitachi); an all-flash storage array (the K2, from startup Kaminario); and a hybrid storage array (the Oracle ZFS Storage ZS4-4 Appliance). Making these performance results particularly interesting is that the hybrid array, the Oracle ZFS Storage ZS4-4 Appliance, can essentially go toe-to-toe from a performance perspective with both the million-dollar HP XP7 and the Kaminario K2 and do so at approximately half of their cost.
At the beginning of 2014, I started the year with the theme: “it’s an exciting time to be part of the DCIG team“. This was due to the explosive growth we saw in website visits and popularity of our Buyer’s Guides. That hasn’t changed. DCIG Buyer’s Guides continue to grow in popularity, but what’s even more exciting is the diversity of our new products and services. This year’s theme is diversity: a range of different things. DCIG is expanding…again…in different directions. In the past year, we have added a number of offerings to our repertoire of products and services. In addition to producing our popular Buyer’s Guides and well known blogs, we now offer Competitive Research Services, Executive Interviews, Executive White papers, Lead Generation, Special Reports and Webinars. Even more unique, DCIG now offers an RFP/RFI Analysis Software Suite. This suite gives anyone (vendor, end-user or technology reseller) the ability to license the same software that DCIG uses internally to…
Flash is by all estimates the future of enterprise production storage, with most enterprises anticipating a day in the not-too-distant future when they will use flash storage arrays (all-flash or hybrid) much more broadly within their data centers. Yet despite flash’s many benefits (higher levels of performance, a smaller data center footprint, and reduced energy consumption, among others), many enterprises still use flash only in a limited capacity, if they use it at all. Today I take a look at some of the factors that still contribute to enterprises’ reticence to adopt flash more broadly.
Dedicating a single flash-based storage array to improving the performance of a single application may be appropriate for siloed or small SAN environments. However, this is NOT an architecture that enterprises want to leverage when hosting multiple applications in larger SAN environments, especially if the flash-based array has only a few, or unproven, data management services behind it. The new Oracle FS1 Series Flash Storage System addresses these concerns by providing enterprises both the levels of performance and the mature, robust data management services they need to move flash-based arrays from the fringes of their SAN environments into their core.
A couple of weeks ago I attended the Flash Memory Summit in Santa Clara, CA, where I had the opportunity to talk with a number of providers, fellow analysts, and developers in attendance about the topic of flash memory. The focus of many of these conversations was less about what flash means right now, as its performance ramifications are already pretty well understood by the enterprise. Rather, many are already looking ahead to how they can take further advantage of flash’s particular idiosyncrasies and, in so doing, gave me some good insight into what will be hot in flash in the years to come.
The use of data reduction technologies such as compression and deduplication to reduce storage costs is nothing new. Tape drives have used compression for decades to increase backup data densities on tape, while many modern deduplicating backup appliances use compression and deduplication to shrink backup data stores. Even a select number of existing HDD-based storage arrays use compression and deduplication to minimize the data stores for large amounts of file data kept in archives or on network-attached file servers.
As I attended sessions at Microsoft TechEd 2014 last week and talked with people in the exhibit hall, a number of themes emerged, including “mobile first, cloud first,” hybrid cloud, migration to the cloud, disaster recovery as a service, and flash memory storage as a game-changer in the data center. But as I reflect on the entire experience, a statement made by John Loveall, Principal Program Manager for Microsoft Windows Server, during one of his presentations sums up the overall message of the conference: “Today it is really all about the integrated solution.”
Toward the end of April, Wikibon’s David Floyer posted an article on the topic of server SANs entitled “The Rise of Server SANs,” which generated a fair amount of attention and was even the focus of a number of conversations I had at this past week’s Symantec Vision 2014 conference in Las Vegas. However, I have to admit that when I first glanced at some of the forecasts and charts included in that piece, I thought Wikibon was smoking pot and brushed it off. But after having some lengthy conversations with attendees at Symantec Vision, I can certainly see why Wikibon made the claims that it did.
VMware® VMmark® has quickly become a performance benchmark to which many organizations turn to quantify how many virtual machines (VMs) they can realistically expect to host, and have perform well, on a cluster of physical servers. Yet a published VMmark score for a specified hardware configuration may overstate or, conversely, fail to fully reflect a particular solution’s VM consolidation and performance capabilities. The HP ProLiant BL660c VMmark benchmarks published using a backend HP 3PAR StoreServ 7450 all-flash array provide the relevant, real-world results that organizations need to achieve maximum VM density, maintain or even improve VM performance as they scale, and control costs as they grow.
Delivering always-on application availability accompanied by the highest levels of capacity, management, and performance is what historically distinguishes high-end storage arrays from other storage arrays on the market. But even these arrays struggle to easily deliver on a fundamental data center task: migrating data from one physical array to another. The introduction of the storage virtual array feature into the new HP XP7 dramatically eases this typically complex task, as it facilitates data consolidations and migrations by moving entire storage virtual arrays from one physical array frame to another while simplifying array management in the process.