If you want to get waist-deep in the technologies that will impact the data centers of tomorrow, the Flash Memory Summit 2018 (FMS) held this week in Santa Clara is the place to do it. This is where the flash industry gets its geek on and everyone on the exhibit floor speaks bits and bytes. There is no better place to learn about advances in flash memory that are sure to show up in products in the very near future and drive further advances in data center infrastructure.
Non-volatile Memory Express (NVMe) has captured the fancy of the enterprise storage world. Implementing NVMe on all-flash arrays or hyper-converged infrastructure appliances carries with it the promise that companies can leverage these solutions to achieve sub-millisecond response times, drive millions of IOPS, and deliver real-time application analytics and transaction processing. But differences persist between what NVMe promises for these solutions and what it can deliver. Here is a practical look at what NVMe delivers on these solutions in early 2018.
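For a back-of-the-envelope sense of why NVMe's deep, parallel queues make "millions of IOPS" plausible, Little's Law ties the two headline numbers together: sustained IOPS equals the number of outstanding I/Os divided by average latency. The short Python sketch below uses illustrative figures (a 100-microsecond device latency and hypothetical queue depths), not measured results:

```python
# Little's Law for storage: sustained IOPS = outstanding I/Os / average latency.
def estimated_iops(outstanding_ios: int, avg_latency_us: float) -> float:
    """Estimate sustained IOPS from outstanding I/Os and average latency."""
    return outstanding_ios / (avg_latency_us / 1_000_000)

# A legacy SATA path offers a single queue of 32 commands; NVMe allows up to
# 64K queues of up to 64K commands each, so far deeper aggregate queue depths.
# Assumed 100-microsecond average device latency (illustrative, not measured).
print(f"Queue depth  32: {estimated_iops(32, 100):,.0f} IOPS")   # ~320,000
print(f"Queue depth 256: {estimated_iops(256, 100):,.0f} IOPS")  # ~2,560,000
```

The arithmetic cuts both ways: at sub-millisecond latencies, millions of IOPS require the deep concurrency NVMe exposes, which is exactly where older protocol stacks become the bottleneck.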
The all-flash array market has settled down considerably in the last few years. While there are more all-flash arrays (90+ models) and vendors (20+) than ever before, the ways in which these models can be grouped and classified have also become clearer. As DCIG looks forward to releasing a series of Buyer’s Guides covering all-flash arrays in the coming months, it can break these all-flash arrays into five (and soon to be six) general classifications based upon their respective architectures and use cases.
Today, more than ever, organizations are looking to move to software-defined data centers. Whether they adopt software-defined storage, networking, computing, servers, security, or all of them as part of this initiative, they are starting to conclude that a software-defined world trumps the existing hardware-defined one. While I agree with this philosophy in principle, organizations need to carefully dip their toes into the software-defined waters rather than diving in head-first.
A few years ago when all-flash arrays (AFAs) were still gaining momentum, newcomers like Nimbus Data appeared poised to take the storage world by storm. But as the big boys of storage (Dell, HDS, and HPE, among others) entered the AFA market, Nimbus opted to retrench and rethink the value proposition of its all-flash arrays. Its latest AFA models, the ExaFlash D-Series, are one of the outcomes of that repositioning, as these arrays answer the call of today’s hosting providers. These arrays deliver the high levels of availability, flexibility, performance, and storage density that hosting providers seek, backed by one of the lowest cost-per-GB price points in the market.
In today’s enterprise data centers, when one thinks performance, one thinks flash. That’s great. But that thought process can lead organizations to think that all-flash arrays are the only option they have to get high levels of performance for their applications. That thinking is now outdated. The latest server-based storage solution from Datrium illustrates how accelerating application performance just became insanely easy: clicking a button rather than upgrading hardware in their environment.
Few data center technologies currently generate more buzz than hyper-converged infrastructure solutions. By combining compute, data protection, flash, scale-out, and virtualization into a single self-contained unit, organizations get the best of what each of these individual technologies has to offer with the flexibility to implement each one in such a way that it matches their specific business needs. Yet organizations must exercise restraint in how many attributes they ascribe to hyper-converged infrastructure solutions, as their adoption is a journey, not a destination.
All-flash arrays, cloud computing, cloud storage, and converged and hyper-converged infrastructures may grab many of today’s headlines. But the decades-old Fibre Channel protocol is still a foundational technology present in many data centers, holding steady in the U.S. and even gaining traction in countries such as China. In this first installment, QLogic’s Vice President of Products, Marketing and Planning, Vikram Karvat, provides some background as to why Fibre Channel (FC) remains relevant and how all-flash arrays are one of the forces driving the need for 32Gb FC.
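As a rough illustration of how an all-flash array puts pressure on FC bandwidth, consider nominal per-direction link throughput (approximately 1,600 MB/s for 16GFC and 3,200 MB/s for 32GFC, per the Fibre Channel Industry Association's ratings). The Python sketch below is a back-of-the-envelope sizing exercise with an assumed, hypothetical array throughput, not a vendor measurement:

```python
import math

# Nominal per-direction throughput by FC generation (approximate ratings, MB/s).
FC_THROUGHPUT_MBPS = {"16GFC": 1600, "32GFC": 3200}

def links_needed(array_throughput_mbps: float, fc_gen: str) -> int:
    """Number of FC links of a given generation needed so the fabric
    does not bottleneck the array's sustained throughput."""
    return math.ceil(array_throughput_mbps / FC_THROUGHPUT_MBPS[fc_gen])

# Hypothetical all-flash array sustaining 12,800 MB/s of large-block reads.
for gen in ("16GFC", "32GFC"):
    print(f"{gen}: {links_needed(12800, gen)} links")  # 16GFC: 8, 32GFC: 4
```

Halving the port count (and the associated cabling and switch ports) for the same array throughput is one concrete way all-flash performance translates into demand for 32Gb FC.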
A little over a decade ago when I told people that I was managing three (3) storage arrays with eleven (11) TBs of storage under management, people looked at me with a mixture of shock and awe. Fast forward to 2015 and last week’s NAB conference in Las Vegas, NV, and it was hard to find many storage vendors who even wanted to have a conversation with a customer unless that customer had at least a petabyte of data under management.
On March 17, 2015, the Storage Performance Council (SPC) updated its “Top Ten” list of SPC-2 results, which includes performance metrics going back almost three (3) years to May 2012. Noteworthy in these updated results is that the three storage arrays ranked at the top are, in order, a high-end, mainframe-centric, monolithic storage array (the HP XP7, OEMed from Hitachi), an all-flash storage array (the K2, from startup Kaminario), and a hybrid storage array (the Oracle ZFS Storage ZS4-4 Appliance). Making these performance results particularly interesting is that the hybrid array, the Oracle ZFS Storage ZS4-4 Appliance, can essentially go toe-to-toe from a performance perspective with both the million-dollar HP XP7 and the Kaminario K2 and do so at approximately half their cost.
At a recent analyst briefing, Micron Storage leaders identified at least three critical transitions that must take place in order to unleash the full potential of flash memory in the data center and explained their strategy for accelerating those transitions.
Dedicating a single flash-based storage array to improving the performance of a single application may be appropriate for siloed or small SAN environments. However, this is NOT an architecture that enterprises want to leverage when hosting multiple applications in larger SAN environments, especially if the flash-based array has only a few, or unproven, data management services behind it. The new Oracle FS1 Series Flash Storage System addresses these concerns by providing enterprises both the levels of performance and the mature and robust data management services that they need to move flash-based arrays from the fringes of their SAN environments into their core.
A couple of weeks ago I attended the Flash Memory Summit in Santa Clara, CA, where I had the opportunity to talk with a number of providers, fellow analysts, and developers in attendance about the topic of flash memory. The focus of many of these conversations was less about what flash means right now, as its performance ramifications are already pretty well understood by the enterprise. Rather, many are already looking ahead to take further advantage of flash’s particular idiosyncrasies and, in so doing, give us some good insight into what will be hot in flash in the years to come.
The use of data reduction technologies such as compression and deduplication to reduce storage costs is nothing new. Tape drives have used compression for decades to increase backup data densities on tape, while many modern deduplicating backup appliances use compression and deduplication to reduce backup data stores. Even a select number of existing HDD-based storage arrays use data compression and deduplication to minimize data stores for large amounts of file data stored in archives or on network-attached file servers.
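To make the deduplication half of that concrete, here is a minimal Python sketch of fixed-block deduplication: data is carved into fixed-size blocks, each block is identified by a content hash, and any block already seen is stored only once. This is a toy illustration; production arrays and backup appliances typically add variable-length chunking, compression of the unique blocks, and persistent hash indexes:

```python
import hashlib

BLOCK_SIZE = 4096  # fixed-size blocks; real systems often chunk on variable boundaries

def deduplicate(data: bytes) -> tuple[dict[str, bytes], list[str]]:
    """Split data into fixed blocks and store each unique block once, keyed by SHA-256."""
    store: dict[str, bytes] = {}   # content hash -> unique block
    recipe: list[str] = []         # ordered hashes to reconstruct the original stream
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # duplicates are indexed, not stored again
        recipe.append(digest)
    return store, recipe

# Highly redundant data (think repeated full backups) dedupes dramatically.
data = b"A" * BLOCK_SIZE * 100 + b"B" * BLOCK_SIZE * 100
store, recipe = deduplicate(data)
print(f"Logical blocks: {len(recipe)}, unique blocks stored: {len(store)}")  # 200 vs 2
```

The same content-addressing idea is why deduplication ratios depend so heavily on workload: backup streams full of repeated blocks reduce dramatically, while already-compressed or encrypted data barely reduces at all.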
As I attended sessions at Microsoft TechEd 2014 last week and talked with people in the exhibit hall, a number of themes emerged, including “mobile first, cloud first,” hybrid cloud, migration to the cloud, disaster recovery as a service, and flash memory storage as a game-changer in the data center. But as I reflect on the entire experience, a statement made by John Loveall, Principal Program Manager for Microsoft Windows Server, during one of his presentations sums up the overall message of the conference: “Today it is really all about the integrated solution.”
Toward the end of April, Wikibon’s David Floyer posted an article on the topic of server SANs entitled “The Rise of Server SANs,” which generated a fair amount of attention and was even the focus of a number of conversations I had at this past week’s Symantec Vision 2014 conference in Las Vegas. However, I have to admit that when I first glanced at some of the forecasts and charts included in that piece, I thought Wikibon was smoking pot and brushed it off. But after having had some lengthy conversations with attendees at Symantec Vision, I can certainly see why Wikibon made some of the claims that it did.
VMware® VMmark® has quickly become a performance benchmark to which many organizations turn to quantify how many virtual machines (VMs) they can realistically expect to host, and have perform well, on a cluster of physical servers. Yet a published VMmark score for a specified hardware configuration may overstate or, conversely, fail to fully reflect a particular solution’s VM consolidation and performance capabilities. The HP ProLiant BL660c published VMmark performance benchmarks using a backend HP 3PAR StoreServ 7450 all-flash array provide the relevant, real-world results that organizations need to achieve maximum VM density levels, maintain or even improve VM performance as they scale, and control costs as they grow.
DCIG is pleased to announce the March 30 release of the DCIG 2014-15 Flash Memory Storage Array Buyer’s Guide that weights, scores and ranks more than 130 features of thirty-nine (39) different storage arrays from twenty (20) different storage providers.
Many changes have taken place in the data center storage marketplace in the 14 months since the release of the inaugural DCIG 2013 Flash Memory Storage Array Buyer’s Guide. This blog entry highlights a few of those changes based on DCIG’s research for the forthcoming DCIG 2014-15 Flash Memory Storage Array Buyer’s Guide.