Dell EMC VMAX and HPE 3PAR StoreServ arrays can meet the storage requirements of most enterprises, yet differences remain. DCIG compares the current AFA configurations from Dell EMC and HPE in its latest DCIG Pocket Analyst Report. This report will help enterprises determine which product best fits their business requirements. Features such as data center footprint, licensing simplicity, mainframe connectivity, performance resources, predictive analytics, raw storage density and effective storage density are key areas where these two products differentiate themselves.
Both Hitachi Vantara and NetApp refreshed their respective F-Series and A-Series lines of all-flash arrays (AFAs) in the first half of 2018. While some of these changes reinforced the respective strengths of each of their product lines, other changes provided some key insights into how these two vendors see the AFA market shaping up in the years to come. Features such as host-to-storage networking connectivity, predictive analytics, support for public clouds, and data protection and flash performance optimization are key areas where these two products differentiate themselves.
The exhibit halls at the annual National Association of Broadcasters (NAB) show in Las Vegas always contain eye-popping displays highlighting recent technological advances as well as what is coming down the path in the world of media and entertainment. But behind NAB’s glitz and glamour lurks a cold, hard reality: every word recorded, every picture taken, and every scene filmed must be stored somewhere, usually multiple times, and be available at a moment’s notice. It was in these halls at the NAB show that DCIG identified two start-ups with storage technologies poised to disrupt business as usual.
Non-volatile Memory Express (NVMe) has captured the fancy of the enterprise storage world. Implementing NVMe on all-flash arrays or hyper-converged infrastructure appliances carries with it the promise that companies can leverage these solutions to achieve sub-millisecond response times, drive millions of IOPS, and deliver real-time application analytics and transaction processing. But differences persist between what NVMe promises for these solutions and what it can deliver. Here is a practical look at what NVMe delivers on these solutions in early 2018.
Early in my IT career, a friend who owns a software company told me he had been informed by a peer that he wasn’t charging enough for his software. This peer advised him to adopt a “flinch-based” approach to pricing. He said my friend should start with a base licensing cost that meets margin requirements, and then keep adding on other costs until the prospective customer flinches. My friend found that approach offensive, and so do I.
Hybrid and all-disk arrays still have their place in enterprise data centers, but all-flash arrays are “where it’s at” when it comes to hosting and accelerating the performance of production applications. Once reserved only for applications that could cost-justify these arrays, continuing price erosion in the underlying flash media, coupled with technologies such as compression and deduplication, has put these arrays at a price point within reach of almost any size enterprise. As that occurs, all-flash arrays from Dell EMC XtremIO and Pure Storage are often on the buying short lists for many companies. Those companies considering these two products can turn to a recent DCIG Pocket Analyst Report that compares them to help make an informed buying decision.
The business case for organizations with petabytes of file data under management to classify and then place it across multiple tiers of storage has never been greater. By distributing this data across disk, flash, tape and the cloud, they stand to realize significant cost savings. The catch is finding a cost-effective solution that makes it easier to administer and manage file data than simply storing it all on flash storage. This is where a solution such as the one Quantum now offers comes into play.
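To see why the economics of tiering are compelling, consider a purely illustrative sketch. The per-GB prices and the hot/warm/cold distribution below are assumptions for the sake of arithmetic, not vendor quotes or figures from the report:

```python
# Hypothetical per-GB prices in cents (assumptions, not vendor pricing).
PRICE_CENTS_PER_GB = {"flash": 50, "disk": 5, "tape": 1, "cloud": 2}

def storage_cost_dollars(distribution_gb):
    """Total cost in dollars of a capacity distribution across tiers,
    e.g. {"flash": 100_000, "disk": 300_000}."""
    return sum(PRICE_CENTS_PER_GB[tier] * gb
               for tier, gb in distribution_gb.items()) / 100

ONE_PB_IN_GB = 1_000_000  # 1 PB expressed in GB

# Option 1: keep the entire petabyte on flash.
all_flash = storage_cost_dollars({"flash": ONE_PB_IN_GB})

# Option 2: classify the data and tier it (assumed distribution).
tiered = storage_cost_dollars({"flash": 100_000,   # hot data
                               "disk":  300_000,   # warm data
                               "tape":  400_000,   # cold archive
                               "cloud": 200_000})  # off-site copy

print(all_flash)  # 500000.0
print(tiered)     # 73000.0
```

Even with generous flash price erosion, classifying data so that only the hot fraction lives on flash cuts the storage bill by a large multiple in this sketch, which is the cost argument behind multi-tier file management.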
The annual Flash Memory Summit is where vendors reveal to the world the future of storage technology. Many companies announced innovative products and technical advances at last week’s 2017 Flash Memory Summit that give enterprises a good understanding of what to expect from today’s all-flash products as well as a glimpse into tomorrow’s. These previews into the next generation of flash products revealed four flash memory trends sure to influence the development of the next generation of all-flash arrays.
Today, more than ever, organizations are looking to move to software-defined data centers. Whether they adopt software-defined storage, networking, computing, servers, security, or all of them as part of this initiative, they are starting to conclude that a software-defined world trumps the existing hardware-defined one. While I agree with this philosophy in principle, organizations need to carefully dip their toes into the software-defined waters rather than dive in head-first.
While the overall economy and even the broader technology sector largely boom, the enterprise storage space is feeling the pinch. As storage revenues level off and even drop, many people with whom I spoke at this past week’s HPE Discover 2017 event shared their thoughts as to what is causing this situation. The short answer: there does not appear to be a single reason for the pullback in storage revenue but rather a perfect storm of events that is contributing to this situation. The good news is that this retrenching should ultimately benefit end-users.
If you assume that leading enterprise midrange all-flash arrays (AFAs) support deduplication, your assumption would be correct. But if you assume that these arrays implement and deliver deduplication’s features in the same way, you would be mistaken. These differences in deduplication should influence any all-flash array buying decision as deduplication’s implementation affects the array’s total effective capacity, performance, usability, and, ultimately, your bottom line.
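The capacity side of that point can be made with simple arithmetic. As a purely illustrative sketch (the usable capacity and data-reduction ratios below are assumptions, not measured figures for any array), effective capacity is usable capacity multiplied by the deduplication ratio the array actually achieves:

```python
# Illustrative only: the capacities and dedup ratios are assumed values,
# not measurements from any specific all-flash array.
def effective_capacity_tb(usable_tb, dedup_ratio):
    """Effective capacity = usable capacity x achieved data-reduction ratio."""
    return usable_tb * dedup_ratio

# Two arrays with identical usable flash but different dedup outcomes:
array_a = effective_capacity_tb(100, 4.0)  # e.g. inline, always-on dedup
array_b = effective_capacity_tb(100, 2.5)  # e.g. post-process dedup

print(array_a, array_b)  # 400.0 250.0
```

In this sketch the two arrays ship with the same 100 TB of flash, yet the difference in achieved deduplication ratios translates into a 150 TB gap in effective capacity, which is why the implementation details matter to the bottom line.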
A few years ago, when all-flash arrays (AFAs) were still gaining momentum, newcomers like Nimbus Data appeared poised to take the storage world by storm. But as the big boys of storage (Dell, HDS, and HPE, among others) entered the AFA market, Nimbus opted to retrench and rethink the value proposition of its all-flash arrays. Its latest AFA line, the ExaFlash D-Series, is one of the outcomes of that repositioning, as these arrays answer the call of today’s hosting providers. They deliver the high levels of availability, flexibility, performance, and storage density that these providers seek, backed by one of the lowest cost-per-GB price points in the market.
In today’s enterprise data centers, when one thinks performance, one thinks flash. That’s great. But that thought process can lead organizations to believe that all-flash arrays are the only option they have for delivering high levels of performance to their applications. That thinking is now outdated. The latest server-based storage solution from Datrium illustrates how accelerating application performance can be as easy as clicking a button rather than upgrading hardware in their environment.
Few data center technologies currently generate more buzz than hyper-converged infrastructure solutions. By combining compute, data protection, flash, scale-out, and virtualization into a single self-contained unit, organizations get the best of what each of these individual technologies has to offer with the flexibility to implement each one in such a way that it matches their specific business needs. Yet organizations must exercise restraint in how many attributes they ascribe to hyper-converged infrastructure solutions as their adoption is a journey, not a destination.
Organizations of almost every size now view flash as a means to accelerate application performance in their infrastructure … and for good reason. Organizations that deploy flash typically see performance increase by a factor of up to 10. But while many all-flash storage arrays can deliver these increases in performance, savvy organizations must prepare to do more than simply increase workload performance. They need to identify solutions that help them better troubleshoot their emerging flash infrastructure as well as future-proof their investment in flash by modeling anticipated application workloads on the all-flash arrays being evaluated before they are acquired.
DCIG appreciates the attention given to its recently released DCIG 2015-16 All-Flash Array Buyer’s Guide. This type of dialog and feedback is absolutely critical in helping DCIG, the industry as a whole, and most importantly, the buyers and the organizations for which they work to make informed buying decisions about all-flash arrays.
DCIG is pleased to announce the September 29 release of the DCIG 2015-16 All-Flash Array Buyer’s Guide that weights, scores and ranks more than 100 features of twenty-eight (28) all-flash arrays or array series from eighteen (18) enterprise storage providers.
Since the publication of the DCIG 2014-15 Flash Memory Storage Array Buyer’s Guide, the storage industry has embraced the term all-flash array. For that reason the forthcoming refresh of the buyer’s guide will be called the DCIG 2015-16 All-Flash Array Buyer’s Guide. More than terminology has changed over the last eighteen (18) months. The fresh data DCIG compiled shows that all-flash array vendors have substantially reduced the barriers to all-flash array adoption.
Almost any hybrid or all-flash storage array will accelerate performance for the applications it hosts. Yet many organizations need a storage array that scales beyond just accelerating the performance of a few hosts. They want a solution that both solves their immediate performance challenges and serves as a launch pad to using flash more broadly in their environment.
A little over a decade ago, when I told people that I was managing three (3) storage arrays with eleven (11) TBs of storage under management, people looked at me with a mixture of shock and awe. Fast forward to 2015 and last week’s NAB conference in Las Vegas, NV, and it was hard to find many storage vendors who even wanted to have a conversation with a customer unless that customer had at least a petabyte of data under management.