A real divergence is occurring right now in data storage solutions. On one hand, a number of storage providers seek to deliver general-purpose storage solutions that work with a broad set of applications and operating systems. On the other, a few providers focus on delivering a storage solution that tightly integrates with one or more applications to deliver unparalleled application performance and ease of management. The latest Oracle ZFS Storage Appliance ZS3 Series with its new OS8.2 provides the best of what both of these categories of storage systems currently have to offer, delivering a storage platform that truly stands apart.
The use of data reduction technologies such as compression and deduplication to reduce storage costs is nothing new. Tape drives have used compression for decades to increase backup data densities on tape, while many modern deduplicating backup appliances use compression and deduplication to shrink backup data stores. Even a select number of existing HDD-based storage arrays use data compression and deduplication to minimize data stores for large amounts of file data stored in archives or on network attached file servers.
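To make the mechanics concrete, below is a minimal Python sketch of how block-level deduplication and compression combine to shrink a redundant data stream. The fixed-size chunking, SHA-256 fingerprints and in-memory block store are illustrative assumptions only, not a description of how any particular tape drive, appliance or array implements data reduction.

```python
# Minimal sketch of block-level deduplication plus compression.
# Hypothetical and simplified: real appliances chunk at variable sizes,
# persist their fingerprint indexes and operate below the filesystem.
import hashlib
import zlib

BLOCK_SIZE = 4096  # fixed-size chunking for simplicity


def dedupe_and_compress(data: bytes) -> tuple[dict, list]:
    """Store each unique block once (compressed) and record the
    ordered fingerprints needed to reconstruct the stream."""
    store = {}      # fingerprint -> compressed block, stored once
    sequence = []   # ordered fingerprints for reconstruction
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        fp = hashlib.sha256(block).hexdigest()
        if fp not in store:            # only new blocks consume space
            store[fp] = zlib.compress(block)
        sequence.append(fp)
    return store, sequence


def restore(store: dict, sequence: list) -> bytes:
    """Rebuild the original stream from the unique-block store."""
    return b"".join(zlib.decompress(store[fp]) for fp in sequence)


if __name__ == "__main__":
    data = b"A" * 16384 + b"B" * 8192          # highly redundant sample
    store, seq = dedupe_and_compress(data)
    assert restore(store, seq) == data          # lossless round trip
    stored = sum(len(v) for v in store.values())
    print(f"raw: {len(data)} bytes, stored: {stored} bytes")
```

On redundant data such as repeated backups, deduplication removes whole duplicate blocks before compression squeezes what remains, which is why the two techniques are so often paired.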
Toward the end of April, Wikibon’s David Floyer posted an article on the topic of server SANs entitled “The Rise of Server SANs,” which generated a fair amount of attention and was even the focus of a number of conversations that I had at this past week’s Symantec Vision 2014 conference in Las Vegas. However, I have to admit that when I first glanced at some of the forecasts and charts included in that piece, I thought Wikibon was smoking something and brushed it off. But after some lengthy conversations with attendees at Symantec Vision, I can certainly see why Wikibon made some of the claims that it did.
VMware® VMmark® has quickly become the performance benchmark to which many organizations turn to quantify how many virtual machines (VMs) they can realistically expect to host, and have perform well, on a cluster of physical servers. Yet a published VMmark score for a specified hardware configuration may overstate or, conversely, fail to fully reflect a particular solution’s VM consolidation and performance capabilities. The published VMmark benchmarks for the HP ProLiant BL660c backed by an HP 3PAR StoreServ 7450 all-flash array provide the relevant, real-world results that organizations need to achieve maximum VM density, maintain or even improve VM performance as they scale, and control costs as they grow.
Always-on application availability accompanied by the highest levels of capacity, manageability and performance is what historically distinguishes high-end storage arrays from other storage arrays on the market. But even these arrays struggle to easily deliver on a fundamental data center task: migrating data from one physical array to another. The introduction of the storage virtual array feature in the new HP XP7 dramatically eases this typically complex task, facilitating data consolidations and migrations by moving entire storage virtual arrays from one physical array frame to another while simplifying array management in the process.
DCIG is pleased to announce the March 30 release of the DCIG 2014-15 Flash Memory Storage Array Buyer’s Guide, which weights, scores and ranks more than 130 features of 39 storage arrays from 20 different storage providers.
Many changes have taken place in the data center storage marketplace in the 14 months since the release of the inaugural DCIG 2013 Flash Memory Storage Array Buyer’s Guide. This blog entry highlights a few of those changes based on DCIG’s research for the forthcoming DCIG 2014-15 Flash Memory Storage Array Buyer’s Guide.
In this final blog entry from our interview with Nimbus Data CEO and Founder Thomas Isakovich, we discuss his company’s latest product, the Gemini X-series. We explore the role of the Flash Director and how the Gemini X-series appeals to enterprises as well as cloud service providers.
In this second blog entry from our interview with Nimbus Data CEO and Founder Thomas Isakovich, we discuss microsecond latencies and how the recently announced Gemini X-series scale-out all-flash platform performs against the competition.
In 2014, high-density flash memory storage such as the 4TB Viking Technology αlpha SSD will accelerate the flash-based disruption of the storage industry and of the data center. Technology providers that engage in a fresh high-density flash-storage-enabled rethinking of their products will empower savvy data center architects to substantially improve the performance, capacity and efficiency of their data centers. Businesses will benefit by reducing the cost of running their IT infrastructures while increasing their capacity to serve customers and generate profits.
Recognized as an innovator in storage system technology, Thomas Isakovich sat down with DCIG to discuss the development, capabilities, and innovation in Nimbus Data’s latest release: the Gemini X. In this first blog entry, he guides us through the development of the X-series and explains where he sees it fitting into the current market.
The key for many enterprises today is to identify a storage provider that delivers the best of what next-generation hybrid storage arrays have to offer. However, technology alone is not enough for enterprise organizations. The storage provider also has to meet their internal requirements for financial stability and long-term viability, as well as deliver enterprise-class technical service and support.
One of the most common requests that DCIG gets from its readers is to include the actual cost of storage systems in its Buyer’s Guides. The reason DCIG continues to decline that request, and only includes starting list prices, is that most storage systems may be configured in many different ways, making it impossible to arrive at a definitive price point. The second part in DCIG’s interview series with iXsystems’ Jordan Hubbard illustrates this point as he discusses how the availability of multiple storage configurations and services trumps a cookie-cutter approach to buying storage every time.
Anyone who managed IT infrastructures in the late 1990s or early 2000s probably still remembers how external storage arrays were largely a novelty reserved for high-end enterprises with big data centers and deep pockets. Fast forward to today and a plethora of storage arrays exist in a variety of shapes and sizes at increasingly low price points. As such, it can be difficult to distinguish between them. To help organizations sort them out, my blog entry today provides a primer on the types of storage arrays currently available on the market.
Anytime DCIG prepares a Buyer’s Guide – whether a net new Guide or a refresh of an existing one – it uncovers a number of interesting trends and developments about that technology. So it is no surprise (at least to us) that as DCIG prepared its DCIG 2014 Enterprise Midrange Array Buyer’s Guide, it observed a number of interesting data points about enterprise midrange arrays. As DCIG looks forward to releasing this Buyer’s Guide, we want to share some of the observations and insights we gained while preparing it, as well as why we reached some of the conclusions that we did.
Earlier this week Cisco officially became a storage provider when it announced its intention to acquire privately held WHIPTAIL Technologies. While this may have come as a surprise to some, rumors that Cisco was looking to acquire a storage company were already circulating in 2012 at Storage Networking World (SNW) as I discussed in a blog entry last year. So now that Cisco is in the process of becoming a storage company, what are the ramifications of this change in its product offerings?
As we researched arrays for inclusion in the DCIG 2013 Flash Memory Storage Array Buyer’s Guide, we kept encountering an intriguing group of companies that had designed – or were developing – storage arrays from the ground up to realize the performance benefits of an all-flash array, but with storage capacities and price points that would bring the benefits of flash memory storage to a broader range of businesses. The resulting hybrid storage arrays achieve this balance of performance, capacity and cost by intelligently combining flash memory with large-capacity disk drives in a single storage system.
As we have been developing a DCIG Buyer’s Guide for Hybrid Storage Arrays, it has been interesting to see the different approaches vendors are taking as they seek to leverage flash memory plus traditional hard drives to deliver previously unheard-of IOPS and ultra-low latencies at a cost per GB that makes sense to a broad range of businesses. The “secret sauce” varies from vendor to vendor, but in every case it involves sophisticated caching and/or automated storage tiering software, along the lines of the sketch below.
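To give a flavor of what that software does, here is a toy Python sketch of frequency-based automated storage tiering under stated assumptions: a fixed number of flash-resident blocks, simple per-block access counters and an explicit rebalance step. It illustrates the general technique, not any vendor’s actual algorithm.

```python
# Toy model of automated storage tiering: keep the most frequently
# accessed blocks on a small flash tier, the rest on a large disk tier.
from collections import Counter


class TieringSimulator:
    def __init__(self, flash_capacity_blocks: int):
        self.flash_capacity = flash_capacity_blocks
        self.heat = Counter()    # block id -> access count ("heat map")
        self.flash_tier = set()  # blocks currently promoted to flash

    def access(self, block: int) -> str:
        """Record an access and report which tier served it."""
        tier = "flash" if block in self.flash_tier else "disk"
        self.heat[block] += 1
        return tier

    def rebalance(self) -> None:
        """Promote the hottest blocks to flash, demoting the rest.
        Real arrays run this on a schedule or continuously."""
        hottest = self.heat.most_common(self.flash_capacity)
        self.flash_tier = {block for block, _ in hottest}


if __name__ == "__main__":
    sim = TieringSimulator(flash_capacity_blocks=2)
    workload = [1, 1, 1, 2, 2, 3, 4, 1, 2]   # blocks 1 and 2 are hot
    for b in workload:
        sim.access(b)
    sim.rebalance()
    hits = sum(sim.access(b) == "flash" for b in workload)
    print(f"flash hit rate after rebalance: {hits}/{len(workload)}")
```

A caching approach works much the same way, except that the flash copy duplicates what remains on disk rather than replacing it; either way, the goal is to serve the hot working set at flash latencies while paying disk prices for cold capacity.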
To say or imply that NetApp is in any near-term danger of falling from its position as a storage leader would be a gross mischaracterization of its current condition. However, it would be accurate to say that the industry lacked clarity as to how NetApp would respond to the encroachment of flash memory storage arrays on the high-performance end of storage. After attending the NetApp Industry Analyst event this week, it is now clear that, to address this challenge, NetApp plans to go back to its roots to lay the foundation for its future.
Last week’s acquisition of NexGen Storage by Fusion-io was greeted with quite a bit of fanfare by the storage industry. But as an individual who has covered Fusion-io for many years and talked one-on-one with its top executives on multiple occasions, I read its acquisition of NexGen as a signal that Fusion-io wanted to do more than deliver an external storage array with its technology built in. Rather, Fusion-io felt it was incumbent upon it to take action and accelerate the coming data center transformation that it has talked and written about for years.