Category Archives: SSD
At the beginning of 2014, I started the year with the theme: “it’s an exciting time to be part of the DCIG team”. This was due to the explosive growth we saw in website visits and in the popularity of our Buyer’s Guides. That hasn’t changed. DCIG Buyer’s Guides continue to grow in popularity, but what’s even more exciting is the diversity of our new products and services. This year’s theme is diversity: a range of different things. DCIG is expanding…again…in different directions. In the past year, we have added a number of offerings to our repertoire of products and services. In addition to producing our popular Buyer’s Guides and well-known blogs, we now offer Competitive Research Services, Executive Interviews, Executive White Papers, Lead Generation, Special Reports and Webinars. Most distinctive of all, DCIG now offers an RFP/RFI Analysis Software Suite. This suite gives anyone (vendor, end-user or technology reseller) the ability to license the same software that DCIG uses internally to…
At a recent analyst briefing, Micron Storage leaders identified at least three critical transitions that must take place in order to unleash the full potential of flash memory in the data center and explained their strategy for accelerating those transitions.
Dedicating a single flash-based storage array to improving the performance of a single application may be appropriate for siloed or small SAN environments. However, this is not an architecture that enterprises want to leverage when hosting multiple applications in larger SAN environments, especially if the flash-based array has only a few, or unproven, data management services behind it. The new Oracle FS1 Series Flash Storage System addresses these concerns by providing enterprises with both the levels of performance and the mature, robust data management services that they need to move flash-based arrays from the fringes of their SAN environments into their core.
A couple of weeks ago I attended the Flash Memory Summit in Santa Clara, CA, where I had the opportunity to talk to a number of providers, fellow analysts and developers in attendance about the topic of flash memory. The focus of many of these conversations was less about what flash means right now, since its performance ramifications are already well understood by the enterprise. Rather, many are already looking ahead to take further advantage of flash’s particular idiosyncrasies and, in so doing, give us some good insight into what will be hot in flash in the years to come.
There is a divergence occurring right now in data storage solutions. On one hand, a number of storage providers seek to deliver highly differentiated storage solutions that work with a broad set of applications and operating systems. On the other, a few providers focus on delivering a storage solution that tightly integrates with one or more applications to deliver unparalleled levels of application performance and ease of management. The latest Oracle ZFS Storage Appliance ZS3 Series with its new OS8.2 provides the best of what both of these categories of storage systems currently have to offer to deliver a storage platform that truly stands apart.
The use of data reduction technologies such as compression and deduplication to reduce storage costs is nothing new. Tape drives have used compression for decades to increase backup data densities on tape, while many modern deduplicating backup appliances use compression and deduplication together to shrink backup data stores. Even a select number of existing HDD-based storage arrays use data compression and deduplication to minimize data stores for large amounts of file data stored in archives or on network-attached file servers.
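The mechanics behind these savings can be sketched in a few lines of Python. The block below is an illustrative model only, not any vendor’s implementation: data is split into fixed-size blocks, duplicate blocks are identified by their SHA-256 hash and stored once (deduplication), and each surviving unique block is compressed.

```python
import hashlib
import zlib

def dedupe_and_compress(data: bytes, block_size: int = 4096) -> dict:
    """Split data into fixed-size blocks and keep one compressed copy
    of each unique block, keyed by its SHA-256 digest."""
    store = {}
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:          # a repeated block is stored only once
            store[digest] = zlib.compress(block)
    return store

# A backup-like workload: the same 4 KiB block written 100 times.
data = (b"backup-block" * 342)[:4096] * 100   # 409,600 logical bytes
store = dedupe_and_compress(data)

print(len(store))                             # 1 unique block after deduplication
print(sum(len(v) for v in store.values()))    # compressed footprint, well under 4 KiB
```

Production appliances typically use variable-length chunking and far more sophisticated indexing, but the principle is the same: highly repetitive data (backups, archives, file shares) shrinks dramatically, which is what makes data reduction so attractive on cost-per-GB-sensitive media.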
Toward the end of April, Wikibon’s David Floyer posted an article on the topic of server SANs entitled “The Rise of Server SANs,” which generated a fair amount of attention and was even the focus of a number of conversations that I had at this past week’s Symantec Vision 2014 conference in Las Vegas. However, I have to admit that when I first glanced at some of the forecasts and charts included in that piece, I thought Wikibon was smoking pot and brushed it off. But after having had some lengthy conversations with attendees at Symantec Vision, I can certainly see why Wikibon made some of the claims that it did.
VMware® VMmark® has quickly become a performance benchmark to which many organizations turn to quantify how many virtual machines (VMs) they can realistically expect to host and then perform well on a cluster of physical servers. Yet a published VMmark score for a specified hardware configuration may overstate or, conversely, fail to fully reflect the particular solution’s VM consolidation and performance capabilities. The HP ProLiant BL660c published VMmark performance benchmarks using a backend HP 3PAR StoreServ 7450 all-flash array provide the relevant, real-world results that organizations need to achieve maximum VM density levels, maintain or even improve VM performance as they scale and control costs as they grow.
Delivering always-on application availability accompanied by the highest levels of capacity, management and performance is what historically distinguishes high-end storage arrays from other storage arrays available on the market. But even these arrays struggle to easily deliver on a fundamental data center task: migrating data from one physical array to another. The introduction of the storage virtual array feature into the new HP XP7 dramatically eases this typically complex task, as it facilitates data consolidations and migrations by moving entire storage virtual arrays from one physical array frame to another while simplifying array management in the process.
DCIG is pleased to announce the March 30 release of the DCIG 2014-15 Flash Memory Storage Array Buyer’s Guide that weights, scores and ranks more than 130 features of thirty-nine (39) different storage arrays from twenty (20) different storage providers.
Many changes have taken place in the data center storage marketplace in the 14 months since the release of the inaugural DCIG 2013 Flash Memory Storage Array Buyer’s Guide. This blog entry highlights a few of those changes based on DCIG’s research for the forthcoming DCIG 2014-15 Flash Memory Storage Array Buyer’s Guide.
In this final blog entry from our interview with Nimbus Data CEO and Founder Thomas Isakovich, we discuss his company’s latest product, the Gemini X-series. We explore the role of the Flash Director and how the Gemini X-series appeals to enterprises as well as cloud service providers.
In this second blog entry from our interview with Nimbus Data CEO and Founder Thomas Isakovich, we discuss microsecond latencies and how the recently announced Gemini X-series scale-out all-flash platform performs against the competition.
In 2014, high-density flash memory storage such as the 4TB Viking Technology αlpha SSD will accelerate the flash-based disruption of the storage industry and of the data center. Technology providers that engage in a fresh high-density flash-storage-enabled rethinking of their products will empower savvy data center architects to substantially improve the performance, capacity and efficiency of their data centers. Businesses will benefit by reducing the cost of running their IT infrastructures while increasing their capacity to serve customers and generate profits.
Recognized as an innovator in storage system technology, Thomas Isakovich sat down with DCIG to discuss the development, capabilities, and innovation in Nimbus Data’s latest release: the Gemini X. In this first blog entry, he guides us through the development of the X-series, and where he sees it fitting into the current market.
The key for many enterprises today is to identify a storage provider that delivers the best of what next generation hybrid storage arrays have to offer. However, technology alone is not enough for enterprise organizations. This storage provider also has to meet internal financial stability and long-term viability requirements as well as deliver enterprise-class technical service and support.
One of the most common requests that DCIG gets from its readers is to include the actual cost of storage systems in its Buyer’s Guides. The reason DCIG continues to decline that request and only includes starting list prices is that most storage systems may be configured in many different ways, making it impossible to arrive at a definitive price point. The second part in DCIG’s interview series with iXsystems’ Jordan Hubbard illustrates this point as he discusses how the availability of multiple storage configurations and services trumps a cookie-cutter approach to buying storage every time.
Anyone who managed IT infrastructures in the late 1990s or early 2000s probably still remembers how external storage arrays were largely a novelty reserved for high-end enterprises with big data centers and deep pockets. Fast forward to today, and a plethora of storage arrays exist in a variety of shapes and sizes at increasingly low price points. As such, it can be difficult to distinguish between them. To help organizations sort them out, my blog entry today provides a primer on the types of storage arrays currently available on the market.