Every year at the Flash Memory Summit held in Santa Clara, CA, attendees get a firsthand look at the technologies that will impact the next generation of storage. This year many of the innovations centered on forthcoming interconnects that will better deliver on the performance that flash offers today. Here are DCIG’s main takeaways from this year’s event.
DCIG is pleased to announce the March 30 release of the DCIG 2014-15 Flash Memory Storage Array Buyer’s Guide that weights, scores and ranks more than 130 features of thirty-nine (39) storage arrays from twenty (20) different storage providers.
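For readers curious what “weights, scores and ranks” means in practice, here is a minimal sketch of one way a weighted feature ranking can be computed. The feature names, weights, and scores are assumptions made purely for illustration and do not reflect DCIG’s actual criteria, values, or methodology.

```python
# Hypothetical sketch of a weighted feature scoring and ranking scheme.
# The feature names, weights, and scores below are invented for
# illustration and are NOT DCIG's actual criteria or values.

# Illustrative feature weights
weights = {"flash_capacity": 3.0, "replication": 2.0, "vmware_integration": 1.5}

# Illustrative per-array feature scores on a 0-5 scale
arrays = {
    "Array A": {"flash_capacity": 5, "replication": 3, "vmware_integration": 4},
    "Array B": {"flash_capacity": 4, "replication": 5, "vmware_integration": 2},
}

def weighted_total(scores, weights):
    """Sum each feature score multiplied by its feature weight."""
    return sum(weights[f] * s for f, s in scores.items() if f in weights)

# Rank arrays from highest to lowest weighted total
ranking = sorted(arrays, key=lambda name: weighted_total(arrays[name], weights), reverse=True)
print(ranking)  # ['Array A', 'Array B']
```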
In this final blog entry from our interview with Nimbus Data CEO and Founder Thomas Isakovich, we discuss his company’s latest product, the Gemini X-series. We explore the role of the Flash Director and how the Gemini X-series appeals to enterprises as well as cloud service providers.
Recognized as an innovator in storage system technology, Thomas Isakovich sat down with DCIG to discuss the development, capabilities, and innovation in Nimbus Data’s latest release: the Gemini X. In this first blog entry, he guides us through the development of the X-series, and where he sees it fitting into the current market.
As DCIG makes its final preparations for the release of its inaugural Purpose-Built Flash Memory Appliance Buyer’s Guide, we have had a number of internal conversations about what the criteria will be for including and excluding products in this Buyer’s Guide. As we do so, our conversation almost always turns to the ways in which these purpose-built flash memory appliances will impact organizations and their decision-making and buying habits.
In this final installment of our blog series on WhipTail Technologies, a Solid State Drive (SSD) array provider with some impressive features and capabilities, I am continuing my discussion with WhipTail Technologies Chief Technology Officer, James Candelaria. Last time, we looked at how WhipTail implements software RAID on its devices. Today, we will be discussing the different transport protocols supported by the WhipTail array and why the FCoE and iSCSI protocols trump InfiniBand in today’s SSD deployments.
However, during the many presentations that I attended and conversations that I had about this technology, SSD vendors revealed some key “gotchas” about SSDs. They also shared how SSDs stand to impact the hard disk drive (HDD) market as well as the memory market. So here, in no particular order, are some of the new challenges and opportunities that SSDs create as well as what to watch out for.
As I write this blog entry, I am on a flight to New York City to attend the last day of the fall 2008 Storage Decisions conference. While I intend to post a blog entry about my experiences at SD this Friday, the flight is giving me some time to look back at last week and share some additional thoughts and insights I gained while attending the InfiniBand Trade Association (IBTA) Tech Forum in Las Vegas on Monday, Sept 15. While InfiniBand was obviously covered as part of this forum, it was discussed in the larger context of what virtualizing the corporate infrastructure means and how that will shape the way companies construct and manage their data centers in the future.
Day 2 at VMworld has come and gone and probably my biggest regret was that I had to miss this morning’s keynote by VMware’s new CEO, Paul Maritz. In reading through some other blogs this evening about the event, and assuming Storagezilla called it right, it was a doozy, essentially declaring open war on other operating systems. In any case, my day was focused on catching up with a number of vendors to get some of the latest behind-the-scenes scoop in the storage world. In fact, as one walks into the exhibit hall at VMworld, it is easy to mistake this conference for a storage conference.
It’s day one at VMworld in Las Vegas, and while the day for me began in Omaha, NE at 4:30 am CST before landing in Las Vegas around 7:30 am PST, I did not join the throngs basking in the VMworld love fest. Instead I spent the day educating myself more about InfiniBand by attending the InfiniBand Trade Association’s (IBTA) annual tech forum held at Harrah’s (which is adjacent to The Venetian, where VMworld is being held). The reason that I elected to first attend the IBTA Tech Forum and not VMworld is simple. Everyone already knows that server virtualization is the BIG thing. What not everyone knows or understands is why InfiniBand is making a case to become the next big thing in another form of virtualization: virtualizing server I/O.
I believe a new way of thinking should be applied to the deployment of InfiniBand technology in the storage landscape. Most of you probably think of InfiniBand predominantly as a back-end transport for storage and/or the interconnect for high-performance computing (HPC) clusters. Or, “Oh yeah, I heard something about that 5-6 years ago, isn’t that only used in supercomputing or giant research labs?”
I initially intended to share in this blog posting what I learned from my briefings on Day 3 of SNW. However, I have had more time to digest the news surrounding the FCoE announcements at SNW on Tuesday, and the more I think about it, the more this whole FCoE push strikes me as a huge setup that is being carefully orchestrated by the FC industry. Bottom line: Brocade, Emulex and QLogic and, to a lesser extent, Cisco and Intel, used SNW as a platform to promote FCoE in the short term and, longer term, to make sure enterprise data centers lock into FC for the next 10 years.
Xiotech made the first “earthshaking” announcement of the day at 7:00 am, which mostly had those I spoke to shaking their heads trying to figure out what it meant. The announcement centered on its new patented Intelligent Storage Element (ISE) technology, acquired from Seagate last November, which will, according to Xiotech, “virtually eliminate the need for service, scale from one terabyte to one petabyte and dramatically boost performance”.
Last month I did some research and evaluation of Fibre Channel over Ethernet (FCoE). In Part 1 of 3, I shared some elements that can encourage the use of FCoE in the data center. During my research I spent about an hour on the phone with Mike Krause, Fellow Engineer at HP. Mike and I talked about a few things related to FCoE, shared storage and network fabrics. I asked Mike about creating a shared fabric using InfiniBand, because InfiniBand requires a single card type and uses InfiniBand switches to deliver frames to their respective networks. Mike countered that InfiniBand is a great option, but it introduces a new, third architecture to the data center. I realized immediately that it was orthogonal to existing data and storage networks. Mike further commented that he liked InfiniBand as an option, but the original intent was to replace PCI as the primary peripheral interconnect within servers etc. Thus, it made sense that InfiniBand…
When I received the assignment to review the FCoE specification and compare it to iFCP, FCIP and iSCSI (block protocols over data networks), I thought it might be boring. I was very wrong. After just a few short minutes with Claudio and Bill, I knew I was talking to a pair of very intelligent and thoughtful business technologists.