Gemini X All-Flash Scale-Out Storage Ready to Replace HDD as Enterprise Tier One; Interview with Thomas Isakovich, Nimbus Data Systems, Inc. Chief Executive Officer and Founder, Part 1

Recognized as an innovator in storage system technology, Thomas Isakovich sat down with DCIG to discuss the development, capabilities, and innovations in Nimbus Data’s latest release: the Gemini X. In this first blog entry, he guides us through the development of the X-series and where he sees it fitting into the current market.

DCIG: Can you tell us what is so different about the X-series?

Thomas: In terms of availability, this is probably the most advanced product — well, it is the most advanced product we’ve ever made, because it builds on everything that we’ve been improving. It takes that Gemini technology and then amps it up with true scale-out capability that is managed by our all-new Flash Director device. We’ve been working on it for the past two years. It’s been a real challenge and also a pleasure developing it.

And, really, for us it completes the story. We believe we have the most competitive all-flash system currently on the market with the Gemini F. The only caveat has been: how do we scale to huge, huge amounts of capacity? We now provide that with the Gemini X, and we’ve done it in a way that keeps the software and the hardware building blocks about 90 percent shared between the two platforms.

Customers can start with the F-series and go to the X-series later. There’s a lot of commonality between the two. From a manageability perspective, that familiarity will be a big plus. I think from an all-flash array portfolio perspective, we’ve got customers covered from three terabytes to a petabyte now — from a $50,000 entry point to multimillion-dollar solutions — all on the same Nimbus Data technology.

The timing of this product from our perspective is pretty perfect because our sales force is increasingly encountering customers that want to do wholesale refreshes of their entire tier-1 infrastructure. Not just flash for individual applications like databases and VDI, but really viewing all-flash as a potential contender for the entirety of the tier-1 infrastructure. So having the ability to scale is well-timed and we’re excited to be putting it out there now.

DCIG: Can you talk more about the deduplication and compression in the Gemini X?

Thomas: The deduplication and compression are really a sneak preview of an important feature of our forthcoming HALO 2014 operating system that we’ll announce later this spring.

One of the challenges in scaling an array that uses inline deduplication is managing the vast metadata hash table that is the result of that, and keeping it in a manner where it’s very rapidly accessible. And, as you know, a lot of solutions consume inordinate amounts of RAM to hold all this. But it’s actually the RAM constraints of the controllers that may be limiting the ability for these inline deduplicating storage arrays to scale. So, many of those guys have been resorting to scale-out because, really, who’s going to build an Intel server that can hold 20 terabytes of RAM? And even if it could, how do you protect it?

So we’ve come up with an algorithm here that effectively uses about 1/50th the RAM and can deliver the same 4K block-size inline deduplication. This is one of the reasons we can build such high-scale systems in such a small footprint. The Gemini X takes advantage of that technology, and so will the Gemini F, as part of running the HALO 2014 OS.
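For readers who want to see the scale problem in numbers, here is a rough back-of-the-envelope sketch of why per-block metadata becomes a RAM problem. The constants (64 bytes per hash-table entry and a flat 50x reduction) are illustrative assumptions for this post, not details of Nimbus Data's actual algorithm.

```python
# Back-of-the-envelope estimate of inline-deduplication metadata RAM.
# All constants are illustrative assumptions, not Nimbus Data's design:
# a conventional approach is assumed to keep one in-RAM entry
# (hash plus block location) per 4 KiB of usable capacity.

BLOCK_SIZE = 4 * 1024        # 4K deduplication granularity
ENTRY_SIZE = 64              # assumed bytes per hash-table entry
REDUCTION = 50               # the ~1/50th RAM figure cited above

def metadata_ram_tib(capacity_tib: float, entry_size: int = ENTRY_SIZE) -> float:
    """RAM (in TiB) to hold one entry per 4K block of usable capacity."""
    blocks = capacity_tib * (1024 ** 4) / BLOCK_SIZE
    return blocks * entry_size / (1024 ** 4)

for capacity in (100, 500, 1024):    # usable capacity in TiB
    naive = metadata_ram_tib(capacity)
    reduced = naive / REDUCTION
    print(f"{capacity:>5} TiB usable: ~{naive:.2f} TiB RAM naive, "
          f"~{reduced * 1024:.0f} GiB with a 50x smaller index")
```

Under these assumptions, a petabyte of usable capacity puts a naive one-entry-per-4K-block index in the mid-teens of terabytes of RAM, which lines up with the "20 terabytes of RAM" problem Thomas describes; a roughly 50x smaller index brings that down to a few hundred gigabytes, a figure a single controller can realistically hold.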

DCIG: What environments are you seeing that are pushing high IOPS?

Thomas: It’s definitely geared more toward folks that just need huge amounts of capacity in a single domain. A lot of our customers, like our biggest one, which has 100 Gemini systems — they have no interest in actually presenting that as a single logical namespace. They really do want 100 different namespaces, because of the way they’re doing their scale-out. But they’re a very sophisticated cloud provider; they can do very specific, fancy things. For the general-purpose enterprise that doesn’t have that level of sophistication — they’re used to having their 500-terabyte hard drive arrays or whatever — they need something that can present as one big box, and that’s where this guy plays.

For example, a Fibre Channel port on a good day can do 100,000 IOPS. You’re not going to get a million IOPS into a single server unless you’re prepared to stick ten Fibre Channel cards in that server — which is going to be a challenge — and then run everything perfectly parallelized and all this other stuff. So our thought process in supporting four million IOPS is that we’re going to need to support dozens or maybe hundreds of physical machines. And at 100,000 IOPS a port, that actually works out to about four million IOPS because you can have up to forty host ports on the Gemini X.

It’s not so much that there’s any one application that can come close to that, but you need to maintain a reasonable sort of IOPS-per-terabyte and IOPS-per-port ratio. That’s our rationale for the four million figure, because if you look at the Gemini F, by itself it’s doing north of a million IOPS in a read-write balanced scenario. So on an IOPS-per-terabyte basis, the Gemini F is actually better, because when you do a cluster scale-out like this, you’re going to have at least a little bit of latency from the cluster grid. We’ve kept latency very low because of the Flash Director design.
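The port math behind the four-million-IOPS figure is simple enough to restate. The sketch below uses only the two numbers Thomas cites (roughly 100,000 IOPS per Fibre Channel port and up to forty host ports on the Gemini X) and is an illustration of the arithmetic, not a performance claim.

```python
# Restating the port arithmetic above. Both inputs are the figures
# cited in the interview, not measured numbers.
IOPS_PER_FC_PORT = 100_000     # "a Fibre Channel port on a good day"
GEMINI_X_HOST_PORTS = 40       # up to forty host ports on the Gemini X

# Ports a single server would need to drive one million IOPS on its own.
ports_per_million = 1_000_000 // IOPS_PER_FC_PORT        # -> 10 FC cards

# Aggregate IOPS if every host port on the array is kept busy.
aggregate_iops = IOPS_PER_FC_PORT * GEMINI_X_HOST_PORTS  # -> 4,000,000

print(f"{ports_per_million} ports per server for 1M IOPS; "
      f"array aggregate ~{aggregate_iops:,} IOPS across {GEMINI_X_HOST_PORTS} ports")
```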

In part two of this interview series, we discuss micro-latency and how the Gemini X performs against the competition.

Ken Clipperton

About Ken Clipperton

Ken Clipperton is a Managing Analyst at DCIG, a group of analysts with IT industry expertise who provide informed, insightful, third-party analysis and commentary on IT hardware, software, and services. Within the data center, DCIG has a special focus on the enterprise data storage and electronically stored information (ESI) industries.
