
Toward the end of April, Wikibon's David Floyer posted an article on the topic of server SANs entitled "The Rise of Server SANs," which generated a fair amount of attention and was even the focus of a number of conversations I had at this past week's Symantec Vision 2014 conference in Las Vegas. However, I have to admit that when I first glanced at some of the forecasts and charts included in that piece, I thought Wikibon was smoking pot and brushed it off. But after some lengthy conversations with attendees at Symantec Vision, I can certainly see why Wikibon made the claims that it did.

While I recommend you read the article at Wikibon's site in its entirety, I'll briefly summarize it. Floyer argues that the combination of powerful servers built from commodity hardware, server-side flash, and storage software installed on the server (one possible iteration of software-defined storage, or SDS) will disrupt today's SAN- and NAS-attached storage array market. In its place, two types of server SAN architectures will emerge: Hyperscale Server SAN storage and Enterprise Server SAN storage. The collective impact of these two technologies over the next decade will be so dramatic that by 2024 they will generate 10x or more the revenue of today's storage arrays.

Assuming this forecast is true (and it is a pretty damn good reading of the tea leaves if it is), the entire storage industry is headed for some tumultuous times over the next decade. Based upon these predictions, today's SAN- and NAS-attached systems will begin to feel a noticeable impact as soon as 2017, with sales of externally attached storage arrays falling off a cliff from 2019-2022 before finally bottoming out in the 2024-2027 time frame.

While these types of forecasts make for entertaining reading and good press, I initially dismissed this one as too extreme to be realistic. But while at Symantec Vision this past week, I spent a great deal of time talking to the Storage Foundation team about server SANs and flash in general, and about Symantec's recently released (December 2013) SmartIO technology in particular, which could help make Wikibon's forecast for server SANs a reality.

As I covered in a blog entry on SmartIO back in December 2013, SmartIO provides organizations with three distinct benefits:

  1. Organizations may do targeted deployments of in-server flash without exposing themselves to the risk of having a single copy of data on the server.
  2. Organizations get the performance benefits of in-server flash.
  3. Organizations may opt to use in-server flash on a larger scale by deploying it in conjunction with Tier 2 storage arrays.

Applying these three benefits to the concept of server SANs and their forecasted growth, we begin to see why technologies like SmartIO can make this a reality:

  1. Another significant leap forward in performance. Companies are currently deploying hybrid or all-flash arrays because they deliver a 2-5x increase in performance over HDD-based arrays by reducing latency from 5-10 milliseconds to 1-2 milliseconds (or less). Using server SANs with server-side flash deployments, latencies drop into the microsecond, and in some cases even nanosecond, range.
  2. It’s much cheaper. SANs and NAS are very pricey, but their deployments to date have been justified by lowering costs through shared storage, better utilization rates, and improved performance. Those benefits are almost completely eroded by server SANs. Server-side flash already provides better performance at a lower cost. While storage utilization rates may not (and I emphasize may not) be as good as in NAS and SAN environments, the savings realized through the deployment of server SANs may make this last point irrelevant (at least for many organizations).
  3. It scales more simply and affordably. Need more performance? Add another server with more flash drives, or insert flash into existing servers. This is simpler and more affordable than adding an entire new storage array to the mix when an existing one runs out of capacity, performance, or both.

As promising as server SAN technology looks, let’s not forget that this transition is not going to happen overnight. Here are a few key issues that proponents of server SAN architectures have yet to answer:

  1. Who you gonna call? Almost every organization, large or small, wants the assurance that its preferred storage vendor will fly in the black helicopters when applications or equipment break (and gear will break – count on it!) It is not yet clear that all server SAN providers are ready to step up to this challenge.
  2. Plug-n-play. Deploying server SANs that require any kind of touch on the server, such as installing new software or hardware, tends to be met with resistance by server admins. Further, not every server OS or hardware platform can have just any software or hardware installed on it. SAN and NAS architectures have the luxury of an incumbent, in-place architecture where these types of battles (installs of HBAs, multi-pathing software, etc.) have already been largely fought and won. This makes introducing new all-flash or hybrid storage arrays into these environments simpler and less intrusive today, and creates a barrier that server SANs need to overcome.
  3. Unproven. Save for the example of Symantec’s Storage Foundation suite and its SmartIO, many competitors with server SAN solutions offer software that is still new and largely unproven. This alone will give any organization pause.

Having had some time to think and talk about this topic of server SANs over the past week, it is clear that this storage architecture has a much brighter future than I initially gave it credit for. That said, to say that it will crush SAN and NAS architectures over the next decade to the point where they hold, at best, a minor share of the storage space assumes that everything breaks almost perfectly for server SAN providers. My gut feeling, having worked as a storage guy for 10 years and now having written about it for another 10, is that the only assumption you can make is that stuff will break, and then you either hope it breaks in your favor or have a contingency plan in place to help ensure it does.


About Jerome M. Wendt

Jerome Wendt is the President and Lead Analyst of DCIG Inc., an independent storage analyst and consulting firm that he founded in September 2006.
