The Clock is Ticking on Decades-Old Storage Technologies; Interview with NetApp VP Dave Mooney Part I

Companies are creating more data, storing it on disk and keeping it accessible for longer periods of time than ever before. The problem is that many of the underlying technologies that have powered storage systems for the past 20 years are still in use today, even though the capacity and performance of modern storage systems have largely outstripped them. In Part I of this interview series with NetApp's Vice President of Worldwide OEM Sales, Dave Mooney, we examine how the clock is ticking on decades-old storage technologies, exposing companies to new risks in their IT infrastructure.

Jerome: Dave, thanks for taking some time out of your schedule and joining me today. To kick things off and for the benefit of DCIG’s readership, could you first begin by telling us a bit about yourself?

Dave: I’ve been involved with the NetApp E-Series business line for over 20 years and currently manage its field sales organization.

The architecture, where the product comes from, and what its core values are have been developed over the last 10 or so years, and it has become a very popular product in the market.

Jerome: Thanks for that background. Having been in the business for nearly 20 years, you have witnessed almost exponential increases in compute power and storage capacity over that period. Can you provide examples of how those increases have positively impacted businesses and the underlying technologies that enable them to run more efficiently?

Dave: We have done a lot of business in what I call the structured data environments. As you get more compute power and more capacity, people analyze data more efficiently and hold it longer.

Early on in the days of RAID, I would say there was a big emphasis on making sure that the storage was reliable, expandable, performed well, and maintained the integrity of the data for a long time. Then over the last 10 to 15 years these technologies became normalized and we have grown used to them.

But as performance continues to increase, the architectures on the storage side have been outpaced by the heavy use of storage. This is why we have developed new ways to manage and protect data, as the original approaches we developed 20 years ago have been outstripped by the size and capacity of the storage systems we have today.

Originally we would stripe the data across drives that were less than 1 GB in size. Now the chunks of data are larger than those original drives, even though drive failure rates are still holding at 1 to 2 percent per year. As a result, rebuild times and failure rates affect us more and more because these drives hold so much data, whereas before that was not necessarily the case.
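To put rough numbers behind that point: at a roughly fixed rebuild throughput, the time to rebuild a failed drive grows in proportion to its capacity. The short sketch below uses purely illustrative figures (a 50 MB/s rebuild rate is an assumption for the sake of the example, not a NetApp number):

# Rough sketch of the point above (illustrative numbers, not NetApp data):
# at a roughly fixed rebuild throughput, rebuild time grows in proportion to
# drive capacity, so bigger drives mean longer windows of degraded protection.

def rebuild_hours(drive_capacity_tb: float, rebuild_rate_mb_s: float = 50.0) -> float:
    """Hours to rewrite one drive's worth of data at a given rebuild rate."""
    capacity_mb = drive_capacity_tb * 1_000_000  # decimal TB -> MB
    return capacity_mb / rebuild_rate_mb_s / 3600

for tb in (0.001, 1, 4, 8):  # from a 1 GB drive of the early RAID days up to 8 TB
    print(f"{tb:>6} TB drive: ~{rebuild_hours(tb):.1f} hours to rebuild")

The exact throughput varies by system and workload, but the linear relationship is the point: rebuilds that once finished in seconds now run for hours or days.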

Jerome: In light of the changes that have occurred over the last 10 to 20 years, what are some of the new trends in storage technologies that are impacting capacity and performance?

Dave: I do think that is the case and you are seeing it on a variety of levels. One level is obviously the advent of flash and solid state storage. It has created an opportunity to reintroduce tiering into the market as people move solid state drives (SSDs) into storage arrays to get more performance out of them.

The other aspect is that there is so much data on so many drives that you now have to start thinking, “How are my applications going to perform when I have drives failing?” Drives are going to fail; they are going to fail at a certain rate, and those failure rates have never really changed over time.

Then the other question is, “How is my performance going to be affected when these drives are rebuilding?” Five or six years ago, people were not talking about rebuild times. Now they have become a big issue.

We know that in 18 or 36 months, the size of the storage system is going to double or even quadruple, which means the rebuild times are also going to double or even quadruple.

This means if you have a problem today, it is going to get a lot worse. If you do not have this problem today, there is a really good chance that in a couple of years you will be worried about it. Your storage systems are probably going to last three to four years, so it is imperative that people start looking at this issue now.
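To make the compounding risk concrete: as rebuild windows stretch out along with capacity, the odds that another drive in the same group fails during a rebuild stretch out with them. The sketch below uses purely illustrative assumptions (a 24-drive group and a 2 percent annual failure rate, neither of which is a NetApp figure):

# Rough sketch of the compounding risk described above (illustrative assumptions):
# as drive capacity doubles, rebuild windows roughly double, and the chance that
# another drive in the group fails during the rebuild grows with them.

HOURS_PER_YEAR = 8760

def risk_during_rebuild(drives_in_group: int, rebuild_hours: float,
                        annual_failure_rate: float = 0.02) -> float:
    """Approximate probability that at least one more drive fails mid-rebuild."""
    p_per_drive = annual_failure_rate * rebuild_hours / HOURS_PER_YEAR
    survivors = drives_in_group - 1  # the failed drive is already gone
    return 1 - (1 - p_per_drive) ** survivors

for hours in (6, 12, 24, 48):  # rebuild window doubling as capacity doubles
    print(f"{hours:>3} h rebuild: {risk_during_rebuild(24, hours):.2%} chance of a second failure")

Even at these small percentages, the risk roughly doubles each time the rebuild window doubles, which is exactly the trend described above.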

Jerome: So what is changing to enable faster adoption of the technologies that organizations now need so they do not have to worry about these issues?

Dave: That’s a good question. It seems to me that customers have been really conservative over the years when it comes to storage. They are less likely to take chances on their primary storage systems because they know those systems have to outlast the servers attached to them.

They know the storage is going to be there longer and that it holds their data, so if a server is just running an application, they seem more willing to take chances on the compute side. On the storage side, nobody faults IT for not implementing five new features on the storage system, but IT staff certainly get in trouble if the CEO or the CIO loses his or her data.

You only hear bad news when it comes to storage, and people want to protect themselves from that bad news. That is something we probably worry about more than most with the E-Series line. We are really focused on making sure that the storage is transparent, that you never have to think about it and never have to worry about it. We are constantly working to keep it that way.

In Part II of this interview series with Dave, he will discuss what new features are now available on the NetApp E-Series so that managing it is a worry-free experience. 


About Jerome M. Wendt

President & Lead Analyst of DCIG, Inc. Jerome Wendt is the President and Lead Analyst of DCIG Inc., an independent storage analyst and consulting firm. Mr. Wendt founded the company in September 2006.
