The spring version of Storage Networking World 2009 is again upon us and, as is my tradition, I am taking some time out of my normal blogging routine to do some real-time blogging to record my thoughts and observations at the event. Unfortunately, I am running about a day behind in discussing my activities and the information and insight I picked up while at SNW. So, in actuality, this is more like near real-time analysis than real-time, but sometimes that’s life.

The first thing that many were interested in finding out as they arrived was how many people (users and vendors) were actually in attendance. Bottom line, SNW day 1 (Monday, April 6) was pretty quiet and seemed sparsely attended. The normal high-profile vendor displays and grandiose announcements that often accompany SNW were noticeably absent, though I received mixed reports on whether user attendance was up or down.

To the naked eye, it appeared it was down (confirmed by many), but a number of the vendors I spoke to said their booths were busy and users were genuinely engaging them in discussions about their products. I also received feedback that many of the attending users were coming from nearby cities in Florida and simply commuting to and from the event, which, based upon the comments, the economy and actual observations, seemed a reasonable conclusion.

For my part, Monday was a fairly light day for taking a look at some of the emerging products and technologies in the storage market. I kicked off Monday speaking with Tarmin Technologies about its GridBank archiving product. Frankly, I was a little skeptical about whether the market needed another software-based archiving product in an already crowded space. But the meeting with Tarmin left me more optimistic about its future than I originally anticipated for a couple of reasons.

First, Tarmin has been developing this code for some time based upon specific user requirements. It was developed by a team of former end-users who once managed data in Fortune 500 data centers, and GridBank already has over a million lines of code (and, if I recall correctly, it may even be over 2 million lines). The point is, a substantial amount of work has already gone into developing and testing the code (unless Tarmin spent an inordinate amount of time cutting and pasting code to get to millions of lines just to impress me).

Second, Tarmin hit on a lot of points that resonated with me during the presentation. It talked about its ability to agentlessly scan and index data, how it uses its GridBank architecture to scale performance and capacity, and how it can then create policies that retain data for specified periods of time as well as digitally shred data. I found the last point relevant because of more reports I am getting from end-users getting nailed not for their failure to keep data but for keeping data too long. In some cases, this is just as bad as if they had destroyed the data in the first place.
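
To make that last point concrete, here is a minimal, hypothetical sketch of how a retention-and-shredding policy might be enforced. This is not Tarmin’s API; the names, the retention window and the shredding approach are my own assumptions, meant only to illustrate the general technique of retaining data for a fixed period and then digitally shredding it.

```python
import os
from datetime import datetime, timedelta

# Hypothetical retention window; a real policy engine would let
# administrators set this per data class.
RETENTION = timedelta(days=7 * 365)

def digital_shred(path, passes=3):
    """Overwrite a file with random bytes before deleting it so the
    contents cannot simply be undeleted from the archive store."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())
    os.remove(path)

def enforce_policy(archive_root, now=None):
    """Walk the archive and shred any file whose retention period has expired."""
    now = now or datetime.utcnow()
    for dirpath, _, filenames in os.walk(archive_root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            archived = datetime.utcfromtimestamp(os.path.getmtime(path))
            if now - archived > RETENTION:
                digital_shred(path)

if __name__ == "__main__":
    enforce_policy("/archive/gridbank")  # illustrative path only
```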

However, there were a couple of things that I would like to see in this product that would make it more compelling. The first is some way to assess the impact that moving data into the archive would have on the environment. I walked away a little unclear whether Tarmin could assess that impact or what actions the product would take if it started to negatively affect the production servers.
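
By way of illustration only, here is the kind of throttling logic I have in mind: the archive job samples production server load before each batch and backs off when the server is busy. The thresholds and the use of psutil to read CPU utilization are my own assumptions; Tarmin did not describe any such mechanism.

```python
import time
import psutil  # assumed available for reading host CPU utilization

CPU_THRESHOLD = 70.0   # percent; back off above this load
BACKOFF_SECONDS = 60   # how long to pause when the server is busy

def migrate_batch(files):
    """Placeholder for moving one batch of files into the archive."""
    for f in files:
        print(f"archiving {f}")

def throttled_archive(batches):
    """Archive batches of files, pausing whenever the production
    server's CPU utilization exceeds the configured threshold."""
    for batch in batches:
        while psutil.cpu_percent(interval=1.0) > CPU_THRESHOLD:
            time.sleep(BACKOFF_SECONDS)
        migrate_batch(batch)
```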

Second, I thought it would be really cool if Tarmin could in some way content index the spoken word in audio and video files. Adding the ability to automatically index these types of files would clearly differentiate it from other products in the marketplace and give certain organizations with large amounts of these files a compelling reason to buy it. This may become even more relevant as more organizations keep and archive their voice messages, coupled with the continuing growth in social media. By performing these types of tasks, Tarmin would have a story no one else is telling and give certain customers a more compelling reason to buy its product now so it can gain an initial foothold in the market.
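
To show what I mean, here is a bare-bones sketch of a spoken-word indexing pipeline for audio files. The transcribe step is a stand-in for whatever speech-to-text engine a vendor might plug in; all of the names and the simple inverted index are hypothetical and only illustrate the idea of making spoken content searchable.

```python
from collections import defaultdict

def build_spoken_word_index(audio_paths, transcribe):
    """Build a simple inverted index mapping each spoken word to the
    audio files in which it occurs. `transcribe` is any callable that
    turns an audio file into text (i.e., a speech-to-text engine)."""
    index = defaultdict(set)
    for path in audio_paths:
        text = transcribe(path)
        for word in text.lower().split():
            index[word].add(path)
    return index

# Searching then becomes a dictionary lookup, e.g.:
#   index["merger"] -> {"voicemail_0412.wav", "earnings_call.mp3"}
```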

Following my meeting with Tarmin, I had a chance to hook up with NetApp’s VP of Solutions Marketing, Patrick Roger, for a brief meeting. Our conversation focused on how quickly, or slowly, organizations are virtualizing their infrastructures.

I had heard someone estimate that it would be another 15 years before infrastructures were broadly virtualized and the full benefits of virtualization were realized. He seriously questioned that assessment, as NetApp is seeing a much more rapid ramp-up in both server and storage virtualization among its clients and expects large-scale adoption to occur more quickly, possibly in as few as 5 years. NetApp is already seeing some of its clients cut server and storage provisioning times from weeks or even months down to just hours, which is dramatically driving costs out of the environment.

Of course, everybody is talking about cloud storage, so it wouldn’t be a storage conference without talking to an emerging cloud storage player, and Zetta, Inc. gladly stepped up to the plate. Zetta emerged from quasi-stealth mode on Monday to announce that it exists as a cloud storage provider and that it is specifically targeting small to midsize companies (200 – 2,000 employees) with its cloud storage offering. Right now it is looking for users to beta test the product and, according to what Zetta told me, it already had about 100 signed up; if interested, you can sign up here.

There were two interesting tidbits about Zetta that caught my attention during the briefing. First, it is looking to provide its clients with visibility into its cloud infrastructure so users can actually see where their data resides within the Zetta cloud. While consumers may be comfortable storing their data in “the cloud” and not knowing exactly where it is, businesses are not. They want to know where their data is physically located and how it is protected, so Zetta is providing them with this higher level of transparency.
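
As a rough illustration of that kind of transparency (Zetta has not published its interface in this detail, so the structure below is entirely my own assumption), a provider could expose per-object location and protection metadata along these lines:

```python
# Hypothetical metadata record showing where a stored object physically
# resides and how it is protected; not Zetta's actual API.
object_metadata = {
    "object": "backups/exchange/2009-04-06.tar.gz",
    "size_bytes": 52428800,
    "copies": [
        {"data_center": "US-East", "rack": "A-17", "protection": "RAID-6"},
        {"data_center": "US-West", "rack": "C-03", "protection": "RAID-6"},
    ],
    "encryption": "AES-256 at rest",
    "last_verified": "2009-04-06T14:32:00Z",
}

# A customer audit script could then confirm that every object has at
# least two geographically separate copies:
assert len({c["data_center"] for c in object_metadata["copies"]}) >= 2
```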

Second, Zetta is working with local telcos to possibly put some storage systems at the telco’s sites so the storage is closer to the customer site and provides better performance. I’m curious to see how this particular angle plays out if Zetta in fact ends up doing this.

Finally, I can’t believe it, but enterprise SRM (Storage Resource Management) is back. A number of vendors threw in the towel on this technology 3 or 4 years ago (at least from a heterogeneous perspective), but Data Global, a German company making its first foray into the US market, tells me it has cracked the enterprise SRM nut and even has referenceable (and satisfied!) customers running it in large Fibre Channel SAN environments.

What I found most intriguing about Data Global’s SRM design is that it adopted a technique for quickly and efficiently scanning volumes and files on servers similar to the one CommVault uses in its Simpana software: Data Global creates an index on each server that it scans. This eliminates the need to create and maintain a large central database and lets it quickly gather near real-time information across the enterprise. In talking to its CEO, I was also impressed by how Data Global had already designed the product to recognize and account for newer storage system features such as thin provisioning, such that it can allegedly report on the actual amount of storage in use beneath thinly provisioned volumes.
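
A minimal sketch of that distributed-index idea follows; it is my own illustration of the technique, not Data Global’s actual implementation. Each server summarizes its locally built index, and the SRM console aggregates those summaries on demand rather than maintaining one large central database.

```python
def summarize_local_index(local_index):
    """Reduce one server's locally built file index to the totals
    the SRM console needs."""
    return {
        "files": len(local_index),
        "bytes_used": sum(entry["size"] for entry in local_index),
    }

def aggregate(server_summaries):
    """Combine per-server summaries into an enterprise-wide view,
    avoiding a single large central database."""
    return {
        "servers": len(server_summaries),
        "files": sum(s["files"] for s in server_summaries),
        "bytes_used": sum(s["bytes_used"] for s in server_summaries),
    }

# Example: two servers report their locally built indexes.
server_a = [{"path": "/data/a.db", "size": 4000000}]
server_b = [{"path": "/data/b.log", "size": 1500000}]
print(aggregate([summarize_local_index(server_a), summarize_local_index(server_b)]))
```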

While this product intrigues me, and even if it works as claimed, Data Global has its work cut out for it here in the US. Large data centers clearly need this sort of monitoring and reporting tool, but so many have become disillusioned with tools like this from past bad experiences that it may take Data Global some time to convince anyone to take the required leap of faith and bring its product in-house.

More tomorrow on SNW Day 2.

Comments

  1. I need to respectfully clarify and correct some of the audience impressions you’ve made.
    With one full day of sessions remaining here at SNW Spring 2009, end-user attendance is 92% of that at a very healthy SNW Fall 2008, which took place just prior to the macroeconomic financial crisis. Bottom line is that the end-user audience here in Orlando is solid and trending to what it was in 2008. SNW, Computerworld and the SNIA had already taken measures last year to ensure that users would be well-informed to attend SNW in a 2009 of economic uncertainty.
    You are correct that the geographic audience mix of users attending SNW Spring 2009 has shifted from SNW Spring 2008 and SNW Fall 2008. While the volume and storage responsibility of user representation has remained consistent with SNW’s 2008 conferences, a higher representation of users from Central Florida is at SNW Spring 2009. Despite that adjustment, a considerable number of users are attending SNW Spring from across the country and internationally.
    Corporate travel restrictions have affected IT vendor companies attending SNW. While a volume of sponsoring companies are represented at SNW Spring 2009 (the world’s largest storage networking event), those companies have sent fewer representatives per sponsoring company. The Expo is busy, demonstrating the relative health of the storage industry and user needs.
    Attendees at SNW Spring 2009 are taking advantage of a breadth of consistently popular educational tracks and more than 150 sessions, including 3 new and well-attended Summits at SNW covering Cloud Computing, Virtualization and Solid State Storage.
    SNW Fall 2009 will take place October 12-15, 2009, at the JW Marriott Desert Ridge Resort in Phoenix, Arizona. Submissions to the call for presentations can be made at http://www.snwusa.com
    Any questions, please let me know!
    Derek Hulitzky
    Vice President, Event Marketing & Conference Programs
    Computerworld

  2. Derek,
    Thanks for leaving the comment and reaching out to me while at the conference today. It was a pleasure to finally meet you.
    For others following this thread, I spoke to Derek about my comments and he also told me that part of the reason attendance appeared to be off was that many of the vendors did not send as many of their own representatives to the conference. As a result, the perceived traffic away from where the sessions were being held was down significantly. However, in the main area where the sessions were being held, the hallways appeared crowded between sessions and the vendors at the kiosks in the hallway reported steady traffic.
    Jerome
