DCIG 2013 High Availability and Clustering Software Buyer’s Guide Now Available

DCIG is pleased to announce the availability of its DCIG 2013 High Availability and Clustering Software Buyer’s Guide, which weights, scores and ranks more than 60 features across 13 software solutions from 10 software providers. This Buyer’s Guide provides the critical information that organizations of all sizes need when selecting high availability (HA) and clustering software for applications running in their physical or virtual environments.

The world of HA has fundamentally changed. Technicians who once had to work for weeks, if not months, to configure and set up a highly available environment for just one application can now often deploy a highly available configuration for all applications in an environment in as little as a few hours.

This transformation is no accident, as data centers in organizations of all sizes are rapidly changing. Driven by the cost savings and power of virtualization, organizations are virtualizing more and more of their applications–“critical” and “non-critical” alike–and consolidating them onto ever fewer physical machines. In an October 2012 report, the analyst firm Gartner estimated that virtual OS instances already comprised over 70% of total OS instances, a percentage it expects to grow to over 80% by 2016.

This is where today’s high availability and clustering software comes in. This software has been around for decades, powering application, OS and server-based clustering. However, it has evolved to meet today’s new application and business requirements.

High availability and clustering software enables organizations to live in today’s real-time world. It frees them to first cluster and then move an entire application stack to another local or remote location within seconds while incurring little to no downtime throughout the process.
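The core mechanism behind most of these products is simple even if the implementations are not: nodes exchange heartbeats (most commonly over Ethernet, as the statistics later in this Guide show) and, when a node stops responding, the cluster restarts the application stack on a surviving node. The sketch below is a minimal illustration of that idea, not the algorithm any particular product in this Guide uses; the peer address, timeout and restart command are all hypothetical.

```python
import socket
import subprocess
import time

# Hypothetical peer node and service; real clustering software adds quorum,
# fencing and split-brain protection that this sketch deliberately omits.
PEER_ADDRESS = ("10.0.0.2", 9000)   # heartbeat port on the peer node
HEARTBEAT_INTERVAL = 1.0            # seconds between heartbeat probes
MISSED_LIMIT = 5                    # probes missed before declaring the peer dead
RESTART_COMMAND = ["systemctl", "start", "example-app"]  # illustrative only

def peer_alive(address, timeout=1.0):
    """Return True if the peer answers a TCP heartbeat probe."""
    try:
        with socket.create_connection(address, timeout=timeout):
            return True
    except OSError:
        return False

def monitor():
    missed = 0
    while True:
        if peer_alive(PEER_ADDRESS):
            missed = 0
        else:
            missed += 1
            if missed >= MISSED_LIMIT:
                # Failover: bring the application stack up on this node.
                subprocess.run(RESTART_COMMAND, check=False)
                return
        time.sleep(HEARTBEAT_INTERVAL)

if __name__ == "__main__":
    monitor()
```

Real products replace the single TCP probe with redundant heartbeat networks and add quorum and fencing so that two nodes never run the same application at once, which is part of what the features scored in this Guide cover.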

Virtualization is at the center of high availability and clustering software’s ability to expand its presence in the data center. Using virtualization with high availability and clustering software, organizations can create highly redundant operating platforms so that applications may be easily ported across both physical and virtual infrastructures.

More importantly, deploying high availability and clustering software no longer forces an either/or choice. By combining virtualization, the economical, powerful computing platforms available from many hardware providers today and one of the high availability/clustering software packages examined in this Buyer’s Guide, organizations can survive catastrophic events of varying magnitudes with relative peace of mind, knowing those events are non-events from a data and application availability perspective.

It is in this context that DCIG presents its 2013 High Availability and Clustering Software Buyer’s Guide. As prior Buyer’s Guides have done, it puts at the fingertips of organizations a comprehensive list of high availability and clustering software solutions and what features they offer in the form of detailed, standardized data sheets that can assist them in this important buying decision.

The DCIG 2013 High Availability and Clustering Software Buyer’s Guide accomplishes the following objectives:

  • Provides a comprehensive listing of high availability and clustering software by vendor and product
  • Provides an objective, third party evaluation of high availability and clustering software that evaluates and scores their features from an end user’s perspective.
  • Scores and ranks the features of each high availability and clustering software solution based upon the criteria that matter most to end users and then presents these results in easy-to-understand tables that display the products’ scores and rankings so end users can quickly ascertain which high availability and clustering software solutions are the most appropriate for their needs.
  • Provides a standardized data sheet for each of the 13 high availability and clustering software solutions from 10 different software providers so users may do quick comparisons of the features that are supported and not supported on each product.
  • Gives any organization a solid foundation for getting competitive bids from different high availability and clustering software providers that are based on “apples-to-apples” comparisons.

The DCIG 2013 High Availability and Clustering Software Buyer’s Guide Top 5 solutions include (in alphabetical order):

  • HP ServiceGuard Solutions for HP-UX
  • IBM PowerHA SystemMirror for AIX
  • Oracle Solaris Cluster 4.1
  • Symantec Cluster Server (VCS)
  • VMware High Availability 5.x

Symantec Cluster Server achieved the “Best-in-Class” ranking and earned the top spot in this inaugural DCIG High Availability and Clustering Software Buyer’s Guide. What makes Symantec Cluster Server’s strong showing so impressive is that, in this highly competitive space, most HA/clustering software packages focus on only a few operating systems, or even just one. Symantec Cluster Server stands apart with its support for all standard operating systems, and it takes this support one step further by allowing failovers to be orchestrated across multiple operating systems.

In doing its research for this Buyer’s Guide, DCIG uncovered some interesting statistics about high availability and clustering software solutions in general:

  • 100% of the solutions evaluated support at least 14 nodes in a cluster
  • 92% send heartbeats over Ethernet
  • 92% operate without a clustered file system
  • 84% support DB2 in a cluster, the highest level of support for any application of any kind
  • 69% support a clustered file system
  • 62% support SAN-based (FC or FCoE) failover
  • 62% support NAS-based (CIFS/NFS) failover
  • 54% support SUSE Linux, the highest level of support for any OS of any kind
  • 15% support more than 32 nodes in a cluster

The DCIG 2013 High Availability and Clustering Software Buyer’s Guide is immediately available. It may be downloaded for no charge with registration by following this link.




BC, DR and Compliance Driving Cloud Service Provider Convergence; Interview with AIS VP of Network Engineering Steve Wallace Part I

A convergence is happening in the cloud service provider space. More cloud-based archive and backup providers are evolving to account for transactional/production data while managed service providers want to extend their reach into the archival/backup space. One company at the forefront of this convergence is cloud service provider American Internet Services (AIS). Today I talk with AIS’s VP of Network Engineering, Steve Wallace, about how this convergence is impacting cloud service providers in general and AIS specifically.

Jerome: Thank you Steve for taking time out of your schedule to meet with me and talk more about cloud service providers in general and AIS specifically. So to kick off our conversation, can you tell me a bit about how both cloud service providers and AIS are evolving to meet these converging cloud service provider requirements?

Steve: Jerome, thank you for giving me the opportunity to publicly share some of my thoughts on these important topics.

Cloud service providers have the challenge of providing cloud services – compute and local storage – as well as meeting client requirements to extricate their data from their local site for business continuity (BC) or disaster recovery (DR) purposes.

There are a few pressures that play into that opportunity. One is regulatory compliance for public companies. They need some sort of business continuity plan, which requires that they store their data offsite.

In some cases, regulated industries, like the financial industry in California, cannot ship their data outside of the state. You also see this in Europe. They have to keep all of their financial data within the borders of the EU.

It is a similar situation in California. Banks need to know where their data lives – that is not always simple – especially when dealing with a larger service provider like Amazon.

As a cloud service provider, one of our products is cloud storage of which there are several types. There is transactional storage which is extremely fast and local. Then there is bulk storage. This is network attached, SATA storage that is not transactional but still very fast.

Finally, there is archival storage. Archival storage is large amounts of data that need to be stored for extended periods of time or is driven by regulatory requirements to get it out of the area or out of the local data center.

Jerome: So explain to me how these different requirements are forcing cloud service providers like AIS to evolve their offerings so they encompass transactional as well as more specific bulk/archival storage requirements.

Steve: Over the last two years AIS has transitioned from a traditional, co-location services company to a managed services company. As such, data ultimately ends up in the cloud and, as a result, we become a cloud services provider.

In this respect AIS already has a very solid regional network that we operate ourselves and that connects us to a DR-capable site in Phoenix. To achieve this goal we have leveraged our networking services and clients’ gear to provide turnkey DR and BC solutions, with storage being one of the crucial pieces of that solution.

We can transmit data very easily to Phoenix and provide very good external Internet connectivity. This gives us many ways to move data from place to place. We can offer local data center storage which, in the co-location model, is on the client’s own equipment that they own and operate.

However clients that have data here in San Diego – companies with Big Data, like genomics, bioinformatics and financial analytics companies – need to safely get their data somewhere offsite. The obvious choice for data located here in San Diego is Phoenix because it is geographically stable and the data is unlikely to come to any harm there.

However this approach becomes problematic for clients when it comes to supporting remote infrastructure. They wind up with not only the CAPEX of having to build the equipment but the OPEX of having to support that equipment in a remote location. So ultimately we have to get rid of that CAPEX component and allow them to get data into a safe place that may not even be on our network.

The first step is to build a fast, reliable network and provide the DR facilities. The second is to add on features that complete the picture of business continuity and DR, such as global load balancing. Clients need this because, if they are a pay-per-click type of business or any sort of e-commerce operation, any downtime translates to lost revenue.

Now that the basic capabilities are in place, businesses can transfer their operations to the DR location and seamlessly redirect operations to the backup site.  Whether it is full-scale operations or limited operations, at least they are in business and have some communication with their client.

There are even a number of scenarios where having two locations is not enough. For instance computer hackers and viruses are some of the most devastating threats to your data. These can destroy or corrupt your data to the point where you cannot trust it.

What you need to do is put your data into some place safe – in a read only repository – that conforms to the best practices for storing your data. This often includes encrypting the data during transmission and encrypting the data while it is in the archive storage space.

There are a lot of other little pieces of security to consider, such as who can access your data, and making sure access control is independent from your own internal security systems. You probably want to de-identify the file names so no one can look at a URL and say, “That is the password file” or “That is a list of social security numbers.”

This is why we partnered with Nirvanix. They are local and within our network reach so we can get data very quickly and seamlessly into their cloud. In the cases where clients need to have more than one repository, they have that capability at the click of a button. You may now have three instances of that data, spread across the globe.

When clients need to know exactly where their data is, they can constrain the storage location. Back to the basics: First, you have the front line defenses. In your local data center you may have load balancing and redundancy built into your equipment.

Next you want to have your equipment in two places and have some sort of failover capability between sites. Then have your important data transmitted from one site to the other. In any case, you need to get your data offsite should something really bad happen – the smoking crater scenario – so you can rebuild everything. Granted, it may be a slightly outdated data set, but in most cases that is something you can use to recover your business.

In Part II of this interview series with Steve Wallace, we talk about the different ways that companies may host their data with a cloud service provider and why archival/backup is a first step that many take.




CDP Finding a New Home in the Next Frontier of Data Protection: Managed Service Providers and Social Media

Ever since continuous data protection (CDP) was introduced nearly a decade ago, it has largely been a technology looking for a problem to solve. However, in the last few years it has been finding a home in the most unlikely of places – social media websites. But maybe what is most interesting is that little-known R1Soft CDP has emerged as the early and widely recognized leader in this space.

The growing role that CDP is playing in protecting social media websites first came to my attention late last year. I was attending a conference in Silicon Valley and, while there, had the opportunity to talk to a number of system administrators who worked for various hosting providers.

Now if there is any place where geeks and business owners successfully co-exist, it is within the halls of today’s managed service providers as their system administrators and architects are in many respects today’s modern-day cowboys. What they may lack in discipline and couth they make up in savvy and an uncanny knack of how to milk the most out of systems that most other companies would have mothballed years ago.

Yet one particularly thorny problem that many of these individuals were encountering, and unable to easily and economically resolve, was protecting their social media websites. While websites are often viewed as static and unchanging, many of the client websites they are responsible for protecting are social media sites that are constantly changing and being updated.

Aggravating the problem, many of these websites are:

  • Running as either physical or virtual machines on Red Hat Linux or Windows servers
  • Collecting reader comments and feedback that could not be reproduced
  • Websites hosted for a fixed monthly fee

Protecting the reader comments and feedback was specifically becoming a priority for the MSP’s clients. Aside from it being impossible to recreate the comments left by the readers on these sites, a growing amount of corporate intellectual property now resided in these reader comments in the form of suggested product enhancements, workarounds, and new use cases for the products.

The trouble these system administrators encountered as they looked for a solution came on multiple fronts.

  • Few CDP technology options with support for both Windows and Linux. The number of data protection solutions that protect both Windows and Linux and offer CDP functionality is limited.
  • Affordable. Managed service providers are under constant pressure to offer more services at ever lower costs, so an expensive backup solution, no matter how good it is, is not an option.
  • Quick recoveries. Websites are expected to be up 24×7 so in addition to constantly protecting the environment, the CDP solution had to enable the MSPs to recover just as quickly.
  • Protect both physical and virtual environments. This is maybe what surprised me the most. One would think that hosting providers would be at the forefront of adopting and delivering virtualization in their environment. While they certainly offer and support it, the amount of their environments that is virtualized is less than one would think. I expected somewhere from 70 – 80% but many are in the 50% or less virtualized range with some only having 20 – 30% of their servers virtualized.
  • No SANs or NAS. Despite the buzz you hear about networked storage environments, many of these managed service providers still run physical servers with internal direct attached storage so they do not have access to any of the advanced software feature functionality found on high end storage systems.

It is in this space that CDP software, and R1Soft CDP in particular, is finding a new home. CDP software enables these service providers to deliver the constant data protection that their customers want while keeping the same low-cost infrastructure they have in place now.

R1Soft CDP is having particular success in this space. The reasons that systems administrators at these hosting providers constantly and consistently cite for choosing R1Soft are:

  • Protects a large number of servers (I have talked to R1Soft customers using a single R1Soft CDP server to protect over 100 servers)
  • Is the only data protection provider of which I am aware that offers its own snapshot plug-in for Red Hat Linux
  • Bare metal restore capabilities
  • An affordable starting price point (starts in the neighborhood of $15K)

CDP has long been a solution looking for the right venue in which to get a footing. In managed service providers, it finally seems to have found a match. These are cost-conscious organizations that need data protection software that gives them the ability to constantly protect their data and then quickly recover it. Equally important, they can achieve this ideal without first needing to put in place a costly hardware infrastructure.

This is really what makes the rise of R1Soft CDP among managed service providers all the more impressive. There are numerous data protection providers in the market today, but R1Soft CDP has carved out a nice niche for itself by providing the specific technical features that these providers need at a price point that matches their budgets.




Three Reasons Why the Traditional Approach to Backup Persists

A couple of weeks ago I received a briefing on Atempo Live Navigator and its deduplication and near-CDP features, which are specifically targeted at desktops, laptops and file servers. Since that conversation, it has struck me that CDP and near-CDP technologies have been around for years, which got me to thinking: why do traditional approaches to backup persist even as arguably better approaches to data protection, such as CDP and near-CDP, struggle to gain traction?

When I use the term “traditional backup,” I am referring to the practice of making a copy of production data on a nightly and weekly basis. This is usually done in the context of doing an incremental or differential backup on weekdays and then a full backup on the weekend. While I am not sure exactly how this methodology originated or when, it most likely traces its roots back to when the only effective and economical way to do backup was to use tape as the primary backup target.

But now that disk has effectively replaced tape as the primary backup target in many environments, continuing with this legacy approach of daily and weekly backups makes little sense to me. While these disk-based backups are “recoverable” in the broadest sense of the term, organizations cannot present a backup image to a server and immediately restart the application from it. Instead they have to first access the backup software and restore the data before they can restart the application or access the file.

While taking these extra steps is not “wrong” per se, it arguably adds extra time and effort to the recovery process. Further, depending on what data was lost and how much time has passed since the backup took place, the data that is recovered may either be unusable or be so old that extra time and effort is needed to recreate the data that has not been protected since the last backup.

This legacy approach to backup fails to capitalize on the many inherent benefits that disk offers over tape from both a backup and recovery perspective. For example:

  • Back up your data continuously or nearly continuously, as Atempo Live Navigator does. This almost eliminates any possibility of data loss since data is backed up every 15 minutes (a minimal sketch of this near-CDP approach appears after this list).
  • Minimize network traffic while eliminating backup windows. The primary reason my prior employer had an FC SAN with the highest possible network throughput was not because any of its applications actually needed the bandwidth, save maybe one. The majority of the time, network utilization was in the range of 1 – 5%. It was only during backup windows that network throughput exceeded 30, 40 or even 50%.

CDP eliminates those backup windows since data is backed up continuously (or nearly continuously in the case of Live Navigator). While more network traffic occurs during the day, it only occurs when writes occur, so CDP can take advantage of the ample network bandwidth available throughout the day while also freeing up the bandwidth consumed by nightly backup windows.

  • Reduce your data stores even as you improve your recovery point objectives. One of the myths of CDP technologies is that they consume a lot more storage space than incremental, differential and full backups that are deduplicated. (I wrote about this myth a little over a year ago.) While they may consume a little more storage capacity, there is nothing preventing companies from pointing data protected by CDP technologies toward solutions that deduplicate data; in the case of some solutions like Atempo Live Navigator, the data is deduplicated as it is protected. So now you get continuous data protection and reduced data stores.
  • Find a restore point and recover your data. CDP solutions differ in their restore capabilities, but some CDP offerings allow users to select a recovery point from within the CDP solution and actually run the application from the CDP data store. It likely will not run as well or as fast as it does on your production storage, but this recovery option does not exist at all in traditional backup software.
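To make the near-CDP idea above concrete, here is a minimal sketch of the kind of loop such a product runs: every 15 minutes it scans a protected directory and only files whose content hash has changed are copied into a content-addressed repository, which gives you crude deduplication for free. The paths, interval and hashing scheme are illustrative; this is not how Atempo Live Navigator or any other product is actually implemented.

```python
import hashlib
import shutil
import time
from pathlib import Path

SOURCE = Path("/data/protected")      # hypothetical directory being protected
REPOSITORY = Path("/backup/store")    # hypothetical content-addressed repository
INTERVAL = 15 * 60                    # near-CDP interval: every 15 minutes

def content_hash(path: Path) -> str:
    """Hash a file's contents so identical data is stored only once."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def protect_once(catalog: dict) -> None:
    """Copy only files whose contents changed since the last pass."""
    for path in SOURCE.rglob("*"):
        if not path.is_file():
            continue
        digest = content_hash(path)
        if catalog.get(path) == digest:
            continue                              # unchanged since last pass
        target = REPOSITORY / digest              # dedup: one copy per unique content
        if not target.exists():
            shutil.copy2(path, target)
        catalog[path] = digest                    # record the new recovery point

if __name__ == "__main__":
    REPOSITORY.mkdir(parents=True, exist_ok=True)
    catalog = {}                                  # real products persist this catalog
    while True:
        protect_once(catalog)
        time.sleep(INTERVAL)
```

A real near-CDP product also keeps a versioned catalog so that any recovery point can be restored; the in-memory dictionary above merely stands in for that metadata.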

So with new technologies like CDP well beyond the beta stage and being used extensively by cloud providers (R1Soft’s CDP solution is HUGE in the cloud provider market), it raises the question: why do traditional approaches to backup such as I described above persist? Here are three reasons as I see it:

  • It works. It may not be perfect, but using disk in lieu of tape as the primary backup target, in the form of either NAS or a VTL, has solved or is solving the backup problem as it exists for most organizations for now.
  • Deduplication has made disk more affordable than tape. Disk has come down in price but by itself still is not on par with tape from a cost perspective. But once deduplication is added into the equation and deduplication ratios of 6-7x or greater are achieved, disk becomes more economical than tape as a backup target (see the back-of-the-envelope math after this list).
  • Organizations are still wrapping their minds around disk’s potential. Backup has been such a big problem in organizations that many are simply taking a deep breath and enjoying the break before turning their focus to what to do next from a backup and recovery perspective. So for now they are just letting things be until circumstances (backup windows too short, out of disk space, need for major backup software upgrade, etc.) force them to make a change.
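The deduplication point is easy to sanity-check with back-of-the-envelope math. The per-GB prices below are hypothetical placeholders, not quotes from any vendor, but they show how a 6x ratio changes the comparison:

```python
# Hypothetical per-GB prices chosen only to illustrate the effect of deduplication;
# substitute real quotes before drawing any conclusions.
disk_cost_per_gb = 0.50     # raw disk capacity (hypothetical)
tape_cost_per_gb = 0.10     # tape media plus amortized drive/library cost (hypothetical)
dedupe_ratio = 6            # 6x reduction, the low end of the range cited above

protected_data_gb = 10_000  # 10 TB of backup data (hypothetical)

# Without deduplication, disk needs the full 10 TB of capacity.
raw_disk_cost = protected_data_gb * disk_cost_per_gb                        # $5,000
tape_cost = protected_data_gb * tape_cost_per_gb                            # $1,000

# With 6x deduplication, disk only needs about 1.67 TB of physical capacity.
deduped_disk_cost = (protected_data_gb / dedupe_ratio) * disk_cost_per_gb   # ~$833

print(f"raw disk: ${raw_disk_cost:,.0f}, tape: ${tape_cost:,.0f}, "
      f"deduplicated disk: ${deduped_disk_cost:,.0f}")
```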

It is for these reasons that I believe the traditional approach to backup has persisted to date. But it is also why the traditional approach to backup is probably on its last legs in its current role. As companies understand what they can do with disk, and as virtualization fundamentally changes how they need to back up and recover their VMs, organizations will have no choice but to select data protection technologies like CDP that are better positioned to provide the functionality they need going forward.




Compellent CEO Phil Soran Talks about Its Glorious Past, Its Uncomfortable Present and What’s Next

This week I am spending a couple of days at Compellent’s annual C-Drive conference in Minneapolis, MN, where about 500 users, value added resellers (VARs) and Compellent sales reps are in attendance. Since a couple of years have passed since I attended the last one, I thought I would make the 6-hour drive from Omaha to Minneapolis to catch up on the latest goings-on at Compellent and gain some insight into how they plan to recover after their latest earnings stumble.

Ever since Compellent went public a few years ago, it has arguably been a shining star in the storage space, watching its sales and earnings grow for nine consecutive quarters. But after its latest Q1 2010 financial misstep, it was obvious leading up to C-Drive and during my time here that the focus of everyone associated with Compellent was to put this latest quarter behind them.

Suddenly Compellent could not do enough to make all of the analysts, bloggers and reporters happy in the hope that they would put a positive spin on everything going on at Compellent so this past quarter remained firmly planted in the past. A number of analysts, bloggers and reporters told me that Compellent bent over backwards to accommodate them in whatever way possible to make sure they attended this C-Drive and to demonstrate to them that this last quarter was not likely to be repeated.

In Compellent’s defense, it does have a lot to be positive about. Over breakfast yesterday I spoke with a financial analyst from a Chicago-based firm and we were both puzzling over why Compellent struggled in what should clearly be a market that is favorable to it. The server virtualization market is expanding, which should drive the need for external storage systems like Compellent’s, while cloud service providers (of which there were a number at C-Drive) could not say enough positive things about their experiences with Compellent storage.

However the latest fiscal quarter was clearly weighing on Compellent CEO Phil Soran’s mind as his opening remarks at the customer and analyst portion of C-Drive on Wednesday morning focused on how it was impacting Compellent’s employees. The tactical response has brought, in his words, “a sense of energy” among the employees. He described the company’s employees as having a “chip on their shoulder” and that they were determined to prove that the latest quarter was only an aberration and not the start of a downturn for the company long term.

He pointed out that many of the features that the Compellent Storage Center introduced years ago are now becoming must-have features on midrange array storage systems across the industry. In 2003 it started shipping thin provisioning (over 50% of all midrange arrays now support thin provisioning according to DCIG’s research); in 2005 it introduced Automated Tiered Storage, which has become one of Compellent’s defining features; and in 2008 it introduced Live Volume, a business continuity automation feature.

He then went on to expound upon the five features that are core to Compellent’s design and help to differentiate it from the industry. Its granular data management laid the foundation for it to do automated, block-level storage tiering and he sees it as instrumental as it looks to do deduplication of data on primary storage in the coming years.

Compellent also eliminated the concept of “fork lift” upgrades, and Soran pointed to customers like InstantWeb and Ohio State that have taken advantage of this. These two organizations have had Compellent storage systems for six and seven years respectively and are still upgrading those systems to current features without needing to replace the underlying storage system. Ohio State just added SSDs to its Storage Center, while InstantWeb has gone through six generations of server technology using the same Compellent system the entire time.

Soran also listed Compellent’s data movement engine, open systems hardware that facilitates the rapid introduction of new storage and storage networking technologies and an integrated software architecture as key reasons why it has the right foundation in place for future growth.

In the near term, he sees Compellent’s storage virtualization, thin provisioning, automated tiered storage, Instant Replay and Live Volume features as what users are looking for, and they should contribute to Compellent rebounding quickly.

Going forward, Compellent is increasing its investment in R&D so it can more quickly innovate and deliver on new features that the data center of tomorrow will need. He saw the ability for Compellent to provide an even more scalable architecture, do data life cycle management, data deduplication of primary storage and management of unstructured data as being high priorities for Compellent. However he also commented that he does not expect data deduplication to provide huge capacity savings on Compellent systems, adding at most a 20% savings in storage capacity.

So where does this leave Compellent as a company going forward? Overall, I’d say it is in good shape and this last quarter was more than likely just a blip on the radar screen. Yes, it has been a humbling experience for them but who doesn’t need a little humbling from time to time?

I look at the key areas that keep customers loyal for long periods of time – high satisfaction rates, use of advanced software technologies like replication and positive case studies – as reasons for encouragement. In this respect, Compellent is firing on all cylinders. It reports a 95% user satisfaction rate, a 68% adoption rate of its replication software among its customers and users willing and ready to speak out about the benefits of Compellent’s technology.

If anything is hurting Compellent, it is that it has historically tended to shun the outside analysts, bloggers and reporters who can help raise awareness about the benefits of its technology. But based upon its above-average outreach to this audience at this event, even that mindset may be changing.




HDS Takes the ‘White Gloves’ Off as it Launches New Strategy to Expedite and Simplify Data Migrations

The relationship that Hitachi Data Systems (HDS) struck with InMage Systems about three months ago had a number of immediate ramifications. It provided HDS with a new heterogeneous replication option that it could use across its own storage systems; it made HDS more competitive in customer accounts where it did not traditionally have a foothold; and it provided an entrée for HDS into next generation data protection technologies for disaster recovery. So while today’s announcement that HDS is formally introducing InMage as Hitachi Dynamic Replicator (HDR) is no great surprise, it does create some interesting new opportunities for HDS and its customers going forward.

The need that HDS (or any storage system provider) has for a product like InMage is immediately evident to anyone who has worked in an enterprise data center. Whether the customer wants to upgrade from a vendor’s older model to a newer storage system or needs to switch to another vendor’s storage system, the data migration process is often painful and tedious.
 
HDS’s technical product manager, Rudy Castillo, described the planning and labor that both HDS and its customers need to go through in completing a data migration from one storage system to another (same or competitive vendor) as a “white gloves process.” These data migrations are often intrusive to the customer environment: they can take long periods of time to plan and execute, plus they can require that the customer schedule application downtime.

The availability of HDR should immediately reduce many of the issues that HDS and its customers face when performing these data migrations. Since InMage is already a proven product in many Linux, UNIX and Windows customer accounts, HDS can immediately provide references to prospective clients as to the viability of the product.

This is part of Castillo’s vision that HDS can transform the data migration process from a “white gloves” operation into a “low touch” model that reduces the amount of effort, time and money that its customers have to devote to the process. Since the product has just been announced, Castillo is still working through the numbers in terms of what cost and time savings customers might expect, but he says, “The savings will be an order of magnitude greater than before.”

Already he is receiving internal requests from HDS’s professional services to use this product in current engagements as well as situations where HDS is looking to take out competitive products. The internal demand within HDS’s professional services for this solution has been so great that it has surpassed even his expectations such that he expects HDR to contribute significantly to HDS’s revenue.

However using HDR for data migrations is only part of his vision. The original intent and design of the underlying InMage software is for disaster recovery (DR) which, to a certain degree, puts it in direct competition with other enterprise data protection solutions already available from HDS – whether it is HDS’s own Shadow Image and TrueCopy Remote Replication or its Hitachi Data Protection Suite (HDPS) powered by CommVault.

However he does not see this as a problem short or long term. In the near term, InMage will be re-badged with HDS collateral put around it so it can be used for the purposes described above.

Longer term, he has more ambitious plans for this software. He does not see HDS customers abandoning any of the current HDS data protection solutions they are using now. Rather he sees HDR as complementing these solutions in four important ways.

  • First, because HDR takes full copies of production data and can take consistent snapshots of this data, HDR customers can present these snapshots to the backup software so they can do off-host backups of this data.
  • Second, organizations do not want to abandon their enterprise backup software but right now HDR is not able to be managed by either HDPS or Symantec NetBackup. Castillo would like to leverage his existing relationships with these two data protection providers to add centralized management support for HDR.
  • Third, HDS professional services periodically need to take storage systems off-line to perform storage system maintenance or upgrades. Using HDR, they can failover applications from one storage system to another and accomplish this maintenance or upgrades non-disruptively.
  • Finally, HDS sees customers spend a lot of money on DR solutions but then either only rarely or never test these solutions. Using HDR, they can now test their DR plans at any time including physical to virtual test scenarios.

HDR is a powerful piece of software that HDS has just added to its solutions portfolio. It makes it easier for customers to affordably stay with HDS storage systems, and it lets them just as affordably and non-disruptively move to HDS storage systems from competitive solutions.

The real power of HDR does not lie in the tactical data migration problems that it solves (though enterprise customers will surely love that functionality). Smart organizations will quickly recognize HDR’s potential to solve a multitude of other storage management and DR challenges that they regularly face. As they gain this awareness, HDR takes on the role of a piece of software that is both tactical and strategic to the overall management of their data centers.




FalconStor Soars into VMworld with a Passion

This week at VMworld FalconStor announced a unique product targeted at VMware disaster recovery: the Network Storage Server (NSS) Virtual Appliance. Jerome and I had a long conversation with FalconStor and I can tell you this is game-changing technology from FalconStor.

First, a bit of background: back in May VMware introduced a product called Site Recovery Manager (SRM). Many opinions have been voiced about this product, from “Finally we have something to perform real DR with on VMware” to “It’s extremely difficult to deploy and manage.” However, VMware did something very intelligent in its inception of SRM, knowing that it cannot be all things to all people. The replication portion of SRM relies on the back-end storage devices already in use to perform those functions, using a set of API tools called a Storage Replication Adapter, or SRA, to integrate that replication directly into SRM’s functionality.
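Conceptually, an SRA is a thin piece of vendor code that SRM calls to discover replicated storage and to drive failover on the vendor’s array. The interface below is purely illustrative pseudocode of that division of labor; it does not use the actual command names or packaging of VMware’s SRA specification.

```python
from abc import ABC, abstractmethod
from typing import List

class StorageReplicationAdapter(ABC):
    """Illustrative only: the real VMware SRA spec defines its own commands and
    packaging. The point is that SRM owns orchestration while the vendor's
    adapter owns the array-specific replication operations."""

    @abstractmethod
    def discover_replicated_devices(self) -> List[str]:
        """Report which LUNs/volumes on this array are replicated to the DR site."""

    @abstractmethod
    def test_failover(self, devices: List[str]) -> None:
        """Surface writable copies at the DR site without breaking replication."""

    @abstractmethod
    def failover(self, devices: List[str]) -> None:
        """Promote the DR-site copies for a real recovery."""

class HypotheticalArrayAdapter(StorageReplicationAdapter):
    """A stand-in for what a storage vendor would supply for its own arrays."""

    def discover_replicated_devices(self) -> List[str]:
        return ["vmfs-datastore-01", "vmfs-datastore-02"]

    def test_failover(self, devices: List[str]) -> None:
        for device in devices:
            print(f"presenting test copy of {device} at DR site")

    def failover(self, devices: List[str]) -> None:
        for device in devices:
            print(f"promoting replica of {device} at DR site")

# SRM-side orchestration (greatly simplified): the recovery plan calls the adapter.
def run_recovery_plan(adapter: StorageReplicationAdapter, test: bool = True) -> None:
    devices = adapter.discover_replicated_devices()
    if test:
        adapter.test_failover(devices)
    else:
        adapter.failover(devices)

if __name__ == "__main__":
    run_recovery_plan(HypotheticalArrayAdapter(), test=True)
```

The design choice is the interesting part: VMware orchestrates the recovery plan while each storage vendor supplies the array-specific piece, which is why certification of an SRA, rather than any particular replication engine, is the gate a vendor like FalconStor has to pass.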

There are a few things to keep in mind with this scenario. In the traditional storage-iron replication process, you typically need similar or identical storage systems on both ends of the pipe, which drives up cost: under-utilized storage at the remote location; difficult-to-manage FCIP gateways; no real bandwidth shaping or optimization; and the need to replicate a block-for-block copy versus replicating only the actual data. These factors will keep SRM out of the hands of SMB and some mid-market customers simply due to the overall cost and complexity of the solution.

Enter FalconStor with its NSS Virtual Appliance; FalconStor is the first software vendor to receive this certification from VMware in the SRM landscape. FalconStor brings a very open approach to this solution. By placing a FalconStor NSS appliance between the ESX servers and the storage farm, the solution becomes truly hardware independent, as the FalconStor appliance can virtualize some or all of the storage on the back end.

Placing the FalconStor appliance into the virtual layer between the ESX servers and the storage farm(s) allows a customer to realize the following benefits:

Flexibility

  • Seamless integration with Site Recovery Manager
  • Take snapshots down to the application and file system level on the VM guest
  • Support for all major applications
  • Simple and straightforward to deploy and manage via a central management console that controls all aspects

Proven Technology

  • Usage of FalconStor’s robust replication technology
  • Offering bandwidth shaping
  • Deployment of micro-scan to only replicate the data that has changed, as well as compressing that data before it leaves the source location

These benefits allow a customer, regardless of size, to replicate data from a Tier-1 Fibre Channel storage system to a low-cost iSCSI storage array (or any storage array in between) without missing a beat, since the entire storage landscape has been virtualized behind the FalconStor appliances. This ensures the company can keep its costs low by deploying a recovery solution at the remote location running at a diminished capacity versus having to deploy similar or identical storage hardware and complex transport mechanisms.

The integration of FalconStor with VMware raises the bar and moves the industry closer to the virtualization utopia that has been promised by many vendors but has yet to be delivered. While FalconStor still needs to deal with the stigma that remains around virtualizing enterprise storage systems, this announcement represents a giant step in the right direction for the industry and, more importantly, for the end-user community as a whole.




HP Data Protector Upgrade Evolutionary, not Revolutionary; Subtly Concedes that EMC Still Winning Best of Breed Battle for Storage in HP Shops

After receiving a briefing on today’s announcement of HP Data Protector’s enhanced integration with VMware, one has to wonder why HP is making any noise about this new functionality at all. While Data Protector’s enhanced integration with VMware virtual machines (VMs) provides some nice integration and recovery features for HP’s EVA storage system as well as EMC’s DMX storage system, it appears all HP did was take a feature it already offers for physical machines and make it available for VMware VMs as well. Further, we saw little in this announcement that would make us think Data Protector is well suited to provide improved levels of recovery for companies that are anything but primarily homogeneous HP shops.

For those of you unfamiliar with Data Protector, it is HP’s enterprise backup software that receives little attention outside of HP shops but which, according to HP, has 22,000 customers. Yet HP Data Protector customers are like customers everywhere in that they are transforming to a virtual environment. As part of this transformation, they are increasing their use of server virtualization software such as VMware.

While Data Protector has for some time supported VMware’s Consolidated Backup (VCB), HP is under increasing pressure from its customer base to minimize or eliminate downtime of VMware virtual machines (VMs) and provide immediate recoveries. So as part of this release, HP expanded the functionality of Data Protector’s agents to take application aware snapshots on its StorageWorks Enterprise Virtual Array (EVA) storage systems.

Application aware snapshots ensure that the applications running on individual VMs retain their data integrity while providing companies near-real-time levels of recoverability, hence the Zero Downtime Backup and Instant Recovery (ZDBIR) name. The Data Protector agent on the VM coordinates the creation of the snapshot on the EVA by first quiescing the application on the VM, putting the application data in an application-consistent state by flushing all buffers, initiating the snapshot on the EVA and then resuming application processing.
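That sequence – quiesce, flush, snapshot, resume – is the heart of any application-aware snapshot scheme, whether the array is an EVA or something else. The sketch below is a generic illustration of the ordering and error handling involved; the function names are invented and do not correspond to Data Protector’s or the EVA’s actual interfaces.

```python
# Generic application-aware snapshot sequence. All function bodies are stubs with
# invented names; real agents call the application's and the array's own APIs.

def quiesce(app: str) -> None:
    """Pause new writes and flush the application's buffers to disk."""
    print(f"quiescing {app} and flushing buffers")

def resume(app: str) -> None:
    print(f"resuming {app}")

def array_snapshot(volume: str) -> str:
    """Ask the storage array for a point-in-time copy; this takes seconds at most."""
    print(f"creating array snapshot of {volume}")
    return f"{volume}-snapshot"

def application_aware_snapshot(app: str, volume: str) -> str:
    quiesce(app)
    try:
        # The snapshot is taken while the data on disk is application consistent.
        return array_snapshot(volume)
    finally:
        # Resume no matter what, so the production application is never left paused.
        resume(app)

if __name__ == "__main__":
    snapshot_id = application_aware_snapshot("exchange-vm-03", "eva-vdisk-12")
    print(f"{snapshot_id} can now be used for off-host backup or instant recovery")
```

The try/finally ordering is the important detail: the application is resumed even if the snapshot fails, so production is never left paused.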

This all occurs almost instantly (seconds or less) while offloading the task of creating the application backup from the VM and the server hosting it to the EVA. Because the creation of the snapshot of the VM depends on the underlying storage system and essentially ignores the server virtualization hypervisor, it can be used with any server virtualization software, including Microsoft Windows Server 2008 Hyper-V and Citrix XenServer.

Yet the real question is how big a step forward this is for Data Protector and why this press release is even significant. Data Protector has for years supported similar functionality on physical machines and has now repackaged it to support VMs. Including it in the press release is apparently just a means for HP to create some noise as VMworld begins, even though the new functionality it adds for protection and recovery of VMware VMs is relatively minimal.

Further, it only currently supports HP XP, EMC DMX, and HP EVA storage systems. So unless you are predominantly an HP shop or planning to become one, not only is this announcement not relevant, it may only serve to drive potential customers away from entertaining HP Data Protector as their preferred enterprise data protection software. If anything, the fact that it continues to support EMC storage systems is a sign of weakness on HP’s part. This subtly signals that EMC is still winning more battles than not when HP customers separate their storage buying decision from their server buying decisions.

Every vendor is looking to make some noise going to VMworld and show how they can help support your virtualized server environment but this announcement about Data Protector borders somewhat on the ridiculous. Repackaging and remarketing a feature that already exists for physical machines to make it look like it is well suited for VMs is something of a joke. About the only new thing it does that it did not do before is recognize and protect the VMDK files associated with individual VMs. But how big a deal is this? In our minds, not a big deal at all and the fact that it is only supporting EMC storage systems besides its own indicates that it still has a ways to go in winning the storage battle against EMC even with its own customers.




Did Xiotech Over-Engineer Its Emprise Storage System? Insights from Day 2 at Chicago Storage Decisions

Yesterday I completed my quick road trip to Chicago to attend TechTarget’s annual spring Storage Decisions conference, returning home last night. Here are some of the highlights from day 2.

I started out the day with an hour-long briefing with Xiotech’s CTO Stephen J Sicola and Storage Architect Peter Selin. Xiotech has been talking up a storm about the ground-shaking importance of its new Intelligent Storage Elements (ISE) ever since Xiotech announced it at Storage Networking World about a month ago. However, Xiotech and I had not had a chance to connect for me to take a close look at its architecture, so Stephen and Peter spent some time talking me through it.

One of the factoids I found most intriguing was the history (at least as Steve tells it) of why Xiotech (and Seagate behind the scenes) felt obligated to go back to the basics in designing the ISE that underlies its new Emprise storage system. One of the more interesting aspects to the story was the history of placing disk drives into storage systems. Apparently, when disk drives were first placed into storage systems, they were not designed for vertical insertion – only horizontal. So when disk drives were placed vertically in storage systems to optimize rack space, they started failing more frequently.

Another key problem had to do with mounting and cooling the disk drives. Again, disk drives were designed for mounting in stable (non-vibrating) racks as standalone units with ample air flow for cooling. However, when putting tens or hundreds of disk drives into a rack, not only is air flow around the disk drives reduced, but the vibration of all of these spinning disk drives in the same rack is amplified, leading to higher disk drive failure rates. So disk storage system vendors have compensated over the years by making tweaks to their firmware and controllers to offset these variances and minimize the impact of failures.

Xiotech’s Sicola felt it was time to go back to the drawing board and re-examine the design of everything from the disk drive firmware to how the drives were mounted in storage systems to the controllers managing them. He started this process nearly six years ago and the result is the ISE found in Xiotech’s Emprise storage systems. Key changes include more stable mounts for the disk drives and replacing the drives’ native firmware with Xiotech’s own firmware for more proactive monitoring and for transmitting storage system reports back to Xiotech.

Though there were many other changes, the sending of activity reports to Xiotech caught my attention because Xiotech will now monitor activity on your systems and not just notify companies when drives fail, but warn them when it detects abnormal activity on their Emprise system that may contribute to degraded application performance. For instance, if a company places a high-performance Oracle database on SATA disk drives, the reports sent back to Xiotech should detect this activity and Xiotech should in turn warn the company that not only should its Oracle database not reside on SATA disk drives, but that this level of activity could lead to degraded performance and to the SATA disk drives on the system failing prematurely.
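The kind of rule Sicola describes is easy to picture in code. The sketch below shows one hypothetical check of that sort – flag any SATA-backed volume whose sustained per-drive IOPS exceed a threshold – with made-up numbers; Xiotech’s actual monitoring logic, report format and thresholds are proprietary and certainly more sophisticated.

```python
# Hypothetical "phone home" analysis: flag workloads that look too hot for the
# tier they sit on. Thresholds and report structure are invented for illustration.

SATA_SUSTAINED_IOPS_LIMIT = 150   # rough, hypothetical ceiling per SATA drive

def check_tier_placement(report: dict) -> list:
    """Return warnings for volumes whose activity exceeds their tier's comfort zone."""
    warnings = []
    for volume in report["volumes"]:
        if volume["tier"] == "SATA":
            per_drive_iops = volume["avg_iops"] / volume["drive_count"]
            if per_drive_iops > SATA_SUSTAINED_IOPS_LIMIT:
                warnings.append(
                    f"{volume['name']}: ~{per_drive_iops:.0f} IOPS per SATA drive; "
                    f"consider a faster tier before performance degrades or the "
                    f"drives wear out prematurely"
                )
    return warnings

if __name__ == "__main__":
    sample_report = {   # a made-up daily activity report from one system
        "volumes": [
            {"name": "oracle-prod", "tier": "SATA", "avg_iops": 2400, "drive_count": 8},
            {"name": "file-share", "tier": "SATA", "avg_iops": 300, "drive_count": 8},
        ]
    }
    for warning in check_tier_placement(sample_report):
        print(warning)
```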

So what do all these new features mean for users short and long term? Because Xiotech makes the Emprise more resilient, it has extended the warranty on these systems from three to five years while keeping upfront costs comparable to other systems. This should allow companies to depreciate these systems over five years rather than three. This can lower quarterly depreciation costs and, since the underlying disk drives are theoretically more reliable, there is a lower chance of disk drives failing and hence less risk to your applications.
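The depreciation argument is simple arithmetic. Using a hypothetical $120,000 system and straight-line depreciation with no salvage value (actual methods and numbers will differ):

```python
# Hypothetical purchase price; straight-line depreciation with no salvage value.
purchase_price = 120_000

quarterly_3yr = purchase_price / (3 * 4)   # $10,000 per quarter on a 3-year schedule
quarterly_5yr = purchase_price / (5 * 4)   # $6,000 per quarter on a 5-year schedule

# The flip side: book value still carried after 3 years on a 5-year schedule.
book_value_after_3yrs = purchase_price - quarterly_5yr * 12   # $48,000 remaining

print(f"3-year schedule: ${quarterly_3yr:,.0f}/quarter")
print(f"5-year schedule: ${quarterly_5yr:,.0f}/quarter")
print(f"book value after 3 years on the 5-year schedule: ${book_value_after_3yrs:,.0f}")
```

That roughly $48,000 still on the books at the three-year mark is exactly the exposure weighed below against the benefit of lower quarterly depreciation.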

The main question companies need to ask themselves about Emprise is not about its stability and reliability but whether Xiotech over-engineered this system. Five years is a long time in the technology industry and can span as many as three generations of technology improvements (assuming new technology is introduced every 18 months). This can leave a company with book value on a three-year-old storage system should the need arise to upgrade it to more current technology, which could require the company to take a financial hit on the books even though the Emprise is still a viable storage system. Overall, though, Xiotech’s Emprise should give companies pause about their current vendor’s storage system and prompt them to think more deeply about how their current storage systems are architected and whether going from a three- to a five-year warranty makes sense.

My next meeting was with Omneon’s Director of Storage Marketing, Dave Frederick. Omneon is a 10-year-old, $120 million storage company primarily dedicated to providing storage for the broadcasting industry, so I asked Dave about his company’s sudden interest in attending Storage Decisions. He said that more Fortune 500 and Fortune 1000 companies are now broadcasting video internally and this is creating new demand for storage systems specifically designed for the broadcasting industry.

So I queried Dave further to understand how high-transaction environments differ from broadcasting, since both call for near-100% availability. Dave explained that there are two fundamental differences. Broadcasting accesses data sequentially while high-transaction environments tend to access data randomly. The larger difference, however, is that if there are pauses in high-transaction environments (even milliseconds), the transaction can be resent. This is not so in broadcasting. If even one frame is missed (30 frames are sent every second), you don’t get a second chance, and those types of misses (called black spaces) result in missed SLAs and lost revenue for broadcasting companies.

It is in this way that Omneon’s MediaDeck Integrated Media Server storage differentiates itself from competitive products. Though it uses a grid storage architecture, it also includes an out-of-band component that verifies each frame as it is encoded and decoded so that when a broadcast is sent out, it streams the video without black spaces.
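Out-of-band frame verification can be pictured as a simple checksum pipeline: record a digest for every frame at encode time, then confirm each frame against that digest before it is streamed, so a corrupted or missing frame is caught before it becomes a black space on air. The sketch below is a generic illustration of that idea, not Omneon’s implementation.

```python
import hashlib

# Illustrative only: frames are byte strings here; a real media server verifies
# frames as they move through encode, storage and playout hardware.

def frame_digest(frame: bytes) -> str:
    return hashlib.sha256(frame).hexdigest()

def record_digests(frames: list) -> list:
    """At encode time, store a digest for every frame (the out-of-band metadata)."""
    return [frame_digest(frame) for frame in frames]

def verify_before_playout(frames: list, digests: list) -> list:
    """Return the indices of frames that are missing or corrupted."""
    bad_frames = []
    for index, (frame, expected) in enumerate(zip(frames, digests)):
        if frame is None or frame_digest(frame) != expected:
            bad_frames.append(index)   # would be a black space if streamed as-is
    return bad_frames

if __name__ == "__main__":
    encoded = [b"frame-0", b"frame-1", b"frame-2"]
    digests = record_digests(encoded)
    stored = [b"frame-0", b"frame-X", b"frame-2"]   # frame 1 corrupted in storage
    print("frames needing repair before playout:", verify_before_playout(stored, digests))
```

The point is that the check happens out of band, so playout never waits on verification.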

Finally, my other notable meeting of the day was lunch with representatives from the LTO consortium: Quantum‘s Product Marketing Manager, Tom Hammond; IBM‘s Senior Program Manager, Bruce Master; and HP‘s Product Marketing Manager, Rick Sellers. Most of our conversation focused on how the use of tape is changing and on the fact that, while disk is becoming the primary target for backup, companies still need to exercise some caution about using disk exclusively for backup. All of us were aware of recent examples where companies had their primary and secondary sites – and, in one case, even a tertiary DR site – affected by disasters, requiring the use of portable media to recover their environment at still another site.




Compellent Users Get Virtualization; Day 2 of Compellent’s Annual C-Drive User Conference

I just returned home from Compellent’s C-Drive user conference and have some final thoughts and experiences to share from my stay.


One thing that struck me was that Compellent (NYSE: CML) users really understand what a game-changing technology virtualization is. I sat through 2 or 3 presentations during the two days of the conference (May 7 – 8) and also met with a fair number of users (~10) between sessions, over meals and at the evening events, and all of them were pretty stoked about the capabilities that virtualization in general, and Compellent specifically, delivers.


Compellent’s Data Progression (automated tiered storage) was the virtualization feature that its users spoke most highly about. One user from Palm Beach I spoke with over drinks said that he has been using the Data Progression feature for a couple of years. He actually described it as “fun” to watch the Compellent Storage Center migrate infrequently accessed blocks of data to lower-cost tiers of disk.


Compellent’s Dynamic Capacity (thin provisioning) feature was given a lot of attention at the user conference but none of the users I spoke to seemed to be using it – or at least it never came up in conversations that I had with them. It might just be that they assumed I knew they were using it since the Dynamic Capacity feature is part of Compellent’s Storage Center core software licensing and, hence, didn’t feel obligated to bring it up.


Replication was clearly on the mind of almost every user whether they were presenting at the show or merely talking with me privately. It seems a fair number of its users are taking advantage of Compellent’s Storage Center replication functions, Remote Instant Replay (asynchronous replication) and Data Instant Replay (snapshots), in some way even though these software features are add-on licenses to the core software. This trend confirms my suspicions that fast recoveries are becoming more important for companies and the end-users they support.


However not all of the news around replication was positive. Most users had no problems using Compellent to replicate data locally or remotely, but when it came to providing consistent, recoverable snapshots in conjunction with applications, the news was somewhat mixed. During one user panel, Bill Moss, IT Director for Moss Construction Managers in Ft. Lauderdale, FL, described replicating and recovering Exchange data as a “nightmare”. He had to work with Microsoft and come up with two pages of procedures (some of this content apparently appears on Microsoft’s website) to recover public folders within Exchange. In looking around the audience and gauging their reaction, it appeared that Moss’s struggles with protecting and recovering Exchange are not unique.


Also at the conference, I had the opportunity to meet with Bruce Kornfeld, Compellent’s VP of Marketing, and Larry Aszmann, Compellent’s CTO. The main item Bruce and I discussed was how Compellent licenses its software. What distinguishes Compellent from most of its competitors is that it licenses its software by spindle (per disk drive). Its core licensing includes Dynamic Capacity (thin provisioning), LUN security, boot from SAN, some base level reporting features and email home support. This licensing is based on a single controller with one shelf of 16 disk drives. As companies grow their Compellent system, Compellent sells disk drives and licensing in what it terms as “8-packs”. Additional software features that users can optionally license with larger systems include its Data Progression, Data Instant Replay, Remote Instant Replay and Fast Track features.
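For readers trying to picture how this per-spindle model scales, here is a trivial sketch of how drive-count growth translates into license units under the scheme Kornfeld described; the growth path is illustrative and no pricing is implied.

```python
import math

BASE_DRIVES = 16        # core license covers one shelf of 16 drives
PACK_SIZE = 8           # additional drives and licenses are sold in "8-packs"

def eight_packs_needed(total_drives: int) -> int:
    """How many 8-packs a system with total_drives spindles requires beyond the base."""
    extra = max(0, total_drives - BASE_DRIVES)
    return math.ceil(extra / PACK_SIZE)

# Illustrative growth path: a system that grows from 16 to 48 drives.
for drives in (16, 24, 40, 48):
    print(f"{drives} drives -> base license + {eight_packs_needed(drives)} eight-pack(s)")
```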


In the brief meeting I had with Compellent’s CTO Larry Aszmann just before I exited the conference, I gleaned two pieces of new information regarding Compellent’s manufacturing process and its commitment to its VARs. In regard to manufacturing, Compellent primarily uses off-the-shelf components in the construction of its systems. This removes from Compellent many of the traditional manufacturing concerns that other storage system providers need to manage.


Aszmann also said that Compellent sells 100% of its products through the channel and has no plans to go direct. He has seen other storage system vendors go direct, which ultimately undermined their relationships with their VARs. Because Compellent does not sell direct, VARs are much more transparent with Compellent about their business dealings since they are less worried about Compellent cutting them out of deals later on. Aszmann says this level of transparency is helping it as a publicly traded company because it can remain very accurate (about 90% on target) with each quarter’s sales forecasts.




The End of the Internet Free-For-All? Network Bandwidth Limitations Loom for Businesses

Should there be a “Use more, pay more” fee for Internet use? Should the cost of sending a text message to Grandma about junior’s birthday party be the same as the cost of sending the entire video of junior’s birthday party? How much of the Internet is a person or company entitled to? These were some of the questions that CIO magazine‘s Gary Beach recently attempted to address in a video commentary, Net Neutrality: Why the Internet Can’t Remain Free, which appeared on CIO magazine’s website.

The Internet is now a pervasive and ubiquitous tool for businesses and consumers alike. In whatever form it is used – blogging, email, video or web browsing – the Internet now affects everyone in some way. In fact, it is hard to think of life as we know it today without Internet access, though we are just a decade removed from having little or no Internet access at all.

The trouble is that more data means more Internet traffic, and with it come concerns about congestion on the Internet. It is not just the amount of data; it is the type of data traversing the Internet. Streaming video is first and foremost among the reasons why Internet congestion appears inevitable. The Internet can probably handle text and even image-based website traffic for years to come. But as NCAA basketball games, American Idol, the Olympics and other live events are transmitted over the Internet at the same time, these live transmissions have ramifications that go well beyond just poor video quality.

This type of traffic does not play to the strengths of the Internet’s underlying network infrastructure. Dropped packets require retransmissions. Retransmissions result in delays. Delays result in poor video quality, which creates frustrated users. However, the bigger question that Gary raises in the video is whose responsibility it is to pay for the infrastructure to fix this. Should the NCAA and America’s colleges that stream the videos pay extra, or the people watching them? And should that traffic run across the same Internet backbone that was designed to handle email and web browsing?

This is not as far-fetched as it sounds. A bill entitled the Communications Opportunity, Promotion and Enhancement (COPE) Act is already working its way through Capitol Hill, with companies like AT&T, Verizon and Comcast on one side of the aisle and Google and Yahoo on the other. The telecoms are arguing for a “Use more, pay more” model while Google and Yahoo are calling for a “one size Internet fits all”. Though no clear winner has yet emerged, expect a premium tier of Internet service to eventually be forced on businesses that send or receive data volumes that exceed specified limits.

So what does this mean for businesses? It is unclear at this point, but it should clearly serve as a warning for businesses to get their collective act together and determine what is acceptable and unacceptable in terms of the content they want their employees viewing over the Internet while at work, as well as what content the business itself makes available on the Internet. Whether employees are watching NCAA basketball games or your company makes panoramic views of its parking lot available over the Internet, the premium associated with sending or receiving this type of content over the Internet seems destined to go up, which may impact companies that use the Internet for backup, replication or business continuity in ways they least expect.




Compellent’s Data Instant Replay Can Dramatically Improve Application RTOs; How Do You Migrate Data to a Thinly Provisioned Volume?

Most IT staff already understand the differences between a recovery point objective (RPO) and a recovery time objective (RTO), and many companies use storage system snapshots to meet specific RPOs and deliver faster RTOs. Yet what is not always so clear is that the few seconds it takes to create a snapshot do not necessarily translate into a recovery time that is equally fast.

The rapid creation of a snapshot can create the illusion that application recoveries are just as fast. Since snapshots typically take only seconds to create on most storage systems, companies can incorrectly assume that application recoveries will also take only seconds or minutes to perform. The unpleasant reality is that snapshot recovery times vary by storage system and may not restore the data as quickly as IT staff expect or in the state that the application expects it.
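A quick back-of-the-envelope calculation illustrates the gap. The numbers below are purely hypothetical, but they show how a snapshot that completes in seconds can still leave an application facing a recovery measured in hours if the data has to be copied back to a production volume first.

```python
# Back-of-the-envelope RPO/RTO math with hypothetical numbers.
snapshot_interval_min = 15        # a snapshot is taken every 15 minutes
volume_size_gb = 500              # data that must be copied back to recover
copy_back_rate_mb_s = 100         # sustained restore throughput

worst_case_rpo_min = snapshot_interval_min
copy_back_rto_min = (volume_size_gb * 1024) / copy_back_rate_mb_s / 60

print(f"Worst-case RPO: {worst_case_rpo_min} minutes")
print(f"RTO if data must be copied back: ~{copy_back_rto_min:.0f} minutes")
# The snapshot itself took seconds, yet copying 500 GB back at 100 MB/s
# still takes roughly 85 minutes before the application can restart.
```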

For example, some storage systems only create snapshots in a read-only state. While the snapshot may be available immediately, if the application requires read/write access to the data in order to recover from a failure, IT staff still need to copy the data back to the production volume, thereby lengthening the application recovery time. Another potential problem is the requirement for the source volume to remain available in order to access the data, since snapshots typically copy only pointers to the source volume’s data rather than the data itself. If the volume those pointers reference is corrupted or inaccessible, so is the snapshot. Recovering the application in this circumstance could require a complete restore of the data from tape.
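As a rough sketch of the behavior described above (a generic model that assumes a simple pointer-based snapshot design, not any particular vendor's implementation), the following shows both risks: the snapshot holds only pointers into the source volume, so losing the source also loses the snapshot, and a read-only snapshot forces a full copy-back before the application can write again.

```python
# Generic, hypothetical model of a read-only, pointer-based snapshot.
# It is not any vendor's implementation; it only illustrates the two risks above.

class Volume:
    def __init__(self, blocks):
        self.blocks = dict(blocks)        # block number -> data
        self.online = True

class PointerSnapshot:
    """Read-only snapshot that stores pointers into the source volume."""
    def __init__(self, source):
        self.source = source
        self.block_numbers = list(source.blocks)   # pointers only, no data copied

    def read(self, block_number):
        if not self.source.online:
            raise IOError("source volume lost: snapshot is unreadable")
        return self.source.blocks[block_number]

    def restore_to(self, production):
        # Read/write access means copying every block back first,
        # which is what stretches the application recovery time.
        for n in self.block_numbers:
            production.blocks[n] = self.read(n)

source = Volume({0: "a", 1: "b"})
snap = PointerSnapshot(source)
source.online = False                 # the source volume is corrupted or lost...
try:
    snap.read(0)
except IOError as err:
    print(err)                        # ...and the snapshot goes with it
```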

Of course, not every storage system has such a radical disconnect between the time it takes to create a snapshot and the time it takes to recover from it. One such example is Compellent’s Storage Center storage system, where data recoveries are nearly as fast as the creation of the snapshot itself.

In discussing this feature with Compellent’s Senior Product Manager, Bob Fine, he explained that Compellent’s snapshot technology, called “Data Instant Replay”, is based on thin provisioning. Using thin provisioning, the amount of data that each snapshot or replay needs is minimal, plus it gives users a wide range of choices for recovery without the wait. They can do local read-only recoveries or local read-write recoveries and, because the amount of data on each snapshot is so small, they can configure another Compellent Storage Center storage system at a secondary location and do a full recovery of the data and application in seconds or minutes.
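A simplified sketch of how allocate-on-write replays stay small follows. It is a conceptual model only (the page-map approach is assumed for illustration, not a description of Data Instant Replay's internals): each replay freezes a map of pointers rather than copying data, so presenting a recovery point requires almost no data movement.

```python
# Conceptual sketch of thin, allocate-on-write replays; not Compellent's actual design.

class ThinVolume:
    def __init__(self):
        self.pages = {}          # only pages that have been written consume space
        self.replays = []        # each replay is a frozen map of page pointers

    def write(self, page, data):
        self.pages[page] = data

    def take_replay(self):
        # Freeze the current page map (pointers, not the data itself).
        self.replays.append(dict(self.pages))

    def view(self, replay_index):
        # A recovery view starts from the frozen pointers -- no bulk copy needed.
        return dict(self.replays[replay_index])

vol = ThinVolume()
vol.write(0, "v1")
vol.take_replay()            # replay 0 references one page
vol.write(0, "v2")
vol.write(1, "new")
vol.take_replay()            # replay 1 references only what exists now
print(vol.view(0))           # {0: 'v1'} -- an instant, space-efficient recovery point
```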

The more difficult question that companies need to answer is how they bring Compellent into their environment. The glitch with using storage systems based on thin provisioning is that the volume managers used by most operating systems do not recognize thinly provisioned storage system volumes. As a result, data migrated on a block-by-block basis from existing volumes to Compellent’s thinly provisioned volumes consumes just as much storage space as before. While the data migration does not negate the benefits of Compellent’s Storage Center Data Instant Replay, it does negate one of the other primary benefits that thin provisioning provides – the prevention of storage over-provisioning.
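A simple way to picture the migration issue (hypothetical numbers and logic): a raw block-for-block copy writes every block of the old, fully provisioned volume, including the empty ones, so the thin volume ends up fully allocated. Skipping zero-filled or unused blocks, or migrating at the file level, preserves the thin behavior.

```python
# Hypothetical illustration of why block-level migration fills a thin volume.

ZERO = b"\x00" * 512

# Old fully provisioned volume: 1,000 blocks, of which only 200 hold real data.
old_volume = [b"data" if i < 200 else ZERO for i in range(1000)]

def migrate(blocks, skip_empty):
    thin_allocation = 0
    for block in blocks:
        if skip_empty and block == ZERO:
            continue                  # the thin volume never allocates this block
        thin_allocation += 1          # every write allocates backing storage
    return thin_allocation

print(migrate(old_volume, skip_empty=False))   # 1000 -> thin provisioning negated
print(migrate(old_volume, skip_empty=True))    # 200  -> the volume stays thin
```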

As its name suggests, Compellent’s Storage Center is a compelling product for companies to evaluate, but they need to exercise caution in how they implement it and in what circumstances. Compellent’s Data Instant Replay feature should match the snapshot capabilities of other storage systems and exceed many of them in its recovery capabilities. However, Compellent’s use of thin provisioning to provide this feature should give companies pause to consider what types of application data they should migrate to Storage Center and which of thin provisioning’s other promised benefits it will not be able to deliver.




Continuity Software Offers Disaster Recovery Assessments for $15K; No Agents and a 48 Hour Turnaround

Someone once said to me that making changes in an enterprise mission-critical production data center storage area network (SAN) is akin to changing the wheels on a 747 as it is taking off. There is no room for error, you had better be damn good at what you are doing and you need at least three back-out plans in your back pocket should something go wrong. So what does this have to do with Continuity Software’s RecoverGuard? This is exactly the type of environment that Continuity Software says its RecoverGuard software can monitor in order to help companies recreate a replica of it at a disaster recovery site.

Individuals responsible for managing these types of environments know the challenges associated with keeping them operational. Every change is typically fraught with risk, with little or no margin for error, so most companies spend a great deal of time making sure nothing goes wrong. However, ensuring that upgrades and changes implemented on production servers, storage systems and Fibre Channel switches are replicated to the disaster recovery site is a different matter. It either never occurs or, at best, happens in a haphazard fashion.

The purpose of a disaster recovery site is simple: recover the production environment, ideally in hours or even minutes. However, the reality of keeping the disaster recovery environment in sync with the production environment is a totally different matter. To do so requires individuals to constantly monitor every change – documented and undocumented – in the production environment and then make the exact same change at the disaster recovery site within hours after the production change is made. If that sounds impractical, that’s because it is.

This inability to accurately recreate the production environment at the disaster recovery site is the reason that Continuity Software created RecoverGuard. RecoverGuard’s premise is that it monitors SAN hardware at the production and disaster recovery sites and gathers information about their configuration. Once gathered, it compares the information and reports on the discrepancies that exist between production and disaster recovery sites.
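Conceptually, this works like a configuration diff between the two sites. The sketch below is a simplified, hypothetical model of that approach; the hosts, fields and values are invented and it does not reflect RecoverGuard's actual data model or reports, only how mismatches surface as discrepancies.

```python
# Simplified, hypothetical comparison of production and DR SAN configurations.
# Host names, fields and values are invented for illustration.

production = {
    "db01":  {"lun_count": 8, "multipath": True, "hba_firmware": "4.2"},
    "app01": {"lun_count": 4, "multipath": True, "hba_firmware": "4.2"},
}
disaster_recovery = {
    "db01":  {"lun_count": 6, "multipath": True, "hba_firmware": "4.0"},
    # app01 has no counterpart at the DR site at all
}

def find_discrepancies(prod, dr):
    issues = []
    for host, settings in prod.items():
        if host not in dr:
            issues.append(f"{host}: missing at DR site")
            continue
        for key, value in settings.items():
            if dr[host].get(key) != value:
                issues.append(f"{host}: {key} differs (prod={value}, dr={dr[host].get(key)})")
    return issues

for issue in find_discrepancies(production, disaster_recovery):
    print(issue)
```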

However, RecoverGuard goes beyond that. It also generates a topology map so companies can understand the business impact of how out of whack the two environments actually are. This report documents in black and white what the environment actually looks like and, in many cases, illustrates how broken the DR site is when compared to the production site. From a business perspective, it helps companies understand that despite spending millions of dollars to build a disaster recovery site, they still cannot recover their production applications.

To show its value and demonstrate that it works, Continuity Software gives companies the opportunity to test drive RecoverGuard for $15,000. This price tag includes setting up and configuring RecoverGuard to gather the needed information from 30 servers in the SAN infrastructure. Avi Stone, Continuity Software’s Director of Marketing, says that RecoverGuard usually finishes gathering the needed information in about 48 hours; the Continuity Software assessment team then uses that data as the basis for a presentation to the organization’s management.

More remarkable is what happens next. It is no surprise that RecoverGuard always finds critical problems that impact the client’s ability to recover. What is unusual is that Stone claims Continuity Software has a 100% conversion rate of clients who go from performing the assessment to buying the RecoverGuard software. Of course, a comparable assessment performed by professional services from EMC or Symantec would cost around $150,000, and Continuity Software claims it would be neither as accurate nor as fast as its own $15,000 assessment. Seen in that light, it is not a shock that companies are willing to lay out the $60K minimum to have RecoverGuard continually produce these reports.

Continuity Software has uncovered an unfilled niche at the very high end of the enterprise data protection market. While RecoverGuard still has some holes to fill (it does not provide mainframe support, and monitoring replication pairs is still on its roadmap), for a product that has only been on the market for seven months it already has some notable success stories to share of helping companies maximize the investments they have already made in their data center DR.




LeftHand Networks Announces New Virtual SAN Appliance (VSA); VSA Circumvents VMotion SAN Prerequisite

Over the last few years LeftHand Networks has quietly – or maybe not so quietly – grown its installed base to 9,000. Like a growing number of storage-centric businesses, it has developed a customer niche in the small and midsize enterprise (SME) market, providing economical iSCSI storage products for these environments. However, no matter what size customer a storage vendor supports, all storage vendors must now account for the growing presence of VMware virtual machines (VMs) in these environments, and LeftHand Networks is no exception.

The difference is that the type of functionality that a storage vendor needs to incorporate into its products to support VMware is driven by the market segment it serves. LeftHand Networks’ heavy focus on SMEs coupled with its SAN/iQ software gives it flexibility and advantages that most other providers of iSCSI storage software or hardware typically cannot offer for VMware.

However, as LeftHand Networks seeks to move into adjacent markets such as small and midsize businesses (SMBs) and remote and branch offices (ROBOs) that are looking to implement VMware, it needs to hit their two hot buttons: low cost and simplicity. Over the years, LeftHand Networks’ SAN/iQ software has delivered this in the SME space by giving customers the option to turn existing servers into storage controllers and then present each server’s storage to other servers on the network as iSCSI storage targets. These clients can then cluster two or more of these servers together using SAN/iQ’s clustering technology to create a highly available configuration.

These technologies, coupled with LeftHand Networks’ new focus on SMBs and ROBOs, come together in today’s new product offering – its Virtual SAN Appliance (VSA). The two features that particularly stand out about LeftHand’s VSA are:

  • It provides for failover between different VMware ESX servers using VMware’s VMotion feature without requiring an external iSCSI or FC SAN. Typically, the only way users can take advantage of VMware VMotion, which allows a VM to fail over from one physical machine to another, is to deploy it in conjunction with a SAN. That requirement for a SAN may preclude an SMB or a ROBO from utilizing the feature. LeftHand Networks circumvents the requirement for an external SAN by using its SAN/iQ software to virtualize disk (internal or external) on each VMware server and then creating a cluster of VMs on different VMware physical servers (see the conceptual sketch after this list).
  • LeftHand Networks’ SAN/iQ software appears on VMware’s Hardware Compatibility List (HCL). This achievement is a first, to the best of my knowledge, for a provider of storage virtualization software. I have talked to other storage virtualization software providers who have expressed a great deal of frustration about their inability to receive this coveted certification from VMware because of what it means in terms of achieving customer acceptance of their products. LeftHand Networks’ VP of Business Development, Karl Chen, told me that surveys of VMware customers reveal that 88% of them are more comfortable with a product certified by VMware than with products without a certification. So it was a real coup for LeftHand Networks to obtain this certification from VMware for its SAN/iQ storage virtualization software.
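To make the first point above more concrete, the sketch below models the idea in simplified form: each ESX host runs a virtual appliance that contributes its local disk to a mirrored, clustered volume, so every host can reach the same storage and VMotion gets the shared storage it expects without an external array. This is a conceptual model only, not the SAN/iQ implementation, and the class and host names are invented.

```python
# Conceptual model of a virtual SAN appliance cluster; not SAN/iQ itself.
# Each host runs a VSA that mirrors writes to its peers, so the clustered
# volume survives the loss of any single host.

class VirtualSANAppliance:
    def __init__(self, host_name):
        self.host_name = host_name
        self.local_blocks = {}           # backed by the host's internal disk

class ClusteredVolume:
    """A shared volume built from the local disks of two or more VSAs."""
    def __init__(self, appliances):
        self.appliances = list(appliances)

    def write(self, block, data):
        for vsa in self.appliances:      # synchronous mirror across hosts
            vsa.local_blocks[block] = data

    def read(self, block, failed_host=None):
        for vsa in self.appliances:
            if vsa.host_name != failed_host:
                return vsa.local_blocks[block]
        raise IOError("no surviving copy of the block")

esx1, esx2 = VirtualSANAppliance("esx1"), VirtualSANAppliance("esx2")
volume = ClusteredVolume([esx1, esx2])
volume.write(0, "vm disk block")
# If esx1 fails, a VM restarted or moved to esx2 still sees its storage:
print(volume.read(0, failed_host="esx1"))
```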

The adoption of iSCSI SANs is certain to grow in the coming years as Ethernet performance increases and costs continue to drop. But not every SMB or ROBO needs or wants an Ethernet iSCSI SAN on its initial install, yet many still want the data protection and application failover benefits that VMware’s VMotion delivers. Now, thanks to LeftHand Networks’ understanding of their specific needs, its new VSA offering and its recently awarded VMware HCL certification, it looks like these benefits may become achievable goals for SMBs and ROBOs after all.

Bitnami