As I attended sessions at Microsoft TechEd 2014 last week and talked with people in the exhibit hall, a number of themes emerged, including “mobile first, cloud first,” hybrid cloud, migration to the cloud, disaster recovery as a service, and flash memory storage as a game-changer in the data center. But as I reflect on the entire experience, a statement made by John Loveall, Principal Program Manager for Microsoft Windows Server, during one of his presentations sums up the overall message of the conference: “Today it is really all about the integrated solution.”
Small and medium businesses (SMBs) are rapidly moving toward virtualizing their physical servers using VMware. But as they do so, they are also looking to minimize the cost, complexity and overhead that backing up VMware servers introduces while increasing their ability to recover their newly virtualized applications. It is these concerns that InMage’s new vContinuum software addresses by using a new technique to tap into VMware that provides near-zero-impact backups with near-real-time recoveries.
Last week I took a look at the first three factors to consider when choosing a replication software product. This week I wanted to finish my thoughts on that subject and discuss the final four factors that should be part of any evaluation of replication software.
Replication is becoming an ever more important component in the protection and recovery of applications. Anecdotal evidence already suggests that 50% or more of all SAN and NAS storage systems ship with some form of replication software, while many more organizations use replication in its other forms (application-, appliance- or host-based). But regardless of which form of replication software organizations buy, they are often unaware of the subtle ways in which replication software products differentiate themselves.
One of the principal struggles within organizations in the first decade of the new millennium has been solving Windows backup issues. Now that a new decade has arrived, the problem has changed as organizations turn their attention to how they can recover their Windows application servers in a time frame and manner that meets their requirements. But to identify such a solution, they first need to define what such a recovery solution should look like.
One of the key concerns that businesses have is how cloud providers will handle and respond to spikes in application demand. It is this concern that InMage’s newly announced cloud-optimized infrastructure is designed to address.
These days it seems that all someone has to do is use the word “deduplication” in conjunction with a data protection product and that product magically looks “better.” But organizations have to be careful not to let deduplication color their view of what they hope to accomplish with the implementation of disk-based data protection. Rather, organizations need to look at data protection from a different viewpoint, one that is not tainted by deduplication and that allows them to fully leverage the flexibility that disk-based backup provides.
There is a perception among enterprise organizations that in order to deploy continuous data protection (CDP) technology, they also need to use high-performance disk in conjunction with it. But enterprises should probably reassess that assumption. The emergence of new and better CDP architectures, such as the one InMage offers, enables organizations to deliver high-speed CDP while using slower-performing SATA disk drives.
Here is what determines how much storage a CDP product needs. CDP initially needs an allotment of storage capacity equal to the size of the volume on which the protected data resides. This is needed so the CDP product can make a copy of all of the blocks on the production volume. However, the wild cards in how much storage the CDP product requires are based not on the size of the production volume but on two other variables.
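To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch, assuming (as is common when sizing a CDP journal, though not stated above) that the two variables are the volume’s daily rate of change and the length of the retention window:

```python
# Rough CDP storage sizing sketch. Assumption (not from the text above):
# the two variables are the daily change rate on the production volume
# and the retention (journal) window in days.

def cdp_storage_gb(volume_gb: float,
                   daily_change_rate: float,
                   retention_days: int) -> float:
    """Estimate total CDP storage: one full copy of the protected
    volume plus a journal holding every changed block for the
    length of the retention window."""
    baseline = volume_gb                                   # initial full copy
    journal = volume_gb * daily_change_rate * retention_days  # changed blocks
    return baseline + journal

# Example: a 500 GB volume changing 5% per day, retained for 14 days,
# needs roughly 500 + (500 * 0.05 * 14) = 850 GB.
print(cdp_storage_gb(500, 0.05, 14))
```

Under these assumptions, the baseline copy is fixed while the journal grows linearly with both change rate and retention, which is why those two variables, not the volume size, are the wild cards.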
The introduction of disk and deduplication into the backup process over the last few years has certainly helped to minimize existing backup problems. Organizations using these technologies have found that their backup success rates now approach 100% and that they no longer have to continually troubleshoot backup problems. But while these technologies may fix existing backup problems, they relegate disk to a glorified form of tape and do not serve to fundamentally transform the recovery process.
At the conclusion of a recent call I had with Rob Tellone, the CEO of vBC Cloud, he asked me, “What do you consider the difference between business continuity (BC) and disaster recovery (DR)?” I gave him my definition of each but then went on to explain to him that on the business side of the house no one really cares about the definition of either BC or DR. At the end of the day, all they care about is how quickly and cost effectively IT can bring the affected parts of their business back online regardless of the scope of the incident.
Server virtualization was one of the hot technology trends in 2009 and there is every reason to believe it will remain that way in 2010. But as this trend broadens to include the virtualization of mission critical applications like Microsoft Exchange and SQL Server, new considerations come into play. Most notably, organizations must identify a data protection solution that can deliver application-consistent recovery points, bring applications quickly back online and do so without negatively impacting the performance of the physical host.
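To illustrate what delivering an application-consistent recovery point entails, here is a minimal, vendor-neutral sketch of the quiesce-snapshot-resume sequence such a solution performs; every class and method name here is a hypothetical stand-in, not any product’s actual API:

```python
# Why "application-consistent" differs from crash-consistent: the
# application is quiesced (in-flight transactions flushed, new I/O held)
# before the snapshot, then resumed right away to limit host impact.
# App and Volume are hypothetical stand-ins for illustration only.
from contextlib import contextmanager

class App:
    def flush_transactions(self): print("flushing in-flight writes")
    def freeze_io(self):          print("holding new writes")
    def thaw_io(self):            print("resuming normal I/O")

class Volume:
    def snapshot(self):           return "point-in-time image"

@contextmanager
def quiesced(app: App):
    app.flush_transactions()  # commit pending writes to disk
    app.freeze_io()           # keep the volume stable during the snapshot
    try:
        yield
    finally:
        app.thaw_io()         # resume I/O immediately, minimizing impact

def app_consistent_snapshot(app: App, volume: Volume) -> str:
    with quiesced(app):
        return volume.snapshot()  # recoverable to a known-good state

print(app_consistent_snapshot(App(), Volume()))
```

The shorter the freeze window in the middle of that sequence, the less the data protection solution impacts the performance of the physical host.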
On top of the storage news this week, we saw the demise of COPAN Systems; or did we? It isn’t quite clear what has been going on over at COPAN, as we have yet to get any confirmation from within the industry. Bill Mottram, a managing partner at Veridictus Associates and a fellow Coloradan, was unable to contact the Colorado company for comment. Concrete information regarding COPAN is hard to find, but we were able to put a few pieces together from across the social sphere:
Even though Gartner Research says that server virtualization is not yet widely implemented (only 16 percent of workloads currently run on virtual machines), it does point to a much more virtualized environment in the very near future. Gartner expects that fully 50% of workloads will run inside virtual machines by 2012, representing nearly 58 million deployed machines. But as this transition from physical to virtual occurs within data centers, traditional disaster recovery (DR) software, procedures and techniques are not positioned to migrate so cleanly into this newly virtualized environment.
Disaster recovery (DR) and testing and development environments have historically been closely linked, whether or not anyone liked to admit it. Organizations would construct test and development environments and then use them for DR purposes if needed; or, they would quietly repurpose computer gear purchased with DR funds for testing and development. However, the trick is getting these two distinct business processes to share the same environment without creating new levels of complexity.
This past spring a debate erupted on BackupCentral.com between a user complaining about not getting new features in his backup software as part of his annual maintenance contract and his backup software provider, which wanted to charge extra for them. The user was, in his words, ‘faithfully paying his annual 20% fee for maintenance’ and now wanted the backup software’s new Advanced Recovery option included as part of his support costs.
Software as a Service (SaaS) is on almost every company’s radar screen as a cost-effective means for outsourcing applications that are not core competencies of their IT staff. Yet while outsourcing more applications sounds great in theory, applications such as disaster recovery (DR) that organizations are looking to outsource must exhibit certain characteristics. Specifically, the software needs to support options like partitioning and data security that are inherent in a feature like multi-tenancy.
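To show what partitioning and per-tenant data security mean in practice, here is a toy sketch of a multi-tenant store in which each tenant’s data lives in its own partition and is encrypted with that tenant’s key; all names here are hypothetical, and no specific DR product’s API is implied:

```python
# Illustrative multi-tenancy sketch: per-tenant partitions plus
# per-tenant encryption keys, so one customer of a shared service
# can never read another's data. Tenant and TenantStore are
# hypothetical names, not any vendor's actual API.
from dataclasses import dataclass, field
from cryptography.fernet import Fernet  # pip install cryptography

@dataclass
class Tenant:
    tenant_id: str
    key: bytes = field(default_factory=Fernet.generate_key)

class TenantStore:
    """Keeps each tenant's data in its own partition, encrypted
    with that tenant's key."""
    def __init__(self) -> None:
        self._partitions: dict[str, dict[str, bytes]] = {}

    def write(self, tenant: Tenant, name: str, data: bytes) -> None:
        partition = self._partitions.setdefault(tenant.tenant_id, {})
        partition[name] = Fernet(tenant.key).encrypt(data)

    def read(self, tenant: Tenant, name: str) -> bytes:
        # A tenant can only address (and decrypt) its own partition.
        ciphertext = self._partitions[tenant.tenant_id][name]
        return Fernet(tenant.key).decrypt(ciphertext)

store = TenantStore()
acme = Tenant("acme")
store.write(acme, "backup-catalog", b"acme recovery metadata")
print(store.read(acme, "backup-catalog"))
```

The design point is that isolation is enforced twice, by namespace and by key, so a partitioning bug alone is not enough to expose one tenant’s data to another.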
“Business Continuity” and “Disaster Recovery” are two aspects of IT and business planning and process management that no organization can afford to get wrong. So it is somewhat disconcerting that a recent article reports that the majority of businesses do not yet have a disaster recovery plan or business continuity process in place or, if they do, they do not regularly test it.
Recent feedback from InMage Systems’ existing customer base indicates that 100% of them use its Scout software for disaster recovery. That probably comes as no surprise to anyone familiar with Scout or its heterogeneous recovery capabilities. But what may come as a surprise to some is that nearly 40% of these existing Scout users are seeing a 200% return on investment (ROI) in Scout because of how it can be used in multiple ways in a company’s IT infrastructure.
Most organizations recognize that the introduction of disk into the data protection process is fundamentally changing the landscape of how data is protected. But what organizations are failing to entirely grasp is how disk fundamentally alters how applications can be protected and recovered. Disk can minimize the impact of data protection on production applications while providing shorter recovery times and improving recovery reliability. It is as organizations come to this realization that they also begin to grasp how recovery can displace backup as the next IT headache.