One of the most vexing problems in enterprise data centers today is how little information those in charge of data centers have about the infrastructure they manage. When I worked for a Fortune 500 company, it was like pulling teeth to get even the baseline information I needed to understand how to manage the infrastructure, and this was before the advent of server virtualization. These problems certainly have not decreased and, if anything, organizations now have less time, less money and probably fewer people, but a greater need than ever to understand their infrastructure so they can manage it.
Enterprise data protection software is experiencing a fundamental shift in terms of what organizations expect it to deliver and the amount of distributed, structured and unstructured data that it needs to protect. As recently as a few years ago, the expectations of enterprise organizations were relatively modest by today’s standards – support for most major operating systems, integration with major applications (MS Exchange, Oracle, etc.) and tape library support. While some of those requirements still hold true today, more has changed than has stayed the same. This is putting a great deal of pressure on data protection products to swiftly evolve.
Most businesses, small and large, have many IT needs, but one that they continue to focus on as they move into a completely paperless world is data protection and, more specifically, data recovery. They know their current in-house backup and recovery processes are often less than adequate, so when they ask hard questions like, “How long can I afford to be without my data?” and “What does losing that data mean to the company and the company’s public reputation?”, they don’t like the answers. But IT managers are surprised to learn, as they look to move their backup and recovery services to a SaaS offering built on a cloud computing architecture, that there are many options from which to choose.
To say that organizations are approaching 2009 with more than just a little apprehension would be an understatement. Scandals are rocking the financial markets on an almost daily basis. There is the looming threat of new legislation in 2009 which will make it more expensive to conduct business going forward. And, in the US, nearly 700,000 individuals in the private sector lost jobs in the month of December alone – Yikes! That leaves those left in organizations trying to figure out new ways to deliver the same amount of value and services with less money and fewer people, and nothing is more clearly in the sights of businesses than lowering their IT costs and keeping them under control.
However, as MSPs proliferate, the decision about which MSP to dial up gets harder, not easier, since more and more VARs are jumping on the SaaS bandwagon to offer Managed Backup Services. Further, companies need to quantify their own needs and expectations as they select an MSP. Below are some examples of questions that they need to ask and answer, internally and externally, before making this important decision.
Offering the appropriate technology solutions to your internal business customers is a priority for any technology manager who wants to provide high levels of service at the lowest possible cost, particularly in today’s troubling economic times. However, knowing when to pull the trigger and outsource a critical IT function such as backup, versus making further investments in infrastructure, is not so cut-and-dried when your name is on the dotted line. Every IT manager now regularly faces the conundrum: “Do I continue investing in hardware and software upgrades to support the data growth in the data center and remote locations?” or “Should I start leveraging the backup services of a Managed Service Provider via a cloud computing offering?”
Despite what happens out on the pitch, the Premier League is experiencing a small awakening amongst its clubs – and some unexpected harmony – in their IT disaster recovery solutions. The challenges, demands and expectations of delivering a robust backup and recovery solution for these clubs are just as pronounced as in any other corporate datacenter. However, faced with meeting the escalating salaries of their best football players, the IT staff in these organizations often comes out on the short end.
Any storage architect or administrator who has ever dared to accept the challenge of engineering or re-designing their company’s backup and recovery environment has undoubtedly discovered that he or she has had to sacrifice functionality or features based on the practical limits of the budget. The reasons vary from vendor to vendor, but mostly it comes down to how many backup and recovery software options companies are willing to pay for. Most vendors offer reasonably good licensing for the core software, but once you step outside of that realm, some of the most basic features are not included.
Anyone who works on the end-user side is continually confronted with crafting SLAs for various infrastructure components. Aggravating the situation, once SLAs are signed off on, it is nearly impossible to make changes without completely rocking the boat, so it is extremely important to get them right from day one.
When server and storage managers hear the “A-Word” (agents) come up in a conversation with a software vendor, they typically cringe and think to themselves, “Oh great, another set of agents that I have to not only deploy but also manage and track.” In the server world, some agents are unavoidable, such as those for performance and security monitoring or virus and worm detection and prevention.
The benefits that continuous data protection (CDP) technology provides as part of a company’s overall data protection strategy are becoming more evident every day. Point-in-time restores, faster recoveries and off-site replication of data for disaster recovery are just some of the benefits that companies using CDP are already experiencing. However, one challenge that may hinder or even prevent CDP’s adoption is the need to deploy host agents on servers.
A recurring theme in what I hear from users is how VMware adds new complexities to their day-to-day management tasks. For instance, even before server virtualization came into vogue, companies were already complaining that their physical servers reproduced like rabbits. Server virtualization makes server growth that much easier since companies no longer need to purchase a new physical machine; once server virtualization is in place, creating a new virtual machine (VM) is little more than a copy-and-paste exercise.
Companies can experience an overwhelming sense of relief when they finally resolve their ongoing backup problems by switching from tape to disk as their primary backup target. But what companies may fail to fully contemplate is the new possibilities – and challenges – that storing data on disk opens up to them. On the upside, disk makes data recoveries and off-site replication of the data much easier to accomplish. Conversely, it presents companies with new challenges in managing the data on disk as it ages, lest the escalating costs of disk capacity and of powering and cooling the storage system start to offset some of the benefits that disk-based backup provides.
Asigra makes no bones about it: it unabashedly advocates that companies keep all of their backup data on disk under the management of its Televaulting software. The reasons Asigra gives for keeping backup data on disk are plentiful as well: faster backup and recovery times; elimination of tape management tasks; deduplication technology that minimizes disk storage requirements; and data that is easy to copy and replicate locally and remotely. Yet if there is anything companies know about backup, it is that managing backup data and its recovery over the long term, whether on disk or tape, is where the complexity can start to surface.
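Asigra does not publish Televaulting's deduplication internals, but the general technique is straightforward: break data into chunks, hash each chunk, and store any given chunk only once. The sketch below is a minimal, generic illustration of that idea; the chunk size, function names and hashing scheme are illustrative assumptions, not Asigra's implementation.

```python
import hashlib

CHUNK_SIZE = 4  # tiny for illustration; real systems use KB-sized chunks


def chunk(data: bytes) -> list:
    """Split a byte stream into fixed-size chunks."""
    return [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]


def dedup_store(streams: list):
    """Store each unique chunk once, keyed by its content hash."""
    store = {}    # content hash -> the single stored copy of that chunk
    recipes = []  # per-stream ordered list of hashes, enough to rebuild it
    for data in streams:
        recipe = []
        for c in chunk(data):
            h = hashlib.sha256(c).hexdigest()
            store.setdefault(h, c)  # only the first occurrence is stored
            recipe.append(h)
        recipes.append(recipe)
    return store, recipes


def restore(store: dict, recipe: list) -> bytes:
    """Rebuild the original stream from its recipe of chunk hashes."""
    return b"".join(store[h] for h in recipe)


# Two backups that share most of their content: six chunks total,
# but only four unique chunks actually get stored.
store, recipes = dedup_store([b"AAAABBBBCCCC", b"AAAABBBBDDDD"])
```

Because both backups begin with the same chunks, the second backup only adds the chunks that differ, which is why dedup ratios climb as retained backup copies accumulate.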
In Asigra’s recent release of Televaulting 8.0, data security remains at the forefront with its use of the AES encryption algorithm to encrypt data both in transmission across the network and at rest in its DS-System or BLM Archiver. Televaulting 8.0 also gives users and service providers several options for encryption key management, each offering a different way to protect data from unauthorized exposure.
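Televaulting's actual key-management implementation is proprietary, but one common pattern behind such options is deriving the AES key from a user-held passphrase rather than storing the key itself, so that only a salt and iteration count are ever written down. The sketch below illustrates that generic approach with PBKDF2 from Python's standard library; all names and parameters are assumptions for illustration, not Asigra's design.

```python
import hashlib
import os


def derive_key(passphrase: str, salt: bytes, iterations: int = 200_000) -> bytes:
    """Derive a 256-bit AES key from a passphrase via PBKDF2-HMAC-SHA256.

    Only the salt and iteration count need to be stored; the key and
    passphrase never touch disk.
    """
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt,
                               iterations, dklen=32)


# Enrollment: generate a random salt and derive the data-encryption key.
salt = os.urandom(16)
key = derive_key("correct horse battery staple", salt)

# Later: the same passphrase and salt reproduce the same key on demand.
assert derive_key("correct horse battery staple", salt) == key

# A wrong passphrase yields a different key, so the data stays unreadable.
assert derive_key("wrong guess", salt) != key
```

The trade-off in this scheme is that losing the passphrase means losing the data, which is why products typically offer escrow or provider-held-key options alongside it.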
Knowing how long to keep copies of production data in backup repositories is a problem to which companies currently give only scant attention. When companies back up production data to tape, they tend to invest minimal time and effort managing the data after it is backed up. The backup data remains on the tape until it is overwritten during the next backup job, or until the tape, and the data on it, is simply discarded when the tape wears out. Besides, taking a more proactive approach to managing backup data on tape is time consuming, difficult to implement and has, to date, shown minimal return on investment (ROI).
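A more proactive approach starts with something as simple as an explicit, age-based retention check rather than waiting for tapes to be overwritten or wear out. The sketch below is a generic illustration of such a check; the function names, catalog layout and 45-day window are assumptions for illustration, not any product's policy.

```python
from datetime import date, timedelta


def expired(backups: list, retention_days: int, today: date) -> list:
    """Return the backup copies that have aged past the retention window."""
    cutoff = today - timedelta(days=retention_days)
    return [b for b in backups if b["created"] < cutoff]


# A tiny backup catalog: three monthly full backups.
backups = [
    {"name": "full-2009-01-01", "created": date(2009, 1, 1)},
    {"name": "full-2009-02-01", "created": date(2009, 2, 1)},
    {"name": "full-2009-03-01", "created": date(2009, 3, 1)},
]

# With a 45-day retention window as of mid-March 2009, only the
# January full backup is eligible for reclamation.
old = expired(backups, retention_days=45, today=date(2009, 3, 15))
```

Even this trivial policy surfaces the real question the text raises: someone has to decide what `retention_days` should be, and that decision has cost and compliance consequences either way.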
Backup to disk is fundamentally changing corporate perceptions about backup and recovery. Using disk as a primary backup target has solved long-standing corporate backup problems, including completing backups within designated backup windows and expediting recoveries, while deduplication is resolving the cost and capacity issues associated with storing backup data on disk. But before companies breathe a collective sigh of relief and think that disk has officially solved their backup problems, they need to think again. The immediate crisis may be over, but longer term problems still remain.
Backup has become a fairly innocuous way for companies to test the capabilities of a Managed Service Provider (MSP) and start down the path of outsourcing some of their storage services. However, the task of selecting an MSP should go well beyond just determining how well it backs up data. Outsourcing backups is likely just the first step in a larger journey that companies are embarking upon toward outsourcing more of their storage management requirements. So it behooves companies to regularly analyze their MSP to determine what steps it is taking to improve the management of its backup data stores and keep its data storage costs down long term.
The one I want to focus on in this entry is Televaulting’s new replication functionality. Replication is a key function in any facet of the storage landscape and, with Asigra adding this feature into its latest release of Televaulting, it becomes an even more robust player in the enterprise space.
Grid computing is starting to appear in some unlikely places. It is easy to assume that grid computing appears primarily in the world of academia or high tech corporate IT engineering labs. In these environments, computer scientists typically have the time and expertise to engineer complicated, high performance, low cost computing solutions that can perform tasks like mapping the human genome or identifying possible new sites to drill for oil. But applying grid computing to address a low-tech problem like backup and recovery? That almost seems like a mismatch.