Deduplication has emerged as “the” quick fix for the myriad problems associated with enterprise backups. Deduplication enables organizations to shrink backup windows, minimize their reliance on tape, and replicate their backup data to an offsite location more easily and cost effectively. But as deduplication has grown in popularity, so has the number of ways that organizations can choose to implement it in their environments.
The ongoing success of virtual server environments is unprecedented in terms of shrinking server footprints in data centers, decreasing the time to deploy new applications and delivering needed cost savings to corporate IT organizations. Yet one component of the virtual environment that is often overlooked, and that can introduce new levels of complexity, is the backup and recovery process required to protect virtual servers. In fact, only now are significant advances occurring that make the protection of virtual servers a simple and straightforward operation.
Offering the appropriate technology solutions to your internal business customers is a priority for any technology manager who wants to provide high levels of service at the lowest possible cost, particularly in the troubled economic times we are living in today. However, knowing when to pull the trigger and outsource a critical IT function such as backup, versus making further investments in infrastructure, is not so cut-and-dried when your name is on the dotted line. Further, every IT manager now regularly faces the conundrum of “Do I continue investing in hardware and software upgrades to support the data growth in the data center and remote locations?” versus “Should I start leveraging the backup services of a Managed Service Provider via a cloud computing offering?”
With all the debates going on out there today about which vendor offers the best deduplication approach, one wonders, “How is a customer supposed to make the right deduplication decision?” Of course, any approach that demonstrates real-life space reduction ratios makes the technology worth purchasing. But even in this scenario, there are several camps on the best way to deduplicate data and where the deduplication should occur. Should companies deduplicate data on the client; should they do it using in-line processing; or should they deduplicate data using a post-processing algorithm?
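Whether it happens on the client, in-line, or post-process, the core step every deduplication approach shares is splitting data into chunks, fingerprinting each chunk, and storing only the unique ones. A minimal sketch in Python illustrates the idea; the fixed 4 KB chunk size and dictionary-based chunk store are illustrative assumptions (real products typically use variable-size chunking and purpose-built indexes), not any particular vendor’s implementation:

```python
import hashlib

CHUNK_SIZE = 4096  # illustrative fixed-size chunks

def deduplicate(data: bytes, store: dict) -> list:
    """Split data into chunks, keep each unique chunk once, and
    return the list of chunk fingerprints (the 'recipe' for the data)."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        fingerprint = hashlib.sha256(chunk).hexdigest()
        if fingerprint not in store:   # new chunk: store its contents
            store[fingerprint] = chunk
        recipe.append(fingerprint)     # duplicate chunk: reference only
    return recipe

def restore(recipe: list, store: dict) -> bytes:
    """Reassemble the original data from its recipe."""
    return b"".join(store[f] for f in recipe)

store = {}
backup1 = b"A" * 8192 + b"B" * 4096
backup2 = b"A" * 8192 + b"C" * 4096   # shares two chunks with backup1
r1 = deduplicate(backup1, store)
r2 = deduplicate(backup2, store)
assert restore(r1, store) == backup1 and restore(r2, store) == backup2
print(len(r1) + len(r2), len(store))  # 6 logical chunks, only 3 stored
```

The client-side versus in-line versus post-process debate is essentially about where and when this chunk-and-fingerprint step runs: before data leaves the client, as it arrives at the backup target, or after it has landed on disk.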
An area that is often overlooked in an IT infrastructure, at least until it’s needed, is the backup and recovery environment. Then, when the realization hits the company that it needs backup software, that software is typically complex to install, configure and maintain, even in small environments, because backup consists of so many moving parts (backup servers, tape robots, disk-based arrays, SAN networks, etc.). The good news is that more hardware and software vendors are stepping up to the plate and partnering to take some of the complexity out of installing and configuring backup software in environments of this size. The recent announcement between Dell and Symantec is the latest in the growing number of symbiotic relationships between hardware and software vendors in the backup space.
One can hardly visit any storage system vendor’s website without running into a reference to “Thin Provisioning,” available either in their current product or on their product roadmap. However, how many operating system, volume manager or file system vendors do you find using those words? Until recently, there were none. But now that Symantec has jumped into the Thin Provisioning arena with both feet, how companies use and manage thin provisioning in the coming years should change significantly.
Sarbanes-Oxley, FRCP amendments, the FTC Red Flag Rules and the Payment Card Industry’s Data Security Standard (PCI DSS) are just some of the many federal, state and local regulations with which businesses may need to comply. This does not even begin to factor in the need to satisfy the many internal governance policies and procedures to which they need to adhere. Then, even if they somehow manage to satisfy all of these compliance requirements, they still have pools of data that do not fall under any compliance or regulatory requirements, at least not at the beginning of the data’s lifecycle. There are a variety of reasons why businesses archive this non-compliance data. Some of these include reducing primary storage capacity and cost, decreasing the length of backup jobs by moving data out of the backup stream, optimizing the performance of production email stores and satisfying internal file and email retention policies. These business reasons generally vary from company to company depending…
Traditional clustering methodologies are severely limited with respect to scale, heterogeneous support, and distributed application support. Because of these limitations, clustering has primarily been the domain of shops with high-end applications and equally high-end budgets for the hardware and software needed to implement it. Symantec’s announcement last week of Veritas Cluster Server (VCS) One begins to change this scenario for any organization interested in extending the benefits of clustering to a greater number of its applications. And based upon what we saw in this first release of VCS One, we are now wondering who wouldn’t be interested in clustering more of their environment, whether virtual or physical.
Since the inception of VCS (Veritas Cluster Server), end users have had access to significantly higher levels of reliability and availability on heterogeneous platforms such as AIX, Linux, HP-UX, Solaris and Windows for their critical, tier-1 applications. Now, with a decade of clustering critical business applications under its belt, Symantec has the experience and understanding of what customers expect from high availability (HA) software and what they need to make it successful in their shops.