New Arguments for Using Host-based Replication as the Enterprise De Facto Standard for Replication

By Jerome M. Wendt | February 21, 2013 | Symantec

Virtualizing applications so that fewer servers are needed makes great sense. Applications are centralized. Hardware is used more efficiently. Data center floor space is freed up. Virtual machine (VM) loads may be redistributed between physical systems more efficiently and non-disruptively. But then the realization hits. You have put all of your proverbial eggs in one basket, and unless you have a real-time or near real-time copy of this data off-site, should a major disaster hit, your goose is cooked. The question then becomes, “What is the best way to get this data off-site?”

This is a question more companies are being forced to ask and answer as server virtualization and storage consolidation have gone from being “cutting edge” to “how IT is done” in organizations of all sizes. As a result, there is a new realization that consolidation and virtualization alone are no longer sufficient. Rather, topics such as business continuity, disaster recovery and high availability are becoming top of mind.

While in the past IT organizations and the businesses they support could possibly have absorbed the outage or loss of a single piece of hardware and the application it hosted, in today’s environment a single piece of hardware (server or storage) may impact dozens of applications, which in turn could have a ripple effect throughout the entire organization.

This is where it gets dicey. Resolving the problem may get costly, complex or, more than likely, both. Using replication software to copy and/or move data off-site is the most sensible approach to achieving this goal. However, there are multiple forms of replication software available, each with its own set of pros and cons.

Consider:

  • Application-based replication. This application-specific approach to replication is great for certain applications such as Microsoft Exchange or Oracle, as it ensures they stay available in the event of an outage. The challenge is that this approach does nothing to help the rest of the virtualized, consolidated applications in the enterprise. Further, application-specific replication solutions often require specialized skills to set up and manage.
  • Appliance-based replication. With this approach, an appliance is placed in the network to perform replication. Its desirable aspect is that it sits between the servers and the storage, so all data passes through it. This is also its challenge. By funneling all data through the appliance, questions emerge about its scalability as well as about what happens should a performance issue arise (i.e., is the appliance to blame for the performance issue?).
  • Array-based replication. This one is appealing for the simple reason that the replication software resides on the storage array and can replicate data to another storage array. That is also its downside. The replication software will only replicate data to another array from that same storage provider, and it may only work with like models from that provider. Further, any data to be replicated must reside on the array. The solution also lacks application-level granularity.
  • Host-based replication. Host-based replication is desirable in that all traffic from the server passes through it, and it can see all storage: direct-attached (DAS), network-attached (NAS) or SAN-attached. Where enterprises need to exercise caution in selecting a solution is that many host-based approaches only work on a few operating systems (Windows or Linux) or only replicate data in one format or another (file or block).
(Please note this was not meant to be an exhaustive list of the pros and cons of each form of replication. I am simply providing some examples for illustration purposes.)
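To make the block-level idea behind host-based replication concrete, here is a minimal, purely illustrative sketch. It is not how any particular product (including Veritas Replicator) is implemented; the function names and the tiny block size are assumptions for demonstration. The key point it shows is that a host-side replicator can track block checksums and ship only the blocks that changed, rather than the whole volume:

```python
import hashlib

BLOCK_SIZE = 4  # tiny block size for illustration; real replicators use KB-scale blocks


def changed_blocks(old: bytes, new: bytes, block_size: int = BLOCK_SIZE):
    """Return (index, data) pairs for blocks that differ between two volume images.

    Hashes are compared rather than raw bytes because in practice the replicator
    keeps per-block checksums instead of a full copy of the previous image.
    """
    changes = []
    n_blocks = (max(len(old), len(new)) + block_size - 1) // block_size
    for i in range(n_blocks):
        lo, hi = i * block_size, (i + 1) * block_size
        if hashlib.sha256(old[lo:hi]).digest() != hashlib.sha256(new[lo:hi]).digest():
            changes.append((i, new[lo:hi]))
    return changes


def apply_blocks(target: bytearray, changes, block_size: int = BLOCK_SIZE):
    """Apply the shipped blocks to the replica, bringing it in sync with the source."""
    for i, data in changes:
        target[i * block_size:(i + 1) * block_size] = data
    return bytes(target)
```

Because the mechanism works on raw blocks below the file system, it is indifferent to which application wrote the data, which is exactly why host-based replication can cover the whole server rather than a single application.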

This mix of pros and cons has led many enterprises to use an assortment of replication software solutions to meet their varied needs. While this works up to a point, enterprises must then deal with the shortcomings of each approach plus configure and manage multiple replication products (complexity). Further, licensing each solution may lock a company into the underlying hardware on which the replication software is delivered (cost).

The good news is that not all host-based replication software is created equal, and using solutions like Veritas Replicator, organizations are free to standardize more aggressively on this approach. Veritas Replicator has done host-based replication at the volume (block) level since its inception a decade or so ago. In recent months and years, however, it has made some notable enhancements that make it practical and logical for enterprises to look to displace other forms of replication they may be using while expanding their use of Veritas Replicator within their organization.

  • Bi-directional replication. As enterprises create secondary sites with their own hardware, they do not want these resources to sit idle and simply act as a passive replication target. They also want to use them for activities that support the business. To accomplish this and still adequately protect data at this secondary site, the software must be able to both send and receive data. Veritas Replicator now offers this functionality.
  • Content distribution. Organizations of all sizes have remote and branch offices that need updated information from the home office on a regular basis (weekly, daily, hourly) to conduct business. Veritas Replicator now offers both fan-in and fan-out replication capabilities, so organizations can concurrently distribute data to multiple offices; if data is updated at a branch, they also have the option to replicate those updates back home.
  • Data migration. Many enterprise organizations are looking to move from UNIX to Linux to host their applications. The challenge? Easily, quickly and successfully migrating the data from UNIX to Linux. Now using Veritas Replicator, enterprises can accomplish this cross-platform data migration, even at the block level.
  • File and folder level replication. Sometimes organizations do not want to replicate all of the data on a device; they just want to replicate all of the data in a specific folder or even a file. Now using the latest version of Veritas Replicator, they get this capability without needing to sacrifice their ability to replicate entire drives or volumes at the block level.
  • Manageable across multiple operating system platforms. One of the strengths of Veritas Replicator versus other host-based replication software solutions has always been its ability to support multiple operating system platforms (Linux, Windows and UNIX). What now more than ever sets it apart, however, is that, using a common console, administrators may centrally configure and manage replication on any of these platforms.
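The fan-out and fan-in patterns mentioned above can be sketched in a few lines. This is a toy model under stated assumptions: the `Site` class and the `fan_out`/`fan_in` helpers are hypothetical names invented for illustration, not part of any Veritas API. It shows the topology only: one home office pushing the same data to many branches, and branches pushing their updates back:

```python
class Site:
    """Toy model of a replication endpoint holding key/value 'files' (hypothetical)."""

    def __init__(self, name: str):
        self.name = name
        self.data = {}

    def send(self, other: "Site", keys=None):
        """Replicate selected keys (everything by default) to another site."""
        for k in (keys if keys is not None else list(self.data)):
            if k in self.data:
                other.data[k] = self.data[k]


def fan_out(source: Site, targets, keys=None):
    # One source distributes the same data to many branch-office replicas.
    for t in targets:
        source.send(t, keys)


def fan_in(sources, target: Site, keys=None):
    # Many branches replicate their (updated) data back to the home office.
    for s in sources:
        s.send(target, keys)
```

For example, a headquarters site could `fan_out` a price list to every branch, and each branch could `fan_in` its daily orders back to headquarters, which is the bidirectional flow the post describes.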

Enterprises may always have a need to replicate at multiple layers within the compute stack (host, network and storage). But by seeking to standardize on one product to deliver replication, they improve their chances of making this process both easier to manage and more cost-effective to implement.

The recent enhancements to Veritas Replicator coupled with the many other proven features that it already offers give enterprises more reasons than ever to standardize on it as their preferred replication solution. In so doing, they can help bring high availability, business continuity and data protection to a portion of the business process that needs it more than ever so they never again have to worry about their goose being cooked.

About Jerome M. Wendt

Jerome Wendt is the President and Founder of DCIG, LLC, an independent storage analyst and consulting firm, which he founded in November 2007.
