When organizations evaluate backup appliances, they typically have to place them in one of two categories: integrated backup appliances (which include backup software) and target-based deduplicating backup appliances. Cohesity effectively blurs these lines by giving organizations the option to use its appliances for either or both of these use cases. In this fourth and final installment in my interview series with system architect Fidel Michieli, he describes how he leverages Cohesity’s backup appliance both for VM protection and as a deduplicating backup target for his NetBackup backup software.
Jerome: Can you provide some insight into how much data you protect in your environment and how you are using Cohesity to back it up and protect it?
Fidel: We have 2.8 PB of protected data, which has a long retention requirement. We use Cohesity as a backup target for Veritas NetBackup to protect data stored on our traditional NAS arrays with NFS and CIFS shares. Currently there is no way for Cohesity to back those file shares up directly, so we leverage NetBackup to protect that data and use Cohesity as the backup target.
Jerome: What percentage of your current environment is protected by NetBackup?
Fidel: I would say about seven percent of our environment which includes those CIFS shares and a couple of physical servers. In those cases, we use Cohesity as a backup target.
One very helpful feature of Cohesity is its views concept. Each view offers three deduplication settings:
- Inline deduplication
- No data deduplication
- Post process deduplication
Sometimes I do not want to deduplicate data because it is very processor intensive and I know that not all of my data deduplicates and compresses well. Cohesity gives me the option to turn these features off; I have never heard of another storage device that lets you optionally turn off deduplication and compression if you do not want them.
The other setting is post process deduplication, which frees Cohesity to quickly ingest backup data without spending time deduplicating it up front.
I prefer to do inline deduplication for the majority of my data as it comes in but, if my backup windows do not allow for it or the data does not deduplicate well, it is nice to have the option to do post process deduplication or even turn deduplication off.
Jerome: Do you schedule when post process deduplication occurs, and how much do you use it?
Fidel: We do not use the no deduplication option though it is a very cool feature. We see basically what we would expect from using post process deduplication. Backups go really fast as Cohesity ingests the data really quickly.
When we use inline deduplication, we experience tremendous dedupe and compression rates that allow us to store large amounts of data on Cohesity, and the recoveries are awesome. Basically, you tell Cohesity to recover a VM and it mounts the VM to vCenter and spins it up from its location on the Cohesity appliance. You do not even need to move data in order to recover it. That’s very, very powerful.
We started leveraging this capability for our test and dev environment. We have an identity management application from Dell that we are rolling out, and our architect was having a lot of issues deploying it because it is very closely tied to Active Directory (AD). Since he does not have a real test environment, we use Cohesity to take and store a backup of our production AD. Then we restore it on Cohesity for test and development so he can connect to this test environment using yesterday’s backup. We have a script that changes the IPs to make it kosher for him to use, plus we have a process for refreshing that environment weekly.
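The IP-changing step of a refresh like this can be sketched in a few lines. This is purely an illustrative assumption, not Cohesity functionality or Fidel’s actual script: the subnets and hostnames below are invented, and the sketch simply maps each production address to the same host offset in a test subnet so the cloned domain controllers cannot collide with production.

```python
import ipaddress

# Hypothetical subnets for illustration only; not taken from the interview.
PROD_SUBNET = ipaddress.ip_network("10.1.0.0/24")
TEST_SUBNET = ipaddress.ip_network("10.99.0.0/24")

def remap_ip(prod_ip: str) -> str:
    """Map a production IP to the same host offset in the test subnet."""
    ip = ipaddress.ip_address(prod_ip)
    if ip not in PROD_SUBNET:
        raise ValueError(f"{prod_ip} is not in {PROD_SUBNET}")
    offset = int(ip) - int(PROD_SUBNET.network_address)
    return str(ipaddress.ip_address(int(TEST_SUBNET.network_address) + offset))

if __name__ == "__main__":
    # e.g. cloned AD domain controllers restored from yesterday's backup
    for host in ("10.1.0.10", "10.1.0.11"):
        print(host, "->", remap_ip(host))
```

A real refresh would also rewrite DNS records and reconfigure the cloned VMs’ NICs, but the address-translation idea is the same.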
Jerome: Do you back up your VMs using Cohesity?
Fidel: NetBackup is only used for our physical servers, which Cohesity does not protect right now, nor does it protect CIFS shares. However, by moving most of our VM backups to Cohesity, our licensing costs for NetBackup have dropped by over 97 percent.
Jerome: Was the amount saved enough to pay for the Cohesity solution?
Fidel: Yes. When we originally architected our backup environment, we used raw disk in the design, and its cost per TB could not compete with Cohesity, which achieves deduplication ratios of 20:1. That is pretty amazing. We could get half a PB of raw storage for around 50 grand, but even taking those numbers into consideration, raw disk could not compete with Cohesity, and that fails to take into consideration the cooling, power, and rack space costs associated with the extra disk.
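The back-of-the-envelope arithmetic behind that comparison is easy to reproduce from the two figures Fidel gives: roughly $50,000 for half a PB of raw disk, and a 20:1 deduplication ratio. The Cohesity appliance’s own price is not stated in the interview, so this sketch only shows what the quoted numbers imply about effective cost per logical TB of protected data.

```python
# Figures quoted in the interview; everything derived below is arithmetic,
# not vendor pricing.
RAW_COST_USD = 50_000
RAW_CAPACITY_TB = 500   # half a PB of raw disk
DEDUPE_RATIO = 20       # 20:1 deduplication ratio

cost_per_raw_tb = RAW_COST_USD / RAW_CAPACITY_TB
# With 20:1 dedupe, each raw TB can hold roughly 20 TB of backup data,
# so the effective cost per logical TB drops by the same factor.
effective_cost_per_logical_tb = cost_per_raw_tb / DEDUPE_RATIO

print(f"Raw disk: ${cost_per_raw_tb:.0f} per raw TB")
print(f"At 20:1 dedupe: ${effective_cost_per_logical_tb:.0f} per logical TB")
```

The point of the sketch: even very cheap raw disk at $100/TB stores data far less efficiently than a deduplicating target, before power, cooling, and rack space are counted.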
In part 1 of this interview series, Fidel shares the challenges that his company faced with its existing backup configuration as well as the struggles it encountered in identifying a backup solution that scaled to meet a dynamically changing and growing environment.
In part 2 of this interview series Fidel shares how he gained a comfort level with Cohesity prior to rolling it out enterprise-wide in his organization.
In part 3 of this interview series, Fidel shares some of the challenges he encountered while rolling out Cohesity and the steps that Cohesity took to address them.