BC, DR and Compliance Driving Cloud Service Provider Convergence; Interview with AIS VP of Network Engineering Steve Wallace Part I

A convergence is happening in the cloud service provider space. More cloud-based archive and backup providers are evolving to account for transactional/production data while managed service providers want to extend their reach into the archival/backup space. One company at the forefront of this convergence is cloud service provider American Internet Services (AIS). Today I talk with AIS’s VP of Network Engineering, Steve Wallace, about how this convergence is impacting cloud service providers in general and AIS specifically.

Jerome: Thank you, Steve, for taking time out of your schedule to meet with me and talk more about cloud service providers in general and AIS specifically. So to kick off our conversation, can you tell me a bit about how both cloud service providers and AIS are evolving to meet these converging cloud service provider requirements?

Steve: Jerome, thank you for giving me the opportunity to publicly share some of my thoughts on these important topics.

Cloud service providers have the challenge of providing cloud services – compute and local storage – as well as meeting client requirements to move their data off the local site for business continuity (BC) or disaster recovery (DR) purposes.

There are a few pressures that create this opportunity. One is regulatory compliance for public companies: they need some sort of business continuity plan, which requires that they store their data offsite.

In some cases, regulated industries, like the financial industry in California, cannot ship their data outside of the state. You also see this in Europe, where companies have to keep all of their financial data within the borders of the EU.

It is a similar situation in California. Banks need to know where their data lives – and that is not always simple – especially when dealing with a larger service provider like Amazon.

As a cloud service provider, one of our products is cloud storage, of which there are several types. There is transactional storage, which is extremely fast and local. Then there is bulk storage, which is network-attached SATA storage that is not transactional but still very fast.

Finally, there is archival storage. Archival storage holds large amounts of data that either need to be retained for extended periods of time or that regulatory requirements drive out of the area or out of the local data center.

Jerome: So explain to me how these different requirements are forcing cloud service providers like AIS to evolve their offerings so they encompass transactional as well as more specific bulk/archival storage requirements.

Steve: Over the last two years AIS has transitioned from a traditional co-location services company to a managed services company. As such, data ultimately ends up in the cloud and, as a result, we have become a cloud services provider.

In this respect AIS already has a very solid regional network that we operate ourselves and that connects us to a DR-capable site in Phoenix. To achieve this goal we have leveraged our networking services and clients’ gear to provide turnkey DR and BC solutions, with storage being one of the crucial pieces of that solution.

We can transmit data very easily to Phoenix and provide very good external Internet connectivity. This gives us many ways to move data from place to place. We can offer local data center storage which, in the co-location model, resides on equipment that the client owns and operates.

However, clients that have data here in San Diego – companies with Big Data workloads such as genomics, bioinformatics, and financial analytics – need to safely get their data somewhere offsite. For data here in San Diego, the obvious choice is Phoenix because it is geographically stable and nothing is likely to happen to the data there.

However, this approach becomes problematic for clients when it comes to supporting remote infrastructure. They wind up with not only the CAPEX of having to build out the equipment but also the OPEX of having to support that equipment in a remote location. So ultimately we have to get rid of that CAPEX component and allow them to get data into a safe place that may not even be on our network.

The first step is to build a fast, reliable network and provide the DR facilities. The second is to add on features that complete the picture of business continuity and DR, such as global load balancing. Clients need this because, if they run a pay-per-click type of business or any sort of e-commerce operation, any downtime translates to lost revenue.
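
Global load balancing itself is usually handled by DNS services or dedicated appliances, but the decision behind it is simple health-check-driven failover. Below is a minimal Python sketch of that decision logic only; the site names and health-check URLs are hypothetical placeholders, not AIS's actual configuration.

    import urllib.request

    # Hypothetical health-check endpoints for a primary and a DR site.
    SITES = {
        "san-diego": "https://sd.example.com/health",
        "phoenix": "https://phx.example.com/health",
    }

    def healthy(url, timeout=3):
        """Return True if the site's health endpoint answers with HTTP 200."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status == 200
        except Exception:
            return False

    def pick_site(preferred="san-diego"):
        """Prefer the primary site; fail over to any other healthy site."""
        if healthy(SITES[preferred]):
            return preferred
        for name, url in SITES.items():
            if name != preferred and healthy(url):
                return name
        return None  # nothing answers -- the "smoking crater" scenario

    if __name__ == "__main__":
        print("Serving traffic from:", pick_site())

In a real deployment this check runs continuously in the load balancing layer, which then steers client traffic (typically via DNS answers) to whichever site is currently healthy.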

Now that the basic capabilities are in place, businesses can transfer their operations to the DR location and seamlessly redirect operations to the backup site. Whether it is full-scale operations or limited operations, at least they are in business and have some communication with their clients.

There are even a number of scenarios where having two locations is not enough. For instance, computer hackers and viruses are some of the most devastating threats to your data. These can destroy or corrupt your data to the point where you cannot trust it.

What you need to do is put your data some place safe – in a read-only repository – that conforms to best practices for storing your data. This often includes encrypting the data during transmission and encrypting the data while it is at rest in the archive storage space.
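
As an illustration, encrypting data on the client side before it ever leaves the building addresses both halves of that requirement. The sketch below is a minimal example only, assuming the third-party Python "cryptography" package; the file names are hypothetical and real key management (where the key lives, who can use it) is deliberately left out.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # in practice, keep this in a key manager, never beside the data
    cipher = Fernet(key)

    # Encrypt the file locally so the archive only ever sees ciphertext.
    with open("quarterly-results.db", "rb") as f:
        ciphertext = cipher.encrypt(f.read())

    with open("quarterly-results.db.enc", "wb") as f:
        f.write(ciphertext)

    # Sending the .enc file to the archive over HTTPS/TLS covers encryption in
    # transit; because only ciphertext is uploaded, the data also remains
    # encrypted while it sits in the archive storage space.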

There are a lot of other little pieces of security to consider, such as who can access your data, and making sure access control is independent of your own internal security systems. You probably want to de-identify the file names so no one can look at a URL and say, “That is the password file,” or “That is a list of social security numbers.”
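
One common way to de-identify file names is to replace them with a keyed hash before upload and keep the original names in a separate, access-controlled index. A minimal sketch of that idea follows; the secret and file names are illustrative only.

    import hashlib
    import hmac

    def deidentify(filename: str, secret: bytes) -> str:
        """Map a meaningful file name to an opaque identifier for the archive."""
        digest = hmac.new(secret, filename.encode("utf-8"), hashlib.sha256).hexdigest()
        return digest + ".bin"

    secret = b"per-tenant-secret"  # illustrative; store securely in practice
    print(deidentify("passwords.txt", secret))
    # The resulting URL exposes only a hex string, not what the file contains;
    # the name-to-hash mapping lives in a separate, access-controlled index.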

This is why we partnered with Nirvanix. They are local and within our network reach so we can get data very quickly and seamlessly into their cloud. In the cases where clients need to have more than one repository, they have that capability at the click of a button. You may now have three instances of that data, spread across the globe.

When clients need to know exactly where their data is, they can constrain the storage location. Back to basics: first, you have the front-line defenses. In your local data center you may have load balancing and redundancy built into your equipment.

Next you want to have your equipment in two places and have some sort of failover capability between sites. Then have your important data transmitted from one site to the other. In any case, you need to get your data offsite should something really bad happen – the smoking crater scenario – so you can rebuild everything. Granted, it may be a slightly outdated data set, but in most cases it is something you can use to recover your business.
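
At its simplest, getting that data offsite can be as basic as a scheduled incremental copy to the remote site. The sketch below is illustrative only (the paths are hypothetical, and a real deployment would use replication or backup software); it also shows why the recovered data set may be slightly outdated – the offsite copy is only as current as the last completed transfer.

    import os
    import shutil

    SOURCE = "/data/production"
    OFFSITE = "/mnt/phoenix-archive"   # e.g. a mount backed by the remote site

    def sync_changed_files(src: str, dst: str) -> None:
        """Copy files that are new or have changed since the last run."""
        for root, _dirs, files in os.walk(src):
            for name in files:
                s = os.path.join(root, name)
                d = os.path.join(dst, os.path.relpath(s, src))
                os.makedirs(os.path.dirname(d), exist_ok=True)
                if not os.path.exists(d) or os.path.getmtime(s) > os.path.getmtime(d):
                    shutil.copy2(s, d)

    sync_changed_files(SOURCE, OFFSITE)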

In Part II of this interview series with Steve Wallace, we talk about the different ways that companies may host their data with a cloud service provider and why archival/backup is a first step that many take.
