Rethinking Your Data Deduplication Strategy in the Software-defined Data Center

Data Centers Going Software-defined

There is little dispute that tomorrow’s data center will become software-defined, for reasons no one entirely anticipated even a few years ago. While companies have long understood the benefits of virtualizing their data center infrastructure, the complexities and costs of integrating and managing data center hardware often exceeded the benefits that virtualization delivered. Now, thanks to technologies such as the Internet of Things (IoT), machine intelligence, and analytics, among others, companies can pursue software-defined strategies more aggressively.

The introduction of technologies that can monitor, report on, analyze, and increasingly manage and optimize data center hardware frees organizations from performing housekeeping tasks such as:

  • Verifying hardware firmware compatibility with applications and operating systems
  • Troubleshooting hot spots in the infrastructure
  • Identifying and repairing failing hardware components

Automating these tasks does more than change how organizations manage their data center infrastructures. It reshapes how they can think about their entire IT strategy. Rather than adapting their business to the limitations of the hardware they choose, they can now pursue business objectives and expect their IT hardware infrastructure to support those initiatives.

This change in perspective has already led to the availability of software-defined compute, networking, and storage solutions. Further, software-defined applications such as databases, firewalls, and other applications that organizations commonly deploy have also emerged. These virtual appliances enable companies to quickly deploy entire application stacks. While it is premature to say that organizations can immediately virtualize their entire data center infrastructure, the foundation exists for them to do so.

Software-defined Storage Deduplication Targets

As they do, data protection software, like any other application, needs to be part of this software-defined conversation. In this regard, backup software finds itself well-positioned to capitalize on this trend. It can be installed on either physical or virtual machines (VMs) and already ships from many providers as a virtual appliance. But storage software that functions primarily as a deduplication storage target finds itself being boxed out of the broader software-defined conversation.

Software-defined storage (SDS) deduplication targets do exist, and their storage capabilities have increased significantly. By the end of 2018, a few of these software-defined virtual appliances scaled to support 100TB or more of capacity. But organizations must exercise caution when positioning these available solutions as the cornerstone of a broader software-defined deduplication storage target strategy.

This caution, in many cases, stems less from the technology itself and more from the vendors who provide these SDS deduplication target solutions. In every case save one, these solutions originate with providers whose focus is selling hardware.

Foundation for Software-defined Data Centers Being Laid Today

Companies are putting plans in place right now to build the data center of tomorrow. That data center will be a largely software-defined data center with solutions that span both on-premises and cloud environments. To achieve that end, companies need to select solutions that have a software-defined focus and that meet their current needs while positioning them for tomorrow’s requirements.

Most layers in the data center stack, including compute, networking, storage, and even applications, are already well down the road of transforming from hardware-centric to software-centric offerings. Yet in the face of this momentous shift in corporate data center environments, SDS deduplication target solutions have been slow to adapt.

It is this gap that SDS deduplication products such as Quest QoreStor look to fill. Coming from a company with “software” in its name, Quest comes without the hardware baggage that other SDS providers must balance. More importantly, Quest QoreStor offers a feature-rich set of services, ranging from deduplication to replication to support for all major cloud, hardware, and backup software platforms, that draws on 10 years of experience in delivering deduplication software.

Free to focus solely on delivering a software-defined data center (SDDC) solution, Quest QoreStor represents the type of SDS deduplication target that truly meets the needs of today’s enterprises while positioning them to realize the promise of tomorrow’s software-defined data center.

To read more of DCIG’s thoughts about using SDS deduplication targets in the software-defined data center of tomorrow, follow this link.




Key Differentiators between the HPE StoreOnce 5650 and Dell EMC Data Domain 9300 Systems

Companies have introduced a plethora of technologies into their core enterprise infrastructures in recent years, including all-flash arrays, cloud, hyper-converged infrastructures, object-based storage, and snapshots, just to name a few. But as they do, a few constants remain. One is the need to back up and recover all the data they create.

Deduplication appliances remain one of the primary means for companies to store this data for short-term recovery, disaster recoveries, and long-term data retention. To fulfill these various roles, companies often select either the HPE StoreOnce 5650 or the Dell EMC Data Domain 9300. (To obtain a complimentary DCIG report that compares these two products, follow this link.)

Their respective deduplication appliance lines share many features in common. They both perform inline deduplication. They both offer client software to do source-side deduplication, which reduces the data sent over the network and improves backup throughput rates. They both provide companies with the option to back up data over NAS or SAN interfaces.
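
Neither vendor publishes its deduplication internals in this report, so as a rough illustration of the general idea behind source-side deduplication (chunk the data, hash each chunk, send only chunks the target has not already seen), here is a minimal Python sketch. The chunk size, SHA-256 hashing, and in-memory index are illustrative assumptions, not either product’s implementation.

```python
import hashlib

CHUNK_SIZE = 4 * 1024 * 1024  # 4 MiB fixed chunks; real products tune this, often with variable-size chunking

def backup_file(path, chunk_index, store):
    """Send only chunks the target has not already stored (source-side deduplication)."""
    sent = skipped = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            digest = hashlib.sha256(chunk).hexdigest()
            if digest in chunk_index:
                skipped += 1               # duplicate chunk: only a reference would cross the network
            else:
                chunk_index.add(digest)    # new chunk: store it and send the data
                store.append((digest, chunk))
                sent += 1
    return sent, skipped

# Usage: the index persists across backups, so unchanged data is never resent.
index, store = set(), []
print(backup_file("/etc/hosts", index, store))   # example path; first run sends every chunk
print(backup_file("/etc/hosts", index, store))   # second run sends nothing
```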

Despite these similarities, key areas of differentiation between these two product lines remain, including the following:

  1. Cloud support. Every company either uses or anticipates using a hybrid cloud configuration as part of its production operations. These two product lines differ in their levels of cloud support.
  2. Deduplication technology. Data Domain was arguably the first to popularize widespread use of deduplication for backup. Since then, others, such as the HPE StoreOnce 5650, have come on the scene to compete head-to-head with Data Domain appliances.
  3. Breadth of application integration. Software plug-ins that work with applications and understand their data formats prior to deduplicating the data provide tremendous benefits as they improve data reduction rates and decrease the amount of data sent over the network during backups. The software that accompanies the appliances from these two providers has varying degrees of integration with leading enterprise applications.
  4. Licensing. The usefulness of any product hinges on the features it offers, their viability, and which ones are available to use. Clear distinctions between the HPE StoreOnce and Dell EMC Data Domain solutions exist in this area.
  5. Replication. Copying data off-site for disaster recovery and long-term data retention is paramount in a comprehensive enterprise disaster recovery strategy. Products from each of these providers offer this capability, but they differ in the number of replication features they provide.
  6. Virtual appliance. As more companies adopt software-defined data center strategies, virtual appliances have increased appeal.

In the latest DCIG Pocket Analyst Report, DCIG compares the HPE StoreOnce 5650 and Dell EMC Data Domain 9300 product lines and examines how well each product fares in its support of these six areas, looking at nearly 100 features to draw its conclusions. This report is currently available at no charge for a limited time on DCIG’s partner website, TechTrove. To receive complimentary access to this report, complete the registration form that you can find at this link.




NVMe: Four Key Trends Set to Drive Its Adoption in 2019 and Beyond

Storage vendors hype NVMe for good reason. It enables all-flash arrays (AFAs) to fully deliver on flash’s performance characteristics. Already, NVMe serves as an interconnect between AFA controllers and their back-end solid state drives (SSDs) to help these AFAs unlock more of the performance that flash offers. However, the real benefits that NVMe can deliver will be unlocked by four key trends set to converge in the 2019/2020 time period. Combined, these will open the door for many more companies to experience the full breadth of performance benefits that NVMe provides, for a much wider swath of applications running in their environments.

Many individuals have heard about the performance benefits of NVMe. Using it, companies can reduce latency, with response times measured in a few hundred microseconds or less. Further, applications can leverage the many more parallel queues that NVMe offers to drive throughput to hundreds of GB per second and achieve millions of IOPS. These types of performance characteristics have many companies eagerly anticipating NVMe’s widespread availability.
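
To make those numbers concrete, here is a back-of-the-envelope sketch using Little’s Law (IOPS is roughly outstanding I/Os divided by latency). The 200-microsecond latency, queue depths, and 4 KiB block size are illustrative assumptions, not measurements of any particular array; the point is that at a fixed latency, only the deep, parallel queues NVMe provides get you to millions of IOPS.

```python
def estimate_iops(latency_us: float, outstanding_ios: int) -> float:
    """Little's Law: concurrency = rate x latency, so IOPS ~= outstanding I/Os / latency (in seconds)."""
    return outstanding_ios / (latency_us / 1_000_000)

def estimate_throughput_gbps(iops: float, block_size_kib: int) -> float:
    """Approximate throughput in GB/s for a given block size."""
    return iops * block_size_kib * 1024 / 1e9

# Illustrative numbers only: 200 us end-to-end latency, 4 KiB blocks.
for queue_depth in (1, 32, 1024):
    iops = estimate_iops(200, queue_depth)
    print(f"QD {queue_depth:>5}: ~{iops:,.0f} IOPS, ~{estimate_throughput_gbps(iops, 4):.2f} GB/s")
```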

To date, however, few companies have experienced the full breadth of performance characteristics that NVMe offers. This stems from:

  • The lack of AFAs on the market that fully support NVMe (only about 20 percent do);
  • The relatively small performance improvements that NVMe offers over existing SAS-attached SSDs; and
  • The high level of difficulty and cost associated with deploying NVMe in existing data centers.

This is poised to change in the next 12-24 months as four key trends converge to open up NVMe to a much wider audience.

  1. Large storage vendors getting ready to enter the NVMe market. AFA providers such as Tegile (Western Digital), iXsystems, Huawei, Lenovo, and others ship products that support NVMe. These vendors represent the leading edge of where NVMe innovation has occurred. However, their share of the storage market remains relatively small compared to providers such as Dell EMC, HPE, IBM, and NetApp. As these large storage providers enter the market with AFAs that support NVMe, expect market acceptance and adoption of NVMe to take off.
  2. The availability of native NVMe drivers on all major operating systems. The only two major enterprise operating systems that currently have native NVMe drivers are Linux and VMware. However, until Microsoft and, to a lesser degree, Solaris offer native NVMe drivers, many companies will have to hold off on deploying NVMe in their environments. The good news is that all these major OS providers are actively working on NVMe drivers. Further, expect that the availability of these drivers will closely coincide with the availability of NVMe AFAs from the major storage providers and the release of the NVMe-oF TCP standard.
  3. NVMe-oF TCP protocol standard set to be finalized before the end of 2018. Connecting the AFA controller to its back-end SSDs via NVMe is only one half – and the much easier part – of solving the performance problem. The much larger and more difficult problem is easily connecting hosts to AFAs over existing storage networks, as NVMe-oF is currently difficult to set up and scale. The establishment of the NVMe-oF TCP standard will remedy this and facilitate the introduction and use of NVMe-oF between hosts and AFAs using TCP/IP over existing Ethernet storage networks.
  4. The general availability of NVMe-oF TCP offload cards. To realize the full performance benefits of NVMe-oF using TCP, companies are advised to use NVMe-oF TCP offload cards. Using standard Ethernet cards with no offload engine, companies will still see high throughput but very high CPU utilization (up to 50 percent). Using the forthcoming NVMe-oF TCP offload cards, performance increases by anywhere from 33 to 150 percent versus native TCP cards while only introducing nominal amounts of latency (single- to double-digit microseconds).

The business need for NVMe technology is real. While today’s all-flash arrays have tremendously accelerated application performance, NVMe stands poised to unleash another round of up to 10x or more performance improvements. But to do that, a mix of technologies, standards, and programming changes to existing operating systems must converge for mass adoption in enterprises to occur. This combination of events seems poised to happen in the next 12-24 months.




20 Years in the Making, the Future of Data Management Has Arrived

Mention data management to almost any seasoned IT professional and they will almost immediately greet the term with skepticism. While organizations have found they can manage their data within certain limits, when they remove those boundaries and attempt to do so at scale, those initiatives have historically fallen far short if not outright failed. It is time for that perception to change. 20 years in the making, Commvault Activate puts organizations in a position to finally manage their data at scale.

Those who work in IT are loath to say any feat in technology is impossible. If one looks at the capabilities of any handheld device, one can understand why they have this belief. People can pinpoint exactly where they are almost anywhere in the world to within a few feet. They can take videos, pictures, check the status of their infrastructure, text, … you name it, handheld devices can do it.

By way of example, as I write this I have just watched YY Lee, SVP and Chief Strategy Officer of Anaplan, onstage at Commvault GO. She explained how systems using artificial intelligence (AI) were able, within a very short time, sometimes days, to become experts at games such as Texas Hold’em and beat the best players in the world at them.

Despite advances such as these in technology, data management continues to bedevil large and small organizations alike. Sure, organizations may have some level of data management in place for certain applications (think email, file servers, or databases), but when it comes to identifying and leveraging a tool to deploy data management across an enterprise at scale, that tool has, to date, eluded organizations. This often includes the technology firms responsible for producing so much of the hardware that stores this data and the software that produces it.

The end for this vexing enterprise challenge finally came into view with Commvault’s announcement of Activate. What makes Activate different from other products that promise to provide data management at scale is that Commvault began development on this product 20 years ago in 1998.

During that time, Commvault became proficient in:

  • Archiving
  • Backup
  • Replication
  • Snapshots
  • Indexing data
  • Supporting multiple different operating systems and file systems
  • Gathering and managing metadata

Perhaps most importantly, it established relationships and gained a foothold in enterprise organizations around the globe. This alone is what differentiates it from almost every other provider of data management software. Commvault has 20+ years of visibility into the behavior and requirements of protecting, moving, and migrating data in enterprise organizations. This insight becomes invaluable when viewed in the context of enterprise data management which has been Commvault’s end game since its inception.

Activate builds on Commvault’s 20 years of product development, with its main differentiator being its ability to stand apart from other Commvault software. In other words, companies do not first have to deploy Commvault’s Complete Backup and Recovery or any of its other software to utilize Activate.

They can deploy Activate regardless of whatever other backup, replication, or snapshot software products they may have. And because Activate draws from the same code base as the rest of Commvault’s software, companies can deploy it with a great deal of confidence in its stability.

Once deployed, Activate scans and indexes the data across the company’s environment, which can include its archives, backups, file servers, and/or data stored in the cloud. Once the data is indexed, companies can assess it in anticipation of next steps such as preparing for eDiscovery, remediating data privacy risks, and analyzing data based on their own criteria.
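
Commvault has not published Activate’s internals, so purely to illustrate the kind of criteria-based assessment described above, here is a minimal, hypothetical sketch that walks a directory tree and flags files matching a user-supplied pattern (a naive email-address regex in this example). The paths and pattern are assumptions, not anything Activate actually does under the hood.

```python
import re
from pathlib import Path

# Example criterion: files that appear to contain email addresses (deliberately naive pattern).
EMAIL_RE = re.compile(rb"[\w.+-]+@[\w-]+\.[\w.]+")

def assess(root: str, pattern: re.Pattern, max_bytes: int = 1_000_000):
    """Yield (path, match_count) for files under root whose first max_bytes match the pattern."""
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            with open(path, "rb") as f:
                hits = len(pattern.findall(f.read(max_bytes)))
        except OSError:
            continue  # unreadable file; a real tool would log and report this
        if hits:
            yield path, hits

# Usage against an example directory.
for path, hits in assess("/tmp", EMAIL_RE):
    print(f"{path}: {hits} possible email address(es)")
```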

Today, more than ever, companies recognize they need to manage their data across the entirety of their enterprise. Delivering on this requires a tool appropriately equipped and sufficiently mature to meet enterprise requirements. Commvault Activate answers this call as a software product 20 years in the making that provides enterprises with the foundation they need to manage their data going forward.




Purpose-built Backup Cloud Service Providers: A Practical Starting Point for Cloud Backup and DR

The shift is on toward using cloud service providers for an increasing number of production IT functions, with backup and DR often at the top of the list of tasks that companies first want to deploy in the cloud. But as IT staff seek to “check the box” of complying with corporate directives to have a cloud solution in place for backup and DR, they also need to simultaneously check the “Simplicity,” “Cost Savings,” and “It Works” boxes.

Benefits of Using a Purpose-built Backup Cloud Service Provider

Cloud service providers purpose-built for backup and DR put companies in the best position to check all those boxes for this initial cloud use case. These providers solve their clients’ immediate challenges of easily and cost-effectively moving backup data off-site and then retaining it long term.

Equally important, addressing the off-site backup challenge in this way positions companies to do data recovery with the purpose-built cloud service provider, a general purpose cloud service provider, or on-premises. It also sets the stage for companies to regularly test their disaster recovery capabilities so they can perform them when necessary.

Choosing a Purpose-built Backup Cloud Service Provider

To choose the right cloud service provider for corporate backup and DR requirements, companies want to be cost-conscious. But they also want to experience success and not put their corporate data or their broader cloud strategy at risk. A purpose-built cloud service provider such as Unitrends, with its Forever Cloud solution, frees companies to aggressively and confidently move ahead with a cloud deployment for their backup and DR needs.


Join Our Webinar for a Deeper Look at the Benefits of Using a Purpose-built Backup Cloud Service Provider

Join me next Wednesday, October 17, 2018, at 2 pm EST, for a webinar where I take a deeper look at both purpose-built and general purpose cloud service providers. In this webinar, I examine why service providers purpose-built for cloud backup and DR can save companies both money and time while dramatically improving the odds they can succeed in their cloud backup and DR initiatives. You can register by following this link.

 




HPE Expands Its Big Tent for Enterprise Data Protection

When it comes to the mix of data protection challenges that exist within enterprises today, these companies would love to identify a single product that they can deploy to solve all their challenges. I hate to be the bearer of bad news, but that single-product solution does not yet exist. That said, enterprises will find a steadily improving ecosystem of products that increasingly work well together to address this challenge, with HPE at the forefront of putting up a big tent that brings these products together and delivers them as a single solution.

Having largely solved their backup problems at scale, enterprises have new freedom to analyze and address their broader enterprise data protection challenges. As they look to bring long-term data retention, data archiving, and multiple types of recovery (single applications, site failovers, disaster recoveries, and others) under one big tent for data protection, they find they often need to deploy multiple products.

Each of these products addresses specific pain points that enterprises have. However, multiple products equate to multiple management interfaces, each with its own administrative policies and with minimal or no integration between them. That creates a thornier problem – enterprises are left to manage and coordinate the hand-off of the protection and recovery of data between these individual data protection products.

A few years ago, HPE started to build a “big tent” to tackle these enterprise data protection and recovery issues. It laid the foundation with its HPE 3PAR StoreServ storage arrays, StoreOnce deduplication storage systems, and Recovery Manager Central (RMC) software to help companies coordinate and centrally manage:

  • Snapshots on 3PAR StoreServ arrays
  • Replication between 3PAR StoreServ arrays
  • The efficient movement of data between 3PAR and StoreOnce systems for backup, long term retention, and fast recoveries

This week HPE expanded its big tent of data protection to give companies more flexibility to protect and recover their data more broadly across their enterprise. It did so in the following ways:

  • HPE RMC 6.0 can directly recover data to HPE Nimble storage arrays. Recoveries from backups can be a multi-step process that may require data to pass through the backup software and the application server before it lands on the target storage array. Beginning December 2018, companies can use RMC to directly recover data to HPE Nimble storage arrays from an HPE StoreOnce system without going through the traditional recovery process, just as they can already do with HPE 3PAR StoreServ storage arrays.
  • HPE StoreOnce can directly send and retrieve deduplicated data from multiple cloud providers. Companies sometimes fail to consider that general purpose cloud service providers such as Amazon Web Services (AWS) or Microsoft Azure make no provisions to optimize the data stored with them, such as by deduplicating it. Using HPE StoreOnce’s new direct support for AWS, Azure, and Scality, companies can use StoreOnce to first compress and deduplicate data before they store it in the cloud.
  • Integration between Commvault and HPE StoreOnce systems. Out of the gate, companies can use Commvault to manage StoreOnce operations such as replicating data between StoreOnce systems as well as moving data directly from StoreOnce systems to the cloud. Moreover, as the relationship between Commvault and HPE matures, companies will also be able to use HPE StoreOnce Catalyst, HPE’s client-based deduplication software agent, in conjunction with Commvault to back up data on server clients whose data may not reside on HPE 3PAR or Nimble storage. Using the HPE StoreOnce Catalyst software, Commvault will deduplicate data on the source before sending it to an HPE StoreOnce system.

Source: HPE

Of the three announcements that HPE made this week, the new relationship with Commvault, which accompanies its pre-existing relationships with Micro Focus (formerly HPE Data Protector) and Veritas, best demonstrates HPE’s commitment to helping enterprises build a big tent for their data protection and recovery initiatives. Storing data on HPE 3PAR and Nimble arrays and using RMC to manage backups and recoveries on StoreOnce systems certainly accelerates and simplifies these functions when companies can do so. But by working with these other partners, HPE shows it recognizes that companies will not store all their data on its systems, and that it will accommodate those companies so they can create a single, larger data protection and recovery solution for their enterprise.




Two Hot Technologies to Consider for Your 2019 Budgets

Hard to believe, but the first day of autumn is just two days away, and with fall weather always come cooler temperatures (which I happen to enjoy!). This means people are staying inside a little more and doing those fun, end-of-year activities that everyone enjoys – such as planning their 2019 budgets. As you do so, solutions from BackupAssist and StorMagic are two hot new technologies for companies to consider making room for in the New Year.

BackupAssist 365

BackupAssist 365 backs up files and emails stored in the cloud. While backup of cloud-based data may seem rather ho-hum in today’s AI-driven, blockchain-obsessed, digital-transformation-focused world, it solves a real-world problem that organizations of nearly every size face: how to cost-effectively and simply protect all those pesky files and emails that people store in cloud applications such as Dropbox, Office 365, Google Drive, OneDrive, Gmail, Outlook, and others.

To do so, BackupAssist 365 adopted two innovative yet practical approaches to protect files and emails.

  • First, it interfaces directly with these various cloud providers to back up this data. Using your login permissions (which you provide when configuring the software), BackupAssist 365 accesses data directly in the cloud. This negates the need for your server, PC, or laptop to be turned on when these backups occur, so backups can occur at any time.
  • Second, it does cloud-to-local backup. In other words, rather than running up the additional data transfer and network costs that come with backing up to another cloud, it backs the data up to local storage at your site. While that may seem a little odd in today’s cloud-centric world, companies can get a great deal of storage capacity for nominal amounts of money. Since it only does an initial full backup and then differential backups thereafter, the ongoing data transfer costs are nominal and the amount of storage capacity needed onsite is equally small (the sketch after this list illustrates the general full-plus-differential pattern).
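
BackupAssist has not published its scheduling internals here, so as a generic sketch of the full-plus-differential pattern described above, the following selects the files a differential run would copy to local storage based on modification time since the last full backup. The directory path and the seven-day interval are illustrative assumptions.

```python
import os
import time
from pathlib import Path

def differential_candidates(root: str, last_full_epoch: float):
    """Return files modified since the last full backup; only these move over the network."""
    changed, total = [], 0
    for path in Path(root).rglob("*"):
        if path.is_file():
            total += 1
            if path.stat().st_mtime > last_full_epoch:
                changed.append(path)
    return changed, total

# Illustrative use: pretend the last full backup ran 7 days ago.
last_full = time.time() - 7 * 24 * 3600
changed, total = differential_candidates(os.path.expanduser("~/Documents"), last_full)
print(f"differential would copy {len(changed)} of {total} files")
```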

Perhaps the best part about BackupAssist 365 is its cost (or lack thereof). BackupAssist 365 licenses its software on a per-user basis, with each user email account counting as one user license. However, this one license covers the backup of that user’s data in any cloud service the user uses. Further, the cost is only $1/month per user, with costs decreasing for greater numbers of users. In fact, the cost is so low on a per-user basis that companies may not even need to budget for this service. They can just start using it and expense it to their credit cards to keep it below corporate radar screens.

StorMagic SvSAN

The StorMagic SvSAN touches on two more hot technology trends that I purposely (or not so purposely) left out above: hyperconverged infrastructure (HCI) and edge computing. However, unlike many of the HCI and edge computing plays in the marketplace, such as Cisco HyperFlex, Dell EMC VxRail, and Nutanix, StorMagic has not forgotten about the cost constraints that branch, remote, and small offices face.

As Cisco, Dell EMC, Nutanix and others chase the large enterprise data center opportunities, they often leave remote, branch, and small offices with two choices: pay up or find another solution. Many of these size offices are opting to find alternative solutions.

This is where StorMagic primarily plays. For a less well-known player, they play much bigger than they may first appear. Through partnerships with large providers such as Cisco and Lenovo among others, StorMagic comes to market with highly available, two-server systems that scale across dozens, hundreds, or even thousands of remote sites. To get a sense of StorMagic’s scalability, walk into any of the 2,000+ Home Depots in the United States or Mexico and ask to look at the computer system that hosts their compute and storage. If the Home Depot lets you and you can find it, you will find a StorMagic system running somewhere in the store.

The other big challenge that each StorMagic system addresses is security. Because its systems can be deployed almost anywhere, in any environment, they are susceptible to theft. In fact, one of its representatives shared a story where someone drove a forklift through the side of a building and stole a computer system at one of its customer sites. Not that it mattered: to counter these types of threats, StorMagic encrypts all the data on its HCI solutions with its own FIPS 140-2 compliant software.

Best of all, to get these capabilities, companies do not have to break the bank to acquire one of these systems. The list price for the Standard Edition of the SvSAN software, which includes 2TB of usable storage, high availability, and remote management, is $2,500.

As companies look ahead and plan their 2019 budgets, they need to take care of their operational requirements but they may also want to dip their toes in the water to get the latest and greatest technologies. These two technologies give companies the opportunities to do both. Using BackupAssist 365, companies can quickly and easily address their pesky cloud file and email backup challenges while StorMagic gives them the opportunity to affordably and safely explore the HCI and edge computing waters.




Analytics, Automation and Hybrid Clouds among the Key Takeaways from VMworld 2018

At early VMworld shows, stories emerged of attendees scurrying from booth to booth on the exhibit floor looking for VM data protection and hardware solutions to address the early challenges that VMware ESXi presented. Fast forward to the 2018 VMworld show, and the motivation behind attending training sessions and visiting vendor booths has changed significantly. Now attendees want solutions that bring together their private and public clouds, offer better ways to analyze and automate their virtualized environments, and deliver demonstrable cost savings and/or revenue opportunities after deploying them.

The entrance to the VMworld 2018 exhibit hall greeted attendees a little differently this year than in years past. Granted, there were still some of the usual suspects, such as Dell EMC and HPE, that have reserved booths at this show for many years. But right alongside them were relative newcomers (to the VMworld show, anyway) such as Amazon Web Services and OVHcloud.

Then, as one traversed the exhibit hall floor and visited the booths immediately behind them, the data protection and hardware themes of the early VMworld shows persisted, though the messaging and many of the vendor names have changed since the early days of this show.

Companies such as Cohesity, Druva, and Rubrik represent the next generation of data protection solutions for vSphere while companies such as Intel and Trend Micro have a more pronounced presence on the VMworld show floor. Together these exhibitors reflect the changing dynamics of what is occurring in today’s data centers and what the current generation of organizations are looking for vendors to provide for their increasingly virtualized environments. Consider:

  1. Private and public cloud are coming together to become hybrid. The theme of hybrid clouds with applications that can span both public and private clouds began with VMworld’s opening keynote announcing the availability of Amazon Relational Database Service (Amazon RDS) on VMware. Available in the coming months, this functionality will free organizations to automate the setup of Microsoft SQL Server, Oracle, PostgreSQL, MariaDB and MySQL databases in their traditional VMware environments and then migrate them to the AWS cloud. Those interested in trying out this new service can register here for a preview.
  2. Analytics will pave the way for increasing levels of automation. As organizations of all sizes adopt hybrid environments, the only way they can effectively manage them at scale is to automate their management. This begins with the use of analytics tools that capture the data points coming in from the underlying hardware, the operating systems, the applications, the public clouds to which they attach, the databases, the devices that feed them data, and more.

Evidence of the growing presence of the analytics tools that enable this automation was everywhere at VMworld. One good example is Runecast, which analyzes the logs of these environments and also scours blogs, white papers, forums, and other online sources for best practices to advise companies on how best to configure their environments. Another is Login VSI, which does performance benchmarking and forecasting to anticipate how VDI patches and upgrades will impact the current infrastructure.

  3. The cost savings and revenue opportunities for these hybrid environments promise to be staggering. One of the more compelling segments in one of the keynotes covered the savings that many companies initially achieved deploying vSphere. Below is one graphic that appeared at the 8:23 mark in this video of the second day’s keynote, where a company reduced its spend on utility charges by over $60,000 per month, an 84% reduction in cost. Granted, this example was for illustration purposes, but it seemed in line with other stories I have heard anecdotally.

Source: VMware

But as companies move into this hybrid world that combines private and public clouds, the value proposition changes. While companies may still see cost savings going forward, it is more likely that they will realize new opportunities that were simply not possible before. For instance, they may deliver automated disaster recoveries and high availability for many more, or even all, of their applications. Alternatively, they will be able to bring new products and services to market much more quickly, or perform analysis that simply could not have been done before, because they now have access to resources that were previously unavailable to them in a cost-effective or timely manner.




DCIG’s VMworld 2018 Best of Show

VMworld provides insight into some of the biggest tech trends occurring in the enterprise data center space and, once again, this year did not disappoint. But amid the literally dozens of vendors showing off their wares on the show floor, here are the four stories or products that caught my attention and earned my “VMworld 2018 Best of Show” recognition at this year’s event.

VMworld 2018 Best of Show #1: QoreStor Software-defined Secondary Storage

Software-defined dedupe in the form of QoreStor has arrived. A few years ago, Dell Technologies sold off its Dell Software division, which included an assortment (actually a lot) of software products; that division re-emerged as Quest Software. Since then, Quest Software has done nothing but innovate like bats out of hell. One of the products coming out of this lab of mad scientists is QoreStor, a stand-alone software-defined secondary storage solution. QoreStor installs on any server hardware platform, works with any backup software, and is available as a free download that will deduplicate up to 1TB of data at no charge. While at least one other vendor (Dell Technologies) offers a deduplication product that is available as a virtual appliance, only Quest QoreStor gives organizations the flexibility to install it on any hardware platform.

VMworld 2018 Best of Show #2: LoginVSI Load Testing

Stop guessing on the impact of VDI growth with LoginVSI. Right-sizing the hardware for a new VDI deployment from any VDI provider (Citrix, Microsoft, VMware, etc.) is hard enough. But keeping the hardware appropriately sized as more VDI instances are added, or as patches and upgrades are made to existing instances, can be dicey at best and downright impossible at worst.

LoginVSI helps take the guesswork out of any organization’s future VDI growth by:

  • examining the hardware configuration underlying the current VDI environment,
  • comparing that to the changes proposed for the environment, and then
  • making recommendations as to what changes, if any, need to be made to the environment to support
    • more VDI instances or
    • changes to existing VDI instances.

VMworld 2018 Best of Show #3: Runecast vSphere Environment Analyzer

Plug vSphere’s holes and keep it stable with Runecast. Runecast was formed by a group of former IBM engineers who over the years had become experts at two things: (1) Installing and configuring VMware vSphere; and (2) Pulling out their hair trying to keep each installation of VMware vSphere stable and up-to-date. To rectify this second issue, they left IBM and formed Runecast.

Runecast provides organizations with the software and tools they need to proactively:

  • identify needed vSphere patches and fixes,
  • identify security holes, and
  • recommend best practices for tuning vSphere deployments,

all with minimal to no manual intervention.

VMworld 2018 Best of Show #4: VxRail Hyper-converged Infrastructure

Kudos to the Dell Technologies VxRail solution for saving every man, woman, and child in the United States $2 per person in 2018. I was part of a breakfast conversation on Wednesday morning with one of the Dell Technologies VxRail product managers. He anecdotally shared with the analysts present that Dell Technologies recently completed VxRail hyper-converged infrastructure deployments at two US government agencies. Each deployment is saving its agency $350 million annually. Collectively that amounts to $700 million, or about $2 per person for every person residing in the US. Thank you, Dell.




CloudShift Puts Flawless DR on Corporate Radar Screens

Hyper-converged Infrastructure (HCI) solutions excel at delivering on many of the attributes commonly associated with private clouds. Consequently, the concepts of hyper-convergence and private clouds have become, in many respects, almost inextricably linked.

But for an HCI solution to not have a clear path forward for public cloud support … well, that’s almost anathema in the increasingly hybrid cloud environments found in today’s enterprises. That’s what makes this week’s CloudShift announcement from Datrium notable – it begins to clarify how Datrium will go beyond backup to the public cloud as part of its DVX solution, and it puts the concept of flawless DR on corporate radar screens.

Source: Datrium

HCI is transforming how organizations manage their on-premise infrastructure. By combining compute, data protection, networking, storage, and server virtualization into a single pre-integrated solution, HCI solutions eliminate many of the headaches associated with traditional IT infrastructures while delivering the “cloud-like” speed and ease of deployment that enterprises want.

However, enterprises increasingly want more than “cloud-like” abilities from their on-premise HCI solution. They also want the flexibility to move the virtual machines (VMs) they host on their HCI solution into public cloud environments when needed. Whether they run disaster recovery (DR) tests, perform an actual DR, or need to move a specific high-throughput workload into the public cloud, having the flexibility to move VMs into and out of the cloud as needed is highly desirable.

Datrium answers the call for public cloud integration with its recent CloudShift announcement. However, Datrium did not just come out with a me-too answer for public clouds by announcing it will support the AWS cloud. Rather, it delivered what most enterprises are looking for at this stage in their journey to a hybrid cloud environment: a means to seamlessly incorporate the cloud into their overall DR strategy.

The goals behind its CloudShift announcement are three-fold:

  1. Build on the existing Datrium DVX platform that already manages the primary copy of data as well as its backups. With the forthcoming availability of CloudShift in the first half of 2019, it will complete the primary-to-backup-to-cloud circle that companies want.
  2. Make DR work flawlessly. If there are two words that together often represent an oxymoron, they are “flawless DR”. By bringing primary, backup, and cloud together and managing them as one holistic piece, companies can begin, someday soon (ideally in this lifetime), to view flawless DR as the norm instead of the exception.
  3. Orchestrated DR failover and failback. “DR failover and failback” just rolls off the tongue – it is simple to say and everyone understands what it means. But executing a successful DR failover and failback in today’s world tends to get very pricey and very complex. By rolling the management of primary, backup, and cloud under one roof and then continually performing compliance checks on the execution environment to ensure it meets the RPO and RTO of the DR plan (the sketch after this list illustrates the general idea), Datrium gives companies a higher degree of confidence that DR failovers and failbacks only occur when they are supposed to and that, when they occur, they will succeed.
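
Datrium has not detailed how CloudShift’s compliance checks are implemented, so as a generic sketch of the idea in item 3, the following compares the age of each VM’s newest replicated recovery point against an RPO target and flags violations. The VM names, recovery-point ages, and one-hour RPO are made-up values.

```python
from datetime import datetime, timedelta, timezone

def rpo_violations(recovery_points: dict[str, datetime], rpo: timedelta):
    """Return VMs whose newest replicated recovery point is older than the RPO target."""
    now = datetime.now(timezone.utc)
    return {vm: now - latest for vm, latest in recovery_points.items() if now - latest > rpo}

# Hypothetical inventory: newest recovery point per VM, checked against a 1-hour RPO.
now = datetime.now(timezone.utc)
points = {
    "erp-db":   now - timedelta(minutes=20),
    "web-tier": now - timedelta(hours=3),    # stale: would fail a DR compliance check
}
for vm, age in rpo_violations(points, timedelta(hours=1)).items():
    print(f"RPO violation: {vm} last protected {age} ago")
```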

Despite many technology advancements in recent years, enterprise-wide, turnkey DR capabilities with orchestrated failover and failback between on-premises and the cloud are still largely the domain of high-end enterprises that have the expertise to pull it off and are willing to commit large amounts of money to establish and maintain a (hopefully) functional DR capability. Datrium’s CloudShift announcement puts the industry on notice that reliable, flawless DR that will meet the budget and demands of a larger number of enterprises is on its way.




Orchestrated Backup IN the Cloud Arrives with HYCU for GCP

Companies are either moving or have moved to the cloud with backup TO the cloud being one of the primary ways they plan to get their data and applications into the cloud. But orchestrating the backup of their applications and data once they reside IN the cloud… well, that requires an entirely different set of tools with few, if any, backup providers yet offering features in their respective products that deliver on this requirement. That ends today with the introduction of HYCU for GCP (Google Cloud Platform).

Listen to the podcast associated with this blog entry.

Regardless of which public cloud platform a company uses to host its data and/or applications, be it Amazon Web Services (AWS), Microsoft Azure, GCP, or some other platform, they all provide multiple native backup utilities to protect data that resides on their cloud. The primary tools include the likes of snapshots, replication, and versioning, with GCP being no different.

What makes these tools even more appealing is that they are available at a cloud user’s fingertips; they can be turned on with the click of a button; and users only pay for what they use. Available for any data or applications hosted with the cloud provider, these tools give organizations access to levels of data availability, data protection, and even disaster recovery that they previously had no easy means to deliver.

But the problem in this scenario is not application and/or data backup. The catch is how an organization does this at scale, in such a way that it can orchestrate and manage the backups of all its applications and data on a cloud platform such as GCP for all its users. The short answer is: on its own, it cannot – at least not without building and maintaining custom scripting along the lines of the sketch below.
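
To make the scale problem concrete, here is a naive do-it-yourself sketch (not HYCU’s integration) that loops over every persistent disk in a GCP project and requests a snapshot through the gcloud CLI. It assumes an authenticated gcloud installation, and the project ID is a placeholder; everything HYCU adds on top (scheduling, retention, monitoring, recovery orchestration) would still have to be built and maintained around a script like this.

```python
import subprocess
from datetime import datetime, timezone

PROJECT = "my-gcp-project"   # hypothetical project ID

def gcloud(*args) -> str:
    """Run a gcloud command against the project and return its stdout."""
    return subprocess.run(["gcloud", *args, f"--project={PROJECT}"],
                          check=True, capture_output=True, text=True).stdout

def snapshot_all_disks():
    """Request a snapshot of every persistent disk in the project, one disk at a time."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M")
    disks = gcloud("compute", "disks", "list", "--format=value(name,zone.basename())")
    for line in disks.splitlines():
        name, zone = line.split()
        gcloud("compute", "disks", "snapshot", name,
               f"--zone={zone}", f"--snapshot-names={name}-{stamp}")
        print(f"requested snapshot {name}-{stamp} in {zone}")

if __name__ == "__main__":
    snapshot_all_disks()
```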

This is a problem that HYCU for GCP addresses head-on. HYCU has previously established a beachhead in Nutanix environments thanks to its tight integration with AHV. This integration well positions HYCU to extend those same benefits to any public cloud partner of Nutanix. The fact that Nutanix and Google announced a strategic alliance last year at the Nutanix .NEXT conference to build and operate hybrid clouds certainly helped HYCU prioritize GCP over the other public cloud providers for backup orchestration.

Leveraging HYCU in the GCP, companies immediately gain three benefits:

  1. Subscribe to HYCU directly from the GCP Marketplace. Rather than having to first acquire HYCU separately and then install it in GCP, companies can buy it in the GCP Marketplace. This accelerates and simplifies HYCU’s deployment while simultaneously giving companies access to a corporate-grade backup solution that orchestrates and protects VMs in GCP.
  2. Take advantage of the native backup features in GCP. GCP has its own native snapshots that can be used for backup and recovery; HYCU capitalizes on them and puts them at the fingertips of admins, who can then manage and orchestrate backups and recoveries for all corporate VMs residing in GCP.
  3. Confidently expand their deployment of applications and data in GCP. While GCP obviously has the tools to back up and recover data and applications, managing them at scale was going to be, at best, cumbersome and, at worst, impossible. HYCU for GCP frees companies to more aggressively deploy applications and data at scale in GCP, knowing that they can centrally manage their protection and recovery.

Backup TO the cloud is great, and almost every backup provider offers that functionality. But backup IN the cloud, where the backup and recovery of a company’s applications and data in the cloud is centrally managed… now, that is something that stands apart from the competition. Thanks to HYCU for GCP, companies no longer have to deploy data and applications in the Google Cloud Platform in a way that requires each of their users or admins to assume backup and recovery responsibilities. Instead, companies can deploy knowing they have a tool in place that centrally manages their backups and recoveries.




Differentiating between High-end HCI and Standard HCI Architectures

Hyper-converged infrastructure (HCI) architectures have staked an early claim as the data center architecture of the future. The ability of HCI solutions to easily scale capacity and performance, simplify ongoing management, and deliver cloud-like features has led many enterprises to view HCI as a logical path forward for building on-premise clouds.

But when organizations deploy HCI at scale (more than eight nodes in a single logical configuration), cracks in its architecture can begin to appear unless organizations carefully examine how well it scales. On the surface, high-end and standard hyper-converged architectures deliver the same benefits. They each virtualize compute, memory, storage networking, and data storage in a simple-to-deploy-and-manage scale-out architecture. They support standard hypervisor platforms. They provide their own data protection solutions in the form of snapshots and replication. In short, these two architectures mirror each other in many ways.

However, high-end and standard HCI solutions differ in functionality in ways that primarily surface in large virtualized deployments. In these environments, enterprises often need to:

  • Host thousands of VMs in a single, logical environment.
  • Provide guaranteed, sustainable performance for each VM by insulating them from one another.
  • Offer connectivity to multiple public cloud providers for archive and/or backup.
  • Seamlessly move VMs to and from the cloud.

It is in these circumstances that differences between high-end and standard architectures emerge, with the high-end HCI architecture possessing four key advantages over standard HCI architectures. Consider:

  1. Data availability. Both high-end and standard HCI solutions distribute data across multiple nodes to ensure high levels of data availability. However, only high-end HCI architectures use separate, distinct compute and data nodes, with the data nodes dedicated to guaranteeing high levels of data availability for VMs on all compute nodes. This ensures that should any compute node become unavailable, the data associated with any VM on it remains available (a toy sketch after this list illustrates the idea).
  2. Flash/performance optimization. Both high-end and standard HCI architectures take steps to keep data local to the VM by storing the data of each VM on the same node on which the VM runs. However, solutions based on high-end HCI architectures store a copy of each VM’s data on the VM’s compute node as well as on the high-end HCI architecture’s underlying data nodes to improve and optimize flash performance. High-end HCI solutions such as the Datrium DVX also deduplicate and compress all data while limiting the amount of inter-nodal communication to free resources for data processing.
  3. Platform manageability/scalability. As a whole, HCI architectures are intuitive to size, manage, scale, and understand. In short, if you need more performance and/or capacity, you only need to add another node. However, enterprises often must support hundreds or even thousands of VMs that each must perform well and be protected. Trying to deliver on those requirements at scale using a standard HCI solution, where inter-nodal communication is a prerequisite, becomes almost impossible. High-end HCI solutions facilitate granular deployments of compute and data nodes while limiting inter-nodal communication.
  4. VM insulation. Noisy neighbors—VMs that negatively impact the performance of adjacent VMs—remain a real problem in virtual environments. While both high-end and standard HCI architectures support storing data close to VMs, high-end HCI architectures do not rely on the availability of other compute nodes. This reduces inter-nodal communication and the probability that the noisy neighbor problem will surface.
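
As a toy model (not any vendor’s placement algorithm) of the difference described in item 1 above, the sketch below keeps a local copy of each VM’s data on its compute node for performance while dedicated data nodes hold the durable copies; losing a compute node then costs locality, not availability. Node names and placements are made up.

```python
# Toy placement model: durable copies live on data nodes; compute nodes hold a local cache copy.
placements = {
    # vm: (compute node hosting it, data nodes holding its durable copies)
    "vm-a": ("compute-1", {"data-1", "data-2"}),
    "vm-b": ("compute-2", {"data-2", "data-3"}),
}

def still_available(vm: str, failed_nodes: set[str]) -> bool:
    """A VM's data survives as long as at least one of its data nodes is up."""
    _, data_copies = placements[vm]
    return bool(data_copies - failed_nodes)

# Losing a compute node never loses data in this model; losing all of a VM's data nodes would.
for failed in ({"compute-1"}, {"data-1", "data-2"}):
    print(failed, {vm: still_available(vm, failed) for vm in placements})
```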

HCI architectures have resonated with organizations in large part because of their ability to deliver cloud-like features and functionality while maintaining simplicity of deployment and ongoing maintenance. However, next-generation high-end HCI architectures, with solutions available from providers like Datrium, provide organizations greater flexibility to deliver cloud-like functionality at scale, such as better management and optimization of flash performance, improved data availability, and increased insulation between VM instances. To learn more about how standard and high-end HCI architectures compare, check out this recent DCIG pocket analyst report that is available on the TechTrove website.




Too Many Fires, Poor Implementations, and Cost Overruns Impeding Broader Public Cloud Adoption

DCIG’s analysts (myself included) have lately spent a great deal of time getting up close and personal on the capabilities of public cloud providers such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud. We have also spent time talking to individuals deploying cloud solutions. As we have done so, we recognize that the capabilities of these cloud offerings should meet and exceed the expectations of most organizations regardless of their size. However, impeding cloud adoption are three concerns that have little to do with the technical capabilities of these public cloud solutions.

Anyone who spends any time studying the capabilities of any of these cloud offerings for the first time will walk away impressed. Granted, each offering has its respective strengths and weaknesses. However, when one examines each of these public cloud offerings and their respective infrastructures and compares them to the data centers that most companies own and manage, the differences are stark. The offerings from these public cloud providers win hands down. This might explain why organizations of all sizes are adopting the cloud at some level.

The more interesting dilemma is why organizations are not adopting public cloud offerings at a faster pace and why some early adopters are even starting to leave the cloud. While this is not an extensive list of reasons, here are three key concerns that have come out of our conversations and observations that are impeding cloud adoption.

Too many fires. Existing data centers are a constant target for budget cutbacks and understaffing, and too often they lack any clear, long-term vision to guide their development. This combination of factors has led to costly, highly complex, inflexible data centers that need a lot of people to manage them. This situation exists at the exact moment when the business side of the house expects the data center to become simpler, more cost-effective, and more flexible to manage. While in-house data center IT staff may want to respond to these business requests, they often are consumed with putting out the fires caused by the complexity of the existing data center. This leaves them little or no time to explore and investigate new solutions.

Poor implementations. The good news is that public cloud offerings have a very robust feature set. The bad news is that all these features make them daunting to learn and very easy to set up incorrectly. If anything, the ease and low initial costs of most public cloud providers may work against the adoption of public cloud solutions. They have made it so easy and inexpensive for companies to get into the cloud that companies may try it out without really understanding all the options available to them and the ramifications of the decisions they make. This can easily lead to poor application implementations in the cloud and potentially introduce more costs and complexity – not less. The main upside here is that because creating and taking down virtual private clouds with these providers is relatively easy, even a poor setup can be rectified by creating a new virtual private cloud that better meets your needs.

Cloud cost overruns. Part of the reason companies live with and even mask the complexity of their existing data centers is that they can control their costs. Even if an application needs more storage, compute, networking, power – whatever – they can sometimes move hardware and software around on the back end to mask these costs until the next fiscal quarter or year rolls around, when they go to the business to ask for approval to buy more. Once applications and data are in the cloud and start to grow, these costs become exposed almost immediately. Since cloud providers bill based on monthly usage, companies need to closely monitor their applications and data in the cloud: identifying which ones are starting to incur additional charges, knowing what options are available to lower those charges, and assessing the practicality of making those changes.
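
Billing APIs differ by provider, so as a provider-neutral sketch of the kind of monitoring described above, the following projects monthly storage spend under compound growth and flags the first month it would exceed a budget. The starting capacity, growth rate, unit price, and budget are illustrative assumptions.

```python
def months_until_over_budget(start_gb: float, monthly_growth: float,
                             price_per_gb: float, monthly_budget: float, horizon: int = 36):
    """Project monthly storage spend under compound growth; return the first month over budget."""
    gb = start_gb
    for month in range(1, horizon + 1):
        cost = gb * price_per_gb
        if cost > monthly_budget:
            return month, round(gb), round(cost, 2)
        gb *= 1 + monthly_growth
    return None

# Illustrative assumptions: 50 TB today, 4% monthly growth, $0.023/GB-month, $2,000/month budget.
result = months_until_over_budget(50_000, 0.04, 0.023, 2_000)
print(result or "stays within budget over the horizon")
```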

Anyone who honestly assesses the capabilities available from the major public cloud providers will find they can deliver next-gen features better than most organizations can on their own. That said, companies either need to find the time to first educate themselves about these cloud providers or identify someone they trust to help them down the cloud path. While these three issues are impeding cloud adoption, they should not be stopping it, as they still too often do. The good news is that even if a company does implement its environment in the cloud poorly the first time around (and a few will), the speed and flexibility with which public cloud providers let them build out new virtual private clouds and tear down existing ones mean they can cost-effectively improve it.




Five Key Differentiators between the Latest NetApp AFF A-Series and Hitachi Vantara VSP F-Series Platforms

Every enterprise-class all-flash array (AFA) delivers sub-1 millisecond response times using standard 4K & 8K performance benchmarks, high levels of availability, and a relatively full set of core data management features. As such, enterprises must examine AFA products to determine the differentiators between them. It is when DCIG compared the newest AFAs from leading providers such as Hitachi Vantara and NetApp in its latest DCIG Pocket Analyst Report that differences between them quickly emerged.

Both Hitachi Vantara and NetApp refreshed their respective F-Series and A-Series lines of all-flash arrays (AFAs) in the first half of 2018. In so doing, many of the similarities between the products from these providers persisted in that they both continue to natively offer:

  • Unified SAN and NAS interfaces
  • Extensive support for VMware APIs, including VMware Virtual Volumes (VVols)
  • Integration with popular virtualization management consoles
  • Rich data replication and data protection offerings

However, the latest AFA product refreshes from these two vendors also introduced some key areas where they diverge. While some of these changes reinforced the strengths of their respective product lines, others provided key insights into how the two vendors see the AFA market shaping up, resulting in differences in product functionality that will matter in the years to come.


To help enterprises select the solution that best fits their needs, there are five key ways that the latest AFA products from Hitachi Vantara and NetApp differentiate themselves from one another. These five key differentiators include:

  1. Data protection and data reduction
  2. Flash performance optimization
  3. Predictive analytics
  4. Public cloud support
  5. Storage networking protocols

To see which vendor has the edge in each of these categories and why, you can access the latest 4-page Pocket Analyst Report from DCIG that analyzes and compares these newest all-flash arrays from Hitachi Vantara and NetApp. This report is currently available for sale on DCIG’s partner site, TechTrove. You may also register on the TechTrove website to be notified should this report become available at no charge in the future.




Proven Investment Principles Can Guide Your Cloud Strategy

Living in Omaha, Nebraska, one cannot help but be influenced by Berkshire Hathaway and its CEO, Warren Buffett, one of the wealthiest men in the world, when it comes to making investment decisions. However, the process Berkshire Hathaway uses to make investment decisions has many other applications, including guiding the decisions you make about your cloud strategy.

If there is a company and an individual that epitomize the concept of “buy and hold,” they are Berkshire Hathaway and Warren Buffett. Their basic premise is that you thoroughly research a stock before making an investment decision. As part of that research, you investigate the company’s financials, its management team, its reputation, and the products and/or services it offers. You then determine the type of growth the company is likely to experience in the future. Once the decision is made, you buy and hold the stock for a long time.

However, buy-and-hold is not the only principle that Warren Buffett follows. His first rule of investing is: Never lose money.

Companies should apply variations of both these principles when creating a cloud strategy. Once a company creates and/or moves applications and data in the cloud, odds are it will “buy and hold” them there for a long time, assuming service levels and pricing continue to make sense. The more applications and data a company stores with a cloud provider, the more difficult it becomes to bring them back on-premise. Further, a company can easily lose track of what data and applications it has stored in the cloud.

The good and bad news is that public cloud providers such as Amazon, Google, and Microsoft have made, and continue to make, it easier than ever to get started with a cloud strategy and to migrate existing applications and data to the cloud. This ease of implementation can prompt organizations to bypass or shortcut the due diligence they should perform before placing applications and data in the cloud. Unfortunately, that approach leaves them without clearly defined plans to manage their cloud estate once it is in place.

To avoid this situation, here are some “investment” principles to follow when creating a cloud strategy that will improve your chances of getting the return from the cloud that you expect.

  1. Give preference to proven, supported services from the cloud provider for critical applications and data. Most organizations moving to the cloud need to start with the basics: compute, networking, security, and storage. These services are the bread and butter of IT and the foundation of the public cloud providers’ businesses. They have been around for years, are stable, and are likely not going anywhere. Organizations can feel confident about using these cloud services for both existing and new applications and data and should expect them to be available for a long time to come.
  2. Shy away from “speculative” technologies. Recently introduced Amazon services such as Lambda (serverless computing), Machine Learning, Polly (text-to-speech), and Rekognition (visual analysis of images and videos), among others, sound (and are) exciting and fun to learn about and use. However, they are also the services that cloud providers may abruptly change or even cancel. While some organizations use them in production, companies just moving to the cloud may want to limit them to test and dev applications, or stay away altogether, until they are confident the services are stable and will be available indefinitely.
  3. Engage with a trusted advisor. Some feedback DCIG has heard is that companies want a more orchestrated roll-out of their computing services in the cloud than they have had on-premise. To answer that need, cloud providers are building out partner networks of individuals certified in their technologies who can help with the initial design and deployment of new applications and data in the cloud as well as the subsequent migration of existing applications and data.
  4. Track and manage your investment. A buy-and-hold philosophy does not mean you ignore your investment after you purchase it. Track cloud services like any other investment: take the time to understand and manage the billing (a minimal billing-alert sketch follows this list). Because each cloud service offers multiple options, you may need to periodically, or even frequently, change how you use a service or even move some applications and/or data back on-premise.
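To make “track and manage your investment” concrete, here is a minimal sketch that creates a CloudWatch billing alarm, notifying an SNS topic when estimated monthly charges cross a threshold. The topic ARN and the $1,000 threshold are placeholders; this also assumes that billing alerts are enabled on the account, and billing metrics are published by AWS in the us-east-1 region.

  import boto3

  # Billing metric data is published to CloudWatch in us-east-1.
  cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

  cloudwatch.put_metric_alarm(
      AlarmName="monthly-estimated-charges",
      Namespace="AWS/Billing",
      MetricName="EstimatedCharges",
      Dimensions=[{"Name": "Currency", "Value": "USD"}],
      Statistic="Maximum",
      Period=21600,             # the metric updates roughly every six hours
      EvaluationPeriods=1,
      Threshold=1000.0,         # placeholder: alert once charges exceed $1,000
      ComparisonOperator="GreaterThanThreshold",
      AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # placeholder topic
  )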

As organizations look to create a cloud strategy and make it part of how they manage their applications and data, they should take a conservative approach. Primarily adopt cloud technologies that are stable, that you understand, and that you can safely, securely, and confidently manage. Leave the more “speculative” technologies for test and dev, or until your organization has reached a comfort level with the cloud. While the cloud can certainly save you money, time, and hassle if you implement a cloud strategy correctly, its relative ease of adoption can also cost you much more if you pursue it in a haphazard manner.




Two Insights into Why Enterprises are Finally Embracing Public Cloud Computing

In between my travels, doing research, and taking some time off in May, I also spent time getting up to speed on Amazon Web Services by studying for the AWS Certified Solutions Architect Associate exam in anticipation of DCIG doing more public cloud-focused competitive research. While it is no secret that cloud adoption has taken off in recent years, what has puzzled me during this time is, “Why have enterprises only now started to embrace public cloud computing?”

From my first days as an IT user I believed that all organizations would eventually embrace cloud computing in some form. That belief was further reinforced as I came to understand virtualization and its various forms (compute, network, and storage). But what has perplexed me to one degree or another ever since then is why enterprises have not more fully invested in these various types of virtualization and embraced the overall concept of cloud computing sooner.

While there are various reasons for this, I sense the biggest reason is that most organizations view IT as a cost center. Granted, they see the value that IT has brought and continues to bring to their business. However, most organizations do not necessarily want to provide technology services. They would rather look to others to provide the IT technologies that they need and then consume them when they are sufficiently robust and mature for their needs.

Of course, establishing exactly when a technology satisfies these conditions varies for each industry. Some might rightfully argue that cloud computing has been around for a decade or more and that many organizations already use it.

But using public cloud computing for test, development, or even some limited production deployments within an organization is one thing. Making public cloud computing the preferred, or even the only, choice for hosting new and existing applications is quite another. When this change in policy occurs within an enterprise, one can say the enterprise has embraced public cloud computing. To date, relatively few enterprises have embraced cloud computing at scale, but I recently ran across two charts that help explain why this is changing.

The first chart I ran across was in one of the training videos I watched. This video included a graphic that showed the number of new service announcements and updates that AWS made each year from 2011-2017.

Source: A Cloud Guru

It was when I saw the amount of innovation and change that has occurred at AWS in the past three years that I better understood why enterprises have started to embrace cloud computing at scale. Based on these numbers, AWS made nearly five service announcements and/or updates every business day of 2017.

Many businesses would consider themselves fortunate to make five changes every month, much less every day. This level of innovation and change explains why public cloud providers are pulling away from traditional data centers in terms of the capabilities they can offer. It also explains why enterprises can have more confidence in public cloud providers and move more of their production applications there. Sustained innovation at this pace also signals a degree of stability and maturity, which is often what enterprises prioritize.

The other chart brought to my attention is found on Microsoft’s website and provides a side-by-side comparison of Microsoft Azure to AWS. This chart provides a high-level overview of the offerings from both of these providers and how their respective offerings compare and contrast.

Most notable about this chart is that organizations now have another competitive cloud computing offering available from a large, stable provider. As an enterprise embraces the idea of cloud computing in general and chooses a specific provider, it can do so knowing it has a viable secondary option should the initial provider become too expensive, change its offerings, or withdraw an offering the enterprise currently uses or plans to use.

Traditional enterprise data centers are not going away. However, as evidenced by the multitude of enhancements that AWS, Microsoft Azure, and others have made in the past few years, their cloud offerings surpass the levels of auditing, flexibility, innovation, maturity, and security found in many corporate data centers. These capabilities, coupled with organizations having multiple cloud providers from which to choose, provide insight into why enterprises are lowering their resistance to public cloud computing and embracing it more wholeheartedly.




Amazon AWS, Google Cloud, Microsoft Azure and … now Nutanix Xi Cloud Services?!

Amazon, Google, and Microsoft have staked their claims as the Big 3 providers of enterprise cloud services with their respective AWS, Google Cloud, and Azure offerings. Enter Nutanix. From Day 1 it has sought to emulate AWS with its on-premise cloud offering. But with the announcements made at its .NEXT conference last week in New Orleans, companies can look for Nutanix to deliver cloud services both on- and off-premise that should fundamentally change how enterprises view Nutanix going forward.

There is little dispute that Amazon AWS is the unquestioned leader in cloud services, with Microsoft, Google, and IBM possessing viable offerings in this space. Yet where each of these providers still tends to fall short is in addressing enterprises’ need to maintain a hybrid cloud environment for the foreseeable future.

Clearly most enterprises want to incorporate public cloud offerings into their overall corporate data center design and, by Nutanix’s own admission, adoption of the public cloud is only beginning in North America. But there is already evidence from early cloud adopters that the cost of keeping all their applications with public cloud providers outweighs the benefits. However, these same enterprises are hesitant to bring those applications back on-premise because they like, and I daresay have even become addicted to, the ease of managing applications and data that the cloud provides.

This is where Nutanix, at its recent .NEXT conference, made a strong case for becoming the next cloud solution on which enterprises should place their bets. Three technologies it announced during the conference stood out to me as evidence that Nutanix is doing more than bringing another robust cloud offering to market; it is addressing the nagging enterprise need for a cloud solution that can be managed in the same way on-premise and off. Consider:

1. Beam takes the mystery out of where all the money and data in the cloud has gone. A story I repeatedly hear is how complicated billing statements from AWS are and how easy it is for these costs to exceed corporate budgets. Another story I often hear is that it is so easy for corporate employees to get started in the cloud that they can easily run afoul of corporate governance. These stories, true or not, likely impede broader cloud adoption by many companies.

This is where Beam sheds some light on the picture. For companies already using the cloud, Beam provides visibility into the cloud to address both cost and data governance concerns. Since Beam is a separate, standalone product available from Nutanix, organizations can quickly gain visibility into how much money they are spending on the cloud and who is spending it, and perform audits to help ensure compliance with HIPAA, ISO, PCI-DSS, CIS, NIST, and SOC-2. Organizations not already using the cloud can implement Beam in conjunction with their adoption of cloud services to monitor and manage their usage. Beam currently supports AWS and Azure, with support for Nutanix Xi and Google Cloud in the works.

2. Xi brings together the management of on- and off-premise clouds without compromise. Make no mistake: Nutanix’s recently announced Xi cloud services offering is not yet on the same standing as AWS, Azure, or Google Cloud. In fact, by Nutanix’s own admission, Xi is “still coming” as an offering. That said, Nutanix is addressing a concern that persists among enterprise users: they want the same type of cloud experience on-premise and off. The Nutanix Acropolis Hypervisor (AHV), together with the forthcoming Xi cloud services offering, stands poised to deliver that, giving companies the flexibility to move applications and data relatively seamlessly between on- and off-premise locations without changing how they are managed.

3. Netsil is “listen” spelled backwards, which is just one more reason to pay attention to this technology. Every administrator’s worst nightmare is having to troubleshoot an issue in the cloud. In today’s highly virtualized, interdependent application world, identifying the root cause of a Sev 1 problem can make even the most ardent supporter of virtualized, serverless compute environments long for the “simpler” days of standalone servers.

Thank God solutions such as Netsil are now available. Netsil tackles the thorny issue of microsegmentation, that is, how applications within containers, virtual machines, and physical machines communicate, interact, and wall themselves off from one another, by identifying their respective dependencies on each other. This takes much of the guesswork out of troubleshooting these environments and gives enterprises more confidence to deploy multiple applications on fewer hosts. While Netsil is “still coming” per Nutanix, this type of technology is one that enterprises should find almost a necessity, both to maximize their use of cloud resources and to give them peace of mind that they have tools at their disposal to solve the challenges that will inevitably arise.




Hackers Say Goodbye to Ransomware and Hello to Bitcoin Mining

Ransomware gets a lot of press, and for good reason: when hackers break through your firewalls, encrypt your data, and make you pay up or lose your data, it rightfully gets people’s attention. But hackers probably have less desire than most to be in the public eye, and sensationalized ransomware headlines bring them unwanted attention. That is why some hackers have said goodbye to the uncertain payout of ransoming your data and instead look to access your servers to do some bitcoin mining using your CPUs.

A week or so ago a friend of mine who runs an Amazon Web Services (AWS) consultancy and reseller business shared a story with me about one of his clients who hosts a large SaaS platform in AWS.

His client had mentioned to him in the middle of the week that the applications on one of his test servers were running slow. While my friend was intrigued, he did not give it much thought at the time. This client was not using his managed services offering, which meant he was not responsible for troubleshooting their performance issues.

Then the next day his client called back and said that all of his servers hosting this application (test, dev, client acceptance, and production) were now running slow. This piqued my friend’s interest, so he offered resources to help troubleshoot the issue. The client then allowed his staff to log into these servers to investigate.

Upon logging into these servers, they discovered that all the instances running at 100% CPU were also running a Drupal web application. This did not seem right, especially considering that it was early on a Saturday morning when the applications should have been mostly idle.

After doing a little more digging on each server, they discovered a mysterious multi-threaded process consuming all of its CPU resources. Further, the process had opened a network connection to a server located in Europe. Even more curious, the executable that launched the process had been deleted after the process started, as if someone were trying to cover their tracks.
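A check along the lines of what my friend’s staff did by hand can be scripted. The sketch below (Python with the third-party psutil library, on Linux) flags processes that are consuming heavy CPU and that hold outbound network connections or whose executable has been deleted; the 80% CPU cutoff is an arbitrary assumption, not part of the original story.

  import os
  import time
  import psutil  # third-party: pip install psutil

  # Prime per-process CPU counters, then sample again after a short interval.
  procs = list(psutil.process_iter(["pid", "name"]))
  for p in procs:
      try:
          p.cpu_percent(None)
      except psutil.Error:
          pass
  time.sleep(2)

  for p in procs:
      try:
          cpu = p.cpu_percent(None)
          if cpu < 80:                                  # arbitrary cutoff for a "CPU hog"
              continue
          exe_link = os.readlink(f"/proc/{p.pid}/exe")  # Linux-specific
          deleted = exe_link.endswith(" (deleted)")     # binary removed after launch
          remote = [c.raddr for c in p.connections(kind="inet") if c.raddr]
          if deleted or remote:
              print(f"Suspicious: pid={p.pid} name={p.info['name']} "
                    f"cpu={cpu:.0f}% exe={exe_link} remote={remote}")
      except (psutil.Error, OSError):
          continue  # process exited or access was denied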

At this point, suspecting the servers had been hacked, they checked for recent security alerts. Sure enough: on March 28, 2018, Drupal issued a security advisory stating that servers not running Drupal 7.58 or Drupal 8.5.1 were vulnerable to hackers who could remotely execute code on them.

However, what got my friend’s attention is that these hackers did not want his client’s data. Rather, they wanted his client’s processing power to do bitcoin mining, which is exactly what these servers had been doing on the hackers’ behalf for a few days. To help the client, his staff killed the bitcoin mining process on each of these servers before calling the client to advise them to patch Drupal ASAP.

The story does not end there. The client did not patch Drupal quickly enough. Sometime after the bitcoin mining processes were killed, another hacker leveraged the same Drupal security flaw and performed the same hack. By the time the client came to work on Monday, bitcoin mining processes were again running on those servers and consuming all their CPU cycles.

What they found especially interesting was how the executable file that the new hackers installed worked. In reviewing its code, the first thing it did was kill any pre-existing bitcoin mining processes started by other hackers. This freed all the CPU resources for the bitcoin mining processes started by the new hackers. The hackers were literally fighting each other over the compromised system’s resources.

Two takeaways from this story:

  1. Everyone is rightfully worried about ransomware, but bitcoin mining may not hit corporate radar screens. I doubt that hackers want the FBI, CIA, Interpol, MI6, Mossad, or any other law enforcement or intelligence agency hunting them down any more than you or I do. While hacking servers and “stealing” CPU cycles is still a crime, it probably sits much further down the priority list of most companies as well as these agencies.

A bitcoin mining hack may go unnoticed for long periods and may not be reported by companies, or prosecuted by these agencies even when reported, because it is easy to perceive this type of hack as a victimless crime. Yet the longer a hacker’s bitcoin mining processes remain active and unnoticed, the more bitcoin the hacker earns. Further, one should assume hackers will only become more sophisticated. Expect them to figure out how to install bitcoin mining processes that do not consume all CPU cycles, so that these processes remain running, and unnoticed, for longer periods of time.

  2. Hosting your data and processes in the cloud does not protect them against these types of attacks. AWS offers utilities to monitor for and detect rogue processes like these, but organizations still need someone to implement those tools and then monitor and manage them (a minimal alarm sketch follows below).
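As one hedged example of such a tool, the sketch below uses boto3 to create a CloudWatch alarm that fires when an EC2 instance’s CPU stays above 90% for an hour, roughly the symptom described in this story. The instance ID, SNS topic, threshold, and period are all placeholders to adapt to your own environment.

  import boto3

  cloudwatch = boto3.client("cloudwatch")

  cloudwatch.put_metric_alarm(
      AlarmName="sustained-high-cpu-i-0123456789abcdef0",
      Namespace="AWS/EC2",
      MetricName="CPUUtilization",
      Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder instance
      Statistic="Average",
      Period=300,               # 5-minute samples
      EvaluationPeriods=12,     # 12 x 5 minutes = 1 hour of sustained load
      Threshold=90.0,
      ComparisonOperator="GreaterThanThreshold",
      AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder topic
  )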

Companies may be relieved to hear that some hackers have stopped targeting their data and are instead targeting their processors for bitcoin mining. However, there are no victimless crimes. Your pocketbook will still get hit in cases like this, as Amazon will bill you for the resources consumed.

In cases like this, if companies start to see their AWS bills going through the roof, it may not be the result of business growth. It may be that their servers have been hacked and they are paying to finance some hacker’s bitcoin mining operation. To avoid this scenario, companies should ensure they have the right internal people and processes in place to keep their applications up-to-date, protect their infrastructure from attacks, and monitor that infrastructure whether it is hosted on-premise or in the cloud.




Six Best Practices for Implementing All-flash Arrays

Almost any article published today about enterprise data storage will mention the benefits of flash memory. However, while many organizations now use flash somewhere in their enterprise, most are only starting to deploy it at a scale where it hosts more than a handful of their applications. As organizations look to deploy flash more broadly, here are six best practices to keep in mind.

The six best practices outlined below are united by a single overarching principle: the data center is not merely a collection of components, it is an interdependent system. The results achieved by changing any key component will therefore be constrained by the performance limits of the other components it interacts with. Optimal results come from optimizing the data center as a system.


Best Practice #1: Focus on Accelerating Applications

Business applications are the reason businesses run data centers. Therefore, accelerating applications is a useful focus in evaluating data center infrastructure investments. Eliminating storage performance bottlenecks by implementing an all-flash array (AFA) may reveal bottlenecks elsewhere in the infrastructure, including in the applications themselves.

Getting the maximum performance benefit from an AFA may require more or faster connections to the data center network, changes to how the network is structured, and other network configuration details. Application servers may require new network adapters, more DRAM, adjustments to cache sizes, and other server configuration changes. Applications may require configuration changes or even some level of recoding. Some AFAs include utilities that help identify bottlenecks wherever they occur along the data path.

Best Practice #2: Mind the Failure Domain

Consolidation can yield dramatic savings, but it is prudent to consider the failure domain and how much of an organization’s infrastructure should depend on any one component, including an all-flash array. While all the all-flash arrays that DCIG covers in its All-flash Array Buyer’s Guides are “highly available” by design, some are better suited to delivering high availability than others. Be sure the one you select matches your requirements and your data center design.

Best Practice #3: Use Quality of Service Features and Multi-tenancy to Consolidate Confidently

Quality of Service (QoS) features enable an array to give critical business applications priority access to storage resources. Multi-tenancy allocates resources to specific business units and/or departments and limits the percentage of the all-flash array’s resources that they can consume at one time. Together, these features protect the array from being monopolized by any one application or bad actor.

Best Practice #4: Pursue Automation

Automation can dramatically reduce the amount of time spent on routine storage management and enable new levels of IT agility. This is where features such as predictive analytics come into play. They help remove the risk associated with managing all-flash arrays in complex, consolidated environments. For instance, they can identify problems before they impact production applications and proactively take steps to resolve them.

Best Practice #5: Realign Roles and Responsibilities

Implementing an all-flash storage strategy involves more than technology. It can, and should, reshape roles and responsibilities within the central IT department and between central IT, developers, and business unit technologists. Thinking through the possible changes with the various stakeholders can reduce fear, eliminate obstacles, and uncover opportunities to create additional value for the business.

Best Practice #6: Conduct a Proof of Concept Implementation

A good proof of concept can validate feature claims and uncover performance-limiting bottlenecks elsewhere in the infrastructure. The key to a good proof of concept is having an environment where you can accurately host and test your production workloads on the AFA.

A Systems Approach Will Yield the Best Result

Organizations that approach the AFA evaluation from a systems perspective, recognizing that the data center is an interdependent system of hardware, software, and people, and that apply these six best practices during an all-flash array purchase decision are far more likely to achieve the objectives that prompted them to look at all-flash arrays in the first place.

DCIG is preparing a series of all-flash array buyer’s guides that will help organizations considering the purchase of an all-flash array. DCIG buyer’s guides accelerate the evaluation process and facilitate better-informed decisions. Look for these buyer’s guides beginning in the second quarter of 2018. Visit the DCIG web site to discover more articles that provide actionable analysis for your data center infrastructure decisions.




Two Most Disruptive Storage Technologies at the NAB 2018 Show

The exhibit halls at the annual National Association of Broadcasters (NAB) show in Las Vegas always contain eye-popping displays highlighting recent technological advances as well as what is coming down the path in the world of media and entertainment. But behind NAB’s glitz and glamour lurks a hard, cold reality: every word recorded, every picture taken, and every scene filmed must be stored somewhere, usually multiple times, and be available at a moment’s notice. It was while visiting these halls at the NAB show that DCIG identified two start-ups with storage technologies poised to disrupt business as usual.

Storbyte. Walking the floor at NAB, I was literally yanked by the arm by a tall, blond individual who asked if I had ever heard of Storbyte. Truthfully, the answer was no. This person turned out to be Steve Groenke, Storbyte’s CEO, and what ensued was a great series of conversations while at NAB.

Storbyte has come to market with an all-flash array. However, it took a very different approach to solving the problems of longevity, availability, and sustainable high write performance in SSDs and the storage systems built with them. What makes Storbyte so disruptive is that it meets the demand for extreme sustained write performance by slowing down flash, and it does so at a fraction of the cost of other all-flash arrays.

In today’s all-flash designs, every flash vendor is actively pursuing high-performance storage. The approach they take is to maximize the bandwidth to each SSD, which means their systems must use PCIe-attached SSDs addressed via the new NVMe protocol.

Storbyte chose to tackle the problem differently. Its initial target customers had continuous, real-time capture and analysis requirements, and they routinely burned through the most highly regarded enterprise-class SSDs in about seven months. Two things killed NAND flash in these environments: heat and writes.

To address this problem, Storbyte reduces the heat and the number of writes that each flash module experiences by incorporating sixteen mSATA SSDs into each of its Eco*Flash SSDs. Further, Storbyte slows down the CPU in each mSATA module and then wide-stripes writes across all of them. According to Storbyte, this requires only about 25% of the available CPU on each mSATA module, so the modules use less power. By also managing the writes, Storbyte simultaneously extends the life of each mSATA module in its Eco*Flash drives. A conceptual sketch of wide-striping appears below.
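The following is a purely conceptual sketch of what wide-striping writes across a bank of modules can look like; it is my illustration, not Storbyte’s implementation. The 16-module count comes from the Eco*Flash description above, while the stripe-unit size is an arbitrary assumption.

  # Conceptual illustration only -- not Storbyte's actual firmware or algorithm.
  NUM_MODULES = 16            # mSATA SSDs per Eco*Flash SSD, per the description above
  STRIPE_UNIT = 64 * 1024     # hypothetical stripe unit size, in bytes

  def wide_stripe(offset: int, data: bytes):
      """Split one logical write into stripe units spread round-robin across
      the modules, so no single flash module absorbs all of the writes."""
      chunks = []
      for i in range(0, len(data), STRIPE_UNIT):
          unit = (offset + i) // STRIPE_UNIT
          module = unit % NUM_MODULES                      # which module receives this unit
          module_offset = (unit // NUM_MODULES) * STRIPE_UNIT
          chunks.append((module, module_offset, data[i:i + STRIPE_UNIT]))
      return chunks

  # Example: a 1 MB write lands as 16 units, one per module.
  print(len(wide_stripe(0, b"\0" * 1024 * 1024)))          # -> 16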

The end result is a low-cost, high-performance, very dense, power-efficient all-flash array built with flash cards that rely on “older,” “slower,” consumer-grade mSATA flash memory modules, yet can drive 1.6 million IOPS in a 4U system. More notably, Storbyte says its systems cost about a quarter of competing “high performance” all-flash arrays while packing more than a petabyte of raw flash capacity into 4U of rack space and using less power than almost any other all-flash array.

Wasabi. Greybeards in the storage world may recognize the Wasabi name as a provider of iSCSI SANs. Right name, but different company. The new Wasabi recently came out of stealth mode as a low-cost, high-performance cloud storage provider. By low cost, it means one-fifth the cost of Amazon’s slowest offering (Glacier); by high performance, it means six times the speed of Amazon’s highest-performing S3 offering. In other words, you can have your low-cost cloud storage and eat it too.

What makes its offering so compelling is that it prices storage capacity at $4.99/TB per month. That’s it. No additional egress charges every time you download files. No complicated monthly statements to decipher to figure out how much you are spending and where. No costly storage architects to hire to figure out how to tier data to optimize performance and costs. The result is one fast cloud storage tier at a much lower cost than the Big 3 (Amazon AWS, Google Cloud, and Microsoft Azure).

Granted, Wasabi is a cloud storage start-up, so there is an element of buyer beware. However, it is privately owned and well funded. It is experiencing explosive growth, with over 1,600 customers in just its first few months of operation. It anticipates raising another round of funding, and it already has data centers in the United States and around the world with more scheduled to open.

Even so, past horror stories about cloud providers shutting their doors give every company pause about using a relatively unknown quantity to store their data. For these cases, Wasabi recommends that companies use its solution as their secondary cloud.

Wasabi’s cloud offering is fully S3 compatible, and most companies want a cloud alternative anyway. In this scenario, you store copies of your data with both Amazon and Wasabi. Once the data is stored, you run any queries, production workloads, and so on against the Wasabi cloud. The Amazon egress charges your company avoids by accessing its data on the Wasabi cloud more than justify the risk of storing routinely accessed data on Wasabi. Then, in the unlikely event Wasabi does go out of business (not that it has any plans to do so), your company still has a copy of its data with Amazon to fail back to.
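As a minimal sketch of this dual-cloud pattern using boto3 (Python), the example below writes a copy of an object to both Amazon S3 and Wasabi’s S3-compatible endpoint and then reads it back from Wasabi. The bucket names, file name, and credentials are placeholders, and the Wasabi endpoint URL shown here should be confirmed against Wasabi’s current documentation.

  import boto3

  # Both targets speak the S3 API; only the endpoint and credentials differ.
  aws_s3 = boto3.client("s3")
  wasabi_s3 = boto3.client(
      "s3",
      endpoint_url="https://s3.wasabisys.com",        # confirm against Wasabi's docs
      aws_access_key_id="WASABI_ACCESS_KEY",          # placeholder
      aws_secret_access_key="WASABI_SECRET_KEY",      # placeholder
  )

  # Store a copy of the object in both clouds ...
  with open("footage-reel-001.mxf", "rb") as f:
      body = f.read()
  aws_s3.put_object(Bucket="primary-copy", Key="footage-reel-001.mxf", Body=body)
  wasabi_s3.put_object(Bucket="secondary-copy", Key="footage-reel-001.mxf", Body=body)

  # ... then serve reads and queries from the Wasabi copy to avoid AWS egress fees.
  obj = wasabi_s3.get_object(Bucket="secondary-copy", Key="footage-reel-001.mxf")
  data = obj["Body"].read()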

This argument seems to resonate well with prospects. While I could not substantiate these claims, Wasabi said it was seeing multi-petabyte deals come its way on the NAB show floor. By using Wasabi instead of Amazon in the use case just described, these companies can save hundreds of thousands of dollars per month just by avoiding Amazon’s egress charges, while mitigating the risk associated with using a start-up cloud provider such as Wasabi.

Editor’s Note: The spelling of Storbyte was corrected on 4/24.