Lenovo TruScale and Nutanix Enterprise Cloud Accelerate Enterprise Transformation

Digital transformation is an enterprise imperative. Enabling that transformation is the focus of Lenovo’s TruScale data center infrastructure services. The combination of TruScale infrastructure services and Nutanix application services creates a powerful accelerant for enterprise transformation.

Cloud is the Transformation Trigger

Many enterprises are seeking to go to the cloud, or at least to gain the benefits associated with the cloud. These benefits include:

  • pay-as-you-go operational costs instead of large capital outlays
  • agility to rapidly deploy new applications
  • flexibility to adapt to changing business requirements

For many IT departments, the trigger for serious consideration of a move to the cloud is when the CFO no longer wants to approve IT acquisitions. Unfortunately, the journey to the cloud often comes with a loss of control over both costs and data assets. Thus many enterprise IT leaders are seeking a path to cloud benefits without sacrificing control of costs and data.

TruScale Brings True Utility Computing to Data Center Infrastructure

The Lenovo Data Center Group focused on the needs of these enterprise customers by asking itself two questions:

  • What are customers trying to do?
  • What would be a winning consumption model for customers?

The answer they came up with is Lenovo TruScale Infrastructure Services.

Nutanix invited DCIG analysts to attend the recent .NEXT conference. While there, we met with many participants in the Nutanix ecosystem, including Laura Laltrello, VP and GM of Lenovo Data Center Services, whom we interviewed. This article, and DCIG’s selection of Lenovo TruScale as one of three Best of Show products at the conference, is based largely on that interview.

As noted in the DCIG Best of Show at Nutanix .NEXT article, TruScale brings true utility computing to the data center. Lenovo bills TruScale clients a monthly management fee plus a utilization charge based on the power consumed by the Lenovo-managed IT infrastructure. Clients can commit to a certain level of usage and be billed a lower rate for that baseline. This is similar to reserved instances on Amazon Web Services, except that customers pay only for actual usage, not reserved capacity.
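To make the billing model concrete, here is a minimal sketch of how a power-based utility charge like the one described above might be computed. All rates and the baseline commitment shown are hypothetical; actual TruScale terms are set in each customer’s contract.

```python
# Illustrative sketch of a power-based utility billing model like the one
# described above. All rates and the baseline commitment are hypothetical;
# actual TruScale pricing terms are set in each customer's contract.

def monthly_charge(kwh_used, management_fee, baseline_kwh,
                   baseline_rate, overage_rate):
    """Return the month's bill: a flat management fee plus usage charges.

    Usage up to the committed baseline is billed at the lower baseline rate,
    and only actual usage is billed even if it falls short of the baseline.
    Usage beyond the baseline is billed at the standard (overage) rate.
    """
    billable_baseline = min(kwh_used, baseline_kwh)
    overage = max(kwh_used - baseline_kwh, 0)
    return (management_fee
            + billable_baseline * baseline_rate
            + overage * overage_rate)

# Example: 12,000 kWh consumed against a 10,000 kWh commitment.
print(monthly_charge(kwh_used=12_000, management_fee=5_000,
                     baseline_kwh=10_000, baseline_rate=0.30,
                     overage_rate=0.45))
```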

infographic summarizing Lenovo TruScale features

Source: Lenovo

This power consumption-based approach is especially appealing to enterprises and service providers for which one or more of the following holds true:

  • Their data center workloads tie directly to revenue.
  • They want IT to focus on enabling digital transformation, not infrastructure management.
  • They need to retain possession of, or secure control over, their data.

Lenovo TruScale Offers Everything as a Service

TruScale can manage everything as a service, including both hardware and software. Lenovo works with its customers to figure out which licensing programs make the most sense for the customer. Where feasible, TruScale includes software licensing as part of the service.

Lenovo Monitors and Manages Data Center Infrastructure

TruScale does not require companies to install any extra software. Instead, it gets its power utilization data from the management processor already embedded in Lenovo servers. It then passes this power consumption data to the Lenovo operations center(s) along with alerts and other sensor data.

Lenovo uses the data it collects to trigger support interventions. Lenovo services professionals handle all routine maintenance including installing firmware updates and replacing failed components to ensure maximum uptime. Thus, Lenovo manages data center infrastructure below the application layer.
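For readers curious what “power data from the embedded management processor” looks like in practice, the sketch below reads a chassis power measurement over the DMTF Redfish API that modern baseboard management controllers, including Lenovo’s XClarity Controller, expose. The host address, credentials, and chassis ID are placeholders, and this is only an illustration of the kind of telemetry available, not Lenovo’s own collection pipeline.

```python
# Minimal sketch of reading power telemetry from a server's baseboard
# management controller over the DMTF Redfish API. The address, account,
# and chassis ID below are placeholders; this only illustrates the kind of
# data a management processor exposes, not Lenovo's collection pipeline.
import requests

BMC = "https://10.0.0.42"      # hypothetical XClarity Controller address
AUTH = ("monitor", "secret")   # hypothetical read-only account

def power_consumed_watts(chassis_id="1"):
    """Return the chassis' current power draw in watts via Redfish."""
    url = f"{BMC}/redfish/v1/Chassis/{chassis_id}/Power"
    # verify=False is common with self-signed BMC certificates.
    resp = requests.get(url, auth=AUTH, verify=False, timeout=10)
    resp.raise_for_status()
    # PowerControl carries the aggregate power reading for the chassis.
    return resp.json()["PowerControl"][0]["PowerConsumedWatts"]

if __name__ == "__main__":
    print(f"Current draw: {power_consumed_watts()} W")
```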

Lenovo Provides Continuous Infrastructure (and Cost) Visibility

Lenovo also uses the data it collects to provide near real-time usage data to customers via a dashboard. This dashboard graphically presents performance against key metrics, including actual versus budgeted spend. In short, Lenovo’s approach to utility data center computing provides a distinctive and easy means to deploy and manage infrastructure across its entire lifecycle.

Lenovo Integrates with Nutanix Prism

Lenovo TruScale infrastructure services cover the entire range of Lenovo ThinkSystem and ThinkAgile products. The software-defined infrastructure products include pre-integrated solutions for Nutanix, Azure HCI, Azure Stack and VMware.

Lenovo has taken extra steps to integrate its products with Nutanix. These include:

  • ThinkAgile XClarity Integrator for Nutanix is available via the Nutanix Calm marketplace. It works in concert with Prism to integrate server data and alerts into the Prism management console.
  • ThinkAgile Network Orchestrator is an industry-first integration between Lenovo switches and Prism. It reduces error and downtime by automatically changing physical switch configurations when changes are made to virtual Nutanix networks.

Nutanix Automates the Application Layer

Nutanix software simplifies the deployment and management of enterprise applications at scale. The following graphic, taken from the opening keynote, lists each Nutanix component and summarizes its function.

image showing summary list of Nutanix services

Source: Nutanix

The Nutanix .NEXT conference featured many customers describing how Nutanix has transformed their data center operations. Their statements about Nutanix include:

“stable and reliable virtual desktop infrastructure”

“a private cloud with all the benefits of public, under our roof and able to keep pace with our ambitions”

“giving me irreplaceable time and memories with family”

“simplicity, ease of use, scale”

Lenovo TruScale + Nutanix = Accelerated Enterprise Transformation

I was not initially a fan of the term “digital transformation.” It felt like yet another slogan that really meant, “Buy more of my stuff.” But practical applications of machine learning and artificial intelligence are here now and truly do present significant new opportunities (or threats) for enterprises in every industry. Consequently, and more than at any time in the past, the IT department has a crucial role to play in the success of every company.

Enterprises need their IT departments to transition from being “Information Technology” departments to “Intelligent Transformation” departments. TruScale and Nutanix each enable such a transition by freeing up IT staff to focus on the business rather than on technology. Together, the combination of TruScale infrastructure services and Nutanix application services creates a powerful accelerant for enterprise transformation.

Transform and thrive.

 

Disclosure: As noted above, Nutanix invited DCIG analysts to attend the .NEXT conference. Nutanix covered most of my travel expenses. However, neither Nutanix nor Lenovo sponsored this article.

Updated on 5/24/2019.




Best Practices for Getting Ready to Go “All-in” on the Cloud

To ensure an application migration to the cloud goes well, or even to decide whether a company should migrate a specific application to the cloud at all, requires a thorough understanding of each application. This understanding should encompass what resources the application currently uses as well as how it behaves over time. Here is a list of best practices that a company can put in place for its on-premises applications before it moves any of them to the cloud.

  1. Identify all applications running on-premises. A company may assume it knows what applications it has running in its data center environment. However, it is better to be safe than sorry: take inventory and actively monitor the on-premises environment to establish a baseline. During this time, identify any new virtual or physical machines that come online.
  2. Quantify the resources used by these applications and when and how they use them. This step ensures that a company has a firm handle on which resources each application will need in the cloud, how much of each resource it will need, and what types of resources it will need. For instance, simply knowing one needs to move a virtual machine (VM) to the cloud is insufficient. A company needs to know how much CPU, memory, and storage each VM needs; when the application runs; its run-time behavior; and its periods of peak performance to choose the most appropriate VM instance type in the cloud to host it.
  3. Identify which applications will move and which will stay. Test and development applications will generally top the list of applications that a company will move to the cloud first. This approach gives a company the opportunity to become familiar with the cloud, its operations, and billing. Then a company should prioritize production applications starting with the ones that have the lowest level of impact to the business. Business and mission critical applications should be some of the last ones that a company moves. Applications that will stay on-premises are often legacy applications or those that cloud providers do not support.
  4. Map each application to the appropriate VM instance in the cloud. To make the best choice requires that a company know both its application requirements and the offerings available from the cloud provider (a simple illustration follows this list). This can take some time to quantify, as Amazon Web Services (AWS) offers over 90 different VM instance types on which a company may choose to host an application while Microsoft Azure offers over 150 VM instance types. Further, each of these providers’ VMs may be deployed as an on-demand, reserved, or spot instance, each with access to multiple types of storage. A company may even look to move to serverless compute. To select the most appropriate VM instance type for each application requires that a company know at the outset the capacity and performance requirements of each VM as well as its data protection requirements. This information will ensure a company can select the best VM to host each application as well as appropriately configure the VM’s CPU, data protection, memory, and storage settings.
  5. Determine which general-purpose cloud provider to use. Due to the multiple VM instance types each cloud provider offers and the varying costs of each VM instance type, it behooves a company to explore which cloud provider can best deliver the hosting services it needs. This decision may come down to price. Once it maps each of its applications to a cloud provider’s VM instance type, a company should be able to get an estimate of what its monthly cost will be to host its applications in each provider’s cloud.
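As a companion to step 4 above, the following sketch shows one simplistic way to map measured VM requirements to instance types. The catalog and prices are made up for illustration; real AWS and Azure catalogs are far larger, and a real mapping would also weigh storage, network, and data protection requirements.

```python
# Hypothetical sketch of step 4: matching a VM's measured peak requirements
# to the smallest cloud instance type that satisfies them. The catalog and
# prices are made up; real AWS and Azure catalogs are far larger.
CATALOG = [
    # (name, vCPUs, memory in GiB, hourly on-demand price in USD)
    ("small",   2,   8, 0.10),
    ("medium",  4,  16, 0.20),
    ("large",   8,  32, 0.40),
    ("xlarge", 16,  64, 0.80),
]

def pick_instance(peak_vcpu, peak_mem_gib):
    """Return the cheapest catalog entry that covers the VM's peak usage."""
    candidates = [c for c in CATALOG
                  if c[1] >= peak_vcpu and c[2] >= peak_mem_gib]
    if not candidates:
        return None  # nothing fits; revisit the requirements
    return min(candidates, key=lambda c: c[3])

# A VM that peaks at 3 vCPUs and 12 GiB of memory maps to "medium".
print(pick_instance(3, 12))
```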

Companies have good reasons for wanting to go “all-in” on the cloud as part of their overall business and IT strategies. But integral to both of these strategies, a company must also have a means to ensure the stability of this new hybrid cloud environment as well as assurance that its cloud costs will be managed and controlled over time. By going “all-in” on software such as Quest Software’s Foglight, a company can have confidence that its decision to go “all-in” on the cloud will succeed initially and then continue to pay off over time.

A recent white paper by DCIG provides more considerations for going all-in on the cloud to succeed both initially and over time. This paper is available to download by following this link to Quest Software’s website.




20 Years in the Making, the Future of Data Management Has Arrived

Mention data management to almost any seasoned IT professional and they will almost immediately greet the term with skepticism. While organizations have found they can manage their data within certain limits, when they remove those boundaries and attempt to do so at scale, those initiatives have historically fallen far short if not outright failed. It is time for that perception to change. 20 years in the making, Commvault Activate puts organizations in a position to finally manage their data at scale.

Those who work in IT are loath to say any feat in technology is impossible. If one looks at the capabilities of any handheld device, one can understand why they have this belief. People can pinpoint exactly where they are almost anywhere in the world to within a few feet. They can take videos, pictures, check the status of their infrastructure, text, … you name it, handheld devices can do it.

By way of example, as I write this I am at Commvault GO, where I watched YY Lee, SVP and Chief Strategy Officer of Anaplan, onstage. She explained how systems using artificial intelligence (AI) were able, within a very short time, sometimes days, to become experts at playing games such as Texas Hold’em and beat the best players in the world at them.

Despite advances such as these in technology, data management continues to bedevil large and small organizations alike. Sure, organizations may have some level of data management in place for certain applications (think email, file servers, or databases), but when it comes to identifying and leveraging a tool to deploy data management across an enterprise at scale, that tool has, to date, eluded organizations. This often includes the technology firms that are responsible for producing so much of the hardware that stores this data and the software that produces it.

The end for this vexing enterprise challenge finally came into view with Commvault’s announcement of Activate. What makes Activate different from other products that promise to provide data management at scale is that Commvault began development on this product 20 years ago in 1998.

During that time, Commvault became proficient in:

  • Archiving
  • Backup
  • Replication
  • Snapshots
  • Indexing data
  • Supporting multiple different operating systems and file systems
  • Gathering and managing metadata

Perhaps most importantly, it established relationships and gained a foothold in enterprise organizations around the globe. This alone is what differentiates it from almost every other provider of data management software. Commvault has 20+ years of visibility into the behavior and requirements of protecting, moving, and migrating data in enterprise organizations. This insight becomes invaluable when viewed in the context of enterprise data management which has been Commvault’s end game since its inception.

Activate builds on Commvault’s 20 years of product development, with its main differentiator being its ability to stand alone from other Commvault software. In other words, companies do not first have to deploy Commvault’s Complete Backup and Recovery or any of its other software to utilize Activate.

They can deploy Activate regardless of whatever other backup, replication, or snapshot software products they may have. And because Activate draws from the same code base as the rest of Commvault’s software, companies can deploy it with a great deal of confidence in the stability of that existing code base.

Once deployed, Activate scans and indexes the data across the company’s environment, which can include its archives, backups, file servers, and/or data stored in the cloud. Once indexed, companies can assess the data in their environment in anticipation of taking next steps such as preparing for eDiscovery, remediating data privacy risks, and indexing and analyzing data based upon their own criteria.
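To illustrate the kind of scan-and-assess workflow described above, here is a loose, hypothetical sketch of walking a file share, indexing basic metadata, and flagging files that match simple data-privacy patterns. It is not Commvault’s implementation; Activate’s indexing and analytics go far beyond this.

```python
# A loose, hypothetical illustration (not Commvault's implementation) of the
# kind of assessment described above: walk a file share, index basic file
# metadata, and flag files whose contents match simple data-privacy patterns.
import re
from pathlib import Path

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan(root):
    """Yield (path, size_in_bytes, flags) for each file under root."""
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        flags = set()
        try:
            text = path.read_text(errors="ignore")
            for name, pattern in PATTERNS.items():
                if pattern.search(text):
                    flags.add(name)
        except OSError:
            pass  # unreadable file; index its metadata only
        yield str(path), path.stat().st_size, sorted(flags)

for record in scan("/mnt/fileshare"):   # hypothetical file share mount
    print(record)
```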

Today, more than ever, companies recognize they need to manage their data across the entirety of their enterprise. Delivering on this requirement requires a tool appropriately equipped and sufficiently mature to meet enterprise requirements. Commvault Activate answers this call as a software product that has been 20 years in the making, providing enterprises with the foundation they need to manage their data going forward.




DCIG’s VMworld 2018 Best of Show

VMworld provides insight into some of the biggest tech trends occurring in the enterprise data center space and, once again, this year did not disappoint. But amid the literally dozens of vendors showing off their wares on the show floor, here are the four stories or products that caught my attention and earned my “VMworld 2018 Best of Show” recognition at this year’s event.

VMworld 2018 Best of Show #1: QoreStor Software-defined Secondary Storage

Software-defined dedupe in the form of QoreStor has arrived. A few years ago Dell Technologies sold off its Dell Software division, which included an assortment (actually a lot) of software products, and that business emerged as Quest Software. Since then, Quest Software has done nothing but innovate like bats out of hell. One of the products coming out of this lab of mad scientists is QoreStor, a stand-alone software-defined secondary storage solution. QoreStor installs on any server hardware platform and works with any backup software, and a free download will deduplicate up to 1TB of data at no charge. While at least one other vendor (Dell Technologies) offers a deduplication product that is available as a virtual appliance, only Quest QoreStor gives organizations the flexibility to install it on any hardware platform.
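For readers unfamiliar with how deduplication saves capacity, the toy example below shows the core idea behind products such as QoreStor: fingerprint blocks of data and store each unique block only once. It uses naive fixed-size blocks purely for illustration; production engines rely on far more sophisticated, typically variable-block, techniques.

```python
# Toy illustration of block-level deduplication, the core technique behind
# products such as QoreStor. Fixed-size blocks are used purely for
# illustration; production engines use more sophisticated approaches.
import hashlib

BLOCK_SIZE = 4096

def dedupe_store(stream, store):
    """Split a byte stream into blocks and keep one copy of each unique block.

    Returns the ordered list of block fingerprints (the "recipe") needed to
    reconstruct the stream from the shared block store.
    """
    recipe = []
    for i in range(0, len(stream), BLOCK_SIZE):
        block = stream[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)   # store unique blocks only
        recipe.append(digest)
    return recipe

store = {}
recipe = dedupe_store(b"A" * 40_960 + b"B" * 4_096, store)
print(len(recipe), "blocks referenced,", len(store), "unique blocks stored")
# -> 11 blocks referenced, 2 unique blocks stored
```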

VMworld 2018 Best of Show #2: LoginVSI Load Testing

Stop guessing on the impact of VDI growth with LoginVSI. Right-sizing the hardware for a new VDI deployment from any VDI provider (Citrix, Microsoft, VMware, etc.) is hard enough. But keeping the hardware appropriately sized as more VDI instances are added or as patches and upgrades are applied to existing instances can be dicey at best and downright impossible at worst.

LoginVSI helps take the guesswork out of any organization’s future VDI growth by:

  • examining the hardware configuration underlying the current VDI environment,
  • comparing that to the changes proposed for the environment, and then
  • making recommendations as to what changes, if any, need to be made to the environment to support
    • more VDI instances or
    • changes to existing VDI instances (a simplified sizing sketch follows this list).
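The following is a deliberately simplified, hypothetical headroom calculation in the spirit of that analysis. LoginVSI’s actual recommendations are driven by measured load-test results rather than a static formula like this.

```python
# A deliberately simplified, hypothetical headroom check. LoginVSI's actual
# recommendations come from measured load-test results, not a static
# calculation like this one.
def additional_sessions_supported(host_cores, host_mem_gib,
                                  per_session_cores, per_session_mem_gib,
                                  current_sessions, headroom=0.20):
    """Estimate how many more VDI sessions fit while preserving headroom."""
    usable_cores = host_cores * (1 - headroom)
    usable_mem = host_mem_gib * (1 - headroom)
    by_cpu = usable_cores / per_session_cores - current_sessions
    by_mem = usable_mem / per_session_mem_gib - current_sessions
    return max(int(min(by_cpu, by_mem)), 0)

# A host with 64 cores and 768 GiB of RAM, sessions sized at 0.5 core and
# 4 GiB each, already running 80 sessions.
print(additional_sessions_supported(64, 768, 0.5, 4, current_sessions=80))
```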

VMworld 2018 Best of Show #3: Runecast vSphere Environment Analyzer

Plug vSphere’s holes and keep it stable with Runecast. Runecast was formed by a group of former IBM engineers who over the years had become experts at two things while working at IBM: (1) installing and configuring VMware vSphere; and (2) pulling out their hair trying to keep each installation of VMware vSphere stable and up-to-date. To rectify this second issue, they left IBM and formed Runecast.

Runecast provides organizations with the software and tools they need to proactively:

  • identify needed vSphere patches and fixes,
  • identify security holes, and even
  • recommend best practices for tuning vSphere deployments, all with minimal to no manual intervention.

VMworld 2018 Best of Show #4: VxRail Hyper-converged Infrastructure

Kudos to Dell Technologies’ VxRail solution for saving every man, woman, and child in the United States $2 per person in 2018. I was part of a breakfast conversation on Wednesday morning with one of the Dell Technologies VxRail product managers. He anecdotally shared with the analysts present that Dell Technologies recently completed VxRail hyper-converged infrastructure deployments at two US government agencies. Each deployment is saving its agency $350 million annually. Collectively that amounts to $700 million, or roughly $2 for every person residing in the US. Thank you, Dell.




Predictive Analytics in Enterprise Storage: More Than Just Highfalutin Mumbo Jumbo

Enterprise storage startups are pushing the storage industry forward faster and in directions it may never have gone without them. It is because of these startups that flash memory is now the preferred place to store critical enterprise data. Startups also advanced the customer-friendly all-inclusive approach to software licensing, evergreen hardware refreshes, and pay-as-you-grow utility pricing. These startup-inspired changes delight customers, who are rewarding the startups with large follow-on purchases and Net Promoter Scores (NPS) previously unseen in this industry. Yet the greatest contribution startups may make to the enterprise storage industry is applying predictive analytics to storage.

The Benefits of Predictive Analytics for Enterprise Storage

Picture of Gilbert and Anne from Anne of Avonlea

Gilbert advises Anne to stop using “highfalutin mumbo jumbo” in her writing. (Note 1)

The end goal of predictive analytics for the more visionary startups goes beyond eliminating downtime. Their goal is to enable data center infrastructures to autonomously optimize themselves for application availability, performance and total cost of ownership based on the customer’s priorities.

The vendors that commit to this path and execute better than their competitors are creating value for their customers. They are also enabling their own organizations to scale up revenues without scaling out staff. Vendors that succeed in applying predictive analytics to storage today also position themselves to win tomorrow in the era of software-defined data centers (SDDC) built on top of composable infrastructures.

To some people this may sound like a bunch of “highfalutin mumbo jumbo”, but vendors are making real progress in applying predictive analytics to enterprise storage and other elements of the technical infrastructure. These vendors and their customers are achieving meaningful benefits including:

  • Measurably reducing downtime
  • Avoiding preventable downtime
  • Optimizing application performance
  • Significantly reducing operational expenses
  • Improving NPS

HPE Quantifies the Benefits of InfoSight Predictive Analytics

Incumbent technology vendors are responding to this pressure from startups in a variety of ways. HPE purchased Nimble Storage, the prime mover in this space, and plans to extend the benefits of Nimble’s InfoSight predictive analytics to its other enterprise infrastructure products. HPE claims its Nimble Storage array customers are seeing the following benefits from InfoSight:

  • 99.9999% of measured availability across its installed base
  • 86% of problems are predicted and automatically resolved before customers even realize there is an issue
  • 85% less time spent managing and resolving storage-related problems
  • 79% savings in operational expense (OpEx)
  • 54% of pinpointed issues turn out not to be storage problems; these are identified through InfoSight cross-stack analytics
  • 42 minutes: the average level three engineer time required to resolve an issue
  • 100% of issues go directly to level three support engineers, no time wasted working through level one and level two engineers

The Current State of Affairs in Predictive Analytics

HPE is certainly not alone on this journey. In fact, vendors are claiming some use of predictive analytics for more than half of the all-flash arrays DCIG researched.

Source: DCIG; N = 103

Telemetry Data is the Foundation for Predictive Analytics

Storage array vendors use telemetry data collected from the installed product base in a variety of ways. Most vendors evaluate fault data and advise customers how to resolve problems, or they remotely log in and resolve problems for their customers.

Many all-flash arrays transmit not just fault data, but extensive additional telemetry data about workloads back to the vendors. This data includes IOPS, bandwidth, and latency associated with workloads, front end ports, storage pools and more. Some vendors apply predictive analytics and machine learning algorithms to data collected across the entire installed base to identify potential problems and optimization opportunities for each array in the installed base.
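As a simple illustration of what applying analytics across an installed base can mean, the sketch below flags an array whose latency falls far outside the fleet-wide norm for comparable workloads. Real vendor implementations use much richer telemetry and models; this only shows the shape of the idea.

```python
# A deliberately simple sketch of the idea: compare one array's latency
# sample against the fleet-wide distribution for comparable workloads and
# flag outliers. Vendor analytics use far richer telemetry and models.
from statistics import mean, stdev

def is_latency_outlier(array_latency_ms, fleet_latencies_ms, threshold=3.0):
    """Flag an array whose latency sits more than `threshold` standard
    deviations above the fleet average."""
    mu = mean(fleet_latencies_ms)
    sigma = stdev(fleet_latencies_ms)
    if sigma == 0:
        return array_latency_ms > mu
    return (array_latency_ms - mu) / sigma > threshold

fleet = [0.8, 1.1, 0.9, 1.0, 1.2, 0.95, 1.05, 0.85]  # made-up samples (ms)
print(is_latency_outlier(4.2, fleet))  # True: well above the fleet norm
```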

Predictive Analytics Features that Matter

Proactive interventions identify something that is going to create a problem and then notify clients about the issue. Interventions may consist of providing guidance in how to avoid the problem or implementing the solution for the client. A wide range of interventions are possible including identifying the date when an array will reach full capacity or identifying a network configuration that could create a loop condition.
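One of the interventions named above, predicting the date an array reaches full capacity, can be illustrated with a simple linear projection over recent capacity samples. Vendors’ actual models are more sophisticated; this is only a minimal sketch of the concept, and the sample data is made up.

```python
# Minimal sketch of one intervention named above: projecting the date an
# array reaches full capacity. A straight-line fit over two samples stands
# in for the more sophisticated models vendors actually use.
from datetime import date, timedelta

def projected_full_date(samples, capacity_tb):
    """samples: list of (date, used_tb) observations, oldest first."""
    (d0, u0), (dn, un) = samples[0], samples[-1]
    growth_per_day = (un - u0) / (dn - d0).days
    if growth_per_day <= 0:
        return None  # usage is flat or shrinking; no full date to project
    days_left = (capacity_tb - un) / growth_per_day
    return dn + timedelta(days=days_left)

samples = [(date(2018, 1, 1), 40.0), (date(2018, 4, 1), 52.0)]  # made up
print(projected_full_date(samples, capacity_tb=100.0))
```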

Recommending configuration changes enhances application performance at a site by comparing the performance of the same application at similar sites, discovering optimal configurations, and recommending configuration changes at each site.

Tailored configuration changes prevent outages or application performance issues based on the vendor seeing and fixing problems caused by misconfigurations. The vendor deploys the fix to other sites that run the same applications, eliminating potential problems. The vendor goes beyond recommending changes by packaging the changes into an installation script that the customer can run, or by implementing the recommended changes on the customer’s behalf.

Tailored software upgrades eliminate outages based on the vendor seeing and fixing incompatibilities they discover between a software update and specific data center environments. These vendors use analytics to identify similar sites and avoid making the software update available to those other sites until they have resolved the incompatibilities. Consequently, site administrators are only presented with software updates that are believed to be safe for their environment.

Predictive Analytics is a Significant Yet Largely Untapped Opportunity

Vendors are already creating much value by applying predictive analytics to enterprise storage. Yet no vendor or product comes close to delivering all the value that is possible. A huge opportunity remains, especially considering the trends toward software-defined data centers and composable infrastructures. Reflecting for even a few minutes on the substantial benefits that predictive analytics is already delivering should prompt every prospective all-flash array purchaser to incorporate predictive analytics capabilities into their evaluation of these products and the vendors that provide them.

Note 1: Image source: https://jamesmacmillan.wordpress.com/2012/04/02/highfalutin-mumbo-jumbo/




Deduplication Still Matters in Enterprise Clouds as Data Domain and ExaGrid Prove

Technology conversations within enterprises increasingly focus on the “data center stack” with an emphasis on cloud enablement. While I agree with this shift in thinking, one can too easily overlook the merits of the underlying individual technologies when considering only the “Big Picture”. Such is happening with deduplication technology. Deduplication is a key enabler of enterprise archiving, data protection, and disaster recovery solutions, and as DCIG’s most recent 4-page Pocket Analyst Report reveals, vendors such as Dell EMC and ExaGrid deliver it in different ways that make each product family better suited for specific use cases.

It seemed for too many years enterprise data centers focused too much on the vendor name on the outside of the box as opposed to what was inside the box – the data and the applications. Granted, part of the reason for their focus on the vendor name is they wanted to demonstrate they had adopted and implemented the best available technologies to secure the data and make it highly available. Further, some of the emerging technologies necessary to deliver a cloud-like experience with the needed availability and performance characteristics did not yet exist, were not yet sufficiently mature, or were not available from the largest vendors.

That situation has changed dramatically. Now the focus is almost entirely on software that provides enterprises with cloud-like experiences that enable them to more easily and efficiently manage their applications and data. While this change is positive, enterprises should not lose sight of the technologies that make up their emerging data center stack, as those technologies are not all equally equipped to deliver these benefits in the same way.

A key example is deduplication. While this technology has existed for years and has become very mature and stable during that time, the ways in which enterprises can implement it and the benefits they will realize from it vary greatly. The deduplication solutions from Dell EMC Data Domain and ExaGrid illustrate these differences very well.

DCIG Pocket Analyst Report Compares Dell EMC Data Domain and ExaGrid Product Families

Deduplication systems from both Dell EMC Data Domain and ExaGrid have widespread appeal as they expedite backups, increase backup and recovery success rates, and simplify existing backup environments. They also both offer appliances in various physical configurations to meet the specific backup needs of small, midsize, and large enterprises while providing virtual appliances that can run in private clouds, public clouds, or virtualized remote and branch offices.

However, their respective systems also differ in key areas that will impact the overall effectiveness these systems will have in the emerging cloud data stacks that enterprises are putting in place. The six areas in which they differ include:

  1. Data center efficiency
  2. Deduplication methodology
  3. Networking protocols
  4. Recoverability
  5. Replication
  6. Scalability

The most recent 4-page DCIG Pocket Analyst Report analyzes these six attributes for the systems from these two providers and compares the underlying features that deliver on them. Further, this report identifies which product family has the advantage in each area and provides a feature comparison matrix to support these claims.

This report provides the key insight in a concise manner that enterprises need to make the right choice in deduplication solutions for their emerging cloud data center stack. This report may be purchased for $19.95 at TechTrove, a new third-party site that hosts and makes independently developed analyst content available for sale.

Cloud-like data center stacks that provide application and data availability, mobility, and security are rapidly becoming a reality. But as enterprises adopt these new enterprise clouds, they ignore or overlook technologies such as deduplication at their own peril, because the underlying technologies they implement can directly impact the overall efficiency and effectiveness of the cloud they are building.




A Business Case for ‘Doing Something’ about File Data Management

The business case for organizations with petabytes of file data under management to classify and then place it across multiple tiers of storage has never been greater. By distributing this data across disk, flash, tape and the cloud, they stand to realize significant cost savings. The catch is finding a cost-effective solution that makes it easier to administer and manage file data than simply storing it all on flash storage. This is where a solution such as what Quantum now offers comes into play.

Organizations love the idea of spending less money on primary storage – especially when they have multiple petabytes of file data residing on flash storage. Further, most organizations readily acknowledge that much of the file data residing on flash storage can reside on lower-cost, lower-performing media such as disk, the cloud, or even tape with minimal to no impact on business operations, provided the files are infrequently or never accessed but can still be retrieved relatively quickly and easily if required.

The problem they encounter is that the “cure” of file data management is worse than the “disease” of inaction. Their concerns focus on the file data management solution itself. Specifically, can they easily implement and then effectively use it in such a way that they derive value from it in both the short and long term? This uncertainty about whether implementing a file data management solution will prove easier than the status quo of “doing nothing” prompts organizations to do exactly that: nothing.

Quantum, in partnership with DataFrameworks and its ClarityNow! software, gives companies new motivation to act. Other data management and archival solutions give companies the various parts and pieces that they need to manage their file data. However, they leave it up to the customer and their integrators and/or consultants to implement it.

Quantum and DataFrameworks differ in that they offer the integrated, turnkey, end-to-end solution that gives organizations the confidence they need to proceed. Quantum has integrated DataFrameworks ClarityNow! software with its Xcellis scale-out storage and Artico archive gateway products to put companies on a fast track for effective file data management.

Source: Quantum

The Xcellis scale-out storage product was added to the Quantum product portfolio in 2015. Yet while the product is relatively new, the technology it uses is not – it bundles a server and storage with Quantum’s StorNext advanced data management software which has existed for years. Quantum packages it with its existing storage products to create an appliance-based solution for faster, more seamless deployments in organizations. Then, by giving organizations the option to include the DataFrameworks ClarityNow! software as part of the appliance, organizations get, in one fell swoop, the file data classification and management features they need in an appliance-based offering.

To give organizations a full range of cost-effective storage options, Quantum enables them to store data to the cloud, other disk storage arrays, and/or tape. As individuals store file data on the Xcellis scale-out storage and files age and/or become inactive, the ClarityNow! software recognizes these traits and others to proactively copy and/or move files to another storage tier. Alternatively, the Artico archive gateway can also be used in a NAS environment to move files onto the tier or tiers of storage based on preset policies.
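The policy logic behind that kind of tiering can be illustrated with a short, hypothetical sketch: scan a file system and identify files untouched beyond a cutoff as candidates for a cheaper tier. This is not the ClarityNow! implementation, which classifies on many more attributes than file age.

```python
# An illustrative tiering policy, not the ClarityNow! implementation: scan a
# file system and list files untouched beyond a cutoff as candidates for
# movement to a cheaper tier (object storage, tape, etc.).
import time
from pathlib import Path

def tiering_candidates(root, inactive_days=180):
    """Yield files whose last access time is older than `inactive_days`."""
    cutoff = time.time() - inactive_days * 86_400
    for path in Path(root).rglob("*"):
        if path.is_file() and path.stat().st_atime < cutoff:
            yield path

for candidate in tiering_candidates("/mnt/xcellis/projects"):  # hypothetical
    print(candidate)
```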

It should be noted that this solution makes particular sense in environments that have at least a few petabytes of data and potentially even tens or hundreds of petabytes of file data under management. It is only when an organization has this amount of file data under management that it makes sense to proceed with a robust file data management solution backed by enterprise IT infrastructure such as what Quantum offers.

It is time for organizations that have seen their file data stores swell to petabyte levels and are still doing nothing to re-examine that position. Quantum, with its Xcellis scale-out storage solution and its integration with DataFrameworks ClarityNow!, has taken significant strides to make it easier than ever for organizations to deploy the type of file data management solution they need and derive the value they expect. In so doing, organizations can finally see the benefits of “doing something” to bring the costs and headaches associated with file data management under control as opposed to simply “doing nothing.”

To subscribe and receive regular updates like this from DCIG, follow this link to subscribe to DCIG’s newsletter.

Note: This blog entry was originally published on June 28, 2017.



BackupAssist 10.0 Brings Welcomed Flexibility for Cloud Backup to Windows Shops

Today’s backup mantra seems to be backup to the cloud or bust! But backup to the cloud is more than just redirecting backup streams from a local file share to a file share presented by a cloud storage provider and clicking the “Start” button. Organizations must examine to which cloud storage providers they can send their data as well as how their backup software packages and sends the data to the cloud. BackupAssist 10.0 answers many of these tough questions about cloud data protection that businesses face while providing them some welcomed flexibility in their choice of cloud storage providers.

Recently I was introduced to BackupAssist, a backup software company that hails from Australia, and had the opportunity to speak with its founder and CEO, Linus Chang, about BackupAssist’s 10.0 release. The big news in this release is BackupAssist’s introduction of cloud-independent backup, which gives organizations the freedom to choose any cloud storage provider to securely store their Windows backup data.

In today’s IT environment, the flexibility to choose from multiple cloud storage providers as a backup target has become almost a prerequisite. Organizations increasingly want the ability to choose between one or more cloud storage providers for cost and redundancy reasons.

Further, availability, performance, reliability, and support can vary widely by cloud storage provider. These features may even vary by the region of the country in which an organization resides as large cloud storage providers usually have multiple data centers located in different regions of the country and world. This can result in organizations having very different types of backup and recovery experiences depending upon which cloud storage provider they use and the data center to which they send their data.

These factors and others make it imperative that today’s backup software give organizations more freedom in their choice of cloud storage providers, which is exactly what BackupAssist 10.0 provides. By giving organizations the freedom to choose from Amazon S3 and Microsoft Azure among others, they can select the “best” cloud storage provider for them. However, since the factors that constitute the “best” cloud storage provider can and probably will change over time, BackupAssist 10.0 gives organizations the flexibility to adapt to changing conditions as the situation warrants.

Source: BackupAssist

To ensure organizations experience success when they back up to the cloud, BackupAssist has also introduced three other cloud-specific features, which include:

  1. Compresses and deduplicates data. Capacity usage and network bandwidth consumption are the two primary factors that drive up cloud storage costs. By introducing compression and deduplication in this release, BackupAssist 10.0 helps organizations better keep these variable costs associated with using cloud storage under control.
  2. Insulated encryption. Every so often stories leak out about how government agencies subpoena cloud providers and ask for the data of their clients. Using this feature, organizations can fully encrypt their backup data to make it inaccessible to anyone without the encryption key.
  3. Resilient transfers. Nothing is worse than having a backup two-thirds to three-quarters complete only to have a hiccup in the network connection or on the server itself interrupt the backup and force one to restart it from the beginning. Minimally, this is annoying and disruptive to business operations. Over time, restarting backup jobs and resending the same backup data to the cloud can run up networking and storage costs. BackupAssist 10.0 ensures that if a backup job gets interrupted, it can resume from the point where it stopped while sending only the data required to complete the backup (a simplified sketch of this resume-from-checkpoint approach follows this list).
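Here is the simplified resume-from-checkpoint sketch referenced in item 3 above. It is not BackupAssist’s implementation; the upload_chunk function is a placeholder for whatever API the chosen cloud storage provider exposes.

```python
# Simplified sketch of the resume-from-checkpoint idea behind resilient
# transfers; it is not BackupAssist's implementation. upload_chunk is a
# placeholder for whatever API the chosen cloud storage provider exposes.
import json
from pathlib import Path

CHUNK_SIZE = 8 * 1024 * 1024  # 8 MiB per chunk

def upload_chunk(data, offset):
    """Placeholder: send one chunk to the cloud storage target."""
    raise NotImplementedError

def resilient_upload(path, state_file):
    """Upload `path` in chunks, recording progress so an interrupted job
    resumes from the last completed chunk instead of starting over."""
    state = Path(state_file)
    offset = json.loads(state.read_text())["offset"] if state.exists() else 0
    with open(path, "rb") as f:
        f.seek(offset)
        while chunk := f.read(CHUNK_SIZE):
            upload_chunk(chunk, offset)
            offset += len(chunk)
            state.write_text(json.dumps({"offset": offset}))
    state.unlink(missing_ok=True)  # transfer complete; clear the checkpoint
```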

In its 10.0 release, BackupAssist makes needed enhancements to ensure it remains a viable, cost-effective backup solution for businesses wishing to protect their applications running on Windows Server. While these businesses should keep some copies of data on local disk for faster backups and recoveries, the value of efficiently and cost-effectively keeping copies of their data offsite with cloud storage providers cannot be ignored. The 10.0 version of BackupAssist gives them the versatility to store data locally, in the cloud, or both with new flexibility to choose a cloud storage provider at any time that most closely aligns with their business and technical requirements.




Product Bells and Whistles do not a Tier One Storage Provider Make; Defining a Tier One Storage Provider, Part I

In almost every industry there is a tendency to use phrases such as Tier 1, Tier 2, and Tier 3 to describe providers, the products in a specific market, the quality of service provided, or some combination thereof. It is when one applies these three terms to the storage industry and tries to properly classify storage providers into these various tiers that the conversation becomes intriguing. After all, how does one define what constitutes and separates a Tier 1 storage provider from other providers in the market?

This topic was recently the subject of conversation with a longtime colleague of mine in the storage industry. Our conversation was initially about DCIG Buyer’s Guides. I talked a bit about the original methodology that DCIG used to develop its Buyer’s Guides and how DCIG recently updated its methodology to do a single body of research. That research is then used to create various Buyer’s Guide Editions by applying specific criteria to the underlying data to create each edition.

This intrigued him and prompted him to ask why certain storage providers that he would classify as “Tier 1” had their products listed alongside other providers that he viewed as Tier 2 or lower. His point was that a provider he would consider “Tier 1” potentially has more skin in the game, from a legal and liability standpoint as well as a testing perspective, when it claims to support a specific feature than some of those he considered Tier 2 providers might have.

He argued Tier 2 providers tend to have far more to gain and less to lose when saying they can support a feature than Tier 1 providers. By the Tier 2 provider claiming its product can support a specific feature, it can potentially elevate the view of its product in the eyes of a prospective customer versus its Tier 1 competitor. Further, the Tier 2 provider can be more aggressive in claiming support for a specific feature since it does not necessarily have to meet the same bar from either a legal or testing perspective that a Tier 1 provider may have to meet.

That is certainly a fair case to make. In the case of DCIG, it has found that most providers, regardless of how one may classify them, generally state the actual level of feature support for their products accurately. While we do occasionally run across those that, how shall we say, are a bit too aggressive in their claims of feature support, we see those as being more the outliers than the rule.

Thus, one should not automatically conclude that a provider one may view as Tier 2 that claims support for a specific feature is misrepresenting or overstating its ability to support that feature. It may be that the provider is ahead of the market in delivering this functionality. It may be that the feature on the product from the provider viewed as Tier 1 does not work or does not work well. It may even be that their definitions as to what constitutes feature support differ from the definition that DCIG uses.

While I understand where my friend is coming from, I cannot fully agree with him on this point. Am I inclined and do I want to believe that providers viewed as Tier 1 by organizations are telling the truth in terms of holding themselves to higher legal standards to accurately represent the features on their products and thoroughly testing these features before publicly stating they support these features? Absolutely.

But as someone who has also worked for a Fortune 500 company on products from providers viewed as Tier 1, I cannot equate their statements as always being truthful about feature functionality. Unfortunately, I have personally found that buying their products still does not guarantee that their features will work any better than products from providers viewed as Tier 2. In some cases, I have found that products from these other providers may have features that are better supported and more robust.

It is these attributes of support and provider size that I suspect get more to the heart of how organizations classify and view a Tier 1 storage provider anyway. Sure, they may ideally want a provider that they view as Tier 1 to deliver a product full of the bells and whistles that they could want either now or in the future. But at the end of the day, they typically care more about the provider’s ability to remain financially viable over time and who can respond to support calls when needed as opposed to their products possessing a specific feature today. I will talk more about that in my next blog entry in this series.




DCIG 2016-17 Hybrid Cloud Backup Appliance Buyer’s Guide Now Available

DCIG is pleased to announce the availability of the 2016-17 Hybrid Cloud Backup Appliance Buyer’s Guide developed from the backup appliance body of research. Other Buyer’s Guides based on this body of research include the recent DCIG 2016-17 Deduplicating Backup Appliance Buyer’s Guide and the forthcoming 2016-17 Integrated Backup Appliance Buyer’s Guide.

As core business processes become digitized, the ability to keep services online and to rapidly recover from any service interruption becomes a critical need. Given the growth and maturation of cloud services, many organizations are exploring the advantages of storing application data with cloud providers and even recovering applications in the cloud.

Hybrid cloud backup appliances (HCBA) are deduplicating backup appliances that include pre-integrated data protection software and integration with at least one cloud-based storage provider. An HCBA’s ability to replicate backups to the cloud supports disaster recovery needs and provides essentially infinite storage capacity.

The DCIG 2016-17 Hybrid Cloud Backup Appliance Buyer’s Guide weights, scores and ranks more than 100 features of twenty-three (23) products from six (6) different providers. Using ranking categories of Recommended, Excellent and Good, this Buyer’s Guide offers much of the information an organization should need to make a highly informed decision as to which hybrid cloud backup appliance will suit their needs.

Each backup appliance included in the DCIG 2016-17 Hybrid Cloud Backup Appliance Buyer’s Guide meets the following criteria:

  • Be available as a physical appliance
  • May also ship as a virtual appliance
  • Includes backup and recovery software that enables seamless integration into an existing infrastructure
  • Stores backup data on the appliance via on premise DAS, NAS or SAN-attached storage
  • Enables connectivity with at least one cloud-based storage provider for remote backups and long-term retention of backups in a secure/encrypted fashion
  • Provides the ability to connect the cloud-based backup images on more than one geographically dispersed appliance
  • Be formally announced or generally available for purchase on July 1, 2016

It is within this context that DCIG introduces the DCIG 2016-17 Hybrid Cloud Backup Appliance Buyer’s Guide. DCIG’s succinct analysis provides insight into the state of the hybrid cloud backup appliance marketplace. The Buyer’s Guide identifies the specific benefits organizations can expect to achieve using a hybrid cloud backup appliance, and key features organizations should be aware of as they evaluate products. It also provides brief observations about the distinctive features of each product. Ranking tables enable organizations to get an “at-a-glance” overview of the products, while DCIG’s standardized one-page data sheets facilitate side-by-side comparisons, assisting organizations to quickly create a short list of products that may meet their requirements.

End users registering to access this report via the DCIG Analysis Portal also gain access to the DCIG Interactive Buyer’s Guide (IBG). The IBG enables organizations to take the next step in the product selection process by generating custom reports, including comprehensive side-by-side feature comparisons of the products in which the organization is most interested.

By using the DCIG Analysis Portal and applying the hybrid cloud backup appliance criteria to the backup appliance body of research, DCIG analysts were able to quickly create a short list of products that meet these requirements, which was then, in turn, used to create this Buyer’s Guide Edition. DCIG plans to use this same process to create future Buyer’s Guide Editions that further examine the backup appliance marketplace.

Additional information about each buyer’s guide edition, including a download link, is available on the DCIG Buyer’s Guides page.




Data Visualization, Recovery, and Simplicity of Management Emerging as Differentiating Features on Integrated Backup Appliances

Enterprises now demand higher levels of automation, integration, simplicity, and scalability from every component deployed into their IT infrastructure, and the integrated backup appliances found in DCIG’s forthcoming Buyer’s Guide Editions covering this category are a clear outgrowth of those expectations. Intended for organizations that want to protect applications and data and then keep them behind corporate firewalls, these backup appliances come fully equipped from both hardware and software perspectives to do so.

Once largely assembled and configured by either IT staff or value added resellers (VARs), integrated backup appliances have gone mainstream and are available for use in almost any size organization. By bundling together both hardware and software, large enterprises get the turnkey backup appliance solution that just a few years ago was primarily reserved for smaller organizations. In so doing, large enterprises can eliminate the days, weeks, or even months they previously had to spend configuring and deploying these solutions into their infrastructure.

The evidence of the demand for backup appliances at all levels of the enterprise is made plain by the providers who bring them to market. Once the domain of providers such as STORServer and Unitrends, “software only” companies such as Commvault and Veritas have responded to the demand for turnkey backup appliance solutions with both now offering their own backup appliances under their respective brand names.


Commvault Backup Appliance


Veritas NetBackup Appliance

In so doing, any size organization may get any of the most feature-rich enterprise backup software solutions on the market, whether IBM Tivoli Storage Manager (STORServer), Commvault (Commvault and STORServer), Unitrends or Veritas NetBackup, delivered to them as a backup appliance. Yet while traditional all-software providers have entered the backup appliance market, behind the scenes new business demands are driving further changes in backup appliances that organizations should consider as they contemplate future backup appliance acquisitions.

  • First, organizations expect successful recoveries. A few years ago, the concept of all backup jobs completing successfully was enough to keep everyone happy and giving high-fives to one another. No more. Organizations now recognize that they have reliable backups residing on a backup appliance and that these appliances may largely sit idle during off-backup hours. This gives the enterprise some freedom to do more with these backup appliances during these periods of time, such as testing recoveries, recovering applications on the appliance itself, or even presenting these backup copies of data to other applications to use as sources for internal testing and development. DCIG found that a large number of backup appliances support one or more vCenter Instant Recovery features and that the emerging crop of backup appliances can also host virtual machines and recover applications on them.
  • Second, organizations want greater visibility into their data to justify business decisions. The amount of data residing in enterprise backup repositories is staggering. Yet the lack of value that organizations derive from that stored data, combined with the potential risk that retaining it presents, is equally staggering. Features that provide greater visibility into the metadata of these backups, analyze it, and help turn it into measurable value for the business are already starting to find their way onto these appliances. Expect these features to become more prevalent in the years to come.
  • Third, enterprises want backup appliances to expand their value proposition. Backup appliances are already easy to deploy, but maintaining and upgrading them or deploying them for other use cases gets more complicated over time. Emerging providers such as Cohesity, which is making its first appearance in DCIG Buyer’s Guides as an integrated backup appliance, directly address these concerns. Available as a scale-out backup appliance that can function as a hybrid cloud backup appliance, a deduplicating backup appliance, and/or an integrated backup appliance, it provides an example of how enterprises can more easily scale and maintain an appliance over time while gaining the flexibility to use it internally in multiple different ways.

The forthcoming DCIG 2016-17 Integrated Backup Appliance Buyer’s Guide Editions highlight the most robust and feature-rich integrated backup appliances available on the market today. As such, organizations should consider the backup appliances covered in these Buyer’s Guides as having many of the features they need to protect both their physical and virtual environments. Further, a number of these appliances give them early access to the set of features that will position them to meet their next set of recovery challenges, satisfy rising expectations for visibility into their corporate data, and simplify ongoing management so they may derive additional value from them.




SaaS Provider Pulls Back the Curtain on its Backup Experience with Cohesity; Interview with System Architect, Fidel Michieli, Part 3

Usually when I talk to backup and system administrators, they willingly talk about how great a product installation was. But it then becomes almost impossible to find anyone who wants to comment about what life is like after their backup appliance is installed. This blog entry represents a bit of an anomaly in that someone willingly pulled back the curtain on what that experience was like. In this third installment in my interview series, system architect Fidel Michieli describes how the implementation of Cohesity went in his environment and how Cohesity responded to issues that arose.

Jerome:  Once you had Cohesity deployed in your environment, can you provide some insights into how it operated and how upgrades went?

Fidel:  We have been through the upgrade process and the process of adding nodes twice. Those were the scary milestones that we did not test during the proof of concept (POC). Well, we did cover the upgrade process, but we did not cover adding nodes.

Jerome:  How did those upgrades go? Seamlessly?

Fidel:  The fact that our backup windows are small and we can run during the night essentially leaves all of our backup infrastructure idle during the day. If we take down one node at a time, we barely notice as we do not have anything running. But as a software company, we expect there to be a few bumps along the way, which we encountered.

Jerome:  Can you describe a bit about the “bumps” that you encountered?

Fidel:  We filled up the Cohesity cluster much faster than we expected which set its metadata sprawling. We went to 90-92 percent very quickly so we had to add in nodes in order to get the capacity back which was being taken up by its metadata.

Jerome:  Do you control how much metadata the Cohesity cluster creates?

Fidel:  The metadata size is associated with the amount of deduplicated data the cluster holds. As that grew, we started seeing some services restart and we got alerts of services restarting.

Jerome:  You corrected the out of capacity condition by adding more nodes?

Fidel:   Temporarily, yes.  Cohesity recognized we were not in a stable state and they did not want us to have a problem so they shipped us eight more nodes for us to create a new cluster.  [Editor’s Note:  Cohesity subsequently issued a new software release to store dedupe metadata more efficiently, which has since been implemented at this SaaS provider’s site.]

Jerome:  That means a lot that Cohesity stepped up to the plate to support its product.

Fidel:   It did. But while it was great that they shipped us the new cluster, I did not have any additional Ethernet ports to connect these new nodes as we did not have the additional port count in our infrastructure. To resolve this, Cohesity agreed to ship us the networking gear we needed. It talked to my network architect, found out what networking gear we liked, agreed to buy it and then shipped the gear to us overnight.

Further, my Cohesity system engineer calls me every time I open a support ticket and shows up here. He replies and makes sure that my ticket moves through the support queue. He came down to install the original Cohesity cluster and the upgrades to the cluster, which we have been through twice already. The support experience has been fantastic and Cohesity has taken all of my requests into consideration as it has released software upgrades to its product, which is great.

Jerome:  Can you share one of your requests that Cohesity has implemented into its software?

Fidel:  We needed to have connectivity to Iron Mountain’s cloud, and Cohesity got that certified with Iron Mountain so it works in a turnkey fashion. We also needed support for SQL Server, which Cohesity put into its road map at the time and recently delivered. We also needed Cohesity to certify support for Exchange 2016, so it expedited that support and it is now certified as well.

In part 1 of this interview series, Fidel shares the challenges that his company faced with its existing backup configuration as well as the struggles it encountered in identifying a backup solution that scaled to meet a dynamically changing and growing environment.

In part 2 of this interview series Fidel shares how he gained a comfort level with Cohesity prior to rolling it out enterprise-wide in his organization.

In part 4 of this interview series Fidel shares how Cohesity functions as both an integrated backup software appliance and a deduplicating target backup appliance in his company’s environment.




DCIG 2016-17 Deduplicating Backup Appliance Buyer’s Guide Editions Now Available

DCIG is pleased to announce the availability of the following DCIG 2016-17 Deduplicating Backup Appliance Buyer’s Guide Editions developed from the backup appliance body of research. Other Buyer’s Guide Editions based on this body of research will be published in the coming weeks and months, including the 2016-17 Integrated Backup Appliance Buyer’s Guide and 2016-17 Hybrid Cloud Backup Appliance Buyer’s Guide Editions.

Buyer’s Guide Editions being released on September 20, 2016:

  • DCIG 2016-17 Sub-$100K Deduplicating Backup Appliance Buyer’s Guide
  • DCIG 2016-17 Sub-$75K Deduplicating Backup Appliance Buyer’s Guide
  • DCIG 2016-17 Sub-$50K Deduplicating Backup Appliance Buyer’s Guide
  • DCIG 2016-17 US Enterprise Deduplicating Backup Appliance Buyer’s Guide

To be included in a DCIG 2016-17 Deduplicating Backup Appliance Buyer’s Guide Edition, products had to meet the following criteria:

  • Be intended for the deduplication of backup data, primarily target-based deduplication
  • Include a NAS (network attached storage) interface
  • Support the CIFS (Common Internet File System) or NFS (Network File System) protocols
  • Support a minimum of two (2) hard disk drives and/or a minimum raw capacity of eight terabytes
  • Be formally announced or generally available for purchase by July 1, 2016

The various Deduplicating Backup Appliance Buyer’s Guide Editions are based on at least one additional criterion, whether list price (Sub-$100K, Sub-$75K and Sub-$50K) or being from a US-based provider.

By using the DCIG Analysis Portal to apply these criteria to its body of research into backup appliances, DCIG analysts were able to quickly create a short list of products meeting these requirements, which in turn was used to create the Buyer’s Guide Editions being published and released. DCIG plans to use this same process to create future Buyer’s Guide Editions that examine hybrid cloud and integrated backup appliances, among others.
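To make the mechanics of this kind of criteria-based short-listing concrete, here is a minimal sketch in Python. The product records and field names are hypothetical illustrations, not DCIG’s actual research schema.

```python
# Minimal sketch of criteria-based short-listing. The record layout and
# field names are hypothetical, not DCIG's actual research schema.
products = [
    {"name": "Appliance A", "nas": True, "protocols": {"CIFS", "NFS"},
     "drives": 12, "raw_tb": 32, "list_price": 48000, "us_based": True},
    {"name": "Appliance B", "nas": False, "protocols": {"NFS"},
     "drives": 4, "raw_tb": 16, "list_price": 95000, "us_based": False},
]

def meets_base_criteria(p):
    """Apply the inclusion criteria described above."""
    return (p["nas"]
            and ({"CIFS", "NFS"} & p["protocols"])
            and (p["drives"] >= 2 or p["raw_tb"] >= 8))

# Example: build the Sub-$50K short list by adding one price criterion.
sub_50k = [p["name"] for p in products
           if meets_base_criteria(p) and p["list_price"] < 50000]
print(sub_50k)  # -> ['Appliance A']
```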

End users registering to access any of these reports via the DCIG Analysis Portal also gain access to the DCIG Interactive Buyer’s Guide (IBG). The IBG enables organizations to take the next step in the product selection process by generating custom reports, including comprehensive side-by-side feature comparisons of the products in which the organization is most interested.

Additional information about each buyer’s guide edition, including a download link, is available on the DCIG Buyer’s Guides page.




Veritas Lays Out Its Own Agenda for Enterprise Data Management at Veritas Vision

This year’s Veritas Vision 2016 conference held a lot of intrigue for me. The show itself was not new; Vision has been an ongoing event for years, though this was the first time in more than a decade that Veritas was free to set its own agenda for the entire show. Rather, the intrigue was in what direction the company would take going forward. Veritas answered that by communicating that it plans to align its product portfolio and strategy to deliver on an objective that has, to date, eluded enterprise organizations and vendors alike for at least two decades:  enterprise data management.

Veritas’ intent to deliver comprehensive data management to enterprise organizations is likely to be welcomed by executives in large organizations but also viewed with a certain amount of apprehension. This is not the first time that a major technology provider has cast a vision to bring all data under centralized management; providers such as EMC and IBM have both done so in the past. However, each of those prior attempts ended in an outcome best classified as somewhere between abject failure and total disaster.

Now Veritas stands before leaders in enterprise organizations and asks them to believe that it can accomplish what no prior technology provider has successfully been able to deliver. To its credit, it does have more proof points to support its claim that it can succeed where its competitors failed. Here’s why.

  1. Veritas probably protects more enterprise data using NetBackup than all of its competitors combined. This breadth of enterprise backup data under NetBackup’s purview puts Veritas in a unique position compared to its competitors to understand more than just how much data resides in organizations’ archival, backup, and production data stores. It also gives NetBackup unparalleled visibility into the data stored in these various storage silos using the metadata that it has captured and stored over the years. Using this metadata, Veritas can create an Information Map that shows organizations where their data resides and provides insight into what information these data stores contain and how frequently the data has been accessed. This breadth and quantity of historical information contained in NetBackup is insight that Veritas’ competitors simply lack.
  2. Veritas (for the most part) remains a pure software play, which aligns with the shift to software-defined data centers that enterprises want to make. While Veritas admittedly does sell a NetBackup appliance (and it sells a lot of them), there has been almost no discussion from Veritas executives at this event about expanding its hardware presence. Just the opposite, in fact. Veritas wants to boldly go into the software defined storage (SDS) software market and equip both cloud providers and enterprise organizations with the SDS software that they need to first create and then centrally manage a heterogeneous storage infrastructure. While I can envision Veritas building upon the success it has experienced with its NetBackup appliance in order to create a turnkey SDS software appliance, I see that more as a demand imposed upon it by its current and prospective customer base than a strategic initiative that it will aggressively seek to promote or grow.
  3. The technologies as well as the political structure within data centers have sufficiently evolved and matured to permit the adoption of an enterprise data management platform. 20 years ago, 10 years ago, and even a couple of years ago, enterprise organizations simply were not technically or politically ready for an initiative as aggressive as enterprise data management. The climate on both of those fronts has changed. On the technical side, the advent of flash, hyper-converged infrastructures (HCI), and software-defined flash have contributed to enterprise organizations getting more performance and consolidating their infrastructure at ever lower costs. As these infrastructure consolidations have occurred, IT head counts have remained flat or even declined, leaving them no time to pursue strategic initiatives such as implementing an enterprise data management platform available from third parties. However, by using Veritas NetBackup as the foundation for an enterprise data management platform, enterprise organizations can lay the groundwork to achieve this strategic initiative using the people and resources they already possess.

Veritas outlined an aggressive and potentially controversial vision for its own future as a software company, and its plan is not without potential pitfalls. Even as I sat through the multiple briefings at Vision, I had many reservations about the viability of its plan: about its ability to gather information as comprehensively throughout organizations as it hopes to do, and about its ability to deliver an SDS software solution that works the way enterprises will need it to work before they rely upon it in production.

That said, Veritas sits in the catbird seat from an enterprise perspective. I have to agree with its internal assessment that it is better positioned than any other enterprise software company to deliver on the vision as it has currently laid it out. The real task before Veritas now is to execute upon this vision and show some successes in the field, which it appears it has already begun to do. If that is the case, enterprise organizations may see the first vestiges of an enterprise data management platform that actually delivers on its promises sooner rather than later.




SimpliVity Hyper-converged Solution Already Driving Enterprises to Complete Do-overs of their Data Center Infrastructures

Technologies regularly come along that prompt enterprises to re-do their existing data center infrastructure. Whether it is improved performance, lower costs, more flexibility, improved revenue opportunities, or some combination of all of these factors, they drive enterprises to update or change the technologies they use to support their business.

But every now and then a technology comes along that prompts enterprises to undertake a complete do-over of their existing data center infrastructures. This type of dramatic change is already occurring within organizations of all sizes that are adopting and implementing SimpliVity.

Hyper-converged infrastructures have had my attention for some time now. Combining servers, storage, networking, virtualization, and data protection into a single solution and delivering it as an easy-to-manage and easy-to-scale appliance or software solution, hyper-converged infrastructures struck me, at a minimum, as novel and even disruptive for small and midsized business (SMB) environments. But as an enterprise play … well, let’s just say I was more than a bit dubious.

OmniStack Technology Snapshot

Source: SimpliVity

That viewpoint changed after attending SimpliVity Connect last week in San Francisco. At this intimate event of maybe 30 people (including SimpliVity employees, partners, and customers along with some analysts and press), I had unfettered access to the people within SimpliVity making the decisions and building the product, as well as to the partners and customers responsible for implementing and supporting SimpliVity’s OmniStack Technology.

Unlike too many analyst events where I sometimes sense customers, partners, and even the vendor feel obligated to curtail their answers or refrain from commenting, I saw very little of that at this event. If anything, when I challenged the customers on why they made the decision to implement SimpliVity, or the partners on why they elected to recommend SimpliVity over traditional distributed architectures (servers, storage, and networking), their answers were surprisingly candid and unrestrained.

One customer I spoke to at length over dinner was Blake Soiu, the IT director of Interland Corp, who thought at the outset that SimpliVity was simply too good to be true. After all, it promised to deliver servers, storage, networking, data protection, virtualization, and disaster recovery (<- yes, disaster recovery!) for less than what he would spend on refreshing his existing distributed architecture. Further, a refresh of his distributed architecture would only include the foundation for DR but not an actual working implementation of it. By choosing SimpliVity, he allegedly would also get DR.

Having heard promises like this in the past, his skepticism was palpable. But after testing SimpliVity’s product in his environment with his applications and then sharing the financial and technical benefits with Interland’s management team, the decision to switch to SimpliVity became remarkably easy to make.

As he privately told me over dinner, the primary concerns of the CEO and CFO are making money. The fact that they could lower their costs, improve the availability and recoverability of the applications in their infrastructures, and lower their risks was all that it took to convince them. On his side, he has realized a significant improvement in the quality of his life with the luxury of going home without being regularly called out. Further, he has a viable and working DR solution that was included as part of the overall implementation of SimpliVity.

Equally impressive were the responses from some of the value added resellers in attendance. One I spoke to at length was Ken Payne, the CTO of Abba Technologies. Abba is a former (and maybe still current) EMC reseller that offers SimpliVity as part of its product portfolio. However, Abba does more than offer technology products and services; it also consumes them as part of its CloudWorks offering.

Resellers such as Abba have a lot on the line, especially when they have partnerships with providers such as EMC. However, in evaluating SimpliVity for both internal use and as a potential offering to its customers, Payne felt Abba almost had no choice but to adopt it to stay at the front end of the technology curve, though doing so was difficult to say the least. He says, “It is akin to throwing out everything you ever knew and believed in about IT and starting over.”

Abba has since brought SimpliVity in-house to use as the foundation of its cloud offering and now offers it to its customers. The benefits from using SimpliVity have been evident almost from the outset. One of Abba’s customers, after using SimpliVity for three months, finally gave up on trying to monitor the status of backups made with SimpliVity’s native data protection feature.

However, he gave up not because they failed all of the time. Rather, they never failed and he was wasting his time monitoring them. On the status of Abba using SimpliVity internally, Payne says, “The amount of time that Abba spends managing and monitoring its own infrastructure has dropped from 45 percent to five percent. On some weeks, it is zero.”

To suggest a do-over of how one does everything is never easy and, to do it successfully, requires a certain amount of faith and, at this stage, a high degree of technical aptitude and an appreciation of how complex today’s distributed environments truly are. In spite of these obstacles, organizations such as Interland Corp and Abba Technologies are making this leap forward and executing do-overs of their data center infrastructures to simplify them, lower their costs, and gain new levels of flexibility and opportunities to scale that existing distributed architectures could not easily provide.

But perhaps more impressive is the fact that SimpliVity is already finding its way into Global 50 enterprise accounts and displacing working, mission-critical applications. These wins suggest that SimpliVity is ready for more than do-overs in SMB or even small and midsized enterprise (SME) data centers. They tell me that leading-edge, large enterprises are ready for this type of do-over in their data center infrastructures and have the budget and, maybe more importantly, the fortitude and desire to do so.




Dell NetVault and vRanger are Alive and Kicking; Interview with Dell’s Michael Grant, Part 3

Every now and then I hear rumors in the marketplace that the only backup software product Dell puts any investment into is Dell Data Protection | Rapid Recovery while it lets NetVault and vRanger wither on the vine. Nothing could be further from the truth. In this third and final part of my interview series with Michael Grant, director of data protection product marketing for Dell’s systems and information management group, he refutes those rumors and illustrates how both the NetVault and vRanger products are alive and kicking within Dell’s software portfolio.

Jerome: Can you talk about the newest release of NetVault?

Michael: Dell Data Protection | NetVault Backup, as we now call it, continues to be an important part of our portfolio, especially if you are an enterprise shop that protects more than Linux, Windows and VMware. If you have a heterogeneous, cross-platform environment, NetVault does the job incredibly effectively and at a very good price. NetVault development keeps up with all the revs of the various operating systems. This is not a small list of to-dos. Every time anybody revs anything, we rev additional agents and provide updates to support them.

Dell NetVault Backup product image

Source: Dell

In this current rev we also improved the speed and performance of NetVault. We now have a protocol accelerator, so we can keep less data on the wire. Within the media server itself, we also had to improve the speed, and we wanted to address more clients. Customers protect thousands of clients using NetVault and they want to add even more than that. To accommodate them, we automate the installation so that it is effective, easily scalable and not a burden to the administrator.

To speed up protection of the file system, we put multi-stream capability into the product, so one can break up bigger backup images into smaller images and then simultaneously stream those to the target of your choice. Obviously, we love to talk to organizations about putting the DR deduplication appliances in as that target, but because we believe in giving customers flexibility and choice, you can multi-stream to just about any target.
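Conceptually, multi-streaming amounts to splitting one large backup image into chunks and moving those chunks in parallel. The sketch below illustrates the general idea only; it is not Dell’s NetVault implementation, and the chunk size, file layout and helper names are assumptions.

```python
# Conceptual sketch of multi-streaming: split one large backup image into
# fixed-size chunks and write them to the target concurrently.
# Illustrative only; not NetVault's implementation.
import os
from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 256 * 1024 * 1024  # 256 MiB per stream (assumed value)

def send_chunk(source_path, offset, length, target_dir, index):
    """Read one chunk from the backup image and write it to the target."""
    with open(source_path, "rb") as src:
        src.seek(offset)
        data = src.read(length)
    with open(os.path.join(target_dir, f"chunk_{index:06d}.bin"), "wb") as dst:
        dst.write(data)

def multi_stream_backup(source_path, target_dir, streams=4):
    """Stream chunks of one backup image to the target in parallel."""
    size = os.path.getsize(source_path)
    with ThreadPoolExecutor(max_workers=streams) as pool:
        for i, offset in enumerate(range(0, size, CHUNK_SIZE)):
            pool.submit(send_chunk, source_path, offset,
                        min(CHUNK_SIZE, size - offset), target_dir, i)
```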

Re-startable VMware backup addresses another big pain point for a lot of our customers. They really bent our development team’s ear and said, “Listen, going back and restarting the backup of an entire VMDK file is a pain if it doesn’t complete. You guys need to put an automatic restart in the product.”

Think about watching a show on DVR. If you did not make it all the way through the show in the first sitting, you don’t want to have to go back to the beginning and re-watch the entire thing the next time you watch it. You want to pick up where you left off.

Well, we actually put a similar capability in NetVault. We can restart the VM backup from wherever the previous backup ended. You can just pick back up, knowing that you have the last decently mountable restore point from the point in time when the backup trailed off. Just restart the VM backup and get the whole job done. That cuts hours out of your day if you did not get a full backup of a VM.

Sadly, backing up VMDK files, particularly in a dynamic environment, can be a real challenge. It is not unusual to have one fail midway through the job or to find an incomplete job when you go to look in the queue. Restarting that VM backup just made a lot of sense for the IT teams.
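The restart behavior Grant describes boils down to checkpointing progress so an interrupted copy can resume instead of starting over. Here is a minimal, hypothetical sketch of that idea; it is not NetVault code, and the checkpoint format and chunk size are assumptions.

```python
# Conceptual sketch of a restartable copy: persist a checkpoint of the last
# fully written offset so an interrupted backup resumes where it left off.
# Illustrative only; not NetVault code.
import json, os

CHUNK = 64 * 1024 * 1024  # 64 MiB per write (assumed value)

def resumable_copy(source, target, checkpoint_file):
    done = 0
    if os.path.exists(checkpoint_file):
        with open(checkpoint_file) as f:
            done = json.load(f)["bytes_copied"]   # resume point

    size = os.path.getsize(source)
    mode = "r+b" if done else "wb"                # append to an existing target on resume
    with open(source, "rb") as src, open(target, mode) as dst:
        src.seek(done)
        dst.seek(done)
        while done < size:
            buf = src.read(min(CHUNK, size - done))
            dst.write(buf)
            dst.flush()
            done += len(buf)
            with open(checkpoint_file, "w") as f:  # record progress after each chunk
                json.dump({"bytes_copied": done}, f)
```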

Those new features really highlight what is new in the NetVault 11 release that we just announced. Later in the first half of this year, you will see the accompanying updates to the agents for NetVault 11 so that we remain in sync with the latest releases from everybody from Oracle through Citrix and VMware, as well as any other agents that need to be updated to align with this NetVault 11 release.

Jerome:  Is the functionality of vRanger and AppAssure now being folded under the Rapid Recovery brand?

Michael: That’s a little too far. We are blending the technologies, to be sure. But we are still very much investing in vRanger and it remains a very active part of our portfolio. To quote the famous Mark Twain line, “the tales of vRanger’s death are greatly exaggerated.”

Dell vRanger product image

Source: Dell

We are still investing in it and it’s still very popular with customers. In fact, we made an aggressive price change in the fall to combine vRanger Pro with the standard vRanger offering. We rolled in three years of service and made it all vRanger Pro. Then we dropped the price point by several hundred dollars, so it’s less than any of the other entry-level price points for virtualized backup in the industry. We will continue to invest in that product for dynamic virtual environments.

So, yes, you will absolutely still see it as a standalone product. However, even with that being the case, there is no reason that we should not reach in there and get some amazing code and start to meld that with Rapid Recovery. As DCIG has pointed out in its research and, as our customers tell us frequently, they would like to have as few backup tools in their arsenal as possible, so we will continue to blend those products to simplify data protection for our customers. The bottom line for us is, wherever the customer wants to go, we can meet them there with a solution that fits.

Jerome: How are you positioning each of these three products in terms of market segment?

Michael:  I do want to emphasize that we focus very much on the midmarket. We define midmarket as 500 to 5,000 employees. When we took a look at who really buys these products, we found that 90-plus percent of our solutions are being deployed by midmarket firms. The technologies we have just talked about are well aligned to that market, and that makes them pretty unique. The midmarket is largely underserved when it comes to IT solutions in general, but especially when it comes to backup and recovery. We are focusing on filling a need that has gone unfilled for too long.

In Part 1 of this interview series, Michael shares some details on the latest features available in Dell’s data protection line and why organizations are laser-focused on recovery like never before.

In Part 2 of this interview series, Michael elaborates upon how the latest features available in Dell’s data protection line enable organizations to meet the shrinking SLAs associated with these new recovery objectives.




Hyper-converged Infrastructure Adoption is a Journey, not a Destination

Few data center technologies currently generate more buzz than hyper-converged infrastructure solutions. By combining compute, data protection, flash, scale-out, and virtualization into a single self-contained unit, organizations get the best of what each of these individual technologies has to offer with the flexibility to implement each one in such a way that it matches their specific business needs. Yet organizations must exercise restraint in how many attributes they ascribe to hyper-converged infrastructure solutions as their adoption is a journey, not a destination.


In the last few years momentum around hyper-converged infrastructure solutions has been steadily building and for good reason. Organizations want:

  • The flexibility and power of server virtualization
  • To cost-effectively implement the performance of flash in their environment
  • To grow compute or storage with fewer constraints
  • To know their data is protected and easily recoverable
  • To spend less time managing their infrastructure and more time managing their business

Hyper-converged infrastructure solutions more or less check all of these boxes. In so doing, organizations are shifting how they approach everything from how they manage their data centers to their buying habits. For instance, rather than making independent server, networking and storage buying decisions, organizations are making a single purchase of a hyper-converged solution that addresses all of these specific needs.

But here is the trap that organizations should avoid. Some providers promote the idea that hyper-converged infrastructures can replace all of these individual components in any size data center. While that idea may someday come to pass, that day is not today and, in all likelihood, will never be fully realized.

Hyper-converged infrastructure solutions as they stand today are primarily well suited to the needs of small and perhaps mid-sized data centers. That said, the architecture of hyper-converged infrastructure solutions lends itself very well to moving further up the stack into ever larger data centers in the not too distant future as their technologies and feature sets mature.

But as their capabilities and features mature, hyper-converged infrastructure solutions will still not become plug-n-play where organizations can set-‘em-and-forget-‘em. While an element of those concepts may always exist in hyper-converged solutions, they more importantly lay the groundwork for a needed and necessary evolution in how organizations manage their data centers.

Currently organizations still spend far too much time managing their IT infrastructure at a component level. As such, they do not really get the full value out of their IT investment, with many of their IT resources utilized at less than optimal levels even as they remain too difficult to efficiently and effectively manage.

By way of example, what should be relatively routine tasks such as data migrations during server or storage upgrades or replacements typically remain fraught with risk and exceedingly difficult to accomplish. While providers have certainly made strides in recent years to eliminate some of the difficulty and risk associated with this task, it is still not the predictable, repeatable process that organizations want it to be and that it realistically should be.

This is really where hyper-converged infrastructure solutions come into play. They put a foundation into place that organizations can use to help transform the world of IT from the Wild West that it too often is today back into a discipline that offers the more predictable and understandable outcomes that organizations expect and which IT should rightfully provide.

Organizations of all sizes that look at hyper-converged infrastructure solutions today already find a lot in them to like. The breadth of their features coupled with their ease of installation and ongoing management certainly helps to make the case for adoption. However, smart organizations should look at hyper-converged infrastructure solutions more broadly: as a means to introduce a platform they can use to start on a journey toward building a more stable and predictable IT environment that they can then leverage in the years to come.




X-IO Refers to iglu as “Intelligent Storage”; I Say It is “Thoughtful”

In the last couple of weeks X-IO announced a number of improvements to its iglu line of storage arrays – namely flash-optimized controllers and stretch clustering. But what struck me in listening to X-IO present the new features of this array was how it kept referring to the iglu as “intelligent.” While that term may be accurate, when I look at iglu’s architecture and data management features and consider them in light of what small and midsize enterprises need today, I see the iglu’s architecture as “thoughtful.”

Anyone familiar with the X-IO product line knows that it has its origins in hardware excellence. It was one of the first to offer a 5-year warranty for its arrays. It has been almost maniacal in its quest to drive every ounce of availability, performance and reliability out of the hard disk drives (HDDs) used in its systems. It designed its arrays in such a way that it made them very easy and practical to scale without having to sacrifice performance or manageability to do so.

But like many providers of HDD arrays, X-IO has seen its apple cart upset by the advent of flash. Hardware features once seen as imperative, or at least highly desirable, on HDD-based arrays may now matter little or not at all. In some cases, they even impede flash’s adoption in the array.

X-IO arguably also encountered some of these same issues when flash first came out. To address this, its ISE and iglu arrays carried forward X-IO’s strength of having an in-depth understanding of the media in its arrays. These arrays now leverage that understanding to optimize the performance of flash media and drive up to 600,000 IOPS on either its hybrid or all-flash arrays. The kicker, and what I consider to be the “thoughtful” part of X-IO’s array architecture, is how it equips organizations to configure these two arrays.

X-IO ISE storage system product image

Source: X-IO Technologies

One of the primary reasons behind flash’s rapid adoption has been its high levels of performance – up to 10x or more than that of HDDs. This huge performance boost has led many organizations to deploy flash to support their most demanding applications.

Yet this initial demand for performance by many organizations, coupled with the need for storage providers to quickly deliver on this demand, resulted in many all-flash arrays either lacking needed data management features (clustering, replication, snapshots, etc.) or delivering versions of these services that were not really enterprise ready.

This left organizations in a bit of a quandary: buy all-flash arrays now that offered the performance their applications needed, or wait until the data management features on them were sufficiently mature?

X-IO’s iglu and ISE product lines address these concerns (which is why I refer to their design as “thoughtful”). Organizations may start with the ISE 800 Series G3 All-Flash Array data storage system, which offers the performance that many organizations initially want and need when deploying flash. It comes with the data management features (LUN management, Web GUI, etc.) that organizations need to do baseline management of the array. However, it does not provide the fuller suite of features that organizations may need an array to offer before they deploy it more widely in their data center.

It is when organizations are ready to scale and use flash beyond just a point solution in their data center that the iglu blaze fx comes into play. The iglu introduces these more robust data management services that many organizations often need an array to offer to justify deploying it more broadly in their data center.

The decoupling of performance and data management services, as X-IO has done with its ISE Data Storage Systems and iglu Enterprise Storage Systems, reflects a very thoughtful way for organizations to introduce flash into their environment. It also gives X-IO a means to independently innovate and deliver on both data management and performance features without organizations having to pay for features they do not need.

The recent announcements about its flash optimized controllers and stretch clustering on its iglu blaze fx illustrate this mindset perfectly. Organizations that need raw performance can get that baseline functionality that flash offers by continuing to deploy the ISE 800 data storage system. But for those who are ready to use flash more widely in their data center, need more functionality than simple performance to achieve that goal and are ready to make the investment without sacrificing the investment in flash that they have already made, X-IO offers such a solution. That is what I call thoughtful.




HP 3PAR StoreServ 8000 Series Lays Foundation for Flash Lift-off

Almost any hybrid or all-flash storage array will accelerate performance for the applications it hosts. Yet many organizations need a storage array that scales beyond just accelerating the performance of a few hosts. They want a solution that both solves their immediate performance challenges and serves as a launch pad to using flash more broadly in their environment.

Yet putting flash in legacy storage arrays is not the right approach to accomplish this objective. Enterprise-wide flash deployments require purpose-built hardware backed by Tier-1 data services. The HP 3PAR StoreServ 8000 series provides a fundamentally different hardware architecture and complements this architecture with mature software services. Together these features provide organizations the foundation they need to realize flash’s performance benefits while positioning them to expand their use of flash going forward.

A Hardware Foundation for Flash Success

Organizations almost always want to immediately realize the performance benefits of flash and the HP 3PAR StoreServ 8000 series delivers on this expectation. While flash-based storage arrays use various hardware options for flash acceleration, the 8000 series complements the enterprise-class flash HP 3PAR StoreServ 20000 series while separating itself from competitive flash arrays in the following key ways:

  • Scalable, Mesh-Active architecture. An Active-Active controller configuration and a scale-out architecture are considered the best of traditional and next-generation array architectures. The HP 3PAR StoreServ 8000 series brings these options together with its Mesh-Active architecture which provides high-speed, synchronized communication between the up-to-four controllers within the 8000 series.
  • No internal performance bottlenecks. One of the secrets to the 8000’s ability to successfully transition from managing HDDs to SSDs and still deliver on flash’s performance benefits is its programmable ASIC. The HP 3PAR ASIC, now in its 5th generation, is programmed to manage flash and optimize its performance, enabling the 8000 series to achieve over 1 million IOPS.
  • Lower costs without compromise. Organizations may use lower-cost commercial MLC SSDs (cMLC SSDs) in any 8000 series array. Leveraging its Adaptive Sparing technology and Gen5 ASIC, the 8000 series optimizes capacity utilization within cMLC SSDs to achieve high levels of performance, extends media lifespan (backed by a 5-year warranty), and increases usable drive capacity by up to 20 percent.
  • Designed for enterprise consolidation. The 8000 series offers both 16Gb FC and 10Gb Ethernet host-facing ports. These give organizations the flexibility to connect performance-intensive applications using Fibre Channel or cost-sensitive applications via either iSCSI or NAS using the 8000 series’ File Persona feature. Using the 8000 Series, organizations can start with configurations as small as 3TB of usable flash capacity and scale to 7.3TB of usable flash capacity.

A Flash Launch Pad

As important as hardware is to experiencing success with flash on the 8000 series, HP made a strategic decision to ensure its converged flash and all-flash 8000 series models deliver the same mature set of data services that it has offered on its all-HDD HP 3PAR StoreServ systems. This frees organizations to move forward in their consolidation initiatives knowing that they can meet enterprise resiliency, performance, and high availability expectations even as the 8000 series scales over time to meet future requirements.

For instance, as organizations consolidate applications and their data on the 8000 series, they will typically consume less storage capacity thanks to the 8000 series’ native thin provisioning and deduplication features. While storage savings vary, HP finds these features usually result in about a 4:1 data reduction ratio, which helps drive down the effective price of flash on an 8000 series array to as low as $1.50/GB.
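The arithmetic behind an “effective” $/GB figure is simple: divide the raw media price by the data reduction ratio. The $6.00/GB raw figure below is an assumed value chosen only so the math lands on the numbers cited above; it is not a published HP price.

```python
# Back-of-the-envelope math behind an "effective" $/GB figure.
raw_price_per_gb = 6.00        # assumed raw flash $/GB (illustrative only)
data_reduction_ratio = 4.0     # ~4:1 from thin provisioning + deduplication

effective_price_per_gb = raw_price_per_gb / data_reduction_ratio
print(f"${effective_price_per_gb:.2f}/GB effective")  # -> $1.50/GB effective
```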

Maybe more importantly, organizations will see minimal to no slowdown in application performance even as they implement these features, as they may be turned on even when running mixed production workloads. The 8000 series compacts data and accelerates application performance by again leveraging its Gen5 ASICs to do system-wide striping and optimize flash media for performance.

Having addressed these initial business concerns around cost and performance, the 8000 series also brings along the HP 3PAR StoreServ’s existing data management services that enable organizations to effectively manage and protect mission-critical applications and data. Some of these options include:

  • Accelerated data protection and recovery. Using HP’s Recovery Manager Central (RMC), organizations may accelerate and centralize application data protection and recovery. RMC can schedule and manage snapshots on the 8000 series and then directly copy those snapshots to and from HP StoreOnce without the use of a third-party backup application.
  • Continuous application availability. The HP 3PAR Remote Copy software replicates data to another location either asynchronously or synchronously. This provides recovery point objectives (RPOs) of minutes or seconds, or even non-disruptive application failover.
  • Delivering on service level agreements (SLAs). The 8000 series’ Quality of Service (QoS) feature ensures that high priority applications get access to the resources they need ahead of lower priority ones, including setting sub-millisecond response times for these applications. However, QoS also ensures lower priority applications are serviced and not crowded out by higher priority applications.
  • Data mobility. HP 3PAR StoreServ creates a federated storage pool to facilitate non-disruptive, bi-directional data movement between any of up to four (4) midrange or high end HP 3PAR arrays.

Onboarding Made Fast and Easy

Despite the benefits that flash technology offers and the various hardware and software features that the 8000 series provides to deliver on flash’s promise, migrating data to the 8000 series is sometimes viewed as the biggest obstacle to its adoption. As organizations may already have a storage array in their environment, moving its data to the 8000 series can be both complicated and time-consuming. To deal with these concerns, HP provides a relatively fast and easy process for organizations to migrate data to the 8000 series.

In as few as five steps, existing hosts may discover the 8000 series and then access their existing data on their old array through the 8000 series without requiring any external appliance. As hosts switch to using the 8000 series as their primary array, Online Import non-disruptively copies data from the old array to the 8000 series in the background. As it migrates the data, the 8000 series also reduces the storage footprint by as much as 75 percent using its thin-aware functionality, which only copies blocks that contain data as opposed to copying all blocks in a particular volume.
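Thin-aware copying is conceptually straightforward: consult an allocation map and move only the blocks that actually hold data. The sketch below illustrates that idea under assumed interfaces (the allocation bitmap and volume objects are hypothetical); it is not HP’s Online Import code.

```python
# Conceptual illustration of a "thin-aware" migration: copy only blocks
# marked as allocated, skipping empty ones. The bitmap and volume interfaces
# are hypothetical, not HP's Online Import API.
BLOCK_SIZE = 1024 * 1024  # 1 MiB blocks (assumed value)

def migrate_thin(source_vol, target_vol, allocation_bitmap):
    """Copy only allocated blocks from the source volume to the target."""
    copied = skipped = 0
    for block_index, allocated in enumerate(allocation_bitmap):
        if not allocated:
            skipped += 1          # nothing was ever written here; skip it
            continue
        offset = block_index * BLOCK_SIZE
        source_vol.seek(offset)
        data = source_vol.read(BLOCK_SIZE)
        target_vol.seek(offset)
        target_vol.write(data)
        copied += 1
    return copied, skipped
```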

Maybe most importantly, data migrations from EMC, HDS or HP EVA arrays (and others to come) to the 8000 series may occur in real time. Hosts read data from volumes on either the old array or the new 8000 series, while writing only to the 8000 series. Once all data is migrated, access to volumes on the old array is discontinued.

Achieve Flash Lift-off Using the HP 3PAR StoreServ 8000 Series

Organizations want to introduce flash into their environment but they want to do so in a manner that lays a foundation for their broader use of flash going forward without creating a new storage silo that they need to manage in the near term.

The HP 3PAR StoreServ 8000 series delivers on these competing requirements. Its robust hardware and mature data services work hand-in-hand to provide both the high levels of performance and Tier-1 resiliency that organizations need to reliably and confidently use flash now and then expand its use in the future. Further, they can achieve lift-off with flash as they can proceed without worrying about how they will either keep their mission-critical apps online or cost-effectively migrate, protect or manage their data once it is hosted on flash.




Sorting through the Nuances in How Quality of Service is Implemented on Hybrid Storage Arrays

DCIG recently released two Buyer’s Guides on hybrid storage arrays – the DCIG 2015-16 SME Hybrid Storage Array Buyer’s Guide and the DCIG 2015-16 Midsize Enterprise Hybrid Storage Array Buyer’s Guide – that examine many of the features that hybrid storage arrays offer. Yet these Guides can only reveal at a high level which features hybrid storage arrays implement, without getting into any real detail about how they are implemented. One such feature is Quality of Service.

The availability of Quality of Service or QoS as a feature on hybrid storage arrays is one that DCIG views as critical for these arrays to possess. As they are often deployed in highly virtualized environments and will need to support mixed application workloads, QoS gives organizations the flexibility to prioritize workloads on the hybrid storage array to provide predictable and controllable levels of performance for these various workloads.

While this concept is fairly straightforward to articulate, it is surprisingly difficult to successfully execute upon in a manner that satisfies every situation that organizations are likely to encounter in their environment. This is illustrated by the 13 different ways that QoS is implemented on the 27 hybrid storage arrays that were evaluated in its 2015-16 Midsize Enterprise Hybrid Storage Array Buyer’s Guide.

Fully 30 percent of the 27 hybrid storage arrays that DCIG evaluated offered no QoS functionality whatsoever. That means 13 different QoS options were spread across the remaining 19 or so hybrid storage arrays. In other words, it can be said with a high degree of certainty that nearly every vendor takes a slightly different tack when implementing QoS on its hybrid storage array.

Despite all of the differences between how QoS is implemented on these hybrid storage arrays, there are some similarities in how QoS is made available. Consider the following chart that shows the five most common ways in which QoS is available on midsize hybrid storage array models (DCIG defines midsize as those hybrid storage arrays scaling up to but not including 1 PB of capacity.)

Hybrid Storage Arrays QoS

The most common way that QoS is implemented on hybrid storage arrays is by auto-balancing I/O across all VMs, volumes or LUNs with just under 60% of arrays supporting this option. This QoS technique ensures that I/Os from all attached applications are serviced equally to provide some guarantee of performance. The drawback with this technique is that it treats I/O from all applications as equal and fails to prioritize and service I/O from higher priority applications over lower priority applications. While this approach is better than not having QoS at all, it may not meet the needs of organizations that need their hybrid storage array to handle the workloads of dozens or even hundreds of applications with varying priorities.

Predefined service levels go a step further toward making sure that I/Os from higher priority applications are serviced ahead of those from lower priority applications. By providing Gold, Silver and Bronze service levels and assigning each VM, volume or LUN to one of these service levels, I/Os from applications in the Gold service level are given priority over those in lower service levels. However, here again certain issues can arise. Specifically, I/Os from lower priority applications may not be serviced in a reasonable amount of time as I/Os from higher priority applications crowd them out.

This challenge has given rise to a number of QoS features that take various approaches to ensure that even I/O from lower priority applications gets attention in a timely manner. QoS features such as Minimum IOPS, Maximum IOPS and Max Response Time each address these lower priority I/Os in a different way.

The Minimum IOPS QoS feature does its best to ensure that a performance-intensive though lower priority application always gets a certain amount of array resources to maintain a minimum number of IOPS. The Maximum IOPS feature tries to accomplish the same goal, though by slightly different means. It allows a lower priority application to essentially run unchecked in terms of generating IOPS until it reaches a certain maximum threshold. At that point, the QoS feature on the hybrid storage array begins to throttle and limit the number of IOPS to the predefined maximum.

A third way hybrid storage arrays attempt to ensure that even IOPS from lower priority apps are serviced is to use Max Response Time. In this scenario, I/Os from lower priority applications are put into a queue where they may wait up to a certain length of time (say 5-10 milliseconds) before they are serviced. Since a growing number of hybrid storage arrays service I/Os in 1 millisecond or less, this technique frees the hybrid storage array to potentially service up to 10 I/Os from a higher priority application before it services the I/O from a lower priority application.
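To see how these techniques interact, the sketch below models a simple dispatcher that services queues in strict priority order but promotes any lower priority I/O that has waited past its maximum response time. The service level names, wait limits, and function names are illustrative assumptions, not any vendor’s actual QoS engine.

```python
# Conceptual sketch of priority-based QoS with a Max Response Time guarantee.
# Service levels and wait limits are illustrative only.
import time
from collections import deque

MAX_WAIT = {"silver": 0.005, "bronze": 0.010}  # seconds; gold is always first
queues = {"gold": deque(), "silver": deque(), "bronze": deque()}

def submit(io, level):
    """Enqueue an I/O tagged with its arrival time."""
    queues[level].append((time.monotonic(), io))

def next_io():
    """Pick the next I/O to service."""
    now = time.monotonic()
    # Promote any lower priority I/O that has exceeded its max response time.
    for level in ("bronze", "silver"):
        if queues[level] and now - queues[level][0][0] > MAX_WAIT[level]:
            return queues[level].popleft()[1]
    # Otherwise service in strict priority order: gold, then silver, then bronze.
    for level in ("gold", "silver", "bronze"):
        if queues[level]:
            return queues[level].popleft()[1]
    return None
```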

The big takeaways for organizations evaluating hybrid storage arrays are two-fold. First, if you expect your hybrid storage array to host dozens or even hundreds of VMs, selecting a hybrid storage array that supports QoS becomes almost a necessity. Second, the more QoS options that are available on a hybrid storage array, the better the odds that you will be able to deliver the level of QoS that the various applications hosted on the array need.

To get more information and see how many QoS options each hybrid storage array supports, end-users may register and download the latest DCIG 2015-16 Midsize Enterprise Hybrid Storage Array Buyer’s Guide at no charge by following this link to the DCIG Analysis Portal. I also take a look at QoS and other features of hybrid storage arrays in a webinar that I recorded yesterday with NexGen Storage. You may view that webinar by following this link.
