HCI Comparison Report Reveals Key Differentiators Between Dell EMC VxRail and Nutanix NX

Many organizations view hyper-converged infrastructure (HCI) as the data center architecture of the future. Dell EMC VxRail and Nutanix NX appliances are two leading options for creating the enterprise hybrid cloud. Visibility into their respective data protection ecosystems, enterprise application certifications, solution integration, support for multiple hypervisors, scalability and maturity should help organizations choose the most appropriate solution for them.

HCI Appliances Deliver Radical Simplicity

Hyper-converged infrastructure appliances radically simplify the data center architecture. These pre-integrated appliances accelerate and simplify infrastructure deployment and management. They combine and virtualize compute, memory, storage and networking functions from a single vendor in a scale-out cluster. Thus, the stakes are high for vendors such as Dell EMC and Nutanix as they compete to own this critical piece of data center real estate.

In the last several years, HCI has also emerged as a key enabler for cloud adoption. These solutions provide connectivity to public and private clouds, and offer their own cloud-like properties. Ease of scaling, simplicity of management, plus non-disruptive hardware upgrades and data migrations are among the features that enterprises love about these solutions.

HCI Appliances Are Not All Created Equal

Many enterprises are considering HCI solutions from providers Dell EMC and Nutanix. A cursory examination of these two vendors and their solutions quickly reveals similarities between them. For example, both companies control the entire hardware and software stacks of their HCI appliances. Also, both providers pretest firmware and software updates and automate cluster-wide roll-outs.

Nevertheless, important differences remain between the products. Due to the high level of interest in these products, DCIG published an initial comparison in November 2017. Both providers recently enhanced their offerings. Therefore, DCIG refreshed its research and has released an updated head-to-head comparison of the Dell EMC VxRail and Nutanix NX appliances.

Updated DCIG Pocket Analyst Report Reveals Key HCI Differentiators

In this updated report, DCIG identifies six ways the HCI solutions from these two providers currently differentiate themselves from one another. This succinct, 4-page report includes a detailed feature matrix as well as insight into key differentiators between these two HCI solutions such as:

  • Breadth of ecosystem
  • Enterprise applications certified
  • Multi-hypervisor flexibility
  • Scalability
  • Solution integration
  • Vendor maturity

DCIG is pleased to make this updated DCIG Pocket Analyst Report available for purchase for $99.95 via the TechTrove marketplace. The report is temporarily also available free of charge with registration from the Unitrends website.




Dell EMC VxRail vs Nutanix NX: Six Key HCIA Differentiators

Many organizations view hyper-converged infrastructure appliances (HCIAs) as the data center architecture of the future. Dell EMC VxRail and Nutanix NX appliances are two leading options for creating the enterprise hybrid cloud. Visibility into their respective data protection ecosystems, enterprise application certifications, solution integration, support for multiple hypervisors, scalability and maturity should help organizations choose the most appropriate solution for them.

Hyper-converged infrastructure appliances (HCIA) radically simplify the next generation of data center architectures. Combining and virtualizing compute, memory, storage, networking, and data protection functions from a single vendor in a scale-out cluster, these pre-integrated appliances accelerate and simplify infrastructure deployment and management. As such, the stakes are high for vendors such as Dell EMC and Nutanix that are competing to own this critical piece of data center infrastructure real estate.

In the last several years, HCIAs have emerged as a key enabler for cloud adoption. These solutions provide connectivity to public and private clouds, and offer their own cloud-like properties. Ease of scaling, simplicity of management, and non-disruptive hardware upgrades and data migrations highlight the list of features that enterprises are coming to know and love about these solutions.

But as enterprises consider HCIA solutions from providers such as Dell EMC and Nutanix, they must still evaluate key features available on these solutions as well as the providers themselves. A cursory examination of these two vendors and their respective solutions quickly reveals similarities between them. For example, both companies control the entire hardware and software stacks of their respective HCIA solutions. Both pre-test firmware and software updates holistically and automate cluster-wide roll-outs.

Despite these similarities, differences between them remain. To help enterprises select the product that best fits their needs, DCIG published its first comparison of these products in November 2017. There is a high level of interest in these products, and both providers recently enhanced their offerings. Therefore, DCIG refreshed its research and has released an updated head-to-head comparison of the Dell EMC VxRail and Nutanix NX appliances.

In this updated report, DCIG identifies six ways the HCIA solutions from these two providers currently differentiate themselves from one another. This succinct, 4-page report includes a detailed product matrix as well as insight into key differentiators between these two HCIA solutions such as:

  • Breadth of ecosystem
  • Enterprise applications certified
  • Multi-hypervisor flexibility
  • Scalability
  • Solution integration
  • Vendor maturity

DCIG is pleased to make this updated DCIG Pocket Analyst Report available for purchase for $99.95 via the TechTrove marketplace.




Two Hot Technologies to Consider for Your 2019 Budgets

Hard to believe, but the first day of autumn is just two days away, and with fall weather always come cooler temperatures (which I happen to enjoy!). This means people are staying inside a little more and doing those fun end-of-year activities that everyone enjoys, such as planning their 2019 budgets. As you do so, solutions from BackupAssist and StorMagic are two hot new technologies for companies to consider making room for in the New Year.

BackupAssist 365

BackupAssist 365 backs up files and emails stored in the cloud. While backup of cloud-based data may seem rather ho-hum in today’s AI-obsessed, blockchain-obsessed, digital transformation-focused world, it solves a real-world problem that organizations of nearly every size face: how to cost-effectively and simply protect all those pesky files and emails that people store in cloud applications such as Dropbox, Office 365, Google Drive, OneDrive, Gmail, Outlook and others.

To do so, BackupAssist 365 adopted two innovative yet practical approaches to protect files and emails.

  • First, it interfaces directly with these various cloud providers to back up this data. Using your login permissions (which you provide when configuring the software), BackupAssist 365 accesses data directly in the cloud. This negates the need for your server, PC, or laptop to be turned on when these backups occur, so backups can run at any time.
  • Second, it does cloud-to-local backup. In other words, rather than running up the additional data transfer and network costs that come with backing up to another cloud, it backs the data up to local storage on your site. While that may seem a little odd in today’s cloud-centric world, companies can get a great deal of storage capacity for nominal amounts of money. Since it only does an initial full backup and then differential backups thereafter, the ongoing data transfer costs are nominal and the amount of storage capacity one should need onsite is equally small (a minimal sketch of this approach follows this list).
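
To make the cloud-to-local approach concrete, here is a minimal sketch of a "copy everything once, then copy only what changed" loop. It is purely illustrative: it does not use BackupAssist’s actual code or APIs, and the `list_cloud_files` helper simply stands in for whatever listing call a given cloud service exposes.

```python
import hashlib
import json
import shutil
from pathlib import Path

STATE_FILE = Path("backup_state.json")   # remembers what earlier runs already copied
LOCAL_STORE = Path("local_backup")       # inexpensive on-site storage target


def list_cloud_files(source_dir: Path):
    """Stand-in for a cloud provider's listing call; here it just walks a folder."""
    for path in source_dir.rglob("*"):
        if path.is_file():
            yield path


def fingerprint(path: Path) -> str:
    """Hash file contents so unchanged files can be skipped on later runs."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def backup(source_dir: Path) -> None:
    state = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}
    LOCAL_STORE.mkdir(exist_ok=True)

    for cloud_file in list_cloud_files(source_dir):
        key = str(cloud_file.relative_to(source_dir))
        digest = fingerprint(cloud_file)
        if state.get(key) == digest:
            continue                          # unchanged: no transfer, no extra cost
        dest = LOCAL_STORE / key
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(cloud_file, dest)        # first run copies everything (the full)
        state[key] = digest                   # later runs copy only what changed

    STATE_FILE.write_text(json.dumps(state, indent=2))


if __name__ == "__main__":
    backup(Path("cloud_mirror"))              # hypothetical folder synced from a cloud account
```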

Perhaps the best part about BackupAssist 365 is its cost (or lack thereof). BackupAssist 365 licenses its software on a per-user basis, with each user email account counting as one user license. However, this one license covers the backup of that user’s data in any cloud service the user relies on. Further, the cost is only $1/month per user, decreasing as the number of users grows. In fact, the cost is so low on a per-user basis that companies may not even need to budget for this service. They can just start using it and expense it to their credit cards to keep it below corporate radar screens.

StorMagic SvSAN

The StorMagic SvSAN touches on two other hot technology trends that I purposefully (or not so purposefully) left out above: hyper-converged infrastructure (HCI) and edge computing. However, unlike many of the HCI and edge computing plays in the marketplace, such as Cisco HyperFlex, Dell EMC VxRail, and Nutanix, StorMagic has not forgotten about the cost constraints that branch, remote, and small offices face.

As Cisco, Dell EMC, Nutanix and others chase the large enterprise data center opportunities, they often leave remote, branch, and small offices with two choices: pay up or find another solution. Many of these size offices are opting to find alternative solutions.

This is where StorMagic primarily plays. For a less well-known player, they play much bigger than they may first appear. Through partnerships with large providers such as Cisco and Lenovo among others, StorMagic comes to market with highly available, two-server systems that scale across dozens, hundreds, or even thousands of remote sites. To get a sense of StorMagic’s scalability, walk into any of the 2,000+ Home Depots in the United States or Mexico and ask to look at the computer system that hosts their compute and storage. If the Home Depot lets you and you can find it, you will find a StorMagic system running somewhere in the store.

The other big challenge that each StorMagic system addresses is security. Because these systems can be deployed almost anywhere in any environment, they are susceptible to theft. In fact, one of StorMagic’s representatives shared a story in which someone drove a forklift through the side of a building and stole a computer system at one of its customer sites. Not that it mattered. To counter these types of threats, StorMagic encrypts all the data on its HCI solutions with its own FIPS 140-2 compliant software.

Best of all, companies do not have to break the bank to acquire these capabilities. The list price for the Standard Edition of the SvSAN software, which includes 2TB of usable storage, high availability, and remote management, is $2,500.

As companies look ahead and plan their 2019 budgets, they need to take care of their operational requirements, but they may also want to dip their toes in the water with the latest and greatest technologies. These two technologies give companies the opportunity to do both. Using BackupAssist 365, companies can quickly and easily address their pesky cloud file and email backup challenges, while StorMagic gives them the opportunity to affordably and safely explore the HCI and edge computing waters.




Differentiating between High-end HCI and Standard HCI Architectures

Hyper-converged infrastructure (HCI) architectures have staked an early claim as the data center architecture of the future. The ability of HCI solutions to easily scale capacity and performance, simplify ongoing management, and deliver cloud-like features has led many enterprises to view HCI as a logical path forward for building on-premises clouds.

But when organizations deploy HCI at scale (more than eight nodes in a single logical configuration), cracks in its architecture can begin to appear unless organizations carefully examine how well it scales. On the surface, high-end and standard hyper-converged architectures deliver the same benefits. They each virtualize compute, memory, storage networking, and data storage in a simple to deploy and manage scale-out architecture. They support standard hypervisor platforms. They provide their own data protection solutions in the form of snapshots and replication. In short, these two architectures mirror each other in many ways.

However, high-end and standard HCI solutions differ in functionality in ways that primarily surface in large virtualized deployments. In these environments, enterprises often need to:

  • Host thousands of VMs in a single, logical environment.
  • Provide guaranteed, sustainable performance for each VM by insulating them from one another.
  • Offer connectivity to multiple public cloud providers for archive and/or backup.
  • Seamlessly move VMs to and from the cloud.

It is in these circumstances that differences between high-end and standard architectures emerge, with the high-end HCI architecture possessing four key advantages over standard HCI architectures. Consider:

  1. Data availability. Both high-end and standard HCI solutions distribute data across multiple nodes to ensure high levels of data availability. However, only high-end HCI architectures use separate, distinct compute and data nodes, with the data nodes dedicated to guaranteeing high levels of data availability for VMs on all compute nodes. This ensures that should any compute node become unavailable, the data associated with any VM on it remains available (modeled in the sketch following this list).
  2. Flash/performance optimization. Both high-end and standard HCI architectures take steps to keep data local to the VM by storing the data of each VM on the same node on which the VM runs. However, solutions based on high-end HCI architectures store a copy of each VM’s data on the VM’s compute node as well as on the high-end HCI architecture’s underlying data nodes to improve and optimize flash performance. High-end HCI solutions such as the Datrium DVX also deduplicate and compress all data while limiting the amount of inter-nodal communication to free resources for data processing.
  3. Platform manageability/scalability. As a whole, HCI architectures are intuitive to size, manage, scale, and understand. In short, if more performance and/or capacity is needed, one only needs to add another node. However, enterprises often must support hundreds or even thousands of VMs that each must perform well and be protected. Trying to deliver on those requirements at scale using a standard HCIA solution, where inter-nodal communication is a prerequisite, becomes almost impossible. High-end HCI solutions facilitate granular deployments of compute and data nodes while limiting inter-nodal communication.
  4. VM insulation. Noisy neighbors—VMs that negatively impact the performance of adjacent VMs—remain a real problem in virtual environments. While both high-end and standard HCI architectures support storing data close to VMs, high-end HCI architectures do not rely on the availability of other compute nodes. This reduces inter-nodal communication and the probability that the noisy neighbor problem will surface.
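
To illustrate points 1 and 2 above, the following toy model (an assumption-laden sketch, not any vendor’s implementation) shows a write path in which a VM’s data is cached on its compute node for speed while durable copies live only on dedicated data nodes, so losing the compute node does not lose the data:

```python
from dataclasses import dataclass, field


@dataclass
class ComputeNode:
    name: str
    flash_cache: dict = field(default_factory=dict)   # local copy for fast access


@dataclass
class DataNode:
    name: str
    store: dict = field(default_factory=dict)         # durable copy for availability


class HighEndCluster:
    """Toy write path in which compute and data nodes are separate and distinct."""

    def __init__(self, compute_nodes, data_nodes, replicas=2):
        self.compute_nodes = {n.name: n for n in compute_nodes}
        self.data_nodes = list(data_nodes)
        self.replicas = replicas

    def write(self, compute_node, vm, block_id, data):
        # 1. Keep a copy in the local flash cache of the node running the VM (speed).
        self.compute_nodes[compute_node].flash_cache[(vm, block_id)] = data
        # 2. Persist copies on dedicated data nodes (availability); durability never
        #    depends on other compute nodes, which also limits inter-nodal chatter.
        for node in self.data_nodes[: self.replicas]:
            node.store[(vm, block_id)] = data

    def read_after_compute_failure(self, vm, block_id):
        # Even with the VM's compute node gone, its data remains on the data nodes.
        for node in self.data_nodes:
            if (vm, block_id) in node.store:
                return node.store[(vm, block_id)]
        raise KeyError("block not found")


cluster = HighEndCluster(
    [ComputeNode("c1"), ComputeNode("c2")],
    [DataNode("d1"), DataNode("d2"), DataNode("d3")],
)
cluster.write("c1", "vm42", "block-0", b"payload")
assert cluster.read_after_compute_failure("vm42", "block-0") == b"payload"
```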

HCI architectures have resonated with organizations in large part because of their ability to deliver cloud-like features and functionality while maintaining their simplicity of deployment and ongoing maintenance. However, next-generation high-end HCI architectures, with solutions available from providers like Datrium, give organizations greater flexibility to deliver cloud-like functionality at scale, offering better management and optimization of flash performance, improved data availability, and increased insulation between VM instances. To learn more about how standard and high-end HCI architectures compare, check out the recent DCIG Pocket Analyst Report available on the TechTrove website.




Dell EMC VxRail vs Nutanix NX: Eight Key HCIA Differentiators

Hyper-converged infrastructure appliances (HCIA) radically simplify the next generation of data center architectures. Combining and/or virtualizing compute, memory, storage, networking, and data protection functions from a single vendor in a scale-out cluster, these pre-integrated appliances accelerate and simplify infrastructure deployments and management. As such, the stakes are high for Dell EMC and Nutanix, which are competing to own this critical piece of data center infrastructure real estate.

In the last couple of years, HCIAs have emerged as a key enabler for cloud adoption. Aside from the connectivity to public and private clouds that these solutions often provide, they offer their own cloud-like properties. Ease of scaling, simplicity of management, and non-disruptive hardware upgrades and data migrations highlight the list of features that enterprises are coming to know and love about these solutions.

But as enterprises adopt HCIA solutions from providers such as Dell EMC and Nutanix, they must still evaluate key features available on these solutions as well as the providers themselves. A cursory examination of these two vendors and their respective solutions quickly reveals similarities between them.

Both companies control the entire hardware and software stacks of their respective HCIA solutions as they pre-test firmware and software updates holistically and automate cluster-wide roll-outs. Despite these similarities, differences between them remain. To help enterprises select the product that best fits their needs, DCIG has identified eight ways the HCIA solutions from these two companies currently differentiate themselves from one another.

DCIG is pleased to make a recent DCIG Pocket Analyst Report that compares these two HCIA products available for a complimentary download. This succinct, 4-page report includes a detailed product matrix as well as insight into eight key differentiators between these two HCIA solutions and which one is best positioned to deliver on key data center considerations such as:

  1. Breadth of ecosystem
  2. Data center storage integration
  3. Enterprise applications certified
  4. Licensing
  5. Multi-hypervisor flexibility
  6. Scaling options
  7. Solution integration
  8. Vendor stability

To access and download this report at no charge for a limited time, simply follow this link and complete the registration form on the landing page.




Comtrade Software HYCU Serves as a Bellwether for Accelerated Adoption of Hyperconverged Platforms

In today’s business world, where new technologies constantly come to market, there are signs that indicate when certain ones are gaining broader market adoption and are ready to go mainstream. Such an event occurred this month when a backup solution purpose-built for Nutanix was announced.

This product minimizes the need for users of Nutanix’s hyperconverged infrastructure solution to parse through multiple products to find the right backup solution for them. Now they can turn to Comtrade Software’s HYCU software, confident that they will get a backup solution purpose-built to protect VMs and applications residing on the Nutanix Acropolis hyperconverged infrastructure platform.

In the history of every new platform that comes to market, certain tipping points occur that validate and accelerate its adoption. One such event is the availability of other products built specifically to run on that platform that make it more practical and/or easier for users of that platform to derive more value from it. Such an event for the Nutanix Acropolis platform occurred this month when Comtrade Software brought to market its HYCU backup software, which is specifically designed to protect VMs and applications running on the Nutanix Acropolis hyperconverged platform.

The availability of this purpose-built data protection solution from Comtrade Software for Nutanix is significant in three ways.

  • It signifies that the number of companies adopting hyperconverged infrastructure solutions has reached critical mass in the marketplace and that this technology is poised for larger growth.
  • It suggests that current backup solutions do not deliver the breadth of functionality that administrators of hyperconverged infrastructure solutions need; that they cost too much; that they are too complicated to use; or some combination of all three.
  • It indirectly validates that Nutanix is the market leader in providing hyperconverged infrastructure solutions as Comtrade placed its bets on first bringing a solution to market that addresses the specific backup and recovery challenges that Nutanix users face.

Considering that Comtrade Software’s HYCU is just out of the gate, it offers a significant amount of functionality that makes it a compelling data protection solution for any Nutanix deployment. One of Comtrade’s design goals was to make it as simple as possible to deploy and manage backup over time in Nutanix environments. While this is typically the goal of every product that comes to market, Comtrade Software’s HYCU stands apart with its ability to detect the application running inside each VM.

One challenge that administrators routinely face is the inability to easily discern which applications run inside a VM without first tracking down the VM owner and/or the application owner to obtain that information. In the demo of HYCU that I saw, it eliminates the need to chase down these individuals, as it can look inside a VM to identify which application and operating system it hosts. Once it has this information, the most appropriate backup policies for that VM may be assigned.
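
Conceptually, the workflow resembles the hypothetical sketch below: inspect what is running inside a VM, map it to a known application, and pick a policy accordingly. The process signatures, policy values, and function names are all assumptions for illustration; they are not HYCU’s actual detection logic or API.

```python
# Hypothetical illustration only: the signatures, policies, and function names
# below are assumptions, not HYCU's actual detection logic or API.

APP_SIGNATURES = {
    "sqlservr": "Microsoft SQL Server",
    "oracle": "Oracle Database",
    "w3wp": "IIS Web Server",
}

POLICY_FOR_APP = {
    "Microsoft SQL Server": {"rpo_hours": 1, "app_consistent": True},
    "Oracle Database": {"rpo_hours": 1, "app_consistent": True},
    "IIS Web Server": {"rpo_hours": 12, "app_consistent": False},
    "Unknown": {"rpo_hours": 24, "app_consistent": False},
}


def detect_application(running_processes):
    """Map process names seen inside a VM to a known application, if any."""
    for proc in running_processes:
        for signature, app in APP_SIGNATURES.items():
            if signature in proc.lower():
                return app
    return "Unknown"


def assign_backup_policy(vm_name, running_processes):
    """Pick the most appropriate policy based on what the VM is found to host."""
    app = detect_application(running_processes)
    policy = POLICY_FOR_APP[app]
    print(f"{vm_name}: detected {app}, applying {policy}")
    return policy


assign_backup_policy("finance-db-01", ["sqlservr.exe", "sqlagent.exe"])
assign_backup_policy("intranet-web", ["w3wp.exe"])
```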

Equally notable about the Comtrade Software HYCU product is its management interface. Rather than requiring administrators to learn a new management interface to perform backups and recoveries, it presents one that closely, if not exactly, replicates the interface used by Nutanix.

Every platform that attains broad acceptance in the marketplace reaches a point where partners come alongside it and begin to offer solutions either built upon it or that do a better job of performing certain tasks, such as data protection and recovery. Comtrade Software offering its HYCU data protection software serves as a bellwether for where Nutanix sits in the hyperconverged marketplace. By coming to market when it has, Comtrade Software positions its HYCU offering as the front runner in this emerging space, as it currently has no competitors that offer purpose-built backup software for Nutanix hyperconverged infrastructure deployments.




DCIG Quick Look: Acquisition of SimpliVity Fits Right into HPE’s Broader Hybrid IT Strategy

Last week HPE announced its acquisition of SimpliVity, a provider of enterprise hyper-converged infrastructure solutions. While that announcement certainly made news in the IT industry, the broader implications of this acquisition signaled that enterprise IT providers such as HPE could no longer sit on the sidelines and merely be content to partner with providers such as SimpliVity as hyper-converged solutions rapidly become a growing percentage of enterprise IT. If HPE wanted its fair share of this market, it was imperative that it act sooner rather than later to ensure it remained a leading player in this rapidly growing market.

The good news is that in acquiring SimpliVity, HPE chose a product that aligns well with its existing hybrid IT strategy.

HPE Hybrid IT Strategy

The challenge with this strategy as defined is that HPE’s existing Hyper Converged 380 simply did not have the chops to scale into the enterprise. It was a great solution for the midmarket environments for which it was intended, offering fast deployments, simplified ongoing management, and right-sizing for those environments. But when looking at enterprise IT environments that demand greater scalability and data management services, the HPE HC 380 did not quite fit the bill.

Enter HPE’s acquisition of SimpliVity.

SimpliVity stormed onto the hyper-converged infrastructure market a few years ago. While it certainly had success in the midmarket with its products, it more importantly also had success in the enterprise market which many of its competitors failed to breach. I was even present at some of its analyst conferences where end-users openly talked about leveraging SimpliVity to displace enterprise level converged infrastructure solutions. Heady times indeed, for a company so new to the market.

Clearly HPE took notice, probably in part because I suspect some of these displacements involved SimpliVity cutting in on its turf. However, an enterprise caliber hyper-converged infrastructure offering was a gap that HPE needed to fill in its enterprise product portfolio anyway. In this case, if you can’t beat ‘em, buy ‘em, which is exactly what HPE is in the process of doing.

In the analyst briefing that I attended last week, it was obvious that HPE had a clear vision of how it intended to merge SimpliVity into its portfolio of offerings. In talking about how SimpliVity would fit into its hybrid IT strategy, HPE explained how enterprise IT would consist of the traditional IT stack (servers, storage, and networking), hyper-converged infrastructure, and the cloud.

Yet that emerging enterprise IT framework did not dissuade HPE. Rather, HPE recognized that it could cleanly incorporate SimpliVity into this IT architecture by creating what it terms “composable workloads” that encompass its existing 3PAR StoreServ and cloud platforms as well as the new SimpliVity platform. Using its tools (which I assume are still to be developed), application workloads can dynamically be placed on any of these platforms and then moved if and when needed.

Further adding to SimpliVity’s appeal was the availability of enterprise data management services. More importantly, HPE uncovered that these services worked and were actually in use. In researching SimpliVity prior to the acquisition, HPE found that many if not most of SimpliVity’s existing customers used its compression and deduplication services as well as its built-in data protection features. In other words, SimpliVity did more than “check the box” that it offered these features. It delivered them in a manner that companies could confidently deploy and use in their environments.

HPE admitted that SimpliVity’s offering still has some work to do in the areas of ease of use and speed of deployment to match enterprise expectations. But considering that HPE has probably done as much or more work than almost any other enterprise provider to deliver on these types of expectations with its 3PAR and StoreOnce lines of storage, SimpliVity should experience great success near and long term in its forthcoming home at HPE.




HPE Practices What it Preaches to Bring about Its Own Digital Transformation

Change. Digital transformation. Disrupt. Eat your own young. These were just some of the terms and phrases uttered at this past week’s HPE Discover event in Las Vegas by HPE executives at all levels of the organization. Yet in the face of the changes that are about to sweep through the technology industry, a technology provider that touches as many organizations around the world as HPE does needs to have more than this type of mindset. It needs to have the products and strategy in place to back it up. Based upon what I saw at HPE Discover last week, HPE is executing upon these requirements.

The changes preparing to take place in the IT infrastructure of organizations of all sizes are probably the most substantial since the advent of distributed computing in the late 80s and early 90s. Then, like now, a new IT architecture was beginning to emerge within IT infrastructures that would fundamentally re-shape how organizations accessed their data and managed their applications.

However, this next wave of change, unlike distributed computing which decentralized data, compute, and storage, is more of a hybrid between mainframe and distributed computing. It leverages powerful, mobile edge devices such as phones, tablets, laptops, and PCs to capture and visualize data, and combines those with powerful, scalable, centralized solutions that manage, store, and analyze the data using resources from both public and private cloud service providers. While this is a somewhat simplified description of this emerging hybrid IT architecture, it highlights the new benefits the architecture provides to organizations even as it attempts to mitigate or even eliminate the historical drawbacks of both of the architectures upon which it is built.

But as technology providers in general and infrastructure providers in particular compete to bring solutions to market that align with this new hybrid architecture, it puts them in an uncomfortable and even a precarious position. They must sell solutions that may, in some cases, undermine, displace, or even devalue their existing products and solutions with new technologies and solutions that deliver more flexibility, scale and performance at a lower price.

This is both the threat and the opportunity that technology providers currently face, and from which HPE is not exempt. Like other providers, it has a full portfolio of IT infrastructure products ranging from its 3PAR StoreServ to StoreOnce to its line of StoreVirtual products. These are, at a minimum, under attack and could even be displaced by this new hybrid cloud architecture unless HPE acts smartly, swiftly and with a high degree of precision. To its credit, HPE appears to be executing on all fronts to evolve and adapt its existing product lines to provide the new types of functionality and form factors that organizations are coming to expect and demand. Consider:

  1. Bundling Docker containers with every server it ships. HPE rightly recognizes Docker containers as the disruptive technology they are and the pent-up market demand for them. By bundling Docker containers with every server it ships, HPE communicates and demonstrates that applications, not IT infrastructure, are becoming the new driving force in buying decisions.
  2. Micro Datacenter. While walking around the exhibit floor on the last day of the event, I saw this rack in a box sitting on wheels and wondered, “What in the world is this?” It turns out it is exactly what it looked like: a micro datacenter, for which I could not even find a link on HPE’s website.


It contains servers, networking, storage, UPSs, and air conditioning in a sealed, self-contained box, so all organizations literally have to do is roll it onto the floor and turn it on. While not for everyone, for anyone who has a remote site that needs lots of compute, storage, and availability and who does not want the headaches of setting it up and managing it, these are a dream come true. The HPE individual on the show floor with whom I spoke said he had taken five orders for these units from mining companies just during the HPE Discover event.

  3. Programmable infrastructure. Anyone who has ever managed an IT infrastructure of any size knows that it is a chore one would not wish upon their own worst enemy. Further, mapping LUNs, creating zones, and troubleshooting network protocol issues can make one’s head spin while solving no practical business problems. As this new hybrid cloud architecture emerges, these problems are certainly going to decrease and may even go away completely (though it is far too early to make that assumption at this stage). However, it is safe to say that the infrastructure will become significantly easier to manage, opening the door for organizations to programmatically manage it in ways that they have a hard time even envisioning right now. HPE appears to be at the forefront of delivering on these capabilities with its forthcoming Synergy product, which delivers what HPE describes as a “composable infrastructure.” While still in beta, HPE expects a late 2016 release of this product.

Notable in each of these three existing and forthcoming offerings is the decreasing emphasis on the infrastructure components that have typically been the focus of technology companies over the past few years if not the past decade or two. At the end of the day, business owners do not measure success in techno jargon or bits and bytes. They measure it in applications and solutions that lower costs and increase revenue. These solutions announced at HPE Discover seem to illustrate that HPE perhaps grasps this concept better than any time in its recent past.

The technology change that is about to sweep through organizations in the years to come is just getting underway, and organizations are justifiably concerned about making the transition. HPE’s willingness to make this level of change to its own product lines and adopt them internally should provide some reassurance. The fact that HPE is taking the necessary steps to deliver the next-gen architecture that organizations are coming to want and expect, even as it exhibits a certain disregard for its own product line, should encourage organizations to begin their own transformations knowing that HPE has already gone before them, making the hard choices to transform itself.




Server-based Storage Makes Accelerating Application Performance Insanely Easy

In today’s enterprise data centers, when one thinks performance, one thinks flash. That’s great. But that thought process can lead organizations to think that “all-flash arrays” are the only option they have to get high levels of performance for their applications. That thinking is now so outdated. The latest server-based storage solution from Datrium illustrates how accelerating application performance just became insanely easy by simply clicking a button versus resorting to upgrading some hardware in their environment.

As flash transforms the demands of application owners, organizations want more options to cost-effectively deploy and manage it. These include:

  • Putting lower cost flash on servers, as it performs better there than when accessed across a SAN.
  • Hyper-converged solutions, which have become an interesting approach to server-based storage, though concerns remain about fixed compute/capacity scaling requirements and server hardware lock-in.
  • All-flash arrays, which have taken off in large part because they provide a pool of shared flash storage accessible to multiple servers.

Now a fourth, viable flash option has appeared on the market. While I have always had some doubts about server-based storage solutions that employ server-side software, today I changed my viewpoint after reviewing Datrium’s DVX Server-powered Storage System.

Datrium has obvious advantages over arrays, as it leverages vast, affordable and often under-utilized server resources. But unlike hyper-converged systems, it scales flexibly and does not require a material change in server sourcing.

To achieve this end, Datrium has taken a very different approach with its “server-powered” storage system design. In effect, Datrium splits speed from durable capacity in a single end-to-end system. Storage performance and data services tap host compute and flash cache, driven by Datrium software that is uploaded to the virtual host. Datrium then employs its DVX appliance, an integrated external storage appliance, which permanently holds data and orchestrates how the DVX system protects application data in the event of a server or flash failure.

This approach has a couple meaningful takeaways versus traditional arrays:

  • Faster flash-based performance given it is local to the server versus accessed across a SAN
  • Lower cost since server flash drives cost far less than flash drives found on an all-flash array.

But it also addresses some concerns that have been raised about hyper-converged systems:

  • Organizations may independently scale compute and capacity.
  • It plugs into an organization’s existing infrastructure.

Datrium Offers a New Server-based Storage Paradigm

Figure: Datrium stateless servers diagram (Source: Datrium)

Datrium DVX provides the different approach needed to create a new storage paradigm. It opens new doors for organizations to:

  1. Leverage excess CPU cycles and flash capacity on ESX servers. ESX servers now exhibit the same characteristics that the physical servers they replaced once did: they have excess, idle CPU. By deploying server-based storage software at the hypervisor level, organizations can harness this excess, idle CPU to improve application performance.
  2. Capitalize on lower-cost server-based flash drives. Regardless of where flash drives reside (server-based or array-based), they deliver high levels of performance. However, server-based flash costs much less than array-based flash while providing greater flexibility to add more capacity going forward.

Accelerating Application Performance Just Became Insanely Easy

Access to excess server-based memory, CPU and flash combines to offer another feature that array-based flash can never deliver: push-button application performance. By default, when the Datrium storage software installs on the ESX hypervisor, it limits itself to 20 percent of the vCPU available to each VM. However, not every VM uses all of its available vCPU, with many VMs using only 10-40 percent of their available resources.

Using Datrium’s DIESL Hyperdriver Software version 1.0.6.1, VM administrators can non-disruptively tap into these latent vCPU cycles. With Datrium’s new Insane Mode, they may increase the vCPU cycles a VM can access from 20 to 40 percent with a click of a button. While the host must have latent vCPU cycles available to accomplish this, it is a feature that array-based flash would be hard-pressed to ever offer, and certainly not with the click of a button.
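
As a back-of-the-envelope illustration of the idea (not Datrium’s actual DIESL software or any hypervisor API), the toggle can be thought of as raising a per-host CPU budget for storage processing only when idle cycles exist; the 20 and 40 percent figures mirror the text above, everything else is assumed:

```python
# Toy model of raising a per-host CPU budget for storage processing from 20 to
# 40 percent only when idle cycles exist. It does not use Datrium's actual
# DIESL software or any real hypervisor API; the percentages mirror the text above.

DEFAULT_LIMIT = 0.20
INSANE_LIMIT = 0.40


class HostCpuBudget:
    def __init__(self, utilization, limit=DEFAULT_LIMIT):
        self.utilization = utilization   # fraction of host CPU the VMs already use
        self.limit = limit               # fraction reserved for storage processing

    def enable_insane_mode(self):
        """Raise the storage CPU budget only if enough idle CPU actually exists."""
        idle = 1.0 - self.utilization - self.limit
        extra_needed = INSANE_LIMIT - self.limit
        if idle >= extra_needed:
            self.limit = INSANE_LIMIT
            return True
        return False                     # host is too busy; keep the default budget


lightly_loaded = HostCpuBudget(utilization=0.30)
print(lightly_loaded.enable_insane_mode(), lightly_loaded.limit)   # True 0.4

busy_host = HostCpuBudget(utilization=0.75)
print(busy_host.enable_insane_mode(), busy_host.limit)             # False 0.2
```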

Server-based storage designs have shown a lot of promise over the years but have not really had the infrastructure available to them to build a runway to success. That has essentially changed and Datrium is one of the first solutions to come to market that recognizes this fundamental change in the infrastructure of data centers and has brought a product to market to capitalize on it. As evidenced by the Insane Mode in its latest software release, organizations may now harness next generation server-based storage designs and accelerate application performance while dramatically lowering complexity and costs in their environment.




SimpliVity OmniStack 3.0 Illustrates Why Hyper-converged Infrastructures are Experiencing Hyper-Growth

As the whole technology world (or at least those intimately involved with the enterprise data center space) takes a breath before diving head first into VMworld next week, a few vendors are jumping the gun and making product announcements in advance of it. One of those is SimpliVity which announced its latest hyper-converged offering, OmniStack 3.0, this past Wednesday. In so doing, it continues to put a spotlight on why hyper-converged infrastructures and the companies delivering them are experiencing hyper-growth even in a time of relative market and technology uncertainty.

The business and technology benefits that organizations already using hyper-converged infrastructure solutions are experiencing are pretty stunning. SimpliVity shared that among the organizations that already use its solutions, one of the largest benefits they realize is a reduction in the amount of storage they need to procure and manage across their enterprise.

In that vein, about 33 percent or 180 of its 550+ customers achieve 100:1 data efficiency. In layman’s terms, for each 1TB of storage that they deploy as part of the SimpliVity OmniCube family, they eliminate the need to deploy an additional 99TBs of storage.

In calculating this ratio, SimpliVity measures the additional storage capacity across production, archive and backup that organizations normally would have had to procure using traditional data center management architectures and methods. By instead deploying and managing this storage capacity as part of a hyper-converged infrastructure and then deduplicating and compressing the data stored in it, many of its customers report hyper-storage reductions accompanied by similar cost savings.
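
A quick bit of arithmetic shows what that figure implies in practice. The 100:1 ratio and the 1TB/99TB numbers come from SimpliVity’s claim above; the script itself is just an illustration:

```python
# Worked example of what the 100:1 efficiency claim implies.
efficiency_ratio = 100    # reported ratio of traditional capacity to deployed capacity
deployed_tb = 1.0         # capacity actually deployed in the OmniCube cluster

traditional_tb = deployed_tb * efficiency_ratio    # what a traditional layout would need
avoided_tb = traditional_tb - deployed_tb          # storage never purchased or managed

print(f"{deployed_tb:.0f} TB deployed stands in for {traditional_tb:.0f} TB, "
      f"avoiding {avoided_tb:.0f} TB of additional storage")
```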

In its OmniStack 3.0 announcement from earlier this week, SimpliVity builds upon this foundation so organizations and/or enterprises may more fully experience its benefits regardless of their size or location. Two key new features that its OmniStack 3.0 release delivers:

  • Right-sized, right-priced product for ROBOs. The OmniCube CN-1200 delivers most if not all of the software functionality that SimpliVity’s larger models offer and does so in a form factor (~2.7TB usable) appropriately sized for remote and branch offices (ROBOs). The more intriguing part of this story, however, is that the CN-1200 may be managed centrally alongside all of the other SimpliVity models in a common console. In this way, ROBOs can get the benefits of having a hyper-converged solution in their environment without needing to manage it. They can instead leave those management responsibilities to the experts back in the corporate data center.
  • Centralized, automated data protection and recovery for ROBOs. I have personally always found it perplexing that application data management and data protection and recovery are largely treated as two separate, discrete tasks within data centers when they are so interrelated. Hyper-converged infrastructure solutions as a whole have been actively breaking down this barrier with the OmniStack 3.0 blasting another sizeable hole in this wall in two different ways as it pertains to ROBOs.

First, SimpliVity has created a hub and spoke architecture. Using this topology, the hub or central management console dynamically probes the enterprise network, detects models in these ROBO locations and then adds them to its database of managed devices. This is done without requiring any user input at the ROBO locations.

Second, data protection and recovery are done in its central management console, so no additional backup software is necessarily required. The new feature in its OmniStack 3.0 release is the option to change backup policies in bulk. In this way, organizations that have ROBOs across dozens of offices, with perhaps hundreds or even thousands of VMs in them, can centrally add, change or update a backup or restore policy and then apply that change across all of the protected VMs in as little as a minute.
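
The value of bulk policy changes is easiest to see in a small sketch. The code below is hypothetical (it is not SimpliVity’s console or API); it simply models a central inventory of protected VMs across ROBO sites and a single operation that updates all of their policies at once:

```python
# Hypothetical sketch of a bulk policy change from a central console. It is not
# SimpliVity's actual console or API; it just models one update applied to every
# protected VM discovered across many ROBO sites.

from dataclasses import dataclass


@dataclass
class ProtectedVM:
    name: str
    site: str
    backup_policy: str


def discover_robo_vms():
    """Stand-in for the hub probing the network and registering ROBO systems."""
    return [
        ProtectedVM("pos-db", "store-0001", "daily-23:00"),
        ProtectedVM("pos-db", "store-0002", "daily-23:00"),
        ProtectedVM("file-srv", "store-0002", "daily-23:00"),
    ]


def bulk_update_policy(vms, site_prefix, new_policy):
    """Apply one policy change to every matching VM instead of editing them one by one."""
    changed = 0
    for vm in vms:
        if vm.site.startswith(site_prefix):
            vm.backup_policy = new_policy
            changed += 1
    return changed


inventory = discover_robo_vms()
count = bulk_update_policy(inventory, site_prefix="store-", new_policy="hourly")
print(f"Updated {count} VMs in one operation")
```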

Using this built-in data protection feature, SimpliVity reports that 63 percent, or approximately 345, of its customers can now perform recoveries of any of their applications across their enterprises in minutes as opposed to hours or days.

VMworld 2015 may have the industry as a whole hitting the pause button until everyone sees what types of announcements VMware makes. However, hyper-converged infrastructure providers such as SimpliVity are hitting the fast-forward button, bringing solutions to market that are forcing organizations of all sizes to re-think how they will deploy and implement virtualized infrastructures going forward.

No longer can or should organizations treat and manage hypervisors, data management and data protection software and server, storage and networking hardware as separate purchases that are then left to IT managers to configure and make work. By delivering these as single, comprehensive hyper-converged solutions with SimpliVity in particular making its OmniCube models more price competitive and even easier to deploy and manage in ROBOs, it is no wonder that more organizations are taking a hard look at deploying this type of solution in their environments.




Hyper-converged Infrastructures Poised to Go Hyper-Growth in 2016

Hyper-converged infrastructures are quickly capturing the fancy of end-user organizations everywhere. They bundle hypervisor, server and storage in a single node and provide the flexibility to scale-out to form a single logical entity. In this configuration, they offer a very real opportunity for organizations to economically and practically collapse their existing infrastructure of servers and storage arrays into one that is much easier to implement, manage and upgrade over time.

These benefits have many contemplating whether hyper-converged infrastructures foretell the end of big box servers and storage in favor of a data center that is solely hyper-converged. There is a great deal of merit in this viewpoint, as an infrastructure shift to hyper-converged is already occurring, with a high likelihood that it will go hyper-growth as soon as 2016.

The driving force behind the implementation and adoption of hyper-converged infrastructures is largely two-fold: the advent of server virtualization, which has already been going on for the better part of a decade, and the more recent rise of flash memory as a storage medium.

Server virtualization has reduced the number of physical servers that organizations have had to implement as, using this technology, they can host multiple virtual machines or VMs on a single physical server. The downside of server virtualization to date has been the inability of storage media – primarily internal hard disk drives (HDDs) – to meet the performance demands of hosting multiple VMs on a single physical server.

This has led many organizations to use externally attached storage arrays that can provide the levels of performance that these multiple VMs hosted on a single server required. Unfortunately externally attached storage arrays, especially when they are networked together, become very costly to implement and then manage.

In fact, many find that the costs of creating and managing this networked storage infrastructure can offset whatever cost savings they realized by virtualizing their servers in the first place. Then when one starts to factor in the new levels of complexity that networked storage introduces, the headaches associated with data migration and the difficulty in getting hypervisors to optimally work with the underlying storage arrays, it makes many wonder why they went down this path in the first place.

The recent rise of flash memory – typically sold as solid state drives (SSD) – changes the conversation. Now for the first time organizations can get the type of performance once only available in a storage array in the form factor of a disk drive that may be inserted into any server. Further, by putting the storage capacity back inside the server, they eliminate the complexity and costs associated with creating a networked storage environment.

These two factors have led to the birth and rapid rise of hyper-converged infrastructures. Organizations can avoid implementing today’s networked server and storage solutions by using a single hyper-converged solution. Further, due to their scale-out capabilities, as more computing resources or storage capacity are needed, additional nodes may be added to an existing hyper-converged implementation.

These benefits of hyper-convergence have every major big box IT vendor from Dell and HP to Cisco and EMC proclaiming that they now have a hyper-converged story that competes with up-and-comers like Maxta, Nutanix, Simplivity and Springpath. These big box vendors recognize that a hyper-converged solution from any of these emerging providers has the potential to make their existing big box server, networking or storage array story one that no one wants to hear any longer.

One question that organizations need to answer is, “How quickly will hyper-converged solutions go hyper-growth?” Near term (0-12 months), I see hyper-converged infrastructures cutting into the respective market shares of these different systems as organizations test drive them in their remote and branch offices as well as in their test and development environments. Once these trials are complete, and based upon what I am hearing from enterprise shops and the level of interest they are displaying in this technology, 2016 could well be the year that hyper-converged goes hyper-growth.

At this point, it is still too early to definitively conclude the full impact that hyper-converged infrastructures will ultimately have on today’s existing data center infrastructures and how quickly that will happen. But when one looks at how many new vendors are coming out of the woodwork, how quickly existing vendors are bringing hyper-converged infrastructure solutions to market and how much end-user interest there is in this technology, I am of the mindset that the transition to hyper-converged infrastructures may happen much faster than anyone anticipates.




Facebook’s Disaggregated Racks Strategy Provides an Early Glimpse into Next Gen Cloud Computing Data Center Infrastructures

Few organizations, regardless of their size, can claim to have 1.35 billion users, handle the upload and ongoing management of 930 million photos a day, or be responsible for the transmission of 12 billion messages daily. Yet these are the challenges that Facebook’s data center IT staff routinely encounter. To respond to them, Facebook is turning to a disaggregated racks strategy to create a next gen cloud computing data center infrastructure that delivers the agility, scalability and cost-effectiveness it needs to meet its short and long term compute and storage needs.

At this past week’s Storage Visions conference in Las Vegas, NV, held at the Riviera Casino and Hotel, Facebook’s Capacity Management Engineer, Jeff Qin, delivered a keynote that provided some valuable insight into how uber-large enterprise data center infrastructures may need to evolve to meet their unique compute and storage requirements. As these data centers may daily ingest hundreds of TBs of data that must be managed, manipulated and often analyzed in near real-time conditions, even the most advanced server, networking and storage architectures that exist today break down.

Qin explained that in Facebook’s early days it also started out using these technologies that most enterprises use today. However the high volumes of data that it ingests coupled with end-user expectations that the data be processed quickly and securely and then managed and retained for years (and possibly forever) exposed the shortcomings of these approaches. Facebook quickly recognized that buying more servers, networking and storage and then scaling them out and/or up resulted in costs and overhead that became onerous. Further, Facebook recognized that the available CPU, memory and storage capacity resources contained in each server and storage node were not being used efficiently.

To implement an architecture that most closely aligns with its needs, Facebook is currently in the process of implementing a Disaggregated Rack strategy. At a high level, this approach entails deploying CPU, memory and storage in separate and distinct pools. Facebook then creates virtual servers that are tuned to each specific application’s requirements by pulling and allocating resources from these pools to each virtual server. The objective when creating each of these custom application servers is to utilize 90% of the allocated resources so they are used as optimally as possible.
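
A toy model helps illustrate the disaggregated-rack idea: draw CPU, memory, and storage from shared pools and grant each application-tuned virtual server just enough that roughly 90% of its allocation is used. The pool sizes, resource requests, and sizing rule below are illustrative assumptions, not Facebook’s actual figures or software:

```python
# Toy model of the disaggregated-rack idea: draw CPU, memory, and storage from
# shared pools to build an application-tuned virtual server, sizing each grant
# so roughly 90% of it is actually used. Pool sizes and requests are made up.

TARGET_UTILIZATION = 0.90


class ResourcePool:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity
        self.allocated = 0.0

    def allocate(self, needed):
        """Grant just enough so the application's need is ~90% of the grant."""
        grant = needed / TARGET_UTILIZATION
        if self.allocated + grant > self.capacity:
            raise RuntimeError(f"{self.name} pool exhausted")
        self.allocated += grant
        return grant


cpu_pool = ResourcePool("cpu-cores", capacity=10_000)
mem_pool = ResourcePool("memory-gb", capacity=200_000)
disk_pool = ResourcePool("storage-tb", capacity=50_000)


def build_virtual_server(app, cpu_cores, mem_gb, storage_tb):
    """Compose a virtual server for one application from the shared pools."""
    return {
        "app": app,
        "cpu_cores": cpu_pool.allocate(cpu_cores),
        "memory_gb": mem_pool.allocate(mem_gb),
        "storage_tb": disk_pool.allocate(storage_tb),
    }


photo_ingest = build_virtual_server("photo-ingest", cpu_cores=450, mem_gb=3_600, storage_tb=900)
print(photo_ingest)
```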

Facebook expects that by taking this approach it can, over time, save in the neighborhood of $1 billion. While Qin did not provide the exact road map as to how Facebook would achieve these savings, he provided enough hints in his other keynote comments that one could draw some conclusions as to how they would be achieved.

For example, Facebook already acquires only what it refers to as “vanity free” servers and storage. By this, one may deduce that it does not acquire servers from the likes of Dell or HP or storage from the likes of EMC, HDS or NetApp (though Qin did mention Facebook initially bought from these types of companies). Rather, it now largely buys its own servers and configures them itself to meet its specific processing and storage needs.

Also, it appears that Facebook may be, or already is, buying the component parts that make up servers and storage, such as the underlying CPUs, memory, HDDs and network cabling, to create its next-gen cloud computing data center. Qin did say that what he was sharing at Storage Visions represented roughly a two-year strategy for Facebook, so exactly how far down the path it is toward implementing it is unclear.

After Qin presented that vision for Facebook, the presentations at Storage Visions for the remainder of that day and the next largely showed why this is the future at many large enterprise data centers, but also why it will take some time to come to fruition. For instance, there were presentations on next generation interconnect protocols such as PCI Express, InfiniBand, iWARP and RoCE (RDMA over Converged Ethernet).

These high-performance, low-latency protocols are needed to deliver the levels of performance between these various pools of resources that enterprises will require. As resources get disaggregated, their ability to achieve the same levels of performance possible within servers or storage arrays diminishes, since there is more distance and communication required between them. While performance benchmarks of 700 nanoseconds are already being achieved using some of these protocols, these are in dedicated, point-to-point environments and not in switched fabric networks.

Further, there was very little discussion as to what type of cloud operating system would overlay all of these components so as to make the creation and ongoing management of these application-specific virtual servers across these pools of resources possible. Even assuming such an OS did exist, tools that manage its performance and underlying components would still need to be developed and tested before such an OS could realistically be deployed in most production environments.

Facebook’s Qin provided a compelling early look into what the next generation of cloud computing may look like in enterprise data centers. However the rest of the sessions at Storage Visions also provided a glimpse into just how difficult the task will be for Facebook to deliver on this ideal as many of the technologies needed are still in their infancy stages if they exist at all.




Dell Quickly Putting Its Technology Pieces Together; Poised to Give Its Competitors a Run for Their Money

Dell has had all of the pieces for a number of years to be a next-generation technology company that does more than just sell products, one that actually integrates them and solves the broader, real-world problems that enterprises face. However, to date, Dell has been trapped in the world of “1+1+1=1,” where organizations only get the individual value that each Dell product has to offer but none of the broader synergistic value that using all of its products together could collectively deliver. Yet at this year’s Dell World 2014, I saw more tangible evidence that the bigger value proposition Dell has the potential and technologies to deliver is getting much closer to being a reality.


Anyone who has been paying any attention to Dell over the last few years would have noticed it has purchased a slew of other technology companies that are now part of its portfolio. Just a few of the companies that it has acquired during this period include AppAssure, Compellent, EqualLogic, Ocarina Networks, Quest Software and SonicWall, along with many others.

But Dell, like many other large companies that have gone on acquisition sprees, has largely failed to bring these disparate technologies together in a meaningful way that benefited organizations. While individual products within Dell’s technology portfolio obviously did well as standalone solutions within Dell – EqualLogic and Compellent particularly come to mind – Dell really did not ship an end-to-end, integrated solution that harnessed the best of what all of these respective technologies had to offer and delivered it in a way that was easy for organizations to understand, digest and then use in their environments.

While Dell is not there yet (and may never be from the perspective of having these technologies “perfectly” integrated together), it has made huge strides in the last year toward achieving this objective. Some integration is already in action and has been shipping for some time. Case in point: it has been leveraging the deduplication technology that came with its Ocarina Networks acquisition a number of years ago, incorporating it into its Compellent and EqualLogic lines of storage arrays, its DR series of deduplicating backup appliances, its DL series of integrated backup appliances and its various backup software offerings (AppAssure, NetVault Backup and vRanger) to give all of these different products data deduplication capabilities.

Yet the broader integration that organizations really want companies like Dell to deliver is the end-to-end integration of their various software and hardware products so that their deployment does not result in the complex, discombobulated data centers that exist in too many of today’s enterprises.

Sure, there is value in each of these respective hardware and software products whether they come from Dell or another provider. But where Dell is really positioned and poised to set itself apart is that it can do more than offer each of these respective products. It can deliver them in such a way that the initial implementation and then their ongoing management do NOT become a long term distraction to the company’s business.

Evidence that this type of integration is happening throughout Dell’s entire product portfolio was apparent everywhere at this year’s Dell World. For example, the backup and recovery teams for the respective data protection products are working closely together to leverage the best of what each product has to offer so the products may be managed as one. While each product will remain separate and distinct, enterprises that need more than one for their unique business requirements will be able to realize distinct benefits by acquiring all of them from Dell.

Already, enterprises can get access to all three of Dell’s backup software technologies (AppAssure, NetVault Backup and vRanger) by acquiring a single, capacity-based license. This option frees enterprises to select the best data protection technology for each of their applications without having to determine which product trade-offs they are willing to live with. It also gives enterprises the flexibility to swap one product out for another in the event that the product they initially implemented does not meet the application’s needs, short or long term. In other words, the “fear factor” associated with implementing these technologies is greatly reduced or even eliminated.

These are good first steps by Dell, but what is even more encouraging is all of the “off the record” conversations that I had with product managers in the hallways, lobbies and restaurants in and around Dell World. One could just sense an energy that was missing at previous Dell Worlds (I have been to the last two).

They are palpably excited about what the newly private Dell is doing. People are talking to one another. Departments are working with one another to share technologies rather than simply protect their own turf. There is a free flow and exchange of ideas going on. Integration is happening that is going to fundamentally turn data centers upside down by making them easier to manage even as they add more value to the business.

These are heady times for Dell. While Dell still has to execute on its plans, dreams and visions for the future, the good news is that it already owns many if not all of the core technologies it needs to make them a reality. Now it just has to deliver. Based upon what I saw at Dell World 2014, that execution is happening and, when it is finished (which may be sooner than its competitors anticipate), Dell is going to give them a run for both corporate data centers and their money.




IT’s New Role: The Business Technologist

As the role of IT changes from functioning as specialists to generalists, many IT staff members find themselves in the role of a Business Technologist. In this new role, they serve a two-fold purpose. First, they must understand and document the specific needs and requirements of the business by interfacing with key end-users and product managers. Once they document these needs, they then map those requirements to a specific technology solution that solves them.


While this has theoretically always been IT’s purpose, the approach works better than ever now that more preconfigured solutions are available than ever before. These solutions free IT staff to focus on establishing business requirements, mapping them to the most appropriate pre-built solution, and then buying and deploying it rather than having to build it themselves. Such solutions are already available in the following configurations:

  • Appliances. Appliances ship as pre-configured servers with the needed hardware and software. While the level of integration between the hardware and software varies, the ideal appliance will ship as a fully integrated solution requiring minimal effort on the part of the organization to set up and put into production.
  • Cloud. Cloud solutions are available in both cloud computing and cloud storage options. Cloud computing is used to host applications with a third party provider while cloud storage providers host the data associated with specific applications. Both cloud options minimize or even eliminate the need to deploy hardware and/or software on premise.
  • Client/cloud architectures. This is a popular derivative of the cloud computing and cloud storage deployment options. This hybrid approach to cloud implementations involves keeping some of the application compute or storage on premise while hosting other application compute and/or storage with a cloud provider.
  • Converged infrastructures. Converged infrastructure solutions include all of the building blocks of today’s data centers, such as servers, networking and storage. They are similar to appliances in that they include all of the hardware and software needed for the solution to operate. They differ in that they can more easily scale to offer additional compute, network and storage capacity.
  • Internet connected devices. This is part of the emerging Internet of Things (IoT), in which all devices connect to the Internet. This includes many items that individuals may not normally associate with Internet connectivity, such as dog collars, meat thermometers, wrist watches and even toothbrushes.[1] Connecting these devices to the Internet creates new means for organizations to manage, monitor and utilize them.
  • Mobile apps and devices. Individuals increasingly need the flexibility to work anywhere at any time – at home, at work or on the road. Mobile applications and devices give them the flexibility to perform almost any task anywhere, potentially with minimal or no IT intervention required.

The growing and ready availability of these various solutions makes it logical, practical and important for IT staff to prepare to deliver on this new role of business technologist.




Today it is Really All About the Integrated Solution

As I attended sessions at Microsoft TechEd 2014 last week and talked with people in the exhibit hall, a number of themes emerged, including “mobile first, cloud first,” hybrid cloud, migration to the cloud, disaster recovery as a service, and flash memory storage as a game-changer in the data center. But as I reflect on the entire experience, a statement made by John Loveall, Principal Program Manager for Microsoft Windows Server, during one of his presentations sums up the overall message of the conference: “Today it is really all about the integrated solution.”

The rise of the pre-integrated appliance in enterprise IT has certainly not gone unnoticed by DCIG. Indeed, we have developed multiple buyer’s guides to help businesses understand the marketplace for these appliances and accelerate informed purchase decisions.

The new IT service imperative is time to deployment. Once a business case has been made for implementing a new service, every week that passes before the service is in production is viewed by the business as a missed revenue growth or cost savings opportunity—because that is what it is. The opportunity costs associated with IT staff researching, purchasing, integrating and testing all the components of a solution in many cases outweigh any potential cost savings.
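
As a rough illustration of that trade-off, the sketch below compares the value lost while a service is hand-assembled against the price premium of a pre-integrated appliance. Every figure in it is an assumption chosen only to make the arithmetic concrete.

    # Rough opportunity-cost comparison; every figure below is an illustrative assumption.
    WEEKLY_VALUE = 25_000       # assumed weekly revenue growth or cost savings once the service is live
    DIY_WEEKS = 12              # assumed weeks to research, buy, integrate and test the pieces
    APPLIANCE_WEEKS = 2         # assumed weeks to deploy a pre-integrated appliance
    APPLIANCE_PREMIUM = 40_000  # assumed extra purchase price of the appliance

    delay_cost = (DIY_WEEKS - APPLIANCE_WEEKS) * WEEKLY_VALUE
    print(f"Value forgone during the slower build-it-yourself rollout: ${delay_cost:,}")
    print(f"Net benefit of the appliance after its price premium: ${delay_cost - APPLIANCE_PREMIUM:,}")

Under these assumptions the appliance still comes out well ahead; change the numbers and the conclusion can flip, which is exactly the calculation the business case needs to weigh.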

An appliance-based approach to IT shrinks the time to deployment. The key elements of a rapidly deployable appliance-based solution include pre-configured hardware and software that has been pre-validated to work well together and then tested prior to being shipped to the customer. In many cases the appliance vendor also provides a simplified management tool that facilitates the rapid deployment and management of the service.

Some vendors in the TechEd exhibit hall that exemplified this appliance-based approach included DataOn, HVE ConneXions, InMage, Nutanix and Violin Memory.

DataOn was previewing their next-generation Cluster-in-a-Box. Although the DataOn booth was showing their products pre-configured with Windows Server 2012 R2 and Storage Spaces, they also support other operating environments and are Nexenta certified. Nutanix takes a similar approach to deliver what they call a “radically simple converged infrastructure”.

I met David Harmon, President of HVE ConneXions, at the Huawei booth. HVE is using Huawei networking gear in combination with HVE’s own flash memory appliances to deliver VMware View-based virtual desktops to clients at a cost of around $200 per desktop. He told me of a pilot implementation where two HVE staff converted a 100-computer lab of Windows XP desktops to Windows 7 virtual desktops in just two days.

InMage Systems was showing their InMage 4000 all-in-one purpose-built backup and disaster recovery appliance that can also provide public and private cloud migration. I spoke with Joel Ferman, VP of Marketing, who told me that their technology is used by Cisco, HP and Sunguard AS; and that they had never lost a head-to-head proof of concept for either backup or disaster recovery. InMage claims their solution can be deployed in less than a day with no downtime. The appliance won the Windows IT Pro Best of TechEd 2014 award in the Backup & Recovery category.

Violin Memory was displaying their Windows Flash Array, an appliance that ships with Windows Storage Server 2012 R2 pre-installed. The benefits of this appliance-based approach were explained by Eric Herzog, Violin Memory’s CMO, this way: “Customers do not need to buy Windows Storage Server, they do not need to buy blade servers, nor do they need to buy the RDMA 10-gig-embedded NICs. Those all come prepackaged in the array ready to go and we do Level 1 and Level 2 support on Windows Server 2012 R2.”

Today it is really all about the integrated solution. In many cases, the opportunity to speed the time to deployment is the deciding factor in selecting an appliance-based solution. In other cases, the availability of a pre-configured appliance puts sophisticated capabilities within reach of smaller IT departments composed primarily of IT generalists who lack the specialized technical skills required to assemble such solutions on their own. In either case, the ultimate benefit is that businesses gain the IT capabilities they need with a minimum investment of time.

This is the second in a series of blog entries based on my experience at Microsoft TechEd 2014. The first entry focused on how Microsoft’s inclusion of Storage Spaces software in Windows Server 2012 R2 paves the way for server SAN, and how Microsoft Azure Site Recovery and StorSimple promote hybrid cloud adoption.




Five Technologies that Companies Should Prioritize in 2014

One of the more difficult tasks for anyone deeply involved in technology is seeing the forest for the trees. Because IT staff are often responsible for supporting the technical components that make up today’s enterprise infrastructures, stepping back to recommend which technologies are the right choices for their organization going forward is a more difficult feat. While there is no one right answer that applies to all organizations, five (5) technologies – some new, some older technologies that are getting a refresh – merit prioritization by organizations in the coming months and years.

Already in 2014 DCIG has released three Buyer’s Guides and has many more planned for release in the coming weeks and months. While working on those Guides, DCIG has also engaged with multiple end-users to discuss their experiences with various technologies and how they prioritize technology buying decisions. This combination of sources – a careful examination of included features on products coupled with input from end-users and vendors – is painting a new picture of five specific technologies that companies should examine and prioritize in their purchasing plans going forward.

  • Backup software with a recovery focus. Survey after survey shows that backup remains a big issue in many organizations. However, I am not sure who is conducting these surveys or whom they are surveying, because I now regularly talk to organizations that have backup under control. They have largely solved their ongoing backup issues by using new or updated backup software that is better equipped to use disk as the primary backup target.

As they adopt this new backup software and eliminate their backup problems, their focus is turning to recovery. A good example is an individual with whom I spoke this past week. He switched to a new backup software solution that solved his organization’s long-standing backup issues while enabling it to lay a foundation for application recovery to the cloud.

  • Converged infrastructures. Converged infrastructure solutions are currently generating a great deal of interest as they eliminate much of the time and effort that organizations have to internally exert to configure, deploy and support a solution. However in conversations I have had over the last few weeks and months, it is large organizations that appear to be the most apt to deploy them.
  • Heterogeneous infrastructures. Heterogeneous infrastructures were all the rage for many years among organizations of all sizes as they got IT vendors to compete on price. But having too many components from too many providers created too much complexity and resulting administrative cost – especially in large organizations.

That said, small and midsized businesses (SMBs) with smaller IT infrastructures still have the luxury of acquiring IT gear from multiple providers without resulting in their environments becoming too complex to manage. Further, SMBs remain price conscious. As such, they are more willing to sacrifice the notion of “proven” end-to-end configurations to get the cutting edge features and/or the lower prices that heterogeneous infrastructures are more apt to offer.

  • Flash primed to displace more HDDs. Those close to the storage industry recognize flash for the revolutionary technology that it is. However, I just spoke to an individual this past week who is very technical but has a web design and programming focus, so he and his company were not that familiar with flash. He said that as they have learned more about it, they are re-examining their storage infrastructure and how and where they can best deploy flash to accelerate the performance of their applications.

Conversations such as these hint that while flash has already gained acceptance among techies, its broader market adoption and acceptance are yet to come. To date, its cost has been relatively high. However, more products now offer flash as a cache (as occurs in hybrid storage arrays) along with technologies such as deduplication and compression, which will further drive down its effective cost per TB. By way of example, I was talking to one individual yesterday who already offers a flash-based solution for under $300/TB (less than 30 cents/GB); the cost sketch after this list puts that math in concrete terms.

  • Tape poised to become the cloud archive medium of choice. When organizations currently look at how to best utilize the cloud, they typically view it as the ideal place to store their archival data for long-term retention. This sets the cloud up as an ideal place in which to deploy tape as the preferred medium for storing this data, largely due to tape’s low operational costs, long media life and the infrequency with which archival data is accessed.

To accommodate this shift in how organizations are using tape libraries, as well as to make them more appealing to cloud service providers, tape library providers are adding REST APIs to their tape library interfaces so the libraries appear as a storage target. While most organizations may not know (or care) that the data they send to the cloud is stored on tape, they do care about its cost. By storing data on tape, cloud providers can drive these costs down to a penny or less per GB per month, as the sketch below illustrates.
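
The sketch below works through the two cost claims made in this list: the effective cost per GB of a sub-$300/TB flash solution once data reduction is applied, and the annual cost of a tape-backed cloud archive at a penny per GB per month. The 4:1 data reduction ratio and the 100 TB archive size are assumptions chosen purely for illustration; only the $300/TB and penny-per-GB figures come from the text.

    # Flash: the $300/TB price comes from the text; the 4:1 reduction ratio is assumed.
    flash_per_tb = 300.0
    data_reduction = 4.0
    raw_cents_per_gb = flash_per_tb / 1000 * 100
    effective_cents_per_gb = raw_cents_per_gb / data_reduction
    print(f"Flash: {raw_cents_per_gb:.0f} cents/GB raw, "
          f"~{effective_cents_per_gb:.1f} cents/GB effective at 4:1 data reduction")

    # Tape-backed cloud archive: the penny-per-GB-per-month figure comes from the text;
    # the 100 TB archive size is assumed.
    archive_tb = 100
    tape_per_gb_month = 0.01
    annual_cost = archive_tb * 1000 * tape_per_gb_month * 12
    print(f"Tape archive: {archive_tb} TB at a penny per GB per month = ${annual_cost:,.0f} per year")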




DCIG 2014-15 $50K and Under Converged Infrastructure Buyer’s Guide Now Available

DCIG is pleased to announce the availability of its DCIG 2014-15 $50K and Under Converged Infrastructure Buyer’s Guide. In this Buyer’s Guide, DCIG weights, scores and ranks 10 converged infrastructure solutions from six (6) different providers. Like previous DCIG Buyer’s Guides, this Buyer’s Guide provides the critical information that all size organizations need when selecting a converged infrastructure solution to help expedite application deployments and then simplify their long term management.


The need for a “data center in a box,” or “cluster in a can,” as some like to refer to converged infrastructure offerings, is growing and expanding into many new markets. Originally geared mainly toward small and midsized businesses (SMBs), which generally lack either highly trained specialists or a sufficiently staffed IT department to keep up with the many demands of their workplace, converged infrastructures are now finding their way into enterprise companies.

The market for converged infrastructure systems is expected to reach as much as $8 billion in revenue for calendar year 2013. While this may seem insignificant when compared to the $114 billion in revenue garnered by general infrastructure, its projections for future growth fall in line with the high expectations for converged infrastructures expressed in this Guide. The converged infrastructure market is expected to grow 50 percent over the next three years, compared to only 1.2 percent growth for the general infrastructure market.
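
As a quick sanity check on those figures, the sketch below projects both markets three years out. Treating the 50 percent and 1.2 percent figures as cumulative growth over the three-year period, rather than as annual rates, is an assumption; the figures above do not specify which is meant.

    # Projection using the growth figures cited above, treated as cumulative
    # three-year growth (an assumption, since the source does not say).
    converged_2013_b = 8.0    # $ billions
    general_2013_b = 114.0    # $ billions
    print(f"Converged infrastructure after three years: ~${converged_2013_b * 1.50:.0f}B")
    print(f"General infrastructure after three years:   ~${general_2013_b * 1.012:.1f}B")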

Bundled and sold as a single SKU, converged infrastructures offer the full scope of hardware and software that an organization needs to do an essentially turnkey deployment. Converged infrastructures package multiple technologies together in a single unit, including compute, storage and networking, along with a bundle of software for automation, management and orchestration. Converged infrastructures are becoming THE solution for enterprise organizations that have satellite or remote offices that need a consistent, consolidated IT implementation that can be easily managed and maintained remotely.

The benefits of implementing converged infrastructures into an organization are numerous, but they can be summarized as follows:

  • Offer help to overworked and understaffed IT teams. Converged infrastructures do not require as much research, planning, and time to implement in an environment as would buying each piece separately. Converged infrastructures present a validated and tested configuration whose key Return on Investment (ROI) lies in the staff budgeting benefits that come from its ease of implementation.
  • Improvements to IT staff workflow. Take the following example scenario: in a non-converged environment, storage and networking each need to be configured for specific servers and the company’s network. In a larger company with a segmented IT department, the network team needs to become involved to provision what is necessary both internally and externally—and that is only the first step.

In a converged environment, the IT department decides that it needs 50 virtual machines (VMs), with some Exchange and SQL Server applications hosted on the solution as well. In this case, the converged infrastructure provider puts together an integrated, right-sized solution with all of the necessary server, storage and networking components.

This converged solution may then be set up via a wizard-based GUI on the front end, from which VMs can be provisioned. Because all parts of the package are provided by one manufacturer, an organization does not need to become its own integrator (see the sizing sketch after this list).

  • Backup and recovery software included. Converged infrastructures often include native backup and recovery software that can deduplicate and compress backup data, so there is no need to purchase a separate backup tool, which eliminates the need for IT staff to test and implement this software. Converged infrastructure solutions from larger vendors may even replicate data from a converged infrastructure solution to a non-converged one and vice versa. These are especially desirable features for organizations that have virtual environments located at regional or national data centers that they previously constructed themselves.
  • Improved business continuity. Converged infrastructure solutions are often architected to take advantage of the many failover and high availability features found on today’s enterprise hypervisors. By deploying each application as a VM on the converged infrastructure solution, it immediately has access to and can take advantage of features such as High Availability, Distributed Resource Scheduler (DRS), vMotion and others. In this scenario, access to this functionality can be presumed, as opposed to having to ask IT staff to dedicate time to test and implement these features.
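
As a simple illustration of the sizing exercise in the workflow example above, the sketch below totals the compute, memory and storage that a converged infrastructure provider would size against for 50 general-purpose VMs plus Exchange and SQL Server workloads. All per-VM figures are assumptions chosen only for illustration.

    # Illustrative sizing arithmetic for 50 general-purpose VMs plus two application VMs.
    # Every per-VM figure below is an assumption, not a vendor sizing guideline.
    general_vms = 50
    per_vm = {"vcpus": 2, "memory_gb": 8, "disk_gb": 100}
    app_vms = {
        "Exchange":   {"vcpus": 8, "memory_gb": 32, "disk_gb": 1000},
        "SQL Server": {"vcpus": 8, "memory_gb": 64, "disk_gb": 2000},
    }

    totals = {key: general_vms * value for key, value in per_vm.items()}
    for spec in app_vms.values():
        for key, value in spec.items():
            totals[key] += value

    print(f"Aggregate requirement to size against: {totals['vcpus']} vCPUs, "
          f"{totals['memory_gb']} GB RAM, {totals['disk_gb'] / 1000:.1f} TB of disk")

The provider then turns an aggregate requirement like this, plus headroom for growth and failover, into a specific combination of nodes, storage and networking.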

It is in this context that DCIG presents its 2014-15 $50K and Under Converged Infrastructure Buyer’s Guide. As prior Buyer’s Guides have done, this Buyer’s Guide puts at the fingertips of organizations a comprehensive list of converged infrastructure solutions and the features they offer in the form of detailed, standardized data sheets that can assist them in this important buying decision.

The 2014-15 $50K and Under Converged Infrastructure Buyer’s Guide accomplishes the following objectives:

  • Evaluates converged infrastructure solutions with a starting list price of US$50,000 or less.
  • Provides an objective, third party evaluation of converged infrastructures that weights, scores and ranks their features from an end user’s viewpoint
  • Includes recommendations on how to best use the Buyer’s Guide
  • Scores and ranks the features on each converged infrastructure based upon criteria that matter most to end users so they can quickly know which products are the most appropriate for them to use and under what conditions
  • Provides data sheets for 10 converged infrastructures from six (6) different providers so end users can do quick comparisons of the features that are supported and not supported on each product
  • Provides insight into which features on a converged infrastructure will result in improved performance
  • Gives any organization the ability to request competitive bids from different providers of converged infrastructures that are apples-to-apples comparisons

The DCIG 2014-15 $50K and Under Converged Infrastructure Buyer’s Guide evaluates the following 10 solutions that include (in alphabetical order):

  • Pivot3 vSTAC Edge Appliance
  • Pivot3 vSTAC R2S Appliance
  • Pivot3 VDI R2 P Cubed Appliance
  • Pivot3 vSTAC R2S P Cubed Appliance
  • Pivot3 vSTAC Watch R2
  • Quanta CB220
  • Riverbed Granite Core/Edge
  • Scale Computing HC3 Hyperconvergence
  • Simplivity Omnicube CN-2000
  • Zenith Infotec TigerCloud

The Pivot3 vSTAC R2S P Cubed Appliance and the Pivot3 vSTAC R2S Appliance shared the Best-in-Class ranking among the converged infrastructures evaluated in this Buyer’s Guide. Both top-scoring products demonstrated an innate flexibility in a highly competitive space, and the Pivot3 models have assembled all the pieces necessary for organizations to expect them to remain near the top of this space.

The DCIG 2014-15 $50K and Under Converged Infrastructure Buyer’s Guide is immediately available through the DCIG analyst portal for subscribing users by following this link.