The success and popularity of the DCIG Buyer’s Guides stem first from the methodology that DCIG uses to gather and synthesize product data and then from how it publishes its findings. DCIG applies five internal guidelines to define and identify which products from the DCIG Body of Research to include in each Buyer’s Guide Edition. Further, each DCIG Buyer’s Guide discloses why it may not cover certain products and provides guidance to readers on how to best use the Guide.
Recently, cloud backup and Disaster Recovery as a Service (DRaaS) have moved from niche markets into the mainstream, with ever larger companies bringing these two technologies in-house. Zetta is one provider that has largely grown up with this market, having started out as a cloud storage provider in 2008 before adding cloud backup and DRaaS offerings in recent years. Last week I had the opportunity to speak with its CEO, Mike Grossman, who provided me with an update on Zetta and its technology offerings. Here are the key points that I took away from that conversation.
Approximately a month ago I posted a blog entry that examined what features constitute and separate Tier 1 providers from Tier 2 or lower providers in the marketplace. In that blog entry, I concluded that product features alone are insufficient to classify a provider as Tier 1. Only when one sets aside product features do four other characteristics emerge that a provider must possess, and which DCIG can objectively evaluate, that one may use to classify it as Tier 1.
In almost every industry there is a tendency to use phrases such as Tier 1, Tier 2, and Tier 3 to describe providers, the products in a specific market, the quality of service provided, or some combination thereof. It is when one applies these three terms to the storage industry and attempts to properly classify storage providers into one of these tiers that the conversation becomes intriguing. After all, how does one define what constitutes and separates a Tier 1 storage provider from other providers in the market?
The DCIG 2016-17 Midrange Unified Storage Array Buyer’s Guide weights, scores and ranks more than 100 features of twenty-three (23) products from eight (8) different storage vendors. Using ranking categories of Best-in-Class, Recommended and Excellent, this Buyer’s Guide offers much of the information an organization should need to make a highly informed decision as to which midrange unified storage array will best suit its needs.
Anyone who has ever had to make a product choice that involves tens or hundreds of thousands of dollars knows that one of the more challenging aspects at the conclusion of the process is separating product fact from fiction. Often, the closer an organization gets to finalizing its buying decision, the more aggressive the competing vendors become in spreading fear, uncertainty, and doubt (FUD) to discredit the products and/or services of the other vendors. Using DCIG’s Competitive Research services, organizations may gain access to the critical data that they need to help separate myth from reality and reach a proper conclusion.
To help organizations evaluate available enterprise storage arrays and make informed decisions about the most appropriate array for their needs, DCIG is pleased to announce the availability of its body of research into enterprise storage arrays. This body of research, presented and made available through DCIG’s Analysis Portal, directly addresses the specific challenge that organizations routinely encounter when buying storage arrays.
In the last 12-18 months, software-only software-defined storage (SDS) seems to be on the tip of everyone’s tongue as the “next big thing” in storage. However, reaching any agreement as to what features constitute SDS software, who offers it and even who competes against whom can be difficult, as provider allegiances and partnerships quickly evolve. In this second installment of my interview series with Nexenta’s Chairman and CEO, Tarkan Maner, he provides his views on how SDS software is impacting the competitive landscape and how Nexenta seeks to differentiate itself.
The end game for many hyper-converged providers is pretty clear: make inroads into enterprise data centers. To do that, however, these solutions must bring to market the features and functionality that enterprises expect and need to manage them effectively and easily over both the short and long term. SimpliVity’s introduction of more automation and orchestration tools into its OmniStack 3.5 product should put enterprises on notice that SimpliVity has their data centers squarely in its sights.
Mapping worldwide names (WWNs) to LUNs and performing recurring rezoning in FC SANs is a reality that every SAN administrator deals with on a regular basis. However, the latest features found in Gen 6 FC offer new hope for these individuals by making these jobs simpler and easier to perform. In this third and final installment in my interview series with QLogic’s Vice President of Products, Marketing and Planning, Vikram Karvat, he provides some insight into the multiple new features that Gen 6 FC offers to help SAN and storage administrators perform their jobs more efficiently and effectively.
All-flash arrays, cloud computing, cloud storage, and converged and hyper-converged infrastructures may grab many of today’s headlines. But the decades-old Fibre Channel protocol is still a foundational technology present in many data centers, holding steady in the U.S. and even gaining traction in countries such as China. In this first installment, QLogic’s Vice President of Products, Marketing and Planning, Vikram Karvat, provides some background as to why Fibre Channel (FC) remains relevant and how all-flash arrays are one of the forces driving the need for 32Gb FC.
Walking through airports, listening to the radio or watching television, it is difficult to miss the “Barracuda Networks” name on airport hallway posters or during commercial breaks. However, as one who covers enterprise data protection and data storage, I still tend to think of “Barracuda Networks” in the context of “small and midsized enterprise.” While Barracuda did not try to dissuade me from that mindset in a recent conversation I had with it, that conversation did reveal six little-known facts and features about the enterprise functionality that it offers behind the scenes to organizations of this size.
DCIG is pleased to announce the availability of its DCIG 2016-16 Hyper-converged Infrastructure Buyer’s Guide that weights, scores and ranks over 100 features from nearly 60 hyper-converged solutions from 17 different providers. Driven by growing corporate requirements to more effectively manage, utilize and scale commodity compute and storage for all types of applications, hyper-converged solutions have emerged as a powerful alternative to existing server/SAN and converged infrastructure approaches. Like all previous DCIG Buyer’s Guides, this Buyer’s Guide provides the critical information that organizations need when evaluating hyper-converged infrastructure solutions to create short lists of products that match their specific requirements.
DCIG’s recently published 2015-16 All-Flash Array Buyer’s Guide has been getting a lot of attention, including some pretty harsh criticisms. DCIG published a blog entry earlier this week that addressed the false allegations that DCIG Buyer’s Guides are rigged “pay-to-say” research with predetermined outcomes. Today’s blog entry explains the proper role of a DCIG Buyer’s Guide, and gives vendors an opportunity to provide constructive feedback.
A storage decision that many small, midsize and large enterprise organizations are trying to make concerns which type of array should host their production data. This often comes down to the selection of either an all-flash or a hybrid storage array. Since most organizations do not have the luxury of saying, “Money is no object,” the majority are, for now, selecting hybrid storage arrays to get flash-like performance for their most active application data while using disk to store the bulk of their application data. It is as organizations evaluate hybrid storage arrays that key factors emerge that they need to consider.
Viewing hybrid cloud backup appliances strictly in the context of “backup and recovery” is a mindset that organizations must strive to overcome. While these appliances certainly fulfill this traditional role, new use cases are constantly emerging for them. More specifically, hybrid cloud backup appliances have now matured to the point where organizations may view them more in the framework of a best practice for ensuring their organization’s continuity of business operations.
DCIG is pleased to announce the release of the DCIG 2015-16 Midsize Enterprise Hybrid Storage Array Buyer’s Guide that weights, scores and ranks more than 90 features of twenty-seven (27) different storage arrays or array series from twelve (12) different storage providers.
During the recent HP Deep Dive Analyst Event in its Fremont, CA, offices, HP shared some notable insights into the percentage of backup jobs that complete successfully (and unsuccessfully) within end-user organizations. Among its observations from the anonymized data gathered during hundreds of backup assessments at end-user organizations of all sizes, HP found that over 60% of them had backup job success rates of 98% or lower, with 12% of organizations showing backup success rates of lower than 90%. Yet what is more noteworthy is that, through its use of Big Data analytics, HP has identified large backups (those that take more than 12 hours to complete) as the primary contributor to the backup headaches that organizations still experience.
On March 17, 2015, the Storage Performance Council (SPC) updated its “Top Ten” list of SPC-2 results, which includes performance metrics going back almost three (3) years to May 2012. Noteworthy in these updated results is that the three storage arrays ranked at the top are, in order, a high-end mainframe-centric, monolithic storage array (the HP XP7, OEMed from Hitachi), an all-flash storage array (the K2 box from startup Kaminario) and a hybrid storage array (the Oracle ZFS Storage ZS4-4 Appliance). Making these performance results particularly interesting is that the hybrid storage array, the Oracle ZFS Storage ZS4-4 Appliance, can essentially go toe-to-toe from a performance perspective with both the million-dollar HP XP7 and Kaminario K2 arrays and do so at approximately half their cost.
DCIG is pleased to announce the March 27 release of the DCIG 2015-16 Small/Midsize Enterprise (SME) Hybrid Storage Array Buyer’s Guide that weights, scores and ranks more than 90 features of twenty-two (22) hybrid storage arrays from nine (9) different storage providers.