The business case for organizations with petabytes of file data under management to classify that data and then place it across multiple tiers of storage has never been greater. By distributing this data across disk, flash, tape and the cloud, they stand to realize significant cost savings. The catch is finding a cost-effective solution that makes it easier to administer and manage file data than simply storing it all on flash storage. This is where a solution such as what Quantum now offers comes into play.
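Classification-driven placement of the kind described above can be sketched as a simple age-based policy. The tier names and thresholds below are purely illustrative assumptions, not Quantum's actual policy engine:

```python
# Hypothetical tiering policy: route a file to a storage tier based on
# how many days have passed since it was last accessed.
TIERS = [(30, "flash"), (180, "disk"), (720, "tape")]  # (max age in days, tier)

def place(age_days):
    """Pick a storage tier for a file given its last-access age (sketch)."""
    for max_age, tier in TIERS:
        if age_days <= max_age:
            return tier
    return "cloud"  # the coldest data lands in a cloud archive tier
```

In a real solution the classification criteria would span far more than access age (file type, owner, compliance policy), but the core idea is the same: evaluate each file against ordered rules and emit a placement decision.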
When one looks at today’s lineup of software products that one would classify as cloud data protection, one might assume that every such product natively offers source-side deduplication. That assumption would be wrong. Software such as HPE Data Protector does not natively offer source-side deduplication, but its reasons for opting out make sense once one takes a deeper look at the product.
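The source-side deduplication concept mentioned above can be illustrated with a minimal sketch: the client hashes each chunk of data locally and transmits only chunks the backup target has not already stored. This is a generic illustration of the technique, not HPE Data Protector's or any vendor's implementation; the tiny 4-byte chunk size is for readability only:

```python
import hashlib

def chunks(data, size=4):
    """Split data into fixed-size chunks (tiny size for illustration)."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def backup(data, server_index):
    """Source-side dedup sketch: hash each chunk on the client and
    'send' only chunks whose digest the server has never seen."""
    sent = []
    for c in chunks(data):
        digest = hashlib.sha256(c).hexdigest()
        if digest not in server_index:  # client asks: "do you have this chunk?"
            server_index[digest] = c    # only then is the chunk transmitted
            sent.append(c)
    return sent

index = {}
first = backup(b"ABCDABCDEFGH", index)   # first run sends only unique chunks
second = backup(b"ABCDABCDEFGH", index)  # identical second run sends nothing
```

The payoff is that the deduplication work, and hence the bandwidth savings, happens before data leaves the source; the trade-off, and one reason a vendor might opt out, is the extra CPU load it places on production clients.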
The DCIG 2017-18 Hyperconverged Infrastructure Appliance Buyer’s Guide weights, scores and ranks more than 100 features of twenty-four (24) products from five (5) vendors that achieved rankings of Recommended or Excellent. This Buyer’s Guide offers much of the information an organization should need to make a highly-informed decision as to which hyperconverged appliance will suit their needs.
The DCIG 2017-18 Small/Midsize Enterprise All-flash Array Buyer’s Guide weights, scores and ranks more than 100 features of twenty-four (24) small/midsize enterprise-class all-flash arrays that achieved rankings of Recommended or Excellent. These products come from eleven (11) vendors including Dell EMC, Fujitsu, iXsystems, Kaminario, NEC, NetApp, Nimble Storage, Pivot3, Pure Storage, Tegile and Tintri. This Buyer’s Guide offers much of the information an organization should need to make a highly-informed decision as to which all-flash storage array will suit their needs.
Detect. Protect. Recover. I often see those three words when someone discusses the best methods for companies to deal with the scourge of ransomware. But stringing three words together in a marketing slogan does not a solution make. While understanding the steps needed to protect oneself against ransomware is certainly a requirement, knowing what features that backup software should possess and which products possess those features are equally important.
The success and popularity of the DCIG Buyer’s Guides stem first from the methodology that DCIG uses to gather and synthesize product data and then how it publishes its findings. DCIG applies five internal guidelines to best define and identify products from a DCIG Body of Research to include in each Buyer’s Guide Edition. Further, each DCIG Buyer’s Guide discloses why it may not cover certain products and provides guidance to readers on how to best use the Guide.
Recently, cloud backup and Disaster Recovery as a Service (DRaaS) have gone from niche markets into the mainstream, with companies of ever larger sizes bringing these two technologies in-house. Zetta is one such provider that has largely grown up with this market, having first started out as a cloud storage provider in 2008 before adding cloud backup and DRaaS offerings in recent years. Last week I had the opportunity to speak with its CEO Mike Grossman, who provided me with an update on Zetta and its technology offerings. Here are the key points that I took away from that conversation.
Approximately a month ago I posted a blog entry that examined what features constitute and separate Tier 1 providers from Tier 2 or lower providers in the marketplace. In that blog entry, I concluded that product features alone are insufficient to classify a provider as Tier 1. When one lays aside product features, four other characteristics emerge that a provider must possess, and which DCIG can objectively evaluate, in order to classify it as Tier 1.
In almost every industry there is a tendency to use phrases such as Tier 1, Tier 2, and Tier 3 to describe providers, the products in a specific market, the quality of service provided, or some combination thereof. It is when one applies these three terms to the storage industry and attempts to properly classify storage providers into one of these various tiers that the conversation becomes intriguing. After all, how does one define what constitutes and separates a Tier 1 storage provider from other providers in the market?
The DCIG 2016-17 Midrange Unified Storage Array Buyer’s Guide weights, scores and ranks more than 100 features of twenty-three (23) products from eight (8) different storage vendors. Using ranking categories of Best-in-Class, Recommended and Excellent, this Buyer’s Guide offers much of the information an organization should need to make a highly informed decision as to which midrange unified storage array will suit their needs.
Anyone who has ever had to make a product choice that involves tens or hundreds of thousands of dollars knows that one of the more challenging aspects at the conclusion of the process is separating product fact from fiction. Often, the closer an organization gets to finalizing its buying decision, the more aggressive the competing vendors become in spreading fear, uncertainty, and doubt (FUD) to discredit the products and/or services of the other vendors. Using DCIG’s Competitive Research services, organizations may gain access to the critical data that they need to help separate myth from reality and reach a proper conclusion.
To help organizations evaluate available enterprise storage arrays and make informed decisions about the most appropriate array for their needs, DCIG is pleased to announce the availability of its body of research into enterprise storage arrays. This research, presented and made available through DCIG’s Analysis Portal, directly addresses a challenge that organizations routinely encounter when buying storage arrays.
In the last 12-18 months, software-only software-defined storage (SDS) seems to be on the tip of everyone’s tongue as the “next big thing” in storage. However, getting some agreement as to what features constitute SDS software, who offers it, and even who competes against whom can be difficult, as provider allegiances and partnerships quickly evolve. In this second installment of my interview series with Nexenta’s Chairman and CEO, Tarkan Maner, he provides his views on how SDS software is impacting the competitive landscape, and how Nexenta seeks to differentiate itself.
The end game for many hyper-converged providers is pretty clear: make inroads into enterprise data centers. To do that, however, requires that these solutions bring to market the features and functionality that enterprises expect and need to effectively and easily manage them short and long term. SimpliVity’s introduction of more automation and orchestration tools into its OmniStack 3.5 product should put enterprises on notice that SimpliVity has their data centers squarely in its sights.
Mapping worldwide names (WWNs) to LUNs and doing recurring rezoning in FC SANs is a reality that every SAN administrator deals with on a regular basis. However, the latest features found in Gen 6 FC offer new hope for these individuals by making these jobs simpler and easier to perform. In this third and final installment in my interview series with QLogic’s Vice President of Products, Marketing and Planning, Vikram Karvat, he provides some insight into the multiple new features that Gen 6 FC offers to help SAN and storage administrators perform their jobs more efficiently and effectively.
All-flash arrays, cloud computing, cloud storage, and converged and hyper-converged infrastructures may grab many of today’s headlines. But the decades-old Fibre Channel protocol is still a foundational technology present in many data centers, holding steady in the U.S. and even gaining increased traction in countries such as China. In this first installment, QLogic’s Vice President of Products, Marketing and Planning, Vikram Karvat, provides some background as to why Fibre Channel (FC) remains relevant and how all-flash arrays are one of the forces driving the need for 32Gb FC.
Walking through airports, listening to the radio or watching television, it is difficult to miss the “Barracuda Networks” name on airport hallway posters or during commercial breaks. However, as one who covers enterprise data protection and data storage, I still tend to think of “Barracuda Networks” in the context of “small and midsized enterprise.” While Barracuda did not try to dissuade me of that mindset in a recent conversation I had with it, that conversation did reveal six little-known facts and features about the enterprise functionality that it offers behind the scenes to organizations of this size.
DCIG is pleased to announce the availability of its DCIG 2016-16 Hyper-converged Infrastructure Buyer’s Guide that weights, scores and ranks over 100 features from nearly 60 hyper-converged solutions from 17 different providers. Driven by growing corporate requirements to more effectively manage, utilize and scale commodity compute and storage for all types of applications, hyper-converged solutions have emerged as a powerful alternative to existing server/SAN and converged infrastructure approaches. Like all previous DCIG Buyer’s Guides, this Buyer’s Guide provides the critical information that organizations need when evaluating hyper-converged infrastructure solutions to create short lists of products that match their specific requirements.
DCIG’s recently published 2015-16 All-Flash Array Buyer’s Guide has been getting a lot of attention, including some pretty harsh criticisms. DCIG published a blog entry earlier this week that addressed the false allegations that DCIG Buyer’s Guides are rigged “pay-to-say” research with predetermined outcomes. Today’s blog entry explains the proper role of a DCIG Buyer’s Guide, and gives vendors an opportunity to provide constructive feedback.
One storage decision that many small, midsize and large enterprise organizations are trying to make is what type of array to host their production data on. This often comes down to the selection of either an all-flash or a hybrid storage array. Since most organizations do not have the luxury of saying, “Money is no object,” the majority are, for now, selecting hybrid storage arrays to get flash-like performance for their most active application data while using disk to store the bulk of their application data. It is as organizations evaluate hybrid storage arrays that key factors emerge that they need to consider.