Enterprise storage startups are pushing the storage industry forward faster and in directions it may never have gone without them. It is because of these startups that flash memory is now the preferred place to store critical enterprise data. Startups also advanced the customer-friendly all-inclusive approach to software licensing, evergreen hardware refreshes, and pay-as-you-grow utility pricing. These startup-inspired changes delight customers, who are rewarding these startups with large follow-on purchases and Net Promoter Scores (NPS) previously unseen in this industry. Yet the greatest contribution startups may make to the enterprise storage industry is applying predictive analytics to storage.
In almost every industry there is a tendency to use phrases such as Tier 1, Tier 2, and Tier 3 to describe providers, the products in a specific market, the quality of service provided, or some combination thereof. It is when one applies these three terms to the storage industry and attempts to properly classify storage providers into one of these tiers that the conversation becomes intriguing. After all, how does one define what constitutes a Tier 1 storage provider and separates it from the other providers in the market?
Usually when I talk to backup and system administrators, they willingly talk about how great a product installation was. But it then becomes almost impossible to find anyone who wants to comment about what life is like after their backup appliance is installed. This blog entry represents a bit of an anomaly in that someone willingly pulled back the curtain on what their experience was like after the appliance was installed. In this third installment of my interview series, system architect Fidel Michieli describes how the implementation of Cohesity went in his environment and how Cohesity responded to issues that arose.
Every now and then a technology comes along that prompts enterprises to undertake a complete do-over of their existing data center infrastructures. This type of dramatic change is already occurring within organizations of all sizes that are adopting and implementing SimpliVity.
Few data center technologies currently generate more buzz than hyper-converged infrastructure solutions. By combining compute, data protection, flash, scale-out, and virtualization into a single self-contained unit, organizations get the best of what each of these individual technologies has to offer with the flexibility to implement each one in such a way that it matches their specific business needs. Yet organizations must exercise restraint in how many attributes they ascribe to hyper-converged infrastructure solutions as their adoption is a journey, not a destination.
DCIG recently released two Buyer’s Guides on Hybrid Storage Arrays – the DCIG 2015-16 SME Hybrid Storage Array and the DCIG 2015-16 Midsize Enterprise Hybrid Storage Array – that examine many of the features that hybrid storage arrays offer. Yet these Guides can only reveal at a high level which features hybrid storage arrays offer; they do not get into any real detail about how those features are implemented. One such feature is Quality of Service.
It is almost a given in today’s world that for almost any organization to operate at peak efficiency and achieve optimal results it has to acquire and use multiple forms of technology as part of its business processes. However, what is not always so clear are the forces at work both inside and outside of the business that drive its technology acquisitions. While by no means a complete list, here are four (4) forces that DCIG often sees at work behind the scenes that influence and drive many of today’s technology infrastructure buying decisions.
Facebook is turning to a disaggregated racks strategy to create a next-generation cloud computing data center infrastructure.
Think “Dell” and you may think “PCs,” “servers,” or, even more broadly, “computer hardware.” If so, you are missing out on one of the biggest transformations going on among technology providers today as, over the last 5+ years, Dell has acquired multiple software companies and is using that intellectual property (IP) to drive its internal turnaround. In this sixth installment of my interview series with Brett Roscoe, General Manager, Data Protection for Dell Software, we discuss how these software acquisitions are fueling Dell’s transformation from a hardware provider into a solutions provider.
2014 may eventually come to be characterized as the year of the tech break up. Tech conglomerates such as CA Technologies, HP, IBM and, most recently, Symantec have all opted to go down the “break up” route while others such as Cisco and EMC continue to experience internal and external pressures to pursue this option. But as enterprises look to create more agile, automated, cohesive infrastructures, it may ultimately be those such as Dell and Oracle that are opting to “make up” that end up best positioned to deliver on these enterprise demands.
There will always be those organizations and individuals that will buy hardware and software at the lowest possible price, assemble these pieces themselves and then support the solution in production. But the time when organizations had to assemble the underlying components for key applications such as databases, email, file servers and now even backup has largely passed. In its stead, canned solutions such as appliances, converged infrastructures and reference architectures have emerged as the future of corporate IT.
2013 has become the year in which discussions around software-defined data centers, networking and storage have gone mainstream. But when I talk with end-users from a number of organizations, they are somewhat scratching their heads over why there is so much buzz around this technology. Most are looking to acquire and deploy technologies in their environments that are simpler to deploy and manage – not harder. As such, they sense these new software-defined solutions may only take them back in time to a place they do not want to be.
As many new and existing vendors (Scale Computing, SimpliVity, Pivot3, Nutanix) come out with these “Datacenter (DC) in a Box” and “Compute in a Can” types of solutions, it is worth noting that these solutions are not only for SMBs but are ones that enterprise shops should consider as well.
Every organization likes more storage capacity at a lower cost per GB. What makes them nervous are the growing risks they face from data loss as a result of a failed hard disk drive (HDD) in their virtualized environment. In this fourth part of my interview series with Scale Computing’s Global Solution Architect, Alan Conboy, and its EVP and GM, Patrick Conte, Alan chimes in as to what Scale Computing has done to eliminate the exposure window associated with failed HDDs.
IT staff in midsized organizations face a peculiar challenge: they are expected to be masters of the technology in use at the organization as well as being up-to-speed on all internal business initiatives. To accomplish this twin feat, they need a new type of product that takes the best technologies available today, packages them as a single SKU, and is easy to install and manage.
About a decade ago, give or take a few years, a huge debate raged in the storage industry as to what was the best form of storage virtualization. However, all that debate created over time was an equally large sense of fatigue, with many people souring on the whole topic of storage virtualization. To resolve that, the term “storage virtualization” was given a facelift at the 2013 EMC World along with a politically correct new name: Software-Defined Storage, which is available from EMC as EMC ViPR.
Businesses’ need for greater responsiveness from their IT departments is driving data center automation. Data center automation requires a new approach to network architecture, one that results in networks that are flat for high performance, multipath for high availability, and open to orchestration for quick provisioning and re-provisioning as application loads move within and among data centers.
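To make the “open to orchestration” point concrete, here is a minimal sketch of an automation workflow calling a network fabric controller’s REST API to provision, and later release, a network segment as an application load moves between data centers. The controller URL, endpoints, payload fields, and credential shown are hypothetical placeholders, not any specific vendor’s API.

# Minimal sketch (hypothetical API): provision and release a network segment
# through an orchestration-friendly fabric controller as workloads move.
import requests

FABRIC_API = "https://fabric.example.com/api/v1"  # placeholder controller URL
HEADERS = {"Authorization": "Bearer <token>"}     # placeholder credential


def provision_segment(name: str, vlan_id: int, sites: list[str]) -> str:
    """Ask the controller for a new segment stretched across the listed sites."""
    payload = {"name": name, "vlanId": vlan_id, "sites": sites}
    resp = requests.post(f"{FABRIC_API}/segments", json=payload,
                         headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()["segmentId"]  # assumed response field


def release_segment(segment_id: str) -> None:
    """Release the segment once the application load has moved elsewhere."""
    resp = requests.delete(f"{FABRIC_API}/segments/{segment_id}",
                           headers=HEADERS, timeout=30)
    resp.raise_for_status()


if __name__ == "__main__":
    seg_id = provision_segment("erp-web", vlan_id=210, sites=["dc-east", "dc-west"])
    print(f"Provisioned segment {seg_id}")

The specific calls matter less than the principle: provisioning and re-provisioning become API-driven steps an orchestration workflow can invoke in seconds rather than tickets a network team works through over days.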
A question that gets raised almost every time that DCIG releases a Buyer’s Guide is, “Why are performance metrics not included in the Guide or considered in its evaluation of the products?” While DCIG has answered this question in various ways and in a number of forums over the last few years, we thought it appropriate to aggregate those randomly posted responses into a more definitive blog entry to address this particular question as it inevitably comes up.
I have disclosed the blog entries that have earned an honorable mention on DCIG’s website for the number of page views they received in 2012. I have also already revealed the Top 5 blog entries written in 2012 that were the most frequently read in 2012. So it is time today to begin to reveal the Top 10 most frequently viewed blog entries on DCIG’s website in 2012 regardless of what year they were published, starting with numbers 6 – 10.
Over the last five (5) years Dell has invested around $10 billion to acquire a far-ranging set of hardware and software companies, including Compellent, EqualLogic, ExaNet, Ocarina, Quest, SonicWALL and others, in an effort to transform itself into an enterprise solutions provider. With these acquisitions now made, people are rightfully beginning to ask CEO Michael Dell, “When exactly can end-users expect to see an integrated end-to-end data center solution that is based on these acquisitions?” The precise answer to that question still eludes even him.