Delivering always-on application availability along with the highest levels of capacity, management and performance is what has historically distinguished high-end storage arrays from the other storage arrays on the market. Yet even these arrays struggle to deliver easily on a fundamental data center task: migrating data from one physical array to another. The storage virtual array feature introduced in the new HP XP7 dramatically eases this typically complex task, facilitating consolidations and migrations by moving entire storage virtual arrays from one physical array frame to another while simplifying array management in the process.
Category Archives: Storage Management
As I was planning my 2014 calendar over the past two weeks, I noticed that two storage conferences that focused on heterogeneous computing environments and were popular from 2000-2010 have either gone the way of the dodo bird or are only a shell of what they formerly were. Yet during that same period of time, I met with storage engineers and architects in the Omaha area who told me their environments are possibly more heterogeneous than ever before. While these trends may seem contradictory on the surface, they underscore the growing frustration that management in companies has with IT in general and how desperately they are looking for IT solutions that just work.
In the decade ranging from about 2000-2010, the two “can’t miss” conferences in the data storage world were Storage Decisions and Storage Networking World. Data storage was undergoing a huge transformation from being direct-attached to network-attached and these two conferences were at the center of the vortex. Anyone who was anyone in the storage industry – analyst, vendor or end-user – was at these events as they showcased the best of what traditional players had to offer as well as many of the emerging technologies that were promising to re-shape the storage market.
It was at these conferences, many of which I attended, that I first saw technologies such as backup appliances, deduplication, public storage clouds, scale-out storage, storage virtualization, storage resource management, thin provisioning and virtual server backup, to name just a few. Each of these promised, and largely delivered on, solving key pain points that users were experiencing.
Yet over time these conferences fell short in one important respect, and it led to their demise. They brought competing vendors together in one place so users could view their wares, evaluate their products and bring them in-house to test and/or implement. What these conferences ultimately failed to do was transform themselves from showcasing solutions to specific customer pain points into hosting vendors that offered holistic, macro-management solutions capable of managing all of the product-specific solutions users had acquired over the years.
This was brought clearly into focus for me over lunch a couple of weeks ago with a storage architect and a storage engineer. These two individuals are part of a global storage team responsible for managing all of the point solutions from various vendors brought in over the past decade. While theirs is truly a heterogeneous environment, they find it very complex to manage, skill sets acquired in managing one technology do not easily transfer to managing other similar technologies, and vendor support for managing such a heterogeneous environment is sketchy at best. Adding to their frustration, they have to support this environment while also supporting the latest management initiative that is supposed to fix all of these issues (a.k.a. the cloud).
This leads us to why organizations have largely shifted away from conferences sponsored by independent third parties such as TechTarget and IDC toward vendor-sponsored events. Vendors like Dell, EMC, HP, IBM, Microsoft, Symantec and VMware now host their own conferences that attract thousands if not tens of thousands of users, in large part because these vendors feed the end-user belief that adopting their cloud solution will make it easy to manage the heterogeneous kludge created by the buying decisions of 2000-2010.
While the approach varies slightly by provider, the general theme is this: buy all new gear from us that now magically works together; pay us a bunch of money for services to migrate data off your old IT gear onto this new gear; then stand back and enjoy all of the benefits of our cloud once all of your data is hosted on our equipment. This may sound simplistic, but it is the common theme in every pitch I hear. It is also why organizations, hoping against hope that what these vendors are saying is true, are attending these vendor-sponsored conferences in growing numbers.
My thoughts are these. First, the cloud solutions these providers promise will fix all of your existing problems probably will not, at least not all of them. They may well solve a subset of the problems, but they will likely only make your existing heterogeneous environment even more heterogeneous, if that is possible. There are simply too many legacy products, with their own proprietary protocols or stand-alone requirements, that organizations will never be able to virtualize away by putting them into a cloud.
Second, this infatuation with vendor-sponsored conferences is likely a near-term trend that has not yet fully run its course. At some point attendees are going to realize that no set of solutions from one provider is ever going to fully solve all of their problems, despite what vendors may promise. As this realization sinks in (and it may take a few years), users will again start to seek out conferences that offer holistic solutions mature enough to manage products from multiple other vendors.
Third, such vendors still do exist and are even thriving despite the homogeneous, converged infrastructure mindset in which many organizations find themselves. Even as I write this, I am sitting in Colorado Springs, CO, attending a STORServer conference which recently inked a deal with CommVault so it could deliver a CommVault-powered backup appliance that is better suited to protecting today’s mobile, enterprise IT environments.
Organizations are understandably frustrated by the lack of interoperability and the inability to manage the heterogeneous assortment of solutions they have purchased over the years. However, they need to be wary of falling into the trap set at today’s vendor-sponsored conferences: the idea that homogeneous solutions are going to solve all of their problems. They might, but I would not bet the farm on it. Rather, I am inclined to believe that heterogeneous IT environments will be alive and kicking for many years to come, and that the sooner organizations recognize that and find (or even build) a solution to manage them, the happier they will be with how their IT environment operates.
Converged infrastructures are emerging as the next “Big Thing” in enterprise datacenters with servers, storage and networking delivered as a single SKU. Yet what providers are beginning to recognize – and what organizations should begin to expect – is that unprecedented jumps in application performance and resource optimization are now possible. The first examples of these jumps are seen in today’s ZS3 Storage Systems announcement from Oracle as it raises the bar in terms of how Oracle Database performance and resource utilization can be delivered while ushering in a new era of application-storage convergence.
The main theme at this year’s EMC World is “Lead the Transformation” that EMC is illustrating through the use of superhero characters. The superheroes are represented as end users who come up with solutions to manage today’s complex storage environment while the villain is pictured as “Doc Lock-in” who requires our superheroes to “lock-in” on a single vendor to mitigate this complexity. Yet for those users who think strategically about their storage acquisitions, Doc Lock-in may not be the full-fledged villain that EMC World portrays him to be.
Bad news is only bad until you hear it; then it is just information followed by opportunity. Information may arrive in political, personal, technological and economic forms. It creates opportunity, which brings people, vision, ideas and investment together. When thinking about a future history of 2013, three opportunities come to mind.
EMC’s VFCache announcement caused a lot of buzz in the storage industry a few months ago, as some saw it as a direct response to Fusion-io’s very disruptive ioMemory architecture. Today, in the conclusion of my interview series with Fusion-io’s CMO Rick White, he provides his take on EMC’s recent VFCache announcement and how he sees it impacting both Fusion-io and EMC. (Editor’s Note: This interview with Rick was conducted when EMC’s VFCache was still known as “Project Lightning.”)
This past Monday EMC created a fair amount of buzz in the storage industry with its VFCache announcement that in essence validates the emergence of server-based flash technology in the enterprise. But does EMC VFCache go far enough? Fusion-io, who arguably invented this space, argues, “Definitely not!” In this first of a multi-part interview series with Fusion-io’s Chief Marketing Officer, Rick White, we talk about server-based flash technology and why it is poised to change enterprise data centers.
This past Wednesday, November 16, IBM briefed DCIG on the details of its October Active Cloud Engine product announcement. The briefing covered three functional areas, two products, one statement of direction and, ironically, nothing about the cloud. However, IBM deserves kudos for making a big change to its scale out NAS (SONAS) product as part of the Active Cloud Engine announcement.
Last week the DCIG team attended the Fall 2011 Storage Networking World (SNW) show in Orlando, FL. While there were a lot of cool storage companies, only two meetings left any kind of impression on me: one with IBM and another with SNIA.
As part of his opening remarks during his keynote on Tuesday morning, Symantec’s CEO Enrique Salem shared a comment that was made to him by a Symantec user, “We are in the middle of a time of profound meaningful change.” Truer words were never spoken as enterprises of all sizes are facing a broad spectrum of technology changes that are unequaled in this modern era of computing.
One of my favorite movies of all time is The Terminator. It is one of those timeless classics whose video quality was less than optimal, whose special effects were cheesy and whose dialog was highlighted by “I’ll be back.” Yet despite these flaws, what carried The Terminator and still makes it popular to this day was its compelling story line.
This is one of my favorite times of the year, as I look back on the most popular blog entries on DCIG’s site over the past year based on the number of page views. What makes it so intriguing for me is that it is like looking at a big wrapped gift under the Christmas tree and not knowing exactly what is inside. Every year I am never completely sure until this week which blog entries will make up the Top Ten most-read on DCIG’s site. This year is no exception.
We can all get caught up in the hoopla of new and slick storage technology features and lose sight of some of the most important and basic details that keep our storage fabrics up and humming. Among these are Fibre Channel cabling infrastructures and the distance limitations incurred by continued increases in FC speeds.
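To make that trade-off concrete, the reach of a multimode FC link shrinks sharply as link speed climbs. The small Python sketch below encodes approximate, commonly cited maximum distances for OM2 and OM3 cabling; the figures and helper names are illustrative assumptions, so consult the T11 FC-PI specifications and your optics vendor for authoritative limits.

```python
# Approximate multimode Fibre Channel reach by link speed and cable grade.
# These are commonly cited ballpark figures, not authoritative values.
FC_REACH_METERS = {
    # speed (Gb/s): {cable grade: approximate max distance in meters}
    2:  {"OM2": 300, "OM3": 500},
    4:  {"OM2": 150, "OM3": 380},
    8:  {"OM2": 50,  "OM3": 150},
    16: {"OM2": 35,  "OM3": 100},
}

def max_reach(speed_gb: int, cable: str) -> int:
    """Return the approximate maximum cable run for a given speed and grade."""
    return FC_REACH_METERS[speed_gb][cable]

if __name__ == "__main__":
    # Note how each doubling of speed roughly halves the usable distance.
    for speed in sorted(FC_REACH_METERS):
        print(f"{speed}Gb FC over OM3: ~{max_reach(speed, 'OM3')}m")
```

A table like this makes it easy to flag cable runs in an existing fabric that will fall out of spec after a speed upgrade.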
Organizations have a proclivity to look at storage arrays primarily in the context of how much storage capacity they offer. But as storage arrays add features such as deduplication and thin provisioning, storage efficiency is taking on new importance as an evaluation criterion when selecting a storage array. This raises the question of what role, if any, a storage array’s storage efficiency features should play in the final buying decision.
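As a rough illustration of why efficiency features belong in that evaluation, the sketch below estimates the logical capacity an array can present once deduplication and thin-provisioning oversubscription are factored in. The function name and the ratios are hypothetical assumptions chosen for illustration, not measurements from any particular array.

```python
# A minimal sketch of how efficiency features change a capacity-based
# buying decision. Ratios here are illustrative assumptions only.
def effective_capacity_tb(raw_tb: float,
                          dedupe_ratio: float = 1.0,
                          thin_oversubscription: float = 1.0) -> float:
    """Estimate the logical capacity an array can present after
    deduplication and thin-provisioning oversubscription are applied."""
    return raw_tb * dedupe_ratio * thin_oversubscription

# Two hypothetical arrays: a 100 TB array with no efficiency features
# versus a 60 TB array assuming 2:1 dedupe and 1.2x thin oversubscription.
plain = effective_capacity_tb(100)
efficient = effective_capacity_tb(60, dedupe_ratio=2.0, thin_oversubscription=1.2)
print(plain, efficient)  # the smaller raw array presents more logical capacity
```

The point of the exercise is that raw terabytes alone can rank two arrays in the wrong order; the assumed efficiency ratios, of course, need to be validated against your own data before they factor into a purchase.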
The real news this past week out of EMC World is not that EMC has decoupled its VMAX or Symmetrix controller heads from its back end disk drives, added some bells and whistles to it and called it “VPLEX”. The big news in my mind is that this decoupling puts the storage industry on notice that EMC has officially begun its transformation from a disk vendor into a provider of storage intelligence.
Upon arriving at Symantec Vision on Wednesday morning, it quickly became evident that the messaging at this year’s event focused on how the business world is shifting from a Systems-Centric View of data management (policies and governance are set according to the physical devices on which data resides, such as servers, networking and storage) to an Information-Centric View (policies and governance are set independently of the storage device on which the data resides).
Backup software is, if nothing else, a “me-too” space, with each vendor adding new features to each release of its product to match what competitors are doing while also adding a few twists of its own to differentiate itself from the crowd. Today’s CA announcement of ARCserve r12.5 continues this trend. To remain competitive, r12.5 adds data deduplication as a core component of ARCserve, improves users’ ability to recover guest VMs on virtual server operating systems and more tightly integrates ARCserve with popular applications. CA seeks to differentiate ARCserve from competitors with new native SRM reporting capabilities and by providing assurance that organizations can restore their deduplicated backup data.
You can’t talk about storage these days without including virtualization somewhere in the conversation. The Spring 2009 SNW was no different, as one of its Summits was devoted to virtualization. The Tuesday, April 7, Virtualization Summit proved very interesting even though it was dominated by vendors. Some of the better data points from this Summit came from TheInfoPro and Boston Medical Center. Interesting tidbits on SSDs are also emerging, as SSDs appear to solve performance challenges for VMware access to storage in high-I/O environments as well as in performance-intensive development environments.
2009 is shaping up as the year of server virtualization. The hype around Citrix XenServer, Microsoft Hyper-V and VMware ESX Server is giving way to the reality of companies actually virtualizing their production servers as a means to improve energy efficiencies and slash infrastructure costs. But as companies virtualize these servers, many are leaving the familiarity of direct attached storage (DAS) and entering the world of networked storage for the first time. This is creating new challenges, especially for Windows servers using utilities such as defragmenters that will begin to operate on virtual machines (VMs) and defragment each VM’s associated file system.
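To illustrate one such challenge, the toy model below (not tied to any product) sketches why running a defragmenter inside a guest can inflate a thin-provisioned virtual disk: the hypervisor allocates backing storage for every guest block ever written, and defragmentation rewrites data into regions the guest had never touched. The function and block counts are hypothetical.

```python
# Toy model: a thin-provisioned virtual disk must back every block the
# guest has ever written, because the hypervisor cannot reclaim the old
# block locations without cooperation from the guest file system.
def allocated_after_defrag(written_blocks: set[int], moved_to: set[int]) -> int:
    """Blocks the thin disk must keep allocated: the union of everything
    written before defrag and the new locations defrag wrote into."""
    return len(written_blocks | moved_to)

# Guest wrote 1,000 scattered blocks; defrag then consolidates them into
# a contiguous run of 1,000 previously untouched blocks.
before = set(range(0, 2000, 2))          # 1,000 fragmented blocks
after_defrag = set(range(10000, 11000))  # 1,000 contiguous new blocks
print(allocated_after_defrag(before, after_defrag))  # prints 2000: allocation doubled
```

In other words, an operation that is harmless on a fully provisioned direct-attached disk can double the space a thin-provisioned networked disk consumes, which is why defragmentation policy deserves a fresh look after virtualization.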
In the computer industry, Diskeeper is as synonymous with disk defragmentation as Microsoft is with Windows. In fact, any knowledgeable Microsoft Windows administrator knows that defragmenting a disk drive can provide application performance boosts of up to 176 percent, if you believe some reports. That makes Diskeeper a must-have in the eyes of some shops with performance-intensive applications running on Windows servers. But as more enterprises virtualize their servers and disk drives, how does Diskeeper’s technology remain relevant? To get some answers, I recently spoke with Derek De Vette, VP of Public Affairs for Diskeeper Corporation.