On the surface, all-inclusive software licensing sounds great. You get all the software features that the product offers at no additional charge. You can use them – or not use them – at your discretion. It simplifies product purchases and ongoing licensing. But what if you opt not to use all the product’s features or only need a small subset of them? In those circumstances, you need to take a hard look at any product that offers all-inclusive software licensing to determine if it will deliver the value that you expect.
Enterprise storage startups are pushing the storage industry forward faster and in directions it may never have gone without them. It is because of these startups that flash memory is now the preferred place to store critical enterprise data. Startups also advanced the customer-friendly all-inclusive approach to software licensing, evergreen hardware refreshes, and pay-as-you-grow utility pricing. These startup-inspired changes delight customers, who are rewarding these startups with large follow-on purchases and Net Promoter Scores (NPS) previously unseen in this industry. Yet the greatest contribution startups may make to the enterprise storage industry is applying predictive analytics to storage.
VMware Virtual Volumes (VVols) stands poised to fundamentally and positively change storage management in highly virtualized environments that use VMware vSphere. However, enterprises will only realize the full benefits that VVols have to offer by implementing a backend storage array that stands ready to take advantage of the VVols architecture. The HP 3PAR StoreServ family of arrays provides the virtualization-first architecture, along with the simplicity of implementation and ongoing management, that organizations need to realize the benefits the VVols architecture provides both short and long term.
Scalable. Reliable. Robust. Well performing. Tightly integrated with hypervisors such as Microsoft Hyper-V and VMware ESXi. These attributes are what every enterprise expects production storage arrays to possess and deliver. But as enterprises grow their infrastructure, they need to manage more storage arrays with the same number of IT staff or fewer. This requirement moves storage array manageability to center stage, which plays directly into the strengths of HP 3PAR StoreServ storage arrays and the HP 3PAR StoreServ Management Console (SSMC).
The introduction of first-generation software-defined storage solutions (often implemented as appliance- and storage controller-based virtualization) went terribly awry years ago, for reasons that the industry is probably only now able to fully understand and articulate. While the value of software-defined storage has never been disputed, best practices for its implementation, management and support, both short and long term, took time to develop. We are now seeing the fruits of these efforts, as evidenced by some of the successful ways in which software-defined storage solutions are packaged and shipped.
Delivering always-on application availability along with the highest levels of capacity, management and performance is what has historically distinguished high-end storage arrays from other storage arrays on the market. But even these arrays struggle to easily deliver on a fundamental data center task: migrating data from one physical array to another. The introduction of the storage virtual array feature in the new HP XP7 dramatically eases this typically complex task. It facilitates data consolidations and migrations by moving entire storage virtual arrays from one physical array frame to another while simplifying array management in the process.
As I was planning my 2014 calendar over the past two weeks, I noticed that two storage conferences that focused on heterogeneous computing environments and were popular from 2000 to 2010 have either gone the way of the dodo bird or are only a shell of what they formerly were. Yet during that same period of time, I met with some storage engineers and architects in the Omaha area who told me their environments are more heterogeneous than possibly ever before. While these trends may seem contradictory on the surface, they underscore the growing frustration that management in companies has with IT in general and how desperately they are looking for IT solutions that just work. In the decade from about 2000 to 2010, the two “can’t miss” conferences in the data storage world were Storage Decisions and Storage Networking World. Data storage was undergoing a huge transformation from being direct-attached to network-attached, and these two conferences were at the center…
Converged infrastructures are emerging as the next “Big Thing” in enterprise datacenters with servers, storage and networking delivered as a single SKU. Yet what providers are beginning to recognize – and what organizations should begin to expect – is that unprecedented jumps in application performance and resource optimization are now possible. The first examples of these jumps are seen in today’s ZS3 Storage Systems announcement from Oracle as it raises the bar in terms of how Oracle Database performance and resource utilization can be delivered while ushering in a new era of application-storage convergence.
The main theme at this year’s EMC World is “Lead the Transformation,” which EMC is illustrating through the use of superhero characters. The superheroes are represented as end users who come up with solutions to manage today’s complex storage environment, while the villain is pictured as “Doc Lock-in,” who requires our superheroes to “lock in” on a single vendor to mitigate this complexity. Yet for those users who think strategically about their storage acquisitions, Doc Lock-in may not be the full-fledged villain that EMC World portrays him to be.
Bad news is only bad until you hear it; then it’s just information followed by opportunity. Information may arrive in political, personal, technological and economic forms. It creates opportunity, which brings people, vision, ideas and investment together. When thinking about a future history of 2013, three opportunities come to mind.
EMC’s VFCache announcement caused a lot of buzz in the storage industry a few months ago, as it was seen by some as a direct response to Fusion-io’s very disruptive ioMemory architecture. Today, in the conclusion of my interview series with Fusion-io’s CMO Rick White, he provides his take on EMC’s recent VFCache announcement and how he sees it impacting both Fusion-io and EMC. (Editor’s Note: This interview with Rick was conducted when EMC’s VFCache was still known as “Project Lightning.”)
This past Monday EMC created a fair amount of buzz in the storage industry with its VFCache announcement that in essence validates the emergence of server-based flash technology in the enterprise. But does EMC VFCache go far enough? Fusion-io, who arguably invented this space, argues, “Definitely not!” In this first of a multi-part interview series with Fusion-io’s Chief Marketing Officer, Rick White, we talk about server-based flash technology and why it is poised to change enterprise data centers.
IBM briefed DCIG this past Wednesday, November 16, on the details of its October Active Cloud Engine product announcement. The briefing covered three functional areas, two products, one statement of direction and, ironically, nothing about the cloud. However, IBM deserves kudos for making a big change to its scale out NAS (SONAS) product during its Active Cloud Engine product announcement.
Last week the DCIG team attended the Fall 2011 Storage Networking World (SNW) show in Orlando, FL. While there were a lot of cool storage companies, only two meetings left any kind of impression on me: one with IBM and another with SNIA.
As part of his opening remarks during his keynote on Tuesday morning, Symantec’s CEO Enrique Salem shared a comment that was made to him by a Symantec user, “We are in the middle of a time of profound meaningful change.” Truer words were never spoken as enterprises of all sizes are facing a broad spectrum of technology changes that are unequaled in this modern era of computing.
One of my all-time favorite movies is The Terminator. It is one of those timeless classics whose video quality was less than optimal, whose special effects were cheesy, and whose dialog was highlighted by “I’ll be back.” Yet despite these flaws, what carried The Terminator and still makes it popular to this day was its compelling story line.
This is one of my favorite times of the year, as I look back on the most popular blog entries on DCIG’s site over the past year based on the number of page views. What makes it so intriguing for me is that it is similar to looking at a big wrapped gift under the Christmas tree and not knowing exactly what is in it. Every year I am never completely sure until this week which blog entries will make up the Top Ten most-read on DCIG’s site. This year is no exception.
We can all get caught up in the hoopla of new and slick storage technology features and lose sight of some of the most important and basic details that keep our storage fabrics up and humming. Among these are the Fibre Channel cabling infrastructure and the distance limitations incurred by continued increases in FC speeds.
Organizations have a proclivity to look at storage arrays primarily in the context of how much storage capacity they offer. But as storage arrays add features such as deduplication and thin provisioning, storage efficiency is taking on new importance as an evaluation criterion when selecting a storage array. This raises questions as to what role, if any, a storage array’s storage efficiency features should play in the final buying decision.