It is almost a given today that any organization must acquire and use multiple forms of technology as part of its business processes in order to operate at peak efficiency and achieve optimal results. What is not always so clear, however, is the set of forces at work both inside and outside of the business that drives its technology acquisitions. While by no means a complete list, here are four forces that DCIG often sees at work behind the scenes, influencing and driving many of today’s technology infrastructure buying decisions.
Category Archives: Virtualization
The introduction of first-generation software-defined storage solutions (often implemented as appliance- and storage controller-based virtualization) went terribly awry years ago, for reasons that the industry probably only now fully understands and can articulate well. While the value of software-defined storage has never been disputed, best practices for its short- and long-term implementation, management and support took time to develop. We are now seeing the fruits of these efforts in some of the successful ways in which software-defined storage solutions are packaged and shipped.
In the early 2000s I was a big believer in appliance- and/or array-based storage virtualization technology. To me, it seemed like the most logical choice to solve some of the most pressing problems confronting the deployment of storage networks in enterprise data centers, such as data migrations, storage optimization and reducing storage networking’s overall management complexity. Yet here we find ourselves in 2015 and, while appliance- and storage array-based storage virtualization still exists, it certainly never became the runaway success that many envisioned at the time. Here are my top three reasons why this technology went wrong and has yet to fully realize its promise. It did not and still does not sufficiently scale to meet enterprise requirements. The big appeal to me of storage virtualization appliances and/or array controllers was that they could aggregate all of an infrastructure’s storage arrays and their capacity into one giant pool of storage which could then…
Facebook is turning to a disaggregated rack strategy to create a next-generation cloud computing data center infrastructure.
Physical, purpose-built deduplicating backup appliances have found their way into many enterprise data centers because they expedite installation and simplify ongoing management of backup data. However, there is a growing business case for virtual appliances that offer the benefits of deduplication without the associated hardware costs. To determine whether a virtual appliance is the correct choice, enterprises must evaluate several key factors to arrive at the right decision for a specific office or environment.
As DCIG prepares to release its forthcoming DCIG 2014-15 Virtual Server Backup Software Buyer’s Guide, it has uncovered a number of changes in the features that virtual server backup software products offer and in the ways they offer them. While support for VMware and its various capabilities certainly remains a focal point, support for other hypervisors, connectivity to public storage clouds and even tape support are becoming a bigger part of the conversation. Here are five early insights that DCIG has gleaned from the research it has completed in recent weeks and months.
Organizations’ data center infrastructures are becoming increasingly virtualized, which is leading them to aggressively virtualize the storage arrays in their infrastructure to complement their already virtualized server environments. As they do so, it behooves them to distinguish between, and clearly understand, each virtual component that makes up their newly virtualized storage infrastructure. The need to clarify this terminology comes clearly into focus as organizations evaluate the multi-tenancy and virtual storage array capabilities found on many high-end storage arrays.
Choosing the right backup appliance – physical or virtual – does not have to be complicated so long as an organization knows the right questions to ask and gathers the appropriate information. However, once organizations gather this information, most conclude that a virtual backup appliance is NOT the right answer in most circumstances. In this fifth and final installment of DCIG’s interview with STORServer President Bill Smoldt, he explains how to choose the most appropriate backup appliance for your environment and why a virtual backup appliance is probably not the choice you will make.
VMware® VMmark® has quickly become the performance benchmark to which many organizations turn to quantify how many virtual machines (VMs) they can realistically expect to host, and have perform well, on a cluster of physical servers. Yet a published VMmark score for a specified hardware configuration may overstate or, conversely, fail to fully reflect a particular solution’s VM consolidation and performance capabilities. The published HP ProLiant BL660c VMmark benchmarks, run against a backend HP 3PAR StoreServ 7450 all-flash array, provide the relevant, real-world results that organizations need to achieve maximum VM density, maintain or even improve VM performance as they scale, and control costs as they grow.
Delivering always-on application availability accompanied by the highest levels of capacity, management and performance is what historically distinguishes high-end storage arrays from other storage arrays on the market. But even these arrays struggle to deliver easily on a fundamental data center task: migrating data from one physical array to another. The introduction of the storage virtual array feature in the new HP XP7 dramatically eases this typically complex task. It facilitates data consolidations and migrations by moving entire storage virtual arrays from one physical array frame to another while simplifying array management in the process.
ITaaS is the new Holy Grail, with 75 percent of IT managers saying ITaaS aligns with their organization’s philosophy and needs. For organizations accustomed to living in a world where each application had dedicated servers, networking and storage, ITaaS eliminates this issue. It aggregates these resources into a common pool that is accessible by all virtual machines (VMs) and their hosted applications, which may be owned by multiple departments or even different organizations. These resources may then be allocated to them at any time.
Anytime DCIG prepares a Buyer’s Guide – whether a net-new Buyer’s Guide or a refresh of an existing one – it uncovers a number of interesting trends and developments about that technology. So it is no surprise (at least to us) that as DCIG prepares to release its DCIG 2014 Enterprise Midrange Array Buyer’s Guide, it observed a number of interesting data points about enterprise midrange arrays. As DCIG looks forward to releasing this Buyer’s Guide, we wanted to share some of the observations and insights we gained while preparing it, as well as why we reached some of the conclusions that we did.
The release of the refreshed DCIG 2014 Enterprise Midrange Array Buyer’s Guide is rapidly approaching. As that date approaches, we have been evaluating and reviewing the data on the current crop of midrange arrays that will be included in the published Buyer’s Guide (information on over 50 models) as well as the models that will be included in DCIG’s online, cloud-based Interactive Buyer’s Guide (over 100 models). Here is a peek at some of what we are finding out about these models with regard to their ability to deliver on data center automation, VMware integration and flash memory support.
In the fast-paced, ever-changing world of virtualization, the ability for a storage array to deliver performance at exactly the right time is essential. Unfortunately, most tiered storage systems are poorly equipped to respond to these new dynamics. This is where hybrid storage arrays come into play. In this third installment of my interview series with Rob Commins, VP of Marketing at Tegile Systems, we discuss the practical applications of storage and data movement in a virtualized world, how storage tiering falls short of consumer requirements and why Tegile’s hybrid storage is so well-equipped to meet them.
DCIG is pleased to announce the availability of its DCIG 2013 High Availability and Clustering Software Buyer’s Guide, which weights, scores and ranks over 60 features on 13 software solutions from 10 software providers. This Buyer’s Guide provides the critical information that organizations of all sizes need when selecting high availability (HA) and clustering software for applications running in their physical or virtual environments.
As many new and existing vendors (Scale Computing, SimpliVity, Pivot3, Nutanix) come out with these “Datacenter (DC) in a Box” and “Compute in a Can” types of solutions, it is worth noting that they are not only for SMBs but are also solutions that enterprise shops should consider.
There is a tendency among technology providers to pooh-pooh the virtualization needs of small and midsized businesses and focus only on the needs of the “really big enterprises.” However, when one considers that the 900,000+ companies with 20-500 employees in Canada, the UK and the US are less than 30% virtualized, a tremendous opportunity exists for the right technology provider to meet their specific needs.
IT staff in midsized organizations face a peculiar challenge: they are expected to be masters of the technology in use at the organization while also staying up to speed on all internal business initiatives. To accomplish this twin feat, they need a new type of product that takes the best technologies available today, packages them as a single SKU, and makes them easy to install and manage.