During the recent HP Deep Dive Analyst Event in its Fremont, CA, offices, HP shared some notable insights into the percentage of backup jobs that complete successfully (and unsuccessfully) within end-user organizations. Among its observations from the anonymized data gathered from hundreds of backup assessments at end-user organizations of all sizes, HP found that over 60% of them had backup job success rates of 98% or lower, with 12% of organizations showing backup success rates of lower than 90%. Yet what is more noteworthy is that, through its use of Big Data analytics, HP has identified large backups (those that take more than 12 hours to complete) as being the primary contributor to the backup headaches that organizations still experience.
Category Archives: DCIG Sponsored Analysis
The closer any new solution comes to being non-disruptively introduced into existing organizational backup infrastructures, the greater the odds that the solution will succeed and be adopted more broadly. By Dell including FIPS 140-2 compliant 256-bit AES encryption and VTL features as part of its 3.2 OS release for its existing and new DR series of backup appliances at no charge, organizations have new options to introduce the DR Series appliances without disrupting their existing backup processes.
Scalable. Reliable. Robust. Well performing. Tightly integrated with hypervisors such as Microsoft Hyper-V and VMware ESXi. These attributes are what every enterprise expects production storage arrays to possess and deliver. But as enterprises grow their infrastructure, they need to manage more storage arrays with the same number of IT staff or fewer. This requirement moves storage array manageability center stage, which plays directly into the strengths of HP 3PAR StoreServ storage arrays and the HP 3PAR StoreServ Management Console (SSMC).
On March 17, 2015, the Storage Performance Council (SPC) updated its “Top Ten” list of SPC-2 results, which includes performance metrics going back almost three (3) years to May 2012. Noteworthy in these updated results is that the three storage arrays ranked at the top are, in order, a high-end, mainframe-centric, monolithic storage array (the HP XP7, OEMed from Hitachi), an all-flash storage array (the K2, from startup Kaminario) and a hybrid storage array (the Oracle ZFS Storage ZS4-4 Appliance). Making these performance results particularly interesting is that the hybrid storage array, the Oracle ZFS Storage ZS4-4 Appliance, can essentially go toe-to-toe from a performance perspective with both the million dollar HP XP7 and Kaminario K2 arrays and do so at approximately half of their cost.
Features such as automated storage tiering and storage domains on today’s enterprise storage arrays go a long way toward making it feasible for organizations to successfully host multiple applications with different performance and priority requirements on a single array. However, prioritizing the order in which data and I/Os are tiered is an entirely different matter, as organizations typically want the data and I/Os associated with their mission- and business-critical applications serviced ahead of those of lower-priority applications. This is where the Quality of Service (QoS) Plus feature found on the Oracle FS1 comes into play, as it does more than provide the “brains” behind its auto-tiering feature. It also re-prioritizes and re-orders application I/O according to each application’s business value to the enterprise.
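To make the re-ordering idea concrete, here is a minimal sketch of servicing I/O requests by business priority rather than strict arrival order. The priority tiers, application names, and scheduler class are invented for illustration; they do not reflect the actual Oracle FS1 QoS Plus implementation.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class IORequest:
    priority: int                    # lower number = higher business value
    seq: int                         # arrival order breaks ties within a tier
    app: str = field(compare=False)
    block: int = field(compare=False)

class QoSScheduler:
    """Toy scheduler: highest-business-value I/O is always serviced first."""
    def __init__(self):
        self._queue = []
        self._seq = 0

    def submit(self, app, priority, block):
        heapq.heappush(self._queue, IORequest(priority, self._seq, app, block))
        self._seq += 1

    def next_io(self):
        return heapq.heappop(self._queue)

sched = QoSScheduler()
sched.submit("archive", priority=3, block=900)   # low-priority application
sched.submit("oltp",    priority=1, block=17)    # mission-critical tier
sched.submit("reports", priority=2, block=404)

order = [sched.next_io().app for _ in range(3)]
print(order)  # mission-critical I/O jumps ahead of earlier arrivals
```

Even though the archive request arrived first, the mission-critical OLTP request is serviced ahead of it, which is the essence of re-ordering I/O by business value.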
Today backup and recovery looks almost nothing like it did 10 years ago. But as one looks at all of the changes still going on in backup and recovery, one can only guess what backup and recovery might look like in another 5-10 years. In this ninth and final installment of my interview series with Brett Roscoe, General Manager, Data Protection for Dell Software, Brett provides some insight into where he sees backup and recovery going over the next decade. Jerome: There is a lot of excitement out there right now around data protection and how much backup and recovery has changed in the last 5 – 10 years. To a certain degree, it does not even look like it did 10 years ago. It makes me wonder what it is going to look like in 5 or 10 more years in terms of what new technologies are going to come to market or how they are going to take advantage of…
Dell has brought together its various data protection products into one suite to make it easier to address multiple backup challenges with a single solution.
Think “Dell” and you may think “PCs,” “servers,” or, even more broadly, “computer hardware.” If so, you are missing out on one of the biggest transformations going on among technology providers today as, over the last 5+ years, Dell has acquired multiple software companies and is using that intellectual property (IP) to drive its internal turnaround. In this sixth installment of my interview series with Brett Roscoe, General Manager, Data Protection for Dell Software, we discuss how these software acquisitions are fueling Dell’s transformation from a hardware provider into a solutions provider.
Data protection has evolved well beyond the point where one can back up and recover data with once-a-day backups. Continuous data protection, array-based snapshots, asynchronous replication, high availability, disaster recovery, backup and recovery in the cloud and long-term backup retention are now all part of managing backup. However, the real question becomes, “Can one product even manage all of these different facets of backup and recovery? Or should a backup solution even try to accomplish this feat?” In this fifth installment of my interview series with Brett Roscoe, General Manager, Data Protection for Dell Software, we discuss this very important question of whether one backup product can do it all in today’s data center.
Hybrid storage arrays, which dynamically place data in storage pools that combine flash memory and HDDs, are rapidly expanding their market share in the enterprise space. These arrays use the latest generation of hardware – including multi-core CPUs and DRAM and flash caches – to offer high levels of performance and inline data optimization. However, it is the Oracle ZS4-4’s underlying architecture and its unique ability to integrate with Oracle Database 12c that make it a superior storage platform to accelerate Oracle Database performance and reduce storage capacity requirements.
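The dynamic placement that hybrid arrays perform can be sketched as a tiny promote-on-access, demote-on-pressure policy. The tier sizes, block names, and LRU-style eviction below are invented for illustration; real hybrid arrays use far more sophisticated heat-map logic than this toy model.

```python
from collections import OrderedDict

class HybridTier:
    """Toy hybrid placement: hot blocks live on flash, cold blocks on HDD."""
    def __init__(self, flash_capacity=2):
        self.capacity = flash_capacity
        self.flash = OrderedDict()   # hot blocks, most recently used last
        self.hdd = {}                # cold blocks

    def write(self, block_id, data):
        # New data lands on HDD until access patterns justify promotion.
        self.hdd[block_id] = data

    def read(self, block_id):
        if block_id in self.flash:
            self.flash.move_to_end(block_id)       # refresh recency
            return ("flash", self.flash[block_id])
        data = self.hdd.pop(block_id)
        self.flash[block_id] = data                # promote on access
        if len(self.flash) > self.capacity:
            cold_id, cold = self.flash.popitem(last=False)
            self.hdd[cold_id] = cold               # demote least-recent block
        return ("hdd", data)

tier = HybridTier(flash_capacity=2)
for blk, val in [("a", 1), ("b", 2), ("c", 3)]:
    tier.write(blk, val)
tier.read("a"); tier.read("b"); tier.read("c")   # "a" gets demoted back to HDD
print(tier.read("a"))  # ('hdd', 1) — served from HDD, then re-promoted
```

The point of the sketch is that placement is continuous and automatic: every read can shift where a block lives, so the working set gravitates toward flash without administrator intervention.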
There is a magic moment associated with the sales process of almost any technology where the individual looking to make an acquisition has an “Aha!” moment, indicating they grasp the value of the technology and how it can help them move their business forward. In this fourth installment of my interview series with Dell Software’s General Manager, Data Protection, Brett Roscoe, we discuss how the virtual standby feature in the Dell DL integrated recovery appliances often leads to this “Aha!” moment.
Deriving value from the plethora of unstructured data created by today’s multiple sources of Big Data hinges on analyzing and acting on it in real time. To do so, enterprises must employ a solution that analyzes Big Data streams as they flow in. Using TIBCO Software’s Event Processing platform, enterprises can process Big Data streams while they are still in motion, providing real-time operational intelligence so they may take the appropriate action while the action still has meaningful value.
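The core idea of processing a stream "in motion" is to evaluate each event as it arrives rather than batch-analyzing data at rest. The sketch below illustrates that pattern with an invented sensor feed, threshold, and alert handler; none of it reflects TIBCO's actual Event Processing APIs.

```python
def alert(event):
    """Hypothetical immediate action taken while the event still has value."""
    print(f"ALERT: {event['sensor']} reading {event['value']}")

def process_stream(events, threshold=100):
    """Evaluate each event the moment it arrives; the stream may be unbounded."""
    flagged = []
    for event in events:                 # events can be a generator — the
        if event["value"] > threshold:   # stream is never held fully in memory
            alert(event)                 # act now, not after a nightly batch
            flagged.append(event["sensor"])
    return flagged

stream = iter([
    {"sensor": "pump-1", "value": 42},
    {"sensor": "pump-2", "value": 137},  # exceeds threshold -> immediate action
    {"sensor": "pump-3", "value": 99},
])
print(process_stream(stream))  # ['pump-2']
```

Because the loop acts inside the iteration, the alert for pump-2 fires before pump-3 has even been read, which is the operational difference between stream processing and after-the-fact batch analytics.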
There are so many options available in today’s next generation of backup and recovery tools that sometimes it can be tough to prioritize which features to implement. In this third installment of my interview series with Dell Software’s General Manager, Data Protection, Brett Roscoe, we discuss four (4) best practices that organizations should prioritize as they implement next generation backup and recovery tools.
Physical, purpose-built deduplicating backup appliances have found their way into many enterprise data centers as they expedite installation and simplify ongoing management of backup data. However, there is a growing business case for virtual appliances that offer the benefits of deduplication without the associated hardware costs. To determine when and if a virtual appliance is the correct choice, there are key factors that enterprises must evaluate to arrive at the right decision for a specific office or environment.
Perhaps nowhere does the complexity of the IT infrastructure within today’s organizations come more clearly into focus than when viewed from the perspective of data protection. Backup and recovery software sees firsthand all of the applications and operating systems in an enterprise’s environment. Yet, at the same time, it is expected to account for this complexity by centralizing management, holding the line on costs, and simplifying these tasks even as it meets heightened end-user demands for faster backups and recoveries. To break through this complexity, there are three tips that any organization can follow to help both accelerate and simplify the protection and recovery of data in their environment.
There is a divergence occurring right now in data storage solutions. On one hand, a number of storage providers seek to deliver highly differentiated storage solutions that work with a broad set of applications and operating systems. On the other, a few providers focus on delivering a storage solution that tightly integrates with one or more applications to deliver unparalleled levels of application performance and ease of management. The latest Oracle ZFS Storage Appliance ZS3 Series with its new OS8.2 provides the best of what both of these categories of storage systems currently have to offer to deliver a storage platform that truly stands apart.
In this final blog entry from our interview with Nimbus Data CEO and Founder Thomas Isakovich, we discuss his company’s latest product, the Gemini X-series. We explore the role of the Flash Director and how the Gemini X-series appeals to enterprises as well as cloud service providers.
In this second blog entry from our interview with Nimbus Data CEO and Founder Thomas Isakovich, we discuss microsecond latencies and how the recently announced Gemini X-series scale-out all-flash platform performs against the competition.
In 2014, high-density flash memory storage such as the 4TB Viking Technology αlpha SSD will accelerate the flash-based disruption of the storage industry and of the data center. Technology providers that engage in a fresh high-density flash-storage-enabled rethinking of their products will empower savvy data center architects to substantially improve the performance, capacity and efficiency of their data centers. Businesses will benefit by reducing the cost of running their IT infrastructures while increasing their capacity to serve customers and generate profits.