When it comes to the mix of data protection challenges that exist within enterprises today, companies would love to identify a single product they can deploy to solve them all. I hate to be the bearer of bad news, but that single-product solution does not yet exist. That said, enterprises will find a steadily improving ecosystem of products that increasingly work well together to address these challenges, with HPE at the forefront of putting up a big tent that brings these products together and delivers them as a single solution.
No business – and I mean no business – regardless of its size ever wants to experience an outage for any reason or duration. However, completely avoiding outages means spending money and, in most cases, a lot of money. That is why, when someone shared with me earlier this week that one of their clients had put in place a solution that keeps downtime to what appears as a ‘glitch’ to end-users for a nominal cost, it struck a chord with me.
At the end of the year people naturally reflect on the events of the past year and look forward to the new. I am no different. As I reflect on the past year and look ahead at how IT infrastructures within organizations have changed and will change, 2017 stands out as transformative as any year in the past decade, if not the past 50 years. While that may sound presumptuous, 2017 seems to be the year that marks the tipping point in how organizations will build out and protect their infrastructures going forward.
Vendors first started bandying about the phrase “cloud data management” a year or so ago. While that phrase caught my attention, specifics as to what one should expect when acquiring a “cloud data management” solution remained nebulous at best. Fast forward to this week’s Veritas Vision 2017, where I finally encountered a vendor providing meaningful details as to what cloud data management encompasses while simultaneously performing a 180 behind the scenes.
Organizations have come to the realization that using disk as a backup storage target does more than simply solve backup problems. It creates entirely new possibilities for recovery. But as they recognize these new opportunities, they also see the need for backup solutions that offer them new options for application availability and recoverability backed by ease of management. The latest DataPlatform 4.0 release from Cohesity moves organizations closer to this ideal.
Usually when I talk to backup and system administrators, they willingly talk about how great a product installation was. But it then becomes almost impossible to find anyone who wants to comment on what life is like after their backup appliance is installed. This blog entry represents a bit of an anomaly in that someone willingly pulled back the curtain on his experience after the appliance was installed. In this third installment in my interview series, system architect Fidel Michieli describes how the implementation of Cohesity went in his environment and how Cohesity responded to issues that arose.
Evaluating product features, comparing prices, and doing proofs of concept are important steps in the process of adopting almost any new product. But once one completes those steps, the time arrives to roll the product out and implement it. In this second installment of my interview series with system architect Fidel Michieli, he shares how his company gained a comfort level with Cohesity for backup and disaster recovery (DR) and how broadly it decided to deploy the product in its primary and secondary data centers.
Every year at VMworld I have conversations that broaden my understanding and appreciation for new products on the market. This year was no exception, as I had the opportunity to talk at length with Fidel Michieli, a system architect at a SaaS provider, who shared his experiences with backup and recovery and how he came to choose Cohesity. In this first installment in my interview series with Fidel, he shares the challenges that his company was facing with its existing backup configuration as well as his struggles in identifying a backup solution that scaled to meet his dynamically changing and growing environment.
Every now and then I hear rumors in the marketplace that the only backup software product Dell puts any investment into is Dell Data Protection | Rapid Recovery, while it lets NetVault and vRanger wither on the vine. Nothing could be further from the truth. In this third and final part of my interview series with Michael Grant, director of data protection product marketing for Dell’s systems and information management group, he refutes those rumors and illustrates how both the NetVault and vRanger products are alive and kicking within Dell’s software portfolio.
Organizations of all sizes now look to host some or all of their applications with cloud hosting providers, and for good reason. Yet organizations should not assume all cloud hosting providers are created equal. If anything, small and midsized enterprises (SMEs) may be particularly at risk, and may even find themselves unnecessarily exposed to unexpected outages or extended periods of downtime, if they do not carefully choose their cloud hosting provider.
Viewing hybrid cloud backup appliances strictly in the context of “backup and recovery” is a mindset that organizations must strive to overcome. While these appliances certainly fulfill this traditional role, new use cases are constantly emerging for them. Hybrid cloud backup appliances have now matured to the point where organizations may use them in multiple roles besides just backup.
Organizations have long wanted to experience the thrills of non-disruptive backups and instant application recoveries. Yet the solutions delivered to date have largely been the exact opposite, offering plenty of unwanted backup pain with few of the recovery thrills that organizations truly desire. The new Dell DL4300 Backup and Recovery Appliance successfully takes the pain out of daily backup and puts the right kind of thrills into the backup and recovery experience.
In this 7th part of my interview series with Brett Roscoe, General Manager for Dell Software, we take an in-depth look at Dell’s data protection portfolio.
New technology always sounds great on the surface. However, the ramifications of implementing and then managing it can be daunting, intimidating, or both. Yet in the case of next generation backup and recovery tools, the improvements they provide over traditional backup can be so dramatic that NOT adopting and implementing them is worse than trying to make existing backup software work in today’s virtualized, real-time environments. In this second installment of my interview series with Dell Software’s General Manager of Data Protection, Brett Roscoe, we discuss why it is imperative that organizations move ahead with next generation backup and recovery tools.
Matt Urmston, StorageCraft’s Chief Evangelist and Director of Product Management, has worked in a variety of roles in backup, archiving, data recovery and high availability. In this third blog entry of this interview series, Matt emphasizes that StorageCraft’s value is in the recovery process–getting systems back online quickly and efficiently, and having that work every time.
There is backup and then there is backup. To meet their backup and recovery needs, today’s organizations need to verify that their selected backup appliance includes the features needed to protect their environment today and positions them to meet their needs into the foreseeable future. In this third installment of DCIG’s interview with STORServer President Bill Smoldt, he describes the new must-have features that backup appliances must offer.
Companies all want more reliable backup and recovery, with short recovery times when things go awry. In part II of this interview series with StorageCraft’s Chief Evangelist Matt Urmston, we expand on how StorageCraft uses its ImageManager and HeadStart Restore technology to provide a full DR solution that can offer recovery in as little as five minutes, and also how ShadowProtect performs equally well in physical and virtual environments.
The one screen that no system admin ever wants to see is the dreaded blue screen of death (BSOD), especially when doing a recovery. Yet when recovering an application on a different hardware platform, BSODs become a distinct possibility. In this first installment of DCIG’s executive interview with StorageCraft’s Chief Evangelist, Matt Urmston, he explains the features that ShadowProtect offers to minimize or even eliminate the possibility of users encountering BSODs when conducting a recovery.
VMware recently announced the enhancement of its VMware vSphere Data Protection (VDP) Advanced product at the European edition of VMworld. The features and developments included in the 5.5 release decisively provide a robust backup and recovery package for SMBs, both on the high and low end, while becoming a viable alternative for enterprises looking to protect remote datacenters and office locations.
DCIG is pleased to announce the availability of its DCIG 2013 High Availability and Clustering Software Buyer’s Guide, which weights, scores and ranks over 60 features on 13 different software solutions from 10 different software providers. This Buyer’s Guide provides the critical information that organizations of all sizes need when selecting high availability (HA) and clustering software for applications running in their physical or virtual environments.