In 2014, high-density flash memory storage such as the 4TB Viking Technology αlpha SSD will accelerate the flash-based disruption of the storage industry and of the data center. Technology providers that engage in a fresh high-density flash-storage-enabled rethinking of their products will empower savvy data center architects to substantially improve the performance, capacity and efficiency of their data centers. Businesses will benefit by reducing the cost of running their IT infrastructures while increasing their capacity to serve customers and generate profits.
Providing high levels of capacity is only relevant if a storage array can also deliver high levels of performance. The number of CPU cores, the amount of DRAM and the size of the flash cache are the key hardware components that most heavily influence the performance of a hybrid storage array. In this second blog entry in my series examining the Oracle ZS3 Series storage arrays, I examine how its performance compares to that of other leading enterprise storage arrays using published performance benchmarks.
The key for many enterprises today is to identify a storage provider that delivers the best of what next generation hybrid storage arrays have to offer. However, technology alone is not enough for enterprise organizations. This storage provider must also satisfy the enterprise’s internal requirements for financial stability and long-term viability as well as deliver enterprise-class technical service and support.
Flash memory technology can deliver transformative application performance improvements that lead to results that matter to business—like faster decisions and the ability to serve more customers more quickly. But the cost of flash memory arrays and the technical know-how required to integrate them into the data center have thus far put them out of reach of many small and midsize enterprises (SMEs).
Hybrid storage arrays utilize dynamic data placement in a storage pool that combines flash memory and HDDs to deliver the exponential improvements in storage performance associated with flash memory arrays at a cost that makes sense to a broader range of organizations. Now HP has introduced a preconfigured hybrid storage appliance specifically designed for SMEs–the HP StoreVirtual 4335–that enables smaller IT departments to deliver the performance boost businesses want while also giving lean IT departments what they need–affordable technology that just works.
The growing importance of software in storage systems was certainly on display at VMworld 2013. I’m not talking about virtualization and the software defined data center, though virtualization is a critical driver of this trend. I am talking about the impact of software on the design of storage systems and how that software delivers capabilities of value to businesses.
Eliminating the hassles and worries of email archive migrations requires that organizations use robust data migration software that is specifically tailored to meet these requirements. Globanet Migrate represents this next generation of email archive migration software. Offered by Globanet, which has used Globanet Migrate to perform thousands of verifiable email archive migrations, it delivers a set of features and functionality not found in any other email archive migration software.
Almost any deduplicating backup appliance can act as a backup target for any backup software, and the ExaGrid EX Series also performs this function. However, ExaGrid’s deduplicating backup appliances set themselves apart in three specific ways.
Email archive migrations bring into play considerations that are unique among data migrations. These concerns dictate that organizations use robust data migration software specifically tailored to the migration requirements of email archives. In this second installment of my 3-part blog series on email archive migrations, I examine the five specific traits that email archive migration software must possess to successfully and securely move and/or consolidate existing archives into a new email archive.
The challenge every company faces is identifying the best products to deliver on these established requirements while still delivering the features needed to power tomorrow’s IT infrastructures. To assist in this task, DCIG Buyer’s Guides identify, weight and score product features to help organizations select products that deliver on these competing requirements. When DCIG turned its attention to midrange deduplicating backup appliances, it identified the ExaGrid EX Series of appliances as the best set of solutions for midsize enterprises seeking to meet their next generation IT infrastructure priorities while continuing to satisfy their age-old requirements.
Anyone responsible for performing any type of data migration knows that they can be complex and time consuming. Yet even in the world of data migrations, there are data migrations and then there are Data Migrations.
As organizations transform their IT infrastructures to introduce scale-out solutions in support of their virtualization initiatives, many also find that their traditional means of doing backup break. First Technology was no different. However, as a service provider whose clients expected backups to work every day, all of the time, it did not have the luxury of watching its traditional means of backup break first and then spending time looking for, testing and implementing a new solution. Better backup had to be part of its data center transformation from Day 1.
First Technology’s Hosting Manager, Darryl Glass, realized that its business had fundamentally changed. First Technology could no longer cobble together piecemeal technology solutions or spend time troubleshooting backend support issues. It had to focus on delivering technology solutions to new clients, knowing that its data center could scale to meet these demands and then support them. To make this change from cobbled-together technologies to a cost-effective and supported scale-out solution, Glass recognized that his first step involved identifying the right partner with the right suite of solutions to meet his company’s needs.
Last week I began to explore the benefits of VM-aware storage by discussing how it cuts the risk of virtualizing critical applications. In this article we unpack another benefit of VM-aware storage: how it enables IT departments to become more responsive to the business by substantially reducing the amount of staff time that must be dedicated to dealing with storage.
The many business benefits of virtualizing servers include cost savings through consolidation, improved uptime/availability, improved disaster recovery/business continuity capabilities, and faster development/testing/deployment of application enhancements. In light of these benefits, virtualizing critical applications would seem to be an obvious thing to do. Yet many still hesitate because of the risk that applications will suffer performance problems and that these problems will be more difficult to troubleshoot and resolve.
Many businesses are discovering that the traditional storage architecture–much of it dating from the 1980s–is unsuited in multiple ways to the demands of today’s virtualized and consolidated data center. The virtualized data center puts not just more demand, but new kinds of demand, on the storage system. Tegile designed the Zebi all-flash and hybrid storage arrays specifically to address these demands and to provide the return that businesses need on their IT infrastructure investments.
Many IT departments would welcome an opportunity to deliver more performance from their existing servers, especially if it could be done without being forced to rework existing storage setups. SATADIMM from Viking Technology provides exactly this opportunity by packaging an SSD in a DDR3 DIMM form factor. Since most servers have open DIMM slots and unused SATA headers, SATADIMM provides a simple, inexpensive and non-disruptive way to add SSDs to an existing server.
O’Reilly School of Technology does what many organizations now do when backing up its production data daily: it uses array-based snapshots on its NAS filer. However, its internal policies call for it to copy each set of weekly or monthly array-based snapshots to another storage medium (disk or tape) for long-term data retention and offsite protection.
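As a rough sketch of what that copy-out step can look like in practice (the paths, the snapshot naming scheme and the copy mechanism here are all hypothetical; the blog does not describe O’Reilly’s actual tooling), a scheduled script might mirror each weekly snapshot directory to a retention target, skipping any set already copied:

```python
import shutil
from pathlib import Path

# Hypothetical locations -- real NAS snapshot paths vary by vendor.
SNAPSHOT_ROOT = Path("/mnt/filer/.snapshots")  # filer-exposed snapshot dirs
ARCHIVE_ROOT = Path("/mnt/retention")          # disk target for long-term copies


def copy_weekly_snapshots(snapshot_root: Path, archive_root: Path) -> list[str]:
    """Copy any weekly snapshot not yet archived; return the names copied."""
    copied = []
    for snap in sorted(snapshot_root.glob("weekly-*")):
        dest = archive_root / snap.name
        if dest.exists():
            continue  # this snapshot set was already retained
        shutil.copytree(snap, dest)
        copied.append(snap.name)
    return copied
```

Because the script skips snapshot sets that already exist at the target, it can run on any schedule and only ever moves the new weekly or monthly sets.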
Ready or not, here comes the cloud and, for many organizations, backup to the cloud is squarely in their sights. However backup to the cloud does not mean they should abandon the best of what today’s localized backup processes have to offer. By instead taking a hybrid approach to cloud backup such as what Western Digital (WD®) offers, they can get on a secure path to storing data in the cloud without breaking either their backup processes or their budget.
Ask any business owner or IT administrator how much storage they will need in a few years and they will likely hem and haw trying to come up with a reasonable answer. Ask them to share their true feelings and they will in all likelihood respond, “I don’t know.” The good news is that storage architectures are now available that take the risk out of this uncertainty and that do not require yet another expensive, disruptive and risky forklift upgrade.
The volume of Electronically Stored Information (ESI) for most enterprises continues to grow with no end in sight. Storing and managing all ESI is overwhelming from an operational standpoint, increases legal liability and is cost prohibitive. Therefore, the question for the enterprise is what ESI it should keep and what ESI it can legally dispose of.
Historically the focus was on data retention policy; in 2012 and beyond, the focus is on defensible data destruction. Enterprises are no longer legally able to simply destroy whatever ESI they no longer want to keep.
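To make the keep-or-dispose decision concrete, here is a toy sketch of the core rule behind defensible destruction. The record categories and retention periods below are entirely hypothetical; real retention schedules come from counsel and applicable regulation, and a record under legal hold can never be destroyed, no matter its age:

```python
from datetime import date, timedelta

# Hypothetical retention schedule in days -- real schedules are set by
# counsel and regulation and vary by jurisdiction and record type.
RETENTION_DAYS = {"email": 365 * 3, "invoice": 365 * 7, "marketing": 365}


def may_dispose(record_type: str, created: date,
                on_legal_hold: bool, today: date) -> bool:
    """A record may be disposed of only if its retention period has
    elapsed and it is not subject to a legal hold."""
    if on_legal_hold:
        return False
    retention = timedelta(days=RETENTION_DAYS[record_type])
    return today - created >= retention
```

The point of the sketch is the order of the checks: the legal-hold test always runs first, which is what makes the resulting destruction defensible rather than merely convenient.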