Monthly Archives: February 2009

Doing More with Less in a Slow Economy

By | Uncategorized | No Comments

Like everyone else these days, I’ve been watching the economic news with a feeling of dread combined with a strong sense of outrage. How did those idiots – and by “those idiots” I mean “them” and not “us” – screw this all up for the rest of us? Luckily, the talking heads on the news shows all assure us things will get better – just as soon as we hit absolute rock bottom.

In the meantime, life goes on for all of those who have to keep the lights on and the IT systems running. The only difference is that we’re now asked to do more with less, or in some cases the same with a lot less. But all is not lost. There are solutions that can help businesses do more without breaking the already reduced budget.

The old mantra was ROI – return on investment. Executives needed to see that the cost of new IT purchases was justified by what they could return. But that was in the good times. These days, the goal is ROA – return on assets: show that the business can make better use of the data center equipment it already has. Virtualization is the first solution that comes to mind. Server virtualization can make full use of compute power by hosting multiple application servers in a single box. Storage virtualization offers similar efficiencies through consolidation and thin provisioning to reduce the cases of under-utilized equipment. Why purchase another 10-20TB of storage when there may already be 20TB of wasted space assigned to servers that are not using it? And while this is not a new idea, the goal should be to achieve these efficiencies with the existing storage subsystems the company already owns, not to go out and buy a new storage system that provides this. Remember, the goal is to do more with the assets you already have.

The way to do this is with software. Software that works with any type of storage. Software that works with your existing Fibre Channel SAN (if you have one) or with your existing LAN (if you don’t). Software that works with your servers – whether they are physical or virtual. But most importantly, software that works with the applications your business relies on. Because in the end, the whole reason for all the hardware, and now the software, is to run these applications.

Beyond storage virtualization, this software would also provide other advanced services that let users do more with less. Data protection services let users keep multiple versions of important application data without requiring special hardware, using the existing storage in the most efficient manner. Rather than keeping multiple full copies of important data, it would use space-efficient delta snapshots. This means that users could have 20 to 30 delta snapshots in less space than is currently used to keep 2 or 3 full copies. More copies in less space. And more efficient copies can also mean more frequent copies.

Data protection with remote replication gives this software the ability to provide disaster recovery, again without requiring special hardware to link the two sites. Having these features in software rather than in hardware means users are free to select different storage subsystems at the different sites. So if the business does need to purchase any new storage, it is free to choose the one that offers the best price rather than being forced to purchase one that’s compatible with the storage it already uses. Advanced software also means the replication features are optimized to analyze the data and ensure that the amount transmitted is the minimum needed. This lets replication operate over long distances using the least amount of bandwidth, allowing the business to use existing WAN connections rather than being forced to upgrade to larger, more expensive ones. Again, do more with less.

So in these tough economic times, the way to do more with less is to use the right software rather than investing in more hardware. At least in my humble opinion. What do you think?
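To put rough numbers on the snapshot math above, here is a minimal back-of-the-envelope sketch in Python. The 1 TB volume size and the 5% change rate per snapshot interval are assumptions for illustration, not figures from any particular product.

```python
# Back-of-the-envelope comparison of full copies vs. delta snapshots.
volume_gb = 1024            # size of the protected volume in GB (assumption)
change_rate = 0.05          # fraction of the volume rewritten between snapshots (assumption)

full_copies = 3
full_copy_space = full_copies * volume_gb                  # each full copy duplicates the entire volume

delta_snapshots = 30
delta_space = delta_snapshots * volume_gb * change_rate    # each delta keeps only the changed blocks

print(f"{full_copies} full copies:      {full_copy_space:,.0f} GB")
print(f"{delta_snapshots} delta snapshots: {delta_space:,.0f} GB")
```

With those assumptions, 30 delta snapshots fit in roughly half the space of 3 full copies, which is the point of the argument above.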

Read More

The Next 800 Pound Gorilla in Small Business Networked Storage? Interview with Iomega President Jonathan Huberman Part 1 of 3

By | Iomega Corporation | No Comments

Direct-attached storage still predominates in small businesses, but as networked storage becomes more affordable and its management becomes simpler, network hard drives and network attached storage (NAS) appliances are poised to become much more pervasive. Recently Jerome Wendt, DCIG’s Lead Analyst and President, met with Jonathan Huberman, President of Iomega as well as of the Consumer and Small Business Products Division of EMC, to discuss Iomega’s growing role in networked storage for small businesses and other similarly sized work groups. In this first of a 3-part series, Jonathan examines current trends in networked storage for small businesses, how Iomega is differentiating itself from competitors and what advantages being a part of EMC brings to Iomega.

Read More

Transparency Requirements of New Electronic Health Records Present a Huge Challenge to Health Care Industry

By | Estorian Looking Glass | No Comments

Over the past year there has been a lot of talk and speculation about Electronic Health Records (EHR). The topic started making headlines last year as President Obama and Senator McCain sparred over how best to fix health care, with EHR touted as the single best way to control the ever-increasing costs of medical treatment. Although it remains to be seen if this is actually the case, the recent stimulus bill passed by Congress on February 13th, 2009, has ensured EHR projects will be funded.

Read More

New Tape Needs Call for Refreshes, not Overhauls, to Tape Libraries

By | Overland Storage | No Comments

In looking at the tape market and what tape libraries need to provide to meet today’s organizational needs, it is refreshes, not overhauls, that are required. Because tape libraries are becoming a secondary, as opposed to a primary, backup target in customer environments, tape library providers need to re-prioritize and even scale back the number of changes they make; if users do not want or use specific features, they will not pay for them.

Read More

Should I Archive Today? Tick…tick…tick…

By | Uncategorized | No Comments

Data – do I need to save it? For how long do I need to save it? Do I need to save it in an immutable format? Do I have to comply with an existing regulatory requirement? Will the new Obama administration create new regulations with which I’ll have to comply? Will my company be around next year or will we be acquired by someone else? How could the new company’s corporate policies change my retention policies?

These are the types of questions we constantly hear organizations struggle with. They ultimately lead to archiving decisions being delayed for months or years. Meanwhile, the clock keeps ticking towards that inevitable lawsuit (think of the implications of the Federal Rules of Civil Procedure). It’s not a matter of if, but when. If you don’t come up with a strategy for tackling current or future policies and procedures, the lack of one could end up costing you more than any damaging evidence would. Of course I’m talking about archiving for compliance purposes in this case.

Is your decision to implement an archive solution being dragged down by compliance efforts? What are you doing to solve the other ticking time bomb in your IT infrastructure?  Is your primary storage at or near full capacity?  Why are you not implementing archiving for storage management purposes today?

This is the struggle many IT organizations face and this is where Permabit is uniquely positioned to solve both problems today, even if you don’t have all the answers.

Let me explain…

Permabit Enterprise Archive is a very flexible, easy-to-deploy and easy-to-manage multi-purpose disk-based archive. Once the grid-based storage system is installed, all storage is presented as a virtual pool. There is no complicated and time-consuming LUN provisioning needed. Simply create a volume and away you go. Now here’s the beauty of it: when you create a volume you can determine whether it’s a Read/Write volume or a Write Once Read Many (WORM) volume. You can have either or both of these intermixed within one storage grid. Now here’s where it gets even better. Do you need WORM? Don’t know? Who cares? Just create the volume as Read/Write. You can always come back later and perform a Convert to WORM action. Or, if it’s for a litigation event, simply take a WORM snapshot of the volume and set the retention time to meet your needs. No new hardware and no additional software needed. It’s all built right in.
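As a conceptual illustration of that workflow, here is a small Python sketch. The class and method names are hypothetical and are not Permabit’s actual interface; the point is simply that a volume can start out read/write and only be locked down when a requirement materializes.

```python
from datetime import datetime, timedelta

class Volume:
    """Toy model of a volume that starts read/write and can be locked later."""
    def __init__(self, name, worm=False):
        self.name = name
        self.worm = worm          # True = Write Once Read Many
        self.snapshots = []

    def convert_to_worm(self):
        # One-way switch: an existing read/write volume becomes immutable.
        self.worm = True

    def worm_snapshot(self, retention_days):
        # Immutable point-in-time copy held until the retention date passes.
        hold_until = datetime.utcnow() + timedelta(days=retention_days)
        self.snapshots.append({"taken": datetime.utcnow(), "hold_until": hold_until})
        return hold_until

# Create the volume read/write because the retention requirement is still unknown...
vol = Volume("finance-archive")
# ...then lock a copy for seven years when a litigation hold arrives.
print("Hold expires:", vol.worm_snapshot(retention_days=7 * 365))
```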

Now, imagine you can have this capability today at a price/performance that far exceeds that of removable media. With the built-in Scalable Data Reduction (SDR), you get the benefits of in-line, sub-file deduplication and compression within the storage volumes, allowing you to archive data from your most expensive tiers of storage (SAN and NAS) to Permabit, the most effective and reliable archive storage available. So, go ahead, what are you waiting for? Start saving your storage dollars today!
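For readers new to the idea, here is a minimal sketch of how sub-file deduplication works in general: split the data into chunks, hash each chunk, and store a chunk only the first time it is seen. It illustrates the technique, not Permabit’s SDR implementation, and the fixed 4 KB chunk size is an assumption.

```python
import hashlib, os

CHUNK_SIZE = 4096   # fixed 4 KB chunks for simplicity (real systems often use variable-size chunks)

def dedupe(streams):
    """Return (logical_bytes, physical_bytes) after chunk-level deduplication."""
    store = {}                       # chunk hash -> chunk payload, stored only once
    logical = physical = 0
    for data in streams:
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            logical += len(chunk)
            if digest not in store:  # only previously unseen chunks consume space
                store[digest] = chunk
                physical += len(chunk)
    return logical, physical

# A 1 MB file and a copy of it with one 4 KB region edited: the second file
# costs only the changed chunk, not another full megabyte.
original = os.urandom(1_000_000)
edited = original[:512_000] + os.urandom(4_096) + original[516_096:]
logical, physical = dedupe([original, edited])
print(f"logical {logical:,} bytes -> physical {physical:,} bytes")
```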

One last note: Permabit Enterprise Archive was awarded the Silver Medal in the SearchStorage.com and Storage Magazine 2008 Storage Product of the Year awards. The ironic thing was that an archive product was a winner in the backup hardware category. Maybe everyone is starting to realize that to solve the backup problem, implementing an archiving solution should be the first step!

The Cost of NOT Keeping Archival and Backup Data on Disk

By | NEC Corporation of America | No Comments

Over the last few months DCIG has spent a fair amount of time researching and documenting specific reasons why tape will not die. Green IT is the one reason we most often hear cited for retaining tape, though new disk-based deduplication and replication technologies, coupled with new disk storage system designs based on grid storage architectures, can offset some of those concerns. So before organizations decide that after 30, 90 or 180 days they should immediately move their archival and backup data, deduplicated or otherwise, from disk to tape just to save money, they should weigh certain intangible savings from an eDiscovery perspective that keeping data on disk provides and that are not always feasible on tape.

Read More

Data Protection Redesign is Top of Mind in 2009: Interview with Symantec SVP Deepak Mohan, Part I of III

By | Symantec | No Comments

Data protection is top of mind with more enterprise organizations today as they look to redesign it. Rapidly changing economic forces, new technologies and steadily growing volumes of data are prompting enterprises to rethink how they can best protect, manage and recover their data by leveraging these new technologies without introducing new people or extraordinary costs. To get Symantec’s take on these new challenges, DCIG lead analyst Jerome Wendt recently met with Deepak Mohan, Symantec’s senior vice president of the Data Protection Group, to discuss these topics.

Read More

Federal Stimulus Bill Clarifies Regulation Of Health Care Industry

By | Estorian Looking Glass | No Comments

If you have followed the news lately it would appear that the media and President Obama feel the economy is firmly entrenched somewhere between disaster and Armageddon, which has framed much of the debate surrounding the stimulus bills in both houses of Congress. When the Senate passed its version of the bill on February 9th, it promised $838 billion for spending projects designed to jump-start the economy. But as with most things in government, there is a lot more in the details than in the headlines. Now that the stimulus bill is out in the open, DCIG has a clearer view of where health care regulation is going and how IT will be affected.

Read More

Strike a Balance with Snapshot Integration

By | Uncategorized | No Comments

In listening to all the market rumblings, I realize there’s more said and written about backup than any other topic. A quick tour of Delicious, where thousands of IT and storage professionals share knowledge and ideas, underscores this backup fixation. Of the 369,960 links tagged “backup,” only 11,274 are tagged “backup+recovery.” I’m worried that the market is so focused on backup that we fail to recognize what’s really important… the “AND RECOVERY” part.

Why is recovery so often overlooked in the endless dialogue on backup? Is it because backups are more top-of-mind since they are performed regularly throughout the day and companies are constantly struggling to meet their operational backup windows? Or, is it because recoveries can be even more complex, costly and cumbersome than most backup and archive operations?

We need to strike a better balance between backup and recovery so IT managers don’t have to choose between meeting shrinking backup windows and supporting multiple recovery points. That’s why we’ve included new features in Simpana 8 that incorporate the capabilities of hardware-based snapshot technologies within a broader data management foundation.

Video Surveillance Moves Closer to Business Ready Solution

By | Overland Storage | No Comments

Video surveillance is shaping up as the next big thing in enterprise security. IP-based cameras from Mobotix and the continued growth of high-capacity network attached storage systems from Overland Storage make it possible for almost any size and type of organization to inexpensively deploy a video surveillance solution. But what was still missing until recently was a comprehensive backend support structure for implementing these solutions and then supporting them long-term.

Read More

The Squeeze is On

By | Uncategorized | No Comments

The 64 Oz. Cherry Coke

Enterprises have been super sized with primary disk for too long. It’s like buying a 64 oz. Cherry Coke at the local 7-Eleven day after day when you really aren’t that thirsty.  Enterprises continue to buy more primary capacity than they need or can afford.

I’ve visited a lot of companies over the last several months and I have yet to meet one employed IT leader who was looking to spend more than they had to for their company’s storage infrastructure.

Just like 7-Eleven with Cherry Coke, the largest storage vendors have a vested interest in continuing to super size customers. EMC, HDS and IBM generate over half of their storage revenue from high-performance (primary) storage, yet less than 20% of the information in a typical enterprise is transactional. And just as a serving of Cherry Coke costs 7-Eleven mere pennies, the cost of disk drives is a small component of the price of primary storage, so the three-letter guys make more money if the enterprise buys more of the high-margin primary storage.

Order the Double Espresso Instead

But, now the squeeze is on primary disk.

[Figure: squeeze diagram]

We are seeing three significant market movements which enable companies to stop buying primary disk and save huge dollars while satisfying all service level and data protection requirements.

Squeeze # 1 – information movement and archiving software is advancing, becoming more friction-free and less costly, and it is being used not only for email but for file shares, databases and other information required for regulatory, strategic or legal reasons. A host of companies make great products in this area, like Atempo, CommVault, IBM, Symantec, and ZL Technologies, not to mention the simple scripting employed by many companies. Also, file virtualization is coming of age with the F5 ARX product. In short, there are many tools to help the enterprise classify, move, access and retain information.

These products make it cheaper, easier and safer for enterprises to move information to lower cost tiers of storage while providing high levels of accessibility.

Squeeze # 2 – the storage “target” tier is now very low cost. By employing data deduplication, compression, thin provisioning, and advanced data protection we are producing the most scalable, most cost-effective storage in the market today. In fact, the effective cost of Permabit Archive storage is less than $1/GB, a fraction of JBOD and often lower than tape, yet Permabit storage provides greater data safety, accessibility and resiliency.

Squeeze # 3 – new Tier 0 storage offers higher performance and is more cost-effective than old-style primary storage. An example of this is BlueArc’s Titan product (Permabit is certified with it). Notably, BlueArc ships with a feature-rich information mover which can automatically tier information based upon rules or usage patterns. That makes Titan a natural to deploy with a Permabit Archive, and the result is higher performance, scalability and resiliency, all at a fraction of the cost of primary storage.

When I say the squeeze is on primary storage, the pressure is coming from two directions.

1.   Below – Many enterprises adopt a simple tiering strategy and move fixed information off primary storage to a Permabit disk-based archive. That means 60-80% of primary capacity is freed up and no further investment is required. The savings range greatly depending on what an organization pays for storage, but I’ve walked into organizations where customers stand to save 70% on storage costs. In a multi-hundred terabyte environment with fixed information growing at 40% per year, that translates to tens of millions of dollars in annual savings (a rough calculator sketch follows this list).

2.    Above – Companies are beginning to move to Tier 0 solutions à la BlueArc today, and looking forward, the adoption of Solid State Disk drives will further increase the pressure. These are higher performing (better, faster, cheaper) than old-style primary storage, and when combined with automatic tiering of fixed or inactive information to a Permabit Enterprise Archive, the enterprise gains efficiency, data resiliency, performance and massive cost savings.
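For anyone who wants to sanity-check that kind of claim against their own environment, here is the rough calculator sketch mentioned above. Every figure in it (capacity, fixed-data share and per-terabyte costs) is an assumption to be replaced with your own numbers; none of them are Permabit prices.

```python
# Rough tiering-savings calculator. All figures below are placeholders.
primary_tb = 400              # deployed primary capacity, in TB (assumption)
fixed_share = 0.70            # fraction of that data that is fixed/inactive (assumption)
primary_cost_per_tb = 15_000  # $/TB/year, fully loaded, for primary storage (assumption)
archive_cost_per_tb = 1_000   # $/TB/year for the archive tier (assumption)

moved_tb = primary_tb * fixed_share
annual_savings = moved_tb * (primary_cost_per_tb - archive_cost_per_tb)
print(f"Moving {moved_tb:.0f} TB off primary saves roughly ${annual_savings:,.0f} per year")
```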

So, when you are thinking about managing storage growth, resist ordering the 64 oz. Cherry Coke and reach for the double espresso instead.

No Rewards for Proactively Detecting Illegal Activity Using eDiscovery Software; But is Presuming Guilt the Next Logical Step?

By | DCIG, Electronic Discovery, eMail Archive, Information Governance, Information Management, Litigation Readiness | 4 Comments

A recent DCIG blog entry called into question the value of Bear Stearns’ selection of Orchestria and its inability to detect the alleged illegal activities of two of its Asset Management portfolio managers. More specifically, it asked why Orchestria did not detect the illegal activities of these individuals and why Bear Stearns did not configure it to monitor for these activities in the first place. The blog posting prompted a comment and phone call from Alan Morley, one of the individuals formerly responsible for implementing and managing Orchestria at Bear Stearns, who explained why monitoring, detecting and preventing this activity is not as easy as it sounds.

Read More

Archiving Helps to Answer the “Do More with Less” Data Center Mandate

By | Permabit Technology Corporation | No Comments

If you happened to attend any recent conferences or trade shows then you know that most of the discussions center on driving costs out of storage environments. In the current yo-yo economy we live in, most IT Directors are looking for new and unique ways to solve their storage dilemma as storage capacity continues to grow. One way enterprise IT organizations are tackling this problem is through deduplication using a disk-based backup solution. Though this is definitely a good approach to tackling data growth and cost savings in the backup space, it does nothing to alleviate the burden of data growth on primary storage since backup solutions do not remove and archive aging production data.

This is where a robust archival system like the Permabit Enterprise Archive can help enterprises. The Enterprise Archive offers a grid-based architecture that organizations can add to existing production environments and start to realize reductions in their primary storage from Day 1. The Enterprise Archive integrates deduplication and compression for storage space savings and offers scalability and openness that are unmatched in the market. Plus, plugging a Permabit system into your storage environment adds value in other ways, which include:

  • Defers or eliminates purchases of primary storage. This ensures not only that data that needs to be archived is placed in its proper location, but also that expensive primary storage is freed up for new and more current data.
  • Provides an open storage target. One of the challenges of storing data on storage systems designed for archival is their propensity to use proprietary interfaces. This requires applications that can recognize these proprietary interfaces before storing and retrieving data from them. Using open interfaces like CIFS or NFS eliminates the need for this custom development and makes archiving an option for almost any application.
  • Embeds robust, sophisticated levels of data protection. Archived data still requires some level of protection. Permabit uses a storage grid architecture that distributes the data across multiple storage nodes so it can withstand the loss of multiple disk drives or even the loss of an entire node without data loss. To protect against natural and site disasters, organizations can use its replication option, which only replicates data that is not already at the target site (see the sketch after this list). This saves valuable dollars in the WAN environment, reduces the amount of archiving storage that needs to be procured and eliminates the need to back up the archived data.
  • Creates snapshots. Enterprise organizations may be required to preserve data in its current state to satisfy legal holds, but don’t want to stop using the archiving system. Using Permabit’s snapshot feature, enterprises can create as many or as few snapshots as they need to help satisfy whatever number of legal holds comes their way without causing disruptions to day-to-day operations.
  • WORM functionality. Write Once Read Many (WORM) enables enterprises to lock down data sets on disk to prevent deletion or modification, as well as provides retention mechanisms at the file, user or group levels.
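On the replication point above, here is a minimal sketch of what deduplication-aware replication means: the source sends the disaster recovery site only the chunks it does not already hold. The helper names are hypothetical, and this is an illustration of the general technique, not Permabit’s wire protocol.

```python
import hashlib

def replicate(chunks, target_store):
    """Send only the chunks whose hashes the target does not already hold."""
    bytes_sent = 0
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        if digest in target_store:
            continue                   # target already has this data; nothing crosses the WAN
        target_store[digest] = chunk   # in reality this is the transfer over the WAN link
        bytes_sent += len(chunk)
    return bytes_sent

dr_site = {}
monday  = [b"A" * 4096, b"B" * 4096]
tuesday = [b"A" * 4096, b"C" * 4096]   # only one chunk is new since Monday
print("Monday bytes sent: ", replicate(monday, dr_site))    # 8192
print("Tuesday bytes sent:", replicate(tuesday, dr_site))   # 4096
```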

The days of enterprises continually managing data and storage the same old way are coming to an end, and these tough economic times are driving organizations to act now to deal with storage growth while keeping costs under control. From a technology and delivery perspective, the Permabit Enterprise Archive meets these new challenges. Better managing production data and the storage it resides on is an initiative Permabit has advocated for years. It’s just that as IT Managers are once again being driven to do more with less, it becomes more critical to better manage production data and not just back it up faster. By using the Permabit Enterprise Archive in lieu of more expensive production storage systems, they can start to do exactly that.

End of the RAID

By | Uncategorized | No Comments

To kick off the new year, we recently released our Storage Predictions for 2009. We’ve received a lot of interest in this list since we released it, and I personally have been asked about prediction number 3, “RAID will Hit a Data Dead End”. Allow me to explain.

For this prediction, we say:

RAID Nears Retirement. As multi-tiered storage continues to evolve, SANs will become more complex, unified networks will emerge, and as newer and larger drive technologies such as 1 TB drives take root, RAID as a data protection technology will become irrelevant. Advanced data protection schemes based on Erasure Coding technology for long term reliable data storage will take hold putting additional pressure on legacy solutions depending on RAID.

RAID is a technology that has served us well, but there are two ways in which it fails to scale going forward. Most importantly, RAID technologies today have serious problems with large capacity drives, like the 1 and 1.5 TB drives shipping now. These problems will only become more pronounced with the 2 TB drives soon to be available.

First, RAID has an issue with the bit error rates on high-capacity drives, a problem I discuss in detail in our video “The Trouble with RAID”. The bit error rate is the rate at which a drive will fail to read a block. These failures are not due to complete spindle failures, but due to the statistical encodings used to store bits into magnetic domains on the drive. Drives don’t incur the penalty of read-after-write to verify the data written, so sometimes they manage to write data that cannot be read later despite the sophisticated error correcting codes used to protect the data on disk.

As I explain in an earlier post, the bit error rate of the drives can be catastrophic for RAID. In a RAID 4 or 5 rebuild it is necessary to read every bit off all the remaining disks. There’s a high probability this may not be possible with high-capacity drives. In RAID 6, the same problem occurs in the event of a double failure.
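To make that probability concrete, here is a small worked example. The 1-in-10^14 unrecoverable read error rate and the seven-drive RAID 5 group of 2 TB disks are assumptions typical of SATA drives of this era, not figures from any particular vendor.

```python
# Probability of hitting at least one unrecoverable read error (URE) while
# reading every surviving drive during a RAID 5 rebuild.
ure_per_bit = 1e-14       # one unreadable event per 10^14 bits read (assumed spec)
drive_tb = 2.0            # capacity of each drive, in TB (assumption)
surviving_drives = 6      # a 7-drive RAID 5 rebuild must read the other 6 in full

bits_to_read = surviving_drives * drive_tb * 1e12 * 8
p_rebuild_hits_ure = 1 - (1 - ure_per_bit) ** bits_to_read
print(f"Chance the rebuild encounters an unreadable block: {p_rebuild_hits_ure:.0%}")
```

With those assumptions the rebuild has roughly a 60% chance of encountering an unreadable block, which is exactly the kind of failure described above.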

This very problem was raised at the recent Gartner Data Center Conference, in “The Enterprise Storage Scenario” by Roger Cox and Dave Russell. RAID is not a technology that is going to survive with higher and higher capacity drives, and enterprises must look to technologies like advanced erasure coding to meet data protection requirements.

Permabit Enterprise Archive protects against this pitfall within our RAIN-EC storage architecture. By recording additional recovery information we can rebuild from up to 8K of unreadable data without having to fail a drive or recover from another location. Even in the event of multiple failures you’re still protected against the bit error rate, something that RAID can’t do.

The second problem RAID faces is increased rebuild times. While drive capacities continue to grow exponentially, drive read performance does not. The read rate is dependent upon drive spindle speed and linear bit density. Large capacity drive spindles aren’t spinning any faster, with all of them in the 5400 or 7200 RPM class. Bit density is going up, but read rates only improve with the square root of the rate at which capacity increases, because capacity gains from increases in two dimensions (around the disk and across it) while the read rate gains from only one.

This means that RAID rebuilds take unacceptably long times on high capacity drives. Consider a rebuild at 25 MB/s for a set of 2 TB drives — this will take more than 22 hours! Can your data be without protection for nearly a full day?

Permabit Enterprise Archive’s RAIN-EC architecture helps here as well. While in a RAID system a group of drives constitutes a set, RAIN-EC distributes data in a more sophisticated manner. The recovery information for data on one drive is spread evenly across all the other drives in the system. This means that in the event of a drive failure all the other drives participate in the reconstruction process, and each drive is only responsible for a small portion of the recovery. Thus, the rebuild rate goes up with each additional drive in a RAIN-EC system. With RAID, adding more drives always makes the rebuild rate go down (or stay the same).
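A toy model shows why that distinction matters. The 25 MB/s per-drive rebuild rate and the drive counts are assumptions, and the code illustrates declustered rebuild in general rather than Permabit’s actual algorithm.

```python
# Toy comparison of rebuild times: one spare drive doing all the work versus
# the work spread evenly across every surviving drive.
DRIVE_TB = 2.0
REBUILD_MB_S = 25.0    # sustained rebuild throughput per drive (assumption)

def rebuild_hours(tb_per_drive, mb_per_s):
    return tb_per_drive * 1e6 / mb_per_s / 3600

# Traditional RAID: the single replacement drive is the bottleneck.
print(f"Rebuild onto one spare drive:      {rebuild_hours(DRIVE_TB, REBUILD_MB_S):5.1f} h")

# Declustered layout: each surviving drive rebuilds only its own slice, in parallel.
for drives in (12, 48, 96):
    slice_tb = DRIVE_TB / (drives - 1)
    print(f"Rebuild shared across {drives:3d} drives: {rebuild_hours(slice_tb, REBUILD_MB_S):5.1f} h")
```

The single-spare case reproduces the 22-hour figure above, while spreading the work across a few dozen drives brings the rebuild down to a couple of hours or less.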

That’s the pressure from the high capacity side, but RAID arrays, at least for disk, have serious pressure from the other side too — Solid State Drives (SSD). SSDs massively outperform low-capacity 10K and 15K RPM drives, and within 18 months they’ll be at an equivalent price. Additionally, STEC tells me that their Zeus SSDs have bit error rates as low as 1 in 10^17, which offers significantly better protection during rebuilds than the equivalent 15K RPM drives.

Given reliability concerns when using high capacity disk drives and the end of the road in view for 15K RPM performance-oriented disk, RAID arrays are being squeezed from both sides. High performance systems will continue to use similar technology on SSD, but archive systems require more advanced technologies for the future, and the future, as always, is sooner than you think.

Dynamic, Agile Infrastructures are Key to Picking Today’s MSPs

By | HP Storage | No Comments

Organizations have learned that the benefits of peace of mind, simplified operations and lower TCO that MSPs can offer are too good to pass up. By taking much of the burden of application maintenance and management off internal IT resources, organizations can focus on more strategic initiatives that will help them respond more quickly to market opportunities and grow the business.

Read More

Measuring Data Change Rates is a Precursor to Creating an Effective DR Strategy

By | Inmage | No Comments

One of the more critical pieces of information that organizations need as they put together a disaster recovery plan is how much data they have in their environment and how quickly it is changing. The reason this information is so important is that without it, organizations often have no way to effectively size how much or what type of capacity they need to protect and recover their production data. In fact, I was astonished at how little information was available about this topic and how few good articles there were on the subject.
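As a starting point, a rough estimate can be scripted in a few lines. The sketch below sums the size of files modified in the last 24 hours under an example path and converts that into the sustained bandwidth a replication-based DR design would need; counting whole files overstates the change for files that are only partially modified, so treat it as an upper bound.

```python
import os, time

def changed_bytes_last_24h(root):
    """Sum the sizes of files modified in the last 24 hours under root."""
    cutoff = time.time() - 24 * 3600
    total = 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue               # file vanished or is unreadable; skip it
            if st.st_mtime >= cutoff:
                total += st.st_size    # whole-file size, so sub-file changes are overstated
    return total

changed = changed_bytes_last_24h("/data")        # example path
mbits_per_sec = changed * 8 / (24 * 3600) / 1e6  # average bandwidth to replicate a day's changes
print(f"~{changed / 1e9:.1f} GB changed in 24h; ~{mbits_per_sec:.1f} Mbit/s sustained to replicate it")
```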

Read More

A New Option for Enterprise Data Protection

By | Asigra Inc | 2 Comments

Enterprise data protection software is experiencing a fundamental shift in terms of what organizations expect it to deliver and the amount of distributed structured and unstructured data that it needs to protect. As recently as a few years ago, the expectations of enterprise organizations were relatively modest – support for most major operating systems, integration with major applications (MS Exchange, Oracle, etc.) and tape library support – as compared to today’s standards. While some of those requirements still hold true today, more has changed than has stayed the same. This is putting a great deal of pressure on data protection products to swiftly evolve.

Read More

Take the Anxiety out of Clustered Server Management

By | Symantec | No Comments

A clustered server environment is only as reliable as the system administrators who maintain it. The challenge they encounter after they configure and deploy the hardware and software that make up a clustered environment is, “How do we maintain it?” Most system administrators leave the configuration alone for fear of disrupting a mission critical application after it is initially deployed. Crucial details such as patches and configuration changes are not completed, due to the nature of the system itself. But what catches organizations off-guard is that at some point down the road, when an event does prompt a failover from one server to another, the failover fails to occur because smaller changes in the environment now preclude it from successfully taking place.

Read More