Categories
Restore

The criticality of RTO and RPO

Frequent readers of this blog know that I am obsessed with data protection in general and data restoration specifically. Obviously these two elements are critical for today's data-intensive businesses, and a multitude of vendors provide solutions to address these challenges. It can be difficult to assess the benefits of a given approach, and the concepts of Recovery Time Objective (RTO) and Recovery Point Objective (RPO) are useful metrics to consider when analyzing different solutions. In this blog entry, I will discuss these two measures and why they are relevant to your organization.

Recovery Time Objective

This is a critical metric for illustrating the risk of potential downtime.  SNIA defines the term as follows:

The maximum acceptable time period required to bring one or more applications and associated data back from an outage to a correct operational state
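To make the two metrics concrete, here is a minimal sketch that computes the achieved recovery point and recovery time from an incident timeline; the timestamps are hypothetical values of my own invention, chosen purely for illustration:

```python
from datetime import datetime

# Hypothetical incident timeline (illustrative only)
last_good_backup = datetime(2010, 5, 10, 2, 0)    # most recent restorable copy
outage_start     = datetime(2010, 5, 10, 14, 30)  # failure occurs
service_restored = datetime(2010, 5, 10, 20, 30)  # applications back online

# Achieved data loss window: compare against your RPO target
data_loss = outage_start - last_good_backup
# Achieved downtime: compare against your RTO target
downtime = service_restored - outage_start

print(f"Data loss window (RPO achieved): {data_loss}")  # 12:30:00
print(f"Downtime (RTO achieved): {downtime}")           # 6:00:00
```

The point of the exercise is simply that a solution is only adequate if the achieved values fall within the objectives your business has set.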

Categories
Restore

Why Recovery Matters: Two Case Studies

I started this blog over two years ago to focus on the criticality of data protection and specifically data recovery. While technology continues to evolve, the importance of these two elements remains consistent. Every company must have a recovery strategy to protect against data loss or corruption. Some people may be inclined to de-emphasize backup and recovery based on the faulty assumption that today's virtualized hardware and software are more reliable or flexible, but this is a mistake. In the last month, we have seen two examples of why data recovery is critical, and both affected entities had large IT staffs and huge budgets. Without an effective protection strategy, massive data loss would have been unavoidable in both cases. The companies recovered the vast majority of their data but experienced outages that were far longer and more damaging than either anticipated.

Categories
Backup Restore

Agent-based VMware Backups

My last blog post contained a poll asking visitors about their primary VMware backup methodology. The survey listed the common approaches to protecting virtualized environments, including traditional agent-based, VCB/VADP, dedicated VMware backup applications, snapshots, and doing nothing. The results suggest that the agent-based approach is most commonly used. I anticipate that end users will migrate to backup methodologies that support VMware's VADP functionality, but I believe there will always be a subset of people who rely on the agent-based approach. When implementing the agent-based approach, you should consider the following:

Categories
Backup D2D Restore

Boost vendor lock-in

A couple of weeks ago, I blogged about the benefits of Symantec's OpenStorage Technology (OST). The technology enables accelerated disk-to-disk (D2D) backups, primarily over IP connections, along with additional value-added features. Last week, EMC responded with their announcement of BOOST for NetWorker. Insiders have told me that the BOOST architecture is essentially the same as OST, although the go-to-market strategy is very different. Of course, a major difference is that OST has been shipping for over three years, while BOOST will not be available until sometime in the second half of 2010.

As discussed previously, EMC/Data Domain was unable to create a true global deduplication solution and so was forced to use OST to do the heavy lifting. Ironically, they could only support Symantec NetBackup and Backup Exec with the new feature because NetWorker did not offer an advanced D2D interface. The BOOST announcement addresses these issues but raises new questions. Specifically, BOOST is positioned as an EMC-only solution, and it is unclear if the API will be shared with other vendors. In my opinion, this creates a challenge for EMC/Data Domain and NetWorker. Let's look at how the situation impacts a variety of interested parties.

Categories
Backup Restore

Data protection storage and business value

George Crump posted an article over on Network Computing discussing why storage is different for data protection. He makes a number of points regarding the benefits of using a storage appliance approach versus a software-only model, and for the most part, I agree with his analysis. However, there is an important point missing.

The software-only model relies on a generic software stack that can use any hardware or storage platform. This extreme flexibility also creates extreme headaches. The software provider or ISV cannot certify every hardware and environment combination and so the customer is responsible for installing, qualifying and testing their system. Initial setup can be difficult, but support can be even harder.

What happens if the product is not performing? The support picture quickly becomes complicated. Do you call your software ISV, your storage vendor, your SAN provider, or your HBA vendor? There are myriad hardware pieces at play, and the challenge becomes how to diagnose and resolve any product issues. This is less of a problem in small environments with simple needs, but it rapidly becomes an issue as data sizes grow.

Categories
Backup Restore

Pondering VPLEX and backup

The Twittersphere was abuzz yesterday with EMC's announcement of VPLEX. For those of you who missed it, VPLEX is a storage virtualization and caching solution that presents block storage over long distances. The initial release only supports data center and metro distances, with continental and global reach planned for the future. The announcement struck me as yet another flavor of storage virtualization, which is already offered by many vendors, and it got me thinking about protecting VPLEX data.

Traditional data protection architectures revolve around the concept of a master backup server supporting slave media servers and clients. The master server owns the entire backup environment and tells each server when and where to back up. The model is mature and works well in today's datacenters, where servers are static and technologies like VMotion move VMs to new servers within the confines of the datacenter. However, the concept of global VMotion can break this model.
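To illustrate where the model strains, here is a simplified sketch of a master server's static, locality-based policy; all of the hostnames, sites, and policy structures are hypothetical and are not drawn from any particular backup product:

```python
# Each datacenter has a local media server that receives backup data.
media_servers = {"boston-dc": "media-bos-01", "london-dc": "media-lon-01"}

# The master's static policy: a client is always backed up by the media
# server in its home datacenter.
backup_policy = {"app-vm-42": "boston-dc"}

def assign_media_server(client: str, current_site: str) -> str:
    home_site = backup_policy[client]
    media = media_servers[home_site]
    if current_site != home_site:
        # Global VMotion moved the VM, but the static policy did not follow:
        # backup traffic now crosses the WAN back to the home-site media server.
        print(f"{client} runs in {current_site} but backs up to {media} in {home_site}")
    return media

assign_media_server("app-vm-42", "london-dc")
```

The sketch simply shows that when a VM can land anywhere on the globe, a policy built around fixed locality either sends backup data over long distances or has to be rewritten every time the VM moves.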

Categories
Backup Deduplication Restore

Data Domain & GDA – Bolt-on to the rescue

One of the biggest challenges facing today's datacenter managers is protecting the vast quantities of data being generated. As volumes have increased, customers have looked for larger and larger backup solutions. Multi-node global deduplication systems have become critical to enabling companies to meet business requirements, and EMC/Data Domain's response to these challenges has been "add another box," which is their answer to all capacity or performance scalability questions. It appears that Data Domain has acknowledged that this argument no longer resonates and has reverted to Plan B: bolt-on GDA.

The use of the term “bolt-on” stems from a previous blog post by EMC/Data Domain’s VP of Product Management, Brian Biles. In the entry, he characterizes other deduplication vendors as bolt-on solutions, and the obvious implication is that Data Domain is better because it is not a bolt-on. Few would agree with this assertion, but it is an interesting opinion and I will return to this later.

Categories
Backup Restore

Video Demo: SEPATON and Symantec OST

Today SEPATON announced that we are demonstrating OST technology at Symantec Vision, and I created this short video demo to highlight the technology. Enjoy!

http://www.youtube.com/watch?v=E-HijDClbL0
Categories
Backup Physical Tape Restore

LTO-5 and Disk-based Backup

HP recently announced the availability of LTO-5, and they are currently hosting industry luminaries at their HP Storage Day. I received a question on Twitter from John Obeto about LTO-5 and what it means for VTLs, and I wanted to answer it here. Note that I previously blogged about LTO-5.

The challenge with data protection is ensuring that you meet your backup and recovery requirements, and most companies have fixed SLAs. The advent of LTO-5's larger tape capacity is nice, but capacity is not the problem; the issue is real-world performance. Quantum's LTO-5 specification suggests a maximum performance of 140 MB/sec, which is an impressive statistic, but in practice few end users achieve it. The challenge is even greater when you think about minimum required transfer rates, as discussed in my fallacy of faster tape post.
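As a quick back-of-the-envelope illustration of why sustained throughput matters more than cartridge capacity, here is a rough sketch; the 20 TB dataset and the eight-hour window are hypothetical assumptions for the sake of the arithmetic, not figures from this post:

```python
# Hypothetical backup requirement (illustrative assumptions)
dataset_tb = 20      # size of a full backup in TB
window_hours = 8     # available backup window

# Sustained rate needed to finish the full backup within the window
required_mb_per_sec = (dataset_tb * 1024 * 1024) / (window_hours * 3600)
print(f"Required sustained rate: {required_mb_per_sec:.0f} MB/sec")  # ~728 MB/sec

# Compare against LTO-5's quoted native maximum
lto5_native_mb_per_sec = 140
drives_needed = required_mb_per_sec / lto5_native_mb_per_sec
print(f"LTO-5 drives needed at rated speed: {drives_needed:.1f}")    # ~5.2
```

Even if every drive actually streamed at its rated maximum, which rarely happens in practice, this hypothetical environment would still need multiple drives running flat out for the entire window; bigger cartridges do nothing to close that gap.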

Categories
Backup Restore

Lessons learned from the COPAN acquisition

The rumors of the demise of COPAN were rampant in late 2009. There was broad speculation that general operations had wound down and that the company was maintaining a skeletal staff. It was clear that COPAN’s end was near and the management team was scrambling for an exit strategy. Most people assumed that the recent silence from COPAN suggested that the company had not survived.

It was in the context of this situation that I saw a tweet last night about COPAN being acquired. The first questions were who and for how much, and the tweet suggested that the answers were SGI and $2 million, respectively. Wow, what an amazing decline. COPAN raised $124 million in multiple financing rounds, and they exit the market at a $2 million valuation.

COPAN focused on MAID (massive array of idle disks). The technology allowed them to spin down unused disks to reduce power and cooling requirements. The design included proprietary, highly dense disk packaging that provided the densest storage in the industry and actually required some datacenters to specially reinforce their flooring. They focused on $/GB and claimed the lowest in the industry from both an acquisition and an operational cost standpoint. All of this sounded compelling from a marketing perspective, but the reality was different.