Categories
Backup Restore

Pondering VPLEX and backup

The Twittersphere was abuzz yesterday with EMC’s announcement of VPLEX. For those of you who missed it, VPLEX is a storage virtualization and caching solution that presents block storage over long distances. The initial release supports only data center and metro distances, with continental and global reach planned for the future. The announcement struck me as yet another flavor of storage virtualization, a capability already offered by many vendors, and it got me thinking about how to protect VPLEX data.

Traditional data protection architectures revolve around the concept of a master backup server supporting slave media servers and clients. The master server owns the entire backup environment and tells each server when and where to back up. The model is mature and works well in today’s datacenters, where servers are static and technologies like VMotion move VMs to new servers within the confines of the datacenter. However, the concept of global VMotion can break this model.
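
To make the breaking point concrete, here is a rough Python sketch (the names and topology are invented for illustration, not any vendor’s actual API) of a master server that assigns media servers based on the client’s datacenter:

# Hypothetical topology: media servers are registered per datacenter
MEDIA_SERVERS = {
    "datacenter_a": ["media_a1", "media_a2"],
    "datacenter_b": ["media_b1"],
}

def assign_media_server(client, datacenter):
    # Pick a media server in the same datacenter as the client. This works
    # while VMotion only moves VMs between hosts inside one datacenter; a
    # global VMotion that lands the VM at a site the master server does not
    # know about breaks the assignment.
    servers = MEDIA_SERVERS.get(datacenter)
    if not servers:
        raise RuntimeError(f"{client} moved to {datacenter}, outside the known backup topology")
    return servers[0]

print(assign_media_server("vm_finance_01", "datacenter_a"))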

Categories
Backup Physical Tape Replication

Perspectives on Symantec OpenStorage

A couple of weeks ago SEPATON demonstrated OpenStorage (OST) at Symantec Vision and I posted a blog entry including a link to the demo. I wanted to explore OST in more detail.

OST is Symantec’s intelligent disk interface. It works with all types of disk targets and is most commonly implemented with deduplication-enabled storage. OST addresses disk as disk rather than through the traditional tape-based metaphor. It handles backups as images, allowing the backup application to read and write data simultaneously and to incrementally delete expired information. OST also enables access to NetBackup’s native disk features such as SAN Client backups, Media Server Load Balancing, Intelligent Disk Capacity Management and Storage Lifecycle Policies. These are features of NetBackup that can benefit end users but are outside the scope of this blog. In this post, I want to discuss the features that are unique to OST.
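
The actual OST plugin API is proprietary to Symantec, but a minimal Python sketch (with invented names) conveys the image-oriented disk metaphor described above:

class DiskImageStore:
    # Conceptual sketch only: these method names are invented for
    # illustration and are not the real OST plugin interface.
    def __init__(self):
        self.images = {}  # backup image name -> data

    def write_image(self, name, data):
        # Disk is addressed as disk: each backup is a named, random-access image.
        self.images[name] = data

    def read_image(self, name):
        # Reads do not wait for a "tape" to finish; backups and restores can overlap.
        return self.images[name]

    def expire_image(self, name):
        # Expired data is deleted incrementally, image by image, rather than
        # waiting for an entire tape-style volume to be reclaimed.
        self.images.pop(name, None)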

The challenge that end users grapple with is how to move or transform data using their backup appliance while maintaining NetBackup (NBU) catalogue consistency. This can be particularly difficult when using appliance-based tape copy or replication. OST addresses these issues by enabling the appliance to access the NBU catalogue. This means that NBU can instruct the appliance to replicate a copy of the data and maintain separate retention policies on the two copies. Let’s look at these features in more detail:

Categories
Backup Deduplication Restore

Data Domain & GDA – Bolt-on to the rescue

One of the biggest challenges facing today’s datacenter managers is protecting the vast quantities of data being generated. As volumes have increased, customers have looked for larger and larger backup solutions. Multi-node global deduplication systems have become critical to enable companies to meet business requirements, and EMC/Data Domain’s response to these challenges has been “add another box,” their answer to all capacity and performance scalability questions. It appears that Data Domain has acknowledged that this argument no longer resonates and has reverted to Plan B: bolt-on GDA.

The use of the term “bolt-on” stems from a previous blog post by EMC/Data Domain’s VP of Product Management, Brian Biles. In that entry, he characterizes other deduplication vendors as bolt-on solutions, and the obvious implication is that Data Domain is better because it is not a bolt-on. Few would agree with this assertion, but it is an interesting opinion, and I will return to it later.

Categories
Backup Restore

Video Demo: SEPATON and Symantec OST

Today SEPATON announced that we are demonstrating OST technology at Symantec Vision, and I created this short video demo to highlight the technology. Enjoy!

http://www.youtube.com/watch?v=E-HijDClbL0
Categories
Backup Deduplication Replication

Deduplication ratios and their impact on DR cost savings

There is an interesting blog discussion between Dipash Patel from CommVault and W. Curtis Preston from Backup Central and TruthinIT about whether higher deduplication ratios deliver increasing or diminishing benefits. They take different perspectives, and I will highlight their points and add an additional one to consider.

Patel argues that increasing deduplication ratios beyond 10:1 provides only a marginal benefit. He calculates that going from 10:1 to 20:1 raises capacity savings by only five percentage points (from 90% to 95%) and suggests that this is a marginal gain. He adds that vendors who suggest that a doubling of the deduplication ratio will result in a doubling of cost savings are using a “sleight of hand.” He makes an interesting point, but I disagree with his core statement that increasing deduplication ratios beyond 10:1 provides only marginal savings.
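
A quick back-of-the-envelope calculation, using an assumed 100 TB of protected data rather than anyone’s published figures, shows both sides of the argument:

protected_tb = 100.0  # assumed amount of protected data
for ratio in (10, 20):
    stored_tb = protected_tb / ratio   # capacity actually consumed on disk
    savings = 1 - 1.0 / ratio          # fraction of capacity saved
    print(f"{ratio}:1 stores {stored_tb:.0f} TB, saving {savings:.0%}")
# 10:1 stores 10 TB (90% savings); 20:1 stores 5 TB (95% savings). The
# savings grow by only five percentage points, which is Patel's argument,
# but the disk you buy and the data you must replicate for DR are cut in half.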

Categories
Backup Physical Tape Restore

LTO-5 and Disk-based Backup

HP recently announced the availability of LTO-5 and is currently hosting industry luminaries at its HP Storage Day. I received a question on Twitter from John Obeto about LTO-5 and what it means for VTLs, and I wanted to answer it here. Note that I previously blogged about LTO-5.

The challenge with data protection is ensuring that you meet your backup and recovery requirements, and most companies have fixed SLAs. The advent of LTO-5’s larger tape capacity is nice, but tape size is not the problem; the issue is real-world performance. Quantum’s LTO-5 specification cites a maximum performance of 140 MB/sec, which is an impressive statistic, but in practice few end users achieve it. The challenge is even greater when you think about minimum required transfer rates, as discussed in my fallacy of faster tape post.
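
A rough Python calculation with assumed numbers (not figures from the post) illustrates why required transfer rates, not tape capacity, are the constraint:

backup_tb = 20.0       # data to protect per night (assumption)
window_hours = 8.0     # backup window (assumption)
drives = 4             # LTO-5 drives available (assumption)

aggregate_mb_per_sec = backup_tb * 1024 * 1024 / (window_hours * 3600)
per_drive = aggregate_mb_per_sec / drives
print(f"{aggregate_mb_per_sec:.0f} MB/sec aggregate, {per_drive:.0f} MB/sec per drive")
# Roughly 728 MB/sec aggregate, or about 182 MB/sec per drive -- above even
# the 140 MB/sec specification before slow clients and network hops are
# factored in, which is why drive speed alone does not solve the problem.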

Categories
Deduplication

TSM Target Deduplication: You Get What You Pay For

I was recently pondering TSM’s implementation of target deduplication and decided to review ESG’s Lab Validation report on IBM TSM 6.1. There is quite a bit of good information in the paper, and some really interesting data about TSM’s target deduplication.

Before discussing the results, it is important to understand the testing methodology. Enterprise Strategy Group clearly states that the report was based on “hands-on testing [in IBM’s Tucson, AZ labs], audits of IBM test environments, and detailed discussions with IBM TSM experts.” (page 5) This means that IBM installed and configured the environment and allowed ESG to test the systems and review the results. IBM engineers are clearly experts in TSM, so you would assume that any systems provided were optimally configured for performance and deduplication. The results experienced by ESG are likely a best-case scenario, since the average customer may not have the flexibility (or knowledge) to configure a similar system. This is not a problem, per se, but readers should keep it in mind.

Categories
Deduplication

TSM and Deduplication: 4 Reasons Why TSM Deduplication Ratios Suffer

TSM presents unique deduplication challenges due to its progressive incremental backup strategy and architectural design. This contrasts with the traditional full/incremental model used by competing backup software vendors. The result is that TSM users will see smaller deduplication ratios than their counterparts using NetBackup, NetWorker or Data Protector. This post explores four key reasons why TSM is difficult to deduplicate.
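
As a toy illustration with assumed numbers (not TSM measurements), compare how much duplicate data each backup model presents to the deduplication engine:

primary_tb = 10.0      # size of the protected data set (assumption)
weekly_change = 0.05   # 5% of the data changes each week (assumption)
weeks = 4

# Weekly fulls: the entire data set lands on the target every week, and each
# full is ~95% identical to the previous one (daily incrementals ignored for
# simplicity), so the deduplication engine finds a lot to remove.
full_ingested = primary_tb * weeks
unique_data = primary_tb + primary_tb * weekly_change * (weeks - 1)
print("full backup model ratio ~", round(full_ingested / unique_data, 1))

# Progressive incremental: after the first full, TSM sends only new or
# changed files, so most of what arrives is already unique and the ratio
# stays close to 1:1 even though the same data is ultimately protected.
tsm_ingested = unique_data
print("progressive incremental ratio ~", round(tsm_ingested / unique_data, 1))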

Categories
Backup Restore

Lessons learned from the COPAN acquisition

The rumors of the demise of COPAN were rampant in late 2009. There was broad speculation that general operations had wound down and that the company was maintaining a skeletal staff. It was clear that COPAN’s end was near and the management team was scrambling for an exit strategy. Most people assumed that the recent silence from COPAN suggested that the company had not survived.

It was in this context that I saw a tweet last night about COPAN being acquired. The first questions were who and for how much, and the tweet suggested that the answers were SGI and $2 million, respectively. Wow, what an amazing decline. COPAN raised $124 million in multiple financing rounds and exited the market at a $2 million valuation.

COPAN focused on MAID (massive array of idle disks). The technology allowed them to spin down unused disks to reduce power and cooling requirements. The design included proprietary, highly dense disk packaging that provided the densest storage in the industry and was heavy enough that some datacenters had to specially reinforce their flooring. COPAN claimed the lowest $/GB in the industry from both an acquisition and an operational cost standpoint. All of this sounded compelling from a marketing perspective, but the reality was different.

Categories
General Marketing

Tuesday Humor


Comic courtesy of xkcd.com

Via Beth Pariseau from TechTarget.