Categories
Deduplication General Marketing

Exchange deduplication ratio guarantee

Scott over at EMC recently posted his thoughts about deduplication ratios and how widely they vary. I agree with his assessment that compression ratios, change rates and retention are the key ingredients in deduplication ratios. However, he makes a global statement, “If you don’t know those three things, you simply cannot state a deduplication ratio with any level of honesty….It is impossible”, and uses this point to suggest that SEPATON’s Exchange guarantee program is “ridiculous”. Obviously the blogger, being an EMC employee, brings his own perspective, as do I as a SEPATON employee. Let’s dig into this a bit more.

As the original author mentioned, the key metrics for deduplication include compression, change rate and retention. Clearly these can vary by data type; however, certain data types provide more consistent deduplication results. As you can imagine, these are applications that are backed up in full every night, have fixed data structures and relatively low data change rates. Examples include Exchange, Oracle and VMware, among others.
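To make the dependence on those three variables concrete, here is a back-of-the-envelope model (my own simplification for illustration, not SEPATON’s guarantee math) that estimates a deduplication ratio for nightly full backups from compression, daily change rate and retention:

```python
def estimated_dedupe_ratio(retained_fulls, daily_change_rate, compression_ratio):
    """Rough deduplication ratio for a data set backed up in full every night.

    retained_fulls    -- number of full backups kept on the appliance
    daily_change_rate -- fraction of the data set that changes each day (0.01 = 1%)
    compression_ratio -- local compression achieved on unique data (2.0 = 2:1)
    """
    logical = retained_fulls  # N fulls of (roughly) the same data set
    physical = (1 + (retained_fulls - 1) * daily_change_rate) / compression_ratio
    return logical / physical

# Example: 30 retained fulls, 1% daily change, 2:1 compression
print(round(estimated_dedupe_ratio(30, 0.01, 2.0), 1))  # ~46.5:1
```

Hold the compression and change rate steady, which is what a fixed-structure application like Exchange backed up in full every night tends to give you, and the ratio becomes predictable enough to stand behind.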

Categories
Deduplication Restore

The hidden cost of deduplicated replication

On the surface, the idea of deduplicated replication is compelling. By replicating only the deltas, the technology sends data across a WAN while dramatically reducing the required bandwidth. Many customers are looking to this technology to allow them to move to a tapeless environment in the future. However, there is a major challenge that most vendors gloss over.

The most common approach to deduplication in use today is hash-based technology that uses reverse referencing. I covered the implications of this approach in another post. To summarize, the issue is that restore performance degrades as data is retained in a reverse-referenced environment. Now let’s look at how this impacts deduplicated replication.
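A toy illustration of the mechanism (my own simplification of hash-based, reverse-referenced deduplication, not any particular vendor’s on-disk format): each nightly full writes only its never-before-seen chunks into that night’s container, so the newest backup, the one you most want to restore quickly, ends up stitched together from chunks scattered across every container that came before it.

```python
import hashlib

store = {}               # chunk hash -> chunk bytes
container_of_chunk = {}  # chunk hash -> container (backup) that first stored it

def ingest_backup(container_id, chunks):
    """Write one backup; only chunks never seen before land in today's container."""
    recipe = []
    for chunk in chunks:
        digest = hashlib.sha1(chunk).hexdigest()
        if digest not in store:
            store[digest] = chunk
            container_of_chunk[digest] = container_id
        recipe.append(digest)  # duplicates become references into older containers
    return recipe

def containers_read_on_restore(recipe):
    """Restoring a backup means reading every container its chunks live in."""
    return {container_of_chunk[d] for d in recipe}

# Day 1 full backup, then 29 more nightly fulls with a little churn each night
data = [bytes([i]) * 4096 for i in range(100)]
recipe = ingest_backup("day-01", data)
for day in range(2, 31):
    data[day] = bytes([150 + day]) * 4096  # one changed block per night
    recipe = ingest_backup(f"day-{day:02d}", data)

print(len(containers_read_on_restore(recipe)))  # 30 containers just for the newest backup
```

Even in this tiny model the most recent backup touches 30 separate containers; at real retention levels the reads become far more scattered, and a replica built the same way presumably inherits a similar layout at the disaster recovery site, which is exactly where fast restores matter most.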

Categories
Deduplication

A little bit off topic – deduplication and primary storage

I am digressing slightly from my usual data protection focus, but I found a recent announcement from Riverbed very interesting: they are developing a deduplication solution for primary storage. As an employee of a deduplication vendor, I wanted to offer some commentary.

First, some background: Riverbed makes a family of WAN acceleration appliances that reduce the amount of traffic sent over a WAN using their proprietary compression and deduplication algorithms. SEPATON is a Riverbed partner, and our Site2 software has been certified with their Steelhead platform. (A bit of disclosure here: I have worked with many people from Riverbed in the past, including the VP of Marketing.)

Riverbed’s announcement is summarized in posts on ByteandSwitch and The Register. In short, they are developing a deduplication solution for primary storage. It will incorporate their existing Steelhead WAN accelerators and another appliance code named “Atlas” which will contain the deduplication metadata. (The Steelhead platform has a small amount of storage for deduplication metadata since little is needed when accelerating WAN traffic. The Atlas provides the metadata storage space required for deduplicating larger amounts of data and additional functionality.) A customer would place the Steelhead/Atlas appliance combination in front of primary storage and these devices would deduplicate/undeduplicate data as it is written/read from the storage platform. This is an interesting approach and brings up a number of questions:
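Before getting to those questions, here is a minimal sketch (my own illustration based on the description above, not Riverbed’s actual Steelhead or Atlas design) of what “deduplicate on write, undeduplicate on read” implies for an appliance sitting in front of primary storage:

```python
import hashlib

class DedupeFrontEnd:
    """Toy in-line dedupe gateway: chunks and indexes writes, rehydrates reads."""

    def __init__(self, chunk_size=4096):
        self.chunk_size = chunk_size
        self.chunks = {}   # hash -> unique chunk (what would land on primary storage)
        self.recipes = {}  # file name -> ordered chunk hashes (the metadata role)

    def write(self, name, data):
        recipe = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # store each unique chunk once
            recipe.append(digest)
        self.recipes[name] = recipe

    def read(self, name):
        # Undeduplicate: reassemble the original bytes from the chunk recipe
        return b"".join(self.chunks[d] for d in self.recipes[name])

fe = DedupeFrontEnd()
fe.write("vm1.vmdk", b"A" * 8192 + b"B" * 4096)
fe.write("vm2.vmdk", b"A" * 8192 + b"C" * 4096)  # shares two chunks with vm1
print(len(fe.chunks))                            # 3 unique chunks stored instead of 6
print(fe.read("vm2.vmdk") == b"A" * 8192 + b"C" * 4096)  # reads come back intact
```

Every read has to pay for that reassembly step, which is a rather different workload than accelerating WAN traffic.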

Categories
Backup Deduplication

IBM Storage Announcement

As previously posted, I was confused about the muted launch of IBM’s XIV disk platform. Well, the formal launch finally occurred at the IBM Storage Symposium in Montpellier, France. Congratulations to IBM, although I am still left scratching my head as to why they informally announced the product a month ago!

Another part of the announcement was the TS7650G, which is Diligent’s software running on an IBM server. Surprisingly, there is not much new; it appears that they are banking on the IBM brand and salesforce to jumpstart Diligent’s sales. Judging by the lack of success in selling the TS75xx series, it will be interesting to see whether they fare any better with this platform.

From a VTL perspective, IBM has boxed themselves in. Like EMC, they have a historic relationship with FalconStor and have chosen a different supplier for deduplication. This creates an interesting dichotomy. Let’s look at the specs of their existing FalconStor-based VTL and the newly announced technology.

Categories
Backup Deduplication Restore Virtual Tape

Keeping it Factual

I periodically peruse the blogosphere looking for interesting articles on storage, data protection and deduplication. As you can imagine, blog content varies from highly product-centric (usually from vendors) to product-agnostic (usually from analysts). I recently ran across a post over at the Data Domain blog, Dedupe Matters. This is a corporate blog where the content appears to be carefully crafted by the PR team and updated infrequently. Personally, I find canned blogs like this boring. That said, I wanted to respond to a post entitled “Keeping it Real” by Brian Biles, VP of Product Management. As usual, I will be quoting the original article.

A year or more later, Data Domain is scaling as promised, but the bolt-ons are struggling to meet expectations in robustness and economic impact.

Categories
D2D Deduplication Virtual Tape

Analyst Commentary on VTL

I am often perusing industry-related sites to find out what people are saying about disaster recovery and data protection. Most of these sites rely on independent contributors to provide the content. Given the myriad of viewpoints and experience levels, it is not uncommon to see a wide range of commentary, some consistent with industry trends and some not. I keep this in mind when reading these articles and generally ignore inconsistencies; however, once in a while an article is so egregiously wrong that I feel a response is necessary.

In this case, I am referring to an article appearing in eWeek where the author makes gross generalizations about VTL that are misleading at best. Let’s walk through his key points:

VTLs are complex

I completely disagree. The reason most people purchase VTLs is that they simplify data protection and can be implemented with almost no change in tape policies or procedures. This means that companies do not have to learn new procedures after implementing a VTL, and thus the implementation is relatively simple, not complex as he suggests.

He also mentions that most VTLs use separate VTL software and storage. This is true for solutions from some of the big storage vendors, but is not the case with the SEPATON S2100-ES2. We manage the entire appliance including storage provisioning and performance management.

Finally, he complains about the complexity of configuring Fibre Channel (FC). While it is true that FC can be more complex than Ethernet, it really depends on how you configure the system. One option is to direct-connect the VTL, which requires none of the FC complexities he harps on. He also glosses over the fact that FC is much faster than the alternatives, which is an important benefit. (My guess is that he is comparing the VTL to Ethernet, but he never clearly states this.)

Categories
Backup Restore Virtual Tape

Rube Goldberg reborn as a VTL

I have fond memories from my childhood of Rube Goldberg contraptions. I was always amazed at how he would creatively use common elements to build these crazy machines. By using everyday items in complicated contraptions, he made even the simplest process look incredibly complex and difficult. But that was the beauty of it: no one would ever use the devices in practice; it was the whimsical, complex nature of his drawings that made them so fun to look at.

[Image: Rube Goldberg definition, courtesy of rubegoldberg.com]

It is in the context of Rube Goldberg that I find myself thinking about the EMC DL3D 4000 virtual tape library. Like Goldberg, EMC has taken an approach to VTL and deduplication that revolves around adding complexity to what should be a relatively simple process. Unfortunately, I don’t think customers will view the solution with the same whimsical and fun perspective as they did Goldberg’s machines.

You may think that this is just sour grapes from an EMC competitor, but I am not the only one questioning the approach. Many industry analysts and backup administrators are confused and left scratching their heads just like this author. Why the confusion? Let me explain.

Categories
Deduplication Restore

Deduplication and restore performance

One of the hidden landmines of deduplication is its impact on restore performance. Most vendors gloss over this issue in their quest to sell bigger and faster systems. Credit goes to Scott from EMC, who acknowledged that restore performance declines on deduplicated data in the DL3D. We have seen other similar solutions suffer restore performance degradation of greater than 60% over time. Remember, the whole point of backing up is to restore when and if necessary. If you are evaluating deduplication solutions, you must consider several questions.

  1. What are the implications of declining restore performance for your business?
  2. What is it about deduplication technology that hurts restore performance?
  3. Can you reduce the impact on restore performance?
  4. Is there a solution that does not have this limitation?
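On the first question, a trivial back-of-the-envelope calculation (illustrative numbers only, not measurements of any particular product) shows what a 60% drop in restore throughput does to a recovery window:

```python
def restore_hours(data_tb, rated_mb_per_sec, degradation=0.0):
    """Hours to restore a data set given rated throughput and a fractional slowdown."""
    effective_mb_per_sec = rated_mb_per_sec * (1.0 - degradation)
    return data_tb * 1024 * 1024 / effective_mb_per_sec / 3600

# 20 TB restore at a rated 800 MB/sec, when the system is new vs. after a 60% slowdown
print(round(restore_hours(20, 800), 1))        # ~7.3 hours
print(round(restore_hours(20, 800, 0.60), 1))  # ~18.2 hours
```

If your recovery time objective was built around the day-one number, more than doubling the restore window is not a footnote; it is a business problem.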
Categories
Backup Deduplication Restore

DL3D Discussion

There is an interesting discussion on The Backup Blog related to deduplication and EMC’s DL3D. The conversation concerns performance, and the two participants are W. Curtis Preston, author of the Mr. Backup Blog, and The Backup Blog’s author, Scott from EMC. Here are some excerpts that I find particularly interesting, with my commentary included. (Note that I am directly quoting Scott below.)

VTL performance is 2,200 MB/sec native. We can actually do a fair bit better than that…. 1,600 MB/sec with hardware compression enabled (and most people do enable it for capacity benefits.)

The 2,200 MB/sec figure is not new; it is what EMC specifies on their datasheet. What is interesting is that performance declines with hardware compression enabled; the compression card must be a performance bottleneck. Is a performance reduction of roughly 27% meaningful? It depends on the environment, but it is certainly worth noting, especially for datacenters where backup and restore performance are the primary concern.
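For reference, the arithmetic behind that figure uses only the two numbers quoted above:

```python
native, with_compression = 2200, 1600  # MB/sec, as quoted above
drop = (native - with_compression) / native
print(f"{drop:.0%}")  # ~27% lower throughput with hardware compression enabled
```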

Categories
Backup Deduplication

6 Reasons not to Deduplicate Data

Deduplication is a hot buzzword these days. I previously posted about how important it is to understand your business problems before evaluating data protection solutions. Here are six reasons why you might not want to deduplicate data.

1. Your data is highly regulated and/or frequently subpoenaed
The challenge with these types of data is whether deduplicated data meets compliance requirements. Jon Toigo over at DrunkenData has numerous posts on this topic, including feedback from a corporate compliance user group. In short, the answer is that companies need to review deduplication carefully in the context of their regulatory requirements. The issue is not one of actual data loss, but the risk of someone challenging the validity of subpoenaed data that was stored on deduplicated disk. The defendant would then face the added burden of proving the validity of the deduplication algorithm. (Many large financial institutions have decided that they will never deduplicate certain data for this reason.)

2. You are deduplicating at the client level
Products like PureDisk from Symantec, Televaulting from Asigra and Avamar from EMC deduplicate data at the client level. With these solutions, the client bears the burden of deduplication and transfers only deduplicated (i.e., net-new) data across the LAN. The master server maintains a disk repository containing only deduplicated data. Trying to deduplicate that already deduplicated repository again will not produce additional storage savings.
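A minimal sketch of that client-side flow (generic content-hash deduplication, not the actual PureDisk, Televaulting or Avamar protocols) shows why the repository that lands on the master server is already reduced to unique data:

```python
import hashlib

server_index = set()    # chunk hashes the master server already knows about
server_repository = {}  # hash -> chunk; the deduplicated disk repository

def client_backup(chunks):
    """The client hashes locally and ships only chunks the server has never seen."""
    sent = 0
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in server_index:  # net-new data: send it over the LAN
            server_index.add(digest)
            server_repository[digest] = chunk
            sent += 1
    return sent

night1 = [b"block-%03d" % i for i in range(1000)]
night2 = night1[:990] + [b"changed-%03d" % i for i in range(10)]  # ~1% change

print(client_backup(night1))   # 1000 chunks cross the LAN on the first backup
print(client_backup(night2))   # only the 10 new chunks cross the LAN the next night
print(len(server_repository))  # 1010 unique chunks: nothing left for a second pass to find
```

Pointing a target-side deduplication device at that repository would just rehash 1,010 already-unique chunks and find no duplicates, which is why layering another deduplication stage on top buys you nothing.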