Deduplication Marketing

SNW Recap

I returned from SNW in Phoenix last night and wanted to recap the event.  I had more than ten meetings at the show and attended several sessions, so here are my perspectives on the event in general and on the sessions I attended.

Deduplication remains hot and still confuses many
I attended five different sessions on deduplication.  The content overlapped quite a bit, and yet all but one of them was full.  In every case, the presentation focused primarily on deduplication for data protection.  I heard there was a great panel discussion on primary storage deduplication, which I unfortunately missed.  Clearly, primary storage dedupe was not ignored, but data protection remained the focus of the dedupe sessions.

Anecdotally, the most common deduplication question related to the difference between target and source deduplication.  It also appeared that deduplication adoption was limited.  When asked who was using some form of deduplication, about 50% of the audience raised their hands, but when queried about system size, hands went down rapidly at around 10-15 TB.
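Since the target-vs-source question came up so often, here is a minimal sketch of the distinction. All names, the fixed 4 KB chunk size, and the SHA-256 fingerprinting are illustrative assumptions, not any vendor's actual implementation: in target deduplication the full backup stream crosses the network and the appliance deduplicates on arrival; in source deduplication the client fingerprints chunks first and sends only the ones the server does not already hold.

```python
import hashlib

CHUNK = 4096  # illustrative fixed chunk size; real products vary


def chunks(data):
    return [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]


class TargetDedupeAppliance:
    """Target dedupe: the full stream crosses the wire; the appliance dedupes."""
    def __init__(self):
        self.store = {}
        self.bytes_received = 0

    def ingest(self, data):
        self.bytes_received += len(data)  # everything travels over the network
        for c in chunks(data):
            self.store.setdefault(hashlib.sha256(c).hexdigest(), c)


class SourceDedupeServer:
    """Source dedupe: the client asks first, then sends only unknown chunks."""
    def __init__(self):
        self.store = {}
        self.bytes_received = 0

    def missing(self, fingerprints):
        return [fp for fp in fingerprints if fp not in self.store]

    def put(self, fp, chunk):
        self.bytes_received += len(chunk)  # only new chunks travel
        self.store[fp] = chunk


def source_side_backup(server, data):
    cs = chunks(data)
    fps = [hashlib.sha256(c).hexdigest() for c in cs]
    wanted = set(server.missing(fps))
    for fp, c in zip(fps, cs):
        if fp in wanted:
            server.put(fp, c)
            wanted.discard(fp)  # send each new chunk only once
```

Running the same backup twice makes the difference visible: the target appliance receives every byte both times, while the source-dedupe server receives only the unique chunks once.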

The key takeaway is that deduplication remains a strong point of interest.  It appears that end users are still trying to understand the technology and how to implement it on a larger scale.


Global Deduplication Explained

W. Curtis Preston recently authored an article explaining global deduplication.  This is an important topic which frequently causes confusion.  Curtis does a good job explaining the technology and what it means to end users, and I recommend the article.

A quick summary is that global deduplication means that a common deduplication repository is shared by multiple nodes in a system.  In these environments, a customer can back up data to any node in the system and it will be deduplicated against related data.  This provides improved ease of use and scalability.
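The idea can be sketched in a few lines. This is a simplified illustration with assumed names, not any product's design: every node writes into one shared fingerprint index, so a chunk already stored by any node is never stored again.

```python
import hashlib


class SharedRepository:
    """One fingerprint index shared by every node in the system."""
    def __init__(self):
        self.index = {}


class Node:
    """A backup node; dedupes against the shared (global) repository."""
    def __init__(self, repo):
        self.repo = repo

    def backup(self, data, chunk_size=4096):
        stored = 0
        for i in range(0, len(data), chunk_size):
            c = data[i:i + chunk_size]
            fp = hashlib.sha256(c).hexdigest()
            if fp not in self.repo.index:  # seen by *any* node?
                self.repo.index[fp] = c
                stored += len(c)
        return stored  # bytes of new, unique data written
```

With per-node repositories, a second node backing up the same data would store it all again; with the shared index it stores nothing new, which is the ease-of-use and scalability benefit described above.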


Deduplication 2.0

The folks over at the Online Storage Optimization blog recently wrote a post entitled Get Ready for Dedupe 2.0 where they outline their vision for the future of deduplication.  I read the post and was amazed at the similarity between their views and SEPATON’s core VTL architecture. I thought that it would be useful to address each of their points and indicate how they apply to SEPATON’s DeltaScale Architecture.


Streaming LTO-5

Chris Mellor (twitter: @Chris_Mellor) recently posted an article over at The Register about LTO-5 entitled Is LTO-5 the last hurrah for tape?.  He makes an interesting point about the future of LTO and whether LTO-5 will be the last generation of the technology.  Most of the comments on the article disagree with Chris’s opinion.

I believe that there is another major issue with LTO-5 that must be addressed.  The challenge with LTO (and most other tape technologies) is its limited ability to throttle down to match the incoming data rate.  Users must carefully manage their environments to keep their drives streaming, or backup performance will decline dramatically.  As drives become faster, optimizing an environment for the technology becomes even more difficult.  You can read more about this in my blog post entitled The Fallacy of Faster Tape.
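The streaming problem can be illustrated with a toy throughput model. The rates and the 50% repositioning penalty below are made-up illustrative numbers, not vendor specifications: as long as the host feeds the drive at or above its minimum streaming rate, throughput tracks the feed; below it, the drive stop/starts ("shoe-shines") and effective throughput collapses.

```python
def effective_throughput(feed_mbps, native_mbps, min_stream_mbps,
                         reposition_penalty=0.5):
    """Toy model of tape streaming behavior (illustrative numbers only).

    At or above the minimum streaming rate, the drive runs at the lower
    of the feed rate and its native rate.  Below it, the drive must stop
    and reposition repeatedly, so throughput drops by a penalty factor.
    """
    if feed_mbps >= min_stream_mbps:
        return min(feed_mbps, native_mbps)
    return feed_mbps * (1 - reposition_penalty)
```

The point is that a faster drive raises the minimum feed rate the host must sustain, so a backup stream that kept an older, slower drive streaming may push a newer drive into shoe-shining and actually get worse performance.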


CommVault and Forward Referencing

I was recently reading this document from CommVault that highlights their deduplication technology, and I was surprised by their use of the term “forward referencing”. Forward referencing is a common term in deduplication with a generally agreed upon definition. CommVault appears to have redefined the word and promoted their version as a feature.  This is confusing and possibly misleading because a reader might not realize that the definition of “forward referencing” in this document is completely different from the one used everywhere else in the industry.


When is a node not a node?

One of the things that irks me is when press, analysts, or vendors compare a competitor’s solution to a single-node SEPATON solution.  SEPATON’s VTL, as well as our DeltaStor deduplication and DeltaRemote replication products, relies on our DeltaScale™ architecture, which is designed around the concept of grid scalability.  The grid allows us to scale dynamically and transparently across multiple independent nodes.  This is very different from competing solutions that rely on a monolithic server approach.


NetApp and Quantum: Why an acquisition would be difficult

A couple of weeks ago, Robin Harris at Storagemojo blogged that he thought it would be a smart move for NetApp to acquire Quantum. I do not agree; I think a Quantum (QTM) and NetApp combination would create major competitive and business challenges and would not be successful in the long term.


It’s final – EMC acquires Data Domain

Just a quick post to highlight Data Domain’s announcement that they have agreed to be acquired by EMC.  As mentioned in previous posts (see related posts below), NetApp did not have the financial strength to compete with EMC.

The companies that have lost the most in this deal are Quantum and NetApp, and it will be interesting to see how NetApp responds.  I discussed NetApp’s situation briefly in Tuesday’s post.


EMC one-ups NetApp

As expected, EMC has increased their bid for Data Domain and is now offering $33.50 per share in cash. Data Domain has been ignoring EMC in favor of their preferred suitor, NetApp; however, with the recent increase, Data Domain has no choice but to consider the EMC offer.

This situation leaves NetApp in a tough spot. James Bond describes the situation perfectly in the movie For Your Eyes Only:

“I’m afraid we’re being out-horse-powered!”

NetApp wants to acquire Data Domain (and the feeling is mutual), but they are being out-horse-powered by EMC. NetApp does not have the financial strength to go head-to-head with EMC’s increasingly aggressive all-cash offers. NetApp must be evaluating how badly they want Data Domain and at what cost.


Poll: Who will acquire Data Domain?

Things have been quiet on the EMC/NetApp/Data Domain front for the last couple of weeks.  DDUP’s stock price remains above NetApp’s current purchase offer ($30 per share), which suggests that people think the bids will increase.  I also found some seemingly contradictory articles.  The Motley Fool suggests that EMC should back out of bidding for Data Domain because they cannot win.  Storage indicates that EMC has upped their offer to match NetApp’s, which suggests that EMC thinks they can win.  At the very least, we know that EMC has extended their current offer.

As previously posted, I believe EMC will acquire Data Domain. Who do you think will be the acquirer?

  • EMC: They are committed and will win at any cost. (65%, 17 Votes)
  • NetApp: DDUP's board favors NetApp. (27%, 7 Votes)
  • A mysterious company C. (8%, 2 Votes)
  • Nobody, DDUP will remain independent. (0%, 0 Votes)

Total Voters: 26
