SEPATON Versus Data Domain

One of the questions I often get asked is “how do your products compare to Data Domain’s?” In my opinion, we really don’t compare because we play in different market segments. Data Domain’s strength is in the low-end of the market, think SMB/SME while SEPATON plays in the enterprise segment. These two segments have very different needs, which are reflected in the fundamentally different architectures of the SEPATON and Data Domain products. Here are some of the key differences to consider.

  • Scalability – Rapid data growth is the norm for business today. To add either capacity or performance to a Data Domain environment, you have to add a whole new system. For SMBs, managing two or three systems is workable. However, for enterprises with orders of magnitude more data, managing dozens of these “silos of storage” doesn’t make sense. It’s too complicated, too expensive, and too inefficient (data is not deduplicated across systems). SEPATON’s grid-based architecture lets you add capacity or performance as you need it, in easy modular increments. The SEPATON design enables enterprises to manage tens of petabytes of data in a single system.
  • Backup Performance – A Data Domain system can back up SMB-sized data volumes within a typical backup window. However, Data Domain’s single-node processing limit and in-line deduplication process make its systems far too slow to handle an enterprise-scale backup. A SEPATON system can use as many as five processing nodes to back up and deduplicate data concurrently – without slowing performance. Each node can back up and deduplicate up to 25 TB per day, for an aggregate of up to 125 TB per day across five nodes.
  • Restore Performance – Again, a Data Domain system may be able to handle the small data volumes of an SMB. However, its intrinsic design makes restores of large data sets impractical. First, it needs to reconstitute deduplicated data (even if the data was just backed up) before it can be restored. Second, the single-node processing limit discussed above applies to restore times as well. SEPATON eliminates these issues and delivers the fastest restore times in the industry. It uses forward referencing of deduplicated data, which lets it restore the most recently backed-up data without reassembly (see the sketch after this list). It is also designed to restore data through any port, using up to five processing nodes for wire-speed performance.
  • VTL vs. NFS/CIFS – Because large enterprises have substantial investments in physical tape, SEPATON designed its VTLs to integrate seamlessly into these environments. Customers can leverage existing tape-based policies and procedures. Data Domain uses an NFS/CIFS model to meet the needs of its SMB target market. Although they do have an FC VTL, their real focus (and 90 percent of their customers) is the NFS/CIFS-focused SMB market.
  • Management Control – For SMBs with straightforward backup needs, Data Domain’s “all or nothing” method makes sense. However, enterprises have far more complex backup requirements. SEPATON meets this need in several ways. First, it automates all of the disk subsystem management tasks, including capacity allocation, performance throttling, and load balancing. Second, it lets administrators choose which data they want to deduplicate and the level of deduplication to apply. Third, it provides self-monitoring, “email home” functionality to notify admins when human intervention is needed.
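
To make forward referencing concrete, here is a minimal Python sketch. Everything in it (the class name, the chunk-list representation, the re-pointing loop) is invented for illustration and is not SEPATON’s actual implementation; the property it demonstrates is that the most recent backup always holds literal data, so restoring it is a straight sequential read, while older backups follow references forward.

```python
# A toy forward-referenced dedup store. Real systems use chunk indexes and
# fingerprints instead of scanning; this sketch only shows the reference
# direction, not production data structures.

class ForwardRefStore:
    def __init__(self):
        # backup_id -> list of entries; each entry is ("data", bytes) for a
        # literal chunk or ("ref", (backup_id, index)) for a pointer.
        self.backups = {}

    def ingest(self, backup_id, chunks):
        """Store a new backup, re-pointing older duplicates at this copy."""
        for old_entries in self.backups.values():
            for j, (kind, payload) in enumerate(old_entries):
                if kind == "data" and payload in chunks:
                    # Forward referencing: the older backup now points
                    # ahead to the newest copy of this chunk.
                    old_entries[j] = ("ref", (backup_id, chunks.index(payload)))
        # The newest backup always stores the literal bytes, in order.
        self.backups[backup_id] = [("data", c) for c in chunks]

    def restore(self, backup_id):
        """The newest backup reads straight through; older ones chase refs."""
        out = []
        for kind, payload in self.backups[backup_id]:
            while kind == "ref":               # follow (possibly chained) refs
                ref_id, idx = payload
                kind, payload = self.backups[ref_id][idx]
            out.append(payload)
        return b"".join(out)

store = ForwardRefStore()
store.ingest("mon", [b"AAAA", b"BBBB"])
store.ingest("tue", [b"AAAA", b"CCCC"])        # "AAAA" moves forward to "tue"
assert store.restore("tue") == b"AAAACCCC"     # newest restore: no reassembly
assert store.restore("mon") == b"AAAABBBB"     # older restore follows refs
```

Contrast this with conventional reverse referencing, where the first backup holds the data and every later backup points back at it: there, the restore you care about most (last night’s) is the one that pays the reassembly cost.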

These bullets highlight some of the key architectural differences between SEPATON and Data Domain. Data Domain’s solutions are ideally targeted at smaller environments where many of the above issues are less important. SEPATON’s solutions provide the best scalability and performance in the industry.

4 replies on “SEPATON Versus Data Domain”

You didn’t mention the one thing that DD always does mention: they have deduped replication and you don’t yet. It’s kind of really important if somebody wants to go NOTAPES, isn’t it? When are we going to see you announce that?

I know, I sound like a broken record. I’m consistent, though. I give DD a hard time for not having global (multi-node) dedupe. I give EMC & Quantum a hard time for their restore speed issue. And I’m giving you a hard time for not having deduped replication.

Curtis,

It is my policy not to comment on unannounced products here on the blog. Deduplicated replication is an important feature, and I ask that you stay tuned for more details.

Jay,

I am new to deduplication, but I heard from someone that the DeltaStor product is superior to other deduplication products because it understands what is in the backup set (and can therefore do a better job of deduplication).

I am wondering how this happens. Do you have some APIs by which you can read the backup catalog, or something like that? Second, I use encryption on my backups. Will DeltaStor work with that?

Thanks,

Claire

Claire,

Thank you for your comment. DeltaStor understands the objects contained within each backup set by looking at the metadata embedded within the backup data stream. This happens transparently to the user and allows us to more effectively find redundancies.
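
To illustrate what “looking at the metadata” buys you, here is a toy sketch. The length-prefixed record format and the function names are invented for this example (real backup streams carry application-specific headers, such as tar’s, that a content-aware engine would parse instead); the point is that once you know object boundaries, you can compare each object against its previous version rather than chunking the stream blindly.

```python
# Content-aware dedup, toy version: parse object boundaries out of the
# stream, then store only what changed per object. The format is invented.
import struct

def record(name, payload):
    """Encode one (name, payload) record in the toy stream format."""
    return struct.pack(">HI", len(name), len(payload)) + name.encode() + payload

def parse_backup_stream(stream):
    """Yield (object_name, payload) records from the toy stream."""
    offset = 0
    while offset < len(stream):
        name_len, data_len = struct.unpack_from(">HI", stream, offset)
        offset += 6
        name = stream[offset:offset + name_len].decode()
        offset += name_len
        yield name, stream[offset:offset + data_len]
        offset += data_len

def dedupe_against_previous(stream, previous_objects):
    """Diff each object against the prior version of the *same* object,
    counting only the changed tail as new bytes to store (crude delta)."""
    stored, new_objects = 0, {}
    for name, payload in parse_backup_stream(stream):
        old = previous_objects.get(name, b"")
        common = 0
        for a, b in zip(old, payload):       # shared leading bytes
            if a != b:
                break
            common += 1
        stored += len(payload) - common      # only the remainder is new
        new_objects[name] = payload
    return stored, new_objects

monday  = record("etc/hosts", b"10.0.0.1 db\n") + record("app.log", b"A" * 1000)
tuesday = record("etc/hosts", b"10.0.0.1 db\n") + record("app.log", b"A" * 1000 + b"B" * 10)
_, objects = dedupe_against_previous(monday, {})
stored, _  = dedupe_against_previous(tuesday, objects)
print(stored)  # 10: only the bytes appended to app.log are stored
```

A blind fixed-size chunker could miss matches like these entirely if an object shifted position between backups, which is why knowing the object boundaries helps.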

Encryption randomizes the source data. This makes deduplication difficult if not impossible because there are no consistent redundancies. This situation impacts all deduplication algorithms, and your best bet is to disable encryption, if possible.
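
Here is a small, standard-library-only demonstration of the effect. The XOR “cipher” below is a stand-in invented for illustration, not a real encryption algorithm, but the behavior it shows holds for real ciphers too: identical plaintext chunks share a fingerprint and deduplicate completely, while the same data encrypted under fresh keys produces no matching chunks at all.

```python
# Why encryption defeats chunk-level dedup: fingerprint fixed-size chunks
# of the same data, plain and "encrypted", and count the unique chunks.
import hashlib, os

CHUNK = 4096

def chunk_fingerprints(data):
    """SHA-256 each fixed-size chunk; duplicate chunks share a digest."""
    return [hashlib.sha256(data[i:i + CHUNK]).digest()
            for i in range(0, len(data), CHUNK)]

def toy_encrypt(data, key):
    """Stand-in cipher: XOR with a SHA-256-derived keystream."""
    stream, counter = bytearray(), 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

# Two nightly backups of the same 1 MB file (256 identical 4 KB chunks).
plaintext = os.urandom(CHUNK) * 256
night1 = night2 = plaintext

# Unencrypted: the second night deduplicates against the first entirely.
plain_unique = len(set(chunk_fingerprints(night1) +
                       chunk_fingerprints(night2)))

# Encrypted with a fresh key each night: every chunk looks unique.
enc_unique = len(set(chunk_fingerprints(toy_encrypt(night1, os.urandom(32))) +
                     chunk_fingerprints(toy_encrypt(night2, os.urandom(32)))))

print(plain_unique)  # 1   -- one unique chunk across both backups
print(enc_unique)    # 512 -- no deduplication possible
```

This is also why, where encryption is required, it generally has to be applied after deduplication rather than before the data reaches the dedup engine.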
