I started this blog over two years ago to focus on the criticality of data protection and specifically data recovery. While technology continues to evolve, the importance of these two elements remains consistent. Every company must have a recovery strategy to protect against data loss or corruption. Some people may be inclined to de-emphasize backup and recovery based on the faulty assumption that today's virtualized hardware and software are more reliable or flexible, but this is a mistake. In the last month, we have seen two examples of why data recovery is critical, and both affected entities had large IT staffs and huge budgets. Without an effective protection strategy, massive data loss would have been unavoidable in both cases. The companies recovered the vast majority of their data but experienced an outage that was far longer and more damaging than either anticipated.
This blog primarily focuses on protecting corporate data, but I recently received a call from my father that reminded me of the criticality of protecting personal data. My father called expressing frustration that his laptop hard drive had failed and corrupted his data. Fortunately, he had backup copies of his most critical files on a USB stick; however, his email history and address book were not stored on the external device and were lost. I mention this story to remind you of the importance of personal data protection. What are you doing to back up your data?
There are many different approaches to protecting personal data. The two key concerns to consider are:
- What happens if I lose the hard drive where my data is stored or experience a software problem such as a virus?
- What happens if I suffer a more extreme data loss such as my house burning down?
Each question is critical, and the answer will vary depending on the data. For example, digital pictures of your family might have a different priority than your MP3 library. The former is irreplaceable and the latter is not. These priorities will impact the chosen data protection medium and methodology.
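To make the priority idea concrete, here is a minimal sketch of a priority-driven personal backup in Python. The paths and the two-tier scheme are hypothetical illustrations, not a recommendation of specific tools: irreplaceable data (family photos) is copied to two destinations, one of them offsite, while replaceable data (the MP3 library) gets a single local copy.

```python
import shutil
from pathlib import Path

# Hypothetical locations -- adjust to your own layout.
SOURCES = {
    "irreplaceable": [Path.home() / "Pictures"],   # family photos: cannot be recreated
    "replaceable":   [Path.home() / "Music"],      # MP3 library: can be re-ripped or re-bought
}
DESTINATIONS = {
    # Irreplaceable data gets two copies, one offsite (house-fire scenario).
    "irreplaceable": [Path("/media/usb/backup"), Path("/mnt/offsite/backup")],
    # Replaceable data gets one local copy (drive-failure scenario).
    "replaceable":   [Path("/media/usb/backup")],
}

def back_up(sources, destinations):
    """Copy each source tree to every destination assigned to its priority tier."""
    copied = []
    for priority, dirs in sources.items():
        for src in dirs:
            if not src.exists():
                continue  # skip folders that do not exist on this machine
            for dest_root in destinations[priority]:
                dest = dest_root / src.name
                shutil.copytree(src, dest, dirs_exist_ok=True)
                copied.append(dest)
    return copied
```

The point of the two-tier mapping is exactly the distinction above: the copy count and the destinations follow from how replaceable the data is, not from its size.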
The latest scary backup story comes from a firm called Danger that makes the Sidekick PDA/phone. The Sidekick stores the majority of its data in a central data center, and the data is loaded each time the phone is restarted. The idea is that the data center provides protection if you lose your phone. A good idea, right? Well yes, assuming that Danger adequately protects its customers' data.
A number of outlets are reporting that Danger suffered a catastrophic data loss and all users' data has been lost. I checked with a family friend who confirmed that her Sidekick was down for a week and is now finally working as a phone, but her data is inaccessible. This is unacceptable; Sidekick users paid a monthly fee for this service, and Danger should have maintained reasonable precautions to protect their customers' data. Clearly this is a bad situation for everyone, and there are lessons to be learned by all.
Here are some key takeaways from this event.
Recent Comment
Recently an end user commented about how the replication performance on his DL3D 1500 was less than expected. As he retained more data online, his replication speed decreased substantially and EMC support responded that this is normal behavior. This is a major challenge since slow replication times increase replication windows and can make DR goals unachievable.
The key takeaway from the comment is that testing is vital. When considering any deduplication solution, you must thoroughly test it with both limited and extended retention. In this case, the degradation appeared only as data was retained and would not have been found had the solution been tested with limited retention alone. The key elements you should test include:
- Backup performance
  - On the first backup
  - With retention
- Restore performance
  - On the first backup
  - With retention
- Replication performance
  - On the first backup
  - With retention
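The checklist above amounts to timing the same operations at two points in the retention cycle and comparing throughput. Here is a minimal, vendor-neutral sketch of such a harness in Python; the phase labels are mine, and in practice each callable would invoke your backup product's own CLI or API, which this sketch does not assume.

```python
import time

def measure_phase(label, run):
    """Time one operation (a backup, restore, or replication run).

    `run` is a callable that performs the operation and returns the
    number of bytes moved, so throughput can be computed.
    """
    start = time.perf_counter()
    bytes_moved = run()
    elapsed = time.perf_counter() - start
    mb_per_s = (bytes_moved / 1_000_000) / elapsed if elapsed > 0 else float("inf")
    return {"phase": label, "seconds": elapsed, "MB/s": mb_per_s}

def run_test_matrix(phases):
    """Run every (label, callable) pair -- e.g. 'backup/first' and
    'backup/with-retention' -- and return one result per phase."""
    return [measure_phase(label, fn) for label, fn in phases]
```

Comparing the "first" and "with retention" rows for each of the three operations is what would have surfaced the DL3D replication degradation before deployment rather than after.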
In part 1, I touched on four of the most common challenges with data restoration in a disaster scenario. In this post, I will review some other key considerations. These examples focus on the infrastructure required after a disaster has occurred.
Hurricane Ike has been in the news lately, and my sympathy goes out to all those affected. It is events like these that test IT resiliency. The damage can range from slight to severe, and we invest in reliable and robust data protection processes to protect against disasters like this. The unfortunate reality is that, no matter how much you plan for it, the recovery process often takes longer and is more difficult than expected.
In many respects, data protection is an insurance policy. You hate to pay your homeowner's premium every month, but you do it because you know that it is your only protection if major damage ever happens to your house. In the case of data protection, you invest hours managing your backup environment to enable recovery from incidents like this. The unfortunate reality is that even with the best planning and policies, things still may not turn out as expected. Four of the most common pitfalls I hear from customers include: