How To Make Backup And Recovery Foolproof
Nobody wants the nerve-wracking job of backup guy. But there's good news: Intelligent disk-based backup, virtualization, and lots of practice recoveries can make it safe to be in charge of the backup process.
March 13, 2012
As we'll cover in our upcoming webinar, Five Ways That Your Backups Will Get You Fired, there is probably no other IT process that can get you into more trouble, more quickly, than a failed backup. There is no worse feeling than realizing that the data you thought you were going to use to recover an application and save the day is corrupted--or worse, isn't even there. Having to then relay that news to the boss can be a career-ending conversation. Is it any wonder no one in IT wants to be the backup guy?
Several surveys in the last few weeks suggest that good backup and recovery is still a top priority for most IT managers. In fact, improving backup and recovery has been at the top of IT to-do lists, year after year, for as far back as I can remember. You would think that at some point over the last 25 years or so we would have resolved this problem.
Soon, using a combination of technologies, I think we can finally at least move it out of the top 10. Using virtualization and intelligent disk-based backup can make it safe to be in charge of the backup process. In fact, this can work so well that being in charge of backups can become one of the easiest jobs in IT.
Step one: Get a great backup
Conventional wisdom is that "backups are all about recovery." But I have found that if you are not getting a good backup, there is nothing to recover--or what you have to recover is so old that you might as well not have it. You have to back up frequently and completely in a short amount of time without requiring a massive network upgrade.
This requires an intelligent approach. The percentage of data that changes on a day-to-day basis per server is typically small. Backing up only the changed data is ideal. Doing so means you can protect more data and set backup jobs to be triggered throughout the day instead of once a day.
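To make the idea concrete, here is a minimal sketch in Python of a file-level "changed data only" backup pass, meant to run from a scheduler several times a day rather than once a night. The paths, the manifest format, and the copy-to-staging step are illustrative assumptions, not any particular vendor's product; real backup software typically tracks changes at the block level, which is far more efficient than this file-level comparison.

```python
import json
import os
import shutil
from pathlib import Path

SOURCE = Path("/srv/data")          # data to protect (illustrative path)
STAGING = Path("/backup/staging")   # disk-based backup target (illustrative)
MANIFEST = Path("/backup/manifest.json")

def load_manifest():
    # Map of relative path -> [size, mtime] recorded by the previous run.
    if MANIFEST.exists():
        return json.loads(MANIFEST.read_text())
    return {}

def incremental_backup():
    seen = load_manifest()
    changed = 0
    for path in SOURCE.rglob("*"):
        if not path.is_file():
            continue
        rel = str(path.relative_to(SOURCE))
        stat = path.stat()
        sig = [stat.st_size, stat.st_mtime]
        if seen.get(rel) != sig:          # new or modified since the last pass
            dest = STAGING / rel
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(path, dest)      # copy only what changed
            seen[rel] = sig
            changed += 1
    MANIFEST.write_text(json.dumps(seen))
    print(f"copied {changed} changed files")

if __name__ == "__main__":
    # Schedule this every few hours; each pass moves only a small delta.
    incremental_backup()
```

Because each pass moves only the delta, the window per job stays short enough to run throughout the business day without a network upgrade.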
Step two: Develop a great test plan
Most of the unease that goes with backups lies in the mystery of not knowing whether you will be able to recover data when crunch time rolls around. The easiest way to solve this problem is to test your recovery process so frequently that you know any recovery attempt will work. The problem, of course, is finding the time to do that testing.
To perform recovery tests you'll need to find a standby server, put it on its own network, and recover all your data and applications to it. That takes physical hardware and the one resource IT never has enough of: time. Enter virtualization. Virtualization is key to a successful recovery process. Several modern backup applications not only can recover to a virtual machine, they can create a virtual machine directly from the backup file, eliminating the data transfer step entirely. Recoveries are finally faster than backups. There are vendors that can provide this virtual recovery capability even if none of your servers are virtualized.
Virtualization makes a huge difference to testing. You can recover a virtual instance of a server in minutes. This is better than just making sure you can copy data--you are starting the full server and its application in minutes, and that brings confidence. Virtualization is also cost efficient because only the last few backups need to be stored on disk. Archival data can be directed elsewhere.
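As a rough illustration of what routine, automated recovery testing can look like, here is a sketch in Python that boots a throwaway VM from an exported disk image and checks that the recovered application actually answers on its port. The image path, the port numbers, and the assumption that your backup product can expose a recovery point as a bootable qcow2 image are all illustrative; the specifics depend entirely on your tools.

```python
import socket
import subprocess
import time

# Illustrative assumptions: the backup tool can expose the latest recovery
# point as a bootable qcow2 disk image at this path, and the recovered
# server runs a web application on port 80.
RECOVERY_IMAGE = "/backup/exports/app-server-latest.qcow2"
GUEST_PORT = 80
HOST_PORT = 8080
BOOT_TIMEOUT = 300  # seconds to wait for the application to respond

def boot_test_vm():
    # -snapshot keeps all writes in a temporary overlay, so the exported
    # backup image is never modified by the test.
    return subprocess.Popen([
        "qemu-system-x86_64",
        "-m", "2048",
        "-snapshot",
        "-drive", f"file={RECOVERY_IMAGE},format=qcow2",
        "-netdev", f"user,id=net0,hostfwd=tcp::{HOST_PORT}-:{GUEST_PORT}",
        "-device", "virtio-net-pci,netdev=net0",
        "-display", "none",
    ])

def app_answers(deadline):
    # The test passes only if the recovered application responds,
    # not merely if the files came back.
    while time.time() < deadline:
        try:
            with socket.create_connection(("127.0.0.1", HOST_PORT), timeout=5):
                return True
        except OSError:
            time.sleep(10)
    return False

if __name__ == "__main__":
    vm = boot_test_vm()
    try:
        ok = app_answers(time.time() + BOOT_TIMEOUT)
        print("recovery test PASSED" if ok else "recovery test FAILED")
    finally:
        vm.terminate()
```

Run on a schedule, a check like this turns recovery testing from a dreaded quarterly exercise into a pass/fail report that lands in your inbox.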
There are other details to cover, such as disaster recovery and recovery-point objectives (that is, how much recent data you can afford to lose). However, with rapid backups and the even-faster, in-place recoveries that virtualization makes possible, the rest of your backup problems will quickly fall in line.
George Crump is lead analyst of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments.