All too often, data protection is a checkbox exercise. Have we got backups of our data? Check! Three copies of each? Check! Are they on two different media? Check! Is one offsite? Check! All done!

Except you’re not all done. 

The 3-2-1 rule — three copies of data on two different types of storage media with one copy offsite — is an excellent maxim to follow. It all but guarantees that you’ll always have a copy of your data, whether someone accidentally deletes a file or a server melts down. But that’s all 3-2-1 ensures, and it’s only half of the data protection equation. If you can’t recover data in a timely manner, it doesn’t matter how many backups you have.

For example, let’s imagine your business suffers a massive ransomware attack, an awful eventuality that is mostly a question of “when?” rather than “if?” A recent study found that 54% of companies hit by ransomware were unable to prevent it from encrypting their data, with an average remediation cost of just under $1.9 million. The average downtime was a staggering 23 days, according to a second report.

The problem is that recovering from traditional backup files simply takes too long. In a major disaster or ransomware attack, IT may need to recover dozens or even hundreds of terabytes of critical data. Rehydrating the backups, transferring the data and reinstalling it on hardware takes time — so much so that many organizations have decided it’s cheaper to pay the ransom than to suffer the downtime of recovering from their backups. Earlier this year, Colonial Pipeline paid a $4.4 million ransom to decrypt its business systems so its gas pipeline could resume serving the southeastern United States. Undoubtedly, a company that large had backups, but it decided that paying for the decryption key would cost less in the long run.

Recovering from disaster vs. recovering a server

In addition to the time it takes to recover from backups, there’s also the problem of how the organization recovers. Recovering a single file, workload or server is a straightforward task. Recovering a significant portion of an entire data center is most definitely NOT.

For starters, the order in which workloads are brought back up matters, because the modern IT stack is a complex web of dependencies. If the full supporting software infrastructure isn’t present, applications won’t run. 

Plus, there’s the issue of where an organization will recover. In the case of ransomware, the infrastructure itself is presumably fine. But that’s not necessarily the case after a flood, earthquake or fire. If you do have the luxury of a second data center, does it have the capacity to restore your critical applications? Can you provide access to your end users? And if you’re trying to recover to the cloud, converting machine images, rebuilding networks and reconfiguring security is a slow, error-prone process, even if you do have people with cloud-specific skills.

For all these reasons, a modern organization needs to protect itself with Disaster-Recovery-as-a-Service (DRaaS). In this scenario, a trusted partner works with you to design a DR strategy, implement an offsite repository — typically of virtual machine (VM) snapshots — and test recovery, either to the provider’s facility or to the cloud. In this way, the organization has a plan to recover quickly from an adverse event that takes out much of its data infrastructure. 

Conversational Geek

We’ve partnered with Conversational Geek to produce an eBook that goes into far more detail about the difference between backup and DRaaS. And it does so in a breezy, conversational style that’s fun and easy to read.

Backup is not DRaaS will help you understand the difference between the two — and why your organization needs both.

You can check the eBook out yourself by clicking here. And if you’d like to learn more about OffsiteDataSync’s DRaaS capabilities, check out our pricing page. We’d be happy to speak with you.