
How to Make Your Data Backups Better

Backing up your cloud environment is an important first step toward making sure that your data and infrastructure are always available, but it doesn’t solve everything. For example, because agencies tend to deploy workloads across dozens of cloud platforms, it can be difficult to quickly find the specific server or files required. Instead, administrators can spend hours examining the backups of each cloud platform to find what they need.

Speed of recovery is another challenge. With backups distributed across multiple clouds, it can be difficult to recover quickly in the event of an outage; how quickly you must be back up and running is known as the Recovery Time Objective (RTO). Closely tied to RTO is the Recovery Point Objective (RPO): the maximum amount of data, measured in time since the last backup, that you can afford to lose. “If you back up once a day and have a failure after 23 hours, you have lost 23 hours’ worth of data. Not too many agencies have that kind of tolerance for data loss,” said Sebastian Straub, a principal solutions architect with N2WS, which specializes in data protection for cloud-based workloads.
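To make the RPO tradeoff concrete, here is a minimal sketch in Python, purely illustrative and not tied to any particular product, of the worst-case data loss implied by a given backup schedule:

```python
from datetime import datetime, timedelta

def data_loss_window(last_backup: datetime, failure: datetime) -> timedelta:
    """Amount of un-backed-up work (expressed as elapsed time) lost if a
    failure happens at `failure`, given the most recent completed backup."""
    return failure - last_backup

# The article's example: daily backups, with a failure 23 hours after the
# last backup completed. Everything written in that window is gone.
last_backup = datetime(2024, 1, 1, 0, 0)
failure = last_backup + timedelta(hours=23)
print(data_loss_window(last_backup, failure))  # 23:00:00 -> 23 hours of data lost
```

Shortening the backup interval shrinks that window, which is why RPO requirements usually drive how often backups run.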

With so many workloads spread across so many clouds, it also can be difficult to control who has access to your backups. And then there are concerns about data sovereignty — where the data actually lives — and whether vendors or other parties might be able to access that data. These security and compliance concerns constitute very real roadblocks for agencies storing workloads in the cloud.

It’s also important to have confidence that backups are occurring consistently, and that the right people get notified when failures or other issues occur. For example, do you have the ability to carefully audit your environment to determine whether there are workloads that are not being backed up but should be?
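As a starting point for that kind of audit, the sketch below uses the AWS SDK for Python (boto3) to flag EC2 instances that carry no backup tag. The tag key "backup-policy" is a hypothetical convention, not an AWS or N2WS requirement; substitute whatever marker your own policies use.

```python
import boto3

# Audit sketch: list EC2 instances that have no backup policy tag at all.
ec2 = boto3.client("ec2", region_name="us-gov-west-1")

unprotected = []
paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            if "backup-policy" not in tags:  # hypothetical tag convention
                unprotected.append(instance["InstanceId"])

print("Instances with no backup policy tag:", unprotected)
```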

Solution: Automated, Policy-Driven Backup and Recovery

To protect against data loss, security breaches and slow recovery times while providing real-time data access, agencies need an automated, policy-driven, comprehensive backup and recovery plan that includes all data stores and infrastructure.
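In practice, “policy-driven” means the rules are machine-readable so tooling can enforce them without manual intervention. The sketch below is a hypothetical example of such a policy expressed as plain Python data; it is not N2WS’s actual schema, just an illustration of the kinds of decisions a policy captures.

```python
# Hypothetical backup policies: what gets backed up, how often, how long
# copies are kept, and where disaster-recovery copies land.
backup_policies = [
    {
        "name": "mission-critical",
        "targets": {"tag:backup-policy": "mission-critical"},
        "frequency": "hourly",
        "retention_days": 35,
        "dr_copy_region": "us-gov-east-1",
    },
    {
        "name": "standard",
        "targets": {"tag:backup-policy": "standard"},
        "frequency": "daily",
        "retention_days": 14,
        "dr_copy_region": None,  # no cross-region copy for this tier
    },
]
```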

To provide fast availability and recovery time, choose a solution that prioritizes speed and automation. For example, the N2WS virtual appliance uses fast snapshot-based block storage, allowing it to recover an entire environment — all data, network configurations and servers — within about 30 seconds. That’s particularly important when an entire region or data center fails.
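For illustration, the basic building block behind a snapshot-based approach on AWS is the EBS snapshot. The sketch below creates one with boto3; the volume ID is a placeholder, and a real policy-driven tool would discover volumes and schedule snapshots automatically rather than hard-coding them.

```python
import boto3

# Create a point-in-time snapshot of one EBS volume and tag it so the
# backup policy it belongs to can be identified later.
ec2 = boto3.client("ec2", region_name="us-gov-west-1")

snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",  # hypothetical volume ID
    Description="Policy-driven backup example",
    TagSpecifications=[
        {
            "ResourceType": "snapshot",
            "Tags": [{"Key": "backup-policy", "Value": "mission-critical"}],
        }
    ],
)
print("Started snapshot:", snapshot["SnapshotId"])
```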

If a disaster occurs, you should be able to recover everything at the same time instead of machine by machine. If a region fails, your solution also should support cross-region recovery. N2WS, for example, supports disaster recovery between AWS GovCloud (US) regions, so if one fails, everything is immediately available on the other site.
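At the snapshot level, cross-region disaster recovery amounts to keeping a copy of each backup in a second region. The boto3 sketch below copies a completed snapshot from one AWS GovCloud (US) region to the other; the snapshot ID is a placeholder, and an automated solution would do this for every new backup.

```python
import boto3

# Copy a completed snapshot into the disaster-recovery region so it can
# be restored even if the source region is unavailable.
source_region = "us-gov-west-1"
dr_region = "us-gov-east-1"

ec2_dr = boto3.client("ec2", region_name=dr_region)
copy = ec2_dr.copy_snapshot(
    SourceRegion=source_region,
    SourceSnapshotId="snap-0123456789abcdef0",  # hypothetical snapshot ID
    Description="DR copy for cross-region recovery",
)
print("DR copy started:", copy["SnapshotId"])
```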

Providing full security and compliance is important in all backup scenarios. One way to achieve this in cloud environments is to use a solution that never sees or accesses the data it is backing up. Rather than funneling data through the vendor’s systems, the solution should simply carry out the instructions the organization sets for specific data operations. No third party, including the vendor, should have access to or visibility into anything.

A comprehensive backup solution also should be able to automate how data moves across storage tiers to save money. For example, data backed up to relatively expensive Amazon Elastic Block Store (Amazon EBS) should be able to move to less expensive Amazon Simple Storage Service (Amazon S3) or even Amazon S3 Glacier Deep Archive, depending on its importance, relevance and how quickly it needs to be recovered.
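Once backup copies land in Amazon S3, this kind of tiering can be expressed as a lifecycle rule. The sketch below applies one with boto3; the bucket name, prefix and day thresholds are illustrative assumptions, not recommendations.

```python
import boto3

# Lifecycle rule: age backup objects from S3 Standard into cheaper tiers.
s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-backup-archive",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-old-backups",
                "Status": "Enabled",
                "Filter": {"Prefix": "backups/"},  # hypothetical prefix
                "Transitions": [
                    # After 30 days, move to infrequent access; after 90,
                    # to Deep Archive (retrieval takes hours, so reserve it
                    # for data you rarely need back quickly).
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "DEEP_ARCHIVE"},
                ],
            }
        ]
    },
)
```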

This article is an excerpt from GovLoop’s recent report, “Securing the Future of Government Data in the Cloud.” Download the full report here.
