
Why Resiliency in Government Data Is More Important Than Ever


This blog post is an excerpt from the recent report, Resiliency in the Hybrid Cloud is Critical to Government. To read the full report, head here.

IT administrators could once treat a backup system like a spare tire – taken out and put to use whenever a primary system failed. That approach is no longer practical, for a variety of reasons. For one, backup systems are often located close to the main data center, which means flooding from a superstorm, for instance, could take out the backup right along with the data center.

Cost is another factor – and a big one. A backup center represents a significant fixed expense for hardware and software, not to mention the real estate required to house it. Even moving a nearby backup center farther from the data center can be too costly for governments with shrinking budgets. The expense of maintaining data and applications can also force agencies to protect only their most mission-critical capabilities.

But the definition of mission-critical is itself changing with the increased reliance on IT systems; agencies are treating more and more of their capabilities as essential. A study by the Enterprise Strategy Group (ESG) found that IT administrators are reluctant to let any IT resource stay offline for long. Respondents – IT professionals involved in data protection technology decisions – said more than half of their servers had recovery time objectives of less than an hour, and that 35 percent of servers had recovery time service-level agreements of 15 minutes or less. Yet only 22 percent of organizations could count on a 90 percent success rate in their recovery tests. Mere backup can't meet that kind of demand, and a redundant physical system in yet another location would only add to recovery time.
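As a rough illustration of what those objectives imply, the short Python sketch below checks hypothetical disaster-recovery drill results against per-server recovery time objectives. The server names, RTO targets and timings are invented for the example, not drawn from the ESG study.

```python
from dataclasses import dataclass

@dataclass
class RecoveryTest:
    """Result of a disaster-recovery drill for a single server (hypothetical)."""
    server: str
    rto_minutes: int        # recovery time objective agreed in the SLA
    actual_minutes: float   # measured time to restore service in the drill

def rto_compliance(tests: list[RecoveryTest]) -> float:
    """Return the fraction of servers whose tested recovery time met its RTO."""
    if not tests:
        return 0.0
    met = sum(1 for t in tests if t.actual_minutes <= t.rto_minutes)
    return met / len(tests)

# Illustrative drill results: tight 15-minute RTOs are hard to hit with manual restores.
drills = [
    RecoveryTest("case-mgmt-db", rto_minutes=15, actual_minutes=12.5),
    RecoveryTest("payments-api", rto_minutes=15, actual_minutes=48.0),
    RecoveryTest("gis-portal",   rto_minutes=60, actual_minutes=41.0),
]
print(f"RTO compliance: {rto_compliance(drills):.0%}")  # -> 67%
```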

What can meet those demands is a hybrid cloud solution such as the Veritas Resiliency Platform 2.0, which provides a unified, automated approach to resiliency operations, proactively managing complex environments to ensure operational uptime. And it can do so on demand, providing Disaster Recovery as a Service (DRaaS). The Veritas Resiliency Platform brings predictability to meeting recovery time, recovery point and other service-level objectives (SLOs) within multiplatform, multi-vendor cloud environments, be they private (in-house), public (outsourced) or hybrid (combined on-premises and offsite) clouds.
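The sketch below is a generic illustration of the kind of SLOs a DRaaS plan has to track for each workload – recovery time, recovery point and a failover destination in a private or public cloud. It is not the Veritas Resiliency Platform's API; the workload names, thresholds and validation rules are assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Literal

CloudTarget = Literal["private", "public"]

@dataclass
class ResiliencySLO:
    """Recovery objectives for one workload in a hybrid-cloud DR plan (illustrative)."""
    workload: str
    rto_minutes: int          # how quickly service must be restored
    rpo_minutes: int          # how much data loss is tolerable
    failover_target: CloudTarget

def validate(plan: list[ResiliencySLO]) -> list[str]:
    """Flag workloads whose objectives are unlikely to be met by periodic backup alone."""
    warnings = []
    for slo in plan:
        if slo.rpo_minutes < 60:
            warnings.append(f"{slo.workload}: RPO of {slo.rpo_minutes} min calls for continuous replication")
        if slo.rto_minutes < 60:
            warnings.append(f"{slo.workload}: RTO of {slo.rto_minutes} min calls for automated failover")
    return warnings

plan = [
    ResiliencySLO("licensing-portal", rto_minutes=30, rpo_minutes=15, failover_target="public"),
    ResiliencySLO("records-archive", rto_minutes=240, rpo_minutes=120, failover_target="private"),
]
for warning in validate(plan):
    print(warning)
```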

To see the advantages of effective disaster recovery, consider the computing environment it must keep resilient. IT operations today tend to consist of a complex mix of platforms and operating systems running in and through the cloud, with multiple vendors hosting application data, all for the benefit of multiple stakeholders who each have their own requirements. Critical applications interact not only with hypervisors but often with one another in complex ways, and in some cases different tiers of the enterprise sit in separate locations. All of that makes providing 24/7 support and uptime a challenge.
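One practical consequence of those interdependencies is that recovery has to happen in the right order: an application tier cannot come back before the databases and hypervisors it relies on. The sketch below models a hypothetical set of dependencies and derives a valid bring-up sequence with a topological sort; the service names and dependency graph are illustrative assumptions.

```python
from graphlib import TopologicalSorter

# Hypothetical application dependencies: each key can come up only after the
# services it depends on are recovered (e.g. the web tier needs the app tier,
# which in turn needs the database and the hypervisors hosting everything).
dependencies = {
    "web-tier":     {"app-tier"},
    "app-tier":     {"case-db", "auth-service"},
    "case-db":      {"hypervisor-a"},
    "auth-service": {"hypervisor-b"},
    "hypervisor-a": set(),
    "hypervisor-b": set(),
}

# static_order() yields a bring-up order in which every dependency precedes the
# service that needs it; a cycle raises an error, signaling an interdependency
# the recovery plan would have to break.
recovery_order = list(TopologicalSorter(dependencies).static_order())
print(recovery_order)
# e.g. ['hypervisor-a', 'hypervisor-b', 'case-db', 'auth-service', 'app-tier', 'web-tier']
```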

In such an environment, old-school tactics such as manual processes or spreadsheets are ineffective. They cannot provide centralized visibility, they create inefficiencies, and they may increase – rather than decrease – the risk of downtime. Some cloud tools can integrate backup and replication but work only within virtual environments rather than across the full enterprise, which leads to a fragmented approach that doesn't address all of an organization's SLOs or the interactions among applications.

And while deploying single, specific solutions for mission-critical applications may seem appealing on a per-platform basis, they too can add inefficiencies to the broader environment and make the enterprise as a whole difficult to monitor.

 
