The Current State of IT Operations in the Public Sector

This blog is an excerpt from GovLoop’s recent industry perspective, Gaining End-to-End Visibility: Better IT Operations Through Operational Intelligence. Download the full perspective here.

To understand why IT operations is so critical to the effectiveness of today’s public sector, you must first understand what IT operations is.

Put simply, IT operations is the process of managing and monitoring an agency’s day-to-day IT infrastructure and troubleshooting issues as they arise. This includes managing the provisioning, capacity, performance and availability of the computing, networking and application environments. Good IT operations is essential for government agencies, which continue to be responsible for delivering better services and applications more efficiently, both internally and externally.

But there’s more to IT operations than just keeping things running.

“There are really two dimensions to IT operations for the public sector,” said Bill Babilon, IT Operations Specialist for Splunk Public Sector. “One is the essential part, which is keeping the lights on in the data center. It’s also running and monitoring the basic physical infrastructure, and also the applications on top of that infrastructure that your agency depends upon.”

The second dimension, Babilon explained, is maintaining a proactive approach to IT operations: detecting problems as they arise, or even predicting them before they happen.

“Today more than ever, public sector IT needs to be able to predict where they’re going either from a capacity point of view or an operational expense investment,” he said. “You don’t want to just wait until you have an operations issue, then race off to fix it.”
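
To make that kind of prediction concrete, here is a minimal capacity-forecasting sketch: fit a linear trend to recent disk-usage samples and project when the volume will cross a threshold. The sample values, the 90 percent threshold and the daily sampling interval are all hypothetical; this illustrates the idea, not how any particular product implements it.

```python
# Minimal capacity-forecast sketch: fit a linear trend to recent disk-usage
# samples and estimate when usage crosses a threshold. The samples, the 90%
# threshold and the one-sample-per-day interval are all hypothetical.

def linear_fit(xs, ys):
    """Ordinary least-squares fit; returns (slope, intercept)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

def days_until(threshold_pct, samples):
    """Days until the fitted trend crosses threshold_pct; None if not growing."""
    xs = list(range(len(samples)))           # one sample per day
    slope, intercept = linear_fit(xs, samples)
    if slope <= 0:
        return None                          # usage is flat or shrinking
    return max(0.0, (threshold_pct - intercept) / slope - xs[-1])

# Hypothetical daily disk-usage percentages for one volume.
usage = [61.0, 62.4, 63.1, 64.8, 65.5, 67.2, 68.0]
eta = days_until(90.0, usage)
if eta is not None:
    print(f"Projected to reach 90% in about {eta:.0f} days -- plan ahead now.")
```

On this hypothetical data the trend projects the volume hitting 90 percent in roughly three weeks, which is exactly the kind of early warning that lets an agency plan a capacity or operational-expense investment instead of racing to fix an outage.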

How can you be proactive and predict issues before they happen?

That’s where the public sector IT operations landscape faces its biggest challenges. Today’s data center has evolved. It’s now a complex, layered group of siloed and interconnected technologies working in an environment without boundaries. When problems arise, finding the root cause or gaining visibility across the infrastructure to identify and prevent outages is nearly impossible. Meanwhile, virtualization and cloud infrastructures introduce additional complexity and create an environment that is more difficult to control and manage.

Additionally, reduced budgets and legacy IT systems create further obstacles for agencies looking to improve their IT operations. Traditional tools for managing and monitoring IT infrastructure are out of step with the constant change happening in today’s data centers. These systems are inflexible, cost too much and are not architected for the complexity of today’s environments. Designed for a single function in IT, they do not work across multiple technologies to help solve problems. Further, their monitoring approaches are often based on filtering and summarizing, so when problems arise, they typically lack the ability to drill down and provide granular analysis of IT data. Linking the various causes of performance issues and outages is especially challenging because traditional tools are siloed and cannot access and analyze all the relevant events across the IT landscape.
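
To see why a filter-and-summarize approach can mask root causes, consider a small, hypothetical illustration: a stored five-minute average looks healthy while the raw events, had they been kept, contain the nearly three-second stall that points to the real problem.

```python
# Why filter-and-summarize monitoring hides root causes: hypothetical
# per-request latencies (in ms) for one five-minute window. One request
# stalled for almost three seconds.
normal = [40 + (i % 7) for i in range(299)]   # 299 healthy requests
raw_ms = normal + [2900]                      # ...plus one severe stall

avg = sum(raw_ms) / len(raw_ms)
print(f"Stored summary (average): {avg:.0f} ms")      # ~52 ms -- looks healthy
print(f"Worst raw event:          {max(raw_ms)} ms")  # 2900 ms -- the clue

# A tool that keeps only the average can never drill down to the 2,900 ms
# event; a tool that retains and indexes the raw data can.
```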

As the public sector moves more and more to virtualization and cloud computing, it’s critical for agencies to gain visibility across all components of their dynamic, complex environments so that problems occurring in one layer of the stack can be correlated with key performance indicators in another. For example, an agency may find that an application’s performance has degraded not because of anything in the application itself, but because its virtual machine was moved to a host with less memory. How can they know that?
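
As a rough illustration of that kind of cross-layer correlation, the sketch below joins hypothetical infrastructure events to hypothetical application metrics on virtual-machine identity and time proximity. The event shapes, names, window and threshold are all invented for the example; this shows the matching logic, not how Splunk or any specific platform implements it.

```python
# Illustrative cross-layer correlation: flag application latency spikes that
# occur shortly after a virtual-machine migration. The event shapes, names,
# window and threshold are hypothetical -- a sketch of the matching logic only.
from datetime import datetime, timedelta

# Hypothetical infrastructure events (e.g., from a hypervisor log).
migrations = [
    {"time": datetime(2016, 5, 3, 9, 14), "vm": "app-vm-07",
     "detail": "moved to host esx-12, which has less memory"},
]

# Hypothetical application metrics (e.g., from a web server or APM log).
latency_samples = [
    {"time": datetime(2016, 5, 3, 9, 5),  "vm": "app-vm-07", "p95_ms": 180},
    {"time": datetime(2016, 5, 3, 9, 25), "vm": "app-vm-07", "p95_ms": 950},
]

WINDOW = timedelta(minutes=30)   # how far after a migration to look
SPIKE_MS = 500                   # latency considered a problem

for m in migrations:
    for s in latency_samples:
        # Join the two layers on VM identity and time proximity.
        lag = s["time"] - m["time"]
        if (s["vm"] == m["vm"] and s["p95_ms"] >= SPIKE_MS
                and timedelta(0) <= lag <= WINDOW):
            print(f"{s['vm']}: p95 latency {s['p95_ms']} ms, "
                  f"{lag.seconds // 60} min after migration ({m['detail']})")
```

Seeing both event streams side by side is what turns “the application is slow” into “the application is slow because its VM landed on a smaller host eleven minutes ago.”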

The answer lies in better operational intelligence.

Download the full perspective here
