
AI in Government Is Moving Fast. Trust Is Moving Slower. A Gap State Leaders Must Close

At the Public Sector Network Government Innovation Showcase in Richmond, one theme came through clearly in every conversation. The pace of innovation is accelerating. The pace of trust is not. That gap is where state governments now operate.

Across Virginia, leaders are being asked to modernize services, adopt AI and deliver digital experiences that match what residents expect in their daily lives. Faster. Simpler. More transparent. As Commonwealth of Virginia Secretary of Administration Traci DeShazor put it, government is no longer an abstract concept. It is a daily experience, and increasingly a digital one.

At the same time, leaders are navigating a more complex reality. New AI legislation, ongoing policy debates and heightened public scrutiny are shaping how and when innovation can move forward. Recent developments in Virginia’s General Assembly reflect this tension. Lawmakers are working to define guardrails around AI use while still encouraging innovation. That balance is not easy to strike.

This is not a technology problem. It is a trust problem. And more specifically, it is a data governance problem.

The pressure to move faster is real. In Richmond, there was no shortage of energy around AI. Agencies are already using AI, including agentic AI, to improve records access, accelerate contract review, strengthen planning and streamline case management. These are not theoretical use cases. They are happening today.

What stands out is not the ambition; it is the hesitation underneath it. Leaders understand the potential of AI. They also understand the risk. In government, every decision carries weight. Every dataset has implications. Every system must stand up to public scrutiny. Moving fast without control does not just create inefficiency. It erodes trust.

We have seen this pattern before. Agencies delay formal adoption out of caution, only to find unsanctioned experimentation happening anyway. Innovation does not stop. It just moves outside the lines. That is where risk multiplies.

One of the most important insights from the Richmond event was this: AI is not introducing new problems. It is exposing existing ones faster. Most state agencies, not just in Virginia but across the country, are still managing decades of content across shared drives, legacy systems, email and paper. Information is fragmented. Ownership is unclear. Policies are inconsistently applied.

That environment worked well enough when processes were manual. It does not work in an AI-driven world. AI depends on context. Context depends on well-governed information. Without it, outputs become unreliable, decisions become harder to explain and confidence begins to erode.

This challenge becomes even more urgent as we move into the next phase of AI: agentic systems. These are not tools that simply summarize or assist. They take action. They trigger workflows, move information between systems and support decisions in real time. In a contract management scenario, an AI agent could automatically extract terms, route approvals and flag compliance risks across thousands of documents. That level of automation is powerful, but it also raises the stakes. If the underlying data is incomplete or poorly governed, the agent does not just produce a flawed output. It executes flawed actions at scale.
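To make the stakes concrete, here is a minimal sketch of the governance gate described above. It is purely illustrative: the `ContractDoc` fields, the check rules and the audit log are assumptions, not any vendor's actual implementation. The point it shows is structural, that an agent should refuse to act, and record that refusal, when a record's ownership or classification is unresolved.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ContractDoc:
    """Hypothetical contract record. Fields are illustrative assumptions."""
    doc_id: str
    owner: Optional[str]            # responsible office; None = ownership unclear
    classification: Optional[str]   # e.g. "public", "confidential"; None = ungoverned
    terms: dict = field(default_factory=dict)

def governance_check(doc: ContractDoc) -> list[str]:
    """Return the reasons an agent must NOT act on this document."""
    issues = []
    if doc.owner is None:
        issues.append("no designated owner")
    if doc.classification is None:
        issues.append("unclassified record")
    return issues

def review_contract(doc: ContractDoc, audit_log: list) -> Optional[list[str]]:
    """Agent step: act only when governance checks pass, and log every outcome."""
    issues = governance_check(doc)
    if issues:
        # Halt and escalate to a human rather than executing a flawed action at scale.
        audit_log.append((doc.doc_id, "halted", issues))
        return None
    # The "action": flag non-standard terms for review.
    flagged = [term for term, status in doc.terms.items() if status == "non-standard"]
    audit_log.append((doc.doc_id, "reviewed", flagged))
    return flagged
```

Run against one well-governed record and one ungoverned record, the well-governed contract gets its non-standard terms flagged, while the ungoverned one produces an audit entry instead of an action; the auditability is built in because every path writes to the log.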

This is why the agencies making real progress are not starting with AI. They are starting with governance.

What makes Virginia particularly interesting right now is that both sides of this challenge are playing out in real time. On one hand, there is clear momentum. Agencies are modernizing, digitizing services and exploring how AI can improve outcomes. From contract analysis to benefits processing, the focus is on delivering faster, more responsive services to citizens.

On the other hand, there is a growing recognition that governance must keep pace. Legislative conversations around AI reflect concerns about transparency, accountability and data protection. These are not barriers to innovation. They are prerequisites for it.

The leaders in Richmond were not asking whether to adopt AI. They were asking how to do it responsibly. That is a different question. And it leads to a different strategy.

For years, governance has been treated as a compliance exercise. Something separate from innovation. Something that slows things down. That mindset is changing.

What we are seeing across Virginia and other states is a shift toward governance as an operational foundation. Not policy documents sitting on a shelf, but systems that enforce consistency automatically. When governance is embedded into how information is captured, classified and managed, several things happen.

Data becomes more reliable. Processes become more repeatable. Decisions become more explainable. And AI becomes more trustworthy.

This is what enables practical, defensible use cases. Faster FOIA responses. More consistent contract evaluation. Better visibility into case data. All with auditability built in. AI does not create trust. Governance does. AI simply operates within it.

The path forward is clear, but not easy. State leaders are being asked to move faster while taking on more risk. To innovate while maintaining public trust. To modernize without disrupting essential services.

There is no shortcut through that challenge. But there is a pattern. The agencies making progress are doing three things consistently. They are getting control of their information environment. They are standardizing and automating how work gets done. They are building trust in their data before scaling AI. Only then are they expanding into more advanced use cases.

This is not about slowing down innovation. It is about making it sustainable. AI is not the starting point. It is the next step.

The conversation around AI will continue to evolve. New capabilities will emerge. Expectations will continue to rise. Legislative pressure will increase. Through all of it, one thing will remain constant. Trust will determine the outcome.

For state government leaders, the priority is not to adopt AI faster. It is to build a foundation that allows AI to be used responsibly, transparently and at scale.

That foundation is governance.

Now is the time to take a hard look at your information environment. Where is your data fragmented? Where are policies inconsistently applied? Where do processes rely on manual workarounds instead of enforced standards?

Start there.

Because in government, AI is not just about what is possible. It is about what is defensible. And the agencies that get this right will not just move faster. They will move forward with confidence.


Andy MacIsaac is a senior marketing leader at Laserfiche, where he drives go-to-market strategy and thought leadership for AI-powered content management, process automation, and data governance in the public sector. With more than two decades of experience partnering with government agencies and education institutions, he helps organizations modernize operations while maintaining security, compliance, and trust. Andy has led industry marketing, demand generation, and sales enablement initiatives across leading software and consulting organizations, translating complex technologies into practical outcomes. As a trusted advisor to CIOs and agency leaders, he is passionate about responsible innovation that improves efficiency, transparency, and service delivery.

