Tennessee CIO Kristin Darby sees agentic AI’s potentially transformative value. But she also sees the value of taking a deliberate and responsible approach to adopting it.

Agentic AI could bring numerous benefits: automating tasks that require pulling information from disparate systems, executing end-to-end workflows, delivering more robust digital services to the public, and more.
But state officials aren’t rushing into agentic AI any more than they have with generative AI. In both cases, rather than getting caught in the hype, they are taking a governance-first approach, said Darby, prioritizing protecting citizens and ensuring public trust.
“We’re experimenting, but definitely not in isolation,” she said. “We’re focusing on embedding AI in government in a way that aligns with policies, security, workforce readiness and measurable outcomes.”
To guide its work, Tennessee is developing safeguards in five areas.
1. Accountability
Imagine a human logging into a network, receiving information, reaching a decision and acting on it. Agencies track that activity in case they’re audited, and they need to take the same approach with AI agents, logging every agent’s activity and tracing their decision-making processes.
At this point, that’s still tricky because the tools aren’t there yet, said Darby. “You have to design that visibility into the architecture, and many out-of-the-box solutions don’t have the level of sophistication and logging that we feel is required based on our government standards,” she said.
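The kind of visibility Darby describes — logging every agent action with enough context to reconstruct its decision trail during an audit — can be sketched in a few lines. This is an illustrative minimal design, not Tennessee's actual system; the agent IDs and field names are hypothetical.

```python
import json
from datetime import datetime, timezone

class AgentAuditLog:
    """Append-only record of every action an AI agent takes."""

    def __init__(self):
        self._entries = []

    def record(self, agent_id, action, rationale, inputs):
        """Log one agent decision with enough context to reconstruct it later."""
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "agent_id": agent_id,
            "action": action,
            "rationale": rationale,  # the agent's stated reason for acting
            "inputs": inputs,        # the data the decision was based on
        }
        self._entries.append(entry)
        return entry

    def trace(self, agent_id):
        """Return the full decision trail for one agent, e.g. for an audit."""
        return [e for e in self._entries if e["agent_id"] == agent_id]

# Hypothetical usage: an agent looks up a case, then acts on what it found.
log = AgentAuditLog()
log.record("benefits-agent-01", "lookup_case", "matching applicant record",
           {"case_id": "A-100"})
log.record("benefits-agent-01", "approve_renewal", "eligibility criteria met",
           {"case_id": "A-100"})
print(json.dumps(log.trace("benefits-agent-01"), indent=2))
```

The point Darby makes is that this capability has to be designed in: if the platform doesn't emit a rationale and inputs alongside each action, no amount of downstream log collection can recover them.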
2. Access Controls
One of the core principles of cybersecurity is least privilege: Employees should have access only to the data and systems they need to do their jobs. As with accountability, that same principle must apply to agents. To enforce least privilege, an agency needs to track both the identity and permission levels of every agent.
Unfortunately, the traditional identity management solutions that agencies use with their employees were not designed to work with agents, she said. Tennessee is developing identity capabilities to support agentic initiatives and testing those systems in its AI Innovation Lab (see “Responsible Innovation” below).
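Least privilege for agents amounts to a deny-by-default permission check keyed to each agent's identity, the same model agencies already apply to employees. The sketch below is a simplified illustration with invented agent names and permission strings, not a real identity-management product.

```python
# Each agent identity carries an explicit allow-list of permissions,
# mirroring the least-privilege model used for human staff accounts.
AGENT_PERMISSIONS = {
    "records-agent": {"read:case_files"},
    "renewal-agent": {"read:case_files", "write:renewals"},
}

def is_allowed(agent_id, permission):
    """Deny by default: unknown agents and unlisted permissions are refused."""
    return permission in AGENT_PERMISSIONS.get(agent_id, set())

def perform(agent_id, permission, action):
    """Run an action only if the agent's identity grants the permission."""
    if not is_allowed(agent_id, permission):
        raise PermissionError(f"{agent_id} lacks {permission}")
    return action()

# The records agent can read case files...
print(perform("records-agent", "read:case_files", lambda: "case data"))
# ...but any attempt to write renewals would raise PermissionError.
```

Real deployments layer this onto an identity provider so each agent's credentials, scopes and audit trail are managed centrally, which is the gap Darby notes in tools built only for human users.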
3. Error Propagation Risk
IT professionals might remember the early days of robotic process automation — a forerunner, of sorts, to agentic AI. RPA software bots were designed to mimic human activity but to work much faster, which was a strength when a bot worked correctly. But if the bot was programmed incorrectly, the mistake would scale very quickly.
The same is true of agentic AI, except that agents can act — and problems can propagate — even faster, Darby said. Agencies need tools both for monitoring performance and heading off cascading errors.
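One common way to head off cascading errors is a circuit breaker: the agent is halted automatically once its error rate crosses a threshold, so a single bad configuration can't replicate across thousands of records. This is a generic pattern, sketched minimally here, not a description of Tennessee's tooling.

```python
class CircuitBreaker:
    """Halt an agent once consecutive errors cross a threshold."""

    def __init__(self, max_errors=3):
        self.max_errors = max_errors
        self.errors = 0
        self.open = False  # "open" means the agent is halted

    def run(self, task):
        if self.open:
            raise RuntimeError("agent halted pending human review")
        try:
            result = task()
            self.errors = 0  # a success resets the counter
            return result
        except Exception:
            self.errors += 1
            if self.errors >= self.max_errors:
                self.open = True  # stop before the mistake scales
            raise

# Hypothetical usage: two failed steps in a row trip the breaker.
breaker = CircuitBreaker(max_errors=2)
for _ in range(2):
    try:
        breaker.run(lambda: 1 / 0)  # simulate a misbehaving agent step
    except ZeroDivisionError:
        pass
print(breaker.open)  # True: the agent is stopped until a human intervenes
```

The same guardrail applied to RPA bots; the difference with agentic AI is that the monitoring and halt thresholds have to operate at the agents' faster pace.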
4. Procurement and Vendor Ecosystem
AI has become a major driver in the IT industry, with many vendors both integrating AI capabilities into their existing solutions and developing new AI-based offerings. The problem is that AI-related standards are still evolving.
Tennessee has begun including AI-related clauses in all relevant contracts that align with its AI policies around transparency, testing, security and other requirements, Darby said, and that language will evolve as the technology evolves.
5. Public Trust
At a basic level, public trust in AI is built on transparency and disclosure, Darby said. As agentic AI is incorporated into a growing range of services, constituents will want to understand how — and where — the technology is being used. The challenge is that AI’s role can vary widely across systems.
Its function in an email platform, for example, may look very different from its role in a customer call center or an enterprise resource planning application. “So what is realistic for us to communicate? How do we alert and disclose?” she said. “Those are the new standards that need to be created.”
A Vision of Transformation
In the year ahead, Darby’s team will be working hard to address the many questions around agentic AI. But the more important question, she said, is not “How do we use AI?” but “How do we create better experiences for everyone?”
She sees agentic AI as one of the tools in the state’s toolbox for transforming the services it provides. In particular, agentic AI has the potential to help the state better tailor services and interactions to constituents’ specific needs. Employees, in turn, will feel satisfied that they’ve met those needs, Darby said.
“Leveraging technology helps us optimize the outcome there,” she said, “and it becomes less about AI and what’s changing and more about the value being delivered that’s truly transformative.”
Responsible Innovation
As part of its AI push, Tennessee has created the AI Innovation Lab, a secure, cloud-based environment where teams can test tools, systems and workflows before putting them into production. This ensures that they align with governance and security mandates, scale effectively, and deliver the intended outcomes.
Through the lab, teams uncover policy gaps, refine procedures and gain hands-on experience with AI systems. Darby notes that the lab also supports workforce development, helping staff learn by experimentation in a safe environment.
“I think that it’s important that governments create environments [like this] so that responsible innovation doesn’t become a barrier to progress,” said Darby.
This article appears in GovLoop’s new guide Get Ready Now for Agentic AI. To learn more about what agentic AI is, why it matters and how it can affect the government workforce, download the guide here.