
How to Take a Nimble Approach to AI Governance

AI governance is not just about defining rules and processes or do’s and don’ts. It’s about cultivating an environment in which AI initiatives are more likely to succeed (and less likely to go off the rails). Here are some guideposts to help you on your way.

Get Out of ‘Proof-of-Concept Purgatory’

So many things are possible with AI, but not everything is worthwhile. Agencies need to consider the potential return on investment of each use case, said Mike Horton, Acting Chief AI Officer, U.S. Department of Transportation. Start by showing that a use case improves productivity in measurable ways. Better yet, find ways to apply that use case organization-wide or even in other agencies. “Right now, we’re stuck in this POC purgatory because people are doing pet projects, and we’re really not lining things up where we can really get the advantage that we need to get,” Horton said.
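
Measuring that kind of productivity gain doesn’t have to be elaborate. As a minimal sketch, here is the sort of back-of-the-envelope ROI comparison an agency might run per use case, written in Python; every figure and field name below is an illustrative assumption, not DOT guidance.

```python
# Illustrative back-of-the-envelope ROI estimate for one AI use case.
# All figures are hypothetical placeholders, not agency data.

def use_case_roi(hours_saved_per_user_week: float,
                 users: int,
                 loaded_hourly_rate: float,
                 annual_cost: float) -> float:
    """Return estimated annual ROI as net benefit divided by cost."""
    weeks_per_year = 48  # assume ~48 working weeks per year
    annual_benefit = (hours_saved_per_user_week * users
                      * weeks_per_year * loaded_hourly_rate)
    return (annual_benefit - annual_cost) / annual_cost

# Example: 2 hours/week saved for 50 users at a $75 loaded hourly rate,
# weighed against $200,000 in annual licensing and support costs.
print(f"Estimated ROI: {use_case_roi(2, 50, 75.0, 200_000):.0%}")  # -> 80%
```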

Give Employees Room to Experiment

Agencies might be tempted to take a wholly top-down approach to AI, focusing on developing use cases in strategic areas. But there’s something to be said for taking a bottom-up approach and having employees take the lead, said Felipe Millon, Government Go-to-Market Lead at OpenAI. He noted that Pennsylvania saw great results from a yearlong pilot that gave 175 employees an opportunity to study how ChatGPT could make them more efficient. By the end of the pilot, employees estimated that AI was saving them eight hours a week. “Organizations need to do both strategies,” Millon said. “You need to just take these tools and make them generally available, but then also figure out ways to revolutionize the most important parts of your mission.”
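
Those self-reported savings scale up quickly. Using only the pilot’s own numbers, a quick calculation shows the aggregate effect (the 48-working-week annualization is our assumption, not part of the pilot):

```python
# Aggregate the Pennsylvania pilot's self-reported savings.
# Employee count and hours/week come from the pilot; the
# 48-working-week annualization is an assumption for illustration.
employees = 175
hours_saved_per_week = 8

weekly_total = employees * hours_saved_per_week   # 1,400 hours per week
annual_total = weekly_total * 48                  # 67,200 hours per year
print(f"{weekly_total:,} hours/week, ~{annual_total:,} hours/year")
```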

Keep Employees Mission-Focused

Natalie Buda Smith, AI and Digital Strategy Director at the Library of Congress, also believes in allowing employees to experiment with these tools but recognizes that this can make managers nervous: they worry that employees will spend time on something unproductive. Just having employees learn AI tools is productive, said Buda Smith. Still, when providing those tools, make it clear that staff should focus on mission-supporting tasks. That prevents a free-for-all while also giving employees plenty of latitude, she said. “Working within the framework of your strategy, and the objectives and goals within your strategy, they can come up with some pretty amazing things,” she said.

Take Stock of Existing AI Solutions

Most agencies have already invested in AI without realizing it, said Danielle Greshock, Worldwide Director of ISV Partner Solutions Architecture at Amazon Web Services. That’s because so many software vendors have built AI into their solutions or offer AI features that agencies can use. Agencies need to factor these offerings into their AI planning, she said. “You want to start thinking about the tools that you already buy that are offering more and more features in the AI space that you can take advantage of right out of the box,” Greshock said. At the same time, she added, you want to make sure employees use those offerings responsibly.

Create a Shared AI Infrastructure

Zach Whitman, Chief AI Officer and Chief Data Scientist at the U.S. General Services Administration, said the agency is especially interested in finding ways to share AI use cases. Although use cases vary across agencies, “there are tons of similarities,” he said. That suggests a tantalizing possibility: creating a shared AI infrastructure on which each agency could build. Such an infrastructure might include servers for hosting large language models and services for accessing foundation models or application programming interfaces (APIs). The goal would be to lower the barrier to entry for agencies, Whitman said, “making it so that a fed can pull the right model for the right use case at the right time and not be encumbered by procurement issues.”
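
In practice, the idea is that an agency developer would call a shared gateway rather than stand up model hosting or run a new procurement. The Python sketch below illustrates that pattern; the endpoint, routes, and response fields are hypothetical assumptions for illustration, not an actual GSA service.

```python
# Minimal sketch of a client for a hypothetical shared AI gateway.
# The URL, routes, and response fields are illustrative assumptions.
import json
import urllib.request

GATEWAY = "https://ai-gateway.example.gov"  # placeholder, not a real endpoint

def list_models() -> list:
    """Ask the shared gateway which approved models are available."""
    with urllib.request.urlopen(f"{GATEWAY}/v1/models") as resp:
        return json.load(resp)["models"]

def complete(model: str, prompt: str) -> str:
    """Send a prompt to an approved model hosted behind the gateway."""
    body = json.dumps({"model": model, "prompt": prompt}).encode()
    req = urllib.request.Request(
        f"{GATEWAY}/v1/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["text"]

# A developer picks "the right model for the right use case" without
# hosting anything locally or starting a separate procurement.
```

Centralizing hosting and access behind one service is what lowers the barrier to entry: each agency builds on the shared gateway instead of duplicating infrastructure.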

Stick to Your Principles (and Show Your Work)

The concept of responsible AI remains a top priority for government agencies. Although the definition of responsible AI is still evolving, agencies should continue to focus on certain bedrock principles and, most importantly, transparency, said Jim Ford, Director of Federal Partner Solutions at Microsoft. For starters, agencies need to be clear about what data goes into their models and the prompts they use to generate content. As they adopt agentic AI, which can initiate action without human intervention, agencies must understand and be ready to explain that decision-making process. “Just like we inspect human workers and what they do, we are going to have to do the same thing [with agentic AI],” Ford said.
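
One concrete way to show that work is to keep an audit record of every model call an agent makes. The Python sketch below logs the prompt, the data sources, and the output so a decision can be inspected after the fact; the record schema is an illustrative assumption, not a Microsoft or federal standard.

```python
# Sketch of an audit trail for agentic AI actions.
# The record schema is an illustrative assumption, not a standard.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai.audit")

def record_agent_action(model: str, prompt: str,
                        data_sources: list, output: str) -> None:
    """Log what went into the model and what came out, so the agent's
    decision can be inspected later, just as a human worker's would be."""
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "prompt": prompt,              # the prompt used to generate content
        "data_sources": data_sources,  # what data went into the model
        "output": output,              # what the agent produced or decided
    }))

record_agent_action(
    model="example-model",
    prompt="Summarize the permit backlog and draft a triage plan.",
    data_sources=["permits_db"],
    output="Drafted triage plan covering 42 pending permits.",
)
```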

This article appeared in our report, “How to Deliver on the Promise of AI.” To read more about how governments are putting AI into action, download it here.

