GovLoop

Meeting AI Where You Are

Traditional, task-specific AI has helped federal agencies improve operational efficiency, productivity and decision-making. With the emergence of generative AI, agencies are beginning to experiment with its ability to automate, augment and accelerate work. The White House Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (AI) acknowledged the tremendous opportunity of AI and our shared commitment to harness it responsibly. Its focus on the federal government’s use of AI systems is critical, as the government’s massive purchasing power can set the bar for adopting trustworthy AI. But what’s the best way for federal agencies to take their AI development and deployment to the next level with generative AI?

Most agencies will get the best value from adopting a use-case-driven approach, according to Ryan Macaleer, Vice President, Data and AI for the U.S. Federal Market at IBM.

Adopt a Use-Case-Driven Approach

“Traditionally, AI projects were task-specific and siloed. That technology enabled advancements in operational efficiency, productivity and decision-making, but it required a lot of resources, computational power, and [skilled] data scientists to develop algorithms and rules for a specific task,” he explained. That can be very expensive for one piece of work. “Each task had to have its own governance, its own data, its own structure,” he said.

Federal agencies’ experimentation with generative AI demonstrates that AI has moved beyond number-crunching and repetitive tasks. It’s now capable of natural language processing (NLP), grasping context and exhibiting elements of creativity. But one of the greatest challenges to adopting generative AI is knowing where to begin. According to Macaleer, developing an AI strategy means identifying how an agency can best use AI. In other words, identifying AI use cases.

IBM has identified three use cases that can offer agencies a quick return on investment: human resources, citizen services and application modernization. This approach allows agencies to develop uniform standards for governance, security, bias detection and other concerns. The AI is not confined to solving one particular problem, said Macaleer, “and you get the benefit of scale across an agency.”

Embrace Explainability and Build Trust

While the potential of generative AI is exciting, navigating the landscape requires a balancing act between progress and prudence. One common concern about generative AI is that it can seem like a “black box” whose workings are unclear. Knowing what data went into an output and why — explainability — builds trust.

Developing robust mechanisms to ensure the responsible use of generative AI technology is essential. “That is why we built powerful AI governance into watsonx, our comprehensive AI and data platform, to give federal agencies the ability to manage the entire lifecycle of AI, including the training, tuning, deployment and ongoing governance,” said Macaleer.

“It’s an exciting time,” Macaleer said. “I think we have so much more to gain than we do to be afraid of. So, let’s embrace it together.”

This article appeared in our guide, “Gearing Up for AI.” To learn more about AI’s transformative impact in government and prospects for 2024, download the guide here:


Photo by Cytonn Photography at Pexels.com