
Government Agencies Need Generative AI Laboratories

It seems that not a week passes without the rollout of a new AI tool marketed to government agencies as a “game changer.”

With the release last year of ChatGPT and the explosion of research and work into powerful new generative AI tools, government agencies now have at their disposal some incredibly powerful options. These tools promise to reshape the way organizations operate and conduct their business, and how they communicate and engage with their customers.

But how can agencies determine what the most appropriate use of generative AI is? And how can they begin to test ideas and approaches to the responsible and ethical use of AI without running afoul of the many open ethical, legal, and security questions surrounding these tools?

To help identify the best use of generative AI options to support an agency’s mission and improve services to customers, agencies need generative AI Laboratories.

Room to Experiment

Some agencies, such as the General Services Administration, have restricted employee access to AI tools while they assess the implications of these new technologies for their operations and their interactions with customers. Other agencies, including the US Department of Agriculture, have chosen a measured approach to adopting generative AI.

Some state and local governments have taken an experimental approach, encouraging employees to test new generative AI options to see how they can help government employees do their work. Rather than banning the use of generative AI tools, the City of Boston instead issued guidelines intended to set guardrails for how employees use the new technology.

Because of their increasing ubiquity and power, AI tools are already finding their way into government agencies and being used by public-sector employees to do their work. However, because of the many unresolved questions about data privacy and security, accuracy, intellectual property and copyright, and other issues, these tools also carry with them potential risks for agencies starting to use them.

Government agencies need room to find the best uses of generative AI to achieve their goals and help those they serve.

Characteristics of an AI Laboratory

Generative AI is a tool. We are responsible for the outcomes of our tools. For example, if autocorrect unintentionally changes a word — changing the meaning of something we wrote — we are still responsible for the text. Technology enables our work; it does not excuse our judgment or our accountability.

– City of Boston Interim Guidelines for Using Generative AI

To be effective and help governments deliver better outcomes for those they serve, a generative AI Laboratory ideally would have the following qualities:

  • It would be based on open source large language models (LLMs), to provide maximum flexibility, mitigate vendor lock-in, and avoid issues related to data privacy.
  • The LLM used should be auditable and easily corrected based on the quality of generated output.
  • The Laboratory would run in a closed environment, such as a dedicated virtual private cloud (VPC), to avoid issues of data security.
  • It would produce results and outcomes that are open and transparent, as called for by Executive Order 13960 (Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government).
  • The Laboratory would be easy to obtain, not requiring a complex or lengthy procurement in order for agencies to start learning how they can put generative AI to use. 
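The auditability point above can be illustrated in code. Here is a minimal Python sketch, where `generate` is a hypothetical stand-in for a locally hosted open source model (not a real API), showing how a laboratory might record every prompt and output so that results remain traceable and correctable:

```python
import hashlib
from datetime import datetime, timezone

def generate(prompt: str) -> str:
    """Hypothetical stand-in for an open source LLM running
    inside the laboratory's closed VPC environment."""
    return f"[model output for: {prompt}]"

def audited_generate(prompt: str, log: list) -> str:
    """Call the model and append an audit entry, so reviewers
    can trace any output back to the prompt that produced it."""
    output = generate(prompt)
    log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        # Hash the prompt so entries can be referenced without
        # re-transmitting potentially sensitive text.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "prompt": prompt,
        "output": output,
    })
    return output

audit_log = []
audited_generate("Summarize this benefits application.", audit_log)
```

Because the log captures both inputs and outputs, an agency reviewing low-quality results can identify exactly which prompts produced them — one way to support the auditable, correctable behavior described above.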

We’ve seen the incredible pace of innovation around generative AI tools, and we know it’s inevitable that government employees will adopt them to help do their jobs. Governments need ways to safely experiment with these new tools, and discover for themselves what works best for those they serve.


Mark Headd is a Government Technology SME at Ad Hoc. He is the former Chief Data Officer for Philadelphia, serving as one of the first municipal chief data officers in the United States. He holds a Master’s Degree in Public Administration from the Maxwell School at Syracuse University, and is a former adjunct instructor at the University of Delaware’s Institute for Public Administration. He spent six years with the General Services Administration’s Technology Transformation Services (TTS), serving on the leadership team for 18F and leading customer success efforts for TTS’ cloud platform, which supports over 30 critical federal agency systems.

Image by George Hodan on publicdomainpictures.net
