
Government Must Understand the Types of AI to Implement It Effectively

AI has grown increasingly prevalent across the federal government; since 2017, U.S. government AI-related contract spending has increased roughly 2.5 times. With more than 300 AI use cases cited within government as of May 2023, it’s evident that this technology provides ample opportunities for innovation and efficiency. But despite its growing popularity, concerns about potential bias, discriminatory outcomes and a lack of transparency remain important considerations in ensuring AI is a positive force in government.

AI is often thought of as a single technology, but there are several types of AI, each with its own set of uses and concerns. While some agencies may use AI to deploy virtual assistants or chatbots, others may use it for vulnerability reporting.

Many individuals are hesitant to adopt or even learn about AI. But understanding the different forms, such as predictive, generative and human-in-the-loop AI, along with the ethical concerns each raises, will help agencies navigate the challenges of successful implementation.

The federal government must be prepared to adopt AI and set an example of what responsible use looks like, as the technology will continue to play a significant role in government operations.

Generative AI creates new data based on a previous dataset or algorithm and is commonly used to generate content — including images, videos and text — when provided with a prompt. ChatGPT, the popular chatbot developed by OpenAI, falls under this category of AI. In government, generative AI can improve customer experience (CX) with chatbots and virtual assistants that generate comprehensive answers to citizen questions.

While it’s a useful tool for creative purposes, concerns include the spread of misinformation, privacy, copyright and the creation of biased content. Since generative AI tools create content based on previous data, concerns around misinformation and bias can be mitigated by implementing rigorous data collection practices, such as using diverse data sets and conducting regular application audits.

One potential solution to generative AI concerns would be to add a marker to AI-created content, ensuring a viewer or reader realizes what they’re looking at is a product of generative AI. When selecting training data for generative AI, reviewing for alignment with laws and regulations, using historical data or creating real-world scenarios, and ensuring diversity in the training data can all reduce bias. This depends on upfront and regular audits, with data sets updated as regulations and stakeholder dynamics change.
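To make the marker idea concrete, here is a minimal sketch, assuming a hypothetical wrapper function and metadata fields rather than any specific agency system, of how AI-generated text could carry a disclosure label before it reaches a reader:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class LabeledOutput:
    """AI-generated content paired with a machine-readable provenance marker."""
    text: str
    generated_by: str
    generated_at: str
    disclosure: str

def label_generated_content(text: str, model_name: str) -> LabeledOutput:
    """Attach a provenance marker so readers know the content is AI-generated."""
    return LabeledOutput(
        text=text,
        generated_by=model_name,
        generated_at=datetime.now(timezone.utc).isoformat(),
        disclosure="This response was generated by an AI system.",
    )

# Example: wrap a chatbot answer before it is displayed to a citizen.
answer = label_generated_content("Your renewal form is due by June 30.", "agency-chatbot-v1")
print(f"{answer.text}\n[{answer.disclosure}]")
```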

Another important goal is maintaining plain-language standards so non-technical citizens and staff interacting with generative AI can fully access and understand the output in clear, meaningful terms. Additionally, leveraging internal government best practices such as an AI ethics committee, stakeholder participation and accountability structures can ensure governance is in place to regularly review outputs, use cases and updated training data, and to enact remediation plans accordingly.

Predictive AI anticipates outcomes based on previous inputs and machine learning (ML) algorithms. This form of AI can be useful in fraud detection, healthcare and financial forecasting; when equipped with accurate data, predictive AI technologies can forecast disease outbreaks and identify at-risk patients based on their previous health outcomes. However, potential bias is a concern, and a lack of transparency can make predictive AI systems difficult to understand.

By using explainable ML algorithms trained on diverse data, with quality thresholds or minimum data standards, predictive AI can be structured to provide transparent, easily digestible outcomes. Human-in-the-loop processes can further mitigate bias and address transparency concerns.
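As an illustration of the quality-threshold idea, the sketch below (with made-up data, feature names and thresholds, not a real fraud or health model) gates training behind a simple completeness check and uses an interpretable logistic regression whose coefficients can be inspected during transparency reviews:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

MIN_ROWS = 100               # hypothetical minimum sample size
MAX_MISSING_FRACTION = 0.05  # hypothetical completeness threshold

def meets_data_standards(X: np.ndarray) -> bool:
    """Reject datasets that are too small or too incomplete to trust."""
    return X.shape[0] >= MIN_ROWS and np.isnan(X).mean() <= MAX_MISSING_FRACTION

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                  # synthetic features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic outcome

if meets_data_standards(X):
    model = LogisticRegression().fit(X, y)
    # Coefficients are directly inspectable, which supports transparency reviews.
    for name, coef in zip(["feature_a", "feature_b", "feature_c"], model.coef_[0]):
        print(f"{name}: {coef:+.2f}")
else:
    print("Data does not meet minimum standards; prediction withheld for human review.")
```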

Finally, human-in-the-loop AI combines human knowledge and AI, offering recommendations or automation for appropriate tasks. With human oversight, human-in-the-loop AI addresses common transparency, bias and accountability concerns.

Human insights ensure AI systems are not making autonomous decisions, allowing humans to ultimately decide what outcomes AI can produce. For example, if a generative AI system is producing inaccurate images based on the given prompt, the human-in-the-loop will be able to validate or deny the model’s outcome and provide feedback to adjust the algorithm.
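A minimal sketch of that feedback loop, assuming a hypothetical review function and log rather than any particular product, might route each output through a reviewer decision and record rejections as feedback for retraining:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ReviewLog:
    """Collects reviewer decisions so rejected outputs can inform retraining."""
    approved: list = field(default_factory=list)
    rejected: list = field(default_factory=list)

def human_review(output: str, approved_by_reviewer: bool, log: ReviewLog) -> Optional[str]:
    """Release the output only if a human reviewer approves it."""
    if approved_by_reviewer:
        log.approved.append(output)
        return output
    log.rejected.append(output)  # feedback signal for adjusting the model
    return None

log = ReviewLog()
released = human_review("Generated benefits summary ...", approved_by_reviewer=False, log=log)
print(released)           # None: the output was held back
print(len(log.rejected))  # 1: recorded as feedback for the next training cycle
```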

As much as users expect AI to produce perfect outcomes, there is always the chance of error; the technology improves through a constant cycle of trial and feedback. The more human involvement and high-quality data are coupled with AI, the better the outcomes.

Focus on Human-Centric Principles

Citizens rely on government agencies daily for critical services and information, so agencies must lead with a human-centric mindset when deploying emerging technologies, including AI.

As AI continues to be implemented across government, human-centered AI (HCAI) will be a key success factor. Similar to human-in-the-loop AI, HCAI preserves human knowledge and interaction, providing more transparent and equitable outcomes.

AI technology is meant to streamline tasks, allowing agency personnel to focus on higher-value work, which is exactly the role HCAI plays in government operations. The capabilities AI can provide when paired with human interaction are unparalleled, including making government personnel more efficient, enhancing the citizen experience and detecting fraud.

As agencies continue to experiment and deploy this technology, government leaders and employees must continue learning about AI and its faults to ensure it is used properly, now and in the future.


Laura Stash is Executive Vice President of Solutions Architecture at iTech AG.

