
Using AI to Reinforce Our Highest Values and Rebuild Trust in Government

Government agencies are increasingly leveraging artificial intelligence (AI) to enhance their operations and decision-making processes. According to a recent Deloitte report, 75% of governments are expected to have at least three enterprise-wide hyperautomation initiatives launched or underway by 2024.

There are numerous efficiencies to be gained in the public sphere, yet the ethical implications of AI deployment should not be ignored. Our values as a country and our democracy depend on our ability to ensure that AI is used in ways that amplify the voices of ALL Americans, not just the big corporations building these systems.

AI-based technology is everywhere, and most people interact with it in some form daily, whether they realize it or not. While this brings numerous efficiencies and conveniences (autocompleted text messages, recommendations in dating apps, suggested products in our shopping carts, etc.), many Americans are concerned about AI's impact on society. According to a recent Pew Research Center survey, nearly twice as many Americans say they are "more concerned than excited" about the increased use of AI in daily life as say the reverse.

Embracing human-centered design and community feedback is critical to ensuring responsible use of AI and cultivating trust within communities. By integrating these design principles, agencies can align their AI initiatives with the White House's Blueprint for an AI Bill of Rights. Focusing on community engagement, algorithmic discrimination protections, and data privacy can help agencies establish ethical AI systems that earn that trust.

Community Engagement: The Foundation of Safe and Effective Systems

One of the fundamental principles outlined in the AI Bill of Rights is the requirement for community engagement before the deployment of AI systems. Experience management provides a structured approach to engaging community stakeholders throughout AI development, ensuring that their voices are not only heard but also reflected in the resulting design.

Traditional in-person engagement methods like focus groups and community listening sessions should be augmented with digital options to ensure a wider distribution and greater representation. For instance:

  • Conduct a community pulse: Design surveys with a mix of open-ended and closed-ended questions to gather both qualitative and quantitative data. Ensure that feedback is representative by distributing the surveys through multiple channels and in multiple languages.
  • Build a community panel: Use a standing panel to conduct longitudinal studies, track changes in community sentiment over time, and collect ongoing feedback. Establishing a continuous feedback loop with the community enables you to address concerns and make data-driven decisions (a minimal sketch of this kind of trend tracking follows this list).
  • Embed feedback collection directly into websites or applications: Gather feedback seamlessly within the community’s preferred digital platforms.
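To make the panel idea concrete, here is a minimal sketch in Python of tracking panel sentiment across survey waves and checking whether any response channel is underrepresented. The data shape, field names, and 1-to-5 rating scale are illustrative assumptions, not a prescribed schema.

```python
from collections import defaultdict
from statistics import mean

# Each response is (wave, channel, sentiment), where sentiment is a 1-5
# closed-ended rating. Field names and scale are illustrative only.
responses = [
    ("2024-Q1", "email", 4), ("2024-Q1", "sms", 2), ("2024-Q1", "in-person", 3),
    ("2024-Q2", "email", 4), ("2024-Q2", "sms", 3), ("2024-Q2", "in-person", 4),
]

# Average sentiment per survey wave reveals shifts over time.
by_wave = defaultdict(list)
for wave, _channel, score in responses:
    by_wave[wave].append(score)
for wave in sorted(by_wave):
    print(f"{wave}: mean sentiment {mean(by_wave[wave]):.2f} (n={len(by_wave[wave])})")

# Per-channel counts are a quick representativeness check: a channel
# with very few responses may signal an underrepresented group.
by_channel = defaultdict(list)
for _wave, channel, score in responses:
    by_channel[channel].append(score)
for channel, scores in sorted(by_channel.items()):
    print(f"{channel}: n={len(scores)}, mean sentiment {mean(scores):.2f}")
```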

Regardless of the method you choose, it is crucial to clearly communicate the purpose of the feedback effort, ensure confidentiality, and give participants an opportunity to offer additional comments or suggestions. Consider providing incentives or rewards to encourage participation and show appreciation for the community's valuable input.

Algorithmic Discrimination Protections: Upholding Fairness and Oversight

AI models learn from existing data, often referred to as "training data." Unfortunately, much of the data used to build these models is poorly documented and lacks diverse representation, because it is often pulled from websites that exclude the voices of marginalized people, due in part to inadequate Internet access, underrepresentation, and filtering practices.

The Gender Shades study by MIT Media Lab researchers found that facial recognition systems developed by prominent technology companies exhibit racial and gender biases, with notably higher error rates for women and people of color. Left unchecked, such models can produce harmful outcomes for society.

To prevent algorithmic discrimination, government agencies must prioritize representative training data and maintain clear human oversight. Human-centered design keeps that oversight an essential part of AI systems: while AI algorithms can process vast amounts of data and provide valuable insights, they should never replace human judgment entirely.
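One practical pattern for keeping humans in the loop, sketched below in Python, is to route low-confidence or adverse model outputs to a human reviewer instead of acting on them automatically. The 0.85 threshold, field names, and decision labels are illustrative assumptions, not requirements drawn from the Blueprint.

```python
from dataclasses import dataclass

# Illustrative threshold: below this confidence, a human decides.
REVIEW_THRESHOLD = 0.85

@dataclass
class ModelOutput:
    case_id: str
    decision: str      # e.g., "approve" or "deny"
    confidence: float  # model-reported confidence, 0.0-1.0

def route(output: ModelOutput) -> str:
    """Send adverse or low-confidence outputs to a human reviewer;
    only confident, non-adverse outputs proceed automatically."""
    if output.decision == "deny":
        return "human_review"  # adverse actions always get human eyes
    if output.confidence < REVIEW_THRESHOLD:
        return "human_review"  # the model is not sure enough to act alone
    return "auto"

for out in [ModelOutput("A-1", "approve", 0.97),
            ModelOutput("A-2", "approve", 0.62),
            ModelOutput("A-3", "deny", 0.99)]:
    print(out.case_id, "->", route(out))
```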

A 2023 report by the AI Now Institute emphasizes that human review and intervention are indispensable to ensuring fair outcomes and fostering accountability. By actively seeking input from diverse perspectives, government agencies can mitigate the risks of algorithmic biases and proactively address potential issues prior to deployment.

Human oversight serves as a check-and-balance mechanism, ensuring that decisions made by AI systems align with ethical standards and legal requirements. Agencies should conduct regular equity assessments as part of system design and use dashboards that break down outcomes by demographic group.
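As one illustration of what such a breakdown can surface, the Python sketch below compares approval rates across demographic groups and flags any group whose rate falls below four-fifths of the best-performing group's, loosely following the EEOC's "four-fifths rule" heuristic. The data, group labels, and threshold are hypothetical.

```python
from collections import Counter

# Hypothetical (group, approved) outcomes; group labels are placeholders.
outcomes = (
    [("group_a", True)] * 80 + [("group_a", False)] * 20
    + [("group_b", True)] * 55 + [("group_b", False)] * 45
)

totals, approvals = Counter(), Counter()
for group, approved in outcomes:
    totals[group] += 1
    approvals[group] += approved  # bool counts as 0 or 1

# Flag any group whose approval rate is under 80% of the best group's.
rates = {g: approvals[g] / totals[g] for g in totals}
best = max(rates.values())
for group, rate in sorted(rates.items()):
    ratio = rate / best
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: approval {rate:.0%}, ratio vs best {ratio:.2f} -> {flag}")
```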

To continuously test for and mitigate disparities, organizations should also conduct algorithmic impact assessments rooted in independent evaluation and plain-language reporting. Results should be made public whenever possible.

Data Privacy: Empowering Individuals and Gaining Consent

Respecting data privacy is a critical element of ethical AI deployment, and perhaps its very foundation. The AI Bill of Rights emphasizes that data collection must be permitted and consented to. This is vital because companies increasingly collect user data, track user activities, and share that data with other companies as a core part of their business models.

Human-centered design puts privacy at the forefront of AI system development. Gathering input from a representative sample of the community helps ensure that data privacy policies actually protect personal data and uphold individuals' right to control how their information is used.

By incorporating privacy-enhancing technologies and adopting privacy-by-design principles, government agencies can protect sensitive data while providing individuals with control over their personal information.
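As a minimal sketch of one such privacy-enhancing technique, the Python example below applies the Laplace mechanism from differential privacy to an aggregate count before publication, so that no individual record can be inferred from the released number. The count, epsilon value, and scenario are made up for illustration.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample from Laplace(0, scale) via inverse transform sampling."""
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(true_count: int, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy. A counting
    query has sensitivity 1, so the noise scale is 1 / epsilon."""
    return true_count + laplace_noise(1.0 / epsilon)

# Illustrative values only: the count and epsilon are made up.
true_count = 1_234   # e.g., residents who used a service last month
epsilon = 0.5        # smaller epsilon = stronger privacy, noisier output
print(f"true: {true_count}, released: {private_count(true_count, epsilon):.0f}")
```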

Authorization programs such as FedRAMP and StateRAMP are widely respected and recognized. When a government agency is looking to adopt new technology, the FedRAMP standards and framework help streamline the assessment of a provider and its solution's data privacy and security measures, meaning the agency knows the technology under consideration has been pre-vetted against rigorous cybersecurity requirements.

As government agencies continue to embrace AI technology, it is essential to prioritize trust-building with the communities they serve. By actively engaging communities, upholding fairness through algorithmic discrimination protections, and respecting data privacy, government agencies can create AI systems that are safe, effective, and ethical. By combining the principles outlined in the AI Bill of Rights with a human-centered approach, agencies can foster trust, promote inclusivity, and leverage AI to drive positive societal impact.


Chelsie Bright is head of public sector at Qualtrics, leading a team that supports hundreds of state and local government organizations and 90 federal offices, including every cabinet-level department.

