
Additional Guardrails for AI Use in Government

Hollywood has created an entire genre of movies around the fear that humankind will be subjugated or destroyed through the takeover of a self-aware and malevolent artificial intelligence (AI). Many of these movies were made long before artificial intelligence went mainstream, a process accelerated in recent years by the rise of large language models (LLMs) powering services and applications like ChatGPT.

While it’s easy to spot the gaping plot holes in these movies now that AI has become part of our everyday lives, there are still plenty of things to worry about and dangers we need to guard against. Artificial intelligence, whether we know it or not, is becoming more deeply embedded in our lives, finding its way into processes that affect people in very personal ways.

A good example can be seen in recent stories about health insurance providers using AI for tasks like processing approvals for coverage. As these stories show, AI can sometimes be used in ways that end up denying health coverage to people who are actually eligible and desperately need it. When economic incentives favor a specific determination (health insurers benefit when they pay fewer claims), adding AI to existing processes can magnify bad outcomes. These stories are an object lesson in the need for governance structures to guide the implementation of AI.

This same need for AI oversight and guardrails exists for governments that want to use AI as part of existing processes, like applications for benefits or government services. People applying for government services and benefits may be facing health or financial problems, recovering from an accident or natural disaster, or caring for children in need. The promise of using AI in these processes is that it can streamline them, reducing both the time they take to complete and their error rates.

But when incentives favor certain outcomes in government processes, adding AI can harm people in deeply personal ways. Too often, the “success” of a public benefit program is measured solely by the number of ineligible claims it denies. Historically, government administrators and public officials have treated reducing fraudulent claims and denying ineligible applicants as the best measures of success. While these measures are certainly important, they should not be the only measures of a benefit program’s success. As governments adopt AI into application processes, it is important to have clear governance structures in place to guard against negative outcomes.

How do we create these structures, and what should they look like? A good example can be seen in the recent guidance from the Department of Labor to states on unemployment insurance benefits. This new guidance focuses on metrics that will better ensure equitable access to benefits for those who need them:

Identifying and preventing all forms of improper payments — including underpayments and erroneous denials — are critical to ensuring program integrity, and equitable access plays a key role in supporting these efforts. (emphasis added)

UNEMPLOYMENT INSURANCE PROGRAM LETTER NO. 01-24

By creating success metrics for states that include minimizing improper denials, this guidance sets up an important guardrail against the kinds of improper denials now being seen in the healthcare industry. The true success of a program meant to support those in need cannot be measured solely by efforts to deny ineligible applicants. It is also critical to minimize the number of times people who are truly eligible are improperly denied benefits. And when improper denials do happen, they need to be rectified quickly.

Even though the dangers of an AI apocalypse are probably overblown, there are real concerns and real dangers we need to guard against as AI becomes more ubiquitous in our lives. One of those concerns is that, as AI is adopted into government benefit processes, eligible claimants will be unfairly denied benefits when they need them most.

And that’s an outcome that is scarier than any Hollywood movie.


Mark Headd is a Government Technology SME at Ad Hoc. He is the former Chief Data Officer for Philadelphia, serving as one of the first municipal chief data officers in the United States. He holds a master’s degree in public administration from the Maxwell School at Syracuse University, and is a former adjunct instructor at the University of Delaware’s Institute for Public Administration. He spent six years with the General Services Administration’s Technology Transformation Services (TTS), serving on the leadership team for 18F and leading customer success efforts for TTS’ cloud platform, which supports over 30 critical federal agency systems.

Photo courtesy of the Oregon Department of Transportation.
