
Safeguarding Trust in Your AI


Government officials are starting to discover the enormous potential and power of artificial intelligence (AI) in the public sector.

But building AI into an organization means redesigning the very foundation of how decisions are made. As with any new decision-maker, officials need to build a base of trust before rolling an AI system out into the real world.

Many agencies don’t yet have the data science foundation to support AI programs.

“What we see is a landscape of AI opportunities,” said Chandler McCann, General Manager for DataRobot’s Government Solutions team. DataRobot provides a highly trusted end-to-end enterprise AI platform and supporting services. “When AI doesn’t prove trustworthy, that can have drastic consequences – for government especially.”

According to McCann, trusted AI starts with three critical components. Without these pillars, agencies can’t trust their AI’s decisions.

1. GUARDRAILS

AI can make mistakes in dozens of ways. Data can be missing or mislabeled. Models can become overconfident, or mistake coincidences for patterns. And without appropriate guardrails, AI can be misled, and in turn mislead humans.

Counterintuitively, you should be wary of models with perfect accuracy: it usually means the training data was unrealistically clean, or that the right answers leaked into the model's inputs. A model that doesn't know its own limitations will make overconfident decisions. DataRobot offers automated machine learning with numerous guardrails that let humans remove data that leads to false confidence, and humans can step in to make the call when the model isn't confident.
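To make the guardrail idea concrete, here is a minimal sketch in Python using scikit-learn (a generic illustration, not DataRobot's platform): predictions below an assumed confidence threshold are escalated to a human instead of being acted on automatically.

```python
# A minimal sketch of a confidence guardrail. The 0.8 threshold is an
# arbitrary assumption; tune it for your own risk tolerance.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
probs = model.predict_proba(X_test)

CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff for automated decisions

for i, p in enumerate(probs[:10]):
    confidence = p.max()
    if confidence >= CONFIDENCE_THRESHOLD:
        # Confident enough: the decision can be automated.
        print(f"case {i}: automated decision -> class {p.argmax()} ({confidence:.2f})")
    else:
        # Not confident: route the case to a human reviewer.
        print(f"case {i}: low confidence ({confidence:.2f}), escalate to a human")
```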

2. GOVERNANCE

AI accomplishes nothing in a vacuum. Organizations can only reap the rewards of AI once its decisions are made and used in the real world. But these decisions have to be tracked, with someone held accountable for their success.

Just because an AI system is accurate today doesn't mean it will be in six months. As the world changes, so does the data, a shift often called data drift. A model that is never updated will eventually be no better than a coin flip.

With AI governance, every decision your AI makes is tracked, so you can verify that it is adding value and not losing accuracy over time. When the world changes around the model, governance ensures that someone is responsible for retraining it on new data.
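As a rough illustration of that tracking loop, the sketch below logs each decision against its eventual outcome and flags the model for retraining once accuracy decays. The names here (DecisionLog, accuracy_floor) are hypothetical, not any specific product's API.

```python
# A minimal sketch of governance-style monitoring: log every decision
# with its real-world outcome, then flag the model for retraining when
# tracked accuracy falls below an assumed floor.
from dataclasses import dataclass, field

@dataclass
class DecisionLog:
    records: list = field(default_factory=list)  # (prediction, actual) pairs

    def log(self, prediction, actual):
        self.records.append((prediction, actual))

    def accuracy(self):
        if not self.records:
            return None
        correct = sum(1 for pred, actual in self.records if pred == actual)
        return correct / len(self.records)

def needs_retraining(log: DecisionLog, accuracy_floor: float = 0.75) -> bool:
    """Flag the model for retraining once tracked accuracy drops below the floor."""
    acc = log.accuracy()
    return acc is not None and acc < accuracy_floor

log = DecisionLog()
for pred, actual in [(1, 1), (0, 1), (1, 0), (0, 0), (1, 0)]:  # toy outcomes
    log.log(pred, actual)

print(f"tracked accuracy: {log.accuracy():.2f}")
if needs_retraining(log):
    print("accuracy below floor: assign an owner to retrain on new data")
```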

3. TRUST ASSESSMENT

AI depends on pattern recognition, using machine learning to find trends in data. But an AI model can learn trends from data that it shouldn’t use – like race, sex or age – leading to biased or unfair decisions.

“We’ve seen examples where health care AI models have considered patients’ race or income instead of health,” McCann said.

It’s possible to build an AI system that conflicts with human values. But the problem is avoidable if the AI includes a trust assessment that identifies how its decisions affect people and ensures those decisions align with organizational values.
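One simple form such an assessment can take is a demographic-parity check, sketched below with toy data. The four-fifths ratio used as a red flag here is a common rule of thumb, and a real assessment would apply several fairness metrics, not just one.

```python
# A minimal sketch of one trust-assessment check: compare the model's
# positive-decision rate across groups (demographic parity). The data
# and the 0.8 "four-fifths" cutoff are illustrative assumptions.
from collections import defaultdict

decisions = [  # (group, approved) pairs; toy data for illustration
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 0), ("B", 1), ("B", 0), ("B", 0),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved

rates = {g: approvals[g] / totals[g] for g in totals}
print("approval rate by group:", rates)

# Flag a potential disparity when the lowest group's rate is less than
# four-fifths of the highest group's rate.
ratio = min(rates.values()) / max(rates.values())
if ratio < 0.8:
    print(f"disparity ratio {ratio:.2f}: decisions may not align with values; review the model")
```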

AI is built to help organizations make faster, more efficient decisions. With the right tools and proactive steps, it can also lead to decisions that are fairer, more ethical and more trustworthy.

This article is an excerpt from GovLoop’s recent guide, “Your Data in the Year of Everything Else.” Download the full guide here.
