Why Trustworthy AI Is Not Good Enough

Consider this scenario: During an investigation, an auditor asks a program manager to explain how a particular decision was made. Would “AI told me” be an acceptable answer? Not likely. Even if the manager relied on a high-quality AI system with a strong track record, he would still need to demonstrate how the AI arrived at its recommendation. That’s why trustworthy AI, while necessary, is not sufficient. It must also be defensible.

Defensible AI produces results that are both accurate and explainable, with the ability to trace outputs back to specific training datasets and assess the provenance of that data. The system should also present results in ways that match the technical expertise of the audience. These and related capabilities enable agencies to better evaluate, refine and govern their AI models, said John Chao, Director of Federal Products at Seekr.

“Defensible AI is the new standard,” he said. “We want to ensure that we can defend the answers and outputs, defend the sense that I feel good about this, and not just go with blind trust.”

In this video interview, Chao discusses how agencies can achieve defensible AI and adopt AI with greater confidence. Topics addressed include:

  • The key attributes of defensible AI
  • Four foundational capabilities for improving the defensibility of AI
  • The importance of continuous governance
