The Future of AI Hangs on Ethics, Trust

Chezian Sivagnanam, Chief Architect at the U.S. National Science Foundation, believes the next several years could prove critical in laying the groundwork for the broad use of artificial intelligence.

Sivagnanam first got interested in the idea of artificial intelligence 30 years ago. At the time, researchers were focused on the idea of building neural networks – that is, technology that could mimic human thought processes. But the concept was largely theoretical. Today, AI technology is rapidly evolving. The challenge now is to put in place the disciplines to ensure both the effective and ethical use of AI, Sivagnanam said.

“If you look at people who want to build a career out of AI, they are interested in learning algorithms and training data, but few spend time learning [how to use AI] responsibly, ethically and transparently,” he said.

An Ethics Curriculum?

In the next two to three years, Sivagnanam expects to see an industry emerge around the creation of what’s known as synthetic data, which presents both opportunity and risk.

For the most part, today’s AI systems learn by analyzing large amounts of relevant real-world data and finding key patterns and features. It takes a massive amount of data, some of which might include personally identifiable information (PII) or personal health information (PHI).

In the future, companies could make a business out of providing organizations with synthetic data, which is generated by computer algorithms trained on real-world data. The challenge is ensuring the synthetic data does not contain any vestiges of PII or PHI and the underlying algorithms do not embed any unintended bias that would undermine the AI models trained on that data.

Issues around ethics need to be incorporated into the education and training of data scientists and others involved in AI, Sivagnanam said.

“We need to make sure that the people who are creating these algorithms and using these data sets understand the challenges, that they are thinking about the [ethics] angle,” he said.

First Ethics, Then Revolution

Over the next five years or so, we could see a revolution in the use of AI, Sivagnanam said. Think about the self-driving car industry. At this point, human drivers are still a necessary part of the equation. But AI pioneers are hard at work trying to change that, and quickly. Similar advances are likely in other applications of AI.

Over the next three to five years, Sivagnanam hopes to see the AI industry mature. As part of that, he expects to see the development of regulations and guidelines around AI and ethics, both from the federal government and from industry organizations.

That work is already getting underway, and NSF is playing a role. Through a grants program called Fairness in Artificial Intelligence (FAI), NSF supports researchers working on ethical challenges in AI.

The Hard Part of AI

When people think about all the work that goes into creating AI programs, they probably think about the process of writing algorithms and building models. But that’s not the hardest part, Sivagnanam said.

Instead, the challenge is ensuring that the program gets buy-in from the people who are supposed to benefit from it. That means addressing their concerns about its adaptability, scalability and, above all, its trustworthiness. One way to do that is to involve the intended community of users in the process from the get-go. A good place to start is with the use case.

“When you have a use case for any innovation, start working with the community – open up your use case, democratize it, get feedback on it,” Sivagnanam said.

This is especially important for AI, because you want the users to understand how it works.

That’s essential to trust. Sivagnanam calls it a people-centric approach to AI.

NSF has taken it further, building a community of pioneers who are interested in driving innovation. The agency will conduct micro-pilots with them, giving them the opportunity to provide feedback on functionality as the program is taking shape.

“And the good part about this approach is that as soon as they see it as their invention, it’s no longer an IT invention. It’s a business invention,” Sivagnanam said.

Having these early adopters can go a long way toward gaining the trust of the larger community of users. “You are empowering these pioneers to be change champions,” he said.

This article is an excerpt from GovLoop’s guide, “Conversations With CXOs: Your Crash Course on the Future of Gov.”
