GovLoop

Why Your Agency Should Think About AI Ethics

Artificial intelligence (AI) is such a promising technology that some experts say it could rival the invention of fire. After all, machines that can mimic human cognitive abilities like learning could potentially have earth-shattering consequences. With such astronomical expectations, how can agencies place guardrails around AI and keep the technology from getting out of hand?

The answer is AI ethics. AI ethics concerns what behavior is right or wrong for both the humans and the machines involved in AI. For agencies at every level, the approach they take to AI ethics could have a profound impact on their futures.

On Thursday, during GovLoop’s latest online training, three government thought leaders explained why AI ethics is a vital concern for every corner of the public sector.

The group explained three reasons why any agency considering AI should carefully consider the rules of behavior governing it:

1. Protecting data

AI requires vast amounts of data to operate, so the way agencies protect this information is paramount. For example, a health care agency would not want sensitive data about its patients becoming public without permission.

“When you have lots of customers giving you proprietary data, you want to make them feel warm and fuzzy that you’re taking care of it with either AI or humans,” DeVillamil said.

2. Promoting transparency

Sometimes, people have a harder time trusting something they do not understand. Thinking machines are no exception, so clarity around how AI operates is key for preserving trust between agencies and the constituents they serve.

“You need to be able to explain the process behind your algorithms,” Wright said. “It has to be available for the public to examine.”

Without transparency, the way AI makes decisions remains murky, and people may come to fear how the technology will affect their lives.

3. Avoiding bias

In the public sector, avoiding bias must be a top priority. Agencies that do not examine whether their products or services are weighted toward specific outcomes may unfairly affect particular parts of the communities they serve, such as a small religious demographic.

Accordingly, Alterovitz urged agencies to tread carefully when designing AI models so that prejudices do not creep into the tools’ final output.

“If the training data is flawed, the models can have biased or unethical outcomes,” he said.

The last word

Government AI is not yet ubiquitous, so there is still time to decide which direction the technology will take. Given AI’s enormous promise, agencies that weigh its ethical implications now may avoid serious problems later.


