
Navigating the Grey Areas in Artificial Intelligence Implementations: Perspectives on Ethics in AI

Faster than most realize, artificial intelligence is becoming (or already has become) an ever-present part of modern life. The technology offers great potential to markedly improve the lives of people around the world, but, as with any nascent technology, without responsible use it can also be applied in ways that further disadvantage the most vulnerable and underserved populations.

Over the past few months, the importance of these issues has been cemented by the Executive Order on Artificial Intelligence and the launch of www.whitehouse.gov/ai. Of the four pillars discussed there, building ‘AI with American Values’ highlights the importance of taking specific action to ensure that AI is developed responsibly, that the decisions AI makes can be explained, and that the impacts AI has on the workforce are considered before implementation.

Most recently, on May 9th, a bipartisan group of Senators reintroduced “The Artificial Intelligence in Government Act,” which, among other things, requires agencies using AI to publish governance plans. A companion bill introduced in the House was co-sponsored by Congressman Jerry McNerney, who emphasized the importance of balancing how we “…harness the full potential of AI while also identifying and reducing potential harmful effects.”

The National Institute of Standards and Technology (NIST), as directed by the Executive Order on AI, has established a working group on AI standards and will host a Federal Engagement in Artificial Intelligence Standards Workshop on May 30, 2019, to solicit stakeholder discussion on the subject. This confluence of activity illustrates that, in the absence of explicit AI policy, organizations are navigating the grey areas of implementing AI responsibly in largely unbounded territory.

Understanding the sources of potential bias that can create inequities in AI implementations, and the methods that exist to correct for them, is just the start. The more important, and more difficult, component is deciding which core values align with your organization’s mission and then wrestling with the ethical grey area of the tradeoffs involved in correcting for these biases.

The Trade Space is Almost Never Black and White

To help prepare you for taking on this challenge, we offer perspectives from several industries that AI is transforming today, examining how the core values of your organization’s mission can greatly influence the path taken.

When developing clinical decision support (CDS) systems, recognize that selection, sampling, and reporting biases exist across healthcare data. Without attempts to control for these biases, the resulting systems may deliver unequal outcomes for sensitive classes. But for systems that deliver intervention, diagnosis, or treatment, enforcing equal outcomes across sensitive classes (mandating that the CDS perform equally well for every sensitive class) may be counterproductive. Where on the spectrum between saving the most lives and preserving equity in outcomes are the values of your organization most reflected?
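As a concrete illustration of what “performing equally well” might mean in practice, the minimal sketch below compares a hypothetical CDS model’s sensitivity (true positive rate) across groups defined by a sensitive attribute. The column names and data are invented for illustration and are not drawn from any real system.

import pandas as pd

def sensitivity_by_group(df: pd.DataFrame) -> pd.Series:
    # True positive rate for each sensitive group: among patients who truly
    # needed the intervention (outcome == 1), how often did the model flag them?
    positives = df[df["outcome"] == 1]
    return positives.groupby("group")["prediction"].mean()

# Invented example data: "group" is the sensitive attribute, "outcome" the true
# clinical label, and "prediction" the hypothetical CDS model's flag.
df = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B"],
    "outcome":    [1,   1,   0,   1,   1,   0],
    "prediction": [1,   0,   0,   1,   1,   1],
})
print(sensitivity_by_group(df))  # A: 0.50, B: 1.00

A gap like the one above is exactly where the choice between equalizing outcomes and maximizing overall benefit stops being a purely technical question and becomes a values decision.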

When developing systems for financial services, recognize that issues that have shaped historical availability of and access to opportunity may also bias datasets. Systems meant to provide services equitably should be cautious in their use of sensitive class data and should also consider what data may be missing, biased, or sharing mutual information with sensitive classes (e.g., zip codes or last names may carry information strongly correlated with sensitive class membership). Where on the spectrum between protection of sensitive classes and accepting unequal outcomes resulting from equity in process are the values of your organization most reflected?
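One way to surface this kind of proxy risk is to estimate how much information an apparently neutral feature carries about a sensitive class. The sketch below uses normalized mutual information from scikit-learn on an invented dataset; the column names and values are assumptions for illustration only.

import pandas as pd
from sklearn.metrics import normalized_mutual_info_score

# Invented data: zip_code is the "neutral" feature, sensitive_class the protected attribute.
df = pd.DataFrame({
    "zip_code":        ["20001", "20001", "20002", "20002", "20003", "20003"],
    "sensitive_class": ["X",     "X",     "Y",     "Y",     "X",     "Y"],
})

# 0 means the feature tells you nothing about the sensitive class; values near 1
# mean it is acting as a near-perfect proxy and deserves scrutiny.
score = normalized_mutual_info_score(df["zip_code"], df["sensitive_class"])
print(f"Normalized mutual information: {score:.2f}")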

When developing systems for national defense, the same data biases exist, with the added complication that adversarial misinformation may be present in your data. Weighing both the likelihood and the magnitude of errors (e.g., a terrorist attack involving a WMD would carry an extremely high cost) becomes a critical concern in defense. This is further complicated by the counterfactuals that do not exist when these systems succeed in reducing or eliminating threats. Where on the spectrum between impingement on privacy and personal freedoms and preserving national security are the values of your organization most reflected?
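To make the likelihood-and-magnitude point concrete, the sketch below weighs an assumed cost of a missed threat against an assumed cost of a false alarm to decide when a system should raise an alert. Both cost figures and the probability are invented; real values would come from your own risk analysis.

# Assumed costs, invented purely for illustration.
COST_FALSE_NEGATIVE = 1_000_000  # cost of missing a real, high-consequence threat
COST_FALSE_POSITIVE = 1_000      # cost of investigating a false alarm

def should_alert(threat_probability: float) -> bool:
    # Alert when the expected cost of ignoring the signal exceeds
    # the expected cost of acting on it.
    expected_cost_of_ignoring = threat_probability * COST_FALSE_NEGATIVE
    expected_cost_of_acting = (1 - threat_probability) * COST_FALSE_POSITIVE
    return expected_cost_of_ignoring > expected_cost_of_acting

print(should_alert(0.002))  # True: even a 0.2% chance justifies review under these assumed costs

Note how the asymmetry in assumed costs drives alerts at very low probabilities, which is precisely where the trade-off between privacy and security bites hardest.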

When developing systems for customer service, many of these same biases can exist; however, issues related to workforce displacement and the dispersion of work should also be considered. These systems may automate large portions of certain workforces out of jobs, improving the bottom lines of the firms that deploy them. At the same time, they may have the unintended consequence that work once completed by a company employee is now completed not just by your AI virtual assistant but also by your customer. Where on the spectrum between investments in systems and investments in people are the values of your organization most reflected?

Find the North Star of Your Organization First

Reflecting on core values, weighed against the specific context of the problem at hand, can help guide choices when implementing AI, but it does not make those choices any easier. These ethical questions live in a moral grey area, where the core values held most sacred, and one’s perspective, can greatly shape which outcomes are desirable. As a baseline, we recommend organizations ensure that the technologists and executive sponsors implementing AI understand the sources of bias in modeling and the methods to correct for them. More importantly, firms should reflect deeply on which core values must be held paramount in these implementations before they begin.

Co-author: Drew Smith, Manager, Accenture Federal Services, Applied Intelligence

Dominic Delmolino is a GovLoop Featured Contributor. He is the Chief Technology Officer at Accenture Federal Services and leads the development of Accenture Federal’s technology strategy. He has been instrumental in establishing Accenture’s federal activities in the open source space and has played a key role in the business by fostering and facilitating federal communities of practice for cloud, DevOps, artificial intelligence and blockchain. You can read his posts here.
