The Ethics of Inclusive AI

By Mike Gifford and Emily Ryan

The rapid advancement of artificial intelligence (AI) and machine learning (ML) technologies has raised a number of ethical questions, particularly in terms of their impact on different segments of society. As AI becomes increasingly integrated into our daily lives, it is crucial to address issues of inclusivity, bias, and fairness in AI development and deployment. In this article, we will discuss the challenges associated with achieving inclusive AI, the organizations and principles working to address these issues, and practical considerations for designing AI systems that serve the needs of all members of society.

Bias and AI Inclusivity

The Organization for Economic Co-operation and Development (OECD) has published a strong set of AI principles. Compared with those of other organizations, the OECD’s principles stand out for taking broader societal goals into account. One key area in particular stood out for us: 1.1 – Inclusive growth, sustainable development and well-being:

“This Principle highlights the potential for trustworthy AI to contribute to overall growth and prosperity for all—individuals, society, and planet—and advance global development objectives.”

The principle specifically highlights the potential to “contribute to overall growth and prosperity for all”; however, there are currently real ethical issues with how AI is implemented for people with disabilities. This is a topic we are well acquainted with as experts in digital accessibility, particularly in government. For instance, our company, CivicActions, works extensively on digital accessibility for clients such as the Department of Veterans Affairs (VA), helping guide their efforts to make their sites more inclusive for people with disabilities. This work has given us a heightened awareness of how society often excludes people who are underrepresented or disabled. Once we introduce the question of AI, and its inherent biases, we open up a new matrix of inclusivity issues.

As discussed in AI and Disability: A Double-Edged Sword from We Count, an Inclusive Design Research Centre project, inherent bias against underrepresented groups and outliers in AI and big data poses significant problems for the field.

Disability can fundamentally change how a person interacts with the world, and these differences often cause failures in statistical models. People are more complicated than our data models. For instance, there simply isn’t enough data on low-vision users with dyslexia to group them into a statistically meaningful model.

People are starting to talk about data deserts and their impact on AI. We know that digital data is heavily skewed toward the experiences of industrialized countries, and it is also clear that there are fewer training datasets drawn from people who are poor or non-technical. These are just some of the reasons why AI and disability are currently incompatible.
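
To make the sparse-data problem concrete, consider a rough, hypothetical calculation (the sample sizes below are invented for illustration): the margin of error on anything a model learns about a group shrinks only with the square root of that group’s sample size, so estimates for small, underrepresented groups remain noisy long after estimates for the majority have become precise.

```python
# Hypothetical illustration of why small subgroups yield unreliable models.
# The 95% margin of error on an estimated rate shrinks with the square root
# of the sample size, so tiny groups produce very noisy estimates.
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a rate p estimated from n samples."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (30, 300, 30_000):
    print(f"n = {n:>6}: estimate is +/- {margin_of_error(n):.1%}")

# n =     30: estimate is +/- 17.9%
# n =    300: estimate is +/- 5.7%
# n =  30000: estimate is +/- 0.6%
```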

As with other marginalized communities, there is a real risk of AI amplifying existing societal disadvantages rather than working to overcome them. Jutta Treviranus has been a leading voice in this space and has long called for changes in the AI development process. If the goal of AI is to help us build a society that works well for everyone, then we need the people building it to come from diverse backgrounds. Treviranus points out that teams building AI will achieve more inclusive outcomes when they intentionally involve people with disabilities.

We need to do more to support the outliers in our communities and find ways to check our biases rather than reinforcing and magnifying them. Someday we might find that computer algorithms can be built to check for human bias; today, we know they cannot. This is why we need processes for ensuring that disability bias isn’t manifesting as AI technology is designed and deployed.
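
One small, concrete piece of such a process is routinely comparing model outcomes across groups. The sketch below is a minimal, hypothetical example, not any organization’s standard: the field names, data and the 20% disparity threshold are all assumptions, and in practice disability status is frequently undisclosed or unknown, which is itself part of the problem.

```python
# Minimal bias-audit sketch; the data, field names and threshold are hypothetical.
from dataclasses import dataclass

@dataclass
class Record:
    uses_at: bool    # user relies on assistive technology (often unknown in practice)
    predicted: bool  # the model's decision, e.g. application approved
    actual: bool     # the ground-truth outcome

def rates(records):
    """Return (selection rate, error rate) for a group of records."""
    if not records:
        return None, None
    selected = sum(r.predicted for r in records) / len(records)
    errors = sum(r.predicted != r.actual for r in records) / len(records)
    return selected, errors

def audit(records, max_gap=0.2):
    """Compare outcomes for assistive-technology users against everyone else."""
    at_sel, at_err = rates([r for r in records if r.uses_at])
    other_sel, other_err = rates([r for r in records if not r.uses_at])
    if at_sel is None or other_sel is None:
        return "One group has no data at all: a data desert in miniature."
    if abs(at_sel - other_sel) > max_gap or abs(at_err - other_err) > max_gap:
        return "Disparity exceeds the threshold: flag for human review."
    return "No disparity above the threshold (which is not proof of fairness)."
```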

Inclusive AI in Practice: Designing for the Edge

An ethical application of AI will constantly question who is excluded (intentionally or not) by our models. Common data-driven approaches often have us looking for a solution that suits the needs of an average user. This can unintentionally exclude people at the margins, who do not fit neatly under a bell curve. The Pareto Principle, also known as the 80/20 rule, has often been used to justify excluding people with disabilities from the products and services we build.
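
A quick back-of-the-envelope calculation shows why designing for the “average” user fails. The numbers below are purely illustrative and assume independent traits, but even under those generous assumptions, almost nobody is average on every dimension at once.

```python
# Hypothetical illustration: how rare is the "average" user?
# Assume traits such as vision, hearing, motor control and reading level are
# independent, and that 68% of people fall in the "typical" band for any one
# trait (roughly one standard deviation on a normal curve).
p_typical = 0.68

for traits in (1, 3, 5, 10):
    share = p_typical ** traits
    print(f"{traits:2d} traits considered -> {share:5.1%} typical on all of them")

# 1 trait   -> 68.0%
# 3 traits  -> 31.4%
# 5 traits  -> 14.5%
# 10 traits ->  2.1%
```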

We know that 1 in 4 US adults live with a disability. When we add in permanent, temporary and situational disabilities, we can be confident that a significant number of people are excluded from physical or digital spaces. Not having average eyesight, hearing, fine-motor control or cognitive abilities should not exclude people from engaging with the websites they love to use, let alone those they need to use.  

It is essential to think about intersectionality and each individual’s unique experiences, including the recognition that many people belong to more than one disadvantaged community. We must acknowledge that English may be a second language for some people with disabilities, and that some may be unable to afford the latest technology. Intersectional analysis is difficult for humans and machines alike. Through experience, we know that people have different roles and abilities; our data algorithms also need to know this and be prepared to handle it.

AI Blindspots

AI struggles to provide good answers when there isn’t much good data to learn from. The contextual information relevant to less typical audiences is often stripped away to meet the needs of big data analysis.

The MIT AI Blindspot project highlights some of the many ways that a team’s workflows can generate harmful, unintended consequences. These blindspots have a disproportionate effect on marginalized communities, such as people with disabilities.

“Like any blindspot, AI blindspots are universal — nobody is immune to them — but harm can be mitigated if we intentionally take action to guard against them.” — MIT’s AI Blindspot

If 25% of the population lives with a disability, you would expect that to be evident when reviewing lists of our employees, students or users. Unfortunately, there is significant stigma attached to disability, so people often do not disclose their challenges, even when accommodations could help them do their jobs better. Many disabilities are invisible; you often can’t tell if someone has low vision or a hearing impairment. And when looking at who uses digital services, it is nearly impossible to tell whether assistive technology is being used.

Because of accessibility barriers, people with disabilities are also excluded from efforts to build representative data sets. If you haven’t explicitly thought about how to involve screen-reader users, it is likely they will be overlooked. The innovative strategies that people with disabilities deploy to work around system barriers can also result in their data being marginalized or removed.

It is only when we reflect on the people in our lives that we can begin to appreciate how disability is intertwined with the human experience.

Looking Ahead

We need to gain confidence in designing for the edges, knowing that if we can accommodate the extremes, more people will have an equitable experience. Ensuring that the process of AI development intentionally involves people with disabilities is one key way to reduce the likelihood that they will be accidentally excluded.

In the future, customs will likely evolve around disclosing whether AI has been used in digital projects. Creating a protocol for disclosing how these AI tools are built will give users more context and greater confidence in how biases, whether AI or human, have been managed.

Additional Resources

There are a number of great organizations dedicated to exploring AI ethics. The Montreal AI Ethics Institute, the Alan Turing Institute, and AI Now are just a few of the many think tanks exploring this space. Most of the major companies diving into AI research have also, at some point, had their own AI ethics teams. Governments have done their share too, from the EU’s exploration of Trustworthy AI to the Ethical Principles for AI developed by the U.S. Department of Defense.

While there is a great deal of overlap among these organizations, one standout guide that is regularly cited is the one produced by the Organization for Economic Co-operation and Development (OECD).


Mike Gifford is a Senior Strategist at CivicActions and a thought leader on digital accessibility in the public sector. Previously, he was the Founder and President of OpenConcept Consulting Inc., a web development agency specializing in building open source solutions for the open web. OpenConcept was an impact-driven company and Certified B Corporation. Like CivicActions, OpenConcept worked extensively with the Drupal CMS. Mike was also part of the Government of Canada’s Open Source Advisory Board. He has spearheaded accessibility improvements in Drupal since 2008 and officially became a Drupal Core Accessibility Maintainer in 2012.

Emily Ryan has been a cross-functional technologist for 26 years, working in the public and private sectors as a designer, full stack developer and user researcher. She has worked across federal, state and local government, and in 2020 she left the private sector to become a Presidential Innovation Fellow, detailed to the Department of Justice. In her first year, she worked on high-profile, public-facing projects for the Civil Rights Team, partnering with the DOJ and 18F on the Civil Rights Complaint Portal and the new ADA.gov redesign. She later moved into the Deputy Attorney General’s Office and worked on several large agency initiatives. In 2020, she began a Master’s degree in government with a focus on social justice issues at Harvard Extension School. She is currently the Director of Design at CivicActions, where she leads a team of 17 designers and content strategists working across impactful agency projects at the VA, CMS and NSF.

Photo by Marcus Aurelius on pexels.com
