Canadian AI Leaders on Ethical Standards and Global Collaboration

As artificial intelligence (AI) and machine learning slowly work their way into our daily lives, fears about discriminatory practices and technical flaws are becoming a reality. A lack of global consensus on what responsible AI looks like, along with headlines documenting discrimination caused by algorithmic processes, has left some public sector AI initiatives in limbo.

In recent years, however, governments around the world have slowly incorporated AI and machine learning into data management and automation processes to improve backend and public-facing services. In the U.S., for example, the Veterans Affairs Department (VA) uses AI in its REACH VET program to pinpoint veterans at high risk for suicide.

The latest update of the White House’s National Artificial Intelligence Research and Development Strategic Plan released earlier this year included the addition of public-private partnerships as a new focus area to help accelerate federal AI initiatives.

Canada, on the other hand, has become known as a global leader in AI regulation as it navigates concerns surrounding AI, such as bias and accountability. Considering the country’s longtime and large-scale investments in AI, the Canadian advantage comes as no surprise.

Elissa Strome, Executive Director of the Pan-Canadian Artificial Intelligence Strategy at the Canadian Institute for Advanced Research (CIFAR), explained that Canada’s AI lead lies in its forward-thinking personnel.

“We have such a long-standing history in Canada of having some of the world’s leading researchers in AI,” Strome said. “Talent is our biggest asset in Canada, so we’ve built our strategy around keeping that talent and building talent.”

The data supports this claim. From 2016 to 2017, Canada invested $1.3 billion in AI research and development, according to Canada’s global investment attraction and promotion agency, Invest in Canada. Toronto has the world’s highest concentration of AI start-ups, and Montréal has the world’s highest concentration of students and researchers studying AI.

The talent and the money backing Canadian AI are impressive, but it’s the country’s dedication to the ethical use of AI and automated decision-making in the public sector that makes it stand out.

Groundbreaking Strides in Canadian AI 

While AI discrimination is not new, an April report by researchers at the AI Now Institute at New York University reignited the conversation by raising concerns about structural bias. According to the analysis, the demographics of the AI industry’s predominantly white, male workforce are shaping algorithmic outcomes. The institute’s survey of more than 150 studies and reports on AI found that bias in current AI systems reflects historical patterns of discrimination, and it ultimately called for a reevaluation of AI policies and workplaces.

In 2016, for example, an automated passport photo evaluator in New Zealand told a man of Asian descent that his eyes were closed, even though they were clearly open. Two years later, Amazon scrapped a machine-learning hiring system that favored men because it had learned from the patterns of a male-dominated tech industry.

In other words, the data that the automated systems are based on don’t always account for a diversity of users.

But Canada is working hard to maintain ethical standards for AI. It released an evolving Directive on Automated Decision-Making that pushes the government to commit to utilizing AI while maintaining “transparency, accountability, legality, and procedural fairness.”

Natalie McGee, Executive Director of Enterprise Strategic Planning for the Treasury Board of Canada Secretariat, explained how the Canadian government is working to address AI concerns in the public sector.

“The government is recognizing that before it adopts [AI] technology, we really need to address the bias and responsible use requirements not only for transparency but for accountability in using automation,” McGee said.

That’s why the Canadian government and research firms have already taken steps toward implementing ethical and responsible AI initiatives from both a workforce and a technical perspective.

The Pan-Canadian AI Strategy, which represents the first national AI strategy developed by any country, is a research and innovation initiative to strengthen Canada as an AI research hub. The strategy also aims to unite thought leaders to examine the societal implications of AI before implementation.

As part of this strategy, CIFAR runs training programs for university students that promote equity, diversity and inclusion. The AI for Good Summer Lab, for example, promotes inclusive tech cultures by empowering women.

Another program, Data Science for Social Good, partners with public organizations to extract insights from their datasets. The program’s 2019 projects include a BC Centre for Disease Control-sponsored effort to use machine learning to classify laboratory test results.
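To give a sense of the kind of approach such a project might take, here is a minimal sketch of classifying free-text laboratory records with a standard text-classification pipeline. The records, labels and categories below are invented for illustration; they are not drawn from the BC Centre for Disease Control’s data or methods.

```python
# Minimal sketch: classifying free-text laboratory records with TF-IDF
# features and a linear model. All records, labels and categories here are
# hypothetical stand-ins, not BCCDC data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Toy training examples: free-text test descriptions and a category for each.
records = [
    "throat swab culture group A streptococcus",
    "nasopharyngeal swab influenza A PCR",
    "serum hepatitis B surface antigen",
    "urine culture E. coli colony count",
]
labels = ["bacterial", "viral", "viral", "bacterial"]

# Turn the text into TF-IDF features, then fit a simple linear classifier.
model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(records, labels)

# Predict the category of a new, unseen record.
print(model.predict(["sputum culture Mycobacterium tuberculosis"]))
```

In practice, a project like this would train on thousands of labeled records and be validated against held-out data before any operational use.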

McGee explained that the government currently relies on existing legislation, such as the Canadian Charter of Rights and Freedoms, to implement policies that protect citizens from AI and automated decision-making discrimination. The process includes ensuring that decisions are based on high quality and unbiased data, evaluating the impact of a decision being made and providing recourse for citizens if the technology gets it wrong.

For example, the government implemented an algorithmic impact assessment to score potential levels of risk. This is meant to give agencies looking to implement AI practical steps they can take to mitigate problems before they occur. Project managers or end users answer 60 to 80 questions to determine whether the technology they want to implement will be low or high risk, and those results are tied back to Canadian policies on how to move forward with the project.
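As a rough illustration of how a questionnaire of this kind can translate answers into an impact level, here is a minimal sketch. The questions, weights, thresholds and level names are assumptions made for the example, not the contents of the actual assessment.

```python
# Illustrative sketch of scoring a risk questionnaire and mapping the total to
# an impact level. The questions, weights and thresholds below are invented
# for illustration; they are not the actual Algorithmic Impact Assessment.
ANSWERS = {
    "decision_affects_legal_rights": "yes",
    "system_explains_its_decisions": "no",
    "uses_personal_information": "yes",
    "human_reviews_final_decision": "yes",
}

# Each question maps to (risk-raising answer, weight added if that answer is given).
WEIGHTS = {
    "decision_affects_legal_rights": ("yes", 4),
    "system_explains_its_decisions": ("no", 2),
    "uses_personal_information": ("yes", 3),
    "human_reviews_final_decision": ("no", 2),
}

def impact_level(answers: dict) -> str:
    """Sum the weights of risk-raising answers and bucket the total."""
    score = sum(
        weight
        for question, (risky_answer, weight) in WEIGHTS.items()
        if answers.get(question) == risky_answer
    )
    # Thresholds are placeholders; under the real directive, higher impact
    # levels trigger stronger requirements such as peer review, public notice
    # and human intervention in the final decision.
    if score <= 3:
        return "Level I (low impact)"
    if score <= 7:
        return "Level II (moderate impact)"
    return "Level III+ (high impact)"

print(impact_level(ANSWERS))
```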

“Through existing administrative policy, we’ve introduced what it truly means to incorporate responsible and ethical use inside automated decision-making,” McGee said.

According to McGee, Canada was able to introduce the world’s first national strategy on the responsible and ethical use of AI by building on the country’s digital standards and tackling the first problem, automated decision-making, instead of trying to solve everything from the get-go.

From the research perspective at CIFAR, solving the bias and discrimination problems surrounding AI requires reviewing both the technology itself and who is implementing it.

“The questions around equity, diversity and inclusion is an area that we need to pay the most attention to right now,” said Strome, “whether we’re talking about equity, diversity and inclusion in the population of people that are contributing to the development of this technology or we’re talking about ensuring that the application of AI technology is supporting and advancing equity, diversity and inclusion.”

To address these concerns, the team at CIFAR prioritizes reaching out to women and other groups underrepresented in the technology sector. The team recruits across national borders, engaging talented people from a variety of backgrounds to lower the risk of bias and discrimination in the final product.

How the U.S. Compares

Since 2016, when the original AI strategic plan was released under President Barack Obama, federal implementation of AI has grown and shifted. The American AI Initiative was the Trump administration’s most noteworthy adjustment, directing investment in research and development, setting governance standards and building a workforce for an AI enterprise.

It’s been a relatively slow trek. While the National Institute of Standards and Technology (NIST) is still developing a plan for federal use, which would include technical standards for AI, agencies have started implementing their own approaches and have seen impressive successes.

The National Institutes of Health (NIH) is finding new ways to use AI in biomedical research. Researchers at NIH have used AI to detect irregular heartbeats, hinting at its ability to improve electrocardiogram (EKG) readings. The Defense Department (DoD) has not only established an intelligence center dedicated to AI but also released a strategy to partner with private sector organizations to incorporate AI into key missions.

At the release of his agency’s AI strategy, DoD CIO Dana Deasy noted that collaboration will be central to its successful implementation.

“The success of our AI initiatives will rely upon robust relationships with internal and external partners,” he said. “Interagency, industry, our allies and the academic community will all play a vital role in executing our AI strategy.”

Canadian officials also suggested that collaboration between countries is the key to developing better policy that benefits all.

Canada, alongside France, spearheaded the International Panel on Artificial Intelligence (IPAI) to spark collaboration between countries as they work to implement their own AI strategies.

“The idea of this panel is that it’s going to facilitate international collaboration in a multi-stakeholder manner,” McGee said. “It’s really to provide a mechanism for countries to share a multidisciplinary analysis and foresight.”

Strome explained that international collaboration is necessary in fields such as AI because the technology transcends any single country’s borders. AI knows no boundaries, so fostering an international dialogue is crucial to reaping its benefits.

“The opportunity for AI to really deliver incredibly positive social and economic benefit for the world is there,” Strome said. “If we get it wrong, there’s a real lost opportunity. But if we get it right, it has a really tremendous opportunity to bring great benefit.”
