Canada's Adoption of AI in Immigration Raises Serious Rights Implications

Wednesday, September 26, 2018

U of T report finds that the use of automated decision-making technologies to augment or replace human judgment threatens to violate domestic and international human rights law; recommends best practices to support Canadian leadership in AI and human rights

Ottawa, September 26, 2018 – Algorithms and artificial intelligence are augmenting and replacing human decision-making in Canada’s immigration and refugee system, with alarming implications for the fundamental human rights of those subjected to these technologies, says a report released today by the University of Toronto’s International Human Rights Program (IHRP) and the Citizen Lab at the Munk School of Global Affairs and Public Policy.

The 88-page report, titled “Bots at the Gate: A Human Rights Analysis of Automated Decision Making in Canada’s Immigration and Refugee System,” details how the federal government’s use of these tools threatens to create a laboratory for high-risk experiments. These initiatives may place highly vulnerable individuals at risk of being subjected to unjust and unlawful processes, in violation of Canada’s domestic and international human rights obligations, with consequences for decisions at multiple levels of the system.

“Our legal system has many ways to address the frailties of human decision making,” said Dr. Lisa Austin, professor at the University of Toronto’s Faculty of Law and an advisor on this report. “What this research reveals is the urgent need to create a framework for transparency and accountability to address bias and error in relation to forms of automated decision making. The old processes will not work in this new context and the consequences of getting it wrong are serious.”

The ramifications of using automated decision-making in the sphere of immigration and refugee law and policy are far-reaching. Marginalized and under-resourced communities, such as residents without citizenship status, often have access to less robust human rights protections and less legal expertise with which to defend those rights. The report notes that adopting these autonomous decision-making systems without first ensuring responsible best practices and building in human rights principles at the outset may only exacerbate pre-existing disparities and can lead to rights violations, including unjust deportation.

Since at least 2014, Canada has been introducing automated decision-making experiments into its immigration mechanisms, most notably to automate certain activities currently conducted by immigration officials and to support the evaluation of some immigrant and visitor applications. Recent announcements signal an expansion of these technologies into a variety of immigration decisions that are normally made by a human immigration official. These decisions span a spectrum of complexity, from whether an application is complete, to whether a marriage is genuine, to whether someone should be designated a “risk.”

The report provides a critical interdisciplinary analysis of public statements, records, policies, and drafts by relevant departments within the Government of Canada, including Immigration, Refugees and Citizenship Canada and the Treasury Board of Canada Secretariat. It also compares these initiatives with analogous ones in jurisdictions such as Australia and the United Kingdom. In February, the IHRP and the Citizen Lab submitted 27 separate Access to Information Requests and continue to await responses from the federal government.

The federal government has invested heavily in positioning Canada as a leader in artificial intelligence, and the report acknowledges that there are many benefits to be gained from such technologies. However, without proper oversight, automated decisions can rely on discriminatory and stereotypical markers, such as appearance, religion, or travel patterns, as erroneous or misleading proxies for more relevant data, thus entrenching bias into a seemingly “neutral” tool, says the report. The nuanced and complex nature of many refugee and immigration claims may be lost on automated decision-makers, leading to serious breaches of internationally and domestically protected human rights, such as the right to privacy, the right to due process, and the right to be free from discrimination.

“We have often seen that when governments deploy new technology intended for systemic use, lack of thoughtful safeguards or understanding of potential impacts can quickly spiral into harmful consequences,” said Prof. Ron Deibert, Director of the Citizen Lab. “The Canadian government should not be test-driving autonomous decision-making systems on some of our most vulnerable, and certainly not without first putting in place publicly reviewed algorithmic impact assessments and a human rights-centered framework for the use of these tools in such high-stakes contexts.”

The report recommends that Ottawa establish an independent, arm’s-length body with the power and expertise to engage in comprehensive oversight and review of all uses of automated decision systems by the federal government; publish all current and future uses of AI by the government; and create a task force that brings together key government stakeholders, academia, and civil society to better understand the current and prospective impacts of automated decision system technologies on human rights and the public interest more broadly.

The report notes that Canada presently faces a unique opportunity to become a global leader in the development and use of AI that protects and promotes human rights principles, setting an example for other countries experimenting with similar tools and systems.

