The A-Team takes O-Town: IHRP and Citizen Lab consult the federal government on "Bots at the Gate"

Julie Lowenstein (2L) and Solomon McKenzie (3L)

This past September, we had the privilege of joining the International Human Rights Program in Ottawa for a series of consultations about the use of automated decision-making in Canada’s asylum and immigration system. The trip also launched the IHRP and Citizen Lab’s joint report Bots at the Gate: A Human Rights Analysis of Automated Decision-Making Systems in Canada’s Immigration and Refugee System. Over the course of three days, we met with a variety of government departments, agencies, and members of Parliament. Our advocacy emphasized the importance of integrating a commitment to human rights into the process of procuring and developing any technology that may replace or augment human decision-making in Canada’s immigration and refugee system. The trip resulted in productive engagement from all stakeholders.

The A-Team in front of Centre Block. Credit: Tourist.

What is Automated Decision-Making?

Automated decision-making describes a constellation of different uses of data or technology to assist or replace human decision-makers. Automated decision-making can describe technologies as complex as artificial intelligence or as simple as Excel spreadsheets. In the context of government administrative systems, automated decision-making can be used to review part or all of an individual’s application for a service or classification.

Canada’s immigration and refugee system currently requires humans to assess applications. Automating decision-making could allow a computer system to highlight or sort applications, suggest the probability of an application’s success, or even assess the application in lieu of a human decision-maker.

What does Bots at the Gate focus on?

Bots at the Gate aims to ensure that the Canadian government establishes a human rights-focused system for the review, design, and procurement of automated decision-making tools in the refugee and immigration system. The report does not intend to prevent the development of automated decision-making technologies, nor does it lionize human decision-making as inherently superior to automated decision-making. Particularly in the immigration space, human decision-making can often be inconsistent, flawed, or biased.

Bots at the Gate focuses on ensuring that Canada remains a world leader in both human rights and the development of appropriately designed automated decision-making systems.

Bots at the Gate’s analysis of the current and potential use of automated decision-making led the authors to recommend:

  1. Greater transparency into the uses and applications of automated decision-making systems, both at procurement and on an ongoing basis;
  2. The adoption of binding standards and review processes for the use of automated decision-making systems by the Federal government; and
  3. The creation of a federal task force to bring inter-industry and inter-disciplinary groups together to review the best uses for automated decision-making systems.

Why does Bots at the Gate matter?

The Canada Border Services Agency (the “CBSA”) and Immigration, Refugees and Citizenship Canada (“Immigration Canada”) are actively experimenting with the adoption of automated decision-making systems in the immigration context. In 2018, the Government tendered a Request for Proposals for the design of a system that would assist frontline decision-makers and automate, inter alia, the process of analyzing Pre-Removal Risk Assessments (“PRRA”) and Humanitarian and Compassionate decisions (“H+C”). PRRAs and H+C decisions are often highly discretionary. The applications are multi-faceted, and their review requires an assessment of the current social and political conditions in the applicant’s country of origin and a careful review of various aspects of the applicant’s personal history and circumstances. An incorrect assessment could result in the deportation of an individual to deadly conditions.

The stakes of automating this type of decision-making are high, and thus require a similarly high level of precise calibration on the part of the automated system. However, automated systems are not inherently neutral. If an automated decision-making system is fed biased or inaccurate data, or is provided biased or unclear criteria for success, the system will generate biased and inaccurate results.

Ensuring that human rights are integrated into the collection and analysis of data, the identification of potential sources of bias, and the establishment of the criteria for successful applications is essential for automation to be used appropriately and justly.

The A-Team

This trip was an awe-inspiring opportunity to watch an interdisciplinary advocacy team from the IHRP and Citizen Lab shine. The team’s particular strength was its ability to pivot and mould discussions to different audiences. The team included Petra Molnar (Technology and Human Rights Researcher at the IHRP), Samer Muscati (Director of the IHRP), Yolanda Song (William C. Graham Research Associate at the IHRP), Cynthia Khoo (Google Policy Fellow at the Citizen Lab), Professor Audrey Macklin (Chair in Human Rights Law at the University of Toronto Faculty of Law), and us (clinic students at the IHRP).

Meeting at Parliament with Arif Virani. Credit: Parliament Staffer.

The Trip

During our three days in Ottawa, we consulted with the Treasury Board, the Canadian Human Rights Commission, Immigration Canada, the CBSA, Global Affairs Canada, Innovation Canada, the Prime Minister’s Office, and various members of Parliament.

The advocacy team was very warmly received. Our meetings made it clear that policy-makers are aware that Canada is at a crossroads with automated decision-making technologies. Government officials understood the potential risks and benefits associated with automation, and were receptive to the idea that human rights should be integrated into program design from the outset. It was clear that this was the right time for this trip, as many organizations were still relatively early in their process of reviewing the potential of integrating automated decision-making into their service provision.

The meeting with the Treasury Board of Canada Secretariat (the “TBS”) was particularly animated. The TBS is currently in the midst of an iterative process of developing cutting-edge directives for the use of automated decision-making across the Federal Government. The TBS’s work on automated decision-making provided the backbone of Bots at the Gate. Of particular interest was their algorithmic impact assessment tool, which will help government agencies assess the risks associated with automating particular types of decision-making.

We had similarly exciting conversations with the Canadian Human Rights Commission (the “Commission”), which has been dedicating increasing energy to thinking about the intersection of human rights and technology. The Commission underscored that Canada has the potential to become a leader in both human rights and automated decision-making, and that the integration of a rights perspective can actually bolster the competitiveness and desirability of Canadian-designed automated decision-making technologies.

The advocacy team was thrilled to have the opportunity to engage with both Immigration Canada and the CBSA. Immigration Canada confirmed that they are scoping out automated decision-making technologies, but they are currently only looking to augment, rather than replace, human decision-making. They stressed that the use of automated decision-making tools would be for the purpose of triaging “easy yeses” rather than precluding entry. Importantly, they noted the risks involved in the use of these technologies in heavily discretionary decisions such as PRRAs and H+Cs. This sentiment was also reflected in our meeting with the CBSA. Of particular note was the CBSA’s clear understanding of the discriminatory potential of poorly calibrated automated decision-making, and their investment in pre-empting these types of issues at the development stage.

Global Affairs Canada provided an international perspective on the development of automated decision-making. They noted that governments across the world are trying to tackle the complexities related to the use of automated decision-making. Global Affairs Canada introduced us to a dynamic team that is carefully reviewing the operability of various automated technologies. They were particularly helpful in developing new directions for research, for instance by directing us to analyze different professional sectors’ understandings of bias, which could help develop a common language for bias across all the professions that develop and use automated decision-making systems.

Finally, our meetings with the Prime Minister’s Office and members of Parliament provided an opportunity for representatives to gain a greater understanding of this developing issue. Our discussions highlighted the fact that migration is currently a hot-button issue on Parliament Hill. These discussions reinforced that this is an important time to stress that any discussions about immigration must focus on ensuring that Canada has a fair, accessible, and human rights-compliant immigration and refugee system.

Conclusion

Overall, the Ottawa trip was incredibly positive, with strong engagement and feedback from stakeholders. It was clear that many parts of government are starting to tackle the hard issues surrounding the use of automated decision-making. It was also clear that Bots at the Gate provided a useful framework to help government agencies and departments think critically about the procurement and roll-out of automated decision-making systems.