Human Rights Panel: Technological Experiments in the Digital Age

Friday, January 18, 2019

(L-R) Farida Deif (moderator), Canada director at Human Rights Watch; Petra Molnar, technology and human rights researcher at the IHRP; Irene Poetranto, senior researcher at the Citizen Lab; and Cynthia Wong, internet and human rights researcher at Human Rights Watch.

Story and photos by Chelsey Legge, 4L JD/MPP

An all-female panel of experts convened at the Faculty of Law on January 14 to discuss the human rights implications of new technologies and their use by states and public agencies around the world. Jointly hosted by Human Rights Watch Canada and the International Human Rights Program (IHRP), “Technological Experiments in the Digital Age: Artificial Intelligence, Internet Freedoms, and the State” drew a full crowd in the Moot Court Room. The panel was also livestreamed on Facebook, where the video has already been viewed more than 5,000 times.

Moderator Farida Deif, Canada director at Human Rights Watch, began by discussing the power of the internet and social media as tools for expressing and sharing ideas, used by activists and, increasingly, by victims of human rights violations. She cited the recent case of Rahaf Mohammed, an 18-year-old Saudi woman who was granted asylum in Canada after sharing her story on Twitter from her hotel room in Bangkok, Thailand.

Full Moot Court Room for the IHRP Human Rights Watch panel

“Fearing the power of new technologies, many authoritarian states have devised ways to filter, monitor and disrupt internet freedom,” said Deif. At the same time, governments are increasingly experimenting with new technology, such as artificial intelligence (AI). “How do we ensure that human rights are front and centre in these conversations?” she added.

Petra Molnar, technology and human rights researcher at the IHRP and co-author of the report Bots at the Gate: A Human Rights Analysis of Automated Decision-Making in Canada’s Immigration and Refugee System, discussed the human rights impacts of new technologies, such as AI and machine learning, in the immigration space.

“Any time you are trying to augment or replace a human immigration officer, there is a real impact on real people’s lives,” said Molnar. “Now is a unique time to speak across sectors about these issues, the ramifications of these technologies, and how we’re going to move forward.”

Irene Poetranto, senior researcher at the University of Toronto’s Citizen Lab, discussed the Lab’s research on cyber security from a human rights perspective, including censorship, content filtering, and the role of algorithms.

“The fundamental issue is that there is a lack of accountability and transparency [in] how these algorithms are put together,” said Poetranto. She identified three major areas of concern. First, the securitization of cyberspace – justified by reference to threats of state-sponsored espionage, cyber crime, and terrorism – risks infringing basic human rights, especially freedom of expression. Second, as more people become connected, particularly in states with weak rule of law and poor governance, more individuals and civil society organizations are vulnerable to digital attacks. Third, new technologies are extending the reach of the state; in Ethiopia, for example, government censorship and surveillance affect not only citizens but also members of the Ethiopian diaspora.

Panel poster

Cynthia Wong, senior researcher on internet and human rights at Human Rights Watch, stressed the importance of anticipating “how technology is going to impact our ability to enjoy human rights.” She explained that China is at the cutting edge of harmful uses of technology: the Chinese government intends to implement a social credit system, in which citizens are given a score indicating their trustworthiness based on algorithms that analyse massive amounts of personal data.

“It is truly one of the most Orwellian applications of technology,” said Wong. China also hopes to integrate voice recognition technology into mobile phones. Wong noted that Chinese companies are not the only ones enabling these repressive applications of technology; international companies, such as Facebook and Google, are working to create censored versions of their websites and search engines in order to access the considerable Chinese market.

Deif engaged the panellists in a thoughtful discussion about AI, surveillance, and internet freedom. She asked Molnar whether she sees the trends identified in Bots at the Gate in other countries. Molnar confirmed that many countries are turning to emerging technologies to manage migrants and refugees. “Internationally, we’re seeing a proliferation of technology at the border.”

At some European airports (in Latvia, Hungary, and Greece), governments are rolling out AI lie detectors. Molnar asked: “How is this going to work, exactly? Will these machines be able to take into account cultural differences in communication, [or] the impacts of trauma on communication and memory?” She added that these technologies force us to reckon with basic issues like informed consent. For instance, refugees in camps in Jordan must submit to retinal scanning to access the food aid they receive from the World Food Programme. “If you get your retina scanned, you get to eat. It’s quite coercive.”

Deif asked the panellists whether a computer could be programmed to be less biased than a human. Molnar responded: “We know there are really complex problems with human decision-making. The issue here is that we need to get away from thinking about technology as something that is neutral.” She explained that technology is very capable of replicating existing inequalities. “Really, it’s not neutral at all. It’s a social construct, just like law, just like policy, just like language.”

The conversation shifted to the effects of AI on our perceptions of fairness and accountability. Molnar noted that “the stakes are really high, especially when we’re experimenting with technology in an opaque space like migration.” Wong implored the audience to think about error rates: “Facial recognition [technologies] misidentify racial and ethnic minorities at a higher rate.” She noted that several studies have found racial and gender biases in AI technologies; for instance, one system designed to analyse emotions consistently rated black faces as more angry and unhappy than white faces.

Deif asked the panellists whether they are seeing a rise in ‘digital authoritarianism.’

Line to view panel

“Especially after the Arab Spring, governments have been trying to bend the internet towards greater political and social control,” said Wong. “China is really the leader in this, [but we see it] replicated in Vietnam, Saudi Arabia, and elsewhere.” Poetranto noted that many surveillance technologies originate in the West and are then marketed throughout the world with few restrictions – for example, the Canadian-made filtering software Netsweeper. “The concern is that there is a race to the bottom when it comes to cyber security.” Wong added that surveillance laws should be written as though the governments we most fear are in power. “Maybe you trust your current government, but a new government is only one election away.”

Finally, Deif noted that human rights are generally not front and centre in the decision-making processes of technology companies, and asked how we might change that. “The key is thinking about human rights right from the outset,” said Molnar. “This includes [incorporating human rights] education in engineering and coding programs.” Wong agreed, and stressed the importance of “breaking down the silos between the human rights communities and the tech communities.”

An audience member asked about the relationship between smart home technology and domestic violence. Poetranto spoke about technology becoming more affordable and ubiquitous, and how this opens new avenues for threats and harassment. “We’re seeing ‘smart abuse.’ It’s becoming more and more difficult for those being targeted by perpetrators to find a sense of safety, because now there are multiple avenues for access.” She said the problem begins in the design phase. “It’s important for those who are making applications and different types of technology to recognize how they can be used for abusive purposes.”

Questions continued to pour in after the event ended, both in person and online. The panellists were delighted with the high level of interest and engagement.

You can check out the discussion on Twitter at #HumanRightsTECHTO and #BotsAtTheGate.