Interview with Alexa Koenig: A Discussion on the Use of Open-Source Evidence in Legal Proceedings

By Andrew Parker (2L)
Photo: Alexa Koenig

Alexa Koenig, JD, PhD, is an adjunct professor and the Co-Executive Director and Co-Faculty Director of the Human Rights Center at the University of California, Berkeley School of Law. Among other appointments, she previously served as co-chair and co-founder of the Technology Advisory Board of the Office of the Prosecutor at the International Criminal Court. She is the co-author of Digital Witness: Using Open Source Methods for Human Rights Investigations, Advocacy and Accountability (Oxford University Press, 2019) and Graphic: Trauma and Meaning in Our Online Lives (Cambridge University Press, forthcoming 2023).

Tell us a little about how you first became involved in open-source investigations. 

I was fortunate to join the Human Rights Center as a graduate student researcher while working on my PhD at UC Berkeley. I was a lawyer before I came to Berkeley and had specialized in cyberlaw, so I was already thinking about the relevance of digital technologies to legal practice. During my first few years at the Center, our brilliant directors (Camille Crittenden and Eric Stover) recognized that digital technologies were beginning to transform how human rights researchers documented, and could document, violations. Smartphones and social media were rapidly shifting how human rights activists and investigators worked and had tremendous potential to shift the field further. At the same time, satellite imagery was increasingly being deployed to better understand everything from the movement of troops to the location of mass grave sites to the construction of roads and buildings that might be used to facilitate conflict. In 2009, the Center hosted what we think was the first global workshop on new and emerging technologies and human rights practice, called “Soul of the New Machine.” That kicked off the next fifteen years of work at the Human Rights Center, through which we were thinking critically about the role and potential role of digital technologies in securing accountability for violations. During that period, we often acted as a hub to bring technologists, human rights practitioners, lawyers, and others together to advance the field of practice.

As I transitioned from a graduate student researcher role to Executive Director in 2012, I became increasingly involved in our work with the International Criminal Court (ICC), grappling with how digital information could increasingly be used to corroborate survivor/witness testimonies and could be triangulated with more traditional types of evidence to strengthen the evidentiary basis of international criminal cases. Through that, I helped to set up the ICC’s technology advisory board (with Alison Cole) and became co-chair of the board (we had previously helped the ICC set up its scientific advisory board). What Alison had the foresight to recognize in the very early 2010s was the potential role that social media content in particular might play in documentation of international crimes. She and I began working together and in parallel to explore how the field could advance. While journalists were pioneering some incredible methods for online fact-finding and verification, we were figuring out where that work had relevance for international legal practice. The lack of clear answers around that led to the creation of the Investigations Lab (to explore the ways in which digital information could support legal, journalistic and human rights fact-finding) and later the establishment of the Berkeley Protocol.

You were instrumental in the development of the Berkeley Protocol, which seeks to standardize and strengthen open-source investigation practices. Do you think the Berkeley Protocol will pave the way for a greater use of open-source evidence in legal proceedings? 

Yes, I do think the Berkeley Protocol will help advance the use of open source evidence in legal proceedings. The protocol was created in direct response to lawyers’ need to figure out how online open source information (the information that is publicly accessible on the internet) can support legal practice. In the mid-2010s, I was getting calls from lawyers all over the world asking what they should do with the videos and photographs being sent to them over encrypted messaging platforms, or the potential documentary evidence they were finding on YouTube, Facebook, and other platforms. How do they download and preserve that information? How do they store it? How do they code and tag it for greatest utility, including so that they (or others) can find the information when needed? How do they think about the chain of custody of digital data? How do they verify authenticity?

The protocol was an attempt, first, to aggregate existing jurisprudence and research on these issues and, second, to create a uniform way of talking about digital open source information. The terminology was all over the place, with OSINT being the primary terme du jour. Yet, in most instances, the information being collected wasn’t just being used for intelligence (or decision-making) purposes, but also as lead information or linkage evidence. Ultimately, the protocol is a normative document that is helping to standardize how we communicate in the space, as well as raise awareness of the kinds of information that may be helpful to court processes. It also underscores how to handle such material responsibly, with an awareness of digital, physical, and psychosocial security risks, and it emphasizes the importance of incorporating professional ethics into the handling of these materials.

Ultimately, the protocol creates a foundation for using digital open source information in legal practice. What every legal team still needs, however, are standard operating procedures that get into the specifics of how they are going to work with the information: Which specific tools will they use to download and preserve the data? To analyze it? What kind of VPN will they use to mask their identity when conducting online searches? We have helped a number of organizations develop those SOPs, using the Berkeley Protocol as a starting place for asking important questions. The annexes in the back of the protocol are designed to help teams develop their own documentation templates, create investigation plans, and assess the tools they’re thinking about incorporating into their workflows.

We were intentional in making the protocol tool-agnostic, given the rapid pace of technological change. We wanted a document that could endure, and thus it had to be principles-based. Any SOPs float on top of that foundation.

Do you see the development of sophisticated “deep fake” technology as posing a risk to the reliability of open-source evidence? 

Yes and no. As many others have pointed out, the biggest risk politically and, to some extent, practically (at least currently) is the liar’s dividend, by which people allege that videos or photographs that show them engaging in illegal activity are fakes. Another big risk is that posed by shallow fakes: the mis-contextualizing of visual imagery or written posts, or the surface-level manipulation of what you see. As I’ve written about previously, the risk of deep fakes underscores the importance of engaging in a multi-part verification process like that outlined in the Berkeley Protocol, which has people assess the technical data affiliated with a post or other digital item, as well as the source and the content. And of course some extraordinary technologists are working on technical, political, and regulatory responses to the growing risks, which gives me hope, even if the field has become a bit of an arms race [between] those creating technologies designed to deceive and those creating technologies to assess authenticity and reliability.

Open-source investigations into human rights abuses often involve spending hours reviewing graphic or disturbing evidence. Do you have any advice on good mental hygiene practices for students interested in open-source investigations? 

Yes! Andrea Lampros, co-founder and past resiliency manager of Berkeley’s Investigations Lab, and I have written a book on this that will come out in June. There are a few best practices that we recommend, building off the incredible work already done by others in the space (a shout-out here to Sam Dubberley, currently of Human Rights Watch and formerly of Amnesty International, who has played a pivotal role in strengthening awareness of these issues). First, it’s important to generate awareness of your baseline functioning, so that you notice when you’re beginning to be affected (for example, sleeping poorly or a lot, or increasing your drinking) and can take time off, or engage in other practices that help you process heavy emotions, whether getting professional help, baking, or otherwise. It’s also critical to cultivate awareness of what helps you when you’re struggling. For some that may be meditation, for others exercise, for others playing with kids … it’s incredibly individualistic. You also want to be aware of your particular triggers. Again, everyone is different as to what affects them most, so if you’re part of a team, maybe there’s a way to divide up what you are looking at to share that load, or to support each other by taking on the material that affects you less than it affects others, and vice versa.

Second, there are time-tested tips and tricks, such as keeping the sound on videos turned way down or off, minimizing screen size when you’re just trying to see what content a video contains but don’t need to analyze it, and turning off autoplay on your social media feeds so you can control your exposure. Be sure to label graphic videos with the kinds of graphic content they contain before sharing them with others (so they can make smart choices about whether, when, and/or where to watch), and think critically about where and when you look at graphic content. For example, you want to avoid scrolling in your bed late at night or alone, in order to protect those times and spaces from becoming associated with upsetting material. Also, don’t just watch to watch: have a reason.

Third, community is important. Can students be partnered up, so that they have at least one other person as a support? Can you celebrate the milestones as a team? Can you connect with people who have been affected by the atrocity you’re witnessing, to understand the human dimensions of the work that you’re doing and to connect yourself to the reasons why this work is important?

While there’s so much more to talk about, I would also stress the importance of acknowledging and even celebrating impact. Even little “wins” are worth pointing out. 

Finally, RatedResilient provides a great resource for more information, as does the Investigations Lab at UCSC, some of whose materials were created in partnership with our team.

What advice do you have for students who are interested in OSINT but who don’t know how to get started? 

There are several excellent online and offline resources. One free online training was put together by Amnesty International with Advocacy Assembly. I would also follow various organizations and individuals on Twitter. If you pull up #OSINT, you’ll start seeing resources. A number of organizations provide free or low-cost courses. I’m biased, but I also think the book Digital Witness can provide a solid foundation for thinking about everything from the history of open source investigations, to the methods, to the ethics. We’re beginning work now on the next edition.

I’m really excited to see how this field evolves and am heartened to see these kinds of methods scale, ideally in ways that are thoughtful and keep the dignity of all those impacted at the center.