USE SIGNAL, USE TOR?
THE POLITICAL ECONOMY OF DIGITAL RIGHTS TECHNOLOGY
by
Nathalie Maréchal
A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(COMMUNICATION)
December 2018
Copyright 2018 Nathalie Maréchal
Acknowledgements
This dissertation would not have been possible without the help, support, and
encouragement of many friends, family, and colleagues. I’d like to first thank my advisor,
Patricia Riley, who understood and believed in this project before anyone else did — including
me. I owe a great deal of gratitude to my dissertation committee, Christina Dunbar-Hester,
Patrick James, and Shawn Powers, for all the insightful comments and conversations along the
way. I’d also like to thank François Bar, Manuel Castells, and Jonathan Aronson for serving on
my quals committee and preparing me for the dissertation process.
A number of scholars beyond my committee were kind enough to give me feedback on
early drafts of this project, for which I am eternally grateful. I especially want to acknowledge
Gabriella Coleman, Robert Gehl, Sarah T. Roberts, and Joss Wright for their intellectual
generosity in talking through various aspects of this study with me. Comments on an early draft
of the Tor Project case study from participants in a workshop organized in October 2017 by Data
& Society in New York City were invaluable in shaping that monstrous glob of information into
a coherent narrative: Ahad Ali, Finn Brunton, Robert Domanski, Joan Donovan, Jessa Lingel,
Jeff Nagy, Kellie Owens, Jan-Hendrik Passoth, Nitin Sawhney, and Anne Washington. At
Annenberg, my friends and colleagues Lik Sam Chan, Cat Duffy, Michelle Forelle, and Sarah
Myers West were particularly generous with their comments and pep talks. Eternal love and
sunshine to my cohort — the entering cohort of 2013 — and to the many other PhD students
whose paths crossed mine in Los Angeles and elsewhere. Special shout-out to my colleagues
from the 2015 Annenberg-Oxford Media Policy Institute, the 2016 Hackademia Summer
Institute, and the 2017 Oxford Internet Institute Summer Doctoral Programme. Being part of a
global community of internet researchers has meant the world to me.
Before I joined the Trojan Family, the faculty and staff at American University taught me
how to think, how to write, and how to survive academia, especially Maria Green Cowles,
Anthony Quainton, Craig Hayden, Patrick Thaddeus Jackson, and Jamie Wyatt. Once an Eagle,
always an Eagle.
I first became interested in the political economy of digital rights technology during my
COMPASS fellowship at Ranking Digital Rights in 2014. From the very beginning, Rebecca
MacKinnon has mentored me to grow into the scholar and advocate I am today, and opened
doors for me that I didn’t know were there. The Ranking Digital Rights team, past and present,
has been a source of tremendous insight, growth, and inspiration over the years, and I am so
grateful to be working with you again. (Dear Priya Kumar: You will always be my “work wife”).
I probably would have lost my mind if it weren’t for my friends and family, who
continued to love me and make me laugh even when they had absolutely no idea what I was
talking about. Thank you for loving this “internet doctor” through it all — you know who you
are.
Most importantly, I want to thank my partner Patrick Ferguson, who has been my rock in
every way. I know he’s every bit as happy that this dissertation is finally done as I am.
And last, but certainly not least, I want to thank the dozens of research participants who
generously spent hours talking to me about their work, and who trusted me with their life stories.
Special thanks to the communities that gather at the Internet Freedom Festival, RightsCon, and
the Citizen Lab Summer Institute. This one’s for you.
Table of Contents
Acknowledgements
Table of Contents
List of Tables
Abstract
Introduction
Chapter 1 - Information controls, digital rights, and resistance
Chapter 2 - Methodology and ethical considerations
Chapter 3 - The digital rights space: Portrait of a social movement
Chapter 4 - A political history of U.S. internet freedom funding
About the case studies
Chapter 5 - Psiphon: Software for the soft war?
Chapter 6 - Tor: Peeling the onion
Chapter 7 - Signal: Bringing encryption to the masses
Chapter 8 - Telegram: From Russia with crypto
Conclusion
Appendix A: Interview script
References
List of Tables
Table 2.1. Participant observation at digital rights conferences
Abstract
This dissertation explores the political economy surrounding some of the new
communication tools undergirding 21st century social movements, what I call digital rights
technologies: tools that allow people to maintain privacy and/or access to the open internet.
Governments are increasingly willing and able to surveil internet users and to control the flow of
information within and across their borders, and Silicon Valley’s search for a business model has
given rise to surveillance capitalism, an economic system based on ubiquitous electronic tracking
of citizens’ online and offline activities. Classical authoritarianism and surveillance capitalism
are converging to give rise to networked authoritarianism, a political system that leverages ICTs
and media regulation to carefully control the expression of dissent in a way that gives the
impression of limited freedom of expression without allowing dissent to gain traction, with grave
implications for democracy and human rights. In response, a transnational social movement
dedicated to digital rights has emerged, with roots in previous waves of contention over
communication rights. Part of the digital rights movement’s repertoire of contention involves the
development of digital rights technologies, and this study follows the activists and technology
developers fighting for human rights in today’s digitally mediated world. This work adds to the
current literature through four case studies of organizations producing digital rights tools
(Psiphon, Tor, Signal, and Telegram). Drawing on over three years of participant observation in
the digital rights space, the study further analyzes the trajectory of this social movement, its
relationship to the U.S. government’s Internet Freedom agenda, and the impact of digital rights
technologies on geopolitics.
Introduction
What is digital rights technology?
This study explores the political economy surrounding some of the new communications
tools undergirding 21st century social movements, what I call digital rights technology. I begin
with a brief explanation of what I mean by that.
Digital rights technology refers to the broad set of hardware and software tools that allow
individuals to better protect their privacy, access the information they wish to access
notwithstanding censorship attempts by nation-states or other actors, express themselves as they
wish in both the public sphere and in private, or any combination of the above. All this occurs in
the context of a globally networked society where communication is mediated by information
and communication technologies (ICTs). This includes circumvention tools that help users get
around technical censorship (such as geofencing, keyword-based censorship, etc.) that is
mandated by the state, encrypted messaging applications, browser plug-ins that stymie
commercial web tracking, and other tools that perform similar functions. All kinds of people use
digital rights tools, sometimes without even knowing it, but they are particularly important for
activists and social movements.
This dissertation is specifically concerned with the use of digital rights tools to protect
and advance human rights, be they those of the communicant herself, of groups to which she
may belong, or of others. This is not to ignore the fact that digital rights tools are “dual use”
technologies (at the very least): all of the tools discussed in this dissertation can be, have been,
and will continue to be used for good and for ill. This is almost certainly true regardless of the
normative framework used to distinguish between good and evil. For example, secure messaging
applications like Signal, WhatsApp, and Telegram can be used to schedule dinner plans among
friends, coordinate political rallies, plan terrorist attacks, and carry out drug deals. Tor Onion
Services (formerly known as Hidden Services) similarly provide more-private access to websites
like Facebook, the New York Times, and ProPublica, but also to online drug forums and to sites
dedicated to exchanging images of child sexual abuse. Censorship circumvention software like
Psiphon and Lantern, virtual private networks (VPNs), and the Tor Browser allow users in China
(for example) to access pro-democracy online content, but can also facilitate the evasion of
intellectual property laws. I routinely use my university VPN to remotely access scholarly
databases hosted by the USC Libraries, but also to watch my favorite Netflix content while
traveling abroad — technically a violation of the Netflix Terms of Service, if not (to my
knowledge) a breach of university policies.
Many governments are increasingly willing and able to surveil internet users and to
control the flow of information within and across their borders. In response, free and open source
software developers — some with ties to specific dissident groups — are developing tools to
maintain privacy and/or access to the open internet and preserve an important “coordination
good” — an activity people engage in to win power, but that governments can restrict without
excessive repercussions on the economy (Bueno de Mesquita and Downs, 2006). For example,
restricting all internet access would be catastrophic for economic growth, but insisting on
monitoring internet activity only chills online political activities. Many governments can see
what an individual does online, so email and other communication tools can still be used to
coordinate sales and purchases for business, but are not likely to be used for organizing a
dissidents’ reading group or staging a protest—coordinating civil society. Citizens would still
surf the web for weather forecasts or grain prices in their region, but could be penalized for
accessing online content that is critical of the government. A number of governments have
identified digital rights tools as critical to their political opponents’ ability to organize, and have
restricted them accordingly. Because such restrictions are thought to have relatively little impact
on economic growth, the tools can be banned with seemingly
little risk to the government. (Whether this is an empirically accurate perception is a different
question that falls beyond the scope of this study.)
The concept of coordination goods is key to understanding the political economy of
information controls, internet freedom and digital rights. A closely related concept is
emancipatory communication practices (ECPs), theorized by Stefania Milan (2013) as “ways of
social organizing seeking to create alternatives to existing media and communication
infrastructure” (p. 9). Milan’s study examines community radio (see also Dunbar-Hester, 2014)
and “radical techies,” defined as “the groups and networks of individuals who provide alternative
communication channels on the Internet to activists and citizens, on a voluntary basis and
through collective organizing principles” (p. 12). This research takes a different approach and
primarily interrogates professionals for whom developing and supporting digital rights
technology is their main source of income (the ongoing professionalization of these “radical
techies” is examined in Chapter 3). I selected my case studies by identifying some of the most
impactful technology development projects, and only then delving into the financial
arrangements sustaining their developers (case study selection is further discussed in Chapter 2).
Background
Information and communication technologies (ICTs) have a long history of subverting
power relationships, spurring opposition from authority figures who then attempt to control
access to these technologies and their use. The most recognizable example is the printing press,
which was credited with ending the Catholic Church’s monopoly over the written word
(especially the Word of God) by enabling critiques of and changes to Church orthodoxy, thus leading
to the Protestant Reformation (Hanson, 2008). ICTs can, of course, also serve the powerful. For
example, transoceanic telegraph cables are widely credited with contributing to the growth and
stability of the European empires during the 19th and early 20th centuries (Starosielski, 2015).
The development and adoption of new ICTs is thus a common site for contestation between those
who have power and seek to keep it, and those who lack power and wish to seize it. This study
intends to advance our understanding of this struggle by investigating the emergence of internet-
related technologies as the latest chapter in human negotiations over who gets to communicate
what, when, and how (Deibert et al., 2008, 2010, 2011).
As the internet matures, the verdict on its potential as a driver of social change is mixed.
Online communication seems neither quite the technology of liberation that many utopians
dreamed of, nor the technology of control that technoskeptics have cautioned against (Deibert,
2013; MacKinnon, 2012; Morozov, 2011). Instead, the internet is an example of a coordination
good — an activity people engage in to win power, but that governments can restrict without
excessive repercussions on the economy (Bueno de Mesquita and Downs, 2006).
Today, significant contestation arises from the ability of nonstate actors — civil society
activists but also criminal and terrorist organizations — to impart, receive, and exchange
information securely and anonymously (see Owen, 2015). Although the particulars of each
national debate over internet regulation are specific to that cultural and political context, debates
in different nations are strikingly similar in theme and tone. Authorities insist that they need the
ability to surveil and censor web content and online communications for reasons of national
security, public morals and social harmony, while civil society activists argue that these powers
could be, and often are, used to stifle dissent and protect the status quo, to the overwhelming
benefit of existing elites. Opponents see the authorities’ arguments as insufficient at best, and at
worst as smoke and mirrors designed to obfuscate the true goal: building a network architecture
designed for social control, what Rebecca MacKinnon (2011, 2012) and others have called
networked authoritarianism. This is most visible in China, where the ruling Communist Party is
in the process of implementing a “social credit rating system” that will reward desirable
behaviors and discourage undesirable ones. This system will tie citizens’ access to credit,
transportation, education, job opportunities, and more to their scores (Botsman, 2017). Such
social engineering should not be a complete surprise as the Chinese government has been wary
of the internet’s potential to disrupt power relations from the beginning, and has been a pioneer
in techniques for controlling information since the early 2000s. It founded the Shanghai
Cooperation Organization (SCO) in 2001, along with Russia and several Central Asian republics,
as a learning forum for new opportunities to develop information controls (Nocetti, 2015).
Meanwhile, in the West, Silicon Valley’s search for a business model for the internet has
given rise to what has come to be known as surveillance capitalism (Zuboff, 2015). Also called
data capitalism (West, 2017), this economic system is based on ubiquitous electronic tracking of
citizens’ online activities, and increasingly, their offline behavior as well. The digital exhaust of
our daily lives is data mined by Google, Facebook, and other major data aggregators in order to
sell the activity of our “eyeballs” to advertisers, who in turn do their best to sell us anything and
everything. A host of companies in industries ranging from telecommunications to gaming
similarly track and monetize the activities of their users and any others whose behavior they can
capture. Scholars and social critics have been criticizing this arrangement for some time, but
these people were seen as Cassandras and largely dismissed as paranoid, especially by Silicon
Valley executives themselves. It didn’t take long for all sorts of influence peddlers — terrorist
networks, nation-states, and profit-driven clickbait factories churning out “fake news” among
them — to take advantage of these new pathways to people’s hearts and minds.
As the aftermath of the 2016 U.S. presidential election has revealed, this influence
infrastructure can be, and evidently has been, used for political purposes as well as commercial ones.
Internet-based advertising and media platforms enable the dissemination of political
disinformation through “precision propaganda” techniques like behavioral data collection, digital
advertising platforms, search engine optimization, social media management software, and
algorithmic advertising technology (Ghosh & Scott, 2018). The empirical evidence is still being
gathered and analyzed, but it is already clear that democracy in the social media era is markedly
different, as political parties and other actors can target micro-groups of voters with surgical
precision in opaque and unaccountable ways. Effects range from depressing voter turnout to
subliminally activating racial prejudices and using disinformation to induce voters to vote against
their own interests. Political communication and public debate do not work the same way today
as they did as little as ten years ago, when Barack Obama was first elected.
Facebook CEO Mark Zuckerberg initially dismissed the idea that “fake news” on
Facebook had influenced the 2016 election as “crazy” (Newton, 2016), even though then-
President Obama had warned him of the serious nature of the threat (Ong, 2017). Facebook
subsequently spent much of 2017 investigating whether Russian influence operations were
conducted via its platform (as have Google and Twitter, at the behest of the FBI and the U.S.
Congress). The evidence so far indicates that the Russians did utilize these platforms and
supports the argument made by sociologist Zeynep Tufekci in her September 2017 TED Talk:
Silicon Valley has built the infrastructure for surveillance authoritarianism, all in service of
getting people to click on ads (Tufekci, 2017b). This has grave implications for democracy and
human rights, implications that are undiminished by the free enterprise, disruption and
innovation discourses underpinning surveillance capitalism’s development and by the tech titans’
claim to have acted with the best of intentions. At least the autocrats at the SCO did it on
purpose.
The 2016 U.S. election
This study also serves as a timely engagement with the matter of the 2016 presidential
election in the United States, the “Brexit” referendum earlier that same year, and related
disinformation efforts that preceded these events and continue to follow. To some this might
appear to be a scholarly hazard of writing about contemporary events and being compelled to
address additional issues. However, these concurrent issues are simply exemplars of the systems
being examined [1] — outcomes that could be anticipated by the theory of surveillance capitalism.
The escalation of Russian disinformation campaigns against the Democratic Party in the
2016 U.S. presidential election, campaigns that had earlier played a significant role in the British
referendum on leaving the European Union, was part of a broader campaign by Vladimir Putin’s
regime to alter the balance of power in the international system. By hastening the demise of the
Pax Americana, attempting to dismantle NATO and the transatlantic alliance, and threatening the
European Union’s continued existence, Putin enhances Russia’s potential for regional
dominance. Moreover, Western promotion of democracy, human rights, internet freedom,
transparency, anti-corruption, and other civic values threatens Russia’s kleptocratic elites and
their ability to grow ever richer on the backs of ordinary people. These disinformation campaigns
should be viewed as part and parcel of an ideology that sees information as a threat (not a right),
dismisses civil society as so many “useful fools” who do the bidding of governments, and
cynically rejects human rights discourses as mere window-dressing (Maréchal, 2017a). Beyond
the pro-social evaluative stance of this research project, it is now evident that the
Russian government’s willingness to take “active measures,” in the parlance of Soviet-era
officials, against adversaries believed to threaten its sovereignty has the potential to gravely
destabilize the international system. If the United Kingdom ends up leaving the European Union,
and if the rest of the Trump presidency is as calamitous as its first year, the globe may be an even
more dangerous and unstable place than the one that emerged during the timeframe of this study.
[1] I first experienced this dynamic in 2011, while I was writing my MA thesis on media coverage of WikiLeaks, as the Manning trial and Julian Assange’s attempts to avoid extradition to Sweden were ongoing (Maréchal, 2013). Two years later, I was getting ready to start the USC Annenberg doctoral program, where I planned to study institutional reactions to WikiLeaks, when Edward Snowden and his journalist colleagues revealed the extent of NSA surveillance to the world, leading to a shift in my research agenda. And then, studying for my qualifying exams in 2015, I noticed that much of the new scholarship on privacy, surveillance, internet governance, and internet freedom included caveats to the effect of, “I didn’t set out to write a book about mass surveillance, but Snowden happened, and I can’t pretend that it didn’t even though it’s somewhat orthogonal to my research questions.” All this is to say that I am no stranger to baroque plot twists. Just like real life, social science is often stranger than fiction.
Research questions
As a scholar and activist, I align myself with those elements in global civil society who
are pushing back against networked authoritarianism. I hope that the present study will help my
comrades better understand the context in which they operate, as well as the tools and strategies at
their disposal. This dissertation’s primary research questions ask: 1) Where did the digital rights
movement come from, and where is it going? 2) What commonalities do the digital rights space
and the U.S. Internet Freedom agenda share, what tensions divide the two milieus, and how are
they negotiated? And 3) How are digital rights technologies changing global communication
systems? What does this tell us about the geopolitics of information?
Understanding the new geopolitics of information is essential to finding new discourses
and policy interventions to combat the rise and spread of networked authoritarianism. Trump and
Brexit exemplify the importance of these questions but they are simply indicators of systemic
issues that also gave rise to the failed attempts to bolster Marine Le Pen’s presidential aspirations
in France (see Maréchal, 2017b) or to oust Angela Merkel from the German chancellorship —
both of which owe much to the Kremlin. In the future, the actors might be China or North Korea
or a rising international terrorism organization. Scholars were initially most concerned with
attacks on international freedom of expression and privacy, but it is clear that the battle is now
being waged on many domestic fronts as well. This project focuses neither on the behemoth tech
companies who built this surveillance capitalism infrastructure, nor on the nation-states
competing for global dominance in the 21st century. Instead, this story follows the activists and
technology developers fighting for human rights in today’s digitally mediated world. Among the
constellation of such projects, I focus on two of the best-known tools available to internet users
who care about free expression, access to information, and privacy online: Tor and Signal.
Curiously, each of these tools (along with Psiphon, an anti-censorship tool) has a close financial
relationship to the U.S. government, receiving substantial funding tied to the Internet Freedom
agenda. The fourth case study examines Telegram, a platform created by Russian tech
entrepreneur Pavel Durov that combines secure messaging features with some social networking
affordances, and whose history intersects with recent Russian politics in surprising ways. As we
will see, the Telegram site offers interesting analytical contrasts that help illuminate the nature of
the relationship between the ideology of internet freedom and the digital rights movement.
Dissertation overview
This dissertation evolved during a fellowship at a policy think tank located less than a
block from the White House in Washington, D.C. [2] The digital rights community worries — albeit
not as acutely as in the winter of 2017 — that the Trump administration will gut the internet
freedom funding that sustains much of the space.
[2] I was a Senior Fellow in Informational Controls (Feb. - Sept. 2016) and Senior Research Fellow (Oct. 2016 - July 2017) at Ranking Digital Rights (RDR). RDR is a non-profit research initiative housed at New America’s Open Technology Institute, working with an international network of partners to set global standards for how companies in the information and communications technology (ICT) sector should respect freedom of expression and privacy.
Conducting research for this study raised a number of ethical questions from the
beginning. A member of the digital rights activist community in my own right, I wanted to tell
my colleagues’ stories and amplify their voices at a time when public debate about
communication rights (i.e. encryption, online expression, net neutrality) so often missed the mark
or lacked nuance. I also wanted to respect activists’ privacy and be mindful of the very real
security threats that many of them faced because of their work, and I wanted to meet high
standards of scholarship. I knew from several years of studying information controls and
participating in international activist conferences that the struggle for communication rights was
a highly contentious one. The information controls literature is surveyed in Chapter 1.
If the stakes for this study were always high, the climate in which I was working changed
palpably after the 2016 election. Many colleagues (notably those of Arab, Persian or South Asian
descent, regardless of nationality or country of residence) were forced to make difficult choices
about crossing the U.S. border. As a white, affluent U.S. citizen I was not overly concerned for
my own safety, and I was prepared to become an ACLU test case should Customs and Border
Protection (CBP) or Immigration and Customs Enforcement (ICE) demand that I unlock my
devices at the border, but I was worried about protecting my research data. For example, I spent
the month of March 2017 in Europe for fieldwork and conferences related to my fellowship,
taking my laptop with me. I was prepared to walk away from the physical machine if needed,
and felt confident that my hard drive was adequately encrypted, but I worried about losing my
data (I had promised research participants that my interview notes and their identifying
information would never be stored in the cloud). After consulting with several digital security
experts, I traveled with an encrypted backup in my checked luggage, and sent another one home
with a colleague on a separate flight. In the end, I passed through customs without incident, but
the experience led me to the conclusion that I should no longer cross the border with my main
laptop, and I purchased a Chromebook to use as a travel device. Chapter 2 of this dissertation
details this study’s research design and methods (including case selection) as well as some of the
ethical quandaries I’ve faced throughout this journey.
Chapters 3 and 4 examine the formulation and operationalization of digital rights/internet
freedom (the two phrases are often used interchangeably, though they are analytically distinct)
from the bottom-up and from the top-down, respectively. Chapter 3 looks at the transnational
social movement dedicated to digital rights, tracing its roots in the cypherpunk ethos (see West,
2018) and the international human rights movement. The analysis is grounded in the sociology of
social movements, and builds on the scholarship of a number of scholar-activists who identify
with the movement themselves — as do I (see, for example, Coleman, 2013, 2014; Croeser,
2012; Deibert, 2013; Juris, 2008; MacKinnon, 2012; Milan, 2013; Tufekci, 2017a). Finally, the
chapter draws on my autoethnographic experience as a researcher and advocate working within the
“digital rights space” (as members refer to their social movement) through my fellowships with
Ranking Digital Rights. It notably engages with ongoing controversies and debates within the
digital rights space about gender, inclusion, and sexual violence within the movement. The fourth
chapter traces the political history of internet freedom funding through the Bush and Obama
administrations. In doing so it builds on the work of Shawn Powers and Michael Jablonski
(2015), but with a particular focus on the human rights and technology development aspects of
what became popularly known as the Internet Freedom agenda during Hillary Clinton’s tenure in
Foggy Bottom. This chapter is informed by extensive interviews, both on and off the record, with
current and former government officials, and with recipients of internet freedom funding,
conducted between 2015 and 2018.
It is against this background that I then present my four case studies: Psiphon, Tor,
Signal, and Telegram. The first two are ethnographic case studies; access to the Signal
and Telegram teams was much more difficult, however, so those chapters rely on mediated analysis
of secondary data. I had tremendous difficulty securing interviews with developers
and others working on Signal, even though I occasionally interacted with members of the team at
conferences and via mailing lists. Telegram’s developers are highly skeptical of internet freedom
discourses and deliberately maintain their distance from the movement: they do not attend the
same conferences or participate in the same mailing lists, and there appears to be very little
social contact, if any, between the digital rights space and Telegram. The Telegram case study is
thus based on archival research, discourse analysis, and interviews with colleagues in the digital
rights space with specific expertise on Telegram’s history, the tool’s technical design, and its user
bases in Iran, Russia, and elsewhere.
After the four case study chapters, the conclusion offers discussion and analysis that
integrates concepts across the macro, meso, and micro levels of analysis. It considers how the
digital rights space — and civil society more broadly — is implicated in the open conflict
between the two paradigms about the role of information that are now in stark competition: an
illiberal authoritarian ideology that sees (and uses) information as a dangerous weapon, and
the other, a transatlantic alliance whose commitments to the free flow of information have yet to
adjust to the new economics of the social media era and surveillance capitalism.
This project is the fruit of over three years of fieldwork and active participation in the
“digital rights space”. It combines ethnography and policy analysis to argue that far from being
pawns in the great game played by nation-states and multinational corporations, digital rights
activists navigate the power relations inherent in internet governance and global information
politics to advance their own values and strategic objectives. They do so with various degrees of
sophistication and political savvy, but invariably with a strong sense of their own autonomy.
“Radical techies” (Milan, 2013) and “liberation technologists” (Postill, 2014) in particular
exhibit an unwavering belief in their ability to shape technology and thereby influence political
processes, though many stop short of specifying which political processes they mean to
influence. Rather, they emphasize the ways that digital rights tools empower their users
(imagined to be human rights defenders, political dissidents, journalists, and endangered
minorities) to engage in the political processes of direct relevance to them. Critics who dismiss
this rhetoric as a simplistic belief in technological solutionism are arguing against a straw man.
In over three years of fieldwork in the digital rights movement, I have never encountered the
argument that technology on its own would enhance democracy, promote human rights, or
otherwise save the world. Rather, developers of digital rights tools articulate their activities in
service of the work that the tools enable their users to do. They argue that journalism, human
rights advocacy, political activism, and other democracy-sustaining activities require privacy,
access to information, and freedom of expression, and that digital rights tools are necessary to
protect those rights in the era of surveillance capitalism and pervasive governmental information
controls.
Chapter 1 - Information controls, digital rights, and resistance
Introduction to the chapter
Chapter 1 explores the reasons behind the rapid growth of information controls and the
restrictive use of internet infrastructure by nation-states to affect political outcomes domestically
and in the international system — one of the phenomena that digital rights technology is
intended to combat. It combines empirical research and theoretical work from a wide range of
academic fields and disciplines — including communication, international relations, internet
governance, political science, science and technology studies, and the sociology of social
movements and political change — to map how nation-states (often in collaboration with
the private sector) use censorship and surveillance technology to stymie dissent, and why. Going
beyond technical descriptions and single-country analysis, the chapter explores how these
techniques have evolved over time, and how they spread from information controls pioneers like
China and Russia to other jurisdictions, notably through the Shanghai Cooperation Organization
(SCO). Special attention is paid to the role of private companies in the implementation of
information controls: both companies that explicitly design and sell technologies of control
(Hacking Team, Gamma Group) and companies that provide services to individuals in exchange
for data access (Google, Facebook).
The review of the literature will tackle three questions in turn, which collectively form
the background for this study: How have ICTs been used as tools for liberation, broadly defined?
How have nation-states reacted to ICTs’ potential to upend the existing social order? And, how
are ICT companies implicated in this dynamic? Taken holistically, these bodies of literature point
to the same conclusion drawn by MacKinnon (2011, 2012), Pearce & Kendzior (2012), Tufekci
(2017a, 2017b), and others: we are careening head-first toward networked authoritarianism.
This empirically-grounded chapter foregrounds the harms that information controls cause
for individuals, for social movements, and for societies at large, thus creating demand for digital
rights technology. It explores state actors’ motivations for controlling information flows, how
they exert this control, as well as why and how civil society actors develop digital rights
tools. Throughout the chapter, I also consider the private sector’s implication in these dynamics.
We now know that data mining for marketing purposes forms the backbone of the internet’s
business model. Social networking sites and the proliferation of “free” software and services
(Google, Yahoo, Facebook) have intensified this trend. At the same time, governments
increasingly are willing and able to surveil individuals and not just each other. They do so in part
by piggy-backing on data flows generated for commercial purposes (Angwin, 2014; Greenwald,
2014; Scheer, 2015; Schneier, 2015). For example, several of the NSA programs revealed by
Edward Snowden in 2013 relied on user information traveling between Google data centers in an
unencrypted format (Schneier, 2015). Google now encrypts data in transit between its facilities,
but companies like Geofeedia were long able to sell access to the social media “firehoses” (i.e.
the raw data from Facebook and Twitter) to law enforcement agencies, without any oversight or
transparency. [3] Users may feel that the utility they derive from using Gmail or Instagram is
enough to warrant trading away their data to Silicon Valley corporations, with the understanding
that in return they will be served with targeted advertising, but few understand that this data can
later be shared with government agencies, combined with datasets from other sources, and used
in myriad surprising, potentially dangerous ways without their knowledge or consent.
[3] In reaction to the Cambridge Analytica scandal, Facebook has recently started to restrict third parties’ ability to access user data. Analyzing these recent developments is outside the scope of this project.
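To make “encryption in transit” concrete, a minimal sketch, using Python’s standard socket and ssl modules with a placeholder hostname, contrasts a cleartext connection, whose bytes any observer on the network path can read, with a TLS-wrapped one that exposes only ciphertext:

import socket
import ssl

HOST = "example.com"  # placeholder hostname, for illustration only

# Cleartext HTTP: every byte is visible to any observer on the network path.
# Passive collection of unencrypted inter-data-center traffic exploited
# exactly this property.
with socket.create_connection((HOST, 80)) as plain:
    plain.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    print(plain.recv(100))

# Encryption in transit: the same request wrapped in TLS. An observer on the
# wire now sees only ciphertext, and the server's identity is verified
# against its certificate.
context = ssl.create_default_context()
with socket.create_connection((HOST, 443)) as raw:
    with context.wrap_socket(raw, server_hostname=HOST) as tls:
        tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        print(tls.recv(100))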
This sets the stage for my study, which focuses on a specific subset of the civil society
actors who are actively resisting this gathering threat by developing digital rights tools, and
whose particular institutional relationships to the U.S. government through congressionally-
appropriated internet freedom funding complicate traditional notions of geopolitics.
Technologies of liberation and control?
This chapter engages primarily with scholarship from international relations, political
science, media studies, and related fields, which have tended to regard ICTs as either inherently
liberating or controlling (Aouragh & Chakravartty, 2016). As limiting and lacking in nuance as
this frame can be, it is an important debate to trace because of its enduring influence on policy
and politics, both at the global level and within individual countries. It is nonetheless also
important to acknowledge the much more nuanced science and technology studies (STS)
tradition that stretches back at least as far as Lewis Mumford’s 1964 observation that “from late
neolithic times in the Near East, right down to our own day, two technologies have recurrently
existed side by side: one authoritarian, the other democratic, the first system-centered,
immensely powerful, but inherently unstable, the other man-centered, relatively weak, but
resourceful and durable” (cited in Winner, 1980). The “liberation or control” debate can be read
as deliberation over how the internet and associated technologies should be categorized. Before
turning to the recent history of that debate, let us consider the notion that technology is socially
constructed in ways that are much more nuanced than “good or bad,” a well-established theory in
the STS literature.
Adherents of the social construction of technology (SCoT) theory argue that “What
matters is not technology itself, but the social circumstances of their development, deployment,
and use” (Winner, 1980, p. 122), but Winner argues that doing the “detective work” to reveal the
details of that social construction is not enough: one must also “take technical artifacts
seriously,” as these tools or objects can be “political phenomena in their own right” (p. 123). In
some cases, the tangible technology might reflect a “way of settling an issue in a particular
community” (p. 123), as in the case of the Long Island parkway overpasses that were designed to be
too low for public buses (thus excluding the undesirables riding the bus from accessing
public parks and beaches) or the low-quality molding machine whose purpose was to defeat the
factory workers’ unionization efforts. Winner further points to the built environment in most
societies, which largely excludes people with disabilities from full participation in society, as an
example that “the very process of technical development is so thoroughly biased in a particular
dimension that it regularly produces results counted as wonderful breakthroughs by some social
interests and crushing setbacks by others” (p. 125). The disability rights movement and the
Americans with Disabilities Act are two social interventions aimed at remedying this bias, and
that leave their mark in physical artifacts like wheelchair ramps, automatic doors, and the like.
Yet another example is the mechanical tomato harvester, which nudged California’s tomato-
growing industry toward hardier (but less flavorful) varietals that could withstand the machine’s
rough handling and encouraged large-scale growing operations whose economies of scale
justified the expensive machines. In Winner’s telling, this is not the result of a conspiracy, but of
“an ongoing social process in which scientific knowledge, technological invention, and corporate
profit reinforce each other in deeply entrenched patterns that bear the unmistakable stamp of
political and economic power” (Winner, 1980, p. 126).
There are also instances of “inherently political technologies” that “appear to require, or
to be strongly compatible with, particular kinds of political relationships” (Winner, 1980, p. 123).
Winner’s analysis of such technologies builds on Friedrich Engels’ essay “On Authority” (1872),
which argues that 19th century sociotechnical systems like cotton-spinning mills, railways, and
seafaring ships demanded an authoritarian mode of organization to coordinate the many
sequences of tasks needed for their operation. The essay was a rebuttal against the anarchists of
the time who called for replacing capitalism with non-hierarchical modes of social organization,
arguing that industrial society such as existed in 19th century Great Britain demanded “the
creation and maintenance of a particular set of social conditions as the operating environment
of that system” (p. 130). Others — Winner gives the example of the solar power advocates of his
day — argue that some technologies may not necessarily require a particular set of social
conditions, but that they are “strongly compatible with” a given governance system. This is
essentially the argument behind internet freedom discourses: access to the free and open internet
may not require democracy, but it is more compatible with democracy than with
authoritarianism. Similarly, scholars like Rebecca MacKinnon and Zeynep Tufekci who argue
that the surveillance capitalism developed by Silicon Valley corporations has paved the way
toward networked authoritarianism are making a claim that this socio-techno-economic
assemblage has certain inherent political qualities.
Nearly four decades after Winner’s article, there is ample empirical evidence that “the
internet” (with all the ambiguity inherent to that term) is compatible with all sorts of socio-
political arrangements. Monarchies, theocracies, communist dictatorships, liberal democracies,
and even traditional hunter-gatherer societies have all connected to the network of networks.
Consequently, there is no model of governance that we can say the internet requires or makes
inevitable. But “the internet” is not a single technology, and while “the internet” has not (yet)
splintered in the way envisioned by Milton Mueller (2010, 2017) and others, it is undeniable that
people encounter a different “internet” when they connect in Los Angeles, Tehran or Beijing. It is
perhaps more useful, then, to think of distinct “internet technologies” like email, social media,
the World Wide Web, censorship circumvention software, or encrypted messaging applications.
“The internet” is not a thing; it is a sociotechnical system that comprises physical
infrastructure, technical protocols, legislation, economic incentive structures, unfathomable
amounts of content of dizzying variety, contradictory social norms, and more (DeNardis, 2015).
Consequently, this study considers four case studies of such internet technologies as distinct
sites.
As we will see, the point is not that internet technologies are “good” in some contexts and
“bad” in others. What we mean by “the internet” has changed over time, as different people with
different biases, values and goals developed new internet technologies that complemented and at
times supplanted the ones that came before, and found new ways to use existing technologies.
The analysis of that evolution is complicated by the insistent claims put forth by successive
generations of internet pioneers that their innovations were driven by “freedom” or “liberation,”
two terms that are used interchangeably to mean a host of different things (Kelty, 2014).
Chapters 3 and 4 will focus, respectively, on what communication rights activists and American
policymakers mean by “digital rights” and “internet freedom,” but other types of “freedom” have
also been associated with computers and electronic communication. These include, in no
particular order: the freedom to tinker; the freedom to enhance natural human thought; the
freedom to transcend the physical limitations of time and space; liberation from clunky
mainframe computers; from proprietary software; from surveillance; from commuting, etc.
In one sense, civil society actors figured out how to take advantage of internet
technologies to suit their needs before powerful institutions did, and for a time they were able to
take advantage of the internet’s newest affordances in a highly visible way. That dynamic is
discussed in the “internet as liberation” section of this chapter. As institutional power structures
— nation-states but also capitalist corporations — integrated the internet into their models of
how the world works, they developed and implemented strategies to control that new
sociotechnical system and neutralize the threat that it presented to their grasp on political power
and economic profit, as discussed in the subsequent “internet as control” section.
Keeping in mind the limitations inherent to both the “liberation/control” binary and to the
idea of “the internet” as a single technology, we now turn to an overview of the scholarly and
policy debates over the internet’s potential for liberation and control.
Internet as liberation
Digital rights technology matters because communication matters, and in the network
society, communication is simultaneously enabled and constrained by technologies, specifically
those that together comprise the internet. There is a rich literature on the myriad ways that
internet technologies have revolutionized society by changing how people can communicate with
each other across space and time. As with any new technology, reactions to the internet’s
diffusion have run the gamut from utopian to dystopian (Fisher & Wright, 2001). As Elizabeth
Hanson (2008) notes in her seminal book on “The Information Revolution and World Politics,”
Technological optimists stress the possibilities for positive change resulting from the new
technologies. The most optimistic are enthusiastic about the potential for increased
empowerment of individuals and other nonstate actors. The Internet in particular
provides a global resource of information to anyone who has access to a networked
computer and knows how to use it. […] The reduced cost and time to communicate
over long distances allows those who are stuck on the peripheries to transcend the
tyranny of geography and to interact with the rest of the world. This equalizing, leveling
effect can help to reduce gaps between rich and poor, urban and rural, and large states and
small ones. Enhanced information and communication capacities increase productivity,
facilitate international commerce, and foster growth in a global economy. Increased
transparency from multiple, globalized media makes authoritarian governments
more difficult to maintain and all governments more accountable to their citizens.
Electronic networks facilitate the organization and efforts of transnational
coalitions to influence the policy agendas of governments and international
organizations (pp. 4-5; emphasis added).
Hanson did not, of course, predict the central role that social media would come to play
in world politics, in the global economy, or in the daily lives of billions. In fact, neither Facebook
nor Twitter appear in the book’s index, despite being well-established in certain markets
(including the U.S.) by 2008. Manuel Castells’ (2007) work on power and counter-power in the
network society identifies social media and the blogosphere (“the media of mass self-
communication”) as the space where counter-power confronts society’s power holders. Facebook
and Twitter were still new then, and had not yet developed content moderation practices,
monetization schemes, or robust public policy agendas of their own. They were still several years
away from being recognized as the “sovereigns of cyberspace” (MacKinnon, 2012). In the lost
world of only a decade ago, “dominant elites [were] confronted by the social movements,
individual autonomy projects, and insurgent politics that [found] a more favorable terrain in the
emerging realm of mass self-communication” (Castells, 2007, pp. 258-259).
U.S. foreign policy has similarly viewed the internet as an instrument for “transforming
authoritarian, non-liberal states into liberal ones” (McCarthy, 2011, p. 90). The administrations
of Bill Clinton, George W. Bush and Barack Obama all tied the free flow of information online
to international human rights, in a continuation of longstanding U.S. international
communication policy. The rhetoric of the Cold War emphasized the superiority of the American
system of free speech, freedom of the press, and generally unrestricted access to information for
ordinary people, as opposed to the Soviet system of censorship, centrally-controlled media, and
restricted circulation of ideas and information. The free flow of information and free enterprise
(along with other factors) worked hand in hand to ensure the U.S.’s economic superiority over
the Soviet Union, a dynamic that would only accelerate with the rise of the information society
in the West. Indeed, Castells and Kiselyova (1995) argue that this death grip on information was
the primary reason for the USSR’s implosion, while many government officials credit American
public diplomacy (or propaganda — the difference can be somewhat subjective) with hastening
the demise of the “Evil Empire” (this is a debatable conclusion — see Morozov, 2011). One can
understand why successive generations of U.S. policymakers came to the conclusion that
information in general, and the internet in particular, helped open closed societies. The prevailing
view was that the internet would eventually turn autocracies into something resembling
democracies.
And to be fair, oftentimes the internet does just that. Internet-enabled ICTs can be read as
“liberation technology”: “any form of information and communication technology that can
expand political, social, and economic freedom” (Diamond, 2010, p. 70). Of course, these
technologies are just tools, and their affordances can just as easily be deployed for authoritarian
purposes as for democratic, rights-enhancing ones. The same was true of television, radio, the
telegraph, and the movable-type printing press. A more nuanced concept is Milan’s
“emancipatory communication practices” (ECPs) construct, which emphasizes the
construction of alternative communication infrastructure that “emancipates” activists and citizens
from the constraints of corporate and state-controlled media (2013). Nonetheless, Larry
Diamond’s seminal article on liberation technology (2010) is more nuanced than it tends to get
credit for, identifying four key ways that the internet can “liberate” society from the yoke of
oppression: by widening the public sphere, by documenting abuses and holding the powerful
accountable, by facilitating “fast, large-scale popular mobilizations” (p. 78), and through
economic development and improved access to social services. Diamond credits “liberation
technology” with enabling and/or publicizing the 2004 Orange Revolution in Ukraine, the 2005
Cedar Revolution in Lebanon, the 2007 Saffron Revolution in Myanmar, Iran’s Green Movement
in 2009, and more localized protests in Kuwait in 2005 and Venezuela in 2007, among others.
These events reinforced U.S. policymakers’ preexisting belief in the power of ICTs to
propel societies to freedom and prosperity. Madeline Carr’s masterful “U.S. Power and the
Internet in International Relations: The Irony of the Information Age” (2016) provides a useful
framework for understanding the paradigms through which American leaders view the
relationship between internet and freedom. Though her study does not focus on internet freedom
specifically, she demonstrates that U.S. policymakers hew to what she calls the “information age:
technology and social power” paradigm, which “links changes in approaches to power which
have emerged largely since the conclusion of the Cold War, with the information revolution” (p.
32). As Carr explains, this paradigm “is characterized by a technologically determinist view
which (often due to insufficient technical knowledge) tends to regard new technologies as value
laden and autonomous — an exogenous force which exerts its influence upon societies and states
and to which they must adapt. This leads to an assumption in the literature that Internet
technology has universal implications for state power” (p. 32). Policymakers and bureaucrats
understand internet freedom with different degrees of nuance, of course. Nevertheless, at its most
simplistic, the ideology of internet freedom presupposes that “the introduction of technology into
different societies will have the same outcome regardless of cultural, political or social factors or
influences” (p. 24). Underlying this universalist belief is an understanding of the internet and
related tools as essentially American technologies, imbued with liberal values of openness,
innovation, and “freedom.” This instrumentalist view of technology lionizes innovation as a
social good in and of itself, implying that any negative repercussions on society can and should
be solved through yet more technological innovation. Technology in general, and the internet in
particular, is treated as an actor “in possession of a specific set of values and destined for a
particular trajectory that must not be disturbed” (p. 23).
The Arab Spring of 2011, especially the aftermath of those revolutions, complicates those
rosy views of the internet’s impact on sociopolitical change. The history of the Arab Spring has
been extensively documented and analyzed (see Hussain & Howard, 2013a, 2013b; Tufekci,
2017a; and Coleman, 2014 on the role of Anonymous in the early days of the Arab Spring), and
so I will only discuss it superficially here. Two key aspects merit foregrounding. The first is the
use of ICTs (notably mobile phones and social networking sites) to mobilize protests, document
atrocities committed by the authorities, and communicate with the international community. As
Zeynep Tufekci (2017a) has demonstrated, ICTs had an ambivalent effect on the MENA region’s
social movements, helping protests get off the ground quickly while permitting movements to
leap-frog over much of the hard work of organizing that enables cohesion and resilience (this
dynamic is further discussed in Chapter 3). The second is the impact of U.S. diplomatic cables
released by WikiLeaks in fall 2010 (the so-called “CableGate” leaks) documenting the endemic
corruption, abuse of power, and incompetence of the region’s strongmen (Maréchal, 2013).
While Julian Assange’s 2012 claim that he and WikiLeaks “planned for most of what has
occurred over the past 12 months” (meaning the Arab Spring) was characteristically overblown
(Hastings, 2012), one must not dismiss the power for would-be revolutionaries of having
evidence — from American diplomats, no less — that their national leaders were every bit as bad
as they had suspected (York, 2013). In a very real sense, ICTs did help provide the conditions for
the Arab Spring to turn into a conflagration. It is safe to say that the Arab Spring would not have
played out the way it did if it weren’t for WikiLeaks, Anonymous, Facebook, Twitter, and the
free and open internet (Tufekci, 2017a). That being said, it is also important to note that much of
the analysis of the Arab Spring emphasizes the technological aspects of the region’s revolutions
and social movements to the detriment of the MENA region’s history of colonial
oppression (Aouragh & Chakravartty, 2016).
The story doesn’t end there, of course. Nearly eight years later, only Tunisia comes close
to having achieved a successful democratic transition. At the other end of the spectrum, Syria is
suffering through a full-blown civil war and humanitarian crisis. Analyzing the specific ways that
the various MENA governments cracked down on protesters after 2011 would be beyond the
scope of this dissertation, but it is clear that access to ICTs did not have the universally liberating
effects on the MENA region that a simplistic reading of internet freedom discourses would
suggest. Instead, autocrats learned how to use censorship and surveillance to inhibit information
flows, discover the offline identities of dissidents, and violently suppress the voices crying out
for change.
Those who argue that no one saw this coming were willfully ignorant of the many
warnings coming from academia and civil society. Earlier in this section I quoted Manuel
Castells’ description of the internet (specifically social media and the blogosphere) as “a more
favorable terrain” for civil society movements confronting the powerful (Castells, 2007, pp.
258-259). Crucially, the quotation continues: “Under such circumstances, a new round of power
making in the communication space is taking place, as power holders have understood the need
to enter the battle in the horizontal communication networks” (Castells, 2007, p. 259). The next
section traces precisely why and how governments in particular have reasserted control over the
means of communication.
Internet as control
If Max Weber famously described states as having a monopoly on the legitimate use of
violence, Monroe Price (2015) argues that the definition is currently being amended, with states
claiming a monopoly (or at least a privileged position) with respect to the “legitimate use of
information” (p. 4). As is true of violence, this monopoly can be “bargained away by treaty or
argument” (p. 4). For Price,
What is implicit in [Weber’s] argument is that a state will seek to recover elements of its
monopoly over violence that it has delegated, bargained away or lost through other
means. Perhaps the same is true with respect to speech and the state’s recuperative
impulse. (p. 4)
Price is very clear that he is not arguing that “a state ought in principle to have
management capabilities over information flows,” but that “major elements of such management
are inevitable” (p. 4). The central argument of Price’s book is that
In the constitutionally circumscribed areas where a government justifiably (and consistent
with carefully restrictive international norms) has a proper role to play, it should have the
implied capability of doing so, including through managing technical challenges . . .
[embracing] responses to powerful states that abuse control of information and weaker
states where the capacity to function needs buttressing. (pp. 4–5)
Indeed, the internet presents a significant challenge to state sovereignty, particularly for
countries whose conceptions of sovereignty are tied to controlling information flows. But even in
countries where information is articulated as a right, cases like the “San Bernardino iPhone
5c” [4] make clear that contemporary societies are in the midst of a fundamental renegotiation of
the norms governing the transmission of information: who can publish information, who can read
or otherwise consume it, and who can communicate with whom, and by what means, without
fear of interference, interception, or retribution. This renegotiation involves balancing complex
trade-offs between individual autonomy, privacy, due process, the rule of law, and other
considerations. No longer limited by the affordances of communicative technologies that
required the physical transportation of a material artifact, communication is now, in theory at
least, boundless. Yet the nation-state retains its territorial boundaries, and is fiercely attached to
its dominion over what happens within its borders. As Milton Mueller (2010) points out, this
disconnect between national sovereignty and the transnational nature of the internet is at the
heart of the controversy over internet governance, which includes regulating the ability of
internet users to evade governmental controls over the flow of information.

[4] In December 2015, Syed Rizwan Farook and Tashfeen Malik shot 26 people, killing 14,
at the Inland Regional Center in San Bernardino, CA. They were both killed by police after a
pursuit. In the course of the investigation into the couple’s motives and possible ties to overseas
terrorist groups, the FBI asked Apple to help decrypt Farook’s work-issued iPhone 5c, which was
encrypted in such a way that it could not be unlocked without Farook’s passcode (Farook had
physically destroyed his and his wife’s personal devices). A series of technical errors impeded
efforts to reset the account password, and Apple refused to modify the work device’s operating
software to allow the FBI to brute-force the password, prompting a lawsuit that was eventually
dropped after the FBI unlocked the phone via alternate means. There are conflicting reports that
the FBI hired either the Israeli company Cellebrite or unspecified “professional hackers.” See
Nakashima, 2016.
Although one UN report argued that online anonymity and encryption are critical to
human rights, a treaty instrument specifically guaranteeing these rights under international law is
missing (Kaye, 2015). National laws remain a patchwork of outdated, technologically ill-
informed, and at times contradictory legislation. Absent firm legal grounding, “code is
law” (Lessig, 2006) — the information society’s version of “might is right” — and nation-states
now have the ability to implement technical controls over internet use within their borders, and
even beyond. Complex and opaque arrangements between governments and corporations further
constrain citizens’ ability to communicate online. As Milton Mueller (2010) and others have
noted, netizens owe the relative freedom that they enjoyed in the first decade of the 21st century
to governments’ delayed reaction to the emergence of cyberspace as a site of contention. The
respite is over now, and the battle for control of the means of communication is well underway.
This section explores the reasons behind the rapid growth of information controls and the
restrictive use of internet infrastructure by nation-states to affect political outcomes domestically
and in the international system. It combines empirical research and theoretical work from a wide
range of academic disciplines — including internet governance, communication, political
science, international relations, science and technology studies, and the sociology of social
movements and political change — to unpack how nation-states (often in collaboration with the
private sector) use censorship and surveillance technology to stymie dissent. The role that private
companies in the ICT sector play in this dynamic is discussed throughout.
Information controls
The term “information controls” refers to an emerging field at the intersection of
communication, international relations, political science, sociology, computer science, and law,
that documents and theorizes on the struggle between nation-state attempts to control citizens’
online activity, and civil society efforts to thwart those attempts. The phrase is closely associated
with the University of Toronto’s Citizen Lab and its director, Ronald Deibert. The Citizen Lab
defines “information controls” in the following terms:
Information controls can be broadly understood as techniques, practices, regulations or
policies that strongly influence the availability of electronic information for social,
political, ethical, or economic ends. Controls can be developed by public or private
bodies, deployed at varying aspects of digital networks, and are variably apparent to users
of Internet-connected systems.
Technically, these controls can include filtering, distributed denial of service attacks,
electronic surveillance, malware, or other computer-based means of denying, shaping and
monitoring information. We will also consider tools and methods for bypassing or
undermining information controls such as Internet filtering circumvention,
anonymization, and encryption, which in aggregate empower individuals to exercise their
human rights online.
At a policy level, information controls can include laws, social understandings of
‘inappropriate’ content, media licensing, content removal, defamation policies, slander
laws, secretive sharing of data between public and private bodies, or strategic lawsuit
actions. There has been a marked increase in the use of such controls over the past fifteen
years. To counter the harmful uses of these controls new policies have developed,
including international agreements on internet governance principles, encryption
measures for communication security and user anonymity, export controls and targeted
sanctions, and regulations protecting privacy and other rights.
Studying the technical operation of information controls and the political and social
context behind them is an inherently multidisciplinary exercise. However, currently there
is no established discipline that incorporates both technical and contextual research
approaches because expertise is scattered across a range of disparate fields (Citizen Lab
website, 2015).
Information controls research to date falls under three broad categories: why states
implement control over information, how they do so, and examinations of the social movements
and activists resisting information controls. This section covers the first two categories, while the
third will be covered in the “digital rights space” chapter (chapter 3).
Why: The case of China
The superficial answer to why illiberal and undemocratic governments seek to control
information is that access to information, freedom of expression, and the ability to communicate
privately (notably through the use of encryption) threaten the sociopolitical status quo and
elites’ hold on power. This interpretation is commonly implied in internet freedom discourses (as
I frequently observed during my fieldwork), and is not necessarily incorrect in all (or even most)
cases, but it obscures the diversity of normative frames that can be found in each national
context. Analyzing each national discourse regarding internet regulation would be well outside
the scope of this dissertation. Instead, this section provides an overview of the narrative deployed
by China’s government to justify information controls and mobilize domestic as well as
international support for their policies. The discursive justifications provided by the Iranian and
Russian governments will be analyzed in the case studies focused on Psiphon and Telegram,
respectively, owing to the particular relationship between each tool and that government.
These three countries — all U.S. geopolitical adversaries — were selected primarily
because of their active engagement in internet governance forums and increasingly sophisticated
approaches to censorship and surveillance. They are also regional hegemonic powers that seek to
influence other nations in their respective regions (and beyond). In turn, these characteristics
mean that there is a wealth of scholarship about the policy and practice of information controls in
these three countries. Last but not least, all three governments articulate their information
controls policies in opposition to the U.S. Internet Freedom agenda, and the three countries’
populations are implicitly and at times explicitly construed as beneficiaries of internet freedom
programs, including the development of digital rights tools.
The Chinese Communist Party (CCP) has been in power since the 1949 revolution, and
control over information and dangerous ideas has been a cornerstone of state policy since that
time. The proper role of journalism is seen as being subservient to politics, what historians call
“Maoist journalism” (Wu, 2017). The CCP is the sole authorized political party in the country,
monopolizing control over all press and media, which are required to disseminate officially
approved news and analysis exclusively (Xiao, 2011).
Mainland China first connected to the global internet in 1987 (though its use was limited
until the mid-1990s), and the state gradually developed a sophisticated approach to information
controls, combining technical filtering, an intermediary liability regime that makes internet
companies responsible for user-generated content and even content that users access through the
service, “real world” repression and intimidation (which in turn induce chilling effects), careful
public opinion monitoring, and a vast propaganda and media manipulation apparatus
(MacKinnon, 2011; Xiao, 2011). While Western discourse about China frequently invokes
metaphors borrowed from the Cold War, such as “the Great Firewall of China” or “the Iron
Curtain 2.0,” the reality is much more complex (Tsui, 2008). In an unpublished conference paper,
Chinese scholar Li Yonggang has suggested that a better metaphor would be a hydroelectric dam:
China's current crop of leaders come primarily from a technocratic, engineering
background, and thus they apply their engineers' perspective to governance. If you
approach Internet management in this way, the system has two main roles: managing
water flows and distribution so that everybody who needs some gets some, and managing
droughts and floods - which if not managed well will endanger the government's power.
It's a huge complex system with many moving parts, requiring a certain amount of
flexibility - there's no way a government can have total control over water levels or water
behavior. Depending on the season, you allow water levels in your reservoir to be higher
or lower... but you try to prevent levels from getting above a certain point or below a
certain point, and if they do you have to take drastic measures in order to prevent
complete chaos. The managers of the system learn as they go along and adjust their
behavior accordingly. It is a very resource-intensive enterprise in terms of people, money,
and equipment, but while some functions are delegated to the private sector the
government doesn't feel comfortable privatizing the whole thing, for fear of loss of
control (and power) (cited in MacKinnon, 2008).
Indeed, an empirical study by Harvard’s Gary King, Jennifer Pan and Margaret E.
Roberts (2013) found that “the censorship program is aimed at curtailing collective action by
silencing comments that represent, reinforce, or spur social mobilization, regardless of content,”
and that “posts with negative, even vitriolic, criticism of the state, its leaders, and its policies are
not more likely to be censored” — which had been the assumption among many experts (p. 1).
The system of technical filters that limits the ability of Chinese “netizens” to access
sensitive content located on servers beyond the country’s borders is only the crudest component
of the overall apparatus, and is easily circumvented with a modicum of technical skill (though
the recent crackdown on VPNs has made the task more arduous). Nor is China’s internet a stale,
staid place: it is “a highly contentious place where debate is fierce, passionate, and also
playful” (MacKinnon, 2010; Yang, 2009). Indeed, the use of social steganography, wordplay, and
codewords by Chinese internet users wishing to fly below the censors’ radar has been well-
documented (Xiao, 2011). The genius of the Chinese system lies in its careful calibration of
censorship, chilling effects, and media manipulation (notably through the use of the “fifty-cent
party,” bloggers who are reportedly paid that sum for each pro-CCP post). As a result, in China,
“the Internet is a subtle and effective tool through which the CCP is actually prolonging its rule,
bolstering its domestic power and legitimacy, while enacting no meaningful political or legal
reforms” (MacKinnon, 2010, pp. 10-11) — a dynamic that Baogang He and Mark Warren (2011)
have called “authoritarian deliberation.” According to Xiao Qiang (2011), a leading Chinese
human rights defender who teaches at the University of California at Berkeley,
The Chinese government’s Internet-control system mainly aims to censor content that
openly defies or attacks CCP rule or contradicts the official line on such taboo topics as
the Tiananmen Square massacre or Tibet. Most important, however, is preventing the
widespread distribution of information that could lead to collective action such as mass
demonstrations or signature campaigns (p. 52).
The CCP sees the control of information, and therefore of the population, as a core
responsibility for the overall benefit of society. For example, in an address to the CCP Politburo
in January 2007, then-President Hu Jintao called for improved technologies, content controls,
and network security for monitoring the internet, saying, “Whether we can cope with the Internet
is a matter that affects the development of socialist culture, the security of information, and the
stability of the state” (quoted in Xiao, 2011, p. 51). The narrative used to justify these techniques
rests on the notion of “constructing a harmonious society.” Rooted in China’s Confucian
traditions and popularized by Hu Jintao, the “harmonious society” paradigm privileges the
common good of society (as determined by its leaders) over the rights, wishes and desires of its
individual members. Individuals are expected to play their designated roles, to perform their
duties to their elders and other authority figures, and to avoid disturbing social harmony. The
relationship between the government and the population is highly paternalistic: the government
has a duty of care to the people, who in turn are expected to respect and obey the Party
leadership.
A distinguishing characteristic of Chinese information controls is the attention paid by the
CCP to online discourse — the “authoritarian deliberation” discussed above (He & Warren,
2011). While some authoritarian regimes monitor the internet primarily (or even exclusively) to
identify dissidents and censor them (or worse), the CCP tracks online discourse and modulates its
own decisions accordingly, with individual cadres using the internet to keep their subordinates in
line when corruption, police violence or environmental degradation is too egregious to ignore,
for example:
These more forward-looking officials believe that the government should selectively
tolerate or even welcome Internet expression as a barometer for public opinion.
Permitting such expression allows the government to collect information about society, to
be more responsive to citizens’ concerns, and to provide a safety valve for the release of
public anger. The Internet can also help to hold local officials more accountable—to the
central authorities as well as to the public. In addition, the Internet plays a role in
promoting political change when the interests and agendas of different government
agencies or administrative levels do not align. In such a case, public opinion may help to
bolster one side over the other (Xiao, 2011, p. 59).
How: Five generations of information controls
Having explored the ideology behind China’s information controls regime, I now turn to
the implementation of these restrictions around the world. The Open Net Initiative’s
groundbreaking trilogy, consisting of Access Denied: The practice and policy of global Internet
filtering (2008), Access Controlled: The shaping of power, rights, and rule in cyberspace (2010),
and Access Contested: Security, identity, and resistance in Asian cyberspace (2011) argues that
the global practice of information controls has undergone three generational shifts in rapid
succession (Deibert, Palfrey, Rohozinski, & Zittrain, 2010). [5] It should be noted that the
“generations” described below are not necessarily chronological in their use. Most governments
that censor the internet do so using a mix of techniques from different generations, and some skip
a generation altogether.

[5] The Open Net Initiative (ONI) was a partnership between the University of Toronto’s
Citizen Lab, Harvard’s Berkman Center (now Berkman Klein Center), and the SecDev Group, an
Ottawa-based open-source analysis and intelligence consultancy. ONI disbanded in 2014, in part
due to ideological differences between the principal investigators.

First generation information controls use crude technical means like barriers to access,
keyword filtering, and wholesale blocking of entire websites and domain names. These defensive
measures prevent the population from accessing forbidden content, essentially aiming to translate
national boundaries from the physical world into cyberspace. China’s “Great Firewall” is a
classic example of first generation information controls, and the Roskomnadzor blacklist is a
poorly executed example of the same. [6] Democratic countries also practice first generation
information controls, albeit on a more limited basis, and they generally focus on copyright
violations and images depicting the sexual exploitation of children. First generation controls are
relatively easy to measure and to circumvent, notably through the use of digital rights tools like
Tor and Psiphon — two of this project’s case studies (Deibert, 2016).

[6] Media in the Russian Federation, including the internet, is regulated by a branch of the
Ministry of Communications and Mass Media, the Federal Service for Supervision of
Communications, Information Technology, and Mass Media, better known as Roskomnadzor. It
has the authority to block certain types of content without a court order, but its keyword-based
filtering process is notoriously clunky and is unevenly applied by different Internet Service
Providers (Freedom House, 2015).
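To make the crudeness of these techniques concrete, what follows is a minimal
illustrative sketch, written in Python, of the logic behind first generation filtering. It is a toy
model rather than any government’s actual implementation, and the blocklist entries are
invented for illustration.

# Toy model of first generation information controls: an in-path censor
# checks each requested URL against a blocklist of domains (wholesale
# blocking) and of keywords (keyword filtering). All entries are hypothetical.
BLOCKED_DOMAINS = {"banned-news.example.org"}
BLOCKED_KEYWORDS = {"protest", "demonstration"}

def request_allowed(url: str) -> bool:
    """Return True if the censor lets the request through."""
    host = url.split("//")[-1].split("/")[0].lower()
    if host in BLOCKED_DOMAINS:  # wholesale blocking of an entire site
        return False
    if any(kw in url.lower() for kw in BLOCKED_KEYWORDS):  # keyword filtering
        return False
    return True

assert request_allowed("http://weather.example.com/today")
assert not request_allowed("http://banned-news.example.org/story")
assert not request_allowed("http://search.example.com/?q=protest")

Circumvention tools defeat this model by hiding both the destination and the keywords from
the censor, for instance by tunneling requests through an encrypted proxy located outside the
censor’s network, which is precisely what tools like Tor and Psiphon do.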
The second generation involves creating legal and technical frameworks allowing public
and private authorities to deny access to information on a case-by-case basis. “Just-in-time”
blocking and sporadic internet shutdowns linked to specific political events like elections and
protests exemplify this method. The cooperation of private companies like internet service
providers (ISPs), telecommunications operators and over-the-top intermediaries like social
networking sites and messaging apps is often required for second generation strategies to be
effective. For example, governments frequently order ISPs to shut down network access in
specific areas or to filter the specific protocols associated with VoIP calls or even individual
messaging applications like WhatsApp. The companies that produce the hardware and software
required for network operations — routers, switches, cables, servers, and more — are under
pressure to “bake in” censorship and surveillance by installing so-called “backdoors” into their
products. Many Chinese chat applications include such capabilities, and the Russian government
mandates SORM compliance for all internet intermediaries operating in the country. [7] Repeated
calls from various countries — including democracies officially committed to free expression
and privacy — to ban secure messaging tools (such as WhatsApp and Telegram, also examined
as part of this study) or to require the use of “responsible encryption” also fall under this
category, as do efforts to ban circumvention tools like Tor and Psiphon that help users bypass
first generation controls. Finally, requirements that users register their SIM cards with the
authorities, provide proof of identity when using an internet cafe, and link their online activities
to their “real name” (and often their biometrics as well) make anonymous speech impossible,
creating “chilling effects” that inhibit the expression and even the consumption of controversial
online content (Deibert, 2016; Penney, 2017).

[7] SORM is the Russian acronym for System of Operational Investigative Measures, the
country’s apparatus for so-called “lawful interception” of telecommunications.
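The protocol-level filtering described above can be sketched in the same illustrative
spirit. In this toy model, a network middlebox drops traffic that looks like VoIP signaling; the
port numbers reflect real SIP conventions, but the two-rule classifier is a deliberately simplified
assumption, as real deep packet inspection systems are far more sophisticated.

# Toy model of second generation, protocol-level blocking: a middlebox
# classifies packets by destination port and a crude payload signature,
# dropping whatever resembles SIP/VoIP signaling.
SIP_PORTS = {5060, 5061}  # standard SIP signaling ports used by many VoIP services

def should_drop(dst_port: int, payload: bytes) -> bool:
    """Return True if the middlebox should drop the packet."""
    if dst_port in SIP_PORTS:  # block by destination port
        return True
    return payload.startswith(b"INVITE ")  # crude SIP request signature

assert should_drop(5060, b"")
assert should_drop(8080, b"INVITE sip:alice@example.com SIP/2.0")
assert not should_drop(443, b"\x16\x03\x01")  # an ordinary TLS handshake passes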
As we have seen, first generation information controls are defensive in nature, while
second generation tactics mobilize social institutions and the private sector to entrench
censorship and surveillance. Building on these, third generation controls go on the offensive. For
example, the Citizen Lab has documented targeted cyber-espionage against Chinese human
rights defenders and democracy activists perpetrated by state actors, using the same toolkit
implicated in traditional espionage against governments and corporations, such as targeted use of
malware (“spearphishing”) to spy on activists, infiltrate their organizations and disrupt their work
(Marczak et al., 2015). More prominently, the Russian government has been conducting
disinformation operations in a wide range of countries. Here again, private companies are
implicated in surveillance and propaganda efforts that are driven by state actors. Companies like
Gamma Group, Hacking Team, and Blue Coat sell “off-the-shelf” surveillance tools that are
ostensibly designed for legitimate law enforcement purposes, but in practice are used to target
political opponents, members of minority groups, and human rights activists. Nonstate actors like
ISIS have also been documented to use these tools (Scott-Railton & Hardy, 2014). And of course,
the seemingly Russian-sponsored attacks on a number of liberal democracies were made possible
by the exploitation of American platforms Facebook and Twitter.
Deibert (2016) further identifies advocating for illiberal practices in internet governance
arenas as a possible fourth generation of information controls, intended to embed authoritarian
structures in the international system. This includes efforts to impose nation-state control over
cyberspace by weakening multistakeholder forums like the Internet Corporation for Assigned
Names and Numbers (ICANN) and the Internet Governance Forum (IGF) in favor of multilateral
venues like the International Telecommunication Union (ITU) and the United Nations (UN),
where only nation-states have a seat at the table and the “one country, one vote” rule prevails.
Regional multilateral organizations like the Shanghai Cooperation Organization (SCO), the
Collective Security Treaty Organization (CSTO), and the Gulf Cooperation Council (GCC)
provide forums for like-minded authoritarian governments to collaborate. Conducting business in
regional languages (i.e. not English) and meeting behind closed doors, these groups do not even
pretend to be transparent and operate below the radar of the media, civil society, and other
watchdog groups.
Finally, Zeynep Tufekci (2017a) notes that in many cases censorship operates not by
preventing the production and dissemination of information, but by denying it the public’s
attention: flooding the public sphere with false, misleading, or distracting information drowns
out dissenting voices. The issue moving forward is thus less censorship through the silencing of
dissident voices than censorship through the denial of an audience. This type of media
manipulation by “distributed denial of attention” should be viewed as a fifth generation of
information controls.
Origins of digital rights technology
Digital rights technology did not emerge from the ether any more than other technologies
did. The international relations and political science fields’ understanding of technology is
underdeveloped, though scholars like Madeline Carr are attempting to bring those disciplines up
to speed. In “U.S. Power and the Internet in International Relations: The Irony of the Information Age” (2016),
Carr argues that “instrumentalism and technological determinism which have dominated IR
theories about power are both inappropriate lenses through which to examine Internet
technology” (p. 189), and that “there can be no basis for studies that continue to assume that the
Internet has universal power implications for other states” (p. 190). This study takes its cue from
Carr, and from scholars like MacKenzie and Wajcman (1999) and Pinch (1988) before her, in
rejecting as artificial the categorical distinctions between science, technology, artifacts such as
hardware, and the uses to which humans put technology. In other words, to study technology is
to study society, and I engage with instrumentalist approaches and technological determinism critically.
This study takes the notion that technology is socially constructed as its point of departure,
delving further into the question of how society shapes technology — and vice-versa.
This dissertation aims to contribute to the field of Internet Studies by examining the
origins of digital rights technology — their social construction — through a set of four case
studies, three of which are intimately connected to the U.S. Internet Freedom agenda in ways that
will be discussed in detail in the respective case studies and in the chapter devoted to the Internet
Freedom Agenda. The object of the fourth case study, Telegram, arose in reaction to heavy-
handed interference from the Russian government in its citizens’ online political speech and
organizing, but keeps the global digital rights activist community at a distance and maintains an
oppositional stance vis-à-vis the Internet Freedom agenda.
My research shows that while the impetus for creating digital rights tools arose from the
grassroots, and the values embodied in the tools and embraced by their creators are genuine, it is
highly unlikely that Tor, Psiphon, or Signal would exist, much less be as successful as they are
today, if it weren’t for the political and financial support of the U.S. government (as for
Telegram, until recently its financial backing came exclusively from its founders, the Durov
brothers; the group’s experiments with cryptocurrencies will be detailed in chapter 8). This is not
to say — as some have suggested — that the global digital rights movement is Astroturfed. The
chapter “The Digital Rights Space: Portrait of a Social Movement” traces that movement’s roots
in earlier waves of contestation over communication rights, including the international human
rights movement, the anti-corporate globalization movement, and the cypherpunk movement of
the 1990s, centered around San Francisco. As we will see, the ideologies of internet freedom,
international human rights, and cypherpunk ideals often make for awkward bedfellows.
Understanding these tensions is key to understanding not only the global digital rights movement
but also the very nature of communication flows in the 21st century.
Resistance
Digital rights technology emerged at nearly the same time as information controls
technology. CGI Proxy, an early censorship circumvention tool, was coded by James Marshall in
1996, about a year after China and Singapore first announced that they would be restricting their
citizens’ internet access. As for the use of encryption by private individuals, it has been highly
contentious since Whitfield Diffie and Martin Hellman first developed public-key cryptography
in 1976 (Levy, 2001; West, 2018). From the very beginning, the story of the internet has been a
constant negotiation and renegotiation of who gets to communicate what, to whom, at what cost,
and under what kind of oversight. For example, the shift from paying for internet access by the
minute or megabyte to monthly access fees changed the way people used the internet. Policy
battles over net neutrality largely oppose internet service providers and over-the-top providers
over who gets to extract the most economic profit from internet infrastructure. In much of the
Middle East and North Africa region, state-controlled telecommunications companies block
Voice over Internet Protocol (VoIP) services because the availability of free international calls
threatens their revenue streams — though it is also the case that police, military and intelligence
services have a vested interest in limiting access to communication channels that they cannot tap
into for surveillance purposes.
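To illustrate what Diffie and Hellman’s breakthrough made possible, here is a toy key
exchange in Python: two parties derive a shared secret over a public channel without ever
transmitting that secret. The numbers are far too small to be secure and are chosen purely for
illustration; real implementations rely on vetted cryptographic libraries and much larger
parameters.

# Toy Diffie-Hellman key exchange (illustrative only).
p, g = 23, 5  # public prime modulus and generator

alice_secret, bob_secret = 6, 15  # private values, never transmitted

alice_public = pow(g, alice_secret, p)  # g^a mod p, sent in the clear
bob_public = pow(g, bob_secret, p)      # g^b mod p, sent in the clear

# Each side combines its own secret with the other's public value.
alice_key = pow(bob_public, alice_secret, p)
bob_key = pow(alice_public, bob_secret, p)

assert alice_key == bob_key == 2  # both arrive at the same shared secret

An eavesdropper who observes only p, g, and the two public values cannot feasibly recover
the shared secret once the parameters are realistically large, which is what makes private
communication over monitored channels possible in the first place.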
This pattern of negotiation and renegotiation is the backdrop against which our story
unfolds. Digital rights tools should be read in two ways at once, each of which traces back to
their intellectual forebears. In keeping with the cypherpunks, digital rights toolmakers seek to carve
out a space to communicate as they see fit even as the various actors around them brawl and
skirmish in a doomed effort to settle the rules of the game once and for all. At the same time, this
communicative safe zone allows individual activists and civil society organizations broadly
aligned with the international human rights movement to organize efforts to influence
governments and corporations in their perpetual renegotiations, with the goal of creating norms
that respect human rights. As we will see in the chapter on the digital rights space, the global
digital rights movement is a “meta” social movement fighting for the right of all social
movements to exist (Carroll & Hackett, 2006; Dunbar-Hester, 2014; Juris, 2008; Maréchal, 2015;
Wolfson, 2014). Its members are far from homogeneous, spanning cleavages of ideology,
geography, geopolitics, religion, gender, gender identity, sexual orientation, race, class, and
more. While my case studies focus on tools and their developers, the movement itself is much
broader than that, with frequent tensions that often track the ideological divergences between the
cypherpunks and the human rights defenders. Notably, my fieldwork coincided with significant
tensions over gender and sexual violence in the community, and the proper way to address what
everyone agreed (at least publicly) was an intolerable problem. [8] This fraught dynamic will be
addressed in the “Digital Rights Space” chapter. The particular attention to gender and sexual
violence should not be taken to mean that other forms of structural violence are absent in the
digital rights community; far from it. However, it is the variant that has been most visible to me
during my fieldwork. This is surely due to my status as a white, formally educated, upper-
middle-class cisgender woman with Western passports who is a native speaker of English.

[8] In spring 2016, prominent “Tor evangelist” Jacob Appelbaum was forced out of the Tor
Project after years of allegations of bullying, sexual harassment, assault and other objectionable
behavior. In fall 2017, “rockstar programmer” and security researcher Morgan Marquis-Boire
was publicly accused of raping dozens of women over more than a decade, and was subsequently
removed from several advisory positions in the digital rights community. A few weeks later, Ali
Bangi, then the co-director of the Toronto-based ASL19, which monitors information controls in
Iran and the broader MENA region, was charged with rape and forcible confinement, leading to
his resignation. The latter events coincided with the #MeToo movement and renewed public
discussion about sexual politics and gender in the workplace.
At the geopolitical level, digital rights technology is an essential component of the liberal
arsenal in the struggle against encroaching networked authoritarianism, “a political system that
leverages ICTs and media regulation to carefully control the expression of dissent in a way that
gives the impression of limited freedom of expression without allowing dissent to gain
traction” (MacKinnon, 2011; Maréchal, 2017a). This is a global phenomenon with roots in both
the liberal network society’s surveillance or data capitalism (Zuboff, 2015; West, 2017) and in
illiberal autocracies like Russia and China. The internet economy’s business model, which relies
on the accumulation of data about individuals and their online and offline habits with a view to
facilitating targeted advertising, has (perhaps unwittingly, but not unpredictably) created the
necessary infrastructure for a society of total control (MacKinnon, 2012; Tufekçi, 2017b).
Indeed, social control through the use of information has long been the goal of authoritarian
regimes. For example, Russia does not view internet governance, cybersecurity, and media
policy as separate domains. Rather, all the areas covered by those disciplines fall under
“information security” in Russian foreign policy (Maréchal, 2017a; Nocetti, 2015). Similarly,
the People’s Republic of China continues to use a panoply of tools to control and distort its
citizens’ knowledge and understanding of their own country and international affairs, including
so-called “little brothers” and the “Fifty Cent Party” (MacKinnon, 2011, 2012; Tsai, 2016). This
view of information and knowledge as threats — not rights — was perhaps most tragically
embodied in the Cultural Revolution, which killed millions between 1966 and 1976.
Chapter 2 - Methodology and ethical considerations
Introduction to the chapter
This chapter describes the research design and methods for the present study. I begin by
briefly describing the genesis of this project before delving into the participatory, engaged and
militant research approach (Costanza-Chock, 2014; Juris, 2008; Milan, 2013) that has guided the
study. Next, I turn to case study selection, data collection, and data protection. The chapter
concludes with a discussion of the funding for this study and of my position as a scholar-activist.
Genesis of the study
This study originated in the summer of 2014, after my first year at USC. I was back in
Washington, DC (where I had lived previously) for a COMPASS summer fellowship at Ranking
Digital Rights, a research initiative housed at New America’s Open Technology Institute that
promotes corporate accountability for free expression and privacy online. I had previous work
experience in the human rights field, and this fellowship was a chance to explore the intersection
of human rights and communication policy.
Among many other developments that summer, the online publication Pando Daily
published a series of articles criticizing the funding relationship between the Tor Project and the
U.S. government, as well as responses pushing back against those critiques (Levine, 2016a;
Levine, 2016b; Norton, 2016). The controversy was a hot topic among my new colleagues, and I followed the
debate closely. While I viewed the conspiracy theories spun by Pando Daily’s writers with
skepticism, I was nevertheless intrigued by what I saw as a paradox: Tor was simultaneously
supported by some U.S. government agencies and lambasted by others. It seemed to me that the
federal government was at war with itself over online anonymity and privacy.
Back at USC, I pursued the question of Tor’s relationship to the U.S. government as part
of my cognate coursework in International Relations over the course of the next year, and
quickly realized that I had stumbled into an even thornier puzzle than I had realized. Encouraged
by Karen Reilly, who was then the Tor Project’s director of development and whom I had met at
a conference, I attended the Circumvention Tech Festival in Valencia in March 2015 [9] in order
to interview members of the Tor community, and learned about the dizzying variety of tools that
were being developed to help people evade online surveillance and censorship mechanisms. I
came to see Tor as just one exemplar within a broader universe of digital rights tools.
When the time came to design my dissertation project, I knew that the Tor Project would
be at the heart of the study, but also that it couldn’t be the whole study. After identifying my
major research questions, I selected additional case studies that would help me address the
complex relationship between digital rights technology development, the U.S. Internet Freedom
agenda, different contestations of surveillance capitalism, and the promotion of human rights and
democracy in the 21st century.
[9] In order to fund this trip, I asked Ranking Digital Rights for permission to give a
presentation about the project at the CTF. Because I was speaking at CTF, the USC Graduate
School and the Annenberg School were willing to support the trip.
Engaged, participatory, and militant research
While I am neither a coder nor a developer, and have never worked on digital rights tools
in any other capacity, I am a member of the broader digital rights space that includes at least
three of my case studies (Telegram’s inclusion is debatable). It would be disingenuous for me to
feign complete neutrality or objectivity in this context: in general, I believe that people should be
able to circumvent online censorship and to encrypt their communications. Moreover, over the
course of my fieldwork I have developed relationships with many of my research participants.
While some may see this proximity to my topic and research participants as a limitation, being
embedded in the space has allowed me to build relationships and a reputation for trustworthiness,
which in turn helped me gain access to spaces that can be otherwise wary of newcomers. It is
impossible to overstate the extent to which my association with Rebecca MacKinnon, director of
Ranking Digital Rights, has opened doors for me. One research participant told me that this was
the primary reason he agreed to be interviewed: “I don’t know you very well, but if Rebecca
vouches for you, that’s enough for me.” [10]
One of the exemplars for this study is Stefania Milan’s “Social movements and their
technologies: Wiring social change” (2013). Milan’s book sought to reverse the trend of social
movement theorists writing for one another, “often at the expense of a fruitful connection with
the constituencies being studied” (p. 182). Her fieldwork found that activists tended to see
academic research as irrelevant to their daily lives and struggles, and were hesitant to participate
in research in part because they didn’t know who was going to use it, and for what purpose.
[10] Interview, October 18, 2017. Towson, MD.
Likewise, many of the activists and developers whom I interviewed wanted to know who I was,
why I was doing this research, and what the end products would be before they would agree to
be interviewed. In many cases, they were reassured to know that I was “one of them” (at least in
some respects) and that I was motivated by a desire to find a public space for their voices and
inject their perspectives into scholarly and policy debates from which they otherwise felt
excluded.
Engaged research
Milan uses the term “engaged research” to describe her approach, meaning “those
inquiries into the social world which, without departing from systematic, evidence-based, social
science research, are designed to make a difference for disempowered communities and people
beyond the academic community” (Milan, 2010, p. 856; Milan, 2013, pp. 182-183). Similarly,
Jeffrey Juris (2008) argues that “it is possible [to] produce ethnographic accounts that are rigorous
and useful for activists” (p. 19). I concur with Milan that “engaged research has reflexivity at its
core” (2013, p. 183), and I make a point of “reflecting critically on the self as researcher”
throughout this dissertation (Guba & Lincoln, 2005, p. 210).
Engaged research is related to, yet distinct from, action research, which emphasizes
“collaboration between research professionals and stakeholders” (p. 183). In both cases, the
researcher is motivated by “social transformation” and “undertakes research that matters to the subjects
under study and does not harm them.” Engaged research further “shares the ethnographic-
methodologist concern with ‘taking them back to the field’ (Adler et al., 1986, p. 371) with the
dual purpose of empowering social actors and of validating research findings through the
activists’ feedback” (Milan, 2013, p. 184). Where engaged research departs from action research
is in its methods: engaged research, in Milan’s conception, involves a researcher who is herself
an activist, while action research envisions a structured process of co-investigation between
researchers and activists who jointly create both change and knowledge. In engaged research,
“the researcher does not necessarily take an active part in organizing change, nor is the exercise
of theorizing subordinate to social change” (p. 183).
Participatory research
Sasha Costanza-Chock’s “Out of the shadows, into the streets! Transmedia organizing
and the immigrant rights movement” (2014) exemplifies the participatory research approach, a
term that “denotes an orientation to research that emphasizes the development of communities of
shared inquiry and action” (p. 205). This orientation is both normative and practical:
Ethically, I believe in democratic decision making, including decision making in research
and design processes. I also believe in shared ownership of the outcomes of research and
design. Practically, I believe that participatory methods involving people at all stages of a
research or design process are most likely to produce innovative knowledge and tools that
respond to people’s goals, strengths, and needs in everyday life (p. 205).
However, Costanza-Chock further notes that “the incentive structure of the academy
militates against participatory research,” as the market for scholarly publications is “largely
incompatible with shared community ownership of research.” On a practical level, this means
that authors must take responsibility for “research questions, study design, and authorship” as
well as for “errors, omissions, and any misrepresentations” (p. 206). Similarly, the ultimate
responsibility for the words on these pages is mine alone, though I included opportunities for
collaborative research and participant input throughout the research process.
In July 2016, I attended the Citizen Lab Summer Institute at the University of Toronto as
part of my Information Controls fellowship, [11] where I organized an open feedback session on
my research design, interview script and data protection protocol. In keeping with participatory
research principles (Costanza-Chock, 2014), the goals for the session were to: 1) formally open a
channel of dialogue between myself as the researcher and the digital rights community as the
participant pool; 2) get input from the community on my research questions, interview script
(including specific wording of questions), and data protection protocol; and 3) provide an
opportunity for potential participants to ask any questions on their minds about me, my project,
and my research goals. My colleague Sarah Myers West, who was also attending the Summer
Institute in connection with an unrelated project, took notes during the meeting with the full
knowledge and consent of the participants. I owe her a great deal of gratitude for her labor. The
meeting occurred before I received my Institutional Review Board (IRB) certification, and while
the notes were used to structure the research design they were not used for the dissertation itself.

[11] Funded and administered by the Open Technology Fund, a sub-entity within Radio Free
Asia (a 501c3 nonprofit which is itself overseen by the Broadcasting Board of Governors), the
Information Controls Fellowship “supports examination into how governments in countries,
regions, or areas of OTF’s core focus are restricting the free flow of information, cutting access
to the open internet, and implementing censorship mechanisms, thereby threatening the ability of
global citizens to exercise basic human rights and democracy; work focused on mitigation of
such threats is also supported” (see https://www.opentech.fund/requests/icfp). I was a Senior
Fellow in Information Controls at Ranking Digital Rights from February to October 2016. My
primary work output was to adapt RDR’s existing assessment methodology to the human rights
risks posed by “mobile ecosystems,” defined as smartphones, their operating systems, their app
stores, and the user account linking user information and activities into a single profile.
Militant research
Jeffrey Juris (2008) coined the term “militant ethnography” to describe projects where the
researcher’s “political and intellectual commitments coincide” (p. 19). This accurately describes
my subjective position in the field, as I share my research participants’ commitment to privacy
and freedom of expression online, though my contributions to the movement consist of research,
writing, and policy advocacy rather than technology development. My ethnography became
more explicitly “militant” mid-2017, when long-simmering tensions concerning gender,
diversity, inclusion, and sexual violence in the movement turned into a conflagration. I became
actively involved in efforts to address the systematic toxic masculinity that pervades much of the
digital rights space by helping to draft an open letter calling on all activists to contribute to
efforts to “Protect Our Spaces” (2017), participating in conference calls, speaking on several
conference panels dedicated to promoting healthy organizational cultures, and helping organize a
day-long RightsCon pre-conference event focusing on sexual violence in the movement. Other
activists have told me repeatedly that incorporating a gender focus into my existing research, and
publicly presenting this research, would be among the most valuable contributions I could make
to this effort. This militant approach to research is most apparent in Chapter 3 (The Digital
Rights Space: Portrait of a Social Movement) and Chapter 6 (Tor case study).
Case studies and case selection process
As mentioned previously, the dissertation was designed around the investigation of the
contradictions uncovered in the Tor Project research completed in 2014/2015. The additional three
case studies were selected to complement my initial findings about Tor, its relationship with the
U.S. government, and its role in the evolving geopolitics of information.
The encrypted messaging application Signal, developed by Open Whisper Systems, was
already established as the premier communication tool within the digital rights movement when I
was designing this study in late 2015. Signal was among the first messaging apps to provide end-
to-end encryption at no cost to the end-user, and the financial support it received from the Open
Technology Fund tied it directly to the U.S. Internet Freedom agenda. In addition, the popular
messaging application WhatsApp, acquired by Facebook in early 2014, had announced in
November of that year that it would be incorporating the Signal protocol into its platform (Evans,
2014). By April 2016, the billions of messages exchanged daily via WhatsApp were protected by
end-to-end encryption (Metz, 2016). The partnership made the Signal protocol the most widely
used secure messaging protocol on the planet and raised a number of questions about the
relationship between the digital rights movement, the U.S. Internet Freedom agenda, and Silicon
Valley.
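Because the end-to-end property is central to this case study, a minimal sketch may be
useful. The following Python fragment, which assumes the PyNaCl library is installed, is not
the Signal protocol (which adds forward secrecy and deniability through its “double ratchet”);
it only illustrates the basic property that a relaying server handles ciphertext it cannot read.

# Minimal sketch of end-to-end encryption using PyNaCl (an assumption for
# illustration; NOT the Signal protocol). Only the endpoints hold private keys.
from nacl.public import PrivateKey, Box

sender_key = PrivateKey.generate()     # generated on the sender's device
recipient_key = PrivateKey.generate()  # generated on the recipient's device

# The sender encrypts with their private key and the recipient's public key.
ciphertext = Box(sender_key, recipient_key.public_key).encrypt(b"meet at noon")

# A relaying server sees only ciphertext; decryption requires the recipient's
# private key, which never leaves their device.
plaintext = Box(recipient_key, sender_key.public_key).decrypt(ciphertext)
assert plaintext == b"meet at noon"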
In addition to online anonymity and privacy, I also wanted to address activist resistance
to internet censorship through technical means. While Tor can be used for censorship
circumvention, its technical design means that “circumvention only” tools may often be better
suited to meet users’ needs. Moreover, it was apparent that the Tor case was so complex that a
more straightforward case was needed to illustrate the geopolitical implications of censorship
circumvention projects. Given my focus on the relationship between U.S. foreign policy and
digital rights technology development, I narrowed my options to Psiphon and Lantern, two tools
with funding relationships to the Internet Freedom agenda. Studying Psiphon, however, also
diversified my dataset, as it is a Canadian for-profit company. Finally, I knew that I would be able
to gain access thanks to my connections to the University of Toronto’s Citizen Lab, where the
code that eventually became Psiphon was first developed.
The fourth case study, Telegram, was added at the suggestion of my advisor Patricia
Riley. In late 2015 and early 2016, European (and to a lesser extent American) media coverage
was saturated by sensationalistic stories that tied the platform to international Islamic terrorism
because a small number of ISIS sympathizers and suspected terrorists had reportedly used
Telegram’s “secret messaging” function to plan attacks. The media narrative was that Telegram
and its supposedly unbreakable encryption was the linchpin of ISIS’s strategy in Europe.
Moreover, Telegram — which combines aspects of secure messaging and social networking — is
one of the most popular tools worldwide that lacks connections to either the U.S. Internet
Freedom agenda or to Silicon Valley. As such, it offers a useful contrast to the other case studies.
As participants in the Citizen Lab meeting pointed out, the case studies are biased in
favor of Western tools and teams. The pragmatic reason for that is that I am only able to conduct
research in Western languages (I am fluent in English, French and Spanish), and my professional
network is primarily within the U.S.-centric “digital rights/internet freedom” network of actors
and organizations, thus ensuring greater access to those organizations and projects. Moreover,
studying groups based in non-democratic countries with robust information controls regimes
would raise serious security risks for both myself and the communities I might study. Last but
not least, a focus on the U.S. is appropriate given the outsized impact of U.S. Internet Freedom
agenda funding on digital rights technology development and the U.S.’s unique role in the
international system.
Data collection
Data collection for this study took place between March 2015 and March 2018, including
the pre-dissertation study on the Tor Project. The principal data sources were interviews,
participant observation, primary documents produced by the case study organizations and their
funders, and the software artifacts themselves. Secondary sources such as extant scholarly work
and media articles were also used.
Ethnographic interviews
As with any ethnographic project, a key decision was determining the boundary
conditions for the network of individuals being studied. This is why I chose the tool or project as
the point of entry into the digital rights space. While it is difficult to establish who “counts” as a
digital rights activist, it is much more straightforward to determine who works, or has worked,
on a given project. In addition to the project communities themselves, I also include the
perspective of individuals who work for organizations that fund the tools. Questions of funding
streams, conditions attached to financial support, and of strange bedfellows can be found
throughout the digital rights space, and funders have a privileged perspective into these debates.
Funding organizations include the Open Technology Fund; the State Department’s Bureau of
Democracy, Human Rights and Labor; the Naval Research Lab; the Mozilla Foundation; the
Ford Foundation; the MacArthur Foundation; the Open Society Foundations; and more.
I identified the first set of interview participants from project and organization websites,
as well as from my own knowledge of the digital rights space from attending various meetings
and conferences over the years. Subsequently I used a purposive sampling approach, asking
participants to recommend and introduce me to others who fit the work criteria. This has been
very fruitful, particularly with regard to identifying former members of project teams whose
names would no longer be on publicly available websites, but who nevertheless have great
knowledge and insight. I completed 70 formal interviews for this dissertation.
The dissertation interview script was informed by the earlier fieldwork conducted in 2015
and by informal, off-the-record conversations with members of the digital rights space that I’ve
held in the course of my work for Ranking Digital Rights. The questions (see Appendix A) were
designed to delve into participants’ professional narratives and thereby cumulatively trace the
histories of the projects they are involved in — in essence, to surface an oral history of the digital
rights movement, or at least a subset thereof. I also asked participants for their opinions of
various actors, networks, and decision-points relevant to their work, thus surfacing tensions,
contradictions and ambiguities as experienced by the individuals involved. Finally, a small
number of questions (posed at the beginning of the interview to avoid priming effects) asked
participants to define terms like “circumvention technology,” “liberation technology,” “digital
rights technology,” “internet freedom technology,” etc. I found early in my research that the
terminologies coined by academics and policymakers to describe active participants in digital
rights civil society are at times met with incomprehension, ambivalence, or even hostility by the
individuals in question. For this reason, I included questions designed to elicit “gut” responses to
common phrases pertaining to the subject matter.
The study was approved by the USC Institutional Review Board (IRB) in August 2016,
and was classified as Exempt. All interview participants were given a chance to review the IRB-
approved Informed Consent Facts Sheet and were asked for verbal consent prior to the interview.
The Facts Sheet is available on my personal website (https://nathaliemarechal.files.wordpress.com/2016/06/informed-consent-facts-sheet-marechal-dissertation.pdf).
Participant observation
Throughout the fieldwork period, I attended and participated in a large number of
conferences and meetings dedicated to internet privacy, activism, policy, and research, allowing
me to interact with users, developers, and funders of digital rights technology tools. These
experiences helped me formulate questions about important tensions and frictions related to the
funding of the digital rights space, U.S. foreign policy, and neoliberalism, and helped me build
trust relationships with various members of the community — including government funders —
that helped me gain access that I believe is unprecedented. I conducted participant observation at
the following field sites:
Table 2.1. Participant observation at digital rights conferences
Field site name                          Location               Date
Citizen Lab Summer Institute             Toronto, Canada        July 2014
Circumvention Technology Festival        Valencia, Spain        March 2015
Citizen Lab Summer Institute             Toronto, Canada        June 2015
Chaos Communication Camp                 Zehdenick, Germany     August 2015
Internet Governance Forum                João Pessoa, Brazil    November 2015
Internet Freedom Festival                Valencia, Spain        March 2016
RightsCon                                San Francisco, CA      March/April 2016
Citizen Lab Summer Institute             Toronto, Canada        July 2016
Open Technology Fund Summit              Towson, MD             October 2016
Internet Freedom Festival                Valencia, Spain        March 2017
RightsCon                                Brussels, Belgium      March 2017
Internet Freedom Implementers Meeting    Montreal, Canada       June 2017
Internet Freedom Festival                Valencia, Spain        March 2018
RightsCon                                Toronto, Canada        May 2018
Citizen Lab Summer Institute             Toronto, Canada        June 2018
Archival research
Additionally, I collected and analyzed primary documents produced by or related to each
of the four case study organizations as well as U.S. government funding organizations, such as
webpages, blog posts, financial statements, white papers, conference proceedings, and public
statements. The documents cited herein are listed in the References.
The walkthrough method
Each of the projects examined in this study offers software applications for mobile
devices and desktop computers. These apps are the main interface through which users engage
with the technology. While my focus is neither on users nor on the user experience per se, these
are nonetheless relevant to the study. Consequently, each case study employs an abbreviated
version of the walkthrough method described by Light et al. (2016).
The walkthrough method is “grounded in a combination of science and technology
studies with cultural studies” (p. 1), and “is a way of engaging directly with an app’s user
interface to examine its technological mechanisms and embedded cultural references to
understand how it guides users and shapes their experiences” (p. 2). The process invites the
researcher to engage with the app in the same way that a new user would, from initial sign-up, to
usage, to exiting from the app (including terminating one’s user account, if applicable).
On a theoretical level, the walkthrough method is “grounded in the principles of Actor-
Network Theory (ANT),” which “foregrounds relational ontology according to which
sociocultural and technical processes are mutually shaping” (Callon, 1989; Latour, 2005; cited in
Light et al., 2016, p. 6). “In the case of apps, user interfaces and functions are therefore
understood as non-human actors that can be mediators” (p. 6). In addition to documenting and
analyzing the user experience, the walkthrough method also examines an app’s vision, operating
model and governance, which “allows researchers to understand how an app’s designers,
developers, publishers and owners expect users to receive and integrate it into their technology
usage practices” (p. 9). Of these three dimensions, my research focuses primarily on the
operating model, which Light et al. define as “its business strategy and revenue sources, which
indicate underlying political and economic interests” (p. 10).
The method as described by Light et al. avoids interacting with other users for ethical
reasons, but this is not always possible, particularly if interacting with other users is the app’s
core function: a walkthrough of Signal or WhatsApp that doesn’t involve messaging other users
would be incomplete. This was not an issue for this study as I frequently use Signal, WhatsApp,
Wire, and other messaging apps discussed in this study, for both personal and professional
purposes. I used Telegram to follow the official “channel” of Pavel Durov, through which he
communicates directly with users and makes occasional announcements about the company.
Data protection
As noted previously, in July 2016 I workshopped my research design and data protection
protocol at the Citizen Lab’s Summer Institute in Toronto. One of the most important take-aways
from the meeting was the participants’ nearly universal discomfort with recorded interviews.
While the standard practice in research interviews is to produce an audio recording that is later
transcribed, it quickly became clear that this population was unlikely to consent to that practice. As
several meeting participants pointed out, the human voice is highly recognizable, and the
security concerns of many interviewees would require me to protect voice recordings much more
stringently than typed notes. One participant in this meeting said, “I like typing because if I say
something is off the record, you stop typing and I know it’s off the record” (research design
workshop, July 7, 2016, Toronto). Moreover, there
was a consensus that digital rights activists would not be comfortable with me using commercial
transcription services, and I did not relish the idea of transcribing dozens of hour-long
interviews. While automated voice-to-text software is readily available, several meeting
participants raised the question of the transcription companies’ data protection protocols.
After consulting with my advisor and other mentors, I decided to take contemporaneous,
near-verbatim notes during all interviews, and to offer participants a chance to review my notes
for accuracy at the conclusion of the interview. In the case of in-person interviews, the
interviewee reviewed my notes on the spot on my machine. For phone or video interviews, I
emailed the notes (using PGP encryption if desired by the other party) to the interviewee for
review. Most interview participants took me up on the offer to review my notes, though some
either declined or never responded to the notes that I shared via email. Of those who
reviewed my notes, many helpfully corrected typographic errors, for which I am grateful. One
person asked me to delete a specific sentence, with the clarification that I could certainly
“remember” and “use” the sentiment therein, but that s/he would prefer it not be kept in writing
(I obliged).
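To make the encryption step concrete for readers unfamiliar with PGP, the following is a minimal sketch of what encrypting a notes file to an interviewee’s public key might look like. It assumes GnuPG is installed and that the interviewee’s public key has already been imported into the local keyring; the file name and recipient identifier are hypothetical.

import subprocess

# Hypothetical file of near-verbatim interview notes and recipient key ID.
NOTES_PATH = "interview-notes-2017-03-08.txt"
RECIPIENT = "interviewee@example.org"

# Encrypt the notes to the recipient's public key; --armor produces an
# ASCII-armored file (interview-notes-2017-03-08.txt.asc) suitable for
# attaching to or pasting into an email.
subprocess.run(
    ["gpg", "--encrypt", "--armor", "--recipient", RECIPIENT, NOTES_PATH],
    check=True,
)

The recipient then decrypts the file locally with their private key (gpg --decrypt), so the notes never transit email servers in readable form.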
The second important take-away from this meeting concerned data protection. IRB
standards require all researchers to take steps to protect research data, of course (especially data
pertaining to human subjects), but I knew that a global population of activists dedicated to
privacy would value data security more highly than other populations. Additionally, my
interview participant pool included political dissidents with credible reasons to fear state-
sponsored persecution and hackers with very particular opinions about the proper way to protect
data. Finally, the Oxford Internet Institute’s Dr. Joss Wright (named here with his explicit
consent) noted that I would be conducting
much of my fieldwork in Europe and would thus be subject to European data protection
standards, which are much more stringent than those in the U.S. The following information
would eventually be included in my Informed Consent Facts Sheet:
My computer is fully encrypted, and no one has the encryption key but me. Backups are
likewise fully encrypted and are stored in my locked office at my home. Interview notes
as transcribed onto my computer will be retained indefinitely for future research
purposes, such as my dissertation and related journal articles or a future book project
based on the dissertation. With your consent, I will attempt to re-contact you for a follow-
up interview between finishing the dissertation and starting the book project.
From April 2017 onward, I decided that crossing the U.S. border with my main laptop (on
which interview transcripts were stored) was too risky. Customs and Border Protection (CBP) was
increasingly asking travelers to unlock their electronic devices so that agents could browse through
their files and social media profiles without a warrant (international airports are considered part of a
border zone where many civil rights and liberties don’t apply). As a white U.S. citizen
participating in the Global Entry program, the odds of this happening to me were low, but not
zero. After consulting several legal experts, I concluded that while CBP couldn’t detain me
indefinitely and would need to let me enter the country, they could confiscate my devices and
attempt to decrypt them. Even if they couldn’t break the encryption on my devices, I would still
lose access to recent interview data. Additionally, there were persistent news reports that the
Transportation Security Administration (TSA) was considering banning all devices larger than a
smartphone from the cabin on all flights to the U.S., meaning that I would need a device that I
could place in my checked luggage, i.e. one that I could afford to replace should it be lost, stolen,
or damaged. The so-called “laptop ban” was not implemented, but I nonetheless purchased a
Chromebook to use as a travel device.
Using the Chromebook for interviews conducted outside of the United States after April
2017 meant temporarily uploading those interview notes to Google Drive, breaching my initial
promise not to store notes in the cloud. I addressed this by explicitly disclosing the change to
interviewees and asking whether they would prefer another arrangement. No one objected, and
several expressed their appreciation
for the information. No interview notes were stored in Google Drive without the subject’s
informed consent.
Data analysis
As recommended by Miles et al. (2014), data collection and analysis proceeded
concurrently, allowing early analysis to inform subsequent interviews and participant
observation. Notes from interviews and participant observation as well as key primary and
secondary sources (project websites, government officials’ public addresses, news articles, etc.)
were uploaded into the NVivo qualitative data analysis software and coded using a grounded
theory approach (Strauss & Corbin, 1990). Documents (including interview transcripts and field
notes) were coded according to the case or cases to which they pertained and by theme. Themes
included privacy, anonymity, free expression, human rights, technology development, funding
streams, surveillance capitalism, gender, encryption, and more.
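As a toy illustration of this coding structure (not the actual NVivo workflow), each coded segment can be thought of as a (case, theme) pair, which can then be tallied to see how themes are distributed across cases; all segments and counts below are hypothetical.

from collections import Counter

# Each coded segment is tagged with the case it pertains to and a theme.
coded_segments = [
    ("Tor Project", "funding streams"),
    ("Tor Project", "anonymity"),
    ("Signal", "encryption"),
    ("Telegram", "free expression"),
    ("Psiphon", "funding streams"),
]

# Tally how often each theme was applied within each case.
for (case, theme), count in sorted(Counter(coded_segments).items()):
    print(f"{case}: {theme} ({count})")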
I formally began writing up in August 2017, periodically returning to the field and
completing additional interviews as I identified gaps in the data.
Funding the project
As noted previously, this project originated as a class paper during my coursework at the
University of Southern California. Fieldwork for this initial phase (which focused on the Tor
Project) was supported by travel grants from the USC Graduate School and the USC Graduate
Student Government, and by conference travel support from the School of Communication,
which facilitated travel to Valencia, Spain, for the Circumvention Tech Festival in March 2015.
Participant observation at the Chaos Communication Camp (August 2015) was partly supported
by a $500 grant from the Annenberg Research Network on International Communication
(ARNIC). Airfare to Europe was funded by a Summer Institutional Fellowship from the School
of Communication, as I had participated in the Annenberg-Oxford Media Policy Institute earlier
that summer.
Travel to João Pessoa, Brazil for the Internet Governance Forum (November 2015) was
funded by Internews as part of a sub-grant to Ranking Digital Rights, as I moderated and helped
organize several panels related to the launch of RDR’s inaugural Corporate Accountability Index.
Though I did not conduct formal dissertation research at this site, participating in the IGF
necessarily informs my perspective on digital rights/internet freedom activism.
Between February and September 2016, I held a Senior Information Controls Fellowship
from the Open Technology Fund, which supported my work for Ranking Digital Rights. That
fellowship covered travel expenses to Valencia for the Internet Freedom Festival (March 2016),
to San Francisco for RightsCon (March/April 2016), and to Towson, MD for the Open Tech
Summit (October 2016). While my primary purpose for attending these events was my work for
Ranking Digital Rights, these conferences constituted participant observation and I took
advantage of the opportunity to conduct interviews. I also completed several interviews while
participating in the Hackademia summer school, hosted by Leuphana University in Lüneburg,
Germany (August 2016). Expenses related to Hackademia were covered by the School of
Communication as a Summer Institutional Fellowship.
Between October 2016 and July 2017, I continued working for Ranking Digital Rights on
a part-time basis, and my salary came from RDR’s operating budget. I was selected as a 2017
Internet Freedom Festival Fellow, which entailed helping to curate the conference, in thanks for
which the IFF covered the costs of travel to Valencia, accommodations, and a stipend (March
2017). Ranking Digital Rights covered my travel between Valencia, Budapest (where part of the
RDR team is based), and Brussels, where RightsCon was held a few weeks later. An RDR
colleague graciously hosted me in Budapest, while accommodations in Brussels were supported
by a small grant from the Internet Policy Institute at the University of Pennsylvania.
I received the Oakley Endowed Fellowship for the 2017-2018 academic year from the
USC Graduate School, enabling me to focus full-time on my dissertation from my home in
Washington, DC, which greatly facilitated ongoing fieldwork and interviews with activists and
funders in the digital rights/internet freedom space. Fieldwork at the Internet Freedom
Implementers Meeting (Montreal, June 2017), at Psiphon (Toronto, September 2017), and at the
Internet Freedom Festival (Valencia, March 2018) was supported by the School of
Communication as part of the standard research support provided to doctoral candidates. An
additional grant from the USC Graduate School further supported travel to Valencia.
I made two additional trips to Toronto between my dissertation defense (April 30, 2018)
and the final manuscript submission deadline (September 7, 2018). First, I conducted further
participant observation and interviews at RightsCon (Toronto, May 16-18, 2018). I purchased an
award flight via my credit card company, and Access Now generously waived my conference
registration fee. Second, I returned to Toronto the following month (June 13-15) for the Citizen
Lab Summer Institute, where I presented and workshopped the Psiphon case study. The Citizen
Lab covered my flight, while a friend graciously hosted me.
This travel funding, however standard, raises further questions about my position as a
researcher-activist within the digital rights space. In particular, some early readers of this
dissertation have questioned whether the Internet Freedom Festival (IFF) fellowship I received in
2017 influenced my research. As an IFF fellow, I helped evaluate submissions to the festival and
curated the final program. In return for this labor, I received travel support for the festival, where
I was left to my own devices — in this case, participant observation and interviews. I do not
believe that this funding influenced my research; however, it is undeniable that my position
within the digital rights space colors the analysis in this study. As a scholar, I have been trained
to take a critical stance and to look at the material circumstances while also respecting the voices
of my informants in my analysis. The empirical research in this study is scrupulously
documented, while the analysis is grounded in existing scholarship and theory.
Researcher as activist
I came to this study as a digital rights activist focusing on corporate social responsibility
in the information and communication technologies (ICT) sector, with previous experience in
“analog” human rights advocacy. The importance of human rights online and the relevance of
internet technologies to the global struggle for human rights were self-evident to me, as were the
good intentions, intellectual honesty, and moral rectitude of my colleagues in the digital rights
space. This made it hard for me to understand many of the critiques of internet freedom/digital
rights, particularly ones that seemed to imply that internet freedom/digital rights were either
Astroturfed or a front for U.S. empire. But as my fieldwork went on and as I read more deeply
into the science and technology studies literature as well as the internet freedom scholarship, I
grew to understand that internet freedom and digital rights were not the synonyms I initially
understood them to be, and became increasingly aware of the fault lines beneath the movement.
One of these fault lines concerns the regulation of ICT companies, which the original 1990s
formulation of the Internet Freedom agenda and the cypherpunk arm of the digital rights
movement eschew (see chapters 3 and 4). The more I read and learned about the intellectual
history of the movement, the better I understood both its critics and its internal debates. I hope
reading this study can help others do the same.
Another fault line concerns professionalization (I use the term in a lay sense here, to
mean “treating activism like a workplace rather than a hobby” — see chapter 3), particularly its
use as a strategy to combat sexual and gender-based violence within the space. This topic wasn’t
part of my original research design, but as the issue grew in salience within the movement I
decided that it was too important to ignore. I soon realized that my personal network of friends,
colleagues, and research participants (categories that were often permeable, at times
uncomfortably so) gave me a unique vantage point from which to observe, analyze, and advocate
for positive change.
But being privy to information that I had been asked to keep confidential proved
challenging at times, particularly when trying to craft an analysis that was true to the facts as I
understood them without revealing any secrets. In general, negotiating multifaceted relationships
with research participants who were also friends and colleagues was both more and less
challenging than I had anticipated. It was more challenging in the sense that it came up more
often than I had first thought, and it was easier in the sense that negotiating (and at times re-
negotiating) the terms of my relationships with certain people was overall more collegial and
productive than I had expected.
While this version of the project has not been comprehensively reviewed by anyone
outside of my dissertation committee, I intend to seek feedback from the relevant communities
prior to publication. I workshopped Chapter 3 (The Digital Rights Space: Portrait of a Social
Movement) at the Internet Freedom Festival in March 2018 (Valencia, Spain), and am seeking
feedback on Chapter 4 (The Political History of Internet Freedom Funding) from key informants
with federal government experience. Likewise, I have already made arrangements with the
leadership of Psiphon and the Tor Project to present those case studies to their teams. The Signal
and Telegram case studies present more difficulty, as I have not been able to gain access to those
teams. Addressing these gaps in the research is a priority for the next iteration of this project.
Chapter 3 - The digital rights space: Portrait of a social movement
Introduction to the chapter
This chapter investigates the global social movement dedicated to the promotion of
digital rights, understood as the right of individuals to freely access, create, and disseminate
content online without surveillance or reprisal, with narrow legal exceptions that are compatible
with human rights. Digital rights technology development is but one aspect of the movement’s
repertoire of contention, and must be understood in the context of broader contention over
communication rights, notably free expression and privacy.
The rights to freedom of expression and privacy are codified in international law by the
Universal Declaration of Human Rights (UDHR; articles 19 and 12, respectively) and the
International Covenant on Civil and Political Rights (ICCPR; articles 19 and 17, respectively).
These rights are universal, but they are not absolute, and must be balanced against other interests
in accordance with international human rights law (Kumar, Prasad & Maréchal, 2017). The
Universal Declaration predates the internet, of course, but “the same rights that people have
offline must also be protected online,” as the UN Human Rights Council asserted in June 2016,
explicitly emphasizing the need to protect freedom of expression and privacy rights in the march
toward a more digitally connected world.
Though recognized in principle, these rights are too often violated in practice, as is true
of all human rights. Governments can either proactively violate their citizens’ rights for any
number of reasons, or they can fail to protect their citizens’ rights from violations committed by
others. There is nothing new to this unfortunate reality. What is unique to digital rights is the
central role played by the private sector, and specifically by ICT companies, in regulating the
enjoyment of free expression and privacy rights online (DeNardis, 2014; MacKinnon, 2012). For
example, they set and enforce the rules for the content that users share on the platforms,
adjudicate disputes between users when harassment claims arise, and take responsibility for
securing highly sensitive information — a responsibility to which they do not always live up.
The consequences can be especially dire for high-risk users like journalists, human rights
defenders, or political dissidents (Anderson & Guarnieri, 2018; Kumar, Prasad & Maréchal,
2017). They thus combine the powers of legislator, judge, and enforcer, powers whose separation
is the very foundation of liberal, democratic modes of governance, a principle dating back to the
Enlightenment. The growth of government tactics for controlling information flows, and the
private sector’s role in implementing information controls, is discussed in Chapter 1 of this
dissertation.
The digital rights movement pressures both governments and companies to abide by their
commitments to human rights online. The Guiding Principles on Business and Human Rights —
endorsed by the UN Human Rights Council in June 2011 — affirm that states have a duty to
protect human rights, businesses have a responsibility to respect human rights, and both have an
obligation to provide remedy for individuals whose human rights are violated. Advocating for
human rights in the 21st century requires navigating a complex ecosystem that includes
companies, investors, governments, policymakers, civil society, advocates, and activists. This
chapter seeks to contribute to scholarship on this ecosystem through an analysis of the civil
society organizations, technologists and activists who comprise the global digital rights
movement.
Chapter roadmap
The chapter proceeds as follows. First, it provides a brief overview of the sociological
literature on social movements and of the ways that ICTs have impacted the practice of social
movements. Next, I trace the genealogy of the digital rights movement to earlier waves of
contention over communication rights, including human rights activism and hacker culture. I
then turn to a discussion of gender, diversity and inclusion within the digital rights movement.
Though gender was not an explicit focus of this study when I began my research, my fieldwork
coincided with significant tensions over gender equality and sexual assault, as several high-
profile men within the community were publicly revealed to be repeat sexual offenders (namely,
Jacob Appelbaum, Morgan Marquis-Boire, and Ali Bangi) (Ann-King, 2017; Brandom, 2018;
Cameron, O’Neill & Larson, 2016; Jeong 2017a; Jeong, 2017b; Locker, 2018). In 2017, the
public conversation in the U.S. and elsewhere returned again and again to the mistreatment of
women in the workplace, across sectors as varied as Hollywood, the media, national security, and
more. I found myself enmeshed in the digital rights space’s “#MeToo moment,” both as a scholar
and as a member of the space in my own right. I argue that intra-movement contestation over
gender, diversity and inclusion is partly grounded in pre-existing ideological differences between
the movement’s sub-currents, and that initiatives to reduce the potential for sexual violence and
increase diversity and inclusion amount to calls for professionalization — treating activism like a
workplace with common norms for acceptable behavior.
Data, sources and methods
This chapter is based on over three years of participant observation and ethnographic
research in the digital rights movement, complemented by published information in secondary
sources (notably blog posts and journalistic accounts). The section on gender, diversity and
inclusion in particular also relies on interviews conducted between 2015 and 2018, and on my
own experience. It reflects a militant ethnography (Juris, 2008) approach, as I actively engaged
in intra-movement activist efforts to root out entrenched patterns of misogyny and toxic
masculinity. Notably, from late 2017 onward I have been a core member of the Protect Our
Spaces initiative, which “takes a stand” against the violence and “culture of fear” that “women
and high-risk communities” experience at internet freedom/digital rights events (Protect Our
Spaces, 2017).
Digital rights as a social movement
When I started fieldwork for this project in March 2015, at the Circumvention Tech
Festival (later renamed Internet Freedom Festival), I was struck by the ubiquitous use of the
phrase “the digital rights space.” A newcomer to this “scene” (for lack of a better term), I quickly
picked up on the preferred sequence of “small talk” questions: “What’s your name? What do you
work on? Where are you based? How long have you been in the space?” (emphasis mine). “The
space” was short for “the digital rights space,” and to me it represented the sentiment that this
motley assortment of polyglot techies, hackers, geeks, lawyers, academics, and policy analysts
constituted a network of people working toward a common goal. But the question also implied
that “the space” was something that anyone could join, rather than an identity-based community
that one was born into or could claim as a result of discovering one’s “true” identity. As I
describe later in this chapter, the annual Valencia gathering was specifically conceived as an
intervention to help this network coalesce into a social movement. But what is a social
movement, and does the digital rights space truly qualify as one?
While many activists have little interest in scholarly analysis of their work and may even
resist participating in such research (Collier, forthcoming; Croeser, 2012; Milan, 2013; Milan &
Hintz, 2010), the question of whether the digital rights movement is indeed a social movement
matters not only for social movement scholars, but also for designing policy interventions.
Funders in particular may approach grant-making differently if they view the issue areas within
digital rights (encryption, net neutrality, commercial content moderation, etc.) as part of the same
broad contestation rather than as discrete policy debates. Writing in 2012, Sky Croeser identified
an “emerging” social movement linking various contestations over digital technologies (notably
online censorship and surveillance, free/libre and open source software, and intellectual
property) through “the master frame of citizen — rather than government or corporate — control
of digital technologies” (para. 2). (Croeser uses the term “digital liberties movement” while
noting that it is an analytical tool that isn’t “widely adopted by movement activists
themselves” (2012, para. 2); I use “digital rights movement” because I have found that this is
one of the main terms that activists use, and I reserve “internet freedom” to refer to top-down
policy initiatives from governments and, more rarely, corporations.) More recently, Félix
Tréguer framed “digital rights
contention” as a subset of longstanding “communication rights contention,” defining the former
as “political conflicts related to the expansion or restriction of civil and political rights exerted
through, or affected by, digital communications technologies” (2017b, para. 1). Just as digital
media is a subset of media, digital rights contention is a subset of communication rights
contention with both historical and ideological ties to earlier waves of contention, as I will trace
later in this chapter.
In “The concept of social movement,” Mario Diani (1992) defines social movements as
“networks of informal interactions between a plurality of individuals, groups and/or
organizations, engaged in political or cultural conflicts, on the basis of shared collective
identities” (p. 1). This collective identity results from “a process whereby several different
actors, be they individuals, informal groups and/or organizations, come to elaborate, through
either joint action and/or communication, a shared definition of themselves as being part of the
same side in a social conflict” (p. 2) — that is to say, shared identity need not be innate; it can be
socially constructed. This is the case here.
In crafting this definition, Diani relies on definitions put forth by earlier scholars,
including Turner and Killian (1957, 1987), Tilly (1984), and Touraine (1981). While these
definitions are largely concordant, their differences highlight the relative fluidity of social
movements as a concept. Social movements can be defined by who they are, what they do, and
what they believe, with different scholars emphasizing a different aspect. Ralph Turner and
Lewis Killian (1957, 1987) focus on the fact that a social movement’s membership is fluid and
that it lacks a formal leadership structure, while specifying that some action must be taken:
a collectivity acting with some continuity to promote or resist a change in the society or
organization of which it is part. As a collectivity a movement is a group with indefinite
and shifting membership and with leadership whose position is determined more by
informal response of adherents than by formal procedures for legitimizing authority
(1987: 223, cited in Diani, 1992 p. 4).
This stands in contrast to the definition put forth by John D. McCarthy and Mayer N.
Zald, for whom shared “opinions and beliefs” are sufficient (McCarthy and Zald, 1977: 1217-18,
cited in Diani, 1992 p. 4). I find this definition overly broad, as the opinions and beliefs of others
are essentially unknowable if they do not take some action (including speech) to reveal their
preferences. Phenomena that cannot be observed cannot be studied empirically. For all we know,
there may be a substantial segment of the population that abhors fruitcake, but if they do not take
collective action to ban fruitcake that does not constitute a social movement, in my view.
Another issue with the McCarthy & Zald definition is that it contains no mention of power
relationships, leaving open the possibility of “social movements” focused on trivial matters, as in
the fruitcake example.
Charles Tilly proposes a narrower definition than McCarthy and Zald, which requires
visible action that is sustained over time and enjoys support beyond the persons taking the
actions. This second requirement precludes classifying marginal movements as “social
movements,” at least until they cease to be marginal and garner broader support. The Tilly
definition also introduces the notion of power, which implies that the substantive issue cannot be
trivial:
a sustained series of interactions between power holders and persons successfully
claiming to speak on behalf of a constituency lacking formal representation, in the course
of which those persons make publicly visible demands for changes in the distribution or
exercise of power, and back those demands with public demonstrations of support (Tilly,
1984: 306, cited in Diani, 1992 p. 5).
The definition put forth by Alain Touraine is the narrowest, as it only allows for a single
social movement to exist in a given society at one time:
the social movement is the organized collective behavior of a class actor struggling
against his class adversary for the social control of historicity in a concrete
community (Touraine, 1981:77, cited in Diani, 1992:5).
Social movements, then, cannot be divorced from the “dominant conflict” in a given
society. There are of course movements of opposition and support about other conflicts, but for
Touraine these should be referred to as “national movements,” “communitarian movements” or
some other qualifier. In what Touraine calls the “programmed society,” that conflict is between
technocrats and their adversaries, replacing the earlier conflict between the labor and capital
classes. The Touraine definition also requires the two opposing “class adversaries” to be engaged
in a struggle for “historicity,” which is the “overall system of meaning which sets dominant rules
in a given society” (Touraine, 1981:81, cited in Diani, 1992:5).
A social movement is thus characterized by a coherent set of visible actions that are
sustained over time and link to shared identity and/or values, that challenge existing power
structures in a given society, and that enjoy a base of support that is broader than the individuals
taking action. The digital rights movement is specifically concerned with the exercise of human
rights in a digitally mediated society, notably the rights to privacy, freedom of expression, and
access to information. This transnational network of individuals, non-governmental organizations
and (more rarely) political parties uses a variety of methods to advocate for a free and open
internet where everyone can produce, access and disseminate the content of their choice without
arbitrary interference by dominant structures of power (restrictions resulting from due process
and designed to safeguard the rights of others, e.g. restrictions based on copyright, hate speech
laws, or prohibitions on imagery of child abuse, are generally accepted within the movement). The
development of digital rights technology, which is the focus of this study, is part of that
repertoire of contention.
Digital rights proponents do not form a distinct in-group with clear markers of
membership. Rather, they comprise a “collectivity” (Turner & Killian, 1987). Some scholars
have framed communication rights activism as a “movement-nexus” that connects disparate
movements to one another (Hackett & Carroll, 2006), and identified an “embryonic”
transnational movement for communication rights (Padovani & Pavan, 2009). On the other hand,
Stefania Milan’s study of radio activists and radical techies between 2006 and 2012 concludes
that there is no “social movement dynamic” (2013, p. 175) linking the two groups, even though
they both engage in emancipatory communication practices. She found that the two groups were
aware and supportive of each other’s work, partly thanks to a small number of individuals who
participated in both activities, but that “community radio projects and radical internet projects
remained two distinct universes, with two separate cultural systems of action” (p. 175).
Indeed, I did not encounter any references to community radio activism in the course of
my fieldwork. However, the tool developers I interviewed identified as part of a “digital rights
space” and/or a “net freedom community” that also includes policy, research and advocacy
groups that operate at global, regional, and national levels. Christina Dunbar-Hester’s work on
radio activists (2014) shows that such groups are themselves connected to community radio
activists in their own communities, thus serving as bridges between the globally-minded techies
and the locally-minded radio activists. Jeffrey Juris (2008) makes a similar argument about
Indymedia, whose network of Independent Media Centers helped connect geographically distant
activist hubs.
Milan’s reluctance to identify a social movement uniting these two sets of actors in her
study is grounded in her observation that alliances between organizations are “contingent and
episodic” and that “what survives in between synthesis moments does not score highly in terms
of collective identity” (p. 175-176). My experience as a scholar-activist reveals that there is a
widespread sense that “the space” is a social movement, but that, as Stefania Milan (2013) — a
scholar-activist as well — observes, individuals who attend international conferences and
gatherings or participate in transnational projects are more likely to self-identify as part of a
transnational movement than those whose activities are mainly domestic. Nevertheless, shared
values like freedom of expression, privacy, and respect for individual autonomy provide
commonality even as other values — diversity, gender equality, intersectionality — are more
contested. Shared values provide the sense of being on the same side emphasized by Diani.
Milan’s point about the “contingent and episodic” nature of the digital rights movement
merits further engagement (2013, p. 175-176). She uses the metaphor of a karst river to describe
the dynamic at hand:
In geology, a karst river is a river that surfaces only occasionally, running underground
for most of its course. When it surfaces it may take a different name, but it is still
classified as a river because it has the features of a river — namely, it is water running
towards the sea. Similarly, a karst movement is visible only in the presence of favorable
environmental conditions, such as open policy windows and episodes of repression,
which may encourage groups from different backgrounds to come together and act on a
shared definition of a situation. In periods of quiescence, a karst movement runs
underground, concentrating on challenging prevailing cultural codes by means of
prefigurative communicative realities (Milan, 2013, p. 176).
Jack Z. Bratich (2018) calls these underground phases “dormancy,” using sleep as an
analogy to describe how movements “recompose” themselves between periods of visible activity.
He argues that dormancy is essential to movements’ health and longevity: “activity without
dormancy is a recipe for burnout, unsustainability, and collapse” (p. 14). These are the periods
during which movements develop what Zeynep Tufekci (2017a) has called “network
internalities”: “the benefits and collective capabilities attained during the process of forming
durable networks” (p. 75), which are the fruit of collectively pursuing joint tasks, no matter how
trivial. Breaks in a movement’s most visible activities should therefore not be seen as evidence
that the movement has stalled.
Notwithstanding this important point, two factors explain the divergence between Milan’s
analysis and mine. The first is that the two studies have different boundaries: Milan’s looks at
two types of emancipatory communication practices (i.e. community radio and volunteer tech
infrastructure), while mine examines technology development as an emancipatory
communication practice for the specific purposes of censorship circumvention and secure
communication. Second, the timeframe of Milan’s fieldwork for this study (largely conducted
between 2006 and 2012) predates many of the developments around which the digital rights
movement has coalesced: successive generations of information controls; an infusion of cash
from governments and private funders linked to the ideology of Internet Freedom; Edward
Snowden’s revelations about NSA surveillance; public recognition of the increasingly central
role played by Silicon Valley corporations in managing global information flows; and myriad
reactions to these events. To continue with Milan’s karst metaphor, changes in environmental
conditions may have prompted the once-hidden river to surface permanently.
The digital rights movement’s membership is indefinite, and it lacks a formal leadership
structure. Individual NGOs have formal leaders, but the movement as a whole does not. The
“continuity” requirement is met: digital rights proponents have been advocating for net
neutrality, privacy, and free expression for several decades now, albeit punctuated by periods of
dormancy. Their actions are sustained through time and pit the movement against “power
holders,” including governments and information and communication technology (ICT) companies.
The movement is global and heterogeneous. It includes men, women, and non-binary
people; hackers, lawyers, artists, politicians, and journalists; anarchists, socialists, libertarians,
Democrats, Republicans, and everything in between. The repertoire of contention includes grass-
roots advocacy campaigns; lobbying elites behind the scenes; technology development; user
education; investigative journalism, and more. Some believe in incremental change through
coalition-building and compromise; others argue that only a revolution will bring about a better
future. Notwithstanding these differences, members of the digital rights movement are keenly
attuned to the importance of communication rights for the practice of all social movements. The
next section examines how ICTs have become the backbone of social movement organizing, but
also how reliance on ICTs can imperil movements.
Digital communication networks in the practice of social movements
In this section I detail the specific ways that ICT tools have transformed the practice of
social movements. This is important because the digital rights movement positions itself as a
defender of the right of all social movements to exist. Understanding the basis for this claim
requires understanding how ICTs are used in social movements. ICT tools include platforms like
email, social networking sites, and microblogging; hardware like cheap, powerful laptops and
smartphones; and software like encrypted messaging applications, anonymous web browsers,
and virtual private networks (VPNs). The use of these tools has transformed the practice of social
movements in many ways, both good and bad. I have identified four principal mechanisms
through which ICTs have enhanced the practice of social movements:
First, in the network society writ large, the unit of analysis has shifted from the
organization to the project. Human rights advocacy (broadly defined) has shifted from a model
based on large nongovernmental organizations, with offices in multiple locations, comprising
departments focusing on various sub-issues (Human Rights Watch, Amnesty International), to a
networked ecology model where individuals and smaller organizations organize around discrete
projects (Encrypt All The Things, Ranking Digital Rights, onlinecensorship.org). Prior to the
widespread adoption of email, in particular, centrally coordinated hierarchical organizations were
the most effective way to coordinate the work of large numbers of people (Lebert, 2003; Metzl,
1996). For example, Joanne Lebert’s chapter in “Cyberactivism: Online activism in theory and
practice” (2003) describes the impact of ICTs on the structure and communicative practices of
Amnesty International. While email was enthusiastically adopted by staff, volunteers, and the
broader Amnesty membership, public-facing websites raised a series of questions. How would
the International Secretariat ensure the quality and consistency of materials published by the
organization’s sections around the world? How could Amnesty design websites to be more than
mere repositories of material? Would shifting focus from communicating with internal audiences
to communicating with outside audiences risk losing control of the organization’s narrative?
These are quandaries that all NGOs had to confront in the late 1990s and early 2000s.
Of course, more challenges would come later with the rise of social media (Lebert, 2003).
Advocacy organizations and social movements are not the same thing, but many (perhaps most)
social movements are connected to professional advocacy organizations. Changes in the structure
of advocacy organizations thus have implications for the structure of social movements. If each
project is a star in the constellation of projects, then the social movement is the broader galaxy.
Within the digital rights galaxy are “legacy” human rights organizations (Amnesty
International and Human Rights Watch among them) as well as a growing number of
organizations and projects devoted to digital rights in a specific region, or to a subset of digital
rights issues. This includes the Electronic Frontier Foundation, La Quadrature du Net, and the
Digital Rights Foundation, among others. Additionally, the Association for Progressive
Communications (APC) serves as a network of networks, connecting the broader digital rights
community and coordinating joint activities (Salter, 2003). Many digital rights NGOs are not
(formal, dues-paying) APC members yet would likely identify with APC’s values statement,
which includes decentralized action, freedom of expression, support for free/open source
software, and social and gender equality. The APC itself is a globally distributed network with no
offices or headquarters — something that would have been impossible without ICTs.
Second, ICTs (notably social networking sites and microblogging services) have
expanded the public sphere and led to the “electronic grassrooting of civil society” (Castells,
2001, cited in Carty, 2015). This transformation is connected to the first, reducing the reliance on
formal leadership structures and on the mass media to coordinate actions, set the agenda, and
communicate with power elites and the general public. The latter is especially important when
the traditional mainstream media either ignores the social movement or presents it in an
unflattering light. Instead, activists can blog, live-tweet, stream live video footage, and otherwise
tell their stories without going through the mass media’s filter. The creation of Indymedia as part
of the anti-corporate globalization movement exemplifies this dynamic (Juris, 2008). More
recently, the Black Lives Matter movement in the U.S. and elsewhere has leveraged these
affordances to draw attention to police violence.
Third, using ICTs to organize boycotts, motivate supporters to attend rallies, gather
petition signatures, and fundraise is vastly more cost-effective (in terms of money, time and
effort) than telephone trees or going door-to-door. This allows movements to grow faster, across
greater distances, and to coordinate with groups elsewhere that share the same goals or values.
The many collaborations between the various Indignad@s and Occupy movements are a prime
example (Carty, 2015). The increased efficiency also frees up resources for other purposes. For
example, since the advent of email Amnesty International has significantly reduced the
proportion of its activities represented by letter-writing campaigns, allowing the organization to
focus on research and advocacy. A 2015 study looked at the role of peripheral nodes (individuals
who are connected to participants in social movements, but are not active themselves) in
spreading awareness of an issue and inspiring action, concluding that “peripheral users in online
protest networks may be as important in expanding the reach of messages as the highly
committed minority at the core” (Barberá et al., 2015). At the individual level these “slacktivists”
are less important than the active participants in the social movement, but because of their much
larger numbers, in aggregate they can have an impact that is just as consequential.
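A back-of-the-envelope calculation makes the aggregate point concrete; the figures below are purely hypothetical and are not drawn from Barberá et al. (2015).

# A committed core: few users, each reaching many people.
core_users, reach_per_core_user = 1_000, 500
# A large periphery of "slacktivists": many users, each reaching few people.
peripheral_users, reach_per_peripheral_user = 200_000, 5

core_reach = core_users * reach_per_core_user                    # 500,000
peripheral_reach = peripheral_users * reach_per_peripheral_user  # 1,000,000

print(f"Core aggregate reach: {core_reach:,}")
print(f"Peripheral aggregate reach: {peripheral_reach:,}")

Even though each peripheral user in this toy example reaches two orders of magnitude fewer people than a core activist, the periphery’s aggregate reach is twice that of the core.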
Fourth, ICT innovations have expanded the repertoire of contention (Tilly, 2006)
available to social movements. In addition to organizing kinetic activities like marches, street
protests, rallies, and demonstrations, activists use the internet to collect signatures for petitions,
spread memes and viral videos, and, more controversially, conduct distributed denial of service
(DDoS) attacks (Sauter, 2014). The World Wide Web has dramatically reduced the costs of
spreading information and expanded the means available to do so: books and pamphlets in
electronic format, yes, but also interactive websites, downloadable datasets, high-production
videos, and even games are used by social movements to reach new audiences. Developing
digital rights technologies is also part of this repertoire of contention.
But reliance on ICTs has also weakened social movements in some important ways.
Sociologist Zeynep Tufekci (2017a) argues that ICTs have transformed the “trajectories” of
social movements and the public sphere. Pre-internet movements needed to establish robust
networks of participants before they could organize mass actions, and so staging a protest was a
strong signal that the movement was large, cohesive, and powerful. Today, garnering mass
support and organizing relatively simple actions is greatly facilitated by ICTs, to the point that
many movements (especially on the left) resemble an “adhocracy,” where whoever shows up,
and is willing and able, gets things done. This has the benefit of helping movements grow
quickly, but allows them to avoid difficult conversations about goals, strategies, tactics, and
internal governance. Movements that leap-frog over this hard work of building what Tufekci
calls “network internalities” are often ill-equipped to sustain themselves in the long run, and they
can easily fizzle out without accomplishing their goals.
Genealogy of the digital rights movement
This section surveys the history of communication rights contestation since the 1970s to
argue that digital rights technology development is part of a decades-long effort to secure
communication rights from governmental and corporate control (Croeser, 2012). The 1970s were
selected as the point of departure because that decade corresponds to the emergence of a truly
globalized media ecosystem facilitated by computerization. The digital rights movement is a
subset of the broader struggle for communication rights, one that focuses on technical infrastructure
rather than media content. This infrastructure supports independent media organizations, the
practice of journalism, and the work of social movements dedicated to any number of issues by
subverting governmental and corporate efforts to control information flows. Today, the digital
rights movement’s most prominent battles focus on internet censorship and on surveillance (for
reasons discussed in Chapter 1), but it is important to consider its relationship to previous eras of
communication rights activism, particularly as the debate shifts to increasingly complex matters
like artificial intelligence, algorithmic governance, and media manipulation.
In “Social movements and their technologies: Wiring social change” (2013), scholar-
activist Stefania Milan traces the evolution of global debates over communication policy and the
relationship of those debates to the geopolitical concerns of the time, focusing on the 1975-2005
period, which she breaks down into three decades: the institutional period (1975-1985), civil
society engagement (1985-1995), and the renaissance of media activism (1995-2005). In this
section I present a summary of Milan’s discussion before moving on to the 2005-2015 period. The
2015-2018 period corresponds to my fieldwork, and is discussed in the case studies.
The institutional period (1975-1985)
The 1975-1985 period was marked by the Cold War between the United States and the
Soviet Union, and by former colonies (mostly of the British and French empires) asserting their
new independence on the world stage. They joined the United Nations, refused to trade the
colonialism of Empire for allegiance to either of the day’s superpowers by forming the non-
aligned movement, and critiqued the political and economic systems that kept them subjugated.
Notably, the world-system perspective (Wallerstein, 1974) and dependency theory (Amin, 1976)
highlighted the continued economic colonialism that characterized North-South relationships,
theorizing that the wealthy “center” of the world economic system depleted the developing
“peripheries” of their natural and human resources and transformed them into value-added
products in the “center,” before selling them in the “peripheries” for a hefty profit. This resulted
in a systematic transfer of wealth from the global South to the global North (Galtung, 1964).
Dependency theory influenced communication scholars like Edward Said (1979, 1994)
and Herbert Schiller (1976) to develop the concept of cultural imperialism, meaning the
domination of one culture over another following a center-periphery power structure. Even as
they pushed for economic reform in other fora, countries from the non-aligned movement
proposed a New World Information and Communication Order (NWICO). They argued that
existing communication infrastructure and policy was a legacy of the colonial era that threatened
developing countries’ sovereignty and culture, and posed barriers to their economic development.
Rich countries, led by the U.S., disagreed. The main site of these debates was the United Nations
Educational, Scientific and Cultural Organization (UNESCO), whose secretariat was largely
sympathetic to the NWICO agenda. It convened a number of intergovernmental meetings around
the world, and published documents referring to the rights of all countries to “have equal access
to all the sources of information and take part on an equal footing in the control over and use of
international channels of dissemination” and calling for “treat[ing] the communication sector not
only as a support to development but as an integral part of the development plan
itself” (UNESCO, 1976, p. 39, cited in Milan, 2013, p. 26).
In 1980, UNESCO’s International Commission for the Study of Communication
Problems issued a report (“The MacBride Report,” after the commission’s chair, Seán MacBride) framing the NWICO
debate not as a matter of governments’ right to determine communication policy and practice
within their borders, but as a basic individual “right to communicate” as “a prerequisite to many
other [rights]” (MacBride, 1980, p. 253, cited in Milan, 2013, p. 26). The report called for “free,
open and balanced communication” (p. 253) while criticizing the free-flow doctrine for its links
to cultural imperialism and calling for the development of alternative media structures
independent from both governmental and commercial interests. Its findings were endorsed by the
UN General Assembly and found support within both the Soviet Bloc and the Non-Aligned
Movement, but the U.S.-led Western alliance rejected what it (accurately) saw as an attack on
neoliberal capitalism and a threat to business profits in the media and telecommunications sector.
It must also be noted that the Soviet Union’s support for NWICO was rooted in a desire to
undermine Western economic and cultural domination, and not in a commitment to its citizens’
communication rights. Indeed, Moscow and other Eastern Bloc governments severely limited
domestic and transnational information flows and ruthlessly persecuted dissidents throughout this period, as did many governments in the Non-Aligned Movement (this is not to say that governments in the NATO alliance were perfect in this regard; far from it). Ronald Reagan assumed the American presidency in 1981, and his two terms in office were characterized by “aggressive unilateralism,” a “resurgence of the Cold War,” and American withdrawal from both UNESCO and the International Labour Organization (Milan, 2013, p. 27).
While the UN charter provides for “suitable arrangements for consultation with non-
governmental organizations which are concerned with matters within [the UN]
competence” (Article 71, cited in Milan, 2013, p. 28), global civil society was largely absent
from the NWICO debates, with the exception of professional associations of journalists. Some of
these favored measures to protect communication rights by restructuring global media industries
while others (notably conservative groups from the Global North) argued that NWICO might
threaten free expression, thus opposing reform (Giffard, 1989; Milan, 2013; Preston et al., 1989).
These groups’ engagement with global policy issues inspired the formation of similar expert
associations like the Media Alliance (San Francisco, 1976), the Campaign for Press and
Broadcast Freedom (United Kingdom, 1979), and the Chaos Computer Club (Berlin, 1981). At a
time when most groups focused on broadcasting and to a lesser extent print media, the Chaos
Computer Club brought together computer hackers “striv[ing] across borders for freedom of
information” (Bennett, 2008; Kubitschko, 2015). Meanwhile in the United States, Richard Stallman launched the GNU project and founded the Free Software Foundation, inspiring the free and open source software (F/OSS) movement. And in Europe and Latin America, various groups launched
free and community radio projects that asserted their right to communicate. Global civil society at the time mainly consisted of small organizations operating at the local level and “rooted in
elitist sectors of society” (Milan, 2013, p. 28). They collaborated with academics to document
Western media companies’ involvement in patterns of cultural imperialism and to “promote
progressive principles in intergovernmental settings” (Milan, 2013, p. 29), though they were
largely ineffective at achieving the latter goal. The 1975-1985 period also saw the emergence of
“new social movements” (Touraine, 1981; Melucci et al., 1989) focused on the environment and
world peace, which “stressed the individual and cultural dimensions of participation” and
“link[ed] the call for justice of Western activists to the independence struggles of ‘Third World’
countries” (Milan, 2013, p. 24).
Civil society engagement (1985-1995)
This decade saw the fall of the Berlin Wall (1989), the dissolution of the Soviet Union
(1991), and the end of a number of dictatorships across Latin America. Exiles’ return home and
democratization provided a ripe context for organized civil society to flourish. At the same time,
deregulation of the media and cultural sectors — something that media and telecommunications
companies had lobbied for — “set in motion a process of concentration of media ownership” that
ultimately led to both vertical and horizontal integration (Milan, 2013, p. 30), and this commercial media “played a crucial role in spreading capitalist lifestyle models and values” (Herman & McChesney, 1997, cited in Milan, 2013, p. 30). The World Trade
Organization (WTO) was established in 1995, the North American Free Trade Agreement (NAFTA) came into effect in 1994, and the Maastricht Treaty (implemented in 1993) formally created the European
Union. These three institutions, along with the Washington Consensus triad (comprising the
International Monetary Fund, the World Bank, and the U.S. Treasury Department) would come
to symbolize the neoliberal economic model.
Starting in the mid-1980s, the falling costs of personal computing and dial-up
connections combined with the increased political opportunities afforded by democratization to
spur an expansion in the number of non-governmental organizations and in their scope of action.
Notably, many organized to oppose the market globalization symbolized by the creation of the
World Trade Organization (WTO), the North American Free Trade Agreement (NAFTA), and
widespread deregulation of various sectors that had previously been the purview of the state.
Importantly, these groups used new technologies to coordinate with one another. For example, in
1984 the so-called “Velletri agreement” joined several organizations devoted to sustainable
development and women’s rights in a coalition that would leverage ICTs to share skills and
knowledge across borders (Interdoc, 1984; Milan, 2013). The Interdoc experiment was only the
first in a series of innovations designed to “provide grassroots activists with cheap ways of
sharing text-based information” (p. 32). And in 1990, “non-profit ISPs joined forces in the
Association for Progressive Communications (APC), to ensure that ‘all people have easy and
affordable access to a free and open internet to improve their lives and create a more just world’”
(Milan, 2013, p. 32). The APC continues to be a core institution in today’s digital rights
movement.
Meanwhile in California, the nascent cypherpunk movement was organizing around the
use of cryptography by ordinary people, communicating largely through the eponymous
Cypherpunk Mailing List (see West, 2018). The word “cypherpunk” itself is a portmanteau of
“cipher” and “cyberpunk,” a subgenre of dystopian science fiction, coined by Jude Milhon at one
of the group’s early in-person meetings. Key cypherpunk texts include the Crypto Anarchist
Manifesto (May, 1992), the Cypherpunk Manifesto (Hughes, 1993), David Chaum’s 1985 essay, “Security without identification: Transaction systems to make Big Brother obsolete” (Chaum, 1985), and J.P. Barlow’s “Declaration of the Independence of Cyberspace” (Barlow, 1996).
Among Barlow’s key legacies is the creation of the Electronic Frontier Foundation (EFF) in
1990, a San Francisco-based NGO that defends civil liberties in the digital world through
litigation, policy analysis, grassroots activism, and technology development. Like the APC, the
EFF continues to be a central actor in the digital rights movement.
Public-key cryptography had been developed in the mid-1970s by Whitfield Diffie and
Martin Hellman, paving the way for the general public to use strong encryption, which had previously been available only to governments. In 1991, Phil Zimmermann developed the Pretty Good Privacy (PGP) software, allowing any internet user with a modicum of skill to encrypt his or her emails. Moreover, Zimmermann made his software available to anyone on the global
internet, which the U.S. federal government argued violated legal restrictions on exporting
munitions, including software. Extensive litigation and public debate (the so-called “Crypto
Wars”) resulted in commercial encryption being moved from the Munitions List to the Commerce
Control List and in the federal government’s abandonment of the “Clipper Chip” (Levy, 2001),
an NSA-designed encryption device for telephones that would have included a law enforcement
backdoor. This was a victory for the cypherpunks and other privacy advocates, and made
commercial innovations in online banking and electronic commerce possible by protecting the
confidential information transiting over the internet. Ironically, crypto’s victory in the 1990s
would pave the way for surveillance capitalism or data capitalism (Zuboff, 2015; West, 2017), as
the growth of online commerce would not have been possible without the ability to encrypt
sensitive information, as my colleague Sarah Myers West argues in her dissertation (West, 2018).
In turn, the online advertising industry could not have taken the path that led to Google and
Facebook’s business model without the explosion of online content that coincided with the late
1990s tech bubble.
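The mathematical idea at the heart of public-key cryptography is compact enough to sketch directly. The following is a minimal illustration of Diffie-Hellman-style key agreement in Python, using deliberately tiny toy parameters; real systems rely on primes hundreds of digits long and on audited libraries, and nothing below is drawn from PGP’s actual implementation:

# Minimal sketch of Diffie-Hellman key agreement with toy parameters.
# WARNING: illustrative only; these numbers are far too small to be secure.
import secrets

p = 23  # small public prime modulus (real systems use 2048+ bit primes)
g = 5   # public generator

# Each party picks a private exponent and publishes g^secret mod p.
alice_secret = secrets.randbelow(p - 2) + 1
bob_secret = secrets.randbelow(p - 2) + 1
alice_public = pow(g, alice_secret, p)  # safe to send over an open channel
bob_public = pow(g, bob_secret, p)

# Each side combines its own secret with the other's public value.
alice_shared = pow(bob_public, alice_secret, p)
bob_shared = pow(alice_public, bob_secret, p)

# Both arrive at the same shared secret without ever transmitting it;
# at realistic key sizes, an eavesdropper who sees only p, g, and the
# public values cannot feasibly recover that secret.
assert alice_shared == bob_shared

The point of the sketch is the asymmetry: the public values can travel over a wiretapped line, yet the shared secret never does.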
The renaissance of media activism and birth of surveillance capitalism (1995-2005)
Global civil society continued to grow and interconnect thanks to new communication
technologies, much in the same way that governments, businesses, and ordinary people used
ICTs to connect across previously daunting barriers and distances. Many such NGOs coalesced
to oppose various aspects of the neoliberal wave of globalization promoted by the Washington
Consensus and symbolized by the rise of free trade agreements and supranational entities like the
EU, the WTO, and NAFTA, among others. While much media coverage referred to these civil
society networks as “anti-globalization,” a more accurate terminology would be “alternative
globalization” or “the movement of movements” (della Porta, 2007; Juris, 2008). Indeed, these
groups didn’t necessarily oppose the increased cross-border exchanges of goods, services, ideas,
and people; they opposed the specific terms under which these exchanges were structured,
particularly the neoliberal world order’s bias for financial profits at the expense of human well-
being. High-profile street protests erupted around the 1999 WTO Summit in Seattle and the 2001
G8 meeting in Genoa, and the nascent global civil society started organizing its own summits,
such as the World Social Forum in Porto Alegre, Brazil, first held in 2001, which was intended as a counterpoint to the World Economic Forum in Davos (Juris, 2008). The UN reformed the status of NGOs (ECOSOC Resolution 1996/31), increasing civil society’s opportunities to participate in global
governance and setting the stage for multistakeholder summits bringing together
intergovernmental entities, governments, corporations, and civil society voices. The opening of
these political opportunities coupled with the coordination affordances of ICTs led to the
proliferation of “transnational coalitions” and a true “global civil society” (Milan, 2013; Kaldor,
2003; Van Rooy, 2004).
In addition to fostering coordination at a global scale, continued innovation in ICTs
helped propel emancipatory communication practices (ECPs) as part of the activist toolkit. ECPs
are “ways of social organizing seeking to create alternatives to existing media and
communication infrastructure” (Milan, 2013, p. 9). For example, custom-built websites,
independent video productions, and web-based radio stations became standard ways for activists
to bypass the traditional media and to get their message directly to the public and potential
supporters.
The Zapatista struggle in Chiapas, Mexico, is one of the canonical examples of early
digitally-enabled protest. Another example is the creation of Indymedia during the 1999 “Battle
of Seattle,” when protesters set up an Independent Media Center to help them report from the
frontline themselves. The setup used open-source software developed by Australian activists that
facilitated online publishing at a time when modifying a webpage was much more technically
onerous than today. The model was so successful that Indymedia quickly blossomed into a global
collective of independent media organizations (Juris, 2008; Milan, 2013).
Online forms of disruptive protest also emerged, such as distributed denial of service
(DDoS) attacks through which groups of individuals use automated scripts to flood a target
server with more connection requests than it can handle, rendering it unreachable to legitimate users (Sauter, 2014).
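The mechanism is simple enough to model. The toy simulation below (entirely hypothetical, not based on any actual tool) treats a server as having a fixed capacity of simultaneous connections; once automated requests saturate that capacity, legitimate users are crowded out:

# Toy model of a denial-of-service attack: a server with finite capacity
# is saturated by automated requests, crowding out legitimate users.
import random

SERVER_CAPACITY = 100  # simultaneous connections the server can handle

def fraction_served(flood_requests, legitimate_requests):
    """Return the share of legitimate requests that get through."""
    arrivals = ["flood"] * flood_requests + ["user"] * legitimate_requests
    random.shuffle(arrivals)  # requests arrive interleaved
    served = arrivals[:SERVER_CAPACITY]  # everything past capacity is dropped
    return served.count("user") / legitimate_requests

for flood in (0, 100, 1000, 10000):
    print(f"flood={flood}: {fraction_served(flood, 50):.0%} of users served")

As the flood grows, the share of legitimate users served collapses toward zero, which is the entire point of the tactic.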
The hacker collective Cult of the Dead Cow coined the “hacktivism” portmanteau in 1998 to
“indicate the politically motivated use of technology: activists seek to fix society through
software and online action” (Milan, 2013, p. 36).
Though it would be years before they would garner much public attention, techies around
the world were starting to develop tools for anonymity, privacy, and censorship circumvention.
Paul Syverson, Roger Dingledine, Nick Mathewson, and others were developing the onion
routing research that would eventually become Tor. And in the Bay Area, James Marshall started
working on CGI Proxy, one of the first censorship circumvention tools, as early as the
mid-1990s. These projects, and others like them, would eventually coalesce into a technical
infrastructure allowing activists and ordinary people to bypass the commercial internet and its
ubiquitous surveillance. The political economy of this infrastructure is, of course, the subject of
the present study.
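The core idea behind onion routing can likewise be sketched in a few lines. In the toy example below, a message is wrapped in three layers of encryption, one per relay; each relay can peel off only its own layer, so no single relay learns both where a message came from and where it is ultimately going. This is a conceptual illustration using the Python cryptography library’s Fernet recipe, not Tor’s actual protocol, which involves interactive circuit construction and its own cryptographic design:

# Conceptual sketch of onion routing: layered encryption, peeled hop by hop.
from cryptography.fernet import Fernet

# Each relay in the circuit holds its own key (entry, middle, exit).
relays = [Fernet(Fernet.generate_key()) for _ in range(3)]

message = b"meet at the usual place"

# The sender applies the layers in reverse order, so the entry relay's
# layer is outermost and the exit relay's layer is innermost.
onion = message
for relay in reversed(relays):
    onion = relay.encrypt(onion)

# Each relay peels exactly one layer and forwards the remainder.
for hop, relay in enumerate(relays):
    onion = relay.decrypt(onion)
    print(f"relay {hop} forwards {len(onion)} bytes onward")

assert onion == message  # only after the last hop is the plaintext exposed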
The UN held its first summit dedicated to communication issues, the World Summit on
the Information Society (WSIS), in two phases: Geneva in 2003 and Tunis in 2005. The role of
ICTs in economic development was particularly emphasized. While a number of governments
objected to civil society participation, arguing that activists’ input was more properly channeled
through their national governments, civil society participated in the summit in large numbers and
organized parallel events to bolster independent voices. These efforts resulted in the Alternative
Geneva Declaration, titled “Shaping Communication Societies for Human Needs” — as opposed
to the more narrow interests championed by corporate interests and many governments (Milan,
2013; WSIS Civil Society Plenary, 2003). NGO attendance at the Tunis meeting was much
sparser, partly due to authoritarian-leaning governments actively preventing their countries’
nationals from participating (Deibert, 2013; MacKinnon, 2012).
In the technology industry, e-commerce grew rapidly after strong encryption became
widely available following the Crypto Wars (Levy, 2001), creating the “dot com bubble” that
would burst in the early 2000s. Businesses and individuals were going online, creating more
websites than ever before, as well as demand for new tools to help navigate this exciting new
world. This spurred Stanford PhD students Larry Page and Sergey Brin to develop a new search
engine that determined webpages’ relevance based on their popularity, founding Google in 1998
(see Noble, 2018 for a critique of Google’s search algorithms). The young company financed
itself by selling advertising spots on the search engine’s results page, displaying ads deemed
relevant to the search terms entered by the user — contextual advertising. Over time, Google
expanded its product offerings, with the result of collecting increasing amounts of information
about its users, and linking each datapoint — email content, web searches, and more — to a
single Google Account, which in turn often reflected the user’s offline identity. The more
information Google had about its users, the better it could target them with advertising, and the
more it could charge for those ads. This business model, which would come to be recognized as
“surveillance capitalism” (Zuboff, 2015) or “data capitalism” (West, 2017), spread across Silicon
Valley and decimated newspapers’ advertising revenues, destabilizing the entire industry and, in
time, disrupting the global news and media ecology. Meanwhile, in a Harvard dorm room across
the country, a young Mark Zuckerberg and his friends were developing the site that would
become Facebook.
Liberation technology and the surveillance society (2005-2015)
The pace of change accelerated during the 2005-2015 period, which saw rapid adoption
of both smartphones and social networking sites, as well as the most significant global financial
crisis since the Great Depression.
As the tech scene in Silicon Valley and elsewhere continually invented new online
services provided to the end user for free (or nearly so), venture capital firms and other early
investors pressured start-ups to earn profits. Google in particular quickly realized that its most
valuable asset was the data it held about the users of its myriad “free” products, and used
increasingly sophisticated “big data” techniques to translate that data into advertising sales.
Online advertising up until then had largely been contextual, inferring the audience’s value to the
advertiser from the website’s content. Targeted advertising offered the prospect of displaying ads
based on the individual person’s recorded online behavior. Google and Facebook quickly
monopolized the online advertising market, which streamlined the ad-buying process,
concentrated revenue into the two companies’ hands, and put significant strain on traditional
journalism outlets (West, 2017; Zuboff, 2015).
Politically, the “War on Terror” that had been declared following the September 11, 2001 terrorist attacks in New York and Washington continued unabated. As a series of
whistleblowers would reveal over the course of several years, the U.S. National Security Agency (NSA) developed a massive system of global telecommunications surveillance to
collect every conceivable data stream in pursuit of Total Information Awareness. A key part of
this program involved tapping into the trans-oceanic cables that carry internet and voice traffic
around the world, and copying the data held by Google, Facebook, and other major corporations
when it transited between two data centers (Greenwald, 2014; Schneier, 2015). Nearly five years
after Snowden, the general public still hasn’t internalized the deep relationship between
government surveillance, data capitalism, and their routine online behavior.
The year 2008 was marked by two momentous events that would provoke significant
backlash. The first was the 2008 global financial crisis sparked by the collapse of the U.S.
subprime mortgage market. Financial institutions had been encouraging would-be home buyers
to purchase larger and more expensive homes than they could afford, issuing high-interest loans
that new homeowners then struggled to repay. Given the interconnected nature of the global
financial system, the crisis quickly spread from the U.S., causing the most significant economic
recession since 1929. A number of governments — notably in Europe — responded by enacting
strict austerity measures that weakened the social safety net precisely when it was most needed.
The second major event of that year was the election of Barack Obama, the first
American president who wasn’t a white man. He was also the first candidate to take advantage of
the targeted advertising opportunities offered by social media, particularly Facebook (Issenberg,
2012). His presidency was a success in many respects, but his two terms in office also saw the Tea Party movement, which combines libertarian and far-right elements, overtake the Republican Party, as well as a rise in nativist, xenophobic and white supremacist political expression
(Skocpol & Williamson, 2012; Zeskind, 2012).
The 2005-2015 decade also saw waves of technologically-empowered protests and social
movements. In 2009, the “Green Movement” protests erupted after Iran’s contested presidential
election, and were dubbed the “Twitter Revolution” by many Western observers. The use of
Twitter as a mobilizing platform within Iran was overstated by foreign media at the time, but
Twitter was indeed used to share images and videos of the protests and the violent repression
they encountered beyond Iran’s borders, thus galvanizing international public opinion
(Alimardani & Milan, 2017). In early 2011, protests swept across the Arab World, demanding
better economic opportunities and just governance. Protesters pointed to the State Department
cables released by WikiLeaks a few months earlier as evidence for their grievances, and used
social media tools like Facebook and Twitter to share information, plan demonstrations, and
organize politically. Optimism regarding ICTs’ potential to foster progress was at its height. But
the repression was swift, and only Tunisia would come out of this tumultuous period with
anything resembling democracy and stability.
The Middle East was not the only region to see large-scale unrest. Born in lower
Manhattan’s Zuccotti Park in September 2011, the Occupy Wall Street movement spawned
offshoots around the globe that focused on local points of contention while highlighting common
grievances. Occupy built on the previous decades’ global justice movement to critique global
capitalism, financial volatility, and rising wealth inequality, and closely tracked the Indignados/
15-M movement that originated in Spain earlier that year (Tufekci, 2017).
But from a communication rights perspective, the bombshell of the decade was NSA
whistleblower Edward Snowden’s revelations about the extent of the U.S. government’s global
surveillance operations (Greenwald, 2014; Schneier, 2015). Public pressure increased on ICT
companies to be more forthcoming about the conditions under which they would share user
information with governments and to better protect user data through the use of encryption
(MacKinnon, Maréchal & Kumar, 2016). As companies moved in that direction, many
governments — specifically their law enforcement and intelligence agencies — warned that
ubiquitous encryption would interfere with their agencies’ missions, advocating for “responsible
encryption” solutions that would include government-only “backdoors” — something that
cryptography experts agree is impossible, as such a weakness would also be accessible to
nefarious actors (Abelson et al., 2015; Schulze, 2017). In parallel to this reemergence of the
Crypto Wars, an array of governments sought to impose data localization requirements on ICT
companies, ostensibly to protect their citizens’ data from being unlawfully accessed by the U.S.
government. However, accessing data stored outside of the U.S. is more clearly within the NSA’s
legal mandate and is no more difficult from a technical standpoint. Data localization does,
however, facilitate the work of domestic law enforcement and intelligence agencies, boost the
country’s tech sector by increasing demand for data centers, and allow politicians to score
points with voters by appearing to protect citizen privacy from American intelligence (Sargsyan,
2016). Notwithstanding this pro-privacy posturing, many of the same countries were working to
match the U.S.’s electronic surveillance capabilities or to legalize similar practices that had been
in place outside of the law, as was the case in France (Tréguer, 2017b).
This section has traced the genealogy of the contemporary digital rights movement to
broader struggles over the governance of global communication flows, to the international
human rights movement, to popular reactions against global capitalism, and to the practice of
free and open source software development. While issue areas like media deregulation, vertical
and horizontal consolidation, cultural imperialism, online censorship, encryption, government
surveillance, data capitalism, and net neutrality are sufficiently complex that they have each
spawned highly specialized research and policy communities, they are in fact deeply interwoven.
At its core, each of these debates is a negotiation over who has the right to communicate what, to
whom, and under what conditions — specifically, who can extract economic benefit from the
exchange and who will police the entire arrangement. While the sub-currents within the digital
rights space have much in common, they are rooted in very different ideologies. For example, the
cypherpunks’ libertarian commitment to “freedom through encryption,” which sees electronic
privacy as the end-goal, is in tension with the human rights and social justice movements’
emphasis on society-wide equity alongside the achievement of individual rights. This has
significant implications for the movement’s strategic priorities, modes of action, preferred
funding models and organizational forms, and for governing interpersonal relationships within
the movement.
The past three years (2015-2018) have similarly seen governments, companies, and civil
society actors contend with mutually incompatible goals and values in the communication arena.
Key themes have included ever more sophisticated online censorship tactics, an explosion in the
availability of end-to-end encrypted messaging applications, debates over net neutrality and zero-
rating, social media platforms’ vulnerability to propaganda, and the propensity of many
governments (notably in Africa, but not exclusively) to shut down the internet when they fear
protests might erupt. This time period coincided with data collection for my case studies, and
will be discussed in more depth in the case study chapters.
Tensions in the digital rights space: Gender, diversity and inclusion
The first half of this chapter offered a sociological overview of social movements and
ICTs, and historicized the digital rights movement by tracing its roots in earlier waves of
contention over communication rights. This section presents a snapshot of the contemporary
movement by considering the struggle for gender equality within the digital rights movement as
one aspect of the tensions over diversity and inclusion with which the movement is grappling.
This dynamic is closely related to similar tensions in the F/OSS community and the broader
technology sector, but with the significant difference that it occurs in the context of a social
movement that also includes explicitly feminist human rights organizations. In the course of my
fieldwork, I have found that divergent attitudes toward gender, diversity and inclusion tend to
closely track the “human rights” and “technology development” sub-currents within the
movement.
While gender is not the only such cleavage within the community, my fieldwork
coincided with a period of intense controversy within the space over gender, sexual assault,
remedy, reconciliation, and due process. As a woman in the space, I found myself observing and
participating in many intense discussions through conference calls, email threads, Signal group
chats, and in person. Rather than sign-posting these tensions and declaring them out of scope for
this project, I concluded that these developments were too important to ignore and resolved to
incorporate a focus on gender in my existing research design. My observation is that over the
2015-2018 period the tenor of these conversations evolved from a discrete “whisper network”
intended to warn individual women and genderqueer people about specific problematic men, to
collective actions designed to alter social norms and power structures within the space. This
evolution tracked similar feminist reckonings in other sectors of society, at least in North
America, that by late 2017 had coalesced into the #MeToo movement (a comprehensive
discussion of the #MeToo reckoning would be beyond the scope of this dissertation).
In addition to addressing an important and timely topic, this section also serves as a case
study of how subcultures within a social movement confront their differences and negotiate
norms for the group as a whole. Thus, it examines one pathway for change within social
movements. As such, my findings may be relevant to other struggles for justice within
movements and in the workplace. This section must be understood in the context of three key
controversies at the intersection of hacking cultures and the digital rights space: Jacob
Appelbaum’s ouster from the Tor Project in 2016 (also discussed in Chapter 6); the 2017 public
revelations that security researcher Morgan Marquis-Boire had been credibly accused of raping
dozens of women over the course of a decade, and that the prominent Toronto-based activist Ali
Bangi had been charged with rape and false imprisonment; and disputes over social norms for
conferences and other events, and their enforcement, notably concerning the 2017 Chaos Communication Congress (Ann-King, 2017; Brandom, 2018; Cameron, O’Neill & Larson, 2016;
Jeong 2017a; Jeong, 2017b; Locker, 2018).
A few notes on terminology
While a thorough accounting of the LGBTQ experience in the digital rights space is
beyond the scope of this project, it is important to acknowledge that this is a significant part of
the community. My observation has been that queer sexual orientations as well as trans and non-
binary gender identities are largely respected in the digital rights space, though there is ample
room for improvement. My focus here is on the exclusion and assault experienced by people who
are not cis men, generally perpetrated by people who are cis men. In this section I mainly refer to
“men” and “women” or “women*”; the latter designation should be understood as “people who
identify as women in ways that are meaningful to them.”
While acknowledging the important conceptual distinctions between these terms, in this
section I use phrases like “hacker culture,” “the F/OSS movement,” “techies” and “the security
community” somewhat interchangeably to refer to the parts of the digital rights movement that
identify primarily with those identities and cultures. Likewise, both “human rights” and “global
social justice” refer to the current within the movement that emphasizes the human over the
technical.
Is this a tech movement or a human rights movement?
Disputes over gender, diversity and inclusion in the digital rights space stem from
seemingly irreconcilable cultural differences between the movement’s forebears in the human
rights movement and in the F/OSS community. Attitudes toward gender and sexual violence are
not the only manifestation of this cultural chasm, of course, and much of this discussion may be
generalizable to other identity markers, including race, sexual orientation, gender identity, class,
nationality, and disability.
The hacker community only partly overlaps with social justice activist circles, and there
are of course many sites of feminist contestation in hackerspaces, makerspaces, and F/OSS
projects that lack explicit social justice agendas. Likewise, some actors in the digital rights
movement who focus on policy and advocacy interact only minimally with hacking subcultures.
There are gender issues there too, of course, but my focus here concerns the collision between
the cultural norms and values of hacking on the one hand, and social justice activism on the
other. Digital rights technology development is both highly technical and explicitly justice-
oriented, and is thus a key site for observing this collision.
The technology sector’s hostility to women is well-documented, and digital rights
technology development is no exception. Women were underrepresented within each of my case
studies, and I found that toxic misogyny was at the root of many of the Tor Project’s
longstanding problems: the group’s early leadership adhered to an ideology of “programmer supremacy” (a construct coined by Karen Reilly, a former Tor Project employee who was forced out of the organization for trying to blow the whistle on Appelbaum; see Chapter 6), which held that the contributions of coders, programmers, and engineers were most valuable, and devalued other types of expertise. This dynamic fostered a climate of selective impunity that allowed prominent Tor developer and “evangelist” Jacob Appelbaum to (according
to credible allegations) harass and abuse his colleagues as well as others in the broader digital
rights space for years (see Chapter 6).
But digital rights technology does not occur in a vacuum. As discussed earlier in this
chapter, these practices are embedded in a broader social movement that is linked to the F/OSS culture and to the international human rights movement. While their alliance is rooted in shared
normative and strategic goals, these two broad groupings are nonetheless divided by significant
ideological differences. Questions of gender, diversity and inclusion crystallize these differences
in several ways.
First, in my fieldwork I found that there are demographic differences between the two
groups. While coders, developers, and other technologists are overwhelmingly male, other (“non-
technical”) professional roles (such as lawyers, researchers, advocates, and policy analysts)
approach gender parity. As part of my ethnography, I asked participants to describe their career
path, which allowed me to identify two broad archetypes: people tended to either come from a
technical background and later develop a commitment to human rights and justice issues, or they
started their careers in human rights activism and later came to appreciate the relevance of
technology to social justice. Unsurprisingly, there are more men in the first group, and more women* in the latter (I fall in the latter category). Members of both groupings bring their educational, professional, and
personal baggage to the digital rights space, and this is reflected in approaches to gender,
diversity, and inclusion.
Second, the two groups disagree about the nature and scope of the problem. Many
women* contend that the physical and virtual spaces where hacking cultures are created and
maintained are neither welcoming nor safe for them, citing demeaning and sexist language,
gendered harassment (both online and offline), and physical assault (including rape) as barriers
to participation. This is more than a matter of wanting to be allowed to join the boys’
“clubhouse” (Dunbar-Hester, 2018): for many women, this is a matter of being successful at their
chosen profession. If your job requires you to attend hacker conferences to give talks, learn about
the cutting edge of the field, and network with others, but people who look like you are routinely
treated as sexual objects rather than professionals, you may have to choose between being safe
and being successful. Across industries, more and more women are refusing to make that choice,
and demanding cultural change instead, notably through diversity initiatives “aimed at recasting
the ‘social infrastructure’ that determines how technical communities form and
proceed” (Dunbar-Hester, forthcoming, p. 1). Christina Dunbar-Hester (2018) argues that these practices “are essentially an extension of the geek practice of ‘argument by technology,’ an outgrowth of the ‘rough consensus and running code’ ethos that defines free software movements” (p. 1). But beyond demanding equity for individual women, social justice advocates
further argue that diversity is good for the community as a whole and that it enhances the quality
of work output, for example by ensuring that the needs of diverse end-user communities are
considered.
Though some are supportive, in many cases techies respond to these demands by either denying that the problem is real (supporters of Jacob Appelbaum, for example, maintain that key details of the allegations against him, as posted online, are inaccurate, and cast doubt on the allegations as a whole), arguing that Codes of Conduct and other norm-setting efforts are unnecessary and would be unproductive (because good people already know not to rape, and bad people will anyway), or suggesting that those who don’t like the community as it stands should “fork” it and start their own events (in software development, “forking” refers to copying a project, generally code, and taking it in a different direction than envisioned by the group behind the original project). They view restrictions on harassing, threatening or
offensive speech as an assault on free speech that, to them, is more dangerous than a culture that
marginalizes women, people of color, or other groups. A related but distinct line of argument is
that an emphasis on diversity and inclusion jeopardizes the “real work,” understood as
programming, near-exclusively. Many people within the Tor Project excused Appelbaum’s
behavior for years by arguing that he “did good work” (a debatable claim; nearly everyone I spoke to about him referenced his penchant for taking credit for other people’s work, or for joining projects at the last minute before insisting that co-authors be listed alphabetically) and that the project needed his
contributions more than those of the people (often women*, but not exclusively) whom he drove
away. As Christina Dunbar-Hester explains in a forthcoming book chapter (2018), the
“meritocracy” of F/OSS culture reinforces existing structures of privilege and oppression while
insisting that such structures do not exist.
At this stage it is worth mentioning that the conversation has shifted lately from an
emphasis on access to a focus on safety. Dunbar-Hester’s fieldwork and data collection
(2011-2014) coincided with widespread discourses about the need to increase women’s
participation in STEM fields, with a variety of rationales, including women’s empowerment,
utilizing all of society’s talents, and creating better software to meet the needs of more diverse
user bases. As Dunbar-Hester notes, F/OSS culture is in dialogue with the broader culture, and it
stands to reason that discourses in the broader society would be reflected in the subculture.
Similarly, my fieldwork and data collection for this project (2014-2018) started around
Gamergate, continued through the pervasive misogyny of the 2016 U.S. presidential campaign,
and concluded as the #MeToo reckoning (and its sister movements in various countries and
industries) crescendoed. If the leading concern during the 2011-2014 period was encouraging
women to participate in technology development (often framed as “the pipeline problem”), the
2014-2018 period focused on what happens to women when they do try to participate: not only
sexist jokes and demeaning comments, but also frightening harassment and sexual assault. This
is not to say that assault and rape did not occur prior to 2014. Rather, the public ouster of Jacob
Appelbaum from the Tor Project and several other public rape allegations helped change the
tenor of the conversation, as a large number of interview participants from various parts of the
digital rights space emphasized. (Gamergate refers to a cultural battle, largely conducted online starting in 2014, pitting proponents of diversity and gender equality in video games, the tech industry, fan culture, and other “geek” communities against those who resist such cultural change. The latter group, primarily white males, led a coordinated campaign of harassment and intimidation, including rape and death threats, against women whom they derided as “social justice warriors.” Many scholars and analysts see Gamergate as a precursor of the so-called “alt-right” movement that helped propel Donald Trump to the White House.)
This brings me to a third point of friction. Even when everyone involved agrees that rape
and assault occur and should be prevented, the two groupings disagree about the appropriate way
to address harassment, assault and rape. The social justice camp promotes enforceable Codes of
Conduct, processes for handling allegations, and actions that prioritize the safe attendance of
alleged victims over accused perpetrators’ participation. Techies typically retort that the accused
are innocent until proven guilty in a court of law, notwithstanding the fact that most legal
systems have shoddy records of adequately handling sexual harassment, abuse and assault, as the
opposing camp is quick to point out. Moreover, the idea that women* are asking to be able to
exclude anyone simply by request is a straw man argument: the demand is for clear guidelines
and processes, and that allegations of harassment, abuse and assault be taken seriously rather
than summarily dismissed, as is typically the case today. Finally, the techie rhetoric claims that
committing to holding abusers accountable will inevitably result in ruined lives and careers.
Aside from the fact that this rhetoric erases the ruined lives and careers of victims, it also
exaggerates the impact of minor penalties like being asked to leave a party or banned from a
specific conference.
Professionalization: A contested solution
(Several early readers of this chapter noted that “professionalization” can be a fraught term because of how “professionalism” has been used to marginalize people who don’t fit the cisgender, heterosexual, white mold of the U.S. workplace. Attitudes toward black women’s hairstyles are but one example of this dynamic. Here I use “professionalization” in the narrow sense of “being more like a workplace” while acknowledging the limitations of this framing.)
In the aftermath of the Appelbaum scandal, Valerie Aurora, Mary Gardiner, and Leigh
Honeywell (2016) published a blog post titled “No more rock stars: How to stop abuse in tech
communities” that would influence the next phase of anti-abuse activism within the digital rights
movement and adjacent spaces. Honeywell, a software engineer, had experienced Appelbaum’s
disregard for other people’s stated boundaries firsthand while she was sexually involved with
him in 2006-2007 (Honeywell, 2016). The post describes Appelbaum as only one exemplar of a
common archetype in tech communities, the rock star:
A rock star likes to be the center of attention. A rock star spends more time speaking at
conferences than on their nominal work. A rock star appears in dozens of magazine
profiles – and never, ever tells the journalist to talk to the people actually doing the
practical everyday work. A rock star provokes a powerful organization over minor issues
until they crack down on the rock star, giving them underdog status. A rock star never
says, “I don’t deserve the credit for that, it was all the work of…” A rock star humble-
brags about the starry-eyed groupies who want to fuck them. A rock star actually fucks
their groupies, and brags about that too. A rock star throws temper tantrums until they get
what they want. A rock star demands perfect loyalty from everyone around them, but will
throw any “friend” under the bus for the slightest personal advantage. A rock star knows
when to turn on the charm and vulnerability and share their deeply personal stories of
trauma… and when it’s safe to threaten and intimidate. A rock star wrecks hotel rooms,
social movements, and lives (Aurora, Gardiner & Honeywell, 2016).
The authors advise tech communities (including the digital rights space) to value
“plumbers” over “rock stars,” citing a tweet from Molly Sauter:
Seriously, "rock stars" are arrogant narcissists. Plumbers keep us all from getting cholera.
Build functional infrastructure. Be a plumber.
— Molly Sauter (@OddLetters) June 17, 2016
The post doesn’t use the word “professionalization,” but that is essentially what Aurora,
Gardiner & Honeywell are calling for. I use the term in a lay sense: members of tech
communities should adhere to the established norms of the workplace with regard to decorum,
use of alcohol and other drugs, sexual behavior or imagery, and accountability for their behavior.
The blog post’s recommendations for “preventing rock stars” amount to intentionally building
communities and organizational cultures, rather than allowing wannabe rock stars to create
microsocieties that revolve around them:
• Have explicit rules for conduct and enforce them for everyone
• Start with the assumption that harassment reports are true and investigate them
thoroughly
• Make it easy for victims to find and coordinate with each other
• Call people out for monopolizing attention and credit
• Insist on building a deep bench of talent at every level of your organization
• Flatten the organizational hierarchy as much as possible
• Build in checks for “failing up”
• Enforce strict policies around sexual or romantic relationships within power
structures
• Avoid organizations becoming too central to people’s lives
• Distribute the “keys to the kingdom”
• Don’t create environments that make boundary violations more likely
In other words, Aurora, Gardiner and Honeywell are calling for tech communities to
resemble the workplace rather than a social event. But as Christina Dunbar-Hester notes, “it
seems that many who oppose CoCs [Codes of Conduct] understand the conflict to hinge on
whether open tech spaces should be more like a ‘clubhouse,’ with a free-wheeling, voluntaristic
ethos that welcomes free expression and a ‘come as you are’ mentality, or a workplace, with a list
of rules handed down by a Human Resources (HR) department and the attendant threat of
punishment for perceived transgressions” (Dunbar-Hester, 2018, p. 16). For many who resist the
“no more rock stars” agenda, establishing rules for interaction destroys their freedom to act as
they please and reproduces the oppressive norms of mainstream society. As Dunbar-Hester points
out, this echoes J.P. Barlow’s “Declaration of the Independence of Cyberspace,” rejecting the
encroachment of the physical world’s laws and norms into cyberspace:
You claim there are problems among us that you need to solve. You use this claim as an
excuse to invade our precincts. Many of these problems don't exist. Where there are real
conflicts, where there are wrongs, we will identify them and address them by our means.
We are forming our own Social Contract. This governance will arise according to the
conditions of our world, not yours. Our world is different. (Barlow, 1996)
But tech organizations, conferences and movements are not separate from the physical
world. They are part of it, with problems that are remarkably similar to those in other sectors. In
the digital rights space as elsewhere, the total freedom of lawlessness quickly yields a “tyranny
of structurelessness” (Freeman, 1972) where the loudest, most powerful, least empathetic voices
have primacy over others. It is clear that this law of the jungle benefits certain classes of people
over others. In the digital rights space (and elsewhere), the privileged tyrants tend to be male and
technically proficient. It would be easy to conclude that these hackers reject Codes of Conduct
and other markers of professionalization as part of a conscious plan to preserve their own
privilege. While that is doubtless true for some, there are other reasons to reject
professionalization.
The first is that the “clubhouse” is fun, perhaps especially for young men for whom
participation in the digital rights space is a fulfilling hobby rather than a career. Tech conferences
in particular tend to be anarchic, counter-cultural spaces where Club Mate (a highly caffeinated beverage popular in hacking cultures, particularly in Germany; in my experience it is an acquired taste, but I am told it pairs nicely with clove cigarettes and/or marijuana spliffs), alcohol and other
drugs flow freely, and philosophical conversations, hacking and dancing go late into the night
and early morning (Coleman, 2013). Hackers are attached to aesthetic markers of anarchy that
set them apart from mainstream culture, and may fear that professionalization will force them to
work standard hours while wearing formal clothing. But there are also more principled concerns
about the consequences of professionalization. Professional organizations — as opposed to
anarchist hacker collectives — need a steady influx of cash to pay for salaries, benefits, office
space, and other expenses. This in turn requires partnering with funding organizations (typically
governments, companies, or foundations) who may try to impose their own norms and values,
thus changing the group and its mission. Many hackers thus perceive Codes of Conduct and
other forms of professionalization as steps on the pathway to selling out, as well as unacceptable
restrictions on free expression.
Chapter conclusion
This chapter has sketched a portrait of the transnational movement dedicated to
communication rights in the internet age, commonly known as “the digital rights space,” placing
it in the context of the sociology of social movements and discussing the impact that the very
same technologies that the digital rights space defends and promotes can have on the practice of
social movements in general. The digital rights space is rooted in several interwoven traditions of
contestation and social innovation, notably the F/OSS movement, the international human rights
movement, the cypherpunks, and the alternative globalization movement. There is significant
tension between the techie and social justice groupings, made evident by recent disputes over
gender, diversity and inclusion.
Specifically, the question of whether organizations and conference organizers have a
responsibility to prevent sexual harassment and abuse, and how they should do so, has pitted
proponents of greater professionalization (notably through the implementation of Codes of
Conduct) against others who value a rhetoric of “freedom” over other goals like equal access to
conference spaces and professional opportunities. The dynamic has been present for years, but
has taken on more urgency since 2016, when former Tor Project “rock star” Jacob Appelbaum
was ousted from the organization (and the space as a whole) due to a longstanding pattern of
bullying and abuse. The 2017 news that two other prominent “rock star” men in the space,
security researcher Morgan Marquis-Boire and expatriate Iranian dissident Ali Bangi, also had
extensive records of intimidation and assault, further galvanized reform efforts and helped quell
claims that Appelbaum had been an aberration. But I know of at least one other (alleged) serial
rapist who has not been publicly named, and whose victims struggle to pursue their digital rights
careers without suffering further assault at his hands. There are a number of formal and informal
efforts underway to root out abusers like this man and to prevent further instances by changing
the cultures of organizations and conferences. It is too soon to tell how these efforts will pay off,
but as one activist told me, “It’s clear that toxic masculinity is a problem in every community.
But if there’s a space that has a shot at fixing the problem and coming out stronger for it, it’s this
one” (Interview, December 7, 2018, Los Angeles, CA).
Chapter 4 - A political history of U.S. internet freedom funding
Introduction to the chapter
This dissertation’s first chapter presented a historical overview of the growth of
information controls, defined as “techniques, practices, regulations or policies that strongly
influence the availability of electronic information for social, political, ethical, or economic
ends” (Citizen Lab website, 2015). The focus of this project is the social construction of digital
rights technologies that enable internet users to thwart censorship and better protect their privacy
online. Such technology development is part of the repertoire of contention of the transnational
social movement for digital rights, which I examined in Chapter 3. Many such projects also have
ideological, financial, and/or institutional ties to the U.S. government’s Internet Freedom agenda,
and these ties are the subject of much controversy both within the digital rights movement and in
broader discourses about technology, politics and society. Though the present chapter was not
part of my initial research design, I realized early in my fieldwork that many of my research
participants had only a limited understanding of the Internet Freedom agenda and the funding
streams that sustain many of technologies they relied on for their day-to-day communication
needs, including Signal and Tor.
I also observed that proponents and critics of “Internet Freedom” tended to talk past each
other. In a recent essay, Jack Goldsmith (2018) identified two principles of Internet Freedom as it
was articulated in the 1990s: the commercial non-regulation principle and the anti-censorship
principle. These are both negative freedoms, or “freedoms from,” that track two bedrock
American political values: free enterprise (“the pursuit of happiness”) and the First Amendment.
The U.S. government’s Internet Freedom agenda is designed to export these values around the
world and embed them in the global infrastructures of internet governance, tying these policies to
the national interest, notably by promoting the commercial interests of U.S. technology
companies. In contrast, David Kaye (2018), United Nations Special Rapporteur on the promotion and protection of the right to freedom of expression and opinion since 2014, argues that Goldsmith’s two principles represent a “supply-side” conception of Internet Freedom that links “its success or failure to American policy, American norms, and America’s ability to overcome foreign resistance” (para. 2).
Like many activists I encountered throughout my field work, Kaye emphasizes a
“demand side” that focuses on people’s demands for basic human rights in a digitally networked
society: rights to free expression, access to information, privacy, self-determination, equal
treatment, and more. Similarly, Nani Jansen Reventlow and Jonathan McCully (2018) argue that
the two-principle vision of Internet Freedom defined by Goldsmith is an incomplete one, noting
that the United Nations Human Rights Council (UNHRC) has “affirmed that the same rights that
are protected offline must be protected online” (para. 4). This includes not only privacy and
freedom of expression, but also the rights to education, to freedom from discrimination, and
more. Where Goldsmith defines “Internet Freedom” very narrowly and in relation to American
values and policies, Kaye, Jansen Reventlow and McCully argue for a more expansive view that
is rooted in international human rights law — what David Pozen (2018) calls the “de-
Americanization” of Internet Freedom.
Throughout this dissertation, I use the phrase “digital rights” to refer to this more
expansive view of “Internet Freedom,” in contrast to many other scholars and practitioners who
use the two phrases more or less interchangeably. In my experience, this is a source of much
confusion and misunderstanding. Clarifying the distinctions between the two concepts is a
prerequisite for addressing one of this study’s primary research questions, concerning the
relationship between the two milieus.
To that end, the previous chapter examined the emergence of the transnational social
movement for digital rights and analyzed one prevalent tension within the movement. This
chapter traces the political history of Internet Freedom programming, placing it in the broader
context of the Internet Freedom agenda, U.S. foreign policy, and international relations
scholarship. Here, Internet Freedom programming refers to the activities carried out by civil society organizations (“implementers,” in State Department lingo) with U.S. government
funding. Programming is distinct from policy, which refers to U.S. government efforts to
incorporate normative values of Internet Freedom in intergovernmental and multistakeholder
fora.
Somewhat paradoxically, much of the funding that sustains organizations in the digital
rights arena flows from the U.S. government under the aegis of Internet Freedom funding and/or
public diplomacy activities. It is likely that many digital rights tools, including three of my case
studies (Psiphon, Tor, and Signal) would not exist without this support. Thus, analyzing the
political economy of digital rights technology requires an understanding of the ideology and
policy goals behind Internet Freedom agenda funding, the bureaucratic politics behind the
specific ways that monies are appropriated and spent, and how Internet Freedom policies and
funding intersect with (and sometimes contradict) other policy initiatives.
This chapter focuses on a narrow aspect of the Internet Freedom agenda: the program
funding disbursed by the State Department, the Broadcasting Board of Governors (BBG) and
their sub-entities like the Bureau of Democracy, Human Rights and Labor (DRL), the Bureau of
Near Eastern Affairs (NEA), Radio Free Asia (RFA), and the Open Technology Fund (OTF) —
among the primary conduits for this funding. Specifically, the chapter traces the political history
of this funding through the halls of power in Washington in order to shed light on the strange
relationship between American foreign policy and the techno-activists who develop digital rights
tools. Importantly, these programs derive their mandate from Congressional appropriations
linked to human rights rather than to the two Clinton-era principles identified by Goldsmith
(2018).
Chapter roadmap
The chapter begins with a discussion of some of the recent scholarship on the Internet
Freedom agenda and what I see as shortcomings in the literature. Next, I provide an overview of
the scholarly literature on the relationship between ICTs and U.S. power, focusing on the Internet
Freedom agenda. The bulk of the chapter traces the history of program funding (i.e., funding that
is granted to external groups) associated with Internet Freedom, emphasizing the bureaucratic
processes through which vast sums of public monies are appropriated and spent. In particular, I
examine the relationship between the bureaucrats who manage Internet Freedom programs and
the civil society groups who implement them.
Data, sources and methods
This chapter combines participant observation in the Internet Freedom policy arena,
interviews with current and former officials at various federal agencies implicated in Internet
Freedom initiatives, and content analysis of primary documents with a review of the extant
scholarly literature on Internet Freedom, U.S. foreign policy, and bureaucratic politics. Most of
the foreign policy insiders I interviewed requested anonymity but were nonetheless eager to
share their stories and insights with me, seeming to trust me because of my own position in the
Internet Freedom policy community (see chapter 2).
Rehabilitating Internet Freedom
The Internet Freedom agenda has gotten a bad rap in recent years. There are a number of
reasons for this, including the blatant contradiction between internet freedom rhetoric and the
mass surveillance programs revealed by Edward Snowden; the failure of the technologically
mediated Arab Spring and assorted Color Revolutions to usher in a new wave of
democratization; and growing awareness of all the ways that ICTs can be used to oppress rather
than liberate. Casual observers should be forgiven for thinking that the Internet Freedom agenda
has run its course, particularly given the Trump administration’s rejection of human rights
rhetoric more generally. Jack Goldsmith (2018) makes a compelling argument that the Internet
Freedom agenda (narrowly defined by the two principles of commercial non-regulation and anti-
censorship) has failed: far from turning dictatorships into democracies, the internet as we know
it has helped authoritarians consolidate their grasp on power and "is being used for ill to a point
that threatens basic American institutions" (para. 7).[27]
[27] Goldsmith is referring to the Russian interference in the 2016 U.S. election, among other
troubling ongoing issues.
But in another sense the Internet Freedom agenda is alive and well, expending tens of
millions of dollars annually on grants and contracts to civil society organizations and private
companies whose work furthers the free flow of information, online privacy, and human rights in
the digital age. These financial flows notably sustain much of the development of digital rights
technologies and therefore contribute to shaping the very nature of communication in the 21st
century, but existing scholarship does not adequately grapple with the connections between
ideology, government policy and social movement practice. The resulting analyses, in my view,
fail to engage with the civil society efforts that Internet Freedom funding enables, and are thus
built on empirically shaky ground. (Respecting the confidentiality of Internet Freedom grantees
makes this difficult: the State Department has a policy of not disclosing grantee identities, though
implementers are free to self-disclose their funding sources).
Much of the recent scholarship on the Internet Freedom agenda emphasizes the
connection between the rhetoric of internet freedom and the American pursuit of power in the
international system. In “The Real Cyber War,” Shawn Powers and Michael Jablonski argue that
the Internet Freedom agenda “serve[s] to legitimize a particular political economy of globalism,”
and that “America’s ‘free flow’ doctrine is a strategic vision to legitimize a specific geopolitical
agenda of networking the world in ways that disproportionately benefit Western governments
and economies” (p. 203). Moreover, as Taylor Owen points out (2015), it is indisputable that the
Internet Freedom agenda’s lofty rhetoric does not quite square with other dimensions of U.S.
foreign policy:
The core challenge of the State Department’s Internet freedom agenda is not that
circumvention tools are a bad idea, or that the censorship and surveillance programs they
are meant to counter are not damaging to civil society. It is that at the same time as the
21st century statecraft program was supplying Syrian dissidents with counter-
surveillance technology the US government was simultaneously building a large-scale
surveillance program of its own. What’s more, at the same time the United States was
supporting these dissidents to oppose certain regimes, the regimes were often buying
their surveillance hardware from American corporations, at the same technology trade
fairs as the US intelligence agencies. It is an understatement to say that activists will be
suspicious of US efforts going forward. No matter how genuine the intentions, the State
Department’s Internet freedom agenda cannot be isolated from the wider actions of the
US government (Owen, 2015, p. 164).
Some observers have drawn connections between the Internet Freedom agenda (again, as
narrowly defined by Goldsmith) and cyber-libertarianism, “the belief that individuals—acting in
whatever capacity they choose (as citizens, consumers, companies, or collectives)—should be at
liberty to pursue their own tastes and interests online,” with the goal “to minimize the scope of
state coercion in solving social and economic problems and looks instead to voluntary solutions
and mutual consent-based arrangements" (Thierer & Szoka, 2009, para. 1-2). David Golumbia,
an outspoken critic of cyber-libertarianism, summarizes the ideology as “computerization will set
you free,” and identifies a number of corollaries to this central tenet, including:
a resistance to criticism of the incorporation of computer technology into any sphere of
human life; a pursuit of solutions to perceived problems that takes technical methods to
be prior to analytic determination of the problems themselves; a privileging of
quantificational methods over and above, and sometimes to the exclusion of, qualitative
ones; the use of special standards for evaluating computational practices that differ from
those used in evaluating non-computational ones; and an overarching focus on the power
of the individual and individual freedom, even when that individual is understood to be
embedded in a variety of networks (2013, p. 1).
However, I contend that though cyber-libertarianism, digital rights and Internet Freedom
share certain commonalities, they are distinct ideologies that should not be conflated.
While these critiques of the Internet Freedom agenda are important, they suffer from a
major methodological blindspot. For understandable reasons, much of the extant scholarship on
the Internet Freedom agenda, and U.S. internet policy more broadly, is based on analyses of
official speeches, policy documents, and other primary sources that are either publicly available
or obtainable via the Freedom of Information Act (FOIA). As a result, this literature privileges
Internet Freedom policy over Internet Freedom programming, which is especially problematic
because policy generally adheres to a narrower definition of Internet Freedom, compared to
programming’s more expansive embrace of human rights. Moreover, the State Department’s
policy of keeping grantee identities confidential makes it difficult for scholars to evaluate
program effectiveness and impact. In this chapter, I build upon the existing literature by
interviewing current and former officials involved in Internet Freedom programs, as well as
recipients of Internet Freedom grants and contracts, and by participating in closed-door meetings
between government officials and civil society representatives. The insights I have gleaned from
this research should be viewed as complementing, not contradicting, previous research, even as
they complicate traditional notions of policymaking.
Moreover, an exclusive focus on the Internet Freedom agenda’s shortcomings risks
obscuring its successes and impeding an empirical analysis of how it is implemented. This is
particularly important in the context of the Trump administration’s rejection of human rights
rhetoric and praise of brutal autocrats worldwide. Again, critics of U.S. foreign policy are right to
point out the hypocrisy of promoting human rights and Internet Freedom while pursuing mass
surveillance programs, illegal assassination-by-drone programs, and other gross violations of
human rights. Rather than denigrating what the U.S. government gets right because it gets so
much else wrong, I argue that human rights advocates should bolster those efforts and resist the
impulse to throw out the good with the bad.
In light of this dissertation’s focus on the development of digital rights technology, I am
particularly interested in how the U.S. government came to support these tools, and how the
Internet Freedom bureaucracy intersects with the transnational social movement for
communication rights, including groups that develop digital rights tools. In this chapter, I revisit
the recent history of Internet Freedom funding to argue that it is indeed rooted in a longstanding
American commitment to human rights, including online, and that it would be a mistake to
dismiss Internet Freedom programming as mere window-dressing for American empire. While
some observers (Levine, 2018) contend that grantees of the Internet Freedom agenda are “funded
by spies” (p. 219) and that digital rights organizations like the Tor Project “operat[e] as a de facto
arm of the U.S. government” (p. 237), I argue that the relationship reflects a dialectic of mutual
influence and negotiation that is more complicated than concepts like "regulatory capture" and
"astroturfing" allow.
ICTs and U.S. Power
Internet Freedom is part and parcel of U.S. foreign policy, and the International Relations
(IR) field is an appropriate point of departure for its analysis. Classical IR theory is primarily
concerned with the relationships between nation-states in the physical world, and the field is still
grappling with the addition of cyberspace to the existing models of the international system. In
this section, I discuss how IR scholars have approached the relationship between ICTs and U.S.
power, consider the weaknesses of these approaches, and engage with some of the political
science literature on bureaucratic politics. This literature review constitutes important
background for the subsequent section, which provides a political history of Internet Freedom
funding. In turn, this history is crucial for analyzing the relationship between the U.S. Internet
Freedom agenda and the transnational social movement for digital rights.
The international relations field has primarily engaged with ICTs through two paradigms:
“technological instrumentalism” and “technological essentialism” (McCarthy, 2015).
Instrumentalism regards technology as neutral, contending that it is the use to which it is put that
matters, and that the design process (including the values embedded therein) is irrelevant. The
opposing view, essentialism, approaches technology as having “inherent characteristics” (p. 20)
and often implies that technology plays a causal role in social change. Critically, this school
includes both techno-optimists and techno-pessimists. This is the stance underlying both the
“technology as liberation” and “technology as control” frames discussed in Chapter 1, which
respectively parallel liberal and realist thinking. McCarthy sees both schools as “determinist”
and “ultimately untenable” (p. 19).
Madeline Carr (2016) also identifies two dominant schools within IR that grapple with
technology and power, which track McCarthy’s paradigms closely. What she calls the “industrial
age: technology and national power” (p. 32) paradigm combines a realist understanding of power
and an instrumentalist view of technology. Realists see nation-states as unitary actors whose
actions are guided by self-interest, more specifically maximizing power (and therefore security)
in an international system characterized by anarchy. In this view, ICTs are tools that aren’t so
dissimilar from conventional weapons in that they can be used defensively, aggressively, and
symbolically, as advanced technology can serve as a signal of the country’s overall material
resources and capabilities.
In contrast, the “information age: technology and social power” paradigm is rooted in
determinist understandings of technology. For Carr, much of this scholarship sees ICTs as causal
factors in social change since the end of the Cold War, frequently assuming that such
technologies have universal effects on state power. Carr concludes that the dialogue between
these two paradigms amounts to “a stalemate based on competing answers to the question of
whether new technology enhances state power more than it undermines it” (p. 33), a question
that cannot be answered definitively because ICTs are too varied and complex to have universal
effects.
Of course, both of these paradigms are rather simplistic compared to the more nuanced
Science and Technology Studies (STS) approaches described in Chapter 1. Nevertheless, these
are the paradigms that have dominated most IR scholarship and much of U.S. foreign policy,
including the Internet Freedom agenda, for decades. McCarthy and Carr both advocate
incorporating the Social Construction of Technology (SCoT) approach into IR scholarship, and
the present study is a response to this call, as it considers the human and social processes behind
four technologies whose impact on the relationships between people, states, and institutions is
becoming more apparent by the day. Each of the case study chapters examines the unique
processes that led to the tool in question, while this chapter focuses on the ideology, policy and
practice of Internet Freedom funding, which directly supports Psiphon, Tor, and Signal — but
not Telegram.
Some critiques leveled at the Internet Freedom agenda are grounded in a perception that
Internet Freedom is naive (Morozov, 2011) and “essentialist” (McCarthy, 2015) or
“deterministic” (Carr, 2016), while others condemn it as a tool of U.S. policy, suggesting an
instrumentalist understanding of ICTs. Public statements from State Department officials support
both views:
We don’t promote Internet freedom or connective technologies as a means of promoting
“regime change.” We promote the freedoms of expression, association and assembly
online and offline because these universal freedoms are the birthright of every individual
(Posner, 2012, cited in Carr, 2013, p. 632).
But also:
We are also supporting the development of new tools that enable citizens to exercise their
rights of free expression by circumventing politically motivated censorship… We want to
put these tools in the hands of people who will use them to advance democracy and
human rights (Clinton, 2010, cited in Carr, 2013, p. 631).
So which is it? In a 2013 article, Madeline Carr argued that during the Arab Spring — in
many ways a turning point for the Internet Freedom agenda — U.S. policymakers, led by
Secretary Clinton, “laid claim to a determinist approach while actually pursuing the policies
dictated by a constructionist approach” (p. 630). In other words, public pronouncements imbued
internet technologies and social media with intrinsic democracy-enhancing powers even as the
State Department in particular took steps to help shape the technological tools at activists’
disposal by funding specific censorship circumvention and secure messaging tools.
This strategy allowed the U.S. government to rhetorically distance itself from the
upheaval in the Middle East: “by imbuing technology with agency and purpose, we may deny
having it ourselves, and therefore be released from the burden of accountability” (Winner, 1980,
cited in Carr, 2013, p. 631). Some officials (such as Alec Ross and Michael Posner) emphasized
that digital rights are human rights, arguing that the U.S. government funds digital rights tools to
benefit everyone, regardless of the political uses to which they may or may not be put. Secretary
Clinton, on the other hand, drew a clear connection between digital rights tools and democracy-
promotion as a U.S. foreign policy objective. It is likely impossible to know to what extent these
statements are truthful or politically expedient at the individual level. At the institutional level,
this suggests that the State Department is of two minds about funding digital rights tech: it is the
right thing to do AND the strategic thing to do.
My fieldwork confirms that both viewpoints are prevalent within the State Department:
DRL officials typically emphasized the human rights frame, while Danny Sepulveda (then the
U.S. Coordinator for International Communications and Information Policy) volunteered that the
Internet Freedom agenda is "totally about regime change,"[28] and it is inarguable that American
technology companies have greatly benefitted from the commercial non-regulation principle
(Powers & Jablonski, 2015). In my analysis, both views are genuinely held by players
throughout the policymaking apparatus, and it is equally false to claim that either one is the “true
position” of the U.S. government. Carr notes that it is “difficult to argue that this is simply
‘bottom-up’ change — and not, to some extent at least, also top-down” (2013, p. 633), but it is
equally incorrect to say that Internet Freedom is purely top-down. As David Kaye (2018) argues,
there is in fact genuine demand for what Internet Freedom has to offer, albeit with diverging
definitions. As I detailed in chapter 3, the vibrant social movement dedicated to communication
rights in the internet age has deep roots in earlier waves of contestation, and it influences the
practice of Internet Freedom programming in a number of concrete ways.
[28] Interview, March 16, 2015. Washington, DC.
The bureaucratic politics of Internet Freedom
While international relations scholarship traditionally views nation-states as unitary
actors who rationally pursue their own self-interests, much in the same way that individual
human beings do, the bureaucratic politics paradigm focuses on factors at the individual and
organizational levels of analysis (Allison and Halperin, 1972). In this paradigm, the basic unit of
analysis is not the policy outcome but the discrete actions of government, defined as “the various
acts of officials of a government in exercises of governmental authority that can be perceived
outside the government” (Allison and Halperin, 1972, p. 45). In order to explain, predict or plan
government actions, it is necessary to identify the action channels, or "regularized sets of
procedures for producing particular classes of action” (Allison and Halperin, 1972, p. 45). These
action channels can be divided into “that portion which leads to decisions by senior players and
that part which follows from those decisions” (Allison and Halperin, 1972, p. 46). Actions are
generally preceded by a series of decisions, defined as “authoritative designations, internal to a
government, of specific actions to be taken by specific officials” (Allison and Halperin, 1972, p.
46).
Decisions, in turn, are the result of complex interactions between players, or “games.” In
the case of Internet Freedom, the key players include members of Congress serving on relevant
committees and subcommittees, their staff (in both committee offices and the members’ personal
staff), and executive branch officials at the State Department (notably DRL), the Broadcasting
Board of Governors, Radio Free Asia, the Open Technology Fund, and a handful of other
agencies. While a precise accounting of the “bureaucratic games” in which these players engage
would require access to internal communications and other classified information, and is
therefore not possible in the context of this dissertation, I hope that the observations and analysis
in this chapter will help inform any such studies that may be conducted in the future. More
immediately, this chapter enhances current understandings of how the “sausage” of Internet
Freedom is made, thus informing my analysis of the relationship between the Internet Freedom
agenda and digital rights activism, one of this study’s overarching research questions.
Some critics of the Internet Freedom agenda are inherently skeptical about any policy
that serves the U.S. national interest. Implicit in this critique is the assumption that international
politics are a zero-sum game, and that what is good for the United States must be bad for other
actors. This is undoubtedly the case some of the time, but not always. Governments with heavy-
handed information controls like China, Iran and Russia see Internet Freedom as a zero-sum
game, but how does the U.S. government construct its understanding of Internet Freedom as a
national interest?
Allison and Halperin argue that “perceptions of issues or arguments about the national
interest do not begin ab initio,” but rather are rooted in “a foundation of shared assumptions
about basic values and facts” that are “reflected in various attitudes and images that are taken for
granted by most players” (Allison and Halperin, 1972, p. 56):
Most participants accept these images. Their idea of the national interest is shaped by
these attitudes, and their arguments are based on them. Most participants tend to interpret
the actions of other nations to make them consistent with held images, rather than
reexamining basic views. Even those in the bureaucracy who do not share some or all of
these values and images are inclined to act and to argue as if they believed them. They do
this because to do otherwise would make them suspect by other members of the
bureaucracy (Allison and Halperin, 1972, p. 56).
The connection between the free flow of information, the privacy of electronic
correspondence, democracy promotion, human rights, and the U.S. national interest is one such
“shared assumption,” and is discursively sustained through appeals to Internet Freedom
throughout various high-level policy documents, as we will see in the subsequent section. The
current and former government officials I interviewed all pointed to such documents when
discussing the Internet Freedom agenda’s legitimacy, highlighting the bipartisan support behind
Internet Freedom and its backers in both Congress and the executive branch.[29]
[29] The exception being the Trump White House and the Tillerson State Department, neither of
which has shown any indication that Internet Freedom is on its radar.
To extend Allison and Halperin's top-down model, Ralph S. Brower and Mitchel Y.
Abolafia (1997) propose a bottom-up approach that focuses on actors who rank lower within the
bureaucratic hierarchy. The authors’ ethnographic study of public managers in state government
found that “because of their relatively less powerful hierarchical positions, lower-level
participants engage in political activities that are primarily about the pursuit of identity rather
than specific organizational outcomes” (Brower and Abolafia, 1997, p. 305). For this reason, I
focused my attention on the working-level program officers at the State Department, the Open
Technology Fund, and other agencies who perform the daily work of funding Internet Freedom
programs: issuing funding solicitations (calls for proposals, essentially), tracking and evaluating
applications, disbursing funds, monitoring periodic progress reports from implementers (grant
recipients), communicating with the broader digital rights community, and coordinating activities
with other elements of the foreign policy bureaucracy.
Ideologies, Policies and Practices of Internet Freedom
This section historicizes the ideological underpinnings, policies and practices of Internet
Freedom as an element of U.S. foreign policy by tracing the evolution of American international
communication policy over the past five decades. As we will see, Internet Freedom is connected
to earlier policy commitments in the context of the Cold War, notably the Free Flow Doctrine
and democracy promotion, that share a normative adherence to liberal-democratic-humanist
values while also advancing U.S. power in the international system. This chapter examines how
the U.S. government promotes global internet freedom through grant-making, with particular
attention to the institutions and processes that transform American taxpayer dollars into
support for global civil society organizations. Two agencies command the lion’s share of Internet
Freedom program funding: the State Department (including the Bureau of Democracy, Human
Rights, and Labor (DRL) and the Bureau of Near Eastern Affairs (NEA)), and the Broadcasting
Board of Governors (including Radio Free Asia’s Open Technology Fund). My research and
analysis consequently focuses on these two entities, while acknowledging that others are
implicated as well.
This chapter narrowly focuses on Internet Freedom programming, but it bears repeating
that the Internet Freedom agenda does not exist in a vacuum. As Shawn Powers and Michael
Jablonski (2015) note, the longstanding American commitment to unimpeded communication
flows is in tension with a historical willingness to support limitations (in the form of intellectual
property protections like trademarks and patents, for example) that benefit U.S. companies. The
contradictions between the Internet Freedom agenda and mass surveillance programs are even
more glaring, and it is undeniable that U.S. domestic policy often fails to meet the standards that
the foreign policy apparatus extolls abroad (the FCC’s repeal of the 2015 network neutrality
order comes to mind). A thorough accounting of these tensions and contradictions would be
outside the scope of this dissertation, however.
Before Internet Freedom: Promoting Democracy, Human Rights… and U.S. Empire
The U.S. Internet Freedom agenda is rooted in the Free Flow doctrine that drove
American international communication and media policy throughout the postwar period, a
"strategic vision to legitimize a specific geopolitical agenda of networking the world in ways that
disproportionately benefit Western governments and economies" (Powers & Jablonski, 2015, p.
203). The Free Flow doctrine is tied both to technological innovations and to the growth of U.S.
empire in the decades following World War II.
From the mid-19th century onward, successive waves of innovation in communications
technology (i.e., the telegraph, radio, telephone, satellite, fax, and internet) decoupled the
transmission of information from the physical artifacts on which it was encoded, with a number
of consequences for international politics, as governments sought to coordinate the governance
of cross-border communication and took advantage of the new technologies’ affordances to
project power across ever greater distances. These consequences included the creation of the first
intergovernmental organization (the International Telegraph Union, in 1865), which would later
be incorporated into the United Nations system as the International Telecommunication Union;
the diminishing independence of foreign diplomats from their capitals; the emergence of public
opinion as a consideration in foreign policy making; and more (Hanson, 2008).
The United States emerged from World War II as one of two hegemonic powers in a
bipolar world system (the other being the Soviet Union), and before long the onset of the Cold
War would lead the U.S. to deploy military, economic, and cultural assets around the world. The
Truman Doctrine, the Marshall Plan, and the reconstruction of Japan aimed to encourage Western
Europe, Japan, and later South Korea to develop into free-market democracies that would be
reliable allies and trading partners. This goal was largely achieved, setting the stage for decades
of democracy promotion and lofty human rights rhetoric, but also military interventionism and
covert operations meant to stave off a “domino effect” of Communist contagion.
Meanwhile, the British and French colonial empires were crumbling, and a host of newly
independent nations in Africa, Asia, and the Middle East were forging a path between the
American and Soviet spheres of influence. This Non-Aligned Movement rejected formal
allegiance to either the U.S. or the USSR and organized around five core principles: mutual
respect for territorial integrity and national sovereignty; mutual nonaggression; mutual non-
interference in domestic affairs; equality and mutual benefit; and peaceful co-existence
(Chaudhuri, 2014). These principles are in tension with other principles linked to human rights
that would gain traction later in the 20th century, such as the responsibility to protect, democracy
promotion, and internet freedom.
The Free Flow Doctrine
By the mid-1970s, the idea that relationships between rich and poor countries were
fundamentally imbalanced was well-entrenched. Theories borrowed from the economic arena,
such as the world-system perspective (Wallerstein, 1974) and dependency theory (Galtung, 1964;
Amin, 1976), were adapted for the study of international communication (Said, 1979, 1994;
Schiller, 1976). Countries at the “center” of the world system were seen as extracting both
material and symbolic goods from the developing nations at the “periphery,” transforming them
into value-added economic and cultural products, and re-exporting them back to the
“peripheries,” with the net result being a systematic transfer of wealth from the global South to
the global North and a pattern of cultural imperialism. The developing world contested this
arrangement, arguing that existing communication infrastructure and policy was a legacy of the
colonial era that threatened developing countries’ sovereignty and culture, and posed barriers to
their economic development. Consequently, they proposed reforming the arrangement. Rich
countries, led by the U.S., disagreed.
Governments erected regulatory and practical barriers to the flow of goods, services, and
cultural products across their borders, but many such barriers (particularly in the communication
arena) were vulnerable to technological advancement. New satellite technology threatened
nation-states’ control of media and broadcasting, and there was a widespread perception among
countries in the Eastern Bloc and the Non-Aligned Movement that the concept of national
sovereignty itself was in jeopardy. Many countries feared becoming de facto vassals to one of the
two superpowers. On the other hand, the United States and its NATO allies saw three mutually
reinforcing benefits to satellites. First, the day’s techno-optimists argued that “satellites were
inherently democratic as they could potentially allow everyone in the world to communicate with
each other” (Milan, 2013, p. 25). Second, satellites facilitated long-distance broadcasting into
Eastern Bloc and Non-Aligned countries, bolstering Western ideas and values in the battle for
hearts and minds. And third, the technology had the potential to open up new markets for
Western media conglomerates (Powers & Jablonski, 2015). This history would repeat itself a few
decades later with the advent of the internet.
Non-aligned countries working in partnership with UNESCO proposed a number of
media infrastructure reforms that together amounted to a New World Information and
Communication Order (NWICO), such as collective ownership of satellites and a right to reply
to media reports. The MacBride Report (1980) in particular emphasized communication as
central to human rights, economic development, and “a better, more just and more democratic
social order” (p. 253) and underlined that such progress would require “free, open and balanced
communication” (p. 254). The journalism profession was split over NWICO, with some
professional press and media associations supporting its proposals and others cautioning that
they might threaten freedom of expression. The U.S. and the U.K. (led by Ronald Reagan and
Margaret Thatcher at the time, respectively) saw NWICO, which had the Soviet Bloc’s support,
as antithetical to free enterprise and freedom of expression, and withdrew from UNESCO
altogether, along with Singapore (Milan, 2013). The U.S. especially saw any limitations on
information flows as attacks on neoliberal capitalism and a threat to business profits — in other
(hyperbolic) words, attacks on the American way of life itself (Milan, 2013; Powers & Jablonski,
2015).
NWICO was ultimately abandoned, and the 1980s and 1990s saw widespread
deregulation of the media and telecommunications sectors around the world, leading to both
vertical and horizontal consolidation, as well as the computerization of many aspects of society,
including business and the financial sector (Hanson, 2008; Milan, 2013). The free flow doctrine,
free-market capitalism and liberal democracy seemed to have won the day (Fukuyama, 1992).
(See chapter 3 for a more detailed account of the NWICO debate).
The end of history, the beginning of internet freedom (1992-2000)
Francis Fukuyama's 1992 book declaring "The End of History" captured the zeitgeist of
the Clinton administration, at least where foreign policy and international communication policy
were concerned. The Cold War was over, and a wave of democratization was sweeping the
world. The view from Washington was that capitalist free-market economics, electoral
democracy, and the free flow of information were the logical culmination of human progress
(Fukuyama, 1992).
In 1994, President Clinton signed the International Broadcasting Act, which created the
International Broadcasting Bureau (IBB) and a new Broadcasting Board of Governors (BBG) to
oversee “all international non-military broadcasting” (Broadcasting Board of Governors, n.d.).
The BBG was to “supervise all broadcasting activities and provide strategic management for the
agency" and "serv[e] as a firewall between U.S. government policymakers and the journalists."
The 1998 Foreign Affairs Reform and Restructuring Act abolished the U.S. Information Agency
(the IBB’s umbrella agency), establishing the BBG as an independent federal agency. The Act
gave the BBG authority over Voice of America, Radio Martí and TV Martí, as well as Radio Free
Europe/Radio Liberty, Radio Free Asia, and the Middle East Broadcasting Network, which are
structured as non-profit “grantee” organizations (Broadcasting Board of Governors, n.d.).
As Madeline Carr (2013, 2016) notes, the Clinton administration privileged the interests
of the business community as it considered how to regulate and structure the nascent internet,
which it saw primarily as a driver of future economic growth. But the administration was also
concerned with protecting civil rights. Online privacy and the civilian use of strong encryption
were the biggest points of contention, as domestic freedom of expression and access to
information were protected by the First Amendment. The internet quickly expanded beyond the
U.S., and with it the American values that were “baked into” the network’s design. Though the
so-called “Crypto Wars” pitted proponents of strong encryption against the interests of law
enforcement and intelligence, privacy prevailed (Levy, 2001). By the end of the decade, the
values that were starting to be called “Internet Freedom” had been incorporated into U.S. foreign
policy. The 1999 National Security Strategy framed ICTs “as key to mitigating human rights
abuses and promoting the free flow of information,” while the following year’s plan for
“Defending America’s Cyberspace” similarly emphasized the importance of safeguarding civil
rights and liberties (such as privacy and freedom of expression) while acknowledging the frictions
with cybersecurity (Clinton, 1999, 2000, cited in Carr, 2013, p. 624-625). Perhaps influenced by
the cypherpunks’ libertarian rhetoric, and certainly driven by the internet’s potential for
economic growth, the Clinton administration focused on how governments could use the internet to
cause harm, ignoring the dangers of the surveillance capitalism business model (Zuboff, 2015)
that was slowly taking shape.
Explicitly linking Internet Freedom to U.S. foreign policy was a double-edged sword. On
one hand, this provided a framework to guide policymakers at every level of the policy apparatus
to take human rights into account, and gave U.S. leadership the moral authority to argue for
internet freedom on the world stage. But on the other hand, linking online privacy and freedom
of expression to American foreign policy — the ultimate aim of which is the preservation and
expansion of U.S. power in the international system — may harden certain nation-states’
opposition to Internet Freedom (Carr, 2013). To echo Madeline Carr, in my fieldwork, I indeed
found that for many activists “the power component of Internet Freedom [overshadowed] the
human rights agenda” (p. 623), to the point that a subset of the digital rights movement sees
accepting U.S. Internet Freedom funding as ethically corrupt, or at least questionable (Ben
Gharbia, 2010; York, 2015).
The Global Internet Freedom Task Force (2001-2008)
History did not end during the Clinton administration, of course (Fukuyama 1992, 2014),
nor did the normative debate about whether global information policy should privilege the free
flow of information, information sovereignty, or something else. The rhetoric paralleled earlier
disputes surrounding broadcasting via radio and satellite, but regulation of a different technology
was at stake: the internet. And unlike broadcasting, the internet allowed for bidirectional
communication, further complicating matters.
Starting in the early 2000s, internet censorship increased significantly in China,
imperiling the free flow of information and affecting American companies as well as local
human rights defenders. As detailed in chapter 1, various countries pushed back against the
internet’s seeming erasure of national borders through technical, regulatory and extra-legal
means. These efforts were met by increasingly sophisticated censorship circumvention tools,
though most were difficult to use without significant technical expertise. As previously
discussed, the values of Internet Freedom were rhetorically linked to American values of privacy,
free expression, and free enterprise. A bipartisan consensus was emerging in Congress that
something had to be done.
In 2002, Rep. Christopher Cox (R-CA) introduced the Global Internet Freedom Act
(GIFA), which would have required the government to “denounce governments that restrict,
censor, ban, and block access to information on the Internet” and to “deploy technologies aimed
at defeating state-directed Internet censorship and the persecution of those who use the
Internet” (HR.5524 2003, 7-8, cited in Carr, 2013, p. 625). It failed to advance out of committee,
and Cox re-introduced it the following year, to no avail.
Around the same time, American companies that had survived the bursting of the late
1990s tech bubble were confronting the challenges associated with operating in countries with
different norms concerning freedom of expression and privacy. Yahoo came under particular
scrutiny in 2006 after it was revealed that the company had turned over the contents of
several pro-democracy dissidents’ email accounts to the Chinese authorities, resulting in lengthy
prison sentences (Schonfeld, 2006). In a February 15, 2006 hearing of the House Subcommittee
on Africa, Global Human Rights and International Operations, Rep. Chris Smith (R-NJ) called
tech companies’ cooperation with the Chinese government “sickening,” arguing that turning over
the contents of email accounts amounted to “decapitating the voice of the dissidents" (Zeller,
2006, para. 1). Rep. Tom Lantos (D-CA) likewise condemned the companies’ leaders, asking
how they could sleep at night (Zeller, 2006, para. 6).
Smith and Lantos were both longtime advocates for stronger U.S. action to defend and
promote human rights. Notably, Lantos, a Hungarian-American who was the only Holocaust
survivor to serve in the U.S. Congress, had founded the Congressional Human Rights Caucus in
1983.[30] Smith introduced the Global Online Freedom Act (GOFA) in 2006, and again in 2007,
2009, 2011, and 2013.[31] While specific language varied between versions of the bill, GOFA's
successive iterations focused on three themes: including internet restrictions in the State
Department’s human rights reporting; requiring corporate transparency from ICT companies
operating in countries where internet rights are restricted; and export controls restricting the
transfer of technologies that “create or operate systems used to monitor, track, and target citizens
for killing, torture, or other grave abuses” (Brown, 2013, p. 4-5). The GOFA was never passed,
but two of its main goals — human rights reporting and pressure on ICT companies to increase
transparency — would eventually be fulfilled by civil society organizations supported by Internet
Freedom grants.[32] As for export controls on so-called "dual-use technologies," the issue proved
to be so vexing that it is still an object of contention (Moussouris, 2017).
[30] After his death, it was renamed the Tom Lantos Human Rights Commission in his honor.
[31] Lantos co-sponsored the bill in 2006 and 2007; he died in 2008.
[32] Freedom House's Freedom on the Net Report and Ranking Digital Rights' Corporate
Accountability Index, respectively.
In parallel to the GOFA efforts in Congress, Secretary of State Condoleezza Rice created
a working group within the State Department “to address challenges around the globe to freedom
of expression and the free flow of information on the Internet.” Called the Global Internet
Freedom Task Force (GIFT),[33] the working group was jointly led by DRL's Office of
International Labor and Corporate Social Responsibility and the Bureau of Economic, Energy,
and Business Affairs. The GIFT’s mandate was to “maximize freedom of expression and the free
flow of information and ideas; minimize the success of repressive regimes in censoring and
silencing legitimate debate; and promote access to information and ideas over the Internet.”
Secretary Rice referred to these three pillars as “Internet Freedom” (U.S. Department of State,
2006a). Importantly, Rice’s conception of Internet Freedom did not include freedom from mass
surveillance. Whether this omission was related to the Bush administration’s development of
Total Information Awareness mass surveillance programs within the NSA is an open question.
[33] This was an unfortunate acronym, as it implied that internet access was a "gift" bestowed
by a benevolent America.
The GIFT strategy published in December 2006 delineated three strategic priorities. First,
the State Department would monitor internet freedom around the world, by spotlighting abuses
of freedom of expression and the free flow of information online in the annual human rights
report and by directing embassies to “increase interim reporting of incidents related to Internet
freedom” so as to allow the U.S. government to respond in a timely fashion. Second, the State
Department would respond to challenges to Internet Freedom in bilateral, multilateral, and
multistakeholder venues, both proactively and reactively when abuses occurred. And third, it
would advance Internet Freedom by tackling the global digital divide through both U.S.
government aid programs and public-private partnerships. The Department would support “the
provision of unfiltered information to people living under conditions of censorship” and fund
“innovative proposals and cutting-edge approaches to combat Internet censorship in countries
seeking to restrict basic human rights, including freedom of expression” through an announced
$500,000 grant program (U.S. Department of State, 2006b). The latter strategic priority is the
focus of this dissertation.
Meanwhile, a coalition of actors, led by the Global Internet Freedom Consortium’s
(GIFC) Michael Horowitz, started lobbying Congress to fund censorship circumvention
technology. This coalition notably included the “human rights in China” and international
religious freedom communities, and enjoyed support from the business community. Horowitz
focused his advocacy on garnering support for two tools called Freegate and UltraSurf, which
had been specifically designed for the needs of the Falun Gong in China but were also used
elsewhere. He and his supporters argued that the needs of Chinese internet users would be best
met by a single tool, while others maintained the opposite: that a diverse set of independent tools
would be more likely to defeat the Great Firewall that the Chinese government was building, and
to meet the needs of internet users outside of China.[34]
[34] Interview, March 10, 2017, Valencia, Spain.
While Horowitz and the GIFC claimed that DRL’s reluctance to exclusively fund
Freegate and Ultrasurf was grounded in a desire to appease the Chinese government, which has
no love for the Falun Gong, the reality was that the tools allowed GIFC to see users' internet
traffic and communications — a major privacy violation — and that the tools’ codebases had not
been independently evaluated (MacKinnon, 2010). Moreover, Freegate and UltraSurf’s
developers were keenly focused on the needs of Falun Gong practitioners in China, and
prioritized the tools’ ability to circumvent the Great Firewall above the needs of users in other
environments. In the end, Congress wrote broader language into the 2008 Appropriations Bill
than Horowitz had wanted, allocating $30 million to the State Department, which in turn
allocated it to DRL.
The appropriations language directs DRL to focus its Internet Freedom programs on four
“pillars”: technology, digital safety, policy advocacy, and research and analysis. The technology
portfolio primarily includes censorship circumvention tools and secure communications
platforms like Tor. About half of DRL's Internet Freedom grant
expenditures in any given year go to digital rights tools. Importantly, DRL doesn't publicly
identify its grantees or publish information about them, though grant recipients are free to disclose that
information if they choose to (the Tor Project and Lantern both disclose their DRL grants
publicly, as does Ranking Digital Rights). This policy protects the vulnerable populations
receiving grants or benefitting from the activities supported by the grants, such as human rights
defenders in countries where perceived association with the U.S. government would endanger
them.[35]
[35] Interview, March 10, 2017, Valencia, Spain.
This history is significant because the broad appropriations language allowed DRL’s
program officers to use their expertise and best judgment in allocating the funds. Among DRL’s
first grantees were two civil society organizations: Freedom House’s Freedom on the Net Report,
and Internews . The annual Freedom on the Net report “features a ranked, country-by-country
36
assessment of online freedom, a global overview of the latest developments, as well as in depth
country reports” (Freedom House, n.d.). Freedom on the Net data is used beyond Freedom House
itself:
The key trends and emerging threats highlighted in reports are then used in national and
international advocacy campaigns by Freedom House. Our findings are also used by
activists worldwide in working for local change, by international development agencies
in designing programs and determining aid recipients, by tech companies for business
intelligence and risk assessment, by journalists who cover internet rights, and by scholars
and experts (Freedom House, n.d.).
[36] Interview, March 10, 2017, Valencia, Spain.
Importantly, the Freedom on the Net report is produced by a global network of
researchers from local civil society, who not only provide first-hand local expertise but also
benefit from the training provided by Freedom House. Freedom on the Net is thus not only a
critical resource but also a capacity-building exercise that helps build successive generations of
“internet freedom defenders” (Freedom House, n.d.).
Internews is a U.S.-based media development organization with a rich history of
facilitating cross-cultural communication (notably between the U.S. and what was then the
USSR) and of supporting independent media organizations around the world. At the time of the
2008 DRL grant, it was already engaged in online media projects, having launched its Global
Internet Policy Initiative in 2001 (Internews, n.d.). The organization decided to sub-grant much
of the DRL money, supporting several programs that are widely used today (including the Tor
Project), digital security programs, and a number of research initiatives.[37]
[37] Interview, March 10, 2017, Valencia, Spain.
These choices would shape how the U.S. government and other funders bounded
“Internet Freedom,” resulting in a more holistic approach across the entire sector rather than
limiting internet freedom to censorship circumvention exclusively, as Horowitz and his acolytes
had wanted. Over the years, DRL in particular sought to balance funding tools and projects that
met immediate needs (i.e., censorship circumvention software and emergency response
programs) with long-term efforts intended to “prevent the conditions that lead to emergency
situations" (i.e., long-term research, policy and advocacy initiatives).[38]
[38] Interview, March 14, 2018. Washington, DC.
Horowitz would continue to lobby Congress to earmark the funding for the GIFC’s
development of Freegate and Ultrasurf rather than directing it to DRL, requiring DRL officials to
justify the existing holistic approach by delivering private Hill briefings to key members of
Congress and proposing draft appropriations language. This set a precedent for close cooperation
between the DRL Internet Freedom Office and Congress, and taught program officers the
intricacies of managing relationships with Capitol Hill. As for the GIFC, it eventually received a
$1.5 million grant from DRL in 2010 (Pomfret, 2010; Smith-Spark, 2010), but would never play
the central role in the Internet Freedom world that Horowitz hoped it would.
Internet Freedom’s Heyday (2008-2012)
While the foundations for Internet Freedom policy and programming were established
during the administrations of Bill Clinton and George W. Bush, respectively, the Internet
Freedom agenda is frequently identified with Hillary Clinton’s tenure at the State Department.
Indeed, Secretary Clinton gave two major speeches on the subject (at the Newseum in 2010, and
at The George Washington University in 2011). But the impact of Clinton’s tenure in Foggy
Bottom was to make Internet Freedom more of a priority on the policy side, and she didn’t
engage much with grant-making.[39] The four congressionally mandated "pillars" remained, and
subject-matter experts in the Internet Freedom program office had a fair amount of latitude to
direct their implementation.
[39] Interview, March 10, 2017, Valencia, Spain.
Like the Clinton and Bush administrations, the Obama administration integrated Internet
Freedom into major policy documents. The 2011 International Strategy for Cyberspace
committed the U.S. government to “help secure fundamental freedoms as well as privacy in
cyberspace” by “support[ing] civil society actors in achieving reliable, secure, and safe platforms
for freedoms of expression and association;” “collaborat[ing] with civil society and
nongovernment organizations to establish safeguards protecting their Internet activity from
unlawful digital intrusions;" "encourag[ing] international cooperation for effective commercial
data privacy protections;” and “ensur[ing] the end-to-end interoperability of an Internet
accessible to all” (Obama, 2011, p. 23-24). Internet Freedom was likewise “a central pillar of the
USA's 21st Century Statecraft foreign policy doctrine" (Carr, 2013, p. 621), even though the
Obama administration recognized that there were frictions between Internet Freedom and other
dimensions of cybersecurity. Online anonymity was seen as especially problematic, as it made
attribution of cyberattacks and other unlawful online behavior more difficult. But as Madeline
Carr points out, this is “both a problem for state security and a safeguard for global civil society”
(p. 622). The Obama White House and the Clinton State Department were particularly concerned
with supporting the ability of political dissidents and human rights defenders to access and
publish information and to communicate securely. This required coordination between multiple
agencies across the federal government, and the level at which these tensions were addressed
rose over the course of Obama's presidency.[40]
[40] Interview with Ambassador Daniel Sepulveda, U.S. Coordinator for International
Communications and Information Policy (2013-2017), March 16, 2015. Washington, DC.
This is also the period during which Internet Freedom programming became
institutionalized within the State Department and other federal agencies. Starting in FY 2008 (the
final year of the Bush administration), the federal budget began requiring the State Department’s
Bureau of Democracy, Human Rights and Labor (DRL) to fund the development of digital rights
tools. DRL’s staff was “excited” to do so, but the decision came from Congress, as discussed in
the previous section. DRL’s focus on global human rights is unique in the State Department, as
most of the bureaucracy is organized around regional or thematic portfolios (women, children,
refugees…).
While the general thrust of Internet Freedom programming is mandated by Congress,
subject-matter experts within the program office select which activities to fund from a pool of
proposals submitted by civil society organizations. These subject-matter experts typically have
deep connections to civil society and to the issues faced by frontline human rights defenders. As
a result, the program office was able to develop funding priorities based on congressional
guidance and the input it received from civil society. Many of these priorities were eventually
written into policy guidance. For example, one former program officer described how she and
her colleagues identified the necessity to focus on the needs of marginalized and at-risk
populations when funding technology development. They began incorporating language about
usability, localization, and testing tools with diverse user populations into DRL solicitations
(calls for proposals, essentially) and coaching grantees to think about structural inequality in
their work. The needs of marginalized and at-risk populations became an Internet Freedom
policy priority a few years later.[41]
[41] Interview, March 14, 2018. Washington, DC.
Creation of the Open Technology Fund (2012)
One major structural change to the implementation of the Internet Freedom agenda was
the 2012 creation of the Open Technology Fund, a subsidiary of Radio Free Asia (RFA).
Structured as a 501(c)(3) nonprofit organization, RFA is a “grantee” organization of the
Broadcasting Board of Governors (BBG), tasked with conducting public diplomacy outreach to
East Asia via broadcasting and the internet.
During the FY2012 budget formulation process, the BBG had convinced Congress to
reallocate some of the State Department’s Internet Freedom budget to the BBG, arguing that the
State Department wasn’t doing a good job of spending it. However, a number of civil society
groups argued that the BBG wasn’t up to the task either. Congress was committed to funding
grants for technology development that would further the cause of internet freedom — but which
agency should it task with doing so? DRL didn’t seem capable of managing the full
congressional appropriation, and there were doubts that the BBG itself could pick up the slack.
The next option was to allocate the funds directly to one of the BBG’s intermediaries, and Radio
Free Asia’s director Libby Liu volunteered to take on the task, going on “a month-long quest” to
find people who would help her spend the money. She convened a number of meetings, one of
which was attended by Dan Meredith, who was then a technologist at New America’s Open
Technology Institute. As he tells the story, Meredith drafted an architecture plan to spend the
money really quickly, figuring that RFA would fail to expend the funds and that he would then
publish his draft plan and “shame them for not doing anything.” But within five minutes, Liu
realized that Meredith actually cared: it wasn’t about power, money, or a job for him. An hour
later, Meredith found himself on the phone with RFA’s director of human resources negotiating a
job description and salary.
At first glance, the idea that Dan Meredith manages millions of dollars in U.S.
government funding borders on the absurd, and I didn’t quite know what to make of him when I
first met him in 2015. As Yasha Levine notes, Meredith is a far cry from “a typical stuffy State
Department suit”: his “scruffy beard and messy blonde surfer hair” suggest that he’d be more at
home at a music festival than in Foggy Bottom (2018, p. 255). His personal aesthetic tends to the
hipster surfer bro, he favors puka shell necklaces, and he has been known to address colleagues
as “brother” and “sister.”[42] Meredith’s self-presentation couldn’t be further from my
preconceptions of a USG funder. A self-taught computer geek with a high school education,
Meredith had been involved with Indymedia in the 2000s, “faked his way” into technology
consulting for “disruptive, civil society stuff,” and moved to DC as part of the wave of
technically savvy idealists who flocked to the capital on Barack Obama’s coattails in 2009. The OTF,
and its director, were a far cry from the “official Washington” that many of my State Department
interlocutors embodied.

42. He also rode a fixed-gear bicycle, a cultural signifier of hipsterdom, to our interview in DC.
Meredith had been the second full-time employee at New America’s Open Technology
Institute, which he left in 2011 to work for al-Jazeera, where he helped cover the Arab Spring.
His responsibilities there entailed identifying tools and technology to improve the organization’s
secure communications toolkit and to assist with the investigations, notably building a
whistleblowing platform intended to compete with WikiLeaks. Al-Jazeera and other journalists
relied heavily on the open source apps TextSecure and RedPhone to communicate securely, but
the company behind these tools, Moxie Marlinspike’s Whisper Systems, was acquired by Twitter
and stopped maintaining the tools.[43] Meredith told me that he could think of dozens of stories of
sources and colleagues “going from not having physical surveillance to being tortured and
murdered,” which he traces to the disappearance of the secure communication tools they relied
on (see chapter 7). The issue of secure online communications became very personal for him,
much in the same way that it had in the 2000s, when the FBI had raided some of the server
platforms he managed for Indymedia. Meredith returned to the U.S. in late 2011 to care for his
mother, who was ill, and decided that he wanted to find a way to support digital rights
technologies, and to use his experience in the field to “get groups like OTI, Internews, etc, as
well the funders, to start doing it right.”[44] Before he knew it, Libby Liu had hired him to direct
Radio Free Asia’s brand-new Open Technology Fund.
Unsurprisingly, many in the global activist community were wary of this “six-foot white
guy from America spending a lot of money,” and Meredith had to spend a lot of time convincing
people that he wasn’t a spy. His approach was to “turn that question around,” by asking what
people would need to see from funders, especially ones affiliated with the U.S. government, to
have it not matter if they were, in fact, spies. The community’s response was that they needed
transparency: they wanted to be able to verify that “all the numbers add up,” to see OTF’s
funding history (both inflows and outflows), and to understand what role various U.S.
government actors play in OTF’s decision-making. While he conceded that OTF “needs to do
better to make the information more accessible,” it is true that the OTF website contains a wealth
of information about these topics.[45] Meredith and his colleagues have repeatedly expressed their
support for my research and enthusiastically participated in interviews, and I got the sense that
they saw participating in my research as a way of being even more transparent.

43. These events are discussed in greater detail in chapter 7, the Signal case study.

44. Interview, April 19, 2017. Washington, DC.
The Kerry State Department and the Snowden Crisis: Hypocrisy Exposed? (2013-2016)
In June 2013, the Guardian and the Washington Post began publishing a series of articles
alleging that the U.S. National Security Agency (NSA) had been running a top-secret system of
global telecommunications surveillance since shortly after 9/11, in partnership with other
intelligence agencies (Greenwald, 2014; Schneier, 2015). The steady drip of reporting from
Snowden’s leaked documents would continue for years, and several interviewees (who were
“read in” to the programs after 2013) have told me that there is still more to come.[46] The
Snowden revelations, and their aftermath, have been extensively discussed and analyzed
elsewhere, so I will focus here on their impact on the implementation of the Internet Freedom
agenda. In short: very little.
The Snowden revelations’ main impact on Internet Freedom was to complicate senior
leadership’s messaging around privacy and surveillance. For the most part, the Obama
administration avoided addressing the tension between NSA mass surveillance programs and the
Internet Freedom agenda while continuing to fund digital rights tools and other programming.
45. Interview, April 19, 2017. Washington, DC.

46. Interview, March 17, 2015. Washington, DC.
Behind the scenes, DRL and OTF officials were quietly reaching out to grantees and other civil
society actors to address concerns and maintain trust relationships. These officials had been just
as surprised to learn about the NSA programs as the general public had, and were keenly attuned
to the contradiction between the Internet Freedom office’s mission and mass surveillance:
[My colleagues and I] had no idea that these programs were going on and we were
completely appalled by it. But, a key characteristic of working on Internet Freedom
within the U.S. government is holding these tensions really tight all the time. There are a
lot of strange bedfellows, we’re funding some of the biggest anarchists in the world while
representing the U.S. government at international meetings, and doing a lot of damage
control in the background.[47]
More broadly, the Snowden revelations were the final nails in the coffin of the “liberation
technology” discourse (Diamond, 2010). One activist I interviewed defined the concept in harsh
terms: “A really badly and overused term from a bygone era, pre-Snowden, before the Iranian
Green Movement and the Arab Spring. Snowden marks the end of that era. The NSA revelations
really clearly outlined the dangers and hypocrisy of tech companies and governments who were
hailing technology as a liberating force.”[48]

47. Interview, March 14, 2018.

48. Phone interview, November 22, 2017.
As I explained in Chapter 1 of this study, liberation technology is “any form of
information and communication technology that can expand political, social, and economic
freedom” (Diamond, 2010, p. 70). Though Diamond’s article is more nuanced than is usually
acknowledged, the phrase has become associated with an overly optimistic and uncritical view of
the internet that is imbued with technological determinism. But this wasn’t always the case. Dan
Meredith, the director of the Open Technology Fund, explained:
When I first came up [in the late 1990s/early 2000s], “liberation technology” felt cool,
non-Western, and more academically grounded. What I appreciate is that it’s broader, it
encompasses a lot of technologies that don’t have a single use or single origin. My
problem with phrases like “internet freedom technology, human rights technology, etc”, is
that our goal is to decouple the “glitterness” of technology. These technologies that have
been heralded as glittery should be made more boring, more integrated into everything
everyone does, including activists.[49]

49. Interview, April 19, 2017. Washington, DC.
In other words, Meredith is calling for digital rights technologies to be reframed as
infrastructure, to be made boring and apolitical — or rather, to subsume their political potency
under the mundanity of daily life (there is, of course, a rich literature examining how
infrastructure is, in fact, deeply political and worthy of study precisely because it is boring and
often overlooked — see Star, 1999, for example). As my colleague Sarah Myers West argues in
her dissertation (2018), encryption became infrastructure for the internet economy in the
1980s-90s. This increased online security enabled the rise of e-commerce, e-government,
globalized financial markets, and transnational social movements. Meredith wants to see strong
end-to-end encryption (i.e. inaccessible to any third parties, notably governments and
corporations), anonymity, and censorship circumvention similarly “baked into” internet
infrastructure.
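
Meredith’s point is easy to see in code. In the following minimal illustration (my own sketch, using only the Python standard library; example.com is a placeholder host), the TLS handshake, certificate validation, and cipher negotiation that secure the request all happen invisibly inside the library. This is encryption as boring, taken-for-granted plumbing rather than as a glittery, special-purpose tool.

# A minimal illustration of encryption as infrastructure, not Psiphon or
# OTF code: fetching a page over HTTPS with the Python standard library.
# example.com is a placeholder host.
from urllib.request import urlopen

with urlopen("https://example.com/") as response:
    print(response.status)    # 200 if the encrypted fetch succeeded
    print(response.read(80))  # first bytes of the page, carried over TLS

The programmer never touches a cipher or a key; the security is simply there, which is precisely the fate Meredith wishes on anonymity and circumvention tools.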
As programs supporting the Internet Freedom agenda grew, so did the need to coordinate
(or at least de-conflict) their activities with other parts of the government. For example, before it
awards grants to projects with a specific geographic focus, DRL’s Internet Freedom team is
required to clear its plans with the relevant country bureau and ambassador (as is the case for any
country-specific activity funded or directed from Washington). Embassies are responsible for
coordinating bilateral relationships between Washington and foreign governments, and may
conclude that specific grants to civil society organizations jeopardize other elements of the
bilateral relationship. For example, foreign counterparts may take a dim view of U.S. support of
human rights defenders and/or political dissidents, and withhold their cooperation in the national
security arena. However, tensions between Internet Freedom funding of encryption technology
and the surveillance objectives of the national security, law enforcement, and intelligence
communities remain unresolved.
In response to mounting concerns about digital rights technologies’ supposed use by
criminals, DRL commissioned the RAND Corporation to evaluate its Internet Freedom portfolio
to determine “whether DRL involvement has increased the potential for illicit use” (RAND,
2015, p. x). In the case of Tor, the report concluded that “DRL funding has not made it more
likely for this tool to be used for illicit purposes” (Romanosky et al., 2015, p. 44). However, the
Onion Services functionality (then called Hidden Services) was deemed out of scope as DRL
does not specifically fund it.[50] As I discuss in chapter 6, U.S. law enforcement mainly objects to
Onion Services, as opposed to the privacy-enhancing Tor Browser.
In March 2015, I asked Daniel Sepulveda, then Deputy Assistant Secretary of State and
U.S. Coordinator for International Communications and Information Policy in the State
Department’s Bureau of Economic and Business Affairs, how conflicts between different
government agencies were resolved when their internet policy priorities were at odds with one
another. He told me that over time, these decisions have been made at increasingly high levels of
government, and that President Obama “has been pretty clear that he's conflicted about the trade-
off between digital rights and security.” After cautioning me that comprehensively addressing my
question would require access to classified information that I was not authorized to receive, he
told me that “a good working thesis would be that the level at which these questions are
addressed has risen over time in direct proportion to the number of people and critical resources
connected to the network,” and that he had been part of a number of vigorous debates involving
both the President and the Secretary of State, referring to Barack Obama and John Kerry. He
emphasized the importance of the State Department’s funding of the Internet Freedom agenda to
the national interest (“it’s totally about regime change”), volunteering that “if we were in a time
of budget wealth, we’d probably be spending more money on this.”[51] Several other interview
participants with first-hand knowledge of the interagency process with regard to internet policy
disagreed with his “regime change” claim while corroborating that interagency coordination was
taking place at increasingly higher levels, and that “if the pie were bigger, Internet Freedom
would be getting more pie — but not necessarily a larger share of the total.”[52] Here again,
empirically confirming this hypothesis would require access to classified information. At this
stage, I do not have any insight into how the Trump administration confronts these dynamics, if
at all.[53]

50. Onion Services (formerly known as Hidden Services) are Tor’s mechanism for anonymous publishing, which is only accessible through the Tor browser or by using the Firefox plug-in Tor2Web. See the Tor case study (Chapter 6) for more details.

51. Interview, March 16, 2015. Washington, DC.

52. Interview, March 10, 2017. Valencia, Spain.

53. If I had to venture a guess, my hypothesis would be that the Trump administration is not addressing these tensions in the least. That is probably for the best.
The Tillerson State Department (2017-2018)
The 2017 presidential transition was met with a great deal of trepidation within the
internet freedom/digital rights community. Neither Trump nor his first choice as Secretary of
State, former Exxon executive Rex Tillerson, had a reputation for defending human rights,
whether online or not. Trump in particular had made a number of comments over the course of
the campaign that led observers in the internet policy community to worry that his administration
would reverse years of progress in this area. This fear would be borne out by the nomination of
Ajit Pai to chair the Federal Communications Commission, and by the repeal of the FCC’s Open
Internet Order in late 2017, among other troubling developments.
The first quarter of 2017 was a time of great uncertainty in the Internet Freedom world.
Morale was at an all-time low across the State Department amid an announced plan to drastically
downsize its workforce, vacancies across the department’s most senior level of management, and
a dearth of policy guidance from either the White House or Tillerson’s office (Ioffe, 2017). Even
as some members of the digital rights community feared that officials working on Internet
Freedom and human rights might be “purged,” the reality was that “a whole lot of nothing” was
going on in Foggy Bottom. The Internet Freedom officials I spoke to at the time were anxious,
but emphasized that the Internet Freedom grant programs they administered were safe for the
time being: the funds for FY 2017 (which would be disbursed in FY 2018) were already
allocated, and could not be withdrawn at the White House’s whim. The Trump administration’s
first opportunity to gut the Internet Freedom programming budget would come in FY2018,
potentially affecting the grants that would be made in FY2019.[54]
But even that prospect did not seem to faze my interlocutors. They pointed out that the
Senate Appropriations Committee and Foreign Relations Committee were both very influential
in the budgeting process, and that the “line-up” on the majority Republican side hadn’t changed
in the 2016 election. Moreover, they had learned from colleagues who had served through
previous presidential transitions that human rights were important to lawmakers on both sides of
the aisle; the difference was in how they would “message” and “frame” budget requests and
policy initiatives.[55] For example, some Republican members of Congress are particularly
sensitive to the needs of Christian minorities in China and the Middle East, and are more likely
to support Internet Freedom when its connection to international religious freedom is
highlighted. Censorship circumvention, online anonymity, and secure messaging are potentially
critical (or at least useful) to any social movement, religious minority, or marginalized group.
Funding such tools could accurately be framed as benefitting whatever an individual lawmaker’s
pet cause was.

54. Interviews, March 10, 2017. Valencia, Spain.

55. Interviews, March 10, 2017. Valencia, Spain.
Tillerson was fired (via Twitter, no less) on March 13, 2018, without having had much
impact — if any — on the Internet Freedom agenda, whether on the policy or the programming
side. Time will tell what the rest of the Trump presidency will bring.
Conclusion
There are two important takeaways from this history lesson. The first is that the U.S.
government’s Internet Freedom agenda is rooted in a longstanding commitment to the free flow
of information and privacy, both domestically and globally, and to U.S. foreign policy interests
that are not specific to either of the major political parties and endure across administrations. All
the government officials (current and former) I interviewed underscored the bipartisan support
that Internet Freedom programming enjoys in Congress. While some of this support can be seen
as “people doing the right thing for the wrong reasons” (depending on one’s normative views),
the fact remains that there is remarkable bipartisan agreement about financially supporting
censorship circumvention, online anonymity, secure messaging, and training programs to help
human rights defenders, journalists, civil society organizations, political dissidents, and ordinary
people use these tools in their work and in their daily lives.
The Clinton administration, which oversaw the explosive growth of the internet in the
1990s, was keenly aware of the risks that ICTs could pose to civil rights and liberties, and sought
to enshrine “Internet Freedom” principles in key policy documents. The emergence and growth
of global information controls during the Bush Administration spurred Congress to initiate grant-
making programs to support civil society work on internet freedom, notably the development of
censorship circumvention technologies. Complex lobbying efforts by Michael Horowitz and the
Global Internet Freedom Consortium on one hand, and a broader coalition of human rights and
media policy groups on the other, resulted in a holistic approach to Internet Freedom funding that
endures to this day. Rather than narrowly focusing on censorship circumvention tools, Internet
Freedom programming encompasses both tools and projects that meet immediate needs (i.e.,
censorship circumvention software and emergency response programs) as well as long-term
efforts intended to “prevent the conditions that lead to emergency situations” (i.e., long-term
research, policy and advocacy initiatives).[56] Starting in 2012, the Open Technology Fund has
complemented the State Department’s grant-making — which has a $500,000 floor per grant and
comes with onerous reporting requirements — through smaller grants to individuals and to
newer or smaller organizations that lack the capacity to absorb State Department grants. Neither
changes in administration, nor the high-profile scandals highlighting the tensions between
Internet Freedom and other elements of U.S. foreign policy, have had any meaningful impact on
the implementation of Internet Freedom programs, even as they altered the messaging around
these programs. Eighteen months into the Trump administration, it seems clear that Internet
Freedom is well institutionalized, and that its congressional earmark protects it from an
administration that often appears hostile to its values and strategic goals. Internet Freedom
intersects with other aspects of American foreign and domestic policies in complicated and often
uncomfortable ways, but in my view that does not take away from the bona fides of this
motivation. Contrary to Yasha Levine’s unsubstantiated claim that “most people involved in
privacy activism do not know about the U.S. government’s ongoing efforts to weaponize the
privacy movement,” (2018, p. 259), I found that civil society activists are keenly aware that the
U.S. government is an important funder in their space (there is a range of views, however, on
whether it is positive, negative, or ambivalent).

56. Interview, March 14, 2018. Washington, DC.
The second key takeaway is that the design and implementation of Internet Freedom
programs offer extensive opportunities for civil society experts to provide input, and that these
experts shape Internet Freedom programs to a high degree. Both DRL and OTF fill staff
vacancies by hiring from civil society, and most of my interlocutors in government have
significant first-hand experience in the digital rights movement. As a result, they possess deep
knowledge of the issues, the organizations, the politics, and the practical realities on the
frontlines of the movement. This expertise shapes the calls for proposals that these agencies issue
periodically, ensuring that program officers will select grantees from a pool of proposals
designed to meet a concrete need. Moreover, proposed projects are conceived and designed by
civil society implementers who are the real experts on the needs of their communities. The
growing emphasis on usability, localization, structural inequality, and the needs of marginalized
and at-risk populations exemplifies this pattern: program officers with civil society expertise
consult with frontline activists to determine a broad set of priorities, which are then implemented
by civil society groups in ways that they themselves identify. Additionally, new grant officers
begin their new roles as funders with existing trust relationships with civil society, allowing them
to stay informed about the community’s needs despite no longer being on the frontlines
themselves. This is especially important when the tensions between Internet Freedom and other
aspects of U.S. foreign policy become conflagrations, as happened around WikiLeaks, the
Snowden revelations, and the Trump election. Many digital rights activists (including recipients
of Internet Freedom funding) told me that they valued being able to interface with a government
representative who came from the community, knows and shares its values, and established his
or her trustworthiness prior to becoming a funder.
This dynamic is not unique to the United States. Aaron Shaw (2011) coined the term
insurgent experts to describe the Brazilian civil society actors who moved into government roles
in order to push the bureaucracy to make greater use of Free/Open Source software. Daniel
O’Maley’s (2015) ethnography of internet freedom activists in Brazil similarly found that civil
society actors had taken up key positions in relevant federal agencies, allowing them to push
forward their agenda more effectively. These insurgent experts “take on the role of a ‘double
agent’ within the state” (Shaw, 2011, p. 256) using the machinery of the bureaucracy to advance
preexisting values and agendas. This dynamic is replicated by the myriad public servants who
sincerely believe in their agency’s mission: Environmental Protection Agency staff (typically)
care about protecting the environment, and teachers are overwhelmingly committed to public
education. Barack Obama appointed several prominent human rights activists to lead DRL (as of
August 2018, Trump had not appointed anyone, and the bureau was headed by an Acting
Assistant Secretary). The influence of “insurgent experts” on the Internet Freedom agenda’s
implementation is nevertheless significant because so many actors assume that this is not the
case, and that Internet Freedom funding is at best window-dressing for policies intended to
extend American geopolitical and economic empire, and at worst a nefarious plot to overthrow
geopolitical adversaries, flood the world with purported privacy-enhancing technologies that
actually feature secret NSA backdoors, and advance the financial interests of Silicon Valley
corporations at the expense of human rights.
This is not to say that observers who point out the tensions between Internet Freedom and
government programs that are antithetical to digital rights, such as the NSA’s mass surveillance
operations, are wrong to do so. To the contrary, these tensions are emblematic of long-running
contradictions in American governance. One interviewee pointed to what he called the
Madisonian and Trumanite traditions in American government, opposing constitutional idealism
to the national security network of the deep state (Glennon, 2015).[57] Regardless of one’s stance
on Glennon’s argument, it is beyond dispute that different federal agencies have different
mandates, which they pursue to the best of their abilities. The same is true of sub-agencies and
bureaus within Cabinet-level departments. Conflicts between bureaucratic units are resolved
through the interagency process, rising to the political level or to the National Security Council
as needed. Such tensions should thus be seen as checks and balances at work within the
executive branch — American democracy working as it was intended to do. The interviewee,
who asked to be quoted as “an anonymous government employee” and has since left government
service, told me that by backing internet freedom globally, he hoped that he was shaming other
branches of government into respecting freedoms at home.[58]
57. This conversation took place long before the Trump administration started railing against a so-called “deep state” comprising “Obama holdovers” and career bureaucrats.

58. Interview, March 2, 2015. Valencia, Spain.
About the case studies
The past fifteen years have seen the internet emerge as a site of struggle between
governments, corporations, social movements, and individual users as various institutional actors
try to assert control over the new means of communication, efforts that face resistance from all
quarters. Governments and corporations in particular seek to impose strategic architectures
(Price, 2015) that reify their normative views of the world by privileging certain values or
objectives over others. Contrast, for example, the U.S. government’s emphasis on the free flow
of information and opening up foreign markets to American companies (Powers & Jablonski,
2015) to the focus on religious orthodoxy put forward by Iran’s leaders, or to Beijing’s emphasis
on social harmony and the “China Dream.” Meanwhile, the 2018 implementation of the General
Data Protection Regulation (GDPR) in the European Union is bound to have sweeping
implications for the global practice of data capitalism (West, 2017) — or at least, that is the
ardent hope of the many critics of that economic arrangement.
This study’s main contribution to the study of resistance(s) to prevailing internet
architectures concerns the development of digital rights tools, defined as technology that helps
users enjoy their rights to privacy, freedom of expression and access to information online. This
notably includes censorship circumvention software and secure messaging applications.
In designing my study, I was particularly motivated to amplify the voices of the people I
encountered through my work in the digital rights space and whose experiences and worldviews
diverged from what I read in the news or in the scholarly literature. There are three broad
categories of actors in the international system: nation-states, private companies, and civil
society. Actors within each category run the gamut from massive to minuscule, from powerful to
powerless, from famous to infamous to obscure. It seems to me that the two scholarly fields that
my work straddles — communication and international relations — focus on the first two actor
categories and do not sufficiently consider the role that civil society, social movements, nonprofit
organizations and individual activists play in the ongoing (re)negotiation of the norms that
govern communication and information flows. As I designed this study, I was already embedded
in the global movement for digital rights through my work with Ranking Digital Rights, and
knew that I was in a unique position to make a scholarly contribution that acknowledged the role
played by global civil society in the politics and geopolitics of information and questioned
prevailing assumptions.
Four specific tools were selected as case studies: Psiphon, Tor, Signal (including third-
party implementations of the Signal protocol), and Telegram. As discussed in chapter 2, the cases
were selected based on a combination of factors, including each project’s connection to the U.S.
Internet Freedom agenda, its legal and organizational infrastructure, the tool’s user base, its
implication in public controversies about internet regulation, and researcher access. While this
study cannot claim to represent the full spectrum of digital rights tool development, the diversity
of selected tools provides a nuanced view into the contemporary digital rights movement. Each
of the four following chapters traces the institutional history of the tool, examines the factors that
spurred the tool’s creation and evolution, considers the tool’s affordances and use cases, and
analyzes the tool’s relationship to the emerging geopolitics of information. I have endeavored to
follow a coherent structure across all four chapters while highlighting salient points that are
unique to each case. The analysis covers the micro, meso and macro levels in turn, though much
like the famed “marble cake” of government, it can be difficult to fully separate out these layers.
Chapter 5 - Psiphon: Software for the soft war?
Introduction to the case study
In Chapter 1 of this dissertation, I traced the evolution of information controls, defined as
"techniques, practices, regulations or policies that strongly influence the availability of electronic
information for social, political, ethical, or economic ends” (Citizen Lab website, 2015). The first
generation of information controls, pioneered by China in the early 2000s and quickly adopted
by a number of other countries, focused on using technical means to control information flows
into a given country or territory. State-mandated censorship of the internet is growing, and can
occur at several layers of internet infrastructure (Deibert et al., 2008, 2010, 2011). While
information controls have grown in complexity and sophistication since then, technical filtering
remains a cornerstone of internet policy for many countries, much to the chagrin of many of
these countries’ citizens. But this is a crude way to restrict the free flow of information, and there
is a wide panoply of tools and techniques for bypassing this censorship. Censorship
circumvention tools help users access content that is blocked at the application layer, at
international gateways, or through keyword-based filtering, preserving users’ ability to express
themselves and access information online, with important implications for the geopolitics of
information. But who develops these digital rights tools, why, and under what conditions? How
do they see their role in facilitating global information flows, and thereby impacting international
politics?
The first case study examines censorship circumvention software by focusing on Psiphon,
a privately held Canadian corporation based in Toronto, whose internet proxy service had 25
million active users worldwide as of September 2017. Its user base skyrocketed during a two-
week period starting in late December 2017, coinciding with a wave of street protests in cities
around Iran that was accompanied by increased government censorship of the internet.
During the first week of January 2018, Psiphon saw a jump from a normal average of
35,000-40,000 downloads a day to 700,000 daily downloads. Meanwhile, the amount of data
transiting through Psiphon’s servers increased 500% (Ling, 2018). This raises a number of
questions about the connection between a small Canadian company and Iranian politics.
This chapter explores Psiphon’s involvement in the geopolitics of information through an
analysis of its business model, which links the company to the U.S.-led Internet Freedom agenda
and the public diplomacy activities of the U.S., U.K., and Germany. Then, it considers that
relationship from the point of view of the Iranian government, which considers Psiphon to be
part of a “cultural NATO” intent on undermining the Islamic Republic as well as Iranian values
and culture. (Though Psiphon’s staff was unwilling to share detailed information about their user
base, several interviewees told me that a significant portion of the tool’s users is located in Iran.)
While there are several other circumvention tools with institutional linkages to the
Internet Freedom agenda (notably Lantern, based in Los Angeles), focusing on a group based
outside of the United States complicates the group’s relationship to the U.S.-led Internet
Freedom agenda. It’s one thing for the U.S. government to fund domestic organizations as part of
its foreign policy objectives, but collaborating with a foreign company is qualitatively different.
Some critics of the Internet Freedom agenda point to the State Department’s support for
American companies in the guise of supporting internet freedom as evidence that the policy is
merely window dressing for U.S. economic policy, but that argument does not hold up in this
case. Nor can Psiphon’s employees’ motivations be linked to a sense of American patriotism: to
my knowledge, none of the company’s employees are U.S. citizens.
Psiphon is also fairly unique among digital rights organizations as a private company
whose developers and other staff work together in the same physical place — Toronto. In
contrast, many other projects have globally distributed workforces. Researcher access was
another key consideration. Psiphon was created at the University of Toronto’s Citizen Lab before
being spun out as a private company, and maintains a number of institutional relationships, both
formal and informal, within the digital rights space. As a result, I had encountered Psiphon
founder Ronald Deibert, CEO Karl Kathuria, and other Psiphon staff on a number of occasions,
notably at the Citizen Lab Summer Institute, which I attended in 2014, 2015, 2016 and 2018.[59]

59. The Citizen Lab is an interdisciplinary research center based at the Munk School of Global Affairs, University of Toronto, focusing on research and development at the intersection of information and communication technologies, human rights, and global security. Each summer, the Lab organizes a Summer Institute that gathers researchers from academia and civil society to collaborate on joint projects. See https://citizenlab.ca.
Chapter roadmap
The chapter proceeds as follows. First, I provide an overview of the research
methodology and data collection process, which are discussed in greater detail in Chapter 2.
Next, I briefly explain what Psiphon is, what it does, and how — background that is necessary
for understanding the case study. Then, I present an institutional history of Psiphon, from the first
uses of proxy servers for censorship circumvention, through Psiphon’s initial development at the
Citizen Lab, to today. That history shows that Psiphon emerged from a civil society context
(specifically the University of Toronto) and selected its legal structure and business model as an
experiment: at the time, comparable tools depended either on volunteer labor, grant funding, or
user subscription fees, and Psiphon’s founders wanted to find a business model that allowed
developers and other staff to be paid market wages while keeping the product free for most users.
Psiphon has been successful in that endeavor, but its business model is reliant on a mixture of
government money, including sponsorship agreements with news organizations associated with
American, British, and German public diplomacy. This raises questions about Psiphon’s agency
and autonomy, and surfaces important tensions in the geopolitics of information.
The next section of the chapter is organized according to the level of analysis, going from
the micro level of the organization’s business model and operations, to the meso level of
Psiphon’s relationship to the Internet Freedom agenda and Western public service broadcasters,
and to the macro level of geopolitics, focusing on the “soft war” of ideas and values as
understood by Iran’s government (many Psiphon users are located in Iran). The conclusion
contrasts the digital rights frame embraced by Psiphon and other digital rights projects (which
emphasizes individual communication rights) to the Westphalian paradigm of information
sovereignty under which censoring governments like Iran’s operate.
Data, sources and methods
This case study relies primarily on ethnographic semi-structured interviews with Psiphon
employees and others familiar with the project’s history (notably Ronald Deibert, of the Citizen
Lab). Content analysis of primary source documents and of media coverage supplemented the
60
interviews. Finally, I installed Psiphon on my iPhone to get a feel for the user experience in
keeping with the walkthrough method (Light et al., 2016),[61] though I didn’t travel to countries
that censor the internet while conducting this research. Consequently, I did not encounter the
sponsored landing pages from public diplomacy broadcasters that characterize the user
experience in censored environments.

60. The Internal Review Board (IRB) process and a discussion of informed consent can be found in Chapter 2.

61. The walkthrough method is “grounded in a combination of science and technology studies with cultural studies” (Light et al., 2016, p. 1), and “is a way of engaging directly with an app’s user interface to examine its technological mechanisms and embedded cultural references to understand how it guides users and shapes their experiences” (p. 2). The process invites the researcher to engage with the app in the same way that a new user would, from initial sign-up, to usage, to exiting from the app (including terminating one’s user account, if applicable).
Interviews for the Psiphon case study were conducted in two stages. First, I interviewed
two Psiphon employees in October 2016 at the Open Technology Fund Summit in Towson, MD,
which I attended as part of my Information Controls Fellowship. I traveled to Toronto in
September 2017 for the second wave, interviewing 11 current or former employees as well as
Citizen Lab Director Ronald Deibert, who was one of Psiphon’s founders. Psiphon also came up
in a number of interviews with funders and other individuals within the digital rights space, and
those conversations naturally inform the case study. In January 2018, I interviewed the BBC
World Service’s former Director of Global News, Richard Sambrook, over the phone, regarding
the genesis of Psiphon’s business model. And in June 2018, I workshopped this chapter at the
Citizen Lab Summer Institute, incorporating feedback from that session into the present version.
As with all interviews for this project, I took contemporaneous notes on my personal
laptop and gave participants the opportunity to review them and correct any transcription errors.
With the exception of Sambrook, Psiphon CEO Karl Kathuria, and Citizen Lab Director Ronald
Deibert, all interview participants requested anonymity. They are identified as “Psiphon
employee 1,” “Psiphon employee 2,” etc. Gaining access for this case study was very easy. After
our initial interview at the Open Technology Summit in 2016, Kathuria invited me to visit
Psiphon’s office in Toronto, organized a meeting to introduce me to the team, and encouraged
them to speak to me saying, “I’ve invited Nathalie here and I’ve been talking to her, but of
course it’s completely up to you.” I emphasized that participation was entirely voluntary. At least
one person present declined to be interviewed. Interview notes and primary documents were
uploaded to NVivo and hand-coded.
The basics: What Psiphon is, what it does, and how
As discussed earlier in this study, internet censorship is nearly as old as the web itself.
Censorship through technical blocking is a blunt instrument, yet one that governments continue
to use in spite of the more sophisticated tactics that have emerged (DeNardis, 2014; MacKinnon,
2012; Deibert et al., 2008, 2010, 2011; Tufekci, 2017). Censorship circumvention tools are
software applications that allow internet users to access the free and open internet in spite of
state, corporate or other restrictions on content access. Circumvention has become much more
technically difficult over the years, with developers and censors engaging in a “cat-and-mouse
game” (as many of my interview participants put it).
Psiphon Inc. is a Canadian company that provides internet circumvention proxy services
to users worldwide by allowing them to route their internet traffic through Psiphon’s servers,
thus avoiding the technical censorship imposed by governments or ISPs. Importantly, Psiphon
does not market itself as a privacy-enhancing tool, though it takes measures to protect user
privacy, notably by minimizing the data it collects on its users. As of early 2018, Psiphon is
available for Android (through either the Play Store or as a “side loaded” app), iOS devices, and
Windows PCs. Would-be users who face technical barriers to downloading Psiphon can request
to have the code sent via email.
Following the walkthrough method for studying apps described by Light et al. (2016), I
installed the Psiphon app on my personal iPhone. The user experience is straightforward. Users
can select from among 33 different languages thanks to translation services provided by the
Localization Lab, an American nonprofit that was spun out of the Open Technology Fund. Users
can also choose the country through which they want to route their traffic (19 countries are listed,
including India and Singapore, which are known to censor the internet at least some of the time),
or they can let the app route for best performance. The user is prominently invited to “Go
Premium Now!” in order to avoid advertising and enjoy higher connection speeds. Once the user
connects to Psiphon, all of the device’s connections are routed through Psiphon servers. The user
experience on other supported devices (Android and PC) is similar.
Project history
The code that eventually became Psiphon was first developed at the University of
Toronto’s Citizen Lab in 2004. At the time, the Citizen Lab was researching the evolution of
government-led information controls worldwide through the Open Net Initiative, a partnership
between the Citizen Lab, Harvard’s Berkman Center for Internet and Society (now the Berkman-
Klein Center), and the SecDev Group, a “political risk and analysis” consultancy based in
Ottawa. While governments were intensifying their attempts to censor web content and restrict
their citizens’ access to specific content through filtering and other techniques, there were very
few tools that allowed users to circumvent those restrictions. Those that existed required a good
deal of technical know-how, and did not meet the needs of most would-be users in censored
environments. Additionally, the internet and especially the World Wide Web were growing in
technical complexity, which was challenging for some of the earlier “client-less” (meaning that
the end-user doesn’t need to install a software program on her local machine) tools, such as CGI
Proxy. It was time for new technical approaches to censorship circumvention.
Citizen Lab’s researchers, led by Michelle Levesque and Nart Villeneuve, decided to
create a simple, easy to use version of existing peer-to-peer systems that would work on
machines running Windows operating systems. The team coded a straightforward application in
Python to set up peer-to-peer connections over SSL (Secure Sockets Layer — an encryption
protocol). Psiphon 1 (as it was later dubbed) relied on social networks of trust, allowing users in
censored environments to access the free and open internet by connecting to a friend’s computer
abroad. The name Psiphon, coined by Ronald Deibert, is a portmanteau that references the Python
programming language while evoking the act of siphoning gas out of a tank. Deibert submitted a
funding proposal to the Open Society Institute, covering several “CiviSec” (i.e. security for civil
society) projects, including Psiphon and CiviBlog, a guide to circumventing internet censorship.
The grant enabled the Citizen Lab to hire software manager Michael Hull and two
additional engineers, and the peer-to-peer Psiphon 1 was released on December 1, 2006,
enjoying widespread media coverage in Canada and beyond. In retrospect, Deibert believes that
some of the program’s technical elements “wouldn’t fly today, given what we know now about
security online,” and the team quickly realized that the peer-to-peer model wouldn’t scale, as it
required each would-be user in a censored environment to route her web traffic through a friend’s
machine — which in turn would need to be turned on and connected to the internet. Psiphon was
at a crossroads, facing three possible models for long-term development.
The first option, to continue developing Psiphon as a Citizen Lab project, was deemed
unsustainable in terms of funding, project management, and user support. Moreover, it simply
wasn’t the kind of work that Deibert and his team wanted to be doing.
The second option was the community development model that the newly incorporated
Tor Project had selected (see Chapter 6). In an email, Deibert explained his decision to look for
an alternative development model:
It was not that I rejected it. I LOVE THE OPEN SOURCE COMMUNITY. The
evaluation that was done at the time was to look at what exists in the circumvention
space, and see what’s out there, and what has not been tried and see if we can have a
different complementary approach.
Mike Hull was keen on the business side of Psiphon. He thought it might work. I had
spoken to Richard Sambrook at BBC who was also intrigued by a service that would be
willing to take contracts from companies to “push” content into restricted jurisdictions on
behalf of media. I thought “why not, let’s see if this works” — especially because Tor
was just starting out then, and that project was looking for support from the open source
community and serving a different audience. At that time, Tor was not usable for the
broadcasters who were looking for a service that could stream media in censored
jurisdictions. Seems to have been a good choice, as Psiphon works, brings in money from
the same clients, and gives quite a few people jobs. Tor has done very well too! Serves a
different set of audiences for different reasons.[62]

62. Email correspondence, November 2, 2017.
The third option — which was ultimately selected — entailed spinning Psiphon off as a
private company. Based on a chance encounter with the then-head of the BBC World Service,
Richard Sambrook, Deibert had the idea of asking international broadcasters to sponsor Psiphon
users in a certain country, which would allow those users to access the content that the
broadcaster had created for them. Users would first connect to a sponsored landing page before
continuing on to surf the web. With help of the University of Toronto’s Innovation Centre on the
legal aspects, Psiphon was incorporated in Ontario in 2008. By that time the technical
architecture had morphed from Psiphon 1 to Psiphon 2, which eschewed the peer-to-peer model
in favor of dedicated servers to handle user traffic and still used a client-less approach, without
requiring end users to install software. After initially serving as the new company’s VP of Policy
and Outreach, Deibert quickly moved to the background, focusing on his role as director of the
Citizen Lab. The current version of the software, dubbed Psiphon 3, requires installing a software
client and includes support for mobile devices. It is available for Windows PC, Android, and
iOS.[63]

63. Interview, September 22, 2017. Toronto.
Psiphon’s business model
This section focuses on Psiphon’s internal operations through an examination of the
company’s business model and an ethnography of its employees. Who develops anti-censorship
technology, and what motivates them to do so? What meaning do they derive from their work,
and what frustrates them about it? After describing Psiphon’s business model, which links the
company to several institutional actors implicated in the implementation of the Internet Freedom
agenda, the chapter argues that the Psiphon team is an example of what John Postill (2014) calls
“freedom technologists”: they are primarily motivated by a desire to use technical skills to
improve the lives of their users by facilitating greater freedom of expression and connection to
the global internet, and have sophisticated, pragmatic understandings of their own roles in the
global struggle for information access and control.
Psiphon is unique among my four case studies in that it is a privately-held, profit-seeking
corporation whose income streams combine paid user subscriptions, government grants,
sponsorship contracts with international broadcasting organizations, and advertising revenue. As
noted above, Psiphon was first created in a university context, and the business model was
selected as an experiment to see whether it was, in fact, possible to develop censorship
circumvention software in a financially sustainable way. While Psiphon’s staff were very
welcoming and forthcoming in other respects, they were not willing to provide much information
about the company’s finances, and I have not been able to find that information via other sources.
That being said, Psiphon seems to be financially successful. The company’s offices
occupy an elegant converted brownstone in a trendy Toronto neighborhood, its 20 or so highly
qualified employees seem satisfied with their working conditions and salaries, and CEO Karl
Kathuria told me that the company pays “a lot” of taxes. Psiphon’s success seems to validate Ron
Deibert’s experiment to see whether a censorship circumvention company can be profitable.[64]

64. Interview, September 21, 2017. Toronto.
This is all the more remarkable given that most Psiphon users don’t pay for the service.
The company derives its income from a combination of sponsorship from media organizations,
advertising revenue, grants, and premium memberships. Users who don’t want to see ads, or who
want higher speeds, can subscribe to Psiphon Premium for USD$2.99 per week, $9.99 per
month, or $99.99 per year. While Kathuria was unwilling to discuss the relative share of income
from various sources, he told me that advertising revenue was “a small share of what we do.”
A substantial portion of Psiphon’s revenue comes from partnership agreements with
international broadcasters whose content is blocked from reaching their intended audiences, such
as Voice of America, Radio Free Europe/Radio Liberty, Radio Free Asia, the British
Broadcasting Corporation’s World Service, and Germany’s Deutsche Welle. These publicly-
funded organizations are all part of their respective governments’ public diplomacy apparatus,
aiming to reach key foreign audiences, many of them in censored internet environments. For
example, BBC Persian sponsors landing pages for Psiphon users in Iran, showcasing BBC news
articles deemed relevant to Iranians when they first open their web browser. Users can navigate
to their desired page without engaging with the sponsor’s content, but the initial impression
serves to create an association between the BBC (for example) and free internet access, and to
increase the likelihood that BBC Persian content will reach its intended audience.[65]

65. Interview, October 18, 2016. Towson, MD.
Psiphon users outside of regions targeted by such sponsors are shown a Psiphon landing
page instead, which displays targeted advertising. While Psiphon itself collects as little user
information as possible, it doesn’t prevent the advertising networks controlled by Google or
Facebook from identifying users and serving them ads based on the profiles they maintain about
the user and his device. Psiphon essentially rents landing page real estate to the advertising
network, which in turn selects which ad to display based on who it thinks the user is. This
process of targeted advertising is completely external to Psiphon.
Asked to comment on the pros and cons of Psiphon’s business model, the employees I
interviewed pointed to the flexibility and security that comes from having a diversity of income
streams, even as they lamented the need to have a business model in the first place:
Modularity is one benefit for sure. If you’re trying to be a company that’s based on the
internet, you have to be able to adapt to changes to the internet writ large. If your
business model is based entirely on grants, the State Department or whoever could decide
it doesn’t want to fund you anymore, and you’re out to dry. Our business model reflects
the dynamic nature of this sector, where the money is flowing from, and lets us stay true
to what we want to do (Psiphon employee 1).[66]
As long as we’re not reliant on a single revenue stream we can balance things to be as
equitable as we can, that frees us up to say no to things we think will impact the user
negatively. On the other hand, it’s too bad we have to do it in the first place. It would be
great to be funded by some of the large internet companies since we allow users to access
their content, but we’ll be more robust as a company if we explore all the different
options. Having a stable source of income allows us to explore different models and find
what suits us best. With a tool like Psiphon it would be great if it was free to everyone
without UX [user experience] hassles, advertising, or whatever. If we had an angel funder
who just made all the money problems go away that would be great (Psiphon employee
2).[67]
66. Interview, September 21, 2017. Toronto.

67. Interview, September 21, 2017. Toronto.
I’m not a huge fan of advertising, but it seems like a necessary evil. We don’t do Ad IDs
or super cookies or anything like that (Psiphon employee 3).[68]
CEO Karl Kathuria described what he saw as the advantages of being a private, profit-
seeking company, as opposed to an NGO or a community-based project:
In a way we compete with people [other companies] that are entirely grant funded and
can do things for free, or commercial VPNs who can do sales. But we don’t actively
compete against them. It’s a mix between business-to-business and business-to-consumer.
The main advantage is that we’re treating the service as a business from the day to day,
that also means how you do things on the backend. Treating everything as business
transactions means that with suppliers, we’re always open and above board about what
we’re doing, we don’t ask for special favors or to get things for free because of the
greater good. We make sure to pay well, to keep a good core team working on the product
and working on the company.[69]
This focus on professional management, long-term planning and organizational stability
differentiates Psiphon from other projects in the digital rights space (see Chapter 6 on Tor, for
example), that value “rough consensus and running code” (Internet Engineering Task Force,
1998) and can be prone to “the tyranny of structurelessness” (Freeman, 1972). The company’s
employees are overwhelmingly white and male, but to their credit, this is something that they
repeatedly brought up as a problem.

68. Interview, October 17, 2016. Towson, MD.

69. Interview, September 21, 2017. Toronto.
We now turn to the people of Psiphon — who are they, what motivates them to develop
anti-censorship technology, and how do they see their role in the broader ecosystem of actors
vying to determine the rules for online communication?
Psiphon's freedom technologists
One of this dissertation’s primary research questions asks: What motivates these
“freedom technologists” (Postill, 2014) to dedicate their careers to digital rights technology
development, rather than applying their skills elsewhere? For software developers especially,
there are myriad other career paths that would be much more lucrative.
In addition to the CEO, I interviewed ten different Psiphon employees, about half of the
company. All were male, and all but three were developers or computer scientists (there are two
women employed in management roles, but they were out of the office during my site visit).
Most (7 out of 10) were native-born Canadians. Staff employed in technical roles had studied
computer science or related fields at the university level. Interviewees came to Psiphon either
after a broad job search in computer science in the Toronto area, or as a result of personal
relationships with Psiphon founders or other staff. A few had previously worked for other
organizations in the digital rights space. With the exception of CEO Karl Kathuria, all
interviewees requested anonymity, and are identified as Psiphon Employee 1, Psiphon Employee
2, etc. Quotations are lightly edited for clarity.
John Postill (2014) introduced the concept of “freedom technologists,” who “combine
technological and political skills to pursue greater Internet and democratic freedoms, which they
regard as being inextricably intertwined” (Postill, 2014, p. 1). Psiphon’s employees embody that
concept, though they are not quite the “radical techies” theorized by Stefania Milan (2013), as
that construct implies a non-profit organizational structure. Far from viewing freedom technologists as utopian "slacktivists" with a naive view of the technology, Postill argues that "most freedom technologists are in fact
technopragmatists, that is, people who take a very pragmatic view of the limits and possibilities
of new technologies for political change” (p.2). None of the Psiphon employees I interviewed
were familiar with the term, but several guessed its meaning fairly accurately:
I’ve never heard that before… It sounds like technologically inclined people making tools
to pursue freedom of information, of discourse, of travel, whatever (Psiphon employee 1).
It’s gotta be an individual, anyone advocating for expression of rights, respect of the
Universal Declaration [of Human Rights]… I’m comfortable with all those phrases, that’s
generally my understanding of what we do here. I’m not on the tech side so much
(Psiphon employee 4).
Many of the developers and other staff pointed to a desire to use their technological skills
to help others:
My whole career arc has been all about using tech to help people. Seeing how my work
benefits people on the ground in places like Iran is really rewarding (Psiphon employee
2).
I love writing code. Having a positive impact is hugely rewarding. Having people tell you
about real times that they used it. I have an Iranian friend who lives in Toronto; he used it when
he went back over the summer (Psiphon employee 3).
The reward is seeing that information that should be disseminated, is. Hearing user
feedback is rewarding (Psiphon employee 4).
We’ve managed this long, and I get paid! This is one of the coolest jobs… to just tell
someone “my job is helping people around the world get past state censorship”… The
personal, spiritual reward of doing some actual social good can’t be overstated (Psiphon
employee 5).
It was also clear that Psiphon employees enjoy their day-to-day work tasks and working
environment:
It’s super fun and constantly changing. You get to learn tons of stuff you wouldn’t
somewhere else. Everyone is super passionate. Money’s never far from anyone’s mind,
but it’s not the driving force for everyone. Conversations become morally compelling --
but also morally fraught and argumentative. When it started we only had maybe 10K
users, it’s been rewarding to see the company grow and succeed. Now it’s more like 10
million users (Psiphon employee 5).
I like the company culture, the ability to be on the cutting edge of technological
innovation. The development team is so smart, so driven. I’ve learned so much in my few
years here. The opportunity to find these little tricks and loopholes in the way the internet
was designed to work, to allow us to help other people get the info they want. Our users
aren’t all activists, they could just be people wanting to listen to music or get on
Facebook to message a relative. Being the mouse in a cat-and-mouse game is really
rewarding. People can get online, and that’s a good day (Psiphon employee 6).
We use tech to help people and can see impact on the ground. It’s an incredible team with
intelligent people, and I feel adequately compensated for the work (Psiphon employee 2).
While the Psiphon team seems to fit Postill’s definition, they do not necessarily embrace
the “freedom technologist” label, and some even rejected the phrase as not applying to them or
as a discursive manifestation of Western imperialism:
That sounds like something that people want to call themselves. No one here would call
themselves that. It’s a pretty self-righteous term. There’s no collective “we” of “we are
the freedom technologists”. I think it’s dumb that they put “freedom” in the title [of the
Internet Freedom Festival]. It’s too self-congratulatory. I mean fine, people can call
themselves that, but I will never self-identify as that. I do believe in that work and the
values behind it, but not the label… Language is the only thing we have to communicate.
It’s rife with controversy, and so easily picked apart. We should be careful with the terms
we choose to use. We need more self-awareness and self-reflection (Psiphon employee 7).
The same employee continued:
[Liberation technology] is a dumb buzzword. Actually I have even more distaste for it
than that, I’m really not a fan of the word. I know what they’re trying to get at -- this
classic, western liberal democracy paternalistic view of the rest of the world. It speaks to
that complex, which we have in North America and Europe, which needs to die. I'm sure a
lot of people who use that phrase do so without ill-intent. It’s a dumb name for a whole
basket of plural and overall noble efforts. People should stop calling it that… I know that
in governance spheres they want to have these “god” terms for things, and they do it for
efficacy, for impact, for an ethos… but I don’t really see the need for it. But you could
say that of many things, not just technology. At that level, we always create these
binaries, and that's how they want people to understand the world, in terms of good/evil, black/white… Why can't we just call the tools what they are? There's no need for a value-
laden umbrella term that creates a vision. Just say “anonymity tool”, “circumvention
tool”, “secure messaging tool”, etc. No one in Uzbekistan goes online looking for a
“liberation technology” -- if you create a term and the definition of it, you can make
anything be anything. That’s the discursive power of institutions, if you read Foucault,
power lies in naming things -- this is how we create the good and evil that we want to
receive (Psiphon employee 7).
Others questioned the need for any label to describe their work:
Another label for what we do? I have never thought about it. No. This is software
development in a particular area. There probably are people whose desire is to do
LibTech [liberation technology] stuff and have a cool name for what they are/what they
do. While I feel really strongly about what we do, I wouldn’t apply a particular term to it.
Liberation tech developer -- but that's a statement of fact, not a title or a mantle (Psiphon
employee 5).
Nearly all the Psiphon employees interviewed used the image of a cat hunting a mouse to
describe the dynamic between Psiphon (and similar tools) and censoring governments:
It’s like a cat and mouse game between companies and governments, this censorship and
circumvention arms race. There’s a set of companies on each side. Not just governments
are adversaries, but also workplaces, schools, etc. Psiphon came from the University of
Toronto, for example. And filtering isn’t always bad: sometimes it’s about keeping
students off of Facebook when they should be working. Schools and universities are very
interested in figuring out how to disable Psiphon. It’s not only a political tool but also a
tool for kids, who could become users of Psiphon later down the road (Psiphon employee
8).
Several interviewees seemed to hold a certain belief in the neutrality of technology, perhaps as a way to distance themselves from how the tool is ultimately used:
What we really want to do is make a really good circumvention tool. After that, what it’s
used for is out of our hands. You get too bogged down if you try to predict what people are
going to use it for. You’re fooling yourself if you think you know how other cultures are
going to use the tool that you make. Just make a really good tool and see where it takes
you (Psiphon employee 1).
[Psiphon] is just a tool. It’s mostly a neutral tool like a knife: you can cook or you can kill
with it. It’s a more or less neutral tool (Psiphon employee 8).
While most interview participants seemed wholeheartedly committed to Psiphon’s
mission, some developers expressed ambivalent feelings about the broader context of their work:
Psiphon and other tools, in my opinion, are 90% about governments projecting power to
other populations. 10% is about rights, but rights are fiction. It’s all politics. BBG, VOA,
BBC… it’s all government. I have a cynical and skeptical view. There’s no real need for
Iranian people to see porn, but people provide tools to do it. They’re interested in
degrading other people and to push some propaganda. Even if you see that BBC and
Deutsche Welle are mostly correct information, it’s still slanted, selective. For example,
some time ago there was an explosion in Toronto that I only heard about from Russian
media. They only give us what they want to give us. Each country is trying to push its
own stuff, its internal propaganda and external propaganda. There’s no real difference
between Canada and the US. There are groups of countries: Europe with North America,
Russia and China are getting closer to each other. Europe and America together, Australia
and Canada off to the side. [Working at Psiphon] is a stable financial source. I’m trying to
be as neutral as possible. I’m just a developer, whatever you use it for is not my problem.
In my experience -- I was in politics a long time ago -- I saw how dirty it is, so I try
to be as careful as possible (Psiphon employee 8).
In sum, the Psiphon employees I interviewed were committed to the company’s mission:
helping people get around internet censorship, while also rhetorically distancing themselves from
the political implications of the work. They seemed to like their work and their colleagues, and
appeared satisfied with their salaries and working conditions. They were cognizant that the
company’s success (and ability to pay them well) was linked to Psiphon’s business model, but
considered the financial relationship to public diplomacy broadcasters a regrettable necessity.
Relationship between Psiphon and public diplomacy broadcasters
This section interrogates the relationship between Psiphon and global media politics. As
seen in the previous section, a significant portion of Psiphon's revenue comes from government money, including contracts with American, British, and German entities
engaged in public diplomacy broadcasting. This directly connects the company to Internet
Freedom as a foreign policy platform, yet many of the Psiphon employees I interviewed
expressed ambivalence and misgivings about Internet Freedom as an ideology.
The previous section detailed Psiphon’s business model, what motivates Psiphon’s
employees to do this work, and the joys and frustrations they find in their work. We now turn to
an examination of the relationship between Psiphon’s mission — to provide censorship
circumvention to internet users in censored environments — and the public diplomacy
broadcasters that provide much of the company’s revenue. Public diplomacy is the process
through which nation-states attempt to win the “hearts and minds” of foreign populations
through various tactics, including citizen and student exchange programs, educational and
cultural programming, and broadcasting. Public diplomacy broadcasters vary widely in the
degree of journalistic independence they enjoy and in their alignment with their government’s
foreign policy. There is also a great degree of variation in the leeway that governments give other
countries’ public diplomacy broadcasters: the British Broadcasting Corporation’s BBC America
Channel is widely distributed and accepted in the United States, while its sister channels BBC
Persian and BBC Chinese face significant restrictions in Iran and China, respectively. Public diplomacy broadcasters thus
rhetorically link their right to reach foreign audiences (and attempt to influence them) to the free
flow doctrine and to internet freedom discourses.
In an interview, Richard Sambrook shared his insights into the broadcasters’ perspective,
based on his experience as Director of Global News at the BBC World Service (2004-2010). The
World Service was broadcasting in 45 different languages at the time, but was blocked in some
of the places that it most wanted to reach from a public diplomacy perspective — China and Iran,
for example. Partnering with Psiphon was “a way to help people who wanted to get BBC content
by enabling them to get around the censorship.” Sambrook saw the partnership as a “useful pilot
or experiment to help get content to audiences.”
[Footnote 70: Phone interview, January 10, 2018.]
The BBC World Service is funded by the British government, much as the U.S. federal
government funds Voice of America and other Broadcasting Board of Governors (BBG) entities.
But the British government is constitutionally barred from influencing the BBC’s content, though
it has some input on budgetary matters, the number of language services, etc. As Sambrook put
it, “the BBC is there to reflect British values but not British politics.” The World Service’s
mission is to bring high quality information content to all parts of the world, in a way that is free
to consume. The BBC earned its reputation for accuracy and quality during the Second World
War and the Cold War, and its mission remains unchanged — albeit in a different environment
that is characterized by both greater ability to transmit information across borders and by
governments’ increased desire and technical ability to control the spread of information within
their borders.
[Footnote 71: Phone interview, January 10, 2018.]
The BBC’s first major experience with blocking came in 2009, around the Iranian
elections that June. Both the BBC Persian website and the satellite TV channel were blocked,
though the web-based livestream of BBC Persian TV was still accessible. The BBC noticed large
increases in traffic to the livestream from within Iran, and tried to notify audiences the stream
was still available while working with Psiphon to set up dedicated server nodes. Shortly
thereafter, the BBC Chinese service likewise worked with Psiphon to set up dedicated nodes, and
their availability was likewise advertised via email newsletters, Twitter, direct email contact with
individuals, and on-air announcements. In December 2010, Chinese dissident Liu Xiaobo was
awarded the Nobel Peace Prize for his pro-democracy work. News of the award and of the
ceremony itself was heavily censored by the Chinese government, though the BBC recorded that
Psiphon facilitated 387 connections from China to its livestream of the award ceremony, in
addition to other users around the world who viewed the livestream directly (Kathuria, 2011).
Among the many people working to keep the BBC’s news content available worldwide
was Karl Kathuria, who was then in charge of the technical infrastructure for the BBC World
Service websites. His role entailed “making sure that BBC China and BBC Persian could be seen
by their target audiences,” and this experience led him to understand that “the internet is not a
level playing field.” He started looking for ways that the BBC could reach audiences in censored
environments, notably via a brief fellowship at the Citizen Lab (where Psiphon was created) in
2011. The fellowship led him to realize that this was the field he wanted to be in, and he initially
consulted for Psiphon from his home in the UK before relocating to Canada and becoming CEO
in 2014.
[Footnote 72: Interview, October 18, 2016. Towson, MD.]
In a 2011 conference paper, Kathuria examined the options available for international
news broadcasters confronted with satellite and radio jamming as well as technical internet
censorship. The paper focused on detecting censorship and circumventing it, arguing that
Psiphon was the “most suitable” circumvention tool for the BBC. The paper highlighted five key
attributes:
- Usability: The BBC would direct its audiences to the tool, but not be responsible for
hosting it.
- No executable code: The BBC was not willing to provide software that had to be
installed on the user's PC; therefore, a web-based proxy was chosen. [Footnote 73: The paper referred to Psiphon 2, which did not require installing a local client. The current version, Psiphon 3, does require such installation.]
- Appropriate information: The user of the circumvention solution would need an
appropriate set of clear terms and conditions for use.
- Hosting environment: The hosting of the solution had to be in a secure environment,
provided by a trusted corporate entity, and not peer-to-peer based.
- Exit strategy: If a web-based solution was compromised or if the BBC needed to
withdraw from providing the service, it had to be able to do so (Kathuria, 2011).
Psiphon’s financial relationship to public diplomacy broadcasters raises questions about
the company’s autonomy and agency. CEO Karl Kathuria and other employees rejected the idea
that Psiphon was an agent of U.S., British or German foreign policy, pointing out that it would be
rather difficult for any company to simultaneously be an agent of three different countries’
foreign policies. They also pointed to the structure of the arrangement, whereby sponsors pay for
the connection costs of a certain number of users in target countries, as ill-suited to establishing a
controlling relationship between governments and Psiphon. Psiphon provides access to the free
and open internet, which includes all kinds of content. All the sponsors get in return for their
financial support is the opportunity to place their content in front of users’ eyes when they first
connect to Psiphon’s servers.
Kathuria further described Psiphon’s process for evaluating new potential sponsors:
This only comes up rarely, as we’re expensive, and not many people get what we do. We
generally only work with organizations where we can get a verbal assurance that they
will respect our autonomy. It’s not a formal process, but we do get pushback from
members of the team who have a strong moral compass at times. We only want to work
with partners who are going to make us look good, so we do our due diligence. It’s not
about politics, it’s about recognizing that none of us are experts on all media systems out
there, so we rely on experts we trust to help us figure out if their values are aligned with
ours. The BBG, BBC, etc. are among the least biased [international broadcasters]. Voice
of America is not trying to disseminate fake news, they might screw up editorially from
time to time, but intent matters. Motives matter.
[Footnote 74: Interview, September 20, 2017. Toronto.]
To be sure, there is room for disagreement about whether Psiphon’s sponsors are indeed
legitimate news organizations, propaganda outlets, or something in between, and evaluating
these distinctions is outside the scope of this chapter. The point here is that Psiphon’s leadership
takes steps to preserve its autonomy: designing contractual agreements that protect it, maintaining diverse revenue streams (to avoid being beholden to any one funder), and holding internal discussions before deciding to work with a new partner. Kathuria identified RT
(formerly known as Russia Today) as an example of a public diplomacy broadcaster that Psiphon
would decline to work with, owing to the network’s record of propagating demonstrable
falsehoods.
Psiphon, Iran and the Soft War
Building on the previous section, we now turn to a more focused analysis of information
controls and censorship circumvention in Iran, where many (perhaps most) of Psiphon’s users are
located. Iran, of course, has been an Islamic Republic since the 1979 revolution. It has been a
key geopolitical adversary of the United States ever since, with a drastically divergent
understanding of terms like “human rights,” “freedom of expression,” and “internet freedom.”
Consequently, the Iranian government sees the U.S. Internet Freedom agenda not as a
benevolent, altruistic example of America at its best, but as an imperialist onslaught against
Iranian cultural and religious values and its national sovereignty. In this context, technologies
that allow Iranian internet users to bypass government filtering are seen as tools of Western
imperialism — a frame that is sharply divergent from the Psiphon team’s own understanding of
their product. This section considers the rationales behind the Iranian government's censorship of
the internet, and places circumvention tools like Psiphon in the context of what Iranian leaders
call the “Soft War,” understood as a mediated attack on Iran’s sovereignty and culture by the
United States and other Western countries.
[Footnote 75: While I have encountered speculation that the term "Soft War" is a wordplay referencing "software," I have not found this claim in the scholarly literature, nor any primary documents corroborating it. Moreover, the two terms sound nothing alike in Farsi, as several interview participants have pointed out.]
Iran’s information controls regime is grounded in both endogenous and external factors.
On the domestic level, Iran's sovereignty is "rooted in Islamic, populist economic and cultural policies, and in the
idea that citizens require a sort of ‘guardianship’ provided by a supreme religious
leader” (Alimardani & Milan, 2017, p. 3). Its form of government exhibits some “democratic
tendencies” (ibid.) as well as “authoritarian structures derived from the divine authority given to
clerics" (ibid.; Keddie, 2003). This framework calls for the government to "protect" the
population from cultural materials that it sees as contrary to Islam, regardless of the medium on
which the content in question is inscribed. The state has thus maintained strict control over
domestic cultural and media production and on imports of foreign films, television series, books,
magazines, and music. Iran’s aspirations to a “halal internet” (i.e. an internet that conforms to its
official version of Shia Islam) represent a natural progression from the preexisting censorship
regime.
These endogenous factors are linked to the “soft war” paradigm through which Iran’s
government views its relationship with the West and especially with the United States: “the
strategic use of non-military means to achieve objectives such as regime change, an objective
that might otherwise be attained through conventional weaponry” (Price, 2015, p. 135). (This
concept is distinct from Joseph Nye’s “soft power” construct (2004), defined as the use of
attraction and persuasion to achieve geopolitical goals).
The internet became available to Iran's general population in the early 2000s, and Supreme
Leader Ayatollah Ali Khamenei identified the new medium as a national security concern in his
speeches:
Everyone knows that the West’s conflict with the Islamic Republic system is not like it
was in the first ten years after our revolution. In that conflict, the Western governments
used force against us, but they lost. Our enemy’s priority today is soft war, which is war
through cultural means, through spreading lies and rumors by the use of communication
tools. Soft war means causing doubt in the mind and hearts of people… One of the tactics
of soft war is causing divisions among people, like what the enemy did during this year’s
elections (Khamenei, 2009, cited in Alimardani & Milan, 2017. Translation by M.
Alimardani).
The rhetoric only intensified over the years:
All increasing advances in the cyberspace are serving the realization of the “soft war’s”
goals, but more important than these hardware capacities and advances is the huge army
of intellectual, political, literary, and social elites, as well as prominent activists in the
field of communications and different art disciplines that have created a strong and
influential software backdrop for the realization of the goals of the "soft
war” (Khamenei, 2015, cited in Alimardani & Milan, 2017. Translation by M.
Alimardani).
Invoking the notion of a “soft war” securitizes the spread of internet content and of
digital rights tools within Iran, linking information controls to the preservation of the Republic’s
Islamic nature and making compliance with the restrictions a matter of personal faith and
morality. Communicative efforts seen as undermining the goals and values of the 1979
revolution represent an existential threat to the regime and the society whose caretaker it
imagines itself to be. Monroe Price (2015) quotes Ali Mohammed Na'ini (2009), who was then
deputy head of the Basij-e Mostaz’afin (an organization subordinate to the Revolutionary Guards
and the Supreme Leader), to argue that “Soft War proceeds by seeking to destroy the
effectiveness of a system’s ideological framework” (p. 138):
The main aim behind the Soft War is to force the system to disintegrate from within in
view of its values, beliefs, its main fundamental characteristics, and its identity. Any
system, especially a system that is based on certain beliefs and values, owes its identity
and its existence to those beliefs and values. It is based on the model and principles on
the basis of which it continues its political, social and economic life (Ali Mohammed
Na’ini, interview with Ali Shirin, “Military Official on ‘Soft War’ that Iran is Allegedly
Facing,” BBC Monitoring International Reports, November 28, 2009, quoted in Price,
2015).
Iranian authorities have branded the apparatus that they imagine to be behind the “Soft
War” a “cultural NATO,” which Price explains as “a phrase that encapsulates a quasi-military
approach from the West and implies a coordinated activity within an alliance (through the NATO
reference) but one that acts with the velvet glove of 'culture'" (Price, 2015, pp. 138-139). The
Islamic Development Organization, a state entity, argues that:
One of the purposes of cultural NATO is to put the national-religious culture of the target
society on the margin so that [the external entities hold] in hand the control of the world’s
affairs via the domination of the [target society’s] desires” (“Soft War Reasons Against
Islamic Republic of Iran,” Islamic Development Organization, January 2, 2010, cited in
Price, 2015, p. 139).
Official discourse also ties domestic social protest movements (such as the 2009 Green
Movement) to the “soft war,” framing the domestic opposition as a seditious movement
fomented from abroad, and therefore illegitimate (Alimardani & Milan, 2017). Indeed, “since
1979, the Islamic Republic has characterized many internal dissenters (and often mere
participants in civil society activities) as ‘proxies of the United States and its allies, working to
weaken the political system’” (Price, 2015, p. 141, quoting Adelkah, 2010). The American origin
of many of the platforms and tools involved — notably Twitter, whose role in the 2009 Green
Movement has been overstated in the West, but was nevertheless far from inconsequential —
heightens the regime’s sensitivity to their use. This partially explains the unique role that
Telegram, created by Russian tech entrepreneur Pavel Durov, plays in Iranian political life (this
dynamic will be explored further in Chapter 8). The U.S. government’s financing of censorship
circumvention tools is yet another weapon in the perceived arsenal for the soft war.
[Footnote 76: For a more detailed analysis of Iran's "Soft War" framing, see Price, 2015, chapter 7, "Soft Power, Soft War" (pp. 134-152).]
Within the "Soft War" frame, then, technologies that allow Iranian internet users to bypass government filtering appear as tools of Western imperialism, an understanding sharply at odds with the Psiphon team's own. Psiphon's creators and current staff see
unfettered internet access as an individual human right. They are cognizant of the geopolitical
stakes of their work, but consciously reject the “Soft War” frame as well as claims that they are
“pawns” or “agents” of the U.S. government. As these “freedom technologists” see it, Psiphon’s
business model may be reliant on the public diplomacy budgets of the U.S., U.K., and Germany,
but that doesn’t mean that the company is doing the bidding of those governments. Rather, they
see Psiphon as providing a public good to which all human beings are entitled. The fact that
Psiphon is only able to provide that good thanks to financial arrangements linked to certain
countries’ foreign policy objectives is secondary, in their view, and does not diminish their
agency or autonomy.
Censoring governments, circumvention tool developers, and the governments that fund
them each have a frame to describe the geopolitical dynamics at play: the “Soft War” (in Iran’s
case — other censoring countries have similar frames), Internet Freedom as an ideology, and
digital rights. The digital rights frame is similar to the internet freedom frame in many ways. The
crucial difference lies in identifying which actors have agency. The ideology of Internet Freedom
focuses on nation-states as the primary actors in international affairs, whereas the digital rights
frame emphasizes the agency of individual developers, activists, and internet users.
Conclusion
This chapter has presented a case study of Psiphon, a Canadian company that provides an
internet proxy tool for users whose access to the global internet is censored or filtered through
technical means. Psiphon is free for most users (a premium subscription is available), deriving
the bulk of its revenue from government funds, notably sponsorship agreements
with public diplomacy media organizations, including the BBC World Service, Deutsche Welle,
Voice of America, and the Broadcasting Board of Governors. This arrangement raises questions
about agency and autonomy: is Psiphon an autonomous company with agency to pursue its own
goals (i.e. providing access to the uncensored internet while earning a profit), or is it an agent of
the United States and its allies?
There are strong arguments for both interpretations, and the analysis hinges in part on
whether one views the public diplomacy broadcasters that sponsor Psiphon as legitimate news
organizations or as propagandists, and on whether one regards internet censorship of the kind
practiced by Iran, China and other countries as legitimate. Iran and other censoring governments
view Psiphon as part of a “cultural NATO” that threatens their ability to control information
flows within their borders. Information and communication technologies have eroded the state’s
monopoly over the legitimate control of information flows (see Price, 2015), a monopoly that
nation-states are seeking to reestablish. This Westphalian paradigm is difficult to reconcile with
the individual human rights framing emphasized by Psiphon and by internet freedom/digital
rights discourses more broadly, and there is significant contestation about the legitimacy of
technical censorship and other forms of information controls, both internationally and
domestically.
Psiphon’s employees reject the idea that they are pawns in the geopolitical games played
by nation-states, emphasizing their commitment to individuals’ right to access the free and open
internet. Psiphon’s “origin story” suggests that this commitment to individual communication
rights was the impetus for the tool’s creation, and that the company’s business model took
advantage of Psiphon’s ability to solve an existing problem faced by deep-pocketed actors. At the
same time, many of the company’s employees view this institutional connection to the U.S.
Internet Freedom agenda with ambivalence. If one accepts Psiphon’s narrative, the British,
German, and U.S. governments’ financial support of Psiphon should be understood as a case of
“strange bedfellows” joining forces for a common goal. Their motivations for thwarting technical
censorship of the internet and their expectations of the end result are different, and it would be
inaccurate to say that public diplomacy broadcasters co-opt digital rights technology projects like
Psiphon. Rather, I argue that Psiphon should be understood as a social actor with a high degree
of agency and autonomy in pursuing its goals, one that is nevertheless constrained by the financial
necessity of generating income to sustain its activities. The frame through which Psiphon and its
people explain their relationships with other actors in the international system is sophisticated
and complex, and should be taken into account in future analyses of the political economy of
global information flows. However, it is nonetheless also true that Psiphon is part of a socio-
technical assemblage that connects funding governments (the United States, United Kingdom,
and Germany), public service broadcasters, and public diplomacy agencies to publics in censored
internet environments — what the Iranian government calls a “cultural NATO.” This presents a
tension with Psiphon’s employees’ self-image as facilitating a basic human right (access to
information).
Interpreting the relationship between Psiphon and its sponsors also hinges on whether one
believes that funding relationships necessarily imply control, and on an evaluation of the
mechanisms intended to isolate the recipient organization from funder pressure. In this case, the
quid pro quo is explicit: Psiphon’s sponsors want to reach key audiences in censored countries,
and pay Psiphon to help those audiences circumvent internet censorship and to expose them to
sponsored content when they first connect to Psiphon. After this initial encounter with a
sponsored landing page, Psiphon users are free to access any online content they desire.
The questions raised by this case study are highly normative, value-laden ones for which
there are no easy answers. My own view is that Psiphon’s sponsors are indeed legitimate news
organizations, that the types of internet censorship that Psiphon circumvents are not legitimate
(and in fact represent a human rights violation), and that Psiphon takes sufficient steps to
maintain its autonomy from its funders, though it could be more transparent about what those
steps are (this analysis is informed by off-the-record information that I am not able to share
publicly at this time). But again, my analysis is grounded in my normative commitment to
placing individual communication rights ahead of governments’ claims to information
sovereignty — a stance that is far from universal.
Chapter 6 - Tor: Peeling the onion
Introduction to the case study
This chapter considers one of the technologies at the heart of the digital rights movement,
Tor, and the non-profit organization behind it, the Tor Project. Tor was selected for four main
reasons. First, with roots in the mid-1990s, Tor is one of the oldest digital rights tools around, with a rich history that has yet to be told comprehensively and is often misrepresented. Second, the Tor
Project’s continued U.S. government funding, and (to a lesser extent) onion routing’s origins at
the U.S. Naval Research Lab, directly connect Tor to the U.S. Internet Freedom agenda. Third,
Tor is the core technology underlying many other digital rights tools, such as the Open Observatory
of Network Interference (OONI) and the whistleblowing platforms SecureDrop and GlobaLeaks.
The Tor Project is thus a core node within the digital rights ecosystem. Finally, Tor is both
ubiquitous and an object of controversy within the digital rights community as well as
government circles, which generates a wealth of primary and secondary sources. Everyone has
an opinion about Tor, and those opinions serve as an entrée into views about digital rights technology and internet
freedom more broadly.
In the context of this chapter, “Tor” refers to the anonymity technology itself, comprising
the Tor Browser as well as Onion Services, while “the Tor Project” refers to the 501(c)(3)
organization and its staff, including volunteers. The name “Tor” was originally an acronym for
“The Onion Router” or “The Onion Routing” (first-hand accounts differ), and studying Tor can
be like peeling an onion: layers of complexity nestled within one another like Russian dolls, the
close inspection of which has been known to produce tears. But just as the onion is an underappreciated yet essential ingredient in many dishes, Tor is an invisible yet vital component of
global politics today.
Roadmap to the chapter
The chapter proceeds as follows. First, I provide an overview of the research
methodology and data collection process, which are discussed in greater detail in Chapter 2.
Next, I briefly explain what Tor is, what it does, and how — background that is necessary for
understanding the case study. Then, I present an institutional history of the Tor Project, from the
genesis of onion routing (the core technology behind Tor) to 2015, when my fieldwork began.
The next section of the chapter is organized according to the level of analysis, going from
the micro level of the organization, to the meso level of bureaucratic politics within the U.S.
federal government, and to the macro level of geopolitics. The empirical research and the
analysis presented in this middle section is essential for understanding the remainder of the
chapter, which traces the evolution of the Tor Project as an institution between 2015 and 2018, as
it professionalized its structure and operations. As we will see, a major catalyst for this evolution
was the very public ouster of Tor developer and “evangelist” Jacob Appelbaum, who was
accused of bullying, harassment, sexual assault and more by a number of his former colleagues.
Thinly veiled references to “the Jake Problem” peppered my interviews from the very beginning,
dating to before the specific accusations against Appelbaum became public. Though Appelbaum
has not been charged with any crimes in connection to these allegations, an internal investigation
concluded that “many people inside and outside the Tor Project have reported incidents of being
humiliated, intimidated, bullied and frightened by Jacob, and several experienced unwanted
sexually aggressive behavior from him” (Steele, 2016). My fieldwork in the digital rights space
leads me to conclude that the general tenor of the accusations against Appelbaum is accurate.
While I can’t verify details of all specific claims, I believe the women, men, and gender-
nonconforming people who have come forward. I stand with victims.
To the Tor Project’s credit — and especially to the credit of its new executive director,
Shari Steele — the organization is taking proactive steps to professionalize and to institutionalize
policies and procedures to prevent recurrences of this pattern of behavior. As part of this
evolution, the Tor Project has been moving away from the implicit “programmer supremacy”
that characterized its early years. One can only hope that these efforts continue and that the rest
of the digital rights space follows suit (this dynamic is explored in chapter 3).
Data, sources and methods
This chapter was first conceived as a stand-alone seminar paper for my cognate
coursework in International Relations in the Fall of 2014, the second year of my PhD program.
At the time, I was intrigued by what I saw as a paradox: Tor was simultaneously supported by
some U.S. government agencies and lambasted by others. It seemed to me that the federal
government was at war with itself over online anonymity. Around the same time, critics of the
Tor Project and of the Internet Freedom agenda more broadly (notably Yasha Levine of Pando
Daily) spun conspiracy theories about Tor that only grew more complex as I endeavored to
untangle them. Under the supervision of Patrick James, [Footnote 77: Professor, USC School of International Relations, and a member of this dissertation committee.] I pursued the research in the Spring and
Summer of 2015, conducting a series of interviews as well as archival research (with IRB
approval). Initial interviews were conducted in Valencia, Spain and Washington, DC between
March 1 and March 21, 2015, with 18 participants including Tor users and developers as well as
then-current and former U.S. government officials. I conducted additional interviews in
Washington, DC in May and June 2015, as well as participant observation at the Chaos
Communication Camp (CCC) outside of Berlin in August 2015. Organized by the German
hacker collective Chaos Computer Club and held every four years, CCC is a
gathering of over 4,000 hackers, hacktivists, and like-minded individuals from around the world
who share a commitment to privacy, freedom of expression, and the preservation of the internet
as a global commons.
By the Fall of 2015, it was clear to me that Tor would be at the heart of my dissertation
project. I advanced to candidacy in January 2016 and received IRB approval for the dissertation
the following August. Interviews and participant observation were conducted throughout 2016,
2017, and early 2018 in Valencia, San Francisco, Berlin, Baltimore, Montreal, and Washington,
as well as online and on the phone. In total, I have interviewed over 30 individuals about the Tor
Project. The ground rules for interviews ranged from “on background, not for attribution” to “on
the record,” depending on participants’ preferences, which is why only some interviewees are
identified. I then conducted thematic coding, using NVivo, on the content of these interviews and
on a wide range of primary and secondary sources. Throughout the writing process, I solicited
input and “sanity checks” from interview participants, who mostly validated and at times
questioned my analysis, pushing me to dig deeper and question my assumptions. I plan to share
subsequent drafts with additional informants for their feedback, notably with current and former
Tor Project leadership and staff.
The basics: What Tor is, what it does, and how
The Tor browser
Tor was originally designed as an anonymity tool allowing users to access the internet
without revealing their identity or location to the sites they visit or to any observers of the
network. It does so by routing web traffic through a network of virtual tunnels. Rather than
traveling directly between the user’s local machine and the remote server hosting the web page,
data packets jump from the local machine to a series of three nodes (entry node, relay, and exit
node) before connecting to the desired server. Data packets sent back from the server reverse the
same route. Connections between the local machine and the entry node, and between the Tor
nodes, are encrypted. Only traffic between the target server and the exit node is transmitted "in
the clear” and can be read by an adversary, though the growing adoption of the HTTPS protocol
means that even that section of the network is increasingly likely to be encrypted.
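The layered encryption that gives onion routing its name can be illustrated with a short sketch. The following Python fragment is a conceptual toy only, not Tor's actual protocol: real Tor negotiates circuit keys hop by hop and transmits fixed-size cells, whereas this sketch simply uses the Fernet cipher from the third-party "cryptography" package to stand in for each hop's key. What it shows is the essential property: the client wraps a message in one encryption layer per node, each node can peel off exactly one layer, and no single node sees both the plaintext and the sender.

    # Conceptual sketch of onion-layered encryption (not Tor's real protocol).
    # Requires the third-party "cryptography" package for the Fernet cipher.
    from cryptography.fernet import Fernet

    # One symmetric key per hop: entry node, middle relay, exit node.
    hop_keys = [Fernet.generate_key() for _ in range(3)]

    def wrap_onion(payload, keys):
        # The client encrypts the innermost layer first (exit node's key),
        # so the entry node's layer ends up outermost.
        for key in reversed(keys):
            payload = Fernet(key).encrypt(payload)
        return payload

    def peel_onion(onion, keys):
        # Each hop strips exactly one layer; only the exit node recovers
        # the plaintext, and only the entry node ever sees the client.
        for key in keys:
            onion = Fernet(key).decrypt(onion)
        return onion

    message = b"GET /index.html"
    assert peel_onion(wrap_onion(message, hop_keys), hop_keys) == message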
Onion routing's design and the global nature of Tor's network of nodes mean that the
technology is also useful for circumventing many forms of online censorship, and as we will see,
Tor owes much of its funding to this secondary affordance. Censors have caught on to the
emergence of censorship circumvention tools, of course, and over the years the Tor Project has
developed a number of techniques to get around new types of blocking. Bridging nodes or
“bridges” are Tor nodes that aren’t listed in the public directory of Tor nodes, which makes it
harder for ISPs to block them. Information on how to connect to these bridges is spread by word-
of-mouth. Bridges are powerless against Deep Packet Inspection (DPI) boxes, which can
differentiate between Tor traffic and "regular" web traffic. "Pluggable transports" address this by disguising the
traffic between the user’s local machine and the Tor entry node. As censors become savvy to the
fingerprints left by each pluggable transport, the Tor Project has developed new types of
pluggable transports in a cat-and-mouse game that seems unlikely to end.
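Concretely, a user behind a DPI-equipped censor combines these two techniques in Tor's configuration file, torrc. The sketch below uses the real torrc directives for bridges and the obfs4 pluggable transport, but the bridge address, fingerprint, and certificate are placeholders: actual values must be obtained out of band (for example, from bridges.torproject.org or by email), and the path to the obfs4proxy binary varies by system.

    # torrc sketch: reach the Tor network through an unlisted bridge, with
    # the obfs4 pluggable transport disguising Tor traffic from DPI boxes.
    UseBridges 1
    ClientTransportPlugin obfs4 exec /usr/bin/obfs4proxy
    # Placeholder bridge line; real values come from the bridge operator.
    Bridge obfs4 192.0.2.1:443 0123456789ABCDEF0123456789ABCDEF01234567 cert=<cert-from-operator> iat-mode=0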
Tor’s user experience has gone through several iterations. As of late 2017, the Tor
technology has two main components, the Tor Browser and Tor Onion Services (formerly known
as Hidden Services). The Tor Browser is similar in many respects to other web browsers such as
Microsoft Internet Explorer, Mozilla Firefox, or Apple Safari (it is a modified version of Firefox), with
one key difference. While standard web browsers work by establishing a connection directly
between the user’s local machine and the distant server hosting the desired content, Tor routes
the data packets through at least three intermediary nodes in each direction. The user's IP address (and thus her identity and physical location) is hidden from both the target website and anyone monitoring web traffic, including state surveillance agencies and censors. The
Tor browser thus allows users to maintain anonymity and to access sites that would otherwise be
inaccessible because of keyword-based filtering, geofencing, or some other technical filter.
A person using the Tor browser can access the same universe of internet services and web
pages as someone using any other browser. If the user connects to the internet from a location
where the internet is subject to technical filtering or censorship, using the Tor Browser allows her
to circumvent those limitations and connect to sites and services that would otherwise be
inaccessible to her. Crucially, onion routing masks her IP address from the services she connects
to, the sites she visits, and any adversaries (to use the cryptography parlance) observing the
network. Such adversaries can include the internet service provider, surveillance tools deployed
in service of data capitalism, law enforcement, intelligence services, and other users who use the
same machine or connect to the same local network.
Onion Services
Onion Services (formerly known as Hidden Services) are Tor’s mechanism for
anonymous publishing, which are only accessible through the Tor Browser or via the Tor2web gateway service. While a technical description of how Onion Services work is beyond
the scope of this project, it suffices to say that Onion Services allow users and “services” (which
include websites, blogs, online marketplaces, chat services, and more) to connect without the
parties knowing each other’s IP addresses. In most democracies, and certainly in the United
States, Onion Services are the most controversial aspect of Tor. While the Tor Browser allows
users to anonymously access web content, Onion Services allow users to anonymously publish
and disseminate online content that is only accessible to others who know where to look. But
Onion Services are not a platform like Facebook, Twitter, or Wordpress, and the Tor Project has
no way of policing Onion Services’ content. This is by design. Instead, Onion Services allow
“Alice” and “Bob” to communicate in cyberspace without either party ever learning the other's
IP address.
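For illustration, the operator's side of this arrangement is remarkably simple. A minimal, hypothetical torrc for publishing a local web server as an Onion Service uses two real directives, HiddenServiceDir and HiddenServicePort; the directory path and ports below are placeholders.

    # torrc sketch: publish a local web server as an Onion Service.
    # Tor writes the service's keys and .onion hostname into this directory.
    HiddenServiceDir /var/lib/tor/hidden_service/
    # Map port 80 of the .onion address to a server on localhost port 8080.
    HiddenServicePort 80 127.0.0.1:8080

Visitors who learn the resulting .onion address reach the server through Tor's rendezvous mechanism, and neither side ever observes the other's IP address.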
Institutional history
Ancient hisTORy
[Footnote 78: Tor is not normally written in all caps, and doing so is seen by community members as a tell that the writer doesn't really know what they are talking about. In this case I capitalize all three letters in order to make a pun.]
Tor is “a network of virtual tunnels that allows people and groups to improve their
privacy and security on the Internet” that was first developed at the U.S. Naval Research
Laboratory (NRL) in the mid-1990s. It was originally an acronym for “The Onion Router,”
though it is a happy coincidence that Tor is also the German word for “gate.”
Until recently, the Tor Project’s website framed Tor’s original purpose as “protecting
government communications” (some older pages still do), though it is not entirely clear that this
was the case. In an interview, Paul Syverson, the NRL computer scientist most closely involved
in Tor’s creation, explained that NRL is a research lab that does basic as well as applied research,
and Tor stemmed from “a ‘this is interesting research’ kind of place” and from “conversations
while car-pooling” with colleagues. On the other hand, Syverson’s colleague on the onion
routing project, Michael Reed, had a different explanation:
The original *QUESTION* posed that led to the invention of Onion Routing was, "Can
we build a system that allows for bi-directional communications over the Internet where
the source and destination cannot be determined by a mid-point?" The *PURPOSE* was
for DoD / Intelligence usage (open source intelligence gathering, covering of forward
deployed assets, whatever). Not helping dissidents in repressive countries. Not assisting
criminals in covering their electronic tracks. Not helping bit-torrent users avoid MPAA/
RIAA prosecution. Not giving a 10 year old a way to bypass an anti-porn filter. Of
course, we knew those would be other unavoidable uses for the technology, but that was
immaterial to the problem at hand we were trying to solve (and if those uses were going
to give us more cover traffic to better hide what we wanted to use the network for, all the
better...I once told a flag officer that much to his chagrin) (quoted in Levine, 2014a).
Syverson met Massachusetts Institute of Technology (MIT) computer science graduate
Roger Dingledine at the "0th" Privacy Enhancing Technologies Symposium (PETS) in 2000. Syverson
had been asked to do research on resource management in onion routing, and turned to
Dingledine for help. Dingledine initially declined, but after the start-up for which he worked at
the time “went belly-up” he accepted a contractor position with NRL. Nick Mathewson, another
MIT computer science graduate, joined the fledgling Onion Routing Project shortly thereafter.
[Footnote 79: Interview, March 19, 2016. Towson, MD.]
Dingledine’s MA thesis project, Free Haven, was Tor’s intellectual precursor in many
ways, intended to “store documents for long periods of time” and “allow for both anonymous
publication and reading” (Gehl, 2018, p. 40). However, Dingledine and his colleagues ran into
several insurmountable technical challenges related to server credibility and accountability and
to latency, and abandoned the project.
The onion routing work was largely theoretical at first, and research papers were the
primary output. Around 2001-2002, the Defense Advanced Research Projects Agency
(DARPA) asked Dingledine to produce a demo of working onion routing software — on a three-
week deadline. Dingledine panicked. Fortunately, an undergraduate at MIT named Matej Pfeiffer
had workable code, which Dingledine modified and turned into a demo that he called OR, for
“onion router”. Dingledine emphasized to me that he wrote the code on his own time, and that it
was outside the scope of his responsibilities at NRL. In his words, “NRL did not make it or even
fund it directly. The funding caused Tor to be, but they [NRL] didn’t write Tor”. This is slightly
at odds with the oft-repeated origin story that Tor “was originally developed with the U.S. Navy
in mind, for the primary purpose of protecting government communications”, though it doesn’t
completely contradict it. My interpretation is that onion routing as a field of inquiry in computer
science was developed with the U.S. military in mind, but that the specific software that
ultimately became Tor was coded separately. From a public relations perspective, the Tor Project
has long struggled to find the right balance between, on the one hand, distancing itself from the
military (and the government more broadly) to reassure some users, and on the other hand, using
social capital from its military association to curry favor with critics in government and
especially law enforcement. For a period of time around 2015/2016, Tor’s spokeswoman
described Tor as originating “at MIT” — which isn’t completely false, but comes across as an
attempt to diminish the central role of the U.S. Department of Defense in Tor’s origin story.
[Footnote 80: Field notes, 2015-2016.]
From the beginning, the small team intended to make Tor open-source, as an onion-
routing system that was only used by the government would “nullify the qualities that you want”:
a massive, diverse, geographically heterogeneous population of users who would camouflage one
another’s online activities. As Dingledine explained in 2004,
“The United States government can’t simply run an anonymity system for everybody and
then use it themselves only. Because then every time a connection came from it people
would say, “Oh, it’s another CIA agent.” If those are the only people using the
network” (quoted in Levine, 2014a).
Moreover, the scrutiny and peer-review process associated with open-source software
ensures the robustness of the code. Several long-time Tor developers described the software as
“the most examined piece of code on the planet”. For many open-source software enthusiasts, the
possibility of examining source code is a token of transparency, accountability and even
trustworthiness. It is the opposite of the “blackbox algorithms” that are of such concern today:
anyone can inspect, audit, and tweak the software. The implicit understanding in the open-source
community is that free software can be examined and therefore trusted. This is especially true of
Tor, the subject of myriad computer science research papers.
NRL’s financial support for onion routing research ended in late 2003, though Syverson
would continue to contribute to the Tor Project intellectually to the present day.
EFF sponsorship
Across the country, the San Francisco-based Electronic Frontier Foundation (EFF) was
fighting the recording industry over peer-to-peer (P2P) file-sharing. The year 2003 was the
height of copyright litigation, with media conglomerates filing tens of thousands of lawsuits
against individual users of tools like Napster and LimeWire. It was increasingly clear to the
EFF’s young staff attorney, Fred von Lohmann, that there was going to be increasing demand for
software that helped people protect their identities. He was worried that “the tail of file-sharing
would wag the dog of online anonymity”: that large numbers of people would start using
anonymizing software solely for the purpose of evading copyright. This would be one of the least
sympathetic contexts for anonymity online and could lead to blowback, jeopardizing the legal status
of anonymous speech generally, especially if those were the first cases to be litigated, thus
setting a legal precedent.
[Footnote 81: Phone interview, October 17, 2016.]
Von Lohmann had long been interested in anonymous speech. How do you protect it?
How do you make sure it remains a possibility for people who need it? The organization where
he worked, the EFF, had been deeply involved in the 1990s “crypto wars” that centered on the
people’s right to encryption and to anonymous online activity (see Levy, 2001). The EFF
couldn’t allow the albatross of file-sharing to be tied to the neck of anonymity. They needed to
find an anonymity solution that wasn't linked to file-sharing. Von Lohmann started looking for
what technology was out there, hoping to find one that would tell the story of anonymous speech
in a more sympathetic way than the frame that the P2P companies were using. That’s when he
found Tor.
Among the groups working on online anonymity at the time, Tor stood out for two
reasons. First, Dingledine and Mathewson had actual working code. Other teams talked a good
game, but had no code to show. And second, the two men were really good advocates for the
cause of anonymous speech online. Others whom Von Lohmann encountered were not going to
be good spokespeople for the cause:
“Within ten minutes of talking to them, they’d start talking about undermining the
financial system, overthrowing the government, a whole bunch of rhetoric that I thought
was probably not going to make people understand why anonymous speech is a good
thing” (interview, 2016).
By the time Von Lohmann was done canvassing the available solutions, Tor stood out as the only
one that met his criteria. He was interested in having EFF support this technology, not just
legally, but also institutionally. The press release announcing the new partnership emphasized the
mutual benefits of the arrangement:
"EFF is a great organization to work with," said Roger Dingledine, Tor's project leader,
who, along with Nick Mathewson, is also a core developer. "EFF understands the
importance of anonymity technology for everyone -- from the average web surfer, to
journalists for community sites like Indymedia, to people living under oppressive
regimes. With their support and experience, we can focus on making Tor useful and
usable by everyone."
"The Tor project is a perfect fit for EFF, because one of our primary goals is to protect the
privacy and anonymity of Internet users," said EFF Technology Manager Chris Palmer.
"Tor can help people exercise their First Amendment right to free, anonymous speech
online. And unlike many other security systems, Tor recognizes that there is no security
without user-friendliness -- if the mechanism is not accessible, nobody will use it. Tor
strikes a balance between performance, usability, and security" (cited in Levine, 2014a; see also https://www.evernote.com/shard/s1/sh/5f06db2b-c851-454e-a40a-219b5fdc4296/5e4fd3744ecd8f931b0b8cd0b6450533).
It became clear that Dingledine and Mathewson wanted to take their project to the next
level, and that they would need help to do so. EFF provided some financial support but also
served as a non-profit fiscal umbrella. Having released Tor’s code as an open-source project, Dingledine and Mathewson wanted to diversify the funding sources, improve the software and its usability, and consider hiring employees and an executive director. It would be a big help if they didn’t also have to worry about “the 501(c)3 stuff” at the same time. It became clear pretty quickly that EFF would only be able to host Tor for a single year, following an incubator approach. The network had launched in October 2003, the code was released as an open-source project in 2004, and for one year Tor operated under the EFF’s fiscal umbrella while Dingledine and Mathewson continued publishing their research and developing the code. The two cobbled together funding support for themselves as individuals, hiring subcontractors when the workload was too high (phone interview with Andrew Lewman, December 12, 2017).
By 2006, Tor had matured significantly as a project and it was time to spin out from the
EFF, particularly as the EFF was under budget strain at the time. Dingledine, Mathewson, and
others had a lot of conversations about the relative benefits of being a nonprofit organization or a
for-profit company. Dingledine and Mathewson were intent on the nonprofit model, which they
saw as a way to prevent shareholders and investors from “calling the shots” and warping the project’s mission of protecting online anonymity (phone interview with Andrew Lewman, December 12, 2017).
Incorporation as a non-profit
The Tor Project was incorporated as a Massachusetts 501(c)(3) non-profit organization in
December 2006 (Massachusetts was selected because Dingledine, Mathewson, and other
contributors were living in the Boston area). Founding legal documents for The Tor Project, Inc.
list Roger Dingledine as President and Shava Nerad as Executive Director, Treasurer and Clerk.
The first Board of Directors comprised Ian Goldberg, Andrew Lewman, Rebecca MacKinnon (who would later be my supervisor at Ranking Digital Rights), Nick Mathewson, Wendy Seltzer, and the EFF’s Fred von Lohmann. For a time, the project had
no funding, and both Dingledine and Mathewson developed it on a voluntary basis as best they
could, with Mathewson having to take “day jobs” to make ends meet.
It was around this time that Simson Garfinkel, who was then a graduate student at
Harvard, introduced Dingledine to Ken Berman, who was working on ways to make the content
produced by entities under the Broadcasting Board of Governors (BBG) accessible to foreign
audiences, including in countries that censored the internet. The BBG oversees the U.S.
government’s foreign media activities, notably Radio Free Europe/Radio Liberty, Radio Marti,
Radio Free Asia, and other conduits for public diplomacy. Dingledine pitched a funding proposal
that highlighted Tor’s censorship circumvention affordances. Though this wasn’t the purpose for
which Tor was designed, Dingledine knew that this use case — accessing banned content — was
of greatest relevance to the BBG’s work (interview, October 19, 2016, Towson, MD). The Tor Project’s IRS-990 for 2007 lists a $250,000 grant from the BBG’s International Broadcasting Bureau (IBB). The IBB would fund the Tor
Project until 2013, after which the Open Technology Fund (OTF), another BBG entity, took over.
The political history of Internet Freedom funding via the BBG, OTF, and other government
entities is traced in chapter 4 of this dissertation.
By the end of 2007, Nerad had resigned for health reasons, and Dingledine took over as
Executive Director. When the organization received its first State Department grant shortly
thereafter, Dingledine and the rest of the board convinced Andrew Lewman, who had more
business and management experience than the others, to serve as Executive Director. The State
Department emphasized that it wanted the Tor Project to deliver usable software, and not just
research papers. In order to do that, the Tor Project would need an “adult” with management
experience to run the show and ensure deadlines were met. Among the Tor community of the
time, Lewman stood out as a promising candidate. After a BS in computer science, he had spent
10 years as the VP of Engineering at TechTarget, a start-up that (as best I can tell) provides
business intelligence to companies seeking to purchase new software. In contrast, most of the
others had spent their entire careers in academia.
Years earlier, Lewman had started letting friends based in various Asian countries route
their internet connections through his home to circumvent censorship and filtering.
Unfortunately, a friend-of-a-friend sent 23,000 emails through Lewman’s connection in a short
period of time, resulting in Lewman’s ISP kicking him off the network for spamming. He started
looking for circumvention software his friends could use instead of using him “as a VPN,”
finding the fledgling Tor Project in 2003. Tor at the time “was very slow, but it worked.” He
began volunteering as a developer and talking to others involved in the project — there was
already “a whole community” working on the open source project:
I asked why they didn’t release something usable, instead of just source code. Their
response was mostly that it was an alpha project, though there was also an attitude that if
you couldn’t compile code you weren’t worthy of using Tor. Someone else said, “you can
produce binaries yourself,” so I did (phone interview, December 12, 2017).
Lewman, Dingledine and Mathewson eventually realized that they all lived in the Boston
area and decided to meet up for dinner. Lewman described his first impression to me:
They were typical MIT open source people: smart, but with a worldview that is all about
technology and the classic MIT bubble. Everything is about MIT, and computer science
ruled the world for them (phone interview, December 12, 2017).
This belief in “programmer supremacy,” as former Tor director of development Karen
Reilly put it, would turn out to be Tor’s original sin.
Funding Tor
This section focuses on the internal dynamics of the Tor Project between 2007 and 2015.
It examines the non-profit organization’s institutional history in greater detail by analyzing the
sources of the Tor Project’s funding over the years and considering the Tor Project’s management
under the leadership of Roger Dingledine and Andrew Lewman, and how that has impacted the
organization. I conclude that the Tor Project, with all its flaws and contradictions, is what
happens when computer scientists try to run a nonprofit whose work puts it at the epicenter of
geopolitical tensions they are ill-prepared to deal with.
The Tor Project’s funding model, which relies extensively on grants and contracts from
U.S. government agencies, has been the object of much controversy over the years. It is also
what initially prompted my interest in the organization: It does seem a bit odd that the federal
government would support an online anonymity tool while many of its national security entities,
notably the National Security Agency and the Federal Bureau of Investigation, act to undermine
that same tool. Roger Dingledine first wrote Tor’s code while he was researching onion routing
as a Naval Research Lab contractor, and the Tor Project continues to receive most of its funding
from the U.S. government. Much of the funding over the years was first granted to a different
organization that then sub-granted it to the Tor Project. From certain vantage points, this
resembles an effort to obfuscate the origins of the funding, akin to money-laundering, and some
see this as further evidence that the Tor Project is tied to the U.S. intelligence apparatus.
In July 2014, Pando Daily published an article by Yasha Levine titled “Almost everyone
involved in developing Tor was (or is) funded by the US government” (Levine, 2014a). The article
argues that “Tor’s original — and current — purpose is to cloak the online identity of
government agents and informants while they are in the field: gathering intelligence, setting up
sting operations, giving human intelligence assets a way to report back to their handlers — that
kind of thing. This information is out there, but it’s not very well known, and it’s certainly not
emphasized by those who promote it.” Levine concludes that “the Tor Project more resembles a
spook project than a tool designed by a culture that values accountability or
transparency” (Levine, 2014a). Levine’s article sparked controversy in the digital rights
community, and in November 2014 Pando Daily published a rebuttal from journalist Quinn
Norton explaining that while “following the money” was good journalism, even the U.S.
government’s money couldn’t corrupt the mathematics behind Tor. For Norton, Tor was a rare
example of “stars align[ing] between spooks and activists and governments and
anarchists” (Norton, 2014). Regardless, Levine and others remain convinced that Tor contains
“backdoors” through which the U.S. government can intercept the web traffic of precisely the
users it wants to target — because they have been lulled into a false sense of security by Tor’s
promises of anonymity. The debate between Levine and Norton, and the subsequent feuds
(largely conducted over Twitter) between the Pando Daily crew and certain members of the Tor
community were what first drew my attention to the Tor Project. (It should be noted that individuals on both sides of the feud behaved reprehensibly, hurling insults and doxing threats; see https://pando.com/2014/12/09/clearing-the-air-around-tor/, https://pando.com/2014/12/10/its-time-for-tor-activists-to-stop-acting-like-the-spies-they-claim-to-hate/, https://pando.com/2014/11/14/tor-smear/, and https://motherboard.vice.com/en_us/article/ae3n8p/that-time-a-tor-developer-doxxed-a-troll.) Might there be more there than meets the eye, I wondered?
My research revealed that the reality is much more prosaic, bringing to mind Hanlon’s
razor: don’t attribute to malice what can be explained by incompetence. Contra Levine’s conclusion, the Tor Project, with all its flaws and contradictions, is what happens when computer scientists try to run a nonprofit whose work puts it at the epicenter of geopolitical tensions they are ill-prepared to deal with.
As discussed earlier in this chapter, Tor was the fruit of collaboration between computer
scientists from MIT and the U.S. Naval Research Lab, an outfit whose mode of operation more
closely resembles a university research center than a unit of the military. Notwithstanding the
public relations spin that would later be mapped onto Tor’s origin story, it seems that onion
routing wasn’t invented to respond to the concrete problem of protecting military
communications, but rather stemmed from basic research, from scientists pursuing knowledge
for knowledge’s sake — though protecting military communications was likely the internal
justification for the budget allocation within NRL. From the very beginning, Tor was designed to
work for anyone — otherwise, it would work for no one. Paul Syverson, Roger Dingledine and
Nick Mathewson were all computer science researchers with little experience of management
(non-profit or otherwise), policy or politics when they first conceived of onion routing, and they
seem to have been at least as motivated by the intellectual challenges of the work as they were by onion routing’s practical applications.
The next few phases of the Tor Project’s evolution seem to have resulted from
happenstance or outside forces, and not from any master plan orchestrated by Dingledine and
Mathewson. It was Fred von Lohmann who sought them out, in service of the EFF’s legal
strategy regarding online anonymity in the context of peer-to-peer file sharing and copyright (phone interview, October 17, 2016).
When the fledgling Tor Project left the EFF’s fiscal umbrella, its leadership cobbled together an
organization as best they could: board members were mostly academic computer scientists, and it
seems that the funding relationship with the Broadcasting Board of Governors happened almost
by accident. When the organization’s first executive director fell ill, her successor, Andrew
Lewman, was selected — apparently — on the sole basis that unlike the others, he had worked in
the private sector and “knew about business” — and more specifically, that having worked for an
online advertising company, he knew how to help Tor users evade commercial surveillance.
Interestingly, Lewman eventually left Tor to work for a series of VC-funded security consulting
firms, where his work seems to focus on helping various actors track people and information on
the “dark net,” including Tor Onion Services.
For most of its history, the Tor Project applied for academic research grants in computer
science because those were the types of grants that Dingledine in particular knew how to
successfully compete for (interview, October 19, 2016, Towson, MD). As it turns out, the National Science Foundation and the Defense Advanced Research Projects Agency (DARPA) are among the most important funders of that
type of research. It’s only natural that the Tor Project would have sought these funders out. I have
uncovered no evidence that the Tor Project received these grants for any reason other than being
the most suitable group to carry out the planned research.
Claims of “money-laundering” through the use of “pass-throughs” are likewise
overwrought. For observers like Levine, these layers of bureaucracy point to a desire to obscure
the financial flows and thus camouflage the relationship:
In 2007, it appears that all of Tor’s funding came from the federal government via two
grants. A quarter million came from the International Broadcasting Bureau (IBB), a CIA
spinoff that now operates under the Broadcasting Board of Governors. IBB runs Voice of America and Radio Marti, a propaganda outfit aimed at subverting Cuba’s communist
regime. The CIA supposedly cut IBB financing in the 1970s after its ties to Cold War
propaganda arms like Radio Free Europe were exposed.
The second chunk of cash — just under $100,000 — came from Internews, an NGO
aimed at funding and training dissident and activists abroad. Tor’s subsequent tax filings
show that grants from Internews were in fact conduits for “pass through” grants from the
US State Department (Levine, 2014a).
However, in many cases these “pass-throughs” are set up for specific legal or tax reasons.
For example, certain government grants are restricted to groups headquartered in the country in
question, or require applicants to form a consortium with other organizations. The Tor Project
would need to collaborate with an EU-based group in order to compete for grants from the
European Union. And across the U.S. federal government, grant applicants are encouraged to
“bundle” the activities of several organizations working toward the same broad goal into a single
application. The partners designate a primary applicant who will receive the funds (if the
application is successful) and sub-grant them to partner organizations. The U.S. Agency for
International Development (USAID) in particular has been pushing for such bundles, in part
because it facilitates the involvement of small, local organizations outside the U.S. who would
not be able to manage a large federal grant. A similar dynamic operates with respect to DRL
grants, which have a $500,000 floor (interview, September 27, 2016, Washington, DC).
Federal grantees must submit to onerous accounting, time-tracking, and overhead requirements in order to comply with the Office of Management and Budget’s “Super-Circular” (https://www.whitehouse.gov/omb/grants_circulars/), which codifies the administration of federal grants. New organizations, small teams,
and collectives of volunteer anarchists do not have the institutional capacity to apply for or
manage such grants. Bundled grants allow smaller projects to compete for funds by working with
more established organizations. Over time, deliberate skills transfer between organizations
allows the smaller groups to develop the needed institutional capacity (interview, September 27, 2016, Washington, DC). For example, the Tor
Project received DRL funding via a sub-grant from Internews for six years before receiving its
first direct DRL grant in 2013. Likewise, DARPA funds the Tor Project’s research into enabling
internet access for US military personnel and allies in the field via a grant to the Stanford
Research Institute (SRI International), a non-profit that serves as an “R&D center for hire” for
numerous corporations and government agencies. The Tor Project’s portion of the grant was
likely bundled with other initiatives relating to the same broad topic. (I have not been able to access the full text of grant proposals or award letters.)
Moreover, such “pass-throughs” can provide a buffer that protects the project from
potential attempts at influence from the U.S. government, or the perception of a conflict of
interest, contra Yasha Levine’s repeated claim that Roger Dingledine and other Tor developers
are “on the DoD’s payroll” or draw their salaries “from Pentagon/State Department sources at
Tor” (2014a). Bundling grants can also help smooth differences in organizational cultures
between funders and grantees. A Tor Project insider based in Germany remarked that he
95
dislikes “how funders think they should have a say in how [the organization] is run,” pointing to
the “cultural clash” between the Tor Project and the funders — meaning between the crypto-
anarchists of Tor and the “suits” in Washington and corporate America (interview, March 2015, Valencia, Spain). I can certainly attest to
the potential for a “culture clash,” having spent a lot of time with the crypto-anarchist coders, the
diplomats in pinstripes, and everyone in between — notably the “cultural brokers” at Tor and
other organizations whose role centers on translating and connecting these two worlds. For the
six years that Internews “bundled” the grant that included funding for the Tor Project, that
organization’s staff served as “cultural brokers” between the Tor Project and the State
Department, and also gradually taught the Tor Project how to apply for, manage, and report on
large government grants of their own.
One final anecdote illustrates how just a bit of research can reveal that something that
looks suspicious at the outset is actually rather mundane. In both 2012 and 2013, the
International Broadcasting Bureau (part of the BBG) contracted with the Tor Solutions
Corporation (TSC), a company wholly owned by the Tor Project. The IBB had issued a Call For
Proposals (CFP) for which only small businesses were eligible. The language had been intended
to disqualify “the Boeings of the world,” but, as Dingledine put it, “it turns out that nonprofits
are not small businesses.” Thus the TSC was created for the sole purpose of receiving this
contract. The TSC was legally dissolved on April 4, 2016, as it no longer served a purpose and cost the Tor Project money in various fees (interview, October 19, 2016).
Drawbacks of USG funding
If U.S. government funding is the path of least resistance for the Tor Project, it is also one
that comes with a number of drawbacks. First, because both computer science research grants
(DARPA) and internet freedom grants (DRL) are structured around sets of predetermined
deliverables, the Tor Project is unable to pivot its priorities as its understanding of the tool’s needs evolves. Such nimbleness is very important for software development. One participant told
me,
Software development is often unpredictable. Committing to long cycle times means you’re stuck with realizing that your plans are bad without being able to decide to change tack (interview, March 9, 2017, Valencia, Spain).
Nor did the Tor Project have an “unrestricted” pool of money, which is considered a best
practice in nonprofit management. As the same participant explained,
Sometimes users rely on features that lack funding for development or maintenance (interview, March 9, 2017, Valencia, Spain).
Second, Tor has been dealing with the reputational fallout of being associated with the
U.S. government, which was particularly intense after the Snowden revelations. Levine’s
critiques fall under that umbrella, as does the persistent online speculation about the true nature
of the Tor Project’s relationship to the federal government.
A third drawback is that some members of the Tor community believe that very few of
the grant officers within funding agencies have the technical ability to understand the projects
described in grant applications, and that funding agencies have a bias for training programs and
other initiatives that grant officers can comprehend (though former DRL official Ian Schuler disagreed with that assessment; field notes, March 2015, Valencia, Spain).
And most importantly, reliance on a single funding source left the organization in a
precarious position should the funds dry up for any reason — a very real concern, especially in
the aftermath of the 2016 election. The election of someone like Trump was unprecedented, and
for several months it was anybody’s guess what would happen to Internet Freedom funding.
Everyone I have spoken to about Tor, from March 2015 until now, was unanimous that funding
diversification was the most important challenge for the organization’s long-term survival.
Different reasons were emphasized by different participants, ranging from resilience to any one
stream disappearing, to optics “tainting” the project in countries like China and Iran (field notes, 2015-2018). The
organization has been actively pursuing this goal since before I began my fieldwork. For
example, in early 2015 the organization received an $82,765.95 grant in unrestricted funds from
Reddit, following a vote from Redditors.
Organizational (mis)management
My research corroborates the aphorism that you should never attribute to malice what can
be explained by incompetence. Indeed, I found that many of the perplexing things about Tor that
resembled conspiracies from the outside could be explained by poor organizational management
and an ideology of “programmer supremacy.” My aim here in airing what some might consider
“dirty laundry” is not to smear the Tor Project or its leadership. Rather, my intention is to
demonstrate the importance of sound nonprofit management by showing how poor management
grounded in an ideology of “programmer supremacy” creates a propitious environment for a
toxic individual — namely Jacob Appelbaum — to operate and drive valuable staff away. This
may seem blindingly obvious, yet the professionalization of activism and digital rights
technology development is a contentious issue in the digital rights space, as we saw in Chapter 3.
Previous sections of this chapter traced the Tor Project’s “origin story” from the birth of
onion routing at the Naval Research Lab, to an incubation period at the Electronic Frontier
Foundation, to incorporation as a nonprofit and the first U.S. government grant in 2007. We now
turn to the organization’s growth from 2007 to 2015, during which time Andrew Lewman was the executive director of the Tor Project. The consensus among my interview participants is that
Lewman was technically proficient but an ineffective manager of an increasingly complex nonprofit (field notes, 2015-2018). Oddly, he consistently refers to the Tor Project as a “company” in various online fora, signaling a lack of understanding that non-profits cannot, and should not, be run like profit-seeking businesses. This may reflect some degree of wishful thinking on his part. As one
participant put it,
When I heard what he’s doing now [helping defense contractors track Tor users on the
“darkweb”], he’s very much a capitalist. He always wanted to make money, and to set up
a money-making arm of Tor (phone interview, October 10, 2017).
As Executive Director, Lewman “treated Tor as a start-up.” To his mind, the only
difference between a nonprofit and a for-profit was “what you do with the money at the end of
the year,” and he believed that “nonprofits should be profitable.” He quickly realized that
running the Tor Project was different from his previous experience in two major ways. First, he
had to figure out how to properly manage and incentivize volunteers who didn’t want to be paid
with money:
In tech startups, people are driven by either money or career path [career advancement].
In the nonprofit world that wasn’t it — how do you reward people in ways that were non-
financial?
[…]
We committed to getting things done by specific deadlines, which was hard to enforce
with volunteers. It took time to figure out how to incentivize people to care about the
project as a whole, rather than just the bit they were working on. Every six months, we
would have these developers meetings. With any other group you would call it a
company meeting, but that seemed too corporate. We would pay for people to travel to
wherever it was, for their hotel, etc. — that worked just fine with a lot of people. Some
people volunteered for Tor out of passion and didn’t want any recognition, they just
wanted to work on the code. You just had to let them do their thing and not count on them
sticking to a schedule. Others just wanted to be thanked publicly. Others wanted the
professional experience. As Tor grew in the public mindshare, it seemed to have this self-
fulfilling loop where the people who wanted to work with us were committed, wanted to
be committed to a successful project (phone interview, December 12, 2017).
An even greater challenge concerned the political dimensions of Tor’s work, for which
Lewman told me he was completely unprepared: “State Department funding meant that
everything would be political. I had to learn real quick how to be more of a politician, and to be
more diplomatic with different people, different cultures” (phone interview, December 12, 2017). Lewman described his experience
interacting with the Swedish Development Agency, SIDA, which had given the Tor Project a
grant:
We received funding from the Swedish government [SIDA]. That process was very
different from US funding. The Swedes were incredibly precise, and what worked in the
US didn’t work there. I basically had to start learning Swedish to extend the olive
branch, to show we were serious. I went there, met them, talked to them. They were not
into any sort of fame or fortune — it was more like, “How are we going to help the world
be a better place? Can you itemize that in detail and commit to milestones?” We also had
people coming from different countries (including State Department export control
countries, like Iran), who want to learn about Tor, how to use it, etc. I had to be careful
about the implications of that as a US citizen, about the legal repercussions (phone interview, December 12, 2017).
As I interviewed Lewman, it became clear to me that he never quite mastered the political
dimensions of leading Tor, particularly the Internet Freedom agenda funding that sustained the
organization:
Tor quickly became well known for helping criminality on the “dark web.” The Internet
Freedom funders had an off-the-record hearing, where I was questioned by senators and
their aides about things like “How long have you been building child-abuse technology?”
State did their best to prepare me, but it was rough. One senator was from New Jersey,
another maybe from Virginia? In talking to them and their aides, I realized they were both
devout Catholics, and that their goal was to free China and Iran, and let religion flow
there. Then, State warped that mission into “everyone wants access to Voice of America” — it started as freedom of religion (i.e., Christianity, specifically Catholicism), then State etc. turned it into something else. It didn’t really matter to me what the original goal was.
State took a more venture capital approach to funding this [anti-censorship tech]: “let’s
give a little bit of money to a lot of organizations and see what works.” For me, the
religion or propaganda motivation didn’t matter, it was about advancing the state of the
art [of technology] and free speech. That was one of State’s justifications [free speech],
which is more palatable to the American public than the religion thing (phone interview, December 12, 2017).
Lewman is referring to a classified briefing (not “off-the-record”), and misunderstands
the history of the Internet Freedom agenda, which I detailed in chapter 4.
Over the course of my fieldwork, I heard complaints about his lackadaisical approach to
record-keeping, his communication style, and the vague, confusing instructions he gave to staff
and volunteers. I also heard a handful of claims (framed as “repeating something I have heard”)
that Lewman might have either embezzled funds from the organization, worked to prevent the
Tor Project from being “too successful,” or both. I have not come across any conclusive evidence
for these claims myself. One participant described Lewman as “neutral to a fault,” saying that “if
Andrew took a position [in a debate], it was in favor of neutrality” — in other words, he
consciously elected not to take a position.
Roger [Dingledine] could identify a right and wrong in his head a little more clearly.
Andrew made a conscious decision not to identify right and wrong — I thought that was
a bit strong given his strong stance regarding violence against women (phone interview, October 10, 2017).
Most of all, I was told time and time again that Lewman, along with the other men at the
helm of the Tor Project, had consistently ignored, minimized, or excused the behavior of star
developer Jacob Appelbaum, whose pattern of bullying, sexual harassment, abuse and assault
would nearly tear the Tor community apart (field notes, 2015-2018). The more I learned about Lewman, the more his
strong public stance opposing violence against women seemed at odds with his failure to
recognize and put an end to Appelbaum’s behavior. Descriptions of his “neutrality” and extreme
conflict-avoidance provide a partial explanation. As I understand it, Lewman dealt with
Appelbaum’s abrasive personality and poor treatment of his colleagues by distancing himself
from him, something that is easy to do in a globally distributed virtual organization.
Lewman seems to harbor some resentment for the expectation that he, as Executive
Director, would be responsible for dealing with personnel issues:
I had to deal with all that crap myself, no one else wanted to do it. At my previous job,
where I managed over 100 people, I gave people contracts that could be ended or not
renewed if things didn’t work out. For volunteers that were problem children, I’d have to
have a conversation with them, try to figure out how to resolve the issue, but there were
some times where you had to say “this attitude isn’t working.” I would try to build
consensus among the staff about whether a person was helpful or not. Everything went
through the Board [of directors] — I had to have all the documentation, be prepared for
any public fallout, etc. The Board had the authority to overrule me as the ED, but I can’t
think of a time that happened. My management style is to build consensus, understand
why people disagree, etc. You have to wait until you have all the facts — you can’t act on
rumors.
In Lewman’s telling, this impacted his ability to handle “the Jake Problem” as he would
have liked to:
Everything went through the Board — a lot of people [within Tor] didn’t know how to
deal with it [Appelbaum’s increasingly toxic behavior], they had never encountered
something like this before. In my experience, when an employee is toxic, you can’t cure
them, you have to cut them out. It’s like a cancer that there’s no chemo for. But it was a
consensus-driven organization, there were enough people who thought he could be
handled that my hands were tied. It was an “all the mice know something needs to be
done, but no one wants to be the one to do it” kind of situation. Jake turned people into
flying monkeys by helping them meet Assange, meet Snowden, get them media attention,
all kinds of stuff like that — I was surprised by the people who fell for it (phone interview, December 12, 2017).
Notwithstanding Lewman’s arguments about the Tor Project being a “consensus-driven
organization,” as Executive Director he shared responsibility with the Board of Directors
(including Dingledine and Mathewson) for following up on “rumors” concerning Appelbaum’s
abuse of his coworkers. In the assessment of Shari Steele, who succeeded Lewman after a long
career at the Electronic Frontier Foundation, everyone’s management failed: they should have
dealt with Appelbaum in a timely manner, and didn’t (phone interview, December 18, 2017). Steele’s investigation found indications
that the leadership knew that this was happening, and my fieldwork confirms that many others
inside the organization — notably former Director of Development Karen Reilly — tried to raise
the alarm, and were punished for it (field notes, 2015-2018).
Tor and Uncle Sam
This section considers the institutional relationships between the Tor Project and various
government agencies, as well as the bureaucratic politics within and between government agencies
with respect to Tor and to digital rights technology more broadly. The U.S. federal government
does not speak with a single voice when it comes to Tor, nor are its actions internally consistent.
Whether by design or not, this is an example of constitutional checks and balances at work
within the executive branch. Computer science research, internet freedom, crime prevention and
prosecution, and national security imperatives drive different agencies’ agendas and actions.
Actors negotiate trade-offs and compromises through bureaucratic games (Allison & Halperin,
1972) that are difficult to trace without much greater access to government records than is
possible without a security clearance. In spite of this access limitation, I was able to interview a
number of current and former government officials about their and their agencies’ relationship to
the Tor Project.
The previous sections detailed the organizational history of the Tor Project,
demonstrating that the evidence used by some observers to claim nefarious ties to the U.S. intelligence community is more accurately interpreted as evidence that programmers are not
natural nonprofit managers. Nevertheless, the apparent contradictions between various
government agencies’ relationship to Tor (and to online anonymity in general) raise a number of
interesting questions — but the answers are to be found in an analysis of the federal bureaucracy,
not an examination of the Tor Project itself. The complicated relationship between Tor and the
U.S. government reflects the divergent relationships that various government entities maintain
with freedom of information, privacy and anonymity. Its main funder, the State Department’s
Bureau of Democracy, Human Rights and Labor (DRL), is primarily concerned with supporting
human rights activists and other dissidents abroad for whom online anonymity can be a matter of
life or death, and is responsible for implementing much of the Internet Freedom Agenda
announced by then-Secretary of State Hillary Clinton in 2010 (Shirky, 2011). Certain branches of
the military, intelligence and law enforcement communities likewise use Tor to mask their own
activities, and rely on Tor’s other users to provide a crowd in which to hide. One informant told
me that this was precisely why the U.S. Naval Research Lab released the code for Tor in 2004,
and Dingledine frequently gives trainings to military, intelligence and law enforcement on how
to use Tor (Romanosky et al., 2015). In contrast, other parts of the military and intelligence
community charged with investigating criminal and terrorist suspects view Tor as a digital safe
house where they cannot track their targets.
My first wave of interviews, at the 2015 Circumvention Tech Festival in Valencia,
focused on exploring how members of the Tor community viewed the U.S. government funding.
I heard a number of fairly fantastical claims, mixed in with more sophisticated takes: for
example, one person told me that the Tor Project had, on at least one occasion, turned down
money from another government in order to avoid damaging the relationship with the U.S.
government (field notes, March 2015). However, neither Ian Schuler nor John Napier Tye (both former DRL officials)
could think of a country that would be likely to support Tor and that the State Department would
object to. For example, both Sweden and the Netherlands have supported the Tor Project
financially with the State Department’s blessing. Conversely, there is no plausible scenario where
a U.S. adversary like China, Iran or Russia would give money to an anti-censorship group (interviews, March 16 & 17, 2015, Washington, DC).
Other respondents focused on the ethics of any digital rights group accepting U.S.
government funding, surfacing a variety of perspectives. Some would take cash but not in-kind
donations or staff assistance, while others would welcome fellows paid by State Department or
OTF programs, but not direct grants. There are also plenty of people at the extremes of the
spectrum, either denying any conflict of interest with accepting U.S. government money or
regarding all such grants as tainted. Several Tor insiders seemed to view the funding relationship
with the U.S. government as a zero-sum game. As several European developers put it, “money
doesn’t stink,” and even if the U.S. government can be viewed as an adversary, any financial
flows from the U.S. government to the Tor Project mean that there “is still more money for [us] and less for them” (interviews, March 2015, Valencia, Spain).
In retrospect, what is most striking about this first wave of interviews is how little the Tor
developers and volunteers I interviewed knew about the funding, the Internet Freedom budget
that much of it stemmed from, or the geopolitical stakes involved. They were quite clear on the
importance of anonymous internet access for individual users: “The U.S. Navy will survive
without Tor. The Iranian dissident won’t.” For a time I thought that my informants were perhaps
holding back information or opinions because they didn’t know or trust me. But as I encountered
the same people over and over again, and got to know them better, I realized that they genuinely
had only a superficial understanding of the context in which they operated. Time and time again,
research participants have told me that they can’t wait for me to finish my dissertation so they can better understand their own community and the forces that shape their work (field notes, 2015-2018).
After the Festival, I spent most of 2015 studying for my qualifying exams (and thus
reading a lot of Communication and International Relations scholarship) and investigating the
federal bureaucracy’s relationship to the Tor Project, to online anonymity, to encryption, and to
other dimensions of internet freedom and digital rights. The next section discusses the divergent
objectives and strategies of the U.S. foreign policy and national security agencies with respect to
Tor.
The interagency process: Divergent missions and federal ambivalence
While international relations scholarship traditionally views nation-states as unitary
actors who rationally pursue their own self-interests, much in the same way that individual
human beings do, the bureaucratic politics paradigm focuses on factors at the individual and
organizational levels of analysis (Allison and Halperin, 1972). In this paradigm, the basic unit of
analysis is not the policy outcome but the discrete actions of government, defined as “the various
acts of officials of a government in exercises of governmental authority that can be perceived
outside the government” (Allison and Halperin, 1972, p. 45). Asking why the U.S. government
both funds and targets Tor is to pose the wrong question. It is more useful to first examine
individual agencies’ relationship to the nonprofit and its signature piece of software, before
analyzing the interagency process through which seemingly opposed interests are reconciled —
or not. In this section I consider many of the government agencies that have interacted with the
Tor Project over the years, analyzing how Tor relates to their mission and the actions they have
taken in consequence.
Naval Research Lab
The first federal agency to encounter Tor was the Naval Research Lab (NRL), which
funded Paul Syverson’s initial research into onion routing, and where Roger Dingledine was a
contract researcher when he first wrote Tor’s demo code. Being a computer science research
powerhouse, the NRL has been supportive of Tor. Similarly, DARPA supports Tor because the
group’s research furthers DARPA’s mission, notably secure internet access for military assets in
theater. Other parts of the military do not view Tor in the same light. Randolph Shumaker, who worked at the Naval Research Lab from 1985 to 2002, told me that when he was Superintendent for Information Technology he once received a call from an Air Force colonel, who asked Shumaker whether he knew that NRL was operating a site that allowed secure internet access — Tor. Shumaker did, of course. The colonel demanded that NRL stop its research, which Shumaker declined to do. The colonel threatened to have NRL cut off from certain government data-sharing networks — which it wasn’t on anyway. “This isn’t the last you hear of me,” warned the colonel. Shumaker never heard from him again (phone interview with Randolph Shumaker, June 1, 2015).
Broadcasting Board of Governors and State Department
A few years later, the fledgling Tor Project came into contact with the Broadcasting
Board of Governors, whose mandate to “win hearts and minds” abroad was being stymied by the
growth of online censorship. Tor was conceived as a privacy tool, but its technical affordances
meant that it also circumvented censorship. Funding the Tor Project, then, fit squarely within the
BBG’s mission. It made sense for the Open Technology Fund (OTF) to take over funding Tor
when it was created in 2012. Similarly, the State Department’s Bureau of Democracy, Human
Rights and Labor (DRL) funds the Tor Project (first via Internews, then directly) because its
activities further the Internet Freedom Agenda as mandated by Congress (the political history of
Internet Freedom funding is discussed in chapter 4).
National Security and Law Enforcement
On the other hand, national security and law enforcement agencies encounter Tor as a
threat to their missions. Onion Services pose the greatest problem for law enforcement, as they are where extremist websites, illegal drug marketplaces like the Silk Road, and various other illegal
content can be found. One federal law enforcement agent told me his sense was that his agency
would have no problem with Tor if it weren’t for Onion Services. From his perspective, Onion
Services are the refuge of the illegal trade in guns, drugs, and human beings, and should either be
shut down or subject to a “Terms of Service” policy, similar to social networking sites like
Facebook and Twitter or blogging platforms like Wordpress.
However, this view is grounded in a misunderstanding of how the technology works.
Onion Services don’t actually host content the way that Wordpress or Facebook do, so a Terms of Service agreement would be “about as enforceable as telling people that they can use public roads for whatever they want — except for trafficking cocaine.” Additionally, many digital rights
technology projects use Onion Services as their backbone. For example, they are the cornerstone
of the technology behind the whistleblowing platforms SecureDrop and GlobalLeaks. More
importantly, the consensus within the Tor Project is that the ability to anonymously publish
information, including information that might be illegal within a certain jurisdiction (often
political in nature, or related to the rights of marginalized groups), is a non-negotiable part of the
organization’s mission. As core Tor developer Griffin Boyce put it, “historically, the Tor Project
has been willing to take the public relations hit in exchange for doing something that we all think
is right: giving people an easy way to speak anonymously.” And while everyone within the Tor
Project is unanimous that child porn websites are bad, “pedophiles are terrible webmasters,”
Boyce told me, suggesting that the existence of Hidden Services (as Onion Services were
formerly known) was not the impediment to law enforcement that public discourse would have
us believe (phone interview, March 20, 2015). He argued that the way to fight the sexual abuse of children is through more resources in “the real world,” not by banning a technology that he sees as overall beneficial to
human rights. In an email to the Tor-talk mailing list, Roger Dingledine elaborated:
I should also make clear my opinion on some of the bad uses of Tor. The folks who are
using Tor for child porn, even though they are a tiny fraction of overall Tor users, are
greatly hurting Tor -- by changing or reinforcing public perceptions of what privacy is
for, and also by attracting the attention and focus of law enforcement and making that the
way that law enforcement first learns about Tor. So, fuck them, they should get off our
network, that's not what Tor is for and they're hurting all of us. Now, that doesn't mean we
should weaken Tor, even if we don't want them on the network. That slope is too easy to
slip down, and we must not get into the business of dictating what is acceptable behavior
for Tor users (which would eventually lead to designing technical mechanisms to enforce
these choices) (Tor-talk, 4/19/15).
Nevertheless, in the eyes of some law enforcement and counter-terrorism officials, Tor is
primarily a tool that allows criminals to organize and communicate online with impunity. This
may reflect a lack of differentiation between the Tor Browser and Onion Services, as my
interview with the federal law enforcement agent suggests. Additionally, public servants are
under strong incentives to take any and all actions that might potentially prevent crime,
terrorism, or other threats to public order. This includes mass surveillance, restrictions on the use
of encryption and other circumvention technology, and also offline policing tactics like “stop-
and-frisk.” From a law enforcement perspective, the possibility of preventing or solving a crime
is worth the negative externalities borne by others, especially when the consequence of missing a
threat could cost an individual his or her job, so risk is transferred from the public servant to
unspecified members of the public (Beck, 1992; Mueller, 2010). However, there is no evidence
that these techniques are actually effective at preventing or solving crimes (Schneiderman, 2013;
U.S. Department of Justice, 2015). Moreover, Western law enforcement and intelligence are
comparatively overexposed to “bad” uses of Tor and underexposed to its benign and positive
uses. As Eric Jardine (2015) notes, the social costs of Tor are disproportionately borne by liberal societies, while its benefits tend to accrue to users in repressive ones.
Tor’s privacy-enhancing affordances present a direct challenge for the intelligence
community. The National Security Agency (NSA) in particular has been trying to “break” Tor, as
it stands in the way of the NSA’s mission to “Collect it all, Process it all, Exploit it all, Partner it
all, Sniff it all, Know it all” (Greenwald, 2014). Among the 2013 Snowden revelations was a 49-
page research paper that “portrays a years-long program to defeat what the agency called ‘The
Tor Problem,’ with the agency repeatedly updating its tactics as Tor’s developers made changes
to the network” (Gellman, Timberg & Rich, 2013).
Similarly, former Federal Bureau of Investigation (FBI) Director James Comey has stated
publicly that Tor, encryption and other privacy-enhancing technologies threaten law
enforcement’s ability to prosecute crimes (Comey, 2014; Government Technology, 2014). Tor is
part and parcel of the resurgent policy debate over encryption that, it turns out, was not put to
rest in the 1990s (see Levy, 2001, for background and analysis of the so-called Crypto Wars).
Anecdotal evidence also suggests that the NSA and FBI in particular have attempted to thwart
Tor’s funding. Former State Department official John Napier Tye told me that when he was at
DRL, he received “several” phone calls from officials at the FBI and the NSA asking that DRL
cut off funding to the Tor Project. After consulting with his boss, Tye asked each caller to send
this request in writing for due consideration. None did (interview, March 17, 2015, Washington, DC).
Tor and the geopolitics of information
This section examines the political and geopolitical implications of Tor by focusing on
three key affordances: censorship circumvention, anonymous internet access, and anonymous
online publishing. As we have seen in the previous section, the U.S. federal government does not speak with a single voice when it comes to Tor, nor are its actions internally consistent: different agencies’ missions drive divergent agendas and actions, with trade-offs negotiated through bureaucratic games (Allison & Halperin, 1972).
We now turn to an analysis of how these three key affordances affect governments in general — not just the United States — and the international system as a whole.
Censorship circumvention
As we saw in chapter 1, internet censorship by technical means was the first step that
governments like China and Singapore took to control the flow of information across their
borders. This first generation of information controls (Deibert et al., 2008, 2010, 2012; Deibert,
2016), when used alone, is fairly trivial to circumvent, as evidenced by the prevalence of tools
like CGI Proxy, Lantern, Psiphon (see chapter 5), and of course, Tor. Elsewhere in this chapter I
have shown that while neither Tor nor onion routing in general were developed with censorship
circumvention in mind, that is the affordance that has earned the Tor Project much of its grant
funding. Indeed, the Broadcasting Board of Governors’ International Broadcasting Bureau
funded Tor in the hopes that the technology would help the BBG’s content-producing units (i.e., Radio Free Europe, Radio Free Asia, etc.) reach their intended audiences in China, Iran and
elsewhere.
In most liberal democracies, censorship circumvention is politically anodyne to the point
of irrelevance. In places where the internet is not subject to technical censorship, it is hardly
controversial to make it possible for others to bypass such censorship. To be sure, many
institutions within liberal democracies impose technical limitations on what users of their
internet connections can access: school districts and many workplaces filter pornography, for instance. (Interestingly, a number of filters that come preinstalled with certain networking equipment block access to the Tor Project’s website, but not to the Tor network itself; this is the case for at least one coffee shop where I wrote parts of this dissertation, where I was able to access the Tor Project site by first connecting to my university VPN.) But even in those cases, the limitations are framed as “it is not acceptable for you to
use our resources to access this content,” not “it is unacceptable for anyone in the country to
access this content at any time.”
In contrast, governments that practice variants of networked authoritarianism
(MacKinnon 2011, 2012; Maréchal, 2017a) see online content itself as a threat (Nocetti, 2015).
As discussed in the previous case study, Iran’s leaders see the country as the target of a “soft
war” led by the United States, the ultimate goal of which is regime change (Alimardani & Milan,
2017; Price, 2017). This is far from an irrational belief: in an interview, a former State
Department official recalled a meeting about Iran during which someone said:
Our policy isn’t regime change, it’s that we want the regime to change — but we don’t think it can change (interview, March 17, 2015, Washington, DC).
Separately, Daniel Sepulveda — who was then Deputy Assistant Secretary in the Bureau
of Economic and Business Affairs, and U.S. Coordinator for International Communications and
Information Policy — volunteered that internet freedom was “totally about regime change” (interview, March 16, 2015, Washington, DC).
This was an unusually candid confirmation of what internet freedom’s critics have long asserted
(see chapter 4). But numerous interviewees from other federal agencies and from other units
within the State Department emphasized other motivations and strategic objectives behind the
Internet Freedom agenda such as human rights and the promotion of U.S. economic interests,
notably market access for American tech companies (see Powers & Jablonski, 2015). The
reasons for different agencies’ endorsement of the Internet Freedom agenda — of which
censorship circumvention is a core tenet — appear to be as varied as their attitudes toward Tor
(see Chapter 3).
Anonymous internet access
As we have seen, onion routing was first developed by computer scientists at the U.S.
Naval Research Lab, both as a result of basic science research pursued for its own sake and as a
way to protect the communications of U.S. military personnel abroad. The code that would later
become Tor itself was written by MIT computer scientist Roger Dingledine, who was then an
NRL contractor, in his spare time. It was further developed by Dingledine and his colleague Nick
Mathewson while the fledgling Tor Project was under the Electronic Frontier Foundation’s fiscal
umbrella. As seen earlier in this chapter, the EFF had identified the Tor Project as the most
promising candidate for an online anonymity tool that would offer a more compelling narrative
than file-sharing applications like Napster and LimeWire did.
The Tor Project’s public presentations, website, and other publications emphasize the
diversity of use cases for online anonymity: protection from “unscrupulous marketers and
identity thieves,” researching sensitive topics like those related to health or sexuality, law
enforcement conducting undercover operations online, military assets in theater, and avoiding
corporate tracking on the internet. In liberal democracies, accessing the “clear” or regular
internet in an anonymous fashion is largely uncontroversial — it is only the “Dark
Net” (discussed in the next sub-section) that is controversial. As one American law enforcement
officer told me, “if it weren’t for Hidden Services [now branded as Onion Services], my agency
would have no problem with Tor."[124]

[124] Interview, March 18, 2015. Washington, DC.
That is not the case in more authoritarian contexts. While it can be difficult to parse the
reasons for opposing Tor usage, governments that police the content that their citizens access
online by filtering or blocking certain sites often track users who seek to circumvent those
barriers as well. Network observers (such as the ISP or security services) cannot tell what site a Tor user is visiting, but they can still tell that the person is using Tor, because Tor traffic has a recognizable signature that distinguishes it from "regular" traffic. Pluggable transports "transform" the network traffic between the
local client and the bridge node through which the connection enters the Tor network so that it
looks like something other than Tor traffic. For example, a Pluggable Transport called
SkypeMorph "transforms Tor traffic flows so they look like Skype video."[125]

[125] https://www.torproject.org/docs/pluggable-transports.html.en
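For concreteness, the client-side configuration involved is quite small. The illustrative torrc sketch below uses obfs4, the transport most widely deployed as of this writing, rather than the SkypeMorph research prototype; the bridge address, fingerprint, and certificate values are placeholders, not a real bridge:

    # torrc: route the Tor connection through an obfs4 bridge so that, to a
    # network observer, it no longer carries Tor's recognizable traffic signature.
    UseBridges 1
    # Tell Tor how to launch the transport binary (the path varies by system).
    ClientTransportPlugin obfs4 exec /usr/bin/obfs4proxy
    # A bridge line as distributed by bridges.torproject.org; the address,
    # fingerprint, and cert below are placeholders.
    Bridge obfs4 192.0.2.1:443 0123456789ABCDEF0123456789ABCDEF01234567 cert=PLACEHOLDER iat-mode=0

The design point is that a censor must now either identify each unlisted bridge individually or block traffic it cannot classify, both of which are far costlier than simply blocking the public list of Tor relays.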
Anonymous online publishing
We now turn to Tor Onion Services, the anonymous online publishing system formerly
known as Hidden Services. As described earlier in this chapter (see “The Basics”), Onion
Services allow users and “services” (which include websites, blogs, online marketplaces, chat
services, and more) to connect without the parties knowing each other’s IP addresses. It is not a
“platform” in the way that Facebook, Twitter or Wordpress are, as the published content does not
reside on servers controlled by the Tor Project. There is no mechanism for the Tor Project to
control what is available through Onion Services.
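Mechanically, publishing an onion service takes only a couple of lines of Tor configuration on the publisher's own server. The sketch below is illustrative, with placeholder paths and ports; note that the configuration keywords retain the older "hidden service" naming:

    # torrc: publish a local web server as an onion service. On first start,
    # Tor generates the service's keys and .onion hostname in this directory.
    HiddenServiceDir /var/lib/tor/my_onion_service/
    # Expose virtual port 80 at the .onion address, forwarding to a web server
    # that listens only on localhost; the server's real IP address is never
    # revealed to visitors, and visitors' IP addresses are never revealed to it.
    HiddenServicePort 80 127.0.0.1:8080

Because the rendezvous happens inside the Tor network, neither side ever learns the other's network location, which is precisely why the Tor Project has no technical means of policing what is published.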
As Robert Gehl details in his forthcoming book (2018), Onion Services are what many
people think of when they talk about the “dark web,” which Gehl defines as “Web sites built with
standard Web technologies (HTML, CSS, server-side scripting languages, hosting software) that
cannot be viewed with a standard Web browser, such as Firefox or Chrome” (p. 11 of 2017 draft,
shared with me by the author). Onion sites must be accessed using specialized software (the Tor
Browser or the Tor2Web Firefox plug-in in this case), as must other dark web sites on the Freenet
and I2P networks.
Contrary to what media hype and moral panics would have us believe, the Dark Web
contains mirrors of popular “clear web” sites like Facebook and ProPublica and anodyne sites
about cats or where users can anonymously play chess, alongside political sites that are banned
from the “clear web” in the publishers’ country of origin, and — yes — fora where drugs, guns,
criminal networks, images of child sexual abuse, and “terrorist” content can be found. “Location-
hidden services” or “hidden services” were rebranded as Onion Services as a way to distance Tor
from the negative connotations of words like “dark” and “hidden.”
The Tor insiders I interviewed did not minimize the gravity of the sexual abuse of
children, though they disputed claims that Tor truly hampered law enforcement (they were
markedly less concerned about drug sales and other controversial content). Core Tor developer
Griffin Boyce told me that "pedophiles are terrible webmasters," who inevitably "fail at opsec" in some way.[126] This is how Ross Ulbricht, the "Dread Pirate Roberts" behind the original Silk
Road online marketplace for illegal drugs, was caught by the FBI. He used the same handle
across a variety of sites, including at least one that linked his online persona to his IP address and
thus to his offline identity (Gehl, 2018).

[126] Phone interview, March 20, 2015.
In addition to arguing that Hidden Services makes it harder but not impossible for law
enforcement to identify “legitimate” criminals, my research participants disputed that criminals
really relied on Tor all that much. As Dingledine has pointed out in a variety of fora, criminals
have many more tools at their disposal than “legitimate” Tor users do:
The argument varies by type of criminal, but for the "what about the terrorists" side, my
argument goes something like:
Scenario 1, I want to build a tool that handles millions of people, that will work for the
next year, and that I can tell you all about so it can get peer review and people can trust it.
That's the Tor problem.
Scenario 2, I want to build a tool that works for twenty people, for the next two weeks,
and I'm not going to tell you about it.
There are so many more ways to solve scenario 2 than there are to solve scenario 1 (Tor-
talk listserv, 6/7/2015).
Finally, Onion Services provide the technical backbone for a number of rights-enhancing
projects like the whistle-blowing platforms SecureDrop and GlobaLeaks, which are used by news
organizations around the world to solicit tips from whistleblowers while preserving their
anonymity. Women on Web, a Dutch non-profit that helps women obtain contraceptive and
abortion information and services in countries that limit reproductive rights, hosts much of its
technical back-end as an Onion Service to protect the privacy of its beneficiaries and of its
staff.[127]

[127] Field notes, August 14, 2015. Chaos Communication Camp, Zehdenick, Germany.
Tor today: 2015-2017
The previous section described the Tor community that I first encountered in March 2015
— though I didn’t know any of that at the time. Many of the questions I asked during that first
bout of fieldwork now seem hopelessly naive, though I realize they were a necessary first step
for this project. As it turns out, 2015 was both the year I started my research on Tor and the year
that the organization’s structural flaws came to a head.
Long-time executive director Andrew Lewman resigned in April 2015, a few weeks after my first trip to Valencia. Things had not been going smoothly for a good while, so his departure did not come as a surprise. Roger Dingledine stepped in as Interim Executive Director, as he had years earlier.[128]

[128] Interview, October 19, 2016. Towson, MD.
Dingledine knew that Tor needed a stronger board of directors, as the organization had
prioritized technical expertise over experience overseeing complex organizations and developing
organizational infrastructure. For years, the Tor Project had been using the developers to fill
those gaps, but it was now clear that the organization desperately needed other kinds of
professional expertise: press and media relations experts, a project manager, dedicated
administrative assistants, human resources professionals, accountants (or, as Dingledine put it,
“people to make sure you fill out your financial forms correctly”). The Tor Project’s leadership
realized that putting the right people and processes in place — and finding the money to pay for
them — would require a world-class Executive Director who understood technology, could
manage people, and knew how to fundraise and get the organization on a sustainable footing.
Dingledine, along with board members Wendy Seltzer and Julius Mittenzwei, led the search
committee. It soon dawned on them that their dream candidate existed: her name was Shari
Steele.
Steele was the long-time executive director of the EFF, who had recently retired and
followed her husband from San Francisco to Seattle. When the search firm engaged by the Tor
Project reached out to her, she reportedly laughed off the idea. The search firm brought in other
candidates for initial interviews, but their performances were lackluster. The truth was that there
were not many people in the world who could do the job: managing a mission-driven, globally-
distributed non-profit that was overly reliant on government grants, whose work was highly
technical and more than a little controversial, and with a brewing personnel maelstrom centered
around one of the organization’s most high-profile staff, was not a mere day job. Dingledine and
his colleagues needed someone who understood that, and who was up to the task. After interviewing a few "OK candidates," board member Wendy Seltzer eventually managed to get Steele on the phone, and emphasized that the search committee was desperate to find "someone just like her." As it turns out, six months of retirement had been enough for Steele; with her kids back at school after summer break, she was bored and decided to throw her hat in the ring.[129]

[129] Interview, October 19, 2016. Towson, MD.
She got the job, of course. When she started in December 2015, Steele found an
organization in complete disarray at the operational level. The Tor Project desperately needed to
diversify its funding sources, but first, Steele needed to get the organization itself on an even
keel. Her first hires were an administrative assistant, a director of human resources, a grant
writer, and a finance and grants manager. Steele was surprised to find that there was no support
system for the dozens of people — staff and volunteers alike — who worked on Tor, not to
mention Tor’s users. Moreover, she had never worked with an open source community before,
and was surprised by the way this affected her ability to make decisions. And to cap it all off, she
soon realized that “there were some personnel issues that were bigger than [she] anticipated, that
took up a lot of [her] and other people’s time and energy.” She was, of course, referring to Jacob
Appelbaum, “the sexual predator in our midst.” Steele nevertheless managed to achieve her goals
for the first half of 2016, but “it definitely sapped a lot of time and energy.”
Diversifying the Tor Project’s funding sources was essential for three reasons. First, the
organization’s reliance on a single funder for the bulk of its budget meant that if the U.S.
government stopped funding Tor for any reason, the organization would be unable to make
payroll and could be forced to shut down. This existential threat further imperiled the
constellation of projects and tools that relied on Tor, including the SecureDrop and GlobaLeaks
whistleblower platforms that media organizations around the world rely on to receive anonymous
tips. Second, the deliverable-focused structure of U.S. government grants meant that the
organization lacked unrestricted funds to pursue its own priorities. And third, the association
with the U.S. government had caused substantial reputational harm to the organization by providing fertile soil for any number of conspiracy theories about double agents, secret NSA backdoors, and more.
Steele’s other top priority was making the Tor community more safe, across a number of
dimensions. She reviewed and improved health insurance and other benefits for Tor Project
employees, converted several contractors to employee status (which was complicated because
some of them lived outside the U.S.) — and, of course, dealt with Jacob Appelbaum.[130]

[130] Interview, October 19, 2016. Towson, MD.
“The Jake Problem”
Figuring out how to address the numerous controversies tied to Jacob Appelbaum has
been one of the most vexing conundrums in this project. In the early stages of data collection I
was content to simply let people tell me whatever they wanted to share, expecting to hear claims
about the propriety (or not) of accepting U.S. government funding. As it turned out, many of my
early interviewees wanted to talk about Jake Appelbaum.
The more I learned about Appelbaum, the more I wanted to just stay away and write
about the macro-level politics of Tor. Several trusted advisors urged me to stay away from the
topic. “GamerGate” was going on at the time, and female researchers in particular shared their
concern that I might end up at the receiving end of a “flame war” or even get “doxed.” It seemed
that investigating the Tor community’s sordid internal drama could only end poorly for me, and
had very little to do with my big research questions about the geopolitics of information controls
and digital rights. But as the scandal surrounding him and his record of harassment and
depredation came into the public eye, it became clear that the story was too important to ignore.
In the eyes of many, Appelbaum was the Tor Project, or at least its public face. It was also
becoming apparent through my interviews that many of the Tor Project’s structural flaws — its
reliance on U.S. government grants, its personnel issues, the emphasis on software development
at the expense of other kinds of expertise, the fraught interpersonal dynamics within the
community — had created a propitious environment for Appelbaum to operate, and often had
been exacerbated by Appelbaum’s own behavior. I concluded that to write about Tor without
discussing Jake (as he is commonly known) would be akin to discussing climate change without
mentioning fossil fuels, or to debating gun violence without addressing the Second Amendment. It
would perhaps be easier for me in the short term, but it would be intellectually dishonest as well
as a callous insult to survivors and potential victims. Journalism and the social sciences have
traditionally dismissed the personal behavior of their subjects as irrelevant or out of scope, thus
contributing to impunity for misogynist abusers, but the tide is starting to turn. Researchers of every ilk increasingly recognize that the personal is political, and that odious behavior can no longer be excused by wails of "but he does good work!"[131] As I was drafting this chapter, I watched "Risk," Laura Poitras's documentary about WikiLeaks and Julian Assange (Appelbaum features
prominently in the film, and Poitras was romantically involved with him while she was filming
it). Thirty-one minutes in, after a scene that reveals Assange’s misogynist true colors, Poitras
comments in a voice-over:
“This is not the film I thought I was making. I thought I could ignore the contradictions. I
thought they were not part of the story. I was so wrong. They are becoming the
story.” (Poitras, 2017).
In Appelbaum’s case, it’s highly disputable that he even did good work. Many of my
131
research participants argue that Appelbaum didn’t actually do much coding, spending most of his
time arguing with others on IRC and various mailing lists, and later claiming credit for work
performed by others.
! 273
My goal here isn’t to find the “ground truth” beneath the “he said, she said” controversies
of the past several years, nor is it to rehash the different versions of the story that have been
published on blogs, on social media, in the tech press and even in mainstream outlets. Rather, my
intent is to demonstrate the deleterious impact that a single person can have in the absence of
appropriate management, leadership and accountability. That being said, I must state
unequivocally that I believe the victims who have come forward. The Tor community has spent
much of the past few years arguing about the credibility, veracity and accuracy of various
accounts of Appelbaum’s abuse. There are some contradictions in the details, and I have no first-
hand knowledge of any of those episodes. But I believe the women (and men, and genderqueer
folk) who have come forward: Jacob Appelbaum is a bully and serial abuser, and the Tor
community (as well as the broader digital rights space) is well rid of him.
In retrospect I probably shouldn't have been surprised that the man everyone warned me about
turned out to be at the center of the story. People have been whispering to me about what they
almost uniformly called “The Jake Problem” since I started my fieldwork in Valencia in 2015.
The organizational dysfunction detailed in the “Ancient HisTORy” section of this chapter created
a propitious environment for a “bad apple” to thrive. For years, the Tor community was
characterized by a belief in “programmer supremacy” that minimized the contributions of other
professionals, a “rock star” culture that maintained that “star programmers” (like Appelbaum)
were untouchable because of their perceived talent (Aurora, Gardiner & Honeywell, 2016), and a
near-total dereliction of duty by the Tor Project leadership — who had no experience managing
nonprofits whatsoever. These traits were exacerbated by the Tor Project’s globally distributed
workforce, which made it difficult for community members to develop the camaraderie and
trust needed to address serious personnel problems. In a vicious cycle, Appelbaum's toxic
behavior (the words “sociopath” and “narcissist” came up in many interviews) drove talented
staff and volunteers away, tanked morale, impacted the group’s ability to complete grant
deliverables on time, and reportedly dissuaded at least one potential foundation donor from
funding the Tor Project, thus perpetuating the reliance on government grants.[132]

[132] Several interviews, conducted at the Internet Freedom Festival in March 2015, 2016, 2017, and 2018. Valencia, Spain.
Born in 1983, Jacob Appelbaum is an American computer security researcher and hacker
living in Berlin. He was a core Tor developer and one of the organization’s public faces until
2016, when he was pushed out for serially bullying, harassing and assaulting colleagues. Among
other controversial traits and affiliations, Appelbaum is close to WikiLeaks' Julian Assange,[133]
and was once romantically involved with Laura Poitras, the documentary film-maker who won
the 2015 Academy Award for Citizen Four (as Poitras revealed in her 2017 documentary,
“Risk”). After a troubled childhood, Appelbaum dropped out of college, taught himself computer
programming, and traveled to Iraq and post-Katrina New Orleans before joining the Tor Project
in 2008. He moved to Berlin in 2009, citing concerns about U.S. government surveillance and
harassment related to his work with WikiLeaks.

[133] Another infamous bully and serial abuser of women.
Recurring allusions to “the Jake Problem” were one of the oddest aspects of my first
round of fieldwork for this project, at the 2015 Circumvention Tech Festival in Valencia, Spain.
Asked to discuss their experiences as Tor users or Tor Project volunteers, interview participants
would almost invariably make oblique references to Appelbaum’s personal and sexual life,
generally framed as anecdotes about other people’s perceptions of him, but some were more
pointed than that. A few people specifically told me that “it’s fine to try to talk to him, but make
sure you’re never alone with him.” I was told that “some people think” he was a double agent
for the CIA and for WikiLeaks. One person told me that “it is interesting that Jake’s involvement
hasn’t killed the project — some people within Tor are concerned about that, but others [outside
the organization] see it as proof of a conspiracy theory” (it wasn’t clear to me what that
conspiracy might entail). Conspiracy theories were thick on the ground, and most of them
involved Jake in one way or another. Was he incompetent, or a genius programmer? A crazed
paranoiac, or were people really out to get him? Rumors spread around the conference that
Appelbaum had hosted an alcohol-fueled Purim party in his hotel room, during which a
surveillance device found underneath an activist’s car was examined. The group reportedly
telephoned Julian Assange, holed up in the Ecuadorian embassy in London, to wish him a happy
Purim and "cheer him up."[134]
What I didn’t know at the time was that the CTF was the scene of one of the first
attempts to confront Appelbaum about his appropriation of others’ work, harassment of
colleagues and repeated sexual misconduct (Bernstein, 2016). I could tell that something was
afoot, but the most anyone would tell me was “Trust me, you do not want to be part of
this.” (They were right — I didn’t.) The Tor Project’s executive director, Andrew Lewman,
would resign a few weeks later.
[134] Field notes, March 2015.
Back in Washington a week later, Appelbaum continued to come up in my interviews. In
Valencia, I had heard that some outside observers attributed the Tor Project’s failure to win a
DRL grant in 2011 (for FY 2012-2013) to the organization’s connection to WikiLeaks through
Jake, but as former DRL official Ian Schuler told me, the reality was that the grant application was
poorly written, and the rejection was equally based in DRL’s fiduciary responsibility to taxpayers
and in a desire to force the Tor Project to "get their act together."[135] Schuler's account was
confirmed by several Tor Project insiders. In turn, several interviewees have suggested that the
application was poorly written in large part because the person responsible for drafting it didn’t
have access to requisite information, such as the organization's budget.[136]

[135] Interview, March 16, 2015. Washington, DC.
[136] Several interviews, conducted at the Internet Freedom Festival in March 2015, 2016, 2017, and 2018. Valencia, Spain.
Appelbaum did not attend the 2016 Internet Freedom Festival (the CTF had been
renamed to something less obscure-sounding). I didn’t conduct any interviews in 2016, as my
existing IRB protocol only covered the Tor Project, and I knew from the grapevine that tension was higher
than usual within the Tor community. I figured that this was not the best time to approach
interview subjects, and spent the festival focusing on my work for Ranking Digital Rights.
Nevertheless, it was impossible to ignore the atmosphere around the Festival, which was rife
with whispered speculation about Appelbaum’s absence. I later heard through the grapevine that
he had been specifically told not to attend. The reason became clear a few months later.
On June 2, 2016, Shari Steele published a post on the Tor Project blog, titled “Jacob
Appelbaum leaves the Tor Project,” that read:
Transitions:
Long time digital advocate, security researcher, and developer Jacob Appelbaum stepped
down from his position at The Tor Project on May 25, 2016.
I had decided not to approach anyone from Tor until the dust had settled, tracking the
news coverage closely instead — not all of it accurate, from what I could tell. My social media
feeds were full of barely-veiled references to Appelbaum’s ouster, as I used both Facebook and
Twitter to stay in touch with colleagues, friends and research informants in the digital rights
movement. A few days after the initial announcement, Shari Steele posted a longer entry to the
Tor Project blog, saying that “many people inside and outside the Tor Project have reported
incidents of being humiliated, intimidated, bullied and frightened by Jacob, and several
experienced unwanted sexually aggressive behavior from him.” The next several months were
punctuated by anonymous accusations of harassment, assault and abuse, some of which survivors later claimed under their own names; various hacker and digital rights groups announcing that Appelbaum was
no longer welcome in their communities; Appelbaum’s protests that he was innocent; several
women close to Appelbaum publicly coming to his defense; a short-lived attempt to “fork” the
Tor community out of loyalty to Appelbaum; and more eyebrow-raising claims on all sides.
In July 2016, the entire Board of Directors resigned, paving the way for a new board to
be appointed. The new directors boast much greater experience in nonprofit management
and expertise beyond computer science. As of November 2017, the board comprises two
executive directors of non-profits (the EFF’s Cindy Cohn and Megan Price of the Human Rights
Data Analysis Group); two academics (Matt Blaze, who teaches computer and information
science at the University of Pennsylvania, and McGill University anthropologist Gabriella
Coleman); and four highly respected technologists (Julius Mittenzwei, Linus Nordberg, Ramy
Raoof, and Bruce Schneier).
Dingledine and Mathewson stayed on to lead Tor’s software development, but abandoned
their dual roles as board members. The Tor Project has adopted a Social Contract and Community
Council Guidelines — part of Steele’s campaign to make Tor a safe community — and continued
the organization’s overhaul. So far things seem to be going well. I am in regular contact with
Steele and other members of the Tor Project’s leadership team. I sincerely hope that by the time I
write the book version of this project, I will be able to point to Tor as an example of a deeply
dysfunctional organization that has managed to right itself, and whose turnaround other groups in
the digital rights space can emulate.
Conclusion
This chapter has presented a case study of the Tor software and the non-profit
organization behind it, the Tor Project, across multiple dimensions at the micro, meso, and macro
levels of analysis. The context in which Tor emerged and continues to operate is characterized by
a divergence between the objectives and strategies of the U.S. foreign policy and national
security agencies, and by a mismatch between the contexts in which Tor’s legitimacy is contested
(notably U.S. elite policymaking circles) and the contexts in which its use is most clearly needed
(marginalized groups and dissidents in internet-repressing countries) (Jardine, 2015).
For one interview participant I met in Valencia in 2015, “Tor is an attempt at equalizing
the power distribution” between institutions and individuals. Asked to expand upon this
statement, the participant explained that he sees institutions (notably governments and
corporations) as seeking to accrue power at the expense of individuals, who then become cogs in
a machine over whose actions they lack agency. Tor, he explained, provides a means to escape
the surveillance and control of these institutions and to regain a measure of privacy and
autonomy in daily life. And of course, he continued, if you’re talking about political dissidents,
human rights defenders, and journalists, reclaiming privacy and autonomy can be a matter of life
or death. To his mind, with one hand the U.S. government funds Tor in order to transfer power
from certain governments to their domestic critics, while with the other it targets Tor because it
poses a threat to its own domestic authority.[137]

[137] Interview, March 2015. Valencia, Spain.
Prompted to comment on what I saw as a “paradoxical” relationship between the Tor
Project and the U.S. government, Rop Gonggrijp (whose digital rights activism predates the
phrase “digital rights”), told me that he doesn’t see any ambiguity in the U.S. stance regarding
information sovereignty, as the ambition is to “get it all” and turn information into power. Where
it does become fuzzy, he said, is in the methods, where the U.S.’s short-term and long-term
interests can be at odds with one another. The State Department wants to foment revolutions all
over the world, he said, and information controls in countries like Iran and China present an
obstacle. To support the possibility of uprisings requires supporting secure communications
(echoing Bueno de Mesquita and Downs, 2005). Moreover, the U.S. government realizes that it
needs a world where clandestine communications, logistics and organizing are possible. The
world, however, is less and less conducive to clandestine behavior of any sort. The U.S.
government thus has a legitimate need for what Tor does, and tactically it makes sense to have
the technology outside of government so that the user base is as large and diverse as possible: if
only U.S. government agents used Tor, then any adversary would know that a connection over
Tor comes from the U.S. government. To say that U.S. foreign policy is incoherent is to miss the
point, he told me: the primary objective of U.S. foreign policy is to maintain hegemony, which
requires maintaining access to world markets and “keeping everybody fighting”, echoing the
argument put forth by Powers and Jablonski (2015).[138]
My research shows that many of Tor’s problems are rooted in its early leaders’ belief that
computer programmers and engineers are best (or at least adequately) suited to most tasks —
what Karen Reilly has called "programmer supremacy."[139] The Tor Project was led by computer
scientists with no previous non-profit experience until the organizational crisis centered on Jacob
Appelbaum forced a change. For years, the Tor Project’s culture devalued the contributions of
community members who didn’t “commit code” to technology projects, ignored and minimized
toxic behavior up to and including sexual assault, and drove several highly qualified staff away.
This in turn dissuaded several prominent foundations from supporting the project. That Tor
continued to function at all is a testament to how hard-working and talented its staff and
volunteers were (and are). The key takeaway here is that quality, professional nonprofit
management matters tremendously, perhaps especially when there is friction with (some)
hackers’ crypto-anarchist mentalities.
[138] Interview, March 2015. Valencia, Spain.
[139] Interview, March 2015. Valencia, Spain.
Importantly, the empirical and ethnographic research allows the meso-level analysis to go
beyond prior understandings of the relationship between Tor and the U.S. government. Tor is not
the Astroturfed “spook” outfit that Yasha Levine and others have condemned, but at the same
time it is undeniable that the Tor Project would not exist in its present form (or perhaps even at
all) if it weren’t for the U.S. Naval Research Lab, the Broadcasting Board of Governors,
DARPA, and the State Department’s Internet Freedom funding. Paradigms that hew to the
unitary theory of government are ill-suited for understanding the relationship between Tor and
the U.S. government — or, indeed, for understanding questions related to internet technologies
and state power more generally (Carr, 2016).
Different bureaucratic units’ view of and relationship to Tor depends on how each unit’s
mission and organizational culture intersect with censorship circumvention, anonymous internet
access and anonymous online publishing. These agencies generally pursue their own agendas,
and contradictions between those agendas are seldom addressed. My fieldwork in Washington
foreign policy elite circles suggests that the level at which these policy conflicts are resolved —
when they are — has risen over time. It is difficult to confirm whether this is the case without
access to internal policy memoranda, meeting minutes, and other documentation that can only be
obtained with a security clearance or after a lengthy FOIA process. Testing this hypothesis is an
important question for future research.
Chapter 7 - Signal: Bringing encryption to the masses
Introduction to the case study
The object of this case study, the Signal Protocol, is likely the world’s most widespread
digital rights tool — and most of its users don’t even know it. The Signal Protocol is the
encryption protocol, first developed by Moxie Marlinspike and Trevor Perrin, that underlies not
only the Signal messaging application but also WhatsApp, Facebook Messenger’s “Secret
Conversations,” Google Allo’s “Incognito mode,” and Skype’s “Private Conversations,” as well
as Wire. Signal was selected as a case study because of its prevalence and because of its
connection to the Internet Freedom agenda: various stages of the Signal Protocol’s development
have been underwritten by the Open Technology Fund. Four of the tools that use the Signal
Protocol — WhatsApp, Messenger, Allo, and Skype — are owned by American tech giants
whose business model relies on surveillance capitalism (Zuboff, 2015), which makes their partial
embrace of end-to-end encryption all the more intriguing. A fifth, Wire, is a venture-capital
funded start-up that hopes to earn revenue from encrypted enterprise communication products.
Signal is particularly important to this project’s focus on the ways that developers of
digital rights technologies are changing the nature of global communication in the 21st century.
Indeed, it has arguably had a greater impact on the encryption debate than any other product by
bringing end-to-end encryption to the masses through the WhatsApp partnership. Moreover,
Signal occupies an uneasy position at the intersection of two related ideologies that nevertheless
exhibit important differences concerning the meaning of freedom and the relationship between
governments and the governed: the cypherpunks’ techno-libertarianism and the Internet Freedom
agenda, which is linked to the rhetoric of democracy and human rights. Signal is currently at a
pivotal moment in its organizational history, and it is unclear whether it will embrace formal
governance and accountability or a more anarchic model that may be more aligned with its
leaders’ professed cypherpunk values.
Roadmap to the chapter
After some background on Moxie Marlinspike,[140] the case study traces the story of the
Signal Protocol’s development and deployment from 2007 to early 2018. It was surprisingly
difficult to get people to agree to talk to me on the record about Signal, especially compared to
how open the communities around Tor and Psiphon have been. There seem to be two main
reasons for this. The first is that only a small number of people have worked on the Signal
Protocol, limiting the potential participant pool. The second is that Marlinspike is particularly
averse to publicity of any sort, and mistrusts journalists and other writer/researchers (the
conflation of my role as a scholarly researcher with that of a reporter was a recurring theme
throughout my fieldwork). Others who work or have worked on Signal appeared to defer to
Marlinspike’s preferences and declined to participate in interviews or to answer specific
questions that they believed Marlinspike wouldn’t want them to answer. Dan Meredith, the
director of the Open Technology Fund, suggested that I write up this case study on the basis of
publicly available information, ask him to comment on it in order to fill any gaps and correct any inaccuracies, and then approach Marlinspike with a polished draft for him to respond to.
Commenting on the first draft, Meredith noted that the perspective of Signal’s developers was
missing — something I hope to rectify in future iterations of this project, if the Signal team will
agree to speak to me.

[140] I am told that Trevor Perrin is actually the "cryptographic genius" behind the Signal Protocol. The same source described Marlinspike as "an amazing engineer, but also a very opinionated one" (Phone interview, March 31, 2018). I focus on Marlinspike for two reasons: first, the strategic vision behind Signal appears to be his, and second, very little is publicly known about Trevor Perrin. Both individuals declined to be interviewed.
Data, sources and methods
Like the other case studies that comprise this dissertation, the Signal case study was
initially designed as an ethnographic investigation into the people and institutions behind the
tool. However, despite my best efforts I was not able to get anyone who has worked on Signal to
talk to me. Through intermediaries, several potential interview participants indicated that
participating in my research would risk incurring the displeasure of Moxie Marlinspike, Signal’s
lead developer and formally the CEO of Signal Messenger. As a result, this case study is based
on content analysis of primary documents related to Signal and of secondary sources such as
media reports, and on interviews with individuals involved in funding Signal or who otherwise
are part of the cryptography community.
The basics: End-to-end encryption and the Signal protocol
Signal is an encrypted messaging application for mobile and desktop devices developed
and maintained by a team of cryptographers based in the San Francisco area, led by Moxie
Marlinspike and Trevor Perrin. The organization behind Signal has gone through several names
and legal structures over the years, as we will see in the “Project History” section. The app’s
encryption protocol, formerly dubbed "Axolotl"[141] and now simply "the Signal protocol," is also
used by closed-source programs WhatsApp, Facebook Messenger, Google Allo, and Skype, as
well as the open-source messaging app Wire. As a result, the Signal protocol is the most widely
used digital rights tool in the world, as WhatsApp alone boasts more than 1 billion active users
(WhatsApp, 2018).
Signal is widely credited for bringing strong end-to-end encryption to the masses,
enabling ordinary people to protect their communications without requiring much technical
ability, if any. End-to-end encryption refers to encryption in which only the communicating endpoints hold the keys: messages are encrypted on the sender's device and can be decrypted only on the recipient's device, remaining unreadable in transit and while stored on the provider's servers. This means that messages cannot be accessed by anyone other than the originator and intended recipient(s). In contrast, many commercial communication services do not encrypt
user data that is stored on corporate servers, which allows the service provider to mine that data
for targeted advertising purposes and to share it with law enforcement upon request (most large
companies require a warrant or equivalent legal process before responding to such requests).
[141] Named for an endangered aquatic salamander that has extraordinary self-healing qualities.
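To make the end-to-end property concrete, the toy sketch below uses the PyNaCl library to show the basic public-key idea: keys are generated on the endpoints, and whoever relays or stores the ciphertext learns nothing about the plaintext. This is an illustration only; the Signal Protocol layers considerably more on top of this (key agreement, forward secrecy via ratcheting), and the names here are mine, not Open Whisper Systems':

    # A minimal sketch of end-to-end encryption using PyNaCl (pip install pynacl).
    from nacl.public import PrivateKey, Box

    # Each party generates a key pair on her own device; private keys never leave it.
    alice_key = PrivateKey.generate()
    bob_key = PrivateKey.generate()

    # Alice encrypts to Bob's public key; a random nonce is handled automatically.
    sending_box = Box(alice_key, bob_key.public_key)
    ciphertext = sending_box.encrypt(b"meet at noon")

    # Any server that relays or stores `ciphertext` sees only opaque bytes.
    # Only Bob's private key (plus Alice's public key) can open it.
    receiving_box = Box(bob_key, alice_key.public_key)
    assert receiving_box.decrypt(ciphertext) == b"meet at noon"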
Project history
About Moxie Marlinspike
While it may seem strange to begin a case study about an encryption protocol by talking
about one of its inventors, in this case it is inevitable given how closely the man who goes by
Moxie Marlinspike is associated with his invention. This “great man” approach follows in the
footsteps of Steven Levy, whose seminal book "Crypto" (2001) told the story of public-key cryptography largely by telling the story of the cryptographers, and of journalist Andy
Greenberg, whose 2012 “This machine kills secrets” followed some of the latter-day
cypherpunks. However, this portrait is a superficial one, as Marlinspike is fiercely private,
granting very few interviews and rarely discussing his own life. He has not responded to my
repeated requests for interviews, and seems to have told colleagues not to speak to researchers
like myself. I hope that once I demonstrate my bona fides to him, he’ll be willing to talk, or at
least let others talk.
Moxie Marlinspike — not his real name — is a tall, slim, Caucasian American man in his
late thirties or early forties who styles his hair in dreadlocks. He made Fortune’s 2016 “40 under
40” list, which gave his age as “30s.” The most detailed profile of Marlinspike, published by
Wired in July 2016, was authored by Andy Greenberg, who also wrote "This Machine Kills
Secrets: Julian Assange, the Cryptographers, and Their Fight to Empower Whistleblowers.”
The man currently known as Moxie Marlinspike grew up in central Georgia, where his
parents gave him the nickname “Moxie.” Recent court documents give his legal name as either
Matthew Rosenfeld or Mike Benham, but it could just as easily be something else. Little is
known about the man’s childhood, except that his parents split up when he was young, his
mother was a secretary, the area where he grew up resembled a strip mall, and the young Moxie
“hated the curiosity-killing drudgery of school” (Greenberg, 2016). He started dabbling in
computer programming in middle school, first using his school’s Apple II and then a cheap
desktop with a modem — a gift from his mother — that he used "to trawl bulletin board
services, root friends’ computers to make messages appear on their screens, and run a ‘war-
dialer’ program overnight, reaching out to distant servers at random” (Greenberg, 2016).
Marlinspike’s after-school job in high school was for a German software company, where
he wrote developer tools. He headed to Silicon Valley in 1999 — presumably soon after
graduating high school — where he found a programming job after an initial period of
homelessness. But spending 40 hours a week in front of a computer, writing code, bored him just
as much as school had. “I thought, ‘I’m supposed to do this every day for the rest of my life?’”
he told Greenberg. “I got interested in experimenting with a way to live that didn’t involve
working.” For a few years this took the form of squatting vacant buildings as part of the San
Francisco punk scene and hitch-hiking around the country attending protests (eventually he
leveled up to hopping freight trains). In 2003 he decided to learn to sail, acquired a “beat-up 27-
foot Catalina” and sailed from San Francisco to Mexico on his own. A year later, he and three
friends sailed a “rehabilitated, leaky sloop called the Pestilence from Florida to the Bahamas,”
filming a documentary before abandoning the vessel in the Dominican Republic (Greenberg,
2016).
Whisper Systems
Around 2007 (he would have been in his mid-twenties at the time) Marlinspike started
thinking more seriously about digital surveillance in the aftermath of 9/11 and the Patriot Act.
Facebook membership was starting to expand beyond college campuses, and Twitter had been
created the year before, taking the 2007 South by Southwest festival by storm. Apple launched
the first iPhone that year, with Google’s Android operating system following a year later.
Marlinspike spent a few years working on security software, and started attending computer
security conferences like Black Hat. In 2010, anticipating that the growing number of smartphone
users would need security software before long, Marlinspike started his first company, Whisper
Systems, and developed two Android applications: TextSecure, for encrypted text messaging,
and RedPhone, for encrypted voice calls. As Greenberg put it, “Anti-authoritarian ideals were
built in from the beginning; when the Arab Spring exploded across North Africa, Whisper
Systems was ready with an Arabic version to aid protesters.” Indeed, TextSecure and RedPhone
were widely used by journalists and others throughout the 2011 protests (Garling, 2011a;
Greenberg, 2016).
Marlinspike knew he would need funding to grow the company and continue improving
the tools, so he moved back to San Francisco in search of investors for his start-up. Instead,
Whisper Systems was acquired by Twitter, and Marlinspike became the company’s director of
product security (Greenberg, 2016).
Acquisition by Twitter
Twitter spent an untold amount of money acquiring Whisper Systems ("more money than I'd
ever encountered before. But that’s a low bar,” he told Greenberg), and his skills were “revered”
within the company. But Marlinspike was a poor fit for Twitter, and vice-versa. Where the
company’s goal was to maximize profits, Marlinspike’s was to protect Twitter’s users. According
to Greenberg, “his greater goal was to alter the platform so that it didn’t keep logs of users’ IP
addresses, which would make it impossible for authorities to demand someone’s identity, as
they’d done with one Occupy Wall Street protester in 2012.” Marlinspike’s desire to protect user
privacy was at odds with Twitter’s need to monetize user information in the same way that
Google and Facebook had. The company was planning its initial public offering (IPO) for 2013,
and couldn’t afford to be seen by Silicon Valley and Wall Street as closing off its purported main
revenue stream (Greenberg, 2016).
The terms of Whisper Systems’ acquisition required Marlinspike to work at Twitter for
four years before his stock options would fully vest, allowing him to leave with his money. It
seemed that he would have to resign himself to tinkering around the edges to improve user
safety, rather than rooting out the surveillance capitalism (Zuboff, 2015) at the heart of the
company’s business model. But after narrowly escaping death when his catamaran capsized in
the San Francisco Bay’s freezing waters, Marlinspike decided to walk away from Twitter and a
million dollars in company stock (Greenberg, 2016).
Open Whisper Systems
Free from Twitter’s golden shackles, Marlinspike hit the ground running. He named his
new venture Open Whisper Systems (OWS), and set it up to exclusively produce open source
software. Rather than pursue Silicon Valley funding — which would almost certainly require him
to sacrifice user privacy on the altar of data monetization — Marlinspike turned to Dan
Meredith, the director of the newly created Open Technology Fund (OTF). Meredith had relied
on TextSecure and Redphone in his previous role at Al-Jazeera, where his duties included digital
security for journalists covering the Arab Spring. When the tools disappeared following Twitter’s
acquisition of Whisper Systems, he saw dozens of colleagues and sources “go from not having
physical surveillance to torture and murder.” Making secure communications available to
journalists, activists, and other at-risk individuals was personal for him (see chapter 4 for more
on Dan Meredith and the Open Technology Fund), and he was especially keen to see TextSecure
and Redphone back in development. OTF gave Open Whisper Systems a grant for $455,000 over
18 months, followed by further grants totaling $2,955,000 (through 2016).[142]

[142] Interview, April 19, 2017. Washington, DC.
Unlike many other groups in the digital rights space, Open Whisper Systems is not a
501(c)(3) nonprofit and is not required to publish any information about its financials. In fact,
Open Whisper Systems isn’t the company’s legal name but a “doing business as” alias for Quiet
Riddle Ventures, a Limited Liability Company (LLC) registered in Mountain View, CA to a
Michael Benham (one of Marlinspike’s suspected aliases, or possibly his legal name). At this
stage I can only speculate about the reasons for this choice, though I hope to eventually interview
Marlinspike about this. It seems odd that an organization whose founder demonstrates little
interest in making money, that relies on grants and donations, and that would undoubtedly
qualify for 501(c)(3) status would select a legal structure that comes with a tax liability. My
informed guess is that Marlinspike (presumably in consultation with Trevor Perrin and other
colleagues) decided that paying taxes was worth avoiding the transparency and board oversight
that the 501(c)(3) status requires. Marlinspike, who uses a pseudonym both for himself and for
his company, seems to value privacy over transparency. And as a cypherpunk anarchist, he
strikes me as the sort of person who eschews institutions and formal oversight, preferring to let
the product — Signal’s code — speak for itself. It’s open source, after all. It seems that
Marlinspike views Signal’s code, and the impact it has had on other tools, as his legacy, and has
long seemed uninterested in setting Open Whisper Systems up as an institution that can outlast
him:
Marlinspike surprises me by admitting that he looks forward to the moment when he can
quit. “Someday Signal will fade away,” he states unsentimentally. Instead, he says, Open
Whisper System’s legacy will be the changes Signal will have inspired in better-funded,
for-profit communication apps.
That time may not be so far off. “I don’t really want to do this with the rest of my life,”
Marlinspike says. “Eventually, you have to declare victory.” (Greenberg, 2016)
As mentioned previously, Signal’s precursors TextSecure (an encrypted messaging
application for Android first released in 2010) and Redphone (an end-to-end encrypted Voice over Internet Protocol (VoIP) application) were widely used by activists, journalists, and others
during the Arab Spring, until the service was made unavailable following Twitter’s purchase of
Whisper Systems in 2011 (Garling, 2011a). As Caleb Garling noted in Wired:
RedPhone was specifically designed for those involved in the Egyptian revolution earlier
this year, and Marlinspike saw it as something that could serve similar movements across
the globe. “[RedPhone] is targeted just for Egypt, but sets the stage for worldwide
support,” [Marlinspike] told Wired in February. “Hopefully with stuff happening in Egypt
it kind of steps things up [regarding distribution to other countries].”
[Chris] Soghoian, for one, laments the loss of the device. "I understand that's the normal
thing in Silicon Valley, where everything gets shut off when an acquisition is made," he
says. "But RedPhone isn't some app that helps you find the nearest ice cream store.
[Whisper] specifically targeted helping people under repressive regimes. Just shutting it
off with no warning to users puts the people in a very dangerous position, especially in
places like Egypt where they're having elections today" (Garling, 2011a).
The code for both tools was made available again as open source software in December
2011 (Garling, 2011b), though development was suspended while Marlinspike worked at Twitter.
Resuming work on these projects was one of the first orders of business for the new Open
Whisper Systems.
Signal
In 2014, Open Whisper Systems released Signal, an encrypted Voice over Internet Protocol (VoIP) app for iOS, and announced plans to merge its existing applications with Signal. This was a minor revolution in telecommunications, as Signal was the first free, encrypted VoIP application (Greenberg, 2014). As Marlinspike explained in a January 2013 blogpost, the
engineering problems raised by voice communications were more complicated than those
involved in TextSecure because of latency — the delay between when the message is sent and
when it is received. High latency is much more tolerable for text-based communication than for
voice calls, where added delays of even a few hundred milliseconds can impede the flow of conversation (Marlinspike, 2013). TextSecure for iOS was merged into Signal in March 2015 (Lee, 2015), and
the two Android apps were merged and released as Signal for Android in November 2015
(Marlinspike, 2015). In March 2016, OWS announced that it was renaming its encryption
protocol and associated code libraries to Signal, as well (Marlinspike, 2016).
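For readers curious about what "the Signal Protocol" actually does beyond one-shot encryption, the toy sketch below illustrates the symmetric-key chain step described in the protocol's published Double Ratchet specification: each message key is derived from a chain key that is immediately advanced and discarded. This is a pedagogical illustration of the general idea (the real protocol also mixes in fresh Diffie-Hellman exchanges), and the function names are my own:

    # Toy illustration of the Double Ratchet's symmetric-key chain step.
    import hmac, hashlib

    def kdf_chain_step(chain_key: bytes) -> tuple[bytes, bytes]:
        # Derive a per-message key, then advance the chain, using the spec's
        # suggested constants: 0x01 for the message key, 0x02 for the chain.
        message_key = hmac.new(chain_key, b"\x01", hashlib.sha256).digest()
        next_chain_key = hmac.new(chain_key, b"\x02", hashlib.sha256).digest()
        return next_chain_key, message_key

    # In the real protocol the initial chain key comes from an X3DH key
    # agreement; a fixed stand-in is used here purely for illustration.
    chain_key = bytes(32)
    for i in range(3):
        chain_key, message_key = kdf_chain_step(chain_key)
        print(f"message {i} key: {message_key.hex()[:16]}...")

Because each chain key is overwritten as soon as it is used, an attacker who seizes a device learns nothing about the keys that protected earlier messages: this forward secrecy is a large part of what sets the Signal Protocol apart from simpler encrypted messengers.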
WhatsApp
Signal’s greatest impact, at least in terms of the number of people using the Signal
Protocol, stems from its partnership with WhatsApp, a messaging application created in 2009 by
former Yahoo employees Jan Koum and Brian Acton (Olsen, 2014; WhatsApp, n.d.). The two
friends, freshly free from Yahoo and looking for a new adventure, realized soon after the launch
of the Apple App Store that mobile phone applications were the new hot market. Looking
through his phone’s address book, Koum had the idea that “it would be really cool to have
statuses next to individual names of the people” — similar to the “status message” practice
already prevalent on desktop instant messaging platforms (Olsen, 2014). Koum registered
WhatsApp as a company (the name is meant to sound like “What’s up?”) within a week, and
started working on the product. When Apple introduced push notifications for third-party
applications in June 2009, WhatsApp was updated to take advantage of this functionality. Koum
realized he had accidentally invented a mobile instant messaging service. At the time, the only
other free messaging service for mobile was BlackBerry Messenger (BBM), which was only
available for Research in Motion devices. Koum updated WhatsApp again to add a messaging
function, brought Acton onboard to manage the business aspects of the venture, and before long
had over a quarter million active users. The company continued to grow, attracting attention from
venture capital firms and eventually Facebook, which acquired WhatsApp for $19 billion in early
2014 (Olsen, 2014).
The Signal/WhatsApp partnership was first announced in November 2014 and completed
in April 2016 (Marlinspike, 2014, 2016). At the time, WhatsApp had over a billion monthly
active users (Marlinspike, 2016) who would now be protected by state-of-the-art end-to-end
encryption by default. Moreover, this meant that using an encrypted messaging application
would no longer be a niche activity that could be seen as a red flag for criminality or other
suspicious behavior.
As Andy Greenberg wrote in Wired,
Marlinspike’s time at Twitter had given him an ambitious sense of scale: He was
determined to encrypt core chunks of the Internet, not just its fringes. By chance, he met
a WhatsApp engineer at a family reunion his girlfriend at the time threw at his house.
Through that connection, Marlinspike wrangled a meeting with WhatsApp’s cofounder
Brian Acton. Later, Marlinspike met with the company’s other cofounder, Jan Koum, who
had grown up in Soviet Ukraine under the constant threat of KGB eavesdropping.
Both men were almost immediately interested in using Marlinspike’s protocols to protect
WhatsApp’s international users, particularly its massive user bases in privacy-loving
Germany and surveillance regimes in the Middle East and South America. “We were
aligned pretty early,” Acton says. “When we got past the hairstyle, we were like, ‘Let’s
get down to business.’”
[…]
But these are only the early adopters in Marlinspike’s master plan. He outlines his
endgame: In the past, government-friendly phone companies have practically partnered
with law enforcement to make wiretaps easy. Now people are increasingly shifting to
what he calls overlay services—apps like WhatsApp and Facebook Messenger—to
communicate. And that switch offers a chance to start fresh, with a communications
infrastructure that can be built to resist surveillance. “The big win for us is when a billion
people are using WhatsApp and they don’t even know it’s encrypted,” Marlinspike says.
“At this point, I think we’ve already won the future” (Greenberg, 2016).
When designing this study, I had hoped that interviews with Marlinspike, Koum, Acton
and others would help shed light on the motivations for incorporating the Signal Protocol into
WhatsApp. I have encountered some speculation that this was partly driven by privacy concerns
related to the Facebook acquisition, but without primary interview data it would be irresponsible
to speculate beyond that. I hope to be able to speak with these individuals, and others with first-
hand knowledge of the situation, in this project’s next iteration.
Other commercial integrations
In addition to the WhatsApp partnership, Open Whisper Systems also worked with
Facebook, Google and Microsoft to incorporate the Signal protocol into the companies’
messaging applications. In May 2016, OWS announced a new partnership with Google to
incorporate the Signal Protocol in the new Allo messaging application (Marlinspike, 2016a). Two
months later Marlinspike announced that Facebook Messenger would start rolling out a Secret
Conversations option, also secured by the Signal Protocol (Marlinspike, 2016b). The latest
partnership with Microsoft, announced in January 2018, brings new Private Conversations (one-
to-one text chat) to Skype Insiders (Lund, 2018; Newman, 2018). There hasn’t been an
announcement yet regarding when Private Conversations will become available to all Skype
users. Importantly, the end-to-end functionality in these services is “opt-in” — users must
actively choose to protect a given conversation. In contrast, the Signal application, WhatsApp,
and Wire (discussed in the next section) use end-to-end encryption by default, and there is no
option for users to downgrade to a lower level of security.
The posts announcing these partnerships all contain language indicating that more similar
arrangements are forthcoming:
There’s (still) more to come
Microsoft joins a growing list of organizations including WhatsApp, Google, Facebook,
and Signal itself that have integrated the open source Signal Protocol into their messaging
platform. We’re going to continue our efforts to advance the state of the art for frictionless private
communication, in our own app and in others. We’re excited about the future of Signal
Protocol and the places it is going (Lund, 2018).
Open Whisper Systems’ collaboration with these tech giants and reliance on grant
funding — notably from the Open Technology Fund — raises a number of questions about
values, agency, and power, which will be addressed in the Thematic Analysis section of this
chapter.
Wire
In addition to the American tech giants discussed above, the Signal Protocol has also
been adopted by smaller messaging services like Wire. Launched in 2014 by four former Skype
engineers (including Skype co-founder Janus Friis), Wire is registered in Switzerland and
headquartered in Berlin, allowing the company to benefit from the stricter Swiss privacy regime
and to hire from Berlin’s vibrant tech scene. Early phases of development focused exclusively on
the “what” question — building a quality app that could handle short messages, long messages
and video — while neglecting the “why” question: the product’s unique value-add to users. The
answer to that question would present itself mid-2015, about six months after the app’s launch.
Examining feedback from early users, the team realized that there was a market need for a
privacy-enhancing messaging service that wasn’t connected to the surveillance capitalism
apparatus, and to Silicon Valley in particular. Wire’s founders opposed both NSA surveillance
and data capitalism, and they were eager to find a way to resist both of these adversaries at
once (interview, March 7, 2017, Valencia, Spain).
Having decided to implement end-to-end encryption across Wire’s apps and servers, the
team turned to the open-source code for the Axolotl protocol, as the Signal Protocol was called at
the time. According to Wire, Moxie Marlinspike declined a request to help Wire by reviewing its
implementation, instead telling Wire to “take a license” for $2.5 million (Marlinspike denies having
asked for money). This left Wire in a conflicted situation:
Here’s the conundrum: Moxie et al have publicly stated that they want wide adoption of
the Axolotl protocol — but if you do an independent implementation, using the published
reference documentation and background knowledge from having seen their code online,
you can be accused of copyright infringement and asked to pay a “license fee.”
The risk of being wrongly accused of infringement is of a particular concern in this
situation because implementing the protocol has certain defined steps that obviously must
be the same, irrespective of the programming language used, making it easy to claim
infringement based on the resulting similarities — and, like any negative, difficult to
disprove. This puts developers in an impossible situation when they independently
implement an idea — even though this is precisely what copyright laws are intended to
encourage, not prohibit (Wire, 2016, May 17).
In January 2015, Wire began implementing its own version of the Axolotl Protocol,
which it called Proteus and which launched in March 2016. In Wire’s telling, Marlinspike
reached out:
He claimed that we had copied his work and demanded that we either recreate it without
looking at his code, or take a license from him and add his copyright header to our code.
We explained that we have not copied his work. His behavior was concerning and went
beyond a reasonable business exchange — he claimed to have recorded a phone call with
me without my knowledge or consent, and he threatened to go public with information
about alleged vulnerabilities in Wire’s implementation that he refused to identify (Wire,
2016, May 17).
Shortly after first rolling out end-to-end encryption in March 2016 (Auchard, 2016), Wire
started getting calls from companies in the medical, legal, and financial sectors, which were
searching for secure internal communications options that also complied with legal requirements
to maintain communication records. The VC-funded company had found its business model:
designing and maintaining customized, end-to-end encrypted internal communication platforms
for firms willing to pay for it. A more secure Slack, if you will (interview, March 7, 2017, Valencia, Spain).
By fall 2017, Wire was advertising its enterprise offerings, emphasizing that Wire is
“Europe-based” and “GDPR-ready” (Wire, 2017, Oct 25). In December, the company announced
it had hired a new CEO and COO, two hires that signaled confidence in its future revenue (Wire,
2017, Dec 7). Wire commits to keeping its apps free for individuals and to refraining from
advertising or profiling, emphasizing that “Wire is not in the business of collecting or selling
data” (Wire, n.d., “Security & Privacy”).
The Signal Foundation
On February 21, 2018, Open Whisper Systems published a pair of blog posts by Moxie
Marlinspike and Brian Acton, a co-founder of WhatsApp, announcing the creation of a new
501(c)(3) organization called the Signal Foundation thanks to an initial $50 million gift from
Acton. As of this writing, little is known about how the new Signal Foundation will operate.
Acton will serve as Executive Chairman, with Marlinspike continuing on as CEO of the (also
newly created) Signal Messenger organization (Marlinspike & Acton, 2018).
At this stage the significance of this structural change is unclear (as of late August 2018,
the Signal Foundation’s website, signalfoundation.org, lacks any content). The new Signal
Foundation will eventually have to release an IRS 990 each year, detailing its principal sources
of income, major expenditures, and salary information for its highest-paid employees. This will
provide a measure of transparency into the group’s finances, and will likely enable it to compete
for new sources of funding, such as private foundations and State Department Internet Freedom
funding through the Bureau of Democracy, Human Rights and Labor (DRL). This is an
important turning point in Signal’s institutionalization and professionalization process. However,
over six months after the announcement, no details about the Signal Foundation’s structure,
governance, or finances have been made public. None of my contacts in the digital rights space
seem to have any insight into Acton and Marlinspike’s plans beyond a persistent rumor that
Signal had considered launching its own crypto-currency as a way of generating revenue (see
chapter 8, the Telegram case study, on crypto-currencies and Initial Coin Offerings). This is a
troubling indication that rather than embracing formal governance structures and networked
accountability mechanisms, Signal is moving toward an even more explicitly cypherpunk model
governed only by computer code and cold, hard cash.
Analysis
Previous sections of this chapter have presented the basics of Signal, the Signal Protocol,
and end-to-end encryption more generally, before tracing the history of Signal’s development
and the Signal Protocol’s incorporation into several widely used messaging applications. Next, I
analyze key themes related to Signal’s political economy at the micro, meso, and macro levels of
analysis. First, an examination of Signal’s use cases and the threat models it can (and cannot)
mitigate sheds light on Signal’s users and the role that encrypted communication plays in their
lives. Second, I examine the “strange bedfellows” arrangements behind the Signal Protocol’s
adoption by billions of global users. How did a cypherpunk crypto-anarchist, the U.S.
government’s internet freedom agenda, and major American tech companies come together to
fight for privacy and end-to-end encryption? And third, I consider the long-term implications of
ubiquitous end-to-end encryption for human societies.
Use cases and threat modeling
We now turn to Signal’s users and use cases: why someone should use Signal (or another
app built on the Signal Protocol), and why not. As we will see, not all encrypted messaging
services are created alike, even if they rely on the same encryption protocol, and there is no such
thing as a “one-size-fits-all, all-around best messaging app.”
“Use cases” and “threat modeling” are two important concepts for choosing an encrypted
messaging app. In software development, a use case is “a written description of how users will
perform tasks on your website. It outlines, from a user’s point of view, a system’s behavior as it
responds to a request. Each use case is represented as a sequence of simple steps, beginning with
a user's goal and ending when that goal is fulfilled” (Usability.gov, 2018). Threat modeling is the
process of identifying what a user needs to protect (her data, her identity, her location…) and
from whom she needs to protect it (Electronic Frontier Foundation, 2017).
Digital security trainers, whose job is to educate journalists, human rights activists, and
other at-risk users about security and privacy online, typically guide training participants through
the threat modeling process, focusing on five questions:
1. What do I want to protect?
2. Who do I want to protect it from?
3. How bad are the consequences if I fail?
4. How likely is it that I will need to protect it?
5. How much trouble am I willing to go through to try to prevent potential
consequences?
The Electronic Frontier Foundation’s (EFF) online Surveillance Self-Defense Guide
walks users through the same process, urging them to make lists of their assets (“data that you
keep, where it’s kept, who has access to it, and what stops others from accessing it”) and of their
adversaries (this “may include individuals, a government agency, or corporations”). Next, users
are guided to identify “what your adversary might want to do with your private data,” “which
threats you are going to take seriously, and which may be too rare or too harmless (or too
difficult to combat) to worry about,” and “what options you have available to you to help
mitigate your unique threats. Note if you have any financial constraints, technical constraints, or
social constraints” (Electronic Frontier Foundation, 2017). When selecting an encrypted
messaging service, the goal is to choose an app whose use case matches your threat model as
closely as possible.
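Although threat modeling is usually a pen-and-paper exercise, the five questions map
naturally onto a simple data structure. The following is a minimal sketch in Python; the field
names and example answers are hypothetical illustrations of my own, not drawn from EFF’s
materials:

    from dataclasses import dataclass

    @dataclass
    class ThreatModel:
        assets: list             # 1. What do I want to protect?
        adversaries: list        # 2. Who do I want to protect it from?
        consequences: str        # 3. How bad are the consequences if I fail?
        likelihood: str          # 4. How likely is it that I will need to protect it?
        acceptable_effort: str   # 5. How much trouble am I willing to go through?

    # A journalist protecting confidential sources might answer as follows:
    journalist = ThreatModel(
        assets=["source identities", "message content", "location history"],
        adversaries=["state security services", "online harassers"],
        consequences="severe: sources could be arrested",
        likelihood="high",
        acceptable_effort="willing to learn and verify new tools",
    )

Filling in such a structure for each training participant makes the subsequent step, matching
each person’s answers against candidate apps’ use cases, explicit rather than intuitive.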
The case for Signal
End-to-end encryption
Many digital rights experts recommend Signal because it was designed to mitigate as
many threat models as possible while fulfilling most use cases. Signal has much to recommend
it, not only technically but also in terms of usability. The user experience is remarkably similar to
standard short messaging service (SMS) or texting: contacts are identified by their phone number
and stored in the device’s address book. Signal can be used for 1-to-1 or group conversations
comprising the exchange of text, images, or short video clips, as well as for voice calls.
Crucially, messages are encrypted on the sender’s device and can only be decrypted on the
recipient’s device: the data remains encrypted while it transits between each user’s device and
the server and while stored on the server, which never holds the keys needed to read it (data
stored on the local device is protected by device encryption, which is now standard on all new
iOS and Android devices). This is what computer security experts and cryptographers call
end-to-end encryption.
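To make the mechanism concrete, the following minimal sketch illustrates the primitive
at the heart of end-to-end encryption: the two parties derive a shared key from a public-key
exchange, so the relaying server only ever handles ciphertext. It is written in Python with the
third-party cryptography library and shows a single X25519 Diffie-Hellman agreement; it is
emphatically not the Signal Protocol itself, which layers prekeys, authentication, and the
Double Ratchet on top of this basic idea:

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    def derive_key(my_private_key, their_public_key):
        # Both endpoints compute the same shared secret from the exchange,
        # then stretch it into a 32-byte symmetric key.
        shared_secret = my_private_key.exchange(their_public_key)
        return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                    info=b"demo handshake").derive(shared_secret)

    alice_private = X25519PrivateKey.generate()
    bob_private = X25519PrivateKey.generate()

    # Only the public halves of the key pairs ever cross the network.
    alice_key = derive_key(alice_private, bob_private.public_key())
    bob_key = derive_key(bob_private, alice_private.public_key())
    assert alice_key == bob_key

    # Alice encrypts; the server relays only the nonce and ciphertext,
    # and holds nothing that could decrypt them.
    nonce = os.urandom(12)
    ciphertext = ChaCha20Poly1305(alice_key).encrypt(nonce, b"meet at noon", None)
    assert ChaCha20Poly1305(bob_key).decrypt(nonce, ciphertext, None) == b"meet at noon"

Everything the server sees in this sketch (public keys, a nonce, and ciphertext) is useless
without one of the private keys, which never leave the endpoints.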
This end-to-end encryption resides in the Signal Protocol, which undergirds third-party
applications like WhatsApp, Wire, Google Allo’s Incognito mode, Facebook Messenger’s Secret
Conversations, and Microsoft-owned Skype’s announced Private Conversations. One important
difference between these tools is that WhatsApp and Wire encrypt all user messages by default,
whereas the other applications (each owned by a major American tech company) require users to
proactively choose to encrypt a specific conversation.
Metadata
Another important consideration concerns metadata, the “data about data” like who
sent a given message, when, where from, and to whom (Electronic Frontier Foundation, n.d.).
Metadata can often be just as revealing as a message’s content, if not more so, as it reveals the
user’s social networks, distinguishing “strong ties” by the frequency of contact, and establishing
a pattern of life such as when the user is sending messages and therefore presumed to be awake.
This in turn can help identify the timezone of her current location and potentially travel habits as
well. Journalists, whistleblowers, and political dissidents need to be just as cautious about who
they’re communicating with as about the contents of their messages.
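A short sketch shows how little effort is required to extract a pattern of life from metadata
alone. The log below is hypothetical, invented for illustration; it stands in for the kind of
(sender, recipient, timestamp) records a provider might retain, with no message content
anywhere:

    from collections import Counter
    from datetime import datetime

    # Hypothetical retained metadata: (sender, recipient, UTC timestamp).
    log = [
        ("alice", "bob",   datetime(2018, 3, 1, 14, 2)),
        ("alice", "bob",   datetime(2018, 3, 1, 22, 45)),
        ("alice", "carol", datetime(2018, 3, 2, 15, 30)),
        # ... weeks of similar records ...
    ]

    # Strong ties: alice's most frequent correspondents.
    strong_ties = Counter(recipient for sender, recipient, _ in log
                          if sender == "alice").most_common(3)

    # Pattern of life: the hours (UTC) at which alice sends messages. Her
    # longest silent stretch likely corresponds to local night, hinting at
    # her timezone and, over time, at travel.
    active_hours = Counter(ts.hour for sender, _, ts in log if sender == "alice")

With weeks of such records, a few counters suffice to reveal who matters to a user and when
she is awake, which is precisely why retention policies deserve as much scrutiny as encryption
itself.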
Google Allo, Facebook Messenger, Skype, and WhatsApp (acquired by Facebook in
2014) have varying metadata retention policies that aren’t necessarily made clear to users.
WhatsApp in particular has come under scrutiny for sharing user metadata with its parent
company Facebook in ways that may violate the 2018 General Data Protection Regulation
(GDPR), which governs data privacy for all European Union persons. While Facebook hopes to
use metadata from WhatsApp (largely comprising which users communicate with one another,
and with what frequency) to improve the delivery of targeted advertising across its platforms,
regulators in Germany, France, Italy and the United Kingdom have brought enforcement actions
to block the data transfer (BBC News, 2018). At the time of the 2014 purchase, Facebook had
promised to operate WhatsApp independently from its main platform, notably telling the
European Commission that it “could not automatically match user accounts on its own platform
and WhatsApp” — a claim that the Commission later found to be incorrect, as it told BBC News:
The Commission has found that, contrary to Facebook's statements in the 2014 merger
review process, the technical possibility of automatically matching Facebook and
WhatsApp users' identities already existed in 2014, and that Facebook staff were aware of
such a possibility (BBC News, 2018).
Aside from the society-level harms wrought by Facebook’s surveillance capitalism
business model (Zuboff, 2015), metadata can also be used in ways that cause tangible harm to
individuals. For example, there have been several documented cases of Facebook’s “People You
May Know” feature recommending that patients “friend” their psychiatrist — and each other
(Hill, 2016, 2017; Petrow, 2014). For high-risk individuals, it is nearly impossible to predict
what similar kinds of privacy violations may follow from WhatsApp metadata being fed into
Facebook’s algorithmic black box, or what physical harm may ensue. The information-sharing
practices of other companies like Google and Microsoft may also result in similar risks, though
they haven’t been documented in the same way.
In contrast, Signal only retains two pieces of information: the account creation date and
the last login time (Farivar, 2016), while Wire keeps “server-side logs” for 72 hours, “for the sole
purpose of facilitating troubleshooting, improving the service and preventing abuse” (Wire,
2017, p.5). Importantly, Wire stresses that it is “not in the business of collecting or selling data”
and that it stores “only the data [needed] to make sure your conversations stay in sync across
devices, to detect fraud and spam, and to troubleshoot customer issues” (Wire, n.d.). Unlike the
aforementioned American tech companies, neither Signal nor Wire relies on surveillance
capitalism as a business model. Signal is funded by grants (primarily from the Open Technology
Fund) while Wire generates its revenue from subscriptions to its team and enterprise services
(individual use is free).
Finally, both Signal and Wire are open source and have undergone independent code
review, which gives users a measure of confidence that the tools are as secure as they claim to
be.
The case against Signal
While Signal was designed for versatility, there are nonetheless several factors that render
Signal unsuitable for certain threat models and use cases. I’ll focus here on three scenarios: users
who don’t want to share their phone number with all their correspondents; users who need to
communicate from areas where Google App Engine (a Google platform used by Signal servers)
does not operate; and users who could be endangered if they were seen to be using Signal (or
other encrypted apps).
As Jillian C. York (2017) argues, the decision to share one’s phone number (particularly a
cell phone number) is a gendered one: “As a woman, handing out my phone number to a stranger
creates a moderate risk: What if he calls me in the middle of the night? What if he harasses me
over SMS? What if I have to change my number to get away from him?” These are not questions
that cisgender men are used to confronting. York believes that this failure to consider the
gendered aspects of a seemingly anodyne design choice (using cell phone numbers as
identification) is an artifact of the gender imbalance in technology development:
I'm not so surprised that the mostly-male developers of these tools didn't consider these
risks—risks that largely affect women and other vulnerable groups. They've focused
carefully on ensuring that their encryption works (which is key), that their user-
verification models are usable and make sense, and I'm grateful for that…but I still don't
want to give my phone number out to a stranger (York, 2017).
In addition to women, people of color and public figures also tend to experience online
harassment that may make them reluctant to share their phone number. While there are
workarounds that can help tech-savvy users circumvent the need to share their phone number
(see Lee, 2017; Shelton, 2017), many security trainers suggest using Wire, which combines the
security of the Signal Protocol with the ability to create an account based on an email address rather
than a phone number.
The decision to tie Signal to cell phone numbers is highly controversial among present-
day cypherpunks. Over the years, quite a few people have asked Marlinspike to offer an
alternative, which he has consistently refused to do. As one interview participant told me,
“Moxie is a ‘my way or the highway’ kind of guy.” Asked to allow others to “fork” the Signal
app while still using OWS’s servers, Marlinspike again refused, arguing that centralization
allows him to “fix things himself and help everyone at once” (phone interview, March 31, 2018).
Other users, notably those based in Iran or who travel there with any frequency, are
unable to use Signal because of Google’s conservative interpretation of the Iran sanctions
regime, as Signal relies on the Google cloud service App Engine for parts of its technical
infrastructure (Mahmoodi & Bashar, 2018). Additionally, at-risk individuals operating in
particularly repressive environments could place themselves at further risk by using Signal. For
example, encryption is illegal in many high-surveillance countries, and in other places using
Signal can be seen as a red flag that someone “has something to hide.” In these situations, it may
be safer for at-risk people to use more commonplace applications, to communicate in cleartext,
or to avoid electronic communications altogether (field notes, Internet Freedom Festival,
March 8, 2018, Valencia, Spain).
Finally, some observers are suspicious of Signal because of its financial ties to the U.S.
government’s Internet Freedom agenda via the Open Technology Fund (OTF). As discussed in
chapter 4, OTF was created in 2012 to disburse government funding allocated by Congress to
advance “Internet Freedom,” specifically the development of technologies that enhance freedom
of expression and privacy online. Journalist Yasha Levine in particular claims that because
Signal is “a creation of America’s spooky war apparatus,” it is “designed for regime change in
the age of the Internet,” arguing that if Signal and other digital rights tools “ever posed a threat to
the United States — and to the corporate monopoly power that calls the shots here — their
funding would be pulled and they would cease to exist” and casting doubt on Signal’s security
without providing any substantiation (Levine, 2016). (As I’ve demonstrated throughout this
dissertation, that is not how the U.S. government works.)
Cypherpunks, Internet Freedom, and surveillance capitalism collide
The Signal case displays the same tendency toward “strange bedfellows” that is typical of
the digital rights/internet freedom space more broadly. Founded and led by a cypherpunk crypto-
anarchist, the project is primarily funded by the U.S. government via the Open Technology Fund,
and the majority of the Signal Protocol’s users owe their access to end-to-end encryption to Signal’s
collaboration with WhatsApp and other American tech giants. We see three major ideologies
intersecting here: the crypto-anarchism associated with the cypherpunks; the U.S. government’s
Internet Freedom agenda; and the surveillance capitalism (Zuboff, 2015) or data capitalism
(West, 2017) associated with Silicon Valley.
Crypto-anarchism’s foundational text is Tim May’s Crypto-Anarchist Manifesto (1988),
which was read aloud at the cypherpunks’ “founding meeting” in 1992. In six brief paragraphs, it
outlines May’s vision for the future:
Computer technology is on the verge of providing the ability for individuals and groups
to communicate and interact with each other in a totally anonymous manner. Two persons
may exchange messages, conduct business, and negotiate electronic contracts without
ever knowing the True Name, or legal identity, of the other. Interactions over networks
will be untraceable, via extensive re-routing of encrypted packets and tamper-proof
boxes which implement cryptographic protocols with nearly perfect assurance against
any tampering. Reputations will be of central importance, far more important in dealings
than even the credit ratings of today. These developments will alter completely the nature
of government regulation, the ability to tax and control economic interactions, the ability
to keep information secret, and will even alter the nature of trust and reputation (May,
1988, para 2).
May sees crypto-anarchism as the inevitable result of computer technology, a
technological revolution as consequential as the printing press and barbed wire. In his view,
computer technologies like “public-key encryption, zero-knowledge interactive proof systems,
and various software protocols for interaction, authentication, and verification” (para 3) will
“allow national secrets to be traded freely and will allow illicit and stolen materials to be
traded” (para 4), undermining the power of nation-states and of corporations alike. The original
cypherpunks believed in a praxis of code, in changing the world through coding:
Cypherpunks write code. We know that someone has to write software to defend privacy,
and since we can’t get privacy unless we all do, we’re going to write it. We publish our
code so that our fellow Cypherpunks may practice and play with it. Our code is free for
all to use, worldwide (Hughes, 1997).
This prefigures Lawrence Lessig’s quip that “Code is Law” (2006): for the cypherpunks,
socially constructed arrangements like the rule of law are secondary to the higher laws of
mathematics embodied by computer code. In their quest to change society, they see no need to
engage with democratic processes in order to change laws. Rather, they use code to alter the
material realities of the world in an attempt to subvert the legitimacy of governments and
orchestrate a return of sorts to Hobbes’ “state of nature,” substituting cryptography for physical
strength as the key determinant of power relationships. Moxie Marlinspike is heir to that legacy,
determined to bring end-to-end encryption to the masses. But he departs from the cypherpunks in
two important ways: in his belief that using crypto should be easy, even for those who lack
technical skills; and in his willingness to collaborate with powerful institutions like the U.S.
government and tech companies.
As I detail in Chapter 4, Internet Freedom has been a tenet of U.S. foreign policy since
the Clinton administration, and is deeply rooted in the American civic values enumerated in the
Bill of Rights. In its early years, the Internet Freedom agenda was specifically tied to censorship
circumvention, but this began to change as civil society groups emphasized other aspects of
human rights online, including communication privacy, persuading key actors in the federal
government to expand their understanding of internet freedom. But unlike the cypherpunks,
Internet Freedom proponents do not see computers and code as a way to circumvent socially
constructed institutions like democracy and the rule of law. Rather, they see the free and open
internet as a new public sphere (Habermas, 1989) where the societies of the future will be
socially constructed. Their aim (as they see it) is to protect access to that public sphere from
illegitimate manipulation by powerful actors (there is, of course, much to unpack about the
legitimacy of various types of communication interventions — see Price, 2015).
So how did the state apparatus behind the Internet Freedom agenda come to fund a
cypherpunk project like Signal? In short, personal relationships and the practice of hiring from
civil society (discussed in chapter 4). The Open Technology Fund’s support of Signal can be
attributed to OTF director Dan Meredith’s prior experience working for Al-Jazeera during the
Arab Spring and to his previous acquaintance with Moxie Marlinspike. While Meredith at least (I
cannot speak for Marlinspike) acknowledges the ideological tensions at play, he takes a
pragmatic approach to cooperating across ideological chasms. Internet Freedom advocates and
cypherpunks all agree that human rights defenders, journalists, political dissidents, and ordinary
people should have access to encryption, so they work together to provide it. It is then up to
individual users to determine how they will use it.
Likewise, it is pragmatism that governs the relationship between Signal and the behemoth
tech companies that have adopted the Signal Protocol. Marlinspike has indicated in a number of
interviews that his goal is to bring end-to-end encryption to the greatest number of people
possible, both for the concrete benefits he believes it will bring users but also as a way to
normalize the use of encryption. A few people I have interviewed suggested that the tech
companies paid OWS a licensing fee, which would make these partnerships an interesting part of
OWS’s business model, but I have not been able to verify details about such arrangements. As
for the companies, my hypothesis is that surveillance-based corporations like Facebook and
Google see offering optional end-to-end encryption, which protects the content of specific
conversations but little else, as a way to keep privacy-conscious users on their platforms.
Encryption for the masses
Finally, let us consider the society-wide consequences of ubiquitous end-to-end
encryption, meaning encryption that renders message content inaccessible to everyone except the
originator and intended recipient(s). In other words, this is encryption that cannot be subjected to
so-called “lawful intercept” requests submitted to the service provider, as the provider itself does
not have access to the decryption key. This is the type of encryption that Signal, WhatsApp,
Wire, and other apps discussed in this chapter provide.
Law enforcement and intelligence agencies object to end-to-end encryption on the
grounds that it puts criminals and terrorists beyond their reach, and that the rule of law and
public safety consequently require service providers to include “backdoors” into their products.
The British government has been particularly vocal in recent years, with Home Secretary Amber
Rudd calling end-to-end encryption “completely unacceptable,” arguing that “We need to make
sure that organisations like WhatsApp, and there are plenty of others like that, don’t provide a
secret place for terrorists to communicate with each other” (Sparrow, 2017). However, the
overwhelming consensus among cryptography experts is that it is impossible to introduce a
weakness that would only be accessible to pre-approved actors such as law enforcement without
rendering the entire system vulnerable to attack (Abelson et al., 2015). In other words, end-to-
end encryption must protect everyone, or it will protect no one. Privacy advocates and other civil
society actors argue that encryption is too important to journalists, human rights defenders,
threatened minorities, and other at-risk populations to undermine. This is a well-worn argument
that is unlikely to be put to rest anytime soon — indeed, it has been ongoing since the 1990s
Crypto Wars (Schulze, 2017).
In light of this standoff, governments have periodically banned the use of specific
messaging applications, enforced at the technical layer of internet governance through
sophisticated protocol blocking mechanisms. WhatsApp is a frequent target of such bans, owing
to its mass popularity. For example, judicial authorities in Brazil have repeatedly ordered that
WhatsApp be blocked for failing to produce communications content data demanded by
authorities in connection with criminal investigations. Each time, the ban was lifted upon appeal,
as the superior courts found it to be disproportionate (de Souza Abreu, 2016).
Absent a policy resolution, governments seem to have shifted their focus to unlocking
specific devices targeted in investigations as a way of accessing their owners’ message history
(see Cox, 2018). This is much more costly than passive surveillance methods such as Deep
Packet Inspection (DPI) and requires either physical access to the target device or surreptitiously
installing malware on the machine, often by tricking the user into clicking on a link or installing
a piece of software (“spearphishing”). Either way, this is impractical at scale, and may represent
a compromise of sorts between the two camps. However, it does nothing to resolve underlying
jurisdictional tensions: national governments insist that they have the legitimacy to regulate
communication technologies used within their borders, but lack the ability to effectively enforce
such rules against foreign companies, many of them based in the United States.
Finally, the widespread use of end-to-end encryption by public officials also raises questions about
transparency and accountability. The Australian state of Queensland has recently banned
ministers from using encrypted messaging applications as well as private email accounts to
conduct official business (Smee, 2018) in the interest of maintaining accurate and complete
archives of government communications. In the U.S., there has been some concern about
political appointees as well as career civil servants using secure messaging services to
communicate, thus circumventing official records rules.
Conclusion
This case study has focused on the secure messaging application Signal and on third-
party implementations of its encryption protocol. Signal offers a streamlined user experience that
replicates the SMS experience familiar to many users (notably by using the phone number as the
identifier) and is suitable for many threat models. However, it is less appropriate for certain
threat models, such as users who don’t want to share their phone number, users in areas subject
to certain U.S. trade sanctions, and users in contexts where using a “known encryption app” like
Signal could put them at greater risk. In some of these situations, third-party implementations
of the Signal Protocol, such as Wire and WhatsApp, may be more appropriate.
Signal’s particular history ties it to the cypherpunk ethos, the Internet Freedom agenda,
and Silicon Valley corporations, creating a socio-technical assemblage that is pragmatically
coherent yet ideologically inconsistent. Detractors point to these ideological frictions as evidence
that Signal should not be trusted, but I argue that they should be interpreted as evidence that
popularizing end-to-end encryption serves multiple interests. I hope to enrich the next iteration of
this study with ethnographic interviews with Moxie Marlinspike, Brian Acton, and other
members of the Signal team in order to explore the processes through which these “strange
bedfellows” relationships are negotiated and maintained.
The growing adoption of secure end-to-end encryption for routine communication poses
a challenge for law enforcement and intelligence agencies worldwide, who caution that what
they call “going dark” imperils their ability to pursue their missions, thus threatening public
safety, law enforcement, and the rule of law. Privacy advocates argue that encryption is not the
challenge that its critics claim, and that we are in fact living in a “golden age of surveillance”:
the proliferation of “digital exhaust” from our daily activities is such that there is more
information than ever at investigators’ disposal (Schneier, 2015). Arguably, this makes it easier
than ever before for authorities to detect verboten speech and behavior, which in many
jurisdictions include activities that are protected by international human rights norms.
Moreover, cryptographer Bruce Schneier (2015) argues that police and intelligence work
should be difficult: unjust or unwise laws must first be broken if they are ever to be repealed.
Schneier cites the legalization of same-sex marriage (and before that, of same-sex sexual
activity) and the momentum for marijuana legalization in the United States as examples of
practices whose legalization only became possible after society realized that the practices in
question were not causing the world to end. This doesn’t mean that anything goes: this process
requires the contested practice to be seen as harmless by a growing share of the population. The
fact that some people occasionally get away with murder, or with the sexual exploitation of
children, is exceedingly unlikely to result in the acceptance (much less legalization) of murder or
child abuse. But if one considers the myriad behaviors that were once deemed deviant but are
now mainstream in liberal societies (sex outside of heterosexual marriage, blasphemy, apostasy,
criticizing the head of state…), the argument for lawbreaking as a stepping stone to law-changing
becomes more apparent. To put it plainly, if Thais could see that criticism of the king does not
cause the world to end, reform of Thailand’s draconian lèse-majesté laws might be within reach.
A self-interested authoritarian might rationally conclude that this is all the more reason to
enforce a zero-tolerance policy on deviance and dissent; I take a different view, obviously.
Human societies inevitably change as the marginalized and oppressed assert their rights. The
internet can be a tool for doing so, but it can also be used by the powerful to maintain the status
quo. The much-cited trade-off between safety and liberty can more accurately be described as
one between risk and opportunity. The rise of both the personal computer (PC) and the internet
owes much to these technologies’ generativity, or openness to unforeseen uses (Zittrain, 2008).
But the same affordances that allow anyone with the right skills to code a new computer program
also make it possible for computer viruses, malware, and all sorts of socially undesirable content
and activities to proliferate. Information appliances (Zittrain’s term for devices that cannot be
(re)programmed by the user) mitigate these risks, but at the price of eliminating the openness to
innovation that has made both the PC and the Internet so successful (Zittrain, 2008). The same
is true for human societies: we can be protected from (most) risk or we can be free to innovate
and reinvent our worlds, but we cannot be both. As governments around the world seek to
control their populations’ online activities as a means to suppress dissent and control society,
digital rights technologies like Signal help maintain a space for innovation, risk-taking, and
social change.
Chapter 8 - Telegram: From Russia with crypto
Introduction to the case study
This project’s final case study concerns Telegram, a platform that combines aspects of
social networking with secure messaging, and which has recently clashed with state authorities
in Iran and Russia. It is quite different from the other three case studies in that it has no ties
whatsoever to the Internet Freedom agenda,
whether in terms of funding, a “revolving door” of personnel, attendance at conferences or
participation in mailing lists. In fact, founder Pavel Durov has been publicly critical of the
Internet Freedom agenda as an ideology, arguing that U.S. government funding renders tools like
Signal and Tor fundamentally untrustworthy (Levine, 2017). But even as Telegram rejects the
discourses of liberation technology (Diamond, 2010), emancipatory communication practices
(Milan, 2013) and freedom technologists (Postill, 2014), it positions itself as an alternative to
Silicon Valley platforms and to tools linked to the U.S. Internet Freedom agenda for users who
value “freedom,” particularly freedom from a heavy-handed state (Durov, 2018). Indeed,
Telegram grew out of an effort to thwart Russian state surveillance, and the literature supporting
Telegram’s early 2018 Initial Coin Offering (ICO) positions the company as an explicitly
libertarian project (Telegram, 2018). It thus provides an interesting contrast to the other three
case studies, whose cyber-libertarian leanings are tempered by the rhetoric of human rights and
by the transparency and oversight requirements associated with nonprofit status and grant
reporting requirements.
Roadmap to the chapter
Similar to the previous case studies, this chapter begins with an overview of basic
information about Telegram: what it is, what it does, and how. Next, a detailed project history
traces Telegram’s roots to Pavel Durov’s ouster from Vkontakte, the social networking site he
had founded, at the behest of the Kremlin. The chapter then analyzes Telegram’s ideology and
politics by focusing, in turn, on Telegram’s emergence in the context of Vladimir Putin’s
crackdown on technologically-enabled civil society; on Pavel Durov’s cyber-libertarianism; and
on Telegram’s peculiar business model. The analysis shows that while Telegram’s rhetoric
emphasizes user security, privacy, and freedom of expression, the company fails to demonstrate
that it actually lives up to these commitments. Rather than earning user trust through
transparency and accountability, Telegram’s value proposition hinges on blind trust in Pavel
Durov’s good intentions and his team’s stated credentials.
Data, sources and methods
As mentioned earlier, I was not able to gain access to the Telegram team for interviews,
so this chapter is based on content analysis of Telegram’s publications and of public statements
by the Durov brothers (many communicated via Telegram itself), analysis of media coverage,
and secondary sources, as well as interviews with cryptographers and others with expertise on
the subject matter. Most of the consulted sources were in English, and I used Google Translate on
a small number of Russian-language sources. As with the Signal case study, I hope to use this
initial draft to start a conversation with Telegram’s leadership in hopes that they will grant me
interviews for the next iteration of this project.
The basics: What is Telegram?
Telegram is a smartphone application that combines secure messaging with elements of a
social networking site, developed by Russian-born brothers Pavel and Nikolai Durov and
launched in August 2013. As of March 2018, Telegram boasts 200 million monthly active users
(Durov, 2018). Its all-male, 15-person development team is currently based in Dubai, and
reportedly comprises “ethnic Russians” exclusively (Telegram, 2018; Walt, 2016).
The Durovs had previously founded the social networking site Vkontakte or VK, but
were forced out by the Kremlin after the platform was used to organize mass protests against the
results of the 2011 legislative elections, widely suspected of being rigged in favor of Vladimir
Putin’s United Russia party. The FSB, Russia’s security service and a successor agency to the
KGB, asked VK to turn over user information and to remove certain content relating to the
protests, which Vkontakte CEO Pavel Durov refused to do. He was subsequently forced to sell
his interest in VK to an oligarch close to the Kremlin, after which the shares were sold to the
Russian internet company mail.ru.
The brothers have been living in exile ever since, acquiring citizenships in St. Kitts and
Nevis in exchange for $250,000 donations (each) to the island’s Sugar Industry Diversification
Foundation — only a portion of the reported $300 million they had stashed in Swiss bank
accounts. The passports allow for unlimited visa-free travel across Europe, and Telegram’s team
has operated from a variety of locations around Europe before recently moving to Dubai
(Telegram, 2018; Walt, 2016). Telegram was entirely funded by Pavel Durov himself until early
2018, when the company announced an ambitious plan to develop a blockchain-based ecosystem
comprising encrypted cloud storage, censorship-resistant technology, and a cryptocurrency called
the “Gram” (Telegram, 2018). As of early April 2018, an Initial Coin Offering (ICO) had raised over
$1.7 billion (Jeffries, 2018; Moore, 2018).
Following the walkthrough method for studying apps described by Light et al. (2016), I
downloaded the Telegram app on my personal iPhone. The user interface is very simple,
featuring a contacts list, a list of chats (groups and individual interlocutors), and the settings
interface. In addition to exchanging messages with individual contacts or groups of contacts,
users can also subscribe to large group chats known as Channels, which can include up to
100,000 users. These public Channels are akin to email distribution lists and are notably used in
Iran to circumvent restrictions on the traditional mass media. The Telegram app lacks a discovery
mechanism, however, and it seems that most users select Channels to follow through word-of-
mouth or by consulting lists maintained online. Throughout the fieldwork period, I followed
Durov’s Channel, which Pavel Durov uses to communicate with users directly.
Project history
Telegram is the brainchild of Russian brothers Nikolai and Pavel Durov, who previously
founded the social site Vkontakte or VK (meaning “in contact,” in Russian). The sons of a
university classics professor, the brothers spent part of their childhood in Italy before returning to
their native St. Petersburg. The eldest, Nikolai, holds two PhDs in mathematics, while Pavel
studied linguistics at St. Petersburg State University and learned computer programming from
his brother. After university, he “trained in propaganda” as part of his compulsory military
service, immersing himself in the writings of Sun Tzu, Genghis Khan, and Napoleon (Hakim,
2014; Yaffa, 2013).
Pavel Durov was in his early 20s when Facebook first launched on American college
campuses, and he was among a handful of Russian developers racing to build the first Russian
social networking site. VKontakte launched in 2006, with a graphic interface eerily similar to
Facebook’s, down to the blue-and-white color palette. By the time Facebook opened itself to all
users — and not just those with a .edu email address — VKontakte had cornered the market on
the Russian-speaking internet, known as the RuNet.
The Western media often calls Pavel Durov “the Russian Mark Zuckerberg,” and his
personal philosophy does indeed have much in common with Silicon Valley cyber-libertarianism.
He started VK during the fleeting period when it seemed that Russia might be
transitioning to a functional democracy, telling the New York Times that “The best thing about
Russia at that time was the Internet sphere was completely not regulated. In some ways, it was
more liberal than the United States” (Hakim, 2014). Since then, of course, the state has clamped
down on all forms of media, including the internet. “Since I’m obviously a believer in free
markets,” Durov told the New York Times, “it’s hard for me to understand the current direction
of the country” (Hakim, 2014).
Durov says that he’s “not a big fan of the idea of countries” (Hakim, 2014), and often
seems to thumb his nose at the idea of national sovereignty, and particularly at the idea that
national laws apply online, bringing to mind J.P. Barlow’s Declaration of the Independence of
Cyberspace (1996). For example, in 2007, Durov decided to allow VK users to upload music and
videos regardless of copyright, drawing opprobrium from the U.S. government and lawsuits from
the recording industry. The company didn’t start proactively enforcing copyright laws until late
2013, after the Duma, Russia’s legislature, passed a law ordering that sites that facilitate
copyright violations be blocked (Dredge, 2014; Hakim, 2014; Yaffa, 2013). In a 2012 manifesto
published in the magazine Afisha, Durov argued that Russia should “rid society of the burden of
obsolete laws, licenses, and restrictions … the best legislative initiative is absence” (Durov,
2012, cited in Yaffa, 2013).
By 2011, Vkontakte was Russia’s leading internet property. As elsewhere, Russian
internet users took advantage of the platform’s affordances to create groups and organize events
of all kinds, including many dedicated to politics and social causes. Vladimir Putin had been
prime minister for three years, following two terms as president (2000-2008). In September
2011, the Duma extended presidential terms from four to six years, and Putin announced that he
would run for president in the 2012 election. For many Russians as well as foreign observers, the
announcement dashed what hopes remained that Russia would complete its democratic
transition. The legislative elections that December were marred by reports of widespread voter
fraud, and tens of thousands of Russians took to the streets in Moscow and other cities to protest
both the flawed election and the increasingly autocratic United Russia party, to which both Putin
and president Dmitry Medvedev belonged. Opposition leader and blogger Alexei Navalny was
particularly active on social media, using VK among other platforms to organize protests and to
disseminate information related to his many grievances against the government. It was starting to
look like Russia might be about to have its own “Color Revolution,” which some dubbed the
“Snow Revolution” (Ioffe, 2011; White & McAllister, 2014).
Less than a year after the start of the Arab Spring, the received wisdom at the time was
that social media “caused” revolutions, or at the very least made them possible in otherwise
stable autocracies (see Shirky, 2011). There was little to be done about foreign platforms like
Facebook and Twitter, but the Russian security services quickly pressured Vkontakte to shut
down Navalny’s page and other groups used to plan demonstrations. Instead, Durov modified the
site to give Navalny’s posts greater visibility, and posted his “official reaction” on Twitter: an
image of a hoodie-wearing dog, sticking its tongue out. He later issued an “open letter” couching
his refusal to cooperate in business terms: “If foreign sites continue to exist in a free state, and
Russian ones begin to be censored, the Runet [Russian-language Internet] can await only its slow
death” (Durov, 2011, cited in Yaffa, 2013). The security forces were at his door before long.
Durov refused to let them in.
Durov says that this is the moment Telegram was conceived. "The no. 1 reason for me to
support and help launch Telegram was to build a means of communication that can’t be accessed
by the Russian security agencies,” he told TechCrunch (Tsotsias, 2014). With police threatening
to break down his front door, Durov called his brother Nikolai. “I realized I don’t have a safe
means of communications with him,” he told the New York Times. “That’s how Telegram
started” (Hakim, 2014). Before long, the pair “cobbled together” an encrypted messaging system
to avoid surveillance by the FSB (Walt, 2016). The fact that the Durovs “rolled their own crypto”
rather than building on established protocols and consulting expert cryptographers looms large in
technical critiques of the platform’s security.
Protests continued for several months, occasionally punctuated by mass arrests and by
counter-protests in support of Putin and United Russia. Putin won the March 2012 presidential
election, appointing his predecessor Medvedev as prime minister (Soldatov & Borogan, 2015).
In April 2013, Pavel Durov fled Russia for Buffalo, New York, after being charged with
allegedly running over a police officer’s foot with a car, and focused his attention on the tool that
would eventually become Telegram. The charges were quickly dropped, as they had achieved
their objective: running Durov out of the country. He was still nominally the head of VK and
owned 12% of the company, but he mostly kept a low profile. One exception came in August,
when he publicly offered Edward Snowden a job at VK. The announcement was made via a VK
post, characteristically (Boyette, 2013).
The pressure on Vkontakte and on Durov continued, with the FSB notably demanding
information about users belonging to VK groups focused on the Euromaidan protest movement
in Ukraine. Durov once again refused, saying: "To give the personal details of Ukrainians to the
Russian authorities would not only be against the law, but also a betrayal of all those millions
of people in Ukraine who trusted us. The freedom to disseminate information is an inalienable
right of a postindustrial society" (The Moscow Times, 2014).
Amid these tensions, Telegram launched in August 2013. Fortuitously for Telegram, its
launch was followed a few months later by the announcement that Facebook had acquired
WhatsApp, resulting in over 8 million Telegram downloads across the iOS and Android
platforms in just a few days (Tsotsias, 2014). One of Telegram’s key value propositions is that
unlike many other messaging applications it lacks any financial or institutional relationships to
the U.S. government or to American technology companies. This makes the platform particularly
appealing for millions of internet users who are skeptical of U.S. imperialism.
Still abroad, Durov submitted a resignation letter in March 2014, announced he was
stepping down as VK’s CEO on April 1, then claimed it was all an “April Fool’s joke” a few
days later. Within a few weeks, the company’s board announced that Durov was no longer the
CEO, as the letter withdrawing his previous resignation was “not in accordance with all the
rules” (The Moscow Times, 2014). The move was widely interpreted as Durov losing his stand-
off with the Kremlin.
The brothers have remained in exile ever since (Walt, 2016). The company is registered as a British LLP, and uses a series of shell companies
registered around the world. This structure is designed to help the company evade government
requests for user information and for content restrictions (Hakim, 2014) and, more generally, to
stay one step ahead of government attempts to regulate or control the company:
Durov originally based Telegram out of a small Berlin office, but the staff now works out
of a series of houses and apartments rented mostly on Airbnb.com, or out of swank hotels
like the one in London; in the summer they might be found in a rented house on a lake in
Finland. After a month or two at any one place, they move on. Durov describes his team
as “nomads.” He says Telegram is registered in several countries including the U.K.
Durov explains the peripatetic lifestyle as a way of preventing the company from
becoming embroiled in the politics or economic ups and downs of any single country—a
lesson he says he learned from the turmoil in Russia that upturned his life and lost him
his first company. “I did not want to make the same mistake of relying on a single
jurisdiction,” he says. “No matter how good a place looks, you don’t know what crazy
new regulation they will introduce” (Walt, 2016).
As of mid-2018, Telegram’s 15-man team — there are no women — is based in Dubai
(Telegram, 2018), though it is unclear how long that will last. The United Arab Emirates is far
from being a bastion of free expression or online privacy, and it seems likely that Telegram will
eventually run afoul of local regulations or otherwise displease the authorities, particularly if the
Emirati population starts using the platform for political purposes, as Telegram’s large Iranian
user base has done.
Indeed, Telegram is one of the most popular communication services in Iran, with
roughly half of the country’s population of 80 million using the platform, which plays a unique
role in the country’s communication ecosystem:
Since 2009, Iranians have become experts in avoiding censorship and circumventing
government controls. And Telegram, available outside Iran’s “filternet” and with its high
performance at low internet speeds, has become a uniquely potent and ubiquitous agent
of communication and information dissemination in the cat-and-mouse game between the
people and those trying to control them. It’s easy to store and share large files—like
videos—on the platform; it works well with Persian’s right-to-left script; and it offers the
ability to develop Persian-language bots and stickers (fun memes and images shared in
chats) on top of a simple interface. Unlike Twitter, millions of Iranians use Telegram in
their everyday lives—around 40 million monthly users in a country of 45 million overall
online users, according to the latest ITU statistics. They often rely on Telegram’s private
group chats to stay in touch with friends or family; receive their news from local and
diaspora Persian news sources on the platform’s public channels; or subscribe to traffic,
weather, shopping or entertainment updates from their neighborhood (Alimardani,
2018a).
After street protests broke out in a number of cities across the country in December 2017,
Iran’s minister of ICT asked Telegram to censor the channel of independent news outlet
AmadNews for “encouraging hateful conduct, use of Molotov cocktails, armed uprising, and
social unrest.” The channel was linked to the Green Movement, and its stated mission was to
“expose the corruption of the regime and its clandestine activities.” Hardliners within the Iranian
government had been pushing for the channel’s removal for several months, and were gratified
when Telegram finally complied on December 30 (Alimardani, 2018a). Nonetheless, Telegram
was blocked (via technical means) in Iran the very next day. The block lasted until January 13
(Alimardani, 2018b). Iran has blocked Twitter since the 2009 Green Movement protests, with
hardliners attacking the American platform as a core component of the “cultural NATO” that is
waging a “Soft War” against Iran (see Ch. 5, the Psiphon case study, for an in-depth analysis of
these constructs). Unlike Psiphon, Twitter, and other online communication tools, Telegram lacks
financial or ideological ties to the United States and its foreign policy apparatus, and had been
tolerated in Iran until the recent ban. Clearly, hardliners within the Iranian government now feel
threatened by Telegram as well — but does Telegram have an ideology?
Telegram’s Russian origins
As seen in the previous section, Telegram is the product of founders Pavel and Nikolai
Durov’s contentious relationship with the Russian state. This relationship is itself shaped by
Russia’s unique approach to information and communication policy, which long predates the
Russian Federation itself. The recent history of Russian information and communication policy
is thus important context for understanding Telegram.
As I have detailed elsewhere (Maréchal, 2017a), the Russian approach to information
policy is rooted in the country’s imperial and Soviet past. Under the USSR, information was
considered a dangerous commodity to be feared and controlled, rather than a right and a public
good. Contrary to liberal conceptions of a free press serving as a fourth branch of governance
and fostering a Habermasian public sphere (Habermas, 1989), the Soviet regime saw the media as
a danger to be tightly controlled, with only select elites permitted access to objective news or to
foreign publications (Gorny, 2007; Soldatov & Borogan, 2015). For example, ownership and use
of photocopiers were tightly restricted in an attempt to prevent the distribution of samizdat,
photocopied pamphlets of “subversive” material (Hanson, 2008).
The Soviet Union collapsed in 1991, and print and broadcast media were briefly
liberalized during the Yeltsin era, though many were owned by — and beholden to — various
oligarchs. Nonetheless, the variety of influences over the media ushered in an era of relative
media freedom. Former KGB colonel Vladimir Putin took office in late 1999, and promptly
reasserted the Kremlin’s control over the media under the auspices of “liberating” the press from
the oligarchs. For many Russia experts, understanding Putin is key to understanding Russia
today. Putin served in the Soviet intelligence agency, the KGB, for 16 years, rising to the rank of
colonel, and he spent much of the pivotal perestroika years outside of Russia. His views on
governance, the rule of law, the role of information in society, and the Russian national interest
are very much influenced by the KGB’s authoritarian traditions, themselves grounded in the
authoritarianism of imperial Russia (Soldatov & Borogan, 2015).
The end of the Cold War and collapse of the USSR, which also marked the end of
Russia’s superpower status, was a sore spot for the Russian elite, who perceived the U.S.’s
success in exporting its cultural products as a threat to national sovereignty. Elites also resented
growing U.S. influence in Eastern Europe and Central Asia, which they saw as their rightful
sphere of influence, and the European Union’s eastward expansion into the former Eastern
Bloc. Over the course of his first presidency (2000–2008), during which time domestic internet
access grew considerably, Putin came to see the information revolution as “one of the most
pervasive components of U.S. expansionism in the post-Soviet sphere, most notably in Russia
itself” (Nocetti, 2015, p. 129). Where others might have seen opportunities for innovation and
growth, Putin saw threats to the status quo and his hold on power, thus following in the footsteps
of his Soviet and pre-Bolshevik predecessors alike. However, Russia’s political classes did not
understand the internet well enough to regulate it effectively, and the RuNet flourished outside of
government control for the first decade of the Putin era.
Putin switched posts with his prime minister, Dmitry Medvedev, in 2008 to circumvent
constitutional term limits. A series of “color revolutions” and the Arab Spring solidified Putin’s
understanding of the internet as a threat to political order and national sovereignty. He notably
saw the mass protests of 2011-2012 as a component of American “information aggression”
orchestrated from Washington, specifically by then-Secretary of State Hillary Clinton (Nocetti,
2015).
Russian internet policy—in both the domestic and foreign policy spheres—is rooted in
the premise that Western countries (mainly the U.S.) use the internet to overthrow governments
in “countries where the opposition is too weak to mobilize protests” (Nocetti, 2015, p. 114)— or,
in other words, countries living under authoritarian regimes. Russian foreign policy hews to a
strict interpretation of Westphalian nation-state sovereignty, at the core of which is the principle
of non-intervention. The free and open internet threatens that principle, allowing foreign and
potentially subversive viewpoints to circulate across Russia. The “color revolutions” of the early
21st century and the Arab Spring further fueled concerns that the internet threatens both the status quo and Russian political leaders’ hold on power (Howard & Hussain, 2013;
Nocetti, 2015). Indeed, opposition groups led by Alexei Navalny used Facebook to coordinate
street protests in the aftermath of the 2011 legislative elections, and while the protests failed to
coalesce into a lasting social movement, such an outcome was not completely outside the realm
of possibility (Soldatov & Borogan, 2015; White & McAllister, 2014). Moreover, there is good
reason to believe that Putin sees the U.S., and specifically then-Secretary of State Hillary
Clinton, as directly responsible for fomenting these protests. Under this paradigm, such
interference in Russia’s domestic politics constitutes a violation of national sovereignty
tantamount to information warfare. Likewise, U.S. policy initiatives like democracy promotion
and the Internet Freedom agenda are seen as promoting political projects that are aligned with
U.S. interests, almost invariably at the expense of Russia’s own interests (Nocetti, 2015).
Putin won the 2012 presidential election in spite of the protests, and redoubled his efforts
to control the internet. It soon became clear that the traditional mechanisms for information
control — censorship and intimidation of key individuals, resulting in chilling effects — were
inadequate in the social media era (Maréchal, 2017a; Ognyanova, 2015; Soldatov & Borogan,
2015). Intimidation and financial pressure had successfully wrested control of Vkontakte from
libertarian Pavel Durov and replaced him with management that was more willing to cooperate
with the FSB’s demands, but foreign platforms like Twitter and Facebook remained outside the
security services’ reach.
The 2012 reform of the SORM (System of Operational-Investigatory Measures) surveillance system, first launched in 1995 to monitor telephone and internet communications, brought social networking sites under the mass surveillance system’s purview; the system already required internet service providers to deploy Deep Packet Inspection (DPI) technology to monitor all communications originating or terminating in Russia (Soldatov & Borogan, 2015). As Soldatov and Borogan note, the tools used to monitor social networking sites at the time had a crucial flaw:

These systems were developed for searching structured computer files, or databases, and only afterwards adapted, some more successfully than others, for semantic analysis of the Internet. Most of these systems were designed to work with open sources and are incapable of monitoring closed accounts such as Facebook.

The FSB discovered early on that the only way to deal with the problem was to turn to SORM. The licenses require businesses that rent out site space on servers to give the security services access to these servers via SORM, without informing the site owners. With this provision, the FSB has had few problems monitoring closed groups and accounts on Russian social networks Vkontakte and Odnoklassniki. But Facebook and Twitter don’t store their user data in Russia, keeping it out of SORM’s reach (Soldatov and Borogan, 2013, para. 20).

As Laura DeNardis puts it, “DPI is a transformational technology that creates unprecedented regulatory possibilities for controlling the flow of content online” (2014, p. 206). Demonstrating why this is the case requires a basic understanding of the technology itself. Information (whether it’s text, voice, or something else) is transmitted over the internet as packets, small bundles of data that are individually routed from the sender to the receiver, then put back together in the correct order. Packets consist of both a payload (the actual content of the communication) and a header, which contains the packet’s metadata: its origin, destination, and not much else. The header is analogous to an envelope, telling each piece of equipment along the way where the payload should be delivered. Until fairly recently, computing power limited the types of analyses that routers, switches, and other network hardware could perform on passing traffic, but advances in this domain have made it possible for hardware to simultaneously process millions of packets, reading not just the headers but the payloads as well. Unless the packet is encrypted, the only impediment to a DPI-capable machine reading the payload is social and legal norms against this type of surveillance—which are absent in Russia. From there it is possible to block or throttle traffic based on its origin, destination, file type (text, voice, multimedia), protocol (P2P, FTP, HTTP, SMTP), or the content of the message itself (DeNardis, 2014).
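To make the header/payload distinction concrete, the sketch below models the two filtering approaches in Python. Everything in it (the field names, the blocklist, the banned terms) is invented for illustration; real DPI systems inspect raw traffic in specialized hardware at line speed. The underlying logic is the same, however: header-based filtering reads only the “envelope,” while DPI reads the payload itself unless encryption renders it opaque.

```python
# Illustrative sketch only: a toy model of the difference between
# header-based filtering and deep packet inspection (DPI).
# All names here are hypothetical; real DPI equipment operates on
# raw network traffic at line speed, not on Python objects.
from dataclasses import dataclass

@dataclass
class Packet:
    src: str        # header metadata: origin address
    dst: str        # header metadata: destination address
    payload: bytes  # the actual content of the communication
    encrypted: bool = False

def header_filter(pkt: Packet, blocked_hosts: set[str]) -> bool:
    """Classic filtering: looks only at the 'envelope' (header)."""
    return pkt.src in blocked_hosts or pkt.dst in blocked_hosts

def dpi_filter(pkt: Packet, banned_terms: list[bytes]) -> bool:
    """DPI: reads the payload itself -- but only if it is unencrypted."""
    if pkt.encrypted:
        return False  # encryption leaves only the header visible
    return any(term in pkt.payload for term in banned_terms)

pkt = Packet(src="10.0.0.5", dst="news.example.org",
             payload=b"protest planned for saturday")
print(header_filter(pkt, {"social.example.com"}))  # False: host not blocked
print(dpi_filter(pkt, [b"protest"]))               # True: payload matched
```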
This desire to gain access to Russian users’ online activities was the key motivation for
data localization laws promulgated in the aftermath of the Snowden revelations. While the laws’
proponents claimed to be motivated by a desire to protect Russians’ personal data from the U.S.
National Security Agency (NSA), these requirements do nothing to impede NSA spying while
facilitating the SORM system’s access (Maréchal, 2017a; Sargsyan, 2016). Russian media
regulator Roskomnadzor began enforcing data localization requirements for foreign companies
in 2017, and foreign companies face the possibility of being blocked in Russia if they don’t
comply. LinkedIn (owned by Microsoft) became the first foreign site to be banned for failing to
comply with the data localization requirement in October 2016 (Rothrock, 2016).
In 2016, Russia passed a draconian legislative package, known as the Yarovaya Laws
after one of its sponsors, that further restricts privacy and freedom of expression both offline and
online. Among other things, the laws require “information dissemination services” to register
with the state media regulator, to store all message content for six months and metadata for three
years, and to preemptively make decryption keys available to the authorities (International
Center for Not-for-Profit Law, 2016). Telegram complied with the registration requirement (the requirement to store message content came into effect on July 1, 2018; other requirements were already in force),
but refused to share decryption keys, for which it was fined 800,000 rubles (roughly $14,000
USD). The company lost its appeal in March 2018, and on April 6 the media regulator filed suit against Telegram in an effort to ban the platform in Russia (Krishna, 2017a, 2017b, 2018a, 2018b). On April 13, after an 18-minute hearing, the court ordered Russian ISPs to start blocking
Telegram, though the RuNet was abuzz with instructions for circumventing the ban within hours
(Stubbs & Ostroukh, 2018).
This court battle is but one example of how internet technologies challenge traditional
conceptions of sovereignty: countries can block services or content, but enforcing these blocks is
technically complex and very expensive. The case also illustrates two important things about the
company and its co-founders. The Durovs’ lawyers advanced two lines of argumentation: first,
that the requirement to share decryption keys was unconstitutional, and second, that Telegram
was unable to comply with the requirement because it did not itself possess the decryption keys:
the end-to-end encrypted Secret Chats use keys that Telegram cannot access, and the keys used
to secure “normal” chats and public channels while in transit are stored in a distributed fashion
across multiple legal jurisdictions (Telegram, n.d.). As we will see in the next section, these are
deliberate design choices that reflect Pavel Durov’s libertarian political ideology.
Telegram’s ideology
This section delves further into Telegram’s ideological underpinnings to draw
connections between this ideology and Telegram’s design choices, policy decisions, and how
people use the platform to political ends. I argue that Telegram is a cyber-libertarian project in
the cypherpunk tradition that is untempered by regulation, corporate governance, or
accountability to any higher authority than Pavel Durov himself. This appears to be an
intentional feature, not a bug, albeit one that should give users pause.
As discussed in Chapter 3, cyber-libertarianism is “the belief that individuals—acting in
whatever capacity they choose (as citizens, consumers, companies, or collectives)—should be at
liberty to pursue their own tastes and interests online,” an outlook that aims “to minimize the scope of state coercion in solving social and economic problems and looks instead to voluntary solutions and mutual consent-based arrangements” (Thierer & Szoka, 2009, para. 1-2).
As we saw in the Project History section of this case study, Telegram originated as a way
for the Durov brothers to communicate securely while they (and Vkontakte) were engaged in a
high-stakes standoff with Russian authorities in late 2011. The platform launched in mid-2013,
and became the Durovs’ sole focus within a year, as Pavel was ousted from VK the following
spring.
Media portrayals of Pavel Durov have described his political leanings as “the sort of
techno-utopian, libertarian ideas popular in Silicon Valley” (Yaffa, 2013). Indeed, the White
Paper issued in advance of the Telegram ICO is clear about Telegram’s ideological
underpinnings: “Telegram was founded in 2013 by libertarians to preserve freedom through
encryption” (Telegram, 2018, p. 5). The idea of “freedom through encryption” is, of course, the
basic tenet of the cypherpunk ideology. Grounded in the writings of David Chaum (1985), Tim
May (1995), J. P. Barlow (1996), and Eric Hughes (1997), among others, the cypherpunk
movement is discursively sustained through the Cypherpunk Mailing List and technologically
embodied in computer programs like Pretty Good Privacy, anonymous remailers, and
cryptocurrencies. The cypherpunks envisioned a world where computer code — specifically
encryption — would help end the nation-state’s dominion over individual lives and bring about a
libertarian utopia predicated on individual autonomy and free association (Levy, 2001; West,
2018).
Pavel Durov has long been vocal in his rejection of the nation-state as a legitimate source
of authority. He says that he considers himself “a legal citizen of the world,” and has taken a
number of steps to sidestep national regulation, including his purchase of citizenship in St. Kitts
and Nevis. At the same time, he seems to have some degree of national pride: "My dream is to
break the national inferiority complex, proving that products from Russia can be in mass demand all over the world" (Kononov & Igumenov, 2011).
In 2012, he published a manifesto in the magazine Afisha that called on Russia to “rid society of the burden of obsolete laws, licenses, and restrictions … the best legislative initiative is absence” (Durov, 2012, cited in Yaffa, 2013). Durov’s biographer Nikolai Kononov describes
the VK founder’s impression of Mark Zuckerberg, after the two met in San Francisco:
Durov asked Zuckerberg: "What do you think about Twitter?" Zuckerberg did not discuss
Twitter and started talking about social networks. The libertarian Durov felt in him a
revolutionary brother. "We had more in common than [I did] with the business characters
[within Facebook],” he said after. "Mark is an anarchist, but not in terms of denying
power and order, but in terms of understanding the outdated nature of the state." The
architects agreed that social networks are a superstructure over humanity, allowing
information to spread past the centralizing horns of the state (Kononov, 2013, cited in
Forbes Staff, 2012).
Durov and Zuckerberg both seem to view the nation-state as an obsolete legacy of a
bygone era, soon to be replaced as the main organizing logic for human societies by
technological platforms like their own creations (incidentally, placing vast amounts of wealth
and power into their own hands). But the two men are the products of very different
circumstances, and their careers — as well as the platforms they each spawned — would be
shaped by dramatically different social, political, and economic contexts. While Pavel Durov
chiefly turns to his brother Nikolai for advice, Zuckerberg has long been surrounded by venture
capitalists, lawyers, business school graduates, and other products of late American capitalism
who molded Facebook into the (largely) law-abiding surveillance capitalism behemoth it is today (see Vaidhyanathan, 2018).
In contrast, Durov has sought to keep his companies out of reach of both the law and the
market. Under Durov’s leadership, Vkontakte resisted enforcing copyright laws, and for a time
hosted over half of the copyrighted audio and video content on the RuNet. He only agreed to
remove copyrighted content — upon request from the rights holder — after pressure from VK’s
shareholders, who were concerned that enabling copyright violations would stand in the way of
an IPO on Western stock exchanges (Kononov & Igumenov, 2011). As for Telegram, both the
company’s legal structure and its technical architecture deliberately span multiple jurisdictions as
a way to avoid being subjected to any one government’s authority (or so Telegram’s public
documentation claims).
Durov rejects not only the nation-state, but also surveillance capitalism, the business
model based on targeted advertising that sustains companies like Google, Facebook, and a
growing cross-section of firms in other sectors (Zuboff, 2015). This is an important contrast to
Mark Zuckerberg, who infamously declared that “privacy is dead” and has vociferously (and
incorrectly) denied any causal relationship between his company’s business model and the
current misinformation crisis (Vaidhyanathan, 2018). Durov resisted an advertising-based
business model for VK as long as possible, finally relenting in 2008 (Kononov & Igumenov,
2011), and Telegram pledges never to seek advertising revenue, charge user fees, or sell
traditional shares in the company (Telegram, n.d.). Telegram presents this choice in a positive
light, arguing that users should trust the platform because Durov is accountable only to his own
ideals, rather than to shareholders’ hunger for dividends or advertisers’ voracious appetite for
user data. While this implicit critique of capitalism resonates with many audiences, Telegram’s
value proposition requires complete faith in the Brothers Durov, their good intentions, and their
technical capacity to deliver on their promises.
The online FAQ defines privacy in opposition to the Silicon Valley business model:
Q: What are your thoughts on internet privacy?
Big internet companies like Facebook or Google have effectively hijacked the privacy
discourse in the recent years. Their marketers managed to convince the public that the
most important things about privacy are superficial tools that allow hiding your public
posts or your profile pictures from the people around you. Adding these superficial tools
enables companies to calm down the public and change nothing in how they are turning
over private data to marketers and other third parties.
At Telegram we think that the two most important components of Internet privacy should
be instead:
1. Protecting your private conversations from snooping third parties, such as
officials, employers, etc.
2. Protecting your personal data from third parties, such as marketers,
advertisers, etc.
This is what everybody should care about, and these are some of our top priorities.
Telegram's aim is to create a truly free messenger, without the usual caveats. This means
that instead of diverting public attention with low-impact settings, we can afford to focus
on the real privacy issues that exist in the modern world (Telegram, n.d.).
While Telegram is right to point out that Silicon Valley corporations rhetorically redefine
privacy as being about other users as a way to avoid discussing their surveillance-based business
model, the company fails to prove that it lives up to its own commitments.
For now, Telegram is not completely free from national regulation, though it has taken a
number of steps to distance itself from governmental attempts to assert power. Though it claims
to operate as a non-profit, Telegram is structured as a for-profit British LLP that is itself owned
by a complex series of shell companies registered in various tax havens. Its decentralized technical architecture is likewise designed to store user data and encryption keys across several jurisdictions (an architecture that emulates earlier projects like MojoNation and BitTorrent; see Beyer & McKelvey, 2015), thus thwarting government requests for user information:
Q: Do you process data requests?
Secret chats use end-to-end encryption, thanks to which we don't have any data to
disclose.
To protect the data that is not covered by end-to-end encryption, Telegram uses a
distributed infrastructure. Cloud chat data is stored in multiple data centers around the
globe that are controlled by different legal entities spread across different jurisdictions.
The relevant decryption keys are split into parts and are never kept in the same place as
the data they protect. As a result, several court orders from different jurisdictions are
required to force us to give up any data.
Thanks to this structure, we can ensure that no single government or block of like-minded
countries can intrude on people's privacy and freedom of expression. Telegram can be
forced to give up data only if an issue is grave and universal enough to pass the scrutiny
of several different legal systems around the world.
To this day, we have disclosed 0 bytes of user data to third parties, including governments
(Telegram, n.d.).
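Telegram’s public documentation does not say how the decryption keys are actually split, so the following is only a sketch of the general idea, assuming a simple XOR-based secret-splitting scheme in which every share is required to reconstruct the key and any subset of shares is useless on its own. All function names here are hypothetical; a production system would more plausibly use an audited threshold scheme such as Shamir’s secret sharing.

```python
# Minimal sketch of key splitting, assuming a simple XOR scheme.
# Telegram's documentation does not specify its actual mechanism;
# this only illustrates that no single share reveals the key.
import secrets

def split_key(key: bytes, n_shares: int) -> list[bytes]:
    """Split a key into n shares; all shares are required to rebuild it."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n_shares - 1)]
    last = key
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]

def recombine(shares: list[bytes]) -> bytes:
    """XOR all shares together to recover the original key."""
    key = shares[0]
    for s in shares[1:]:
        key = bytes(a ^ b for a, b in zip(key, s))
    return key

key = secrets.token_bytes(32)        # a 256-bit decryption key
shares = split_key(key, 3)           # e.g., one share per jurisdiction
assert recombine(shares) == key      # all three together recover the key
assert recombine(shares[:2]) != key  # a subset yields only random noise
```

On this model, a government that seizes the data in one jurisdiction holds ciphertext plus a share that is statistically indistinguishable from random bytes, which is the property Telegram’s FAQ gestures at when it says several court orders from different jurisdictions would be required.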
Telegram thus rejects all government oversight, even when constrained by the rule of law
and judicial review. This position goes well beyond the demands of most digital rights
campaigners, who argue that intermediaries should only turn over user information pursuant to a
court order or equivalent legal process, and that companies should publish figures on requests for
user data and on their own compliance with such requests, thus holding both companies and
governments accountable to a web of civil society organizations (MacKinnon, Maréchal &
Kumar, 2016).
Public channels can be accessed by anyone with a Telegram account, so the concern here
relates to private conversations between users. Telegram supports end-to-end encryption, but this
isn’t enabled by default, a choice for which Telegram is criticized by many privacy advocates.
One privacy and encryption expert I interviewed told me that he believes this is a political choice
designed to placate governments concerned about terrorists and criminals using the platform to
coordinate their activities (phone interview, March 31, 2018). Indeed, Telegram has been widely criticized for supposedly being
the “terrorists’ app of choice,” following disputed media reports that Telegram had been used to
plan attacks in Paris, Nice, London, and elsewhere. However, sufficient evidence to substantiate
this claim is lacking. Commentators in the media and elsewhere tend to blame technology for
social ills like terrorism, reflecting a widespread belief in technological determinism. Various
social actors imbue technology with agency as a way to deflect blame from themselves and to shift sole responsibility for solving social problems onto technology companies, who in turn insist that their technologies are “neutral” precisely to avoid being held responsible for anything.
Pavel Durov disputed the notion that Telegram bore any responsibility for the attacks in a
virulently xenophobic statement imbued with a classically libertarian rejection of taxation:
"The French government is as responsible as the Islamic State for this because it is their
policies and carelessness that eventually led to the tragedy. They take money away from
hardworking people of France with outrageously high taxes and spend them on waging
useless wars in the Middle East and on creating a parasitic social paradise for North
African immigrants" (Quénelle, 2016).
Durov is no doubt correct that the availability of encrypted Secret Chats did not cause
terrorism, though his political analysis leaves much to be desired. Nonetheless, this exemplifies
how narratives of technological determinism can feed technology companies’ impulse to claim
“neutrality” and reject responsibility for the real-world impacts of their products.
Similarly, the company rejects any responsibility to moderate user content, with the
exception of narrowly defined “public content” and content related to terrorism, notably ISIS.
Telegram deleted 78 different ISIS-related channels shortly after the November 2015 Paris
attacks, and continues to monitor such content closely and periodically shuts down channels
related to terrorism. It seems to do so proactively, however, rather than in response to
government requests, despite not having a Terms of Service Agreement or Community Standards codifying rules for user content. This paragraph from the Telegram website’s FAQ page is all the platform
communicates to users about content removals:
Q: Wait! 0_o Do you process take-down requests from third parties?
Our mission is to provide a secure means of communication that works everywhere on
the planet. To do this in the places where it is most needed (and to continue distributing
Telegram through the App Store and Google Play), we have to process legitimate requests
to take down illegal public content (e.g., sticker sets, bots, and channels) within the app.
For example, we can take down sticker sets that violate intellectual property rights or
porn bots.
User-uploaded stickers sets, channels, and bots by third-party developers are not part of
the core Telegram UI. Whenever we receive a complaint
at abuse@telegram.org or dmca@telegram.org regarding the legality of public content,
we perform the necessary legal checks and take it down when deemed appropriate.
Please note that this does not apply to local restrictions on freedom of speech. For
example, if criticizing the government is illegal in some country, Telegram won’t be a
part of such politically motivated censorship. This goes against our founders’ principles.
While we do block terrorist (e.g. ISIS-related) bots and channels, we will not block
anybody who peacefully expresses alternative opinions (Telegram, n.d.).
Unlike other platforms like Facebook, Twitter, and even Vkontakte, Telegram does not
provide any further details about the types of content (beyond copyrighted materials subject to
the Digital Millennium Copyright Act, or DMCA) it might remove or the process for evaluating
such requests. Nor does it publish so-called “transparency reports,” which list the number of
government requests for content removal that a platform has received, complied with, or both.
This practice was pioneered by Google in 2010, and has since become standard practice in the
technology sector, though civil society continues to push companies for more transparency about
how their policies and practices impact users’ rights to privacy and freedom of expression.
Telegram’s opacity in this regard is one of the critiques leveled against the company by digital
rights advocates.
As my co-authors and I have argued elsewhere (MacKinnon, Maréchal & Kumar, 2016),
this transparency serves to create mechanisms by which companies, governments, and civil
society groups hold one another accountable for meeting their respective commitments to respect
and protect human rights in the digital age, thus bridging a critical governance gap in global
society. Telegram simultaneously rejects such a compact even as it falls short of the open source
standards that form the cornerstone of the cypherpunk ideal. Pavel Durov would have Telegram’s
users trust Telegram without subjecting the company’s code or business operations to any outside
scrutiny.
Finally, Pavel Durov’s paradigmatic view of the world seems to combine libertarianism
with an idiosyncratically Russian understanding of politics. In an interview with Yasha Levine,
an outspoken critic of the Internet Freedom agenda and the digital rights movement, Durov said
that “he could not understand how people could trust a supposedly anti-government weapon that
was being funded by the very same U.S. government it was supposed to protect its users
from” (Levine, 2017). Levine drew an analogy to the Soviet era:
Imagine if the KGB funded a special crypto fax line and told Aleksandr Solzhenitsyn and
dissident samizdat writers to use it, promising that it was totally shielded from KGB
operatives. Then imagine that Solzhenitsyn would not only believe the KGB, but would
tell all his dissident buddies to use it: “It’s totally safe.” The KGB’s efforts would be
mercilessly ridiculed in the capitalist West, while Solzhenitsyn would be branded a
collaborator at worst, or a stooge at best (Levine, 2017).
Durov agreed with the assessment: “I don’t think it’s a coincidence that we both
understand how naïve this kind of thinking is, and that we were both born in the Soviet
Union” (Levine, 2017).
The problem with Levine’s analogy is that it assumes that governance works the same way in 21st-century America as it did in the Soviet Union. It is impossible to prove a negative, of
course, and I can’t categorically rule out the NSA making a pact with Moxie Marlinspike to
“backdoor” the Signal Protocol, but it seems highly unlikely, especially since Signal’s codebase
is entirely open-source and is regularly scrutinized by the world’s top cryptographers (field notes, 2015-2018). Durov’s
(and Levine’s) apparent belief that the U.S. government is a unitary actor whose actions are
internally consistent and coordinated at a high level is consistent with an idiosyncratically
Russian understanding of politics rooted in the country’s Soviet heritage. As Andrei Soldatov and
Irina Borogan write in The Red Web,
[The KGB] were trained to think that every person was only driven by baser, inferior
motives. When confronting Soviet dissidents, they looked for money, dirty family secrets,
or madness, as they couldn’t accept for a second that someone could challenge the
political system simply because they believe in their cause.
Putin is a product of this thinking. He doesn’t believe in mankind, nor does he believe in
a benign society — the concept that people could voluntarily come together to do
something for the common good. Those who tried to do something not directed by the
government were either spies — paid agents of foreign hostile forces — or corrupt — i.e.
paid agents of corporations (Soldatov & Borogan, 2017, p. 336).
This seems to be the lens through which Durov and Levine view the relationship between
the Internet Freedom agenda and the digital rights movement: it is nearly unthinkable to them
that civil society actors would develop digital rights tools without being directed to do so.
According to this logic, the financial relationship between these actors and government
institutions implementing the Internet Freedom agenda proves this causal link.
Crypto controversies
While media coverage of Telegram makes much of its encrypted Secret Chats, linking the
affordance to terrorism, criminality, and the like, cryptography experts have cast doubt on the robustness of the Durovs’ homegrown encryption protocol, dubbed MTProto. Though a technical
evaluation of the protocol’s shortcomings would fall beyond the scope of this chapter (and of my
expertise), it is important to underscore that Telegram’s claims to unbreakable encryption have
come under serious attack from respected experts.
The core of the critique of MTProto is that rather than building on established encryption protocols and collaborating with experienced experts, the Durovs “rolled their own crypto,” thus breaking the “cardinal rule of cryptography” (Clary, 2016; Cox, 2015). As Runa Sandvik,
Director of Information Security at the New York Times and former Tor developer, told
Motherboard,
Asking why you should not roll your own crypto is a bit like asking why you
should not design your own aircraft engine. The answer, in both cases, is that
well-studied and secure options exist. Crypto is hard and I would rather rely on
encryption schemes that have been studied and debated than schemes that are
either secret or have yet to receive much, if any, attention (Cox, 2015).
Additionally, Telegram has failed to provide the necessary documentation for
independent cryptographic evaluations. The code for the application itself (i.e., the app that users install on their own machines) is open source, but the documentation is reportedly incomplete (see Couprie, 2013). Moreover, code for the server-side software is not available, with Telegram’s FAQ merely stating that “all code will be released eventually” (Telegram, n.d.). Evaluations of the portions of Telegram’s code that are publicly available have uncovered a number of serious flaws, notably leaving the platform’s users vulnerable to man-in-the-middle (MITM) attacks (Jakobsen, 2015). Until Telegram makes all its code available to outside review, security-conscious users would do well to exercise caution.
Telegram’s libertarian business model
Durov’s commitment to keeping Telegram free to use and aversion to losing control of
the company to outside investors (as he did with VK) rule out the revenue streams that sustain
many tech start-ups. Moreover, his hostility to state sovereignty in general and to U.S. foreign
policy specifically preclude the types of grants and contracts that sustain the other projects
analyzed in my larger study (Psiphon, Tor, and Signal), leaving very few avenues for revenue
generation. This context explains Telegram’s embrace of the cryptocurrency craze. The 2018
Initial Coin Offering (ICO) promised not only a much-needed influx of no-strings-attached cash,
but also a cypherpunk, cyber-libertarian future where Telegram controls its own technical
infrastructure, keeps nation-states at arm’s length, and even issues its own currency. Given Pavel
Durov’s professed libertarianism, it is perhaps unsurprising that Telegram is turning to
cryptocurrency to secure its financial future. Indeed, as early as 2012, Durov called the idea of
national currency “anachronistic” (Durov, 2012, cited in Yaffa, 2013).
For the nearly five years of its existence, Telegram seems to have been solely financed by
Pavel Durov, who made a reported $300 million from the forced sale of his shares in Vkontakte.
The company, which is legally structured as a British LLP, is privately held and maintains “a
deliberately complex structure of scattered global shell companies intended to keep it a step
ahead of subpoenas from any one government” (Hakim, 2014). There are no financial statements
(audited or not) available publicly, and Pavel Durov’s public statements are the sole source of
information about Telegram’s finances.
If Durov is to be believed, Telegram is not a commercial venture designed to earn its
creators money but an ideological one. A recent blog post explained:
This is why you – our users – have been and will always be our only priority. Unlike
other popular apps, Telegram doesn’t have shareholders or advertisers to report to. We
don’t do deals with marketers, data miners or government agencies. Since the day we
launched in August 2013 we haven’t disclosed a single byte of our users' private data to
third parties.
We operate this way because we don’t regard Telegram as an organization or an app. For
us, Telegram is an idea; it is the idea that everyone on this planet has a right to be free.
Above all, we at Telegram believe in people. We believe that humans are inherently
intelligent and benevolent beings that deserve to be trusted; trusted with freedom to share
their thoughts, freedom to communicate privately, freedom to create tools. This
philosophy defines everything we do (Durov, 2018).
Durov claims to have turned down offers of financial backing from some of “the most famous” venture capital firms in Silicon Valley. Instead, he prefers to fund Telegram
himself, spending a reported $1 million a month on salaries, infrastructure costs, and other
expenses (Walt, 2016). However, this is not sustainable indefinitely.
The FAQ page has long left open the possibility that Telegram might “introduce non-
essential paid options to support the infrastructure and finance developer salaries,” stressing that
“making profits will never be an end-goal for Telegram” (Durov, 2018). In 2016, Durov told Fortune’s Vivienne Walt: “We still have a few years. But it would be responsible for us to come up with a business model within a year or two from now” (Walt, 2016).
Telegram seems to have found its business model in the hype surrounding blockchains
and cryptocurrencies. A blockchain is a distributed ledger system whose entries are
cryptographically verified, thus protecting the entries from tampering. Rather than residing in a
centralized database, the information stored on the blockchain is distributed among a large
number of machines that verify each other’s work. The blockchain is the core technology behind
Bitcoin, Ethereum, and other cryptocurrencies, which facilitate pseudonymous monetary
transactions over the internet. There has also been intense interest in other potential applications
over the past few years, such as digital identity schemes and so-called “smart contracts.” In 2016,
creators of new cryptocurrencies began raising startup funds through Initial Coin Offerings
(ICOs), which are similar to Initial Public Offerings (IPOs), with a key difference: rather than
company shares and a promise of future dividends, investors receive tokens, or units of the future
currency, that they are often prohibited from selling for a predetermined period of time under the
terms of the ICO. If the cryptocurrency takes off, investors will eventually be able to use their
tokens for purchases. If it doesn’t, the value of the investment is lost.
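The tamper-evidence property described above is straightforward to illustrate: if each ledger entry commits to a cryptographic hash of the previous entry, then altering any old record invalidates every later link. The sketch below is a toy model only, with function names invented for this example; real blockchains add distributed consensus, digital signatures, and incentive mechanisms, which is where the engineering difficulty actually lies.

```python
# Toy model of a hash-chained ledger (illustrative only; omits
# consensus, signatures, and everything else a real blockchain needs).
import hashlib
import json

def entry_hash(entry: dict) -> str:
    # Hash a canonical serialization of the entry.
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append(chain: list[dict], data: str) -> None:
    # Each new entry commits to the hash of the previous entry.
    prev = entry_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "data": data})

def verify(chain: list[dict]) -> bool:
    # Re-check every link; any edited entry breaks all later links.
    return all(chain[i]["prev_hash"] == entry_hash(chain[i - 1])
               for i in range(1, len(chain)))

ledger: list[dict] = []
append(ledger, "alice pays bob 5")
append(ledger, "bob pays carol 2")
print(verify(ledger))                     # True
ledger[0]["data"] = "alice pays bob 500"  # tamper with an old entry
print(verify(ledger))                     # False: the chain no longer checks out
```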
Amid a growing frenzy of ICOs, the U.S. Securities and Exchange Commission (SEC)
released an Investor Bulletin cautioning potential investors that “new technologies and financial
products, such as those associated with ICOs, can be used improperly to entice investors with the
promise of high returns in a new investment space” (U.S. Securities & Exchange Commission,
2017). The Investor Bulletin emphasized that many tokens offered as part of ICOs were
securities subject to federal securities laws. Notably, ICOs may be restricted to “accredited
investors” who can demonstrate either $200,000 in annual income or a net worth of at least $1
million (U.S. Securities & Exchange Commission, 2017).
In January 2018, the specialized blockchain/cryptocurrency press (yes, that’s a thing) started buzzing with news concerning a “Telegram ICO.” The public ICO, planned for March 2018, would be
preceded by a “pre-sale” limited to accredited investors willing to invest large sums of money,
with a floor as high as $20 million. Investors would receive tokens called “Grams,” which would
eventually be the unit of exchange for Telegram’s native cryptocurrency economy. The public
ICO in March would be open to anyone (Constine, 2018). The pre-sale raised a record $850
million from 81 different investors, and was followed in February by a second “secretive” pre-
sale, which also raised $850 million, from 94 different investors (Jeffries, 2018; Moore, 2018).
Having thus raised $1.7 billion in pre-sales, Telegram cancelled the public ICO, possibly as a
way to sidestep SEC requirements (Liao, 2018). Created in the aftermath of the 1929 stock market crash, the SEC’s mission is to protect investors, facilitate capital formation, and maintain fair and efficient markets, in part by requiring entities involved in the securities trade to treat investors honestly and to provide sufficient information about the securities in question for investors to do due diligence. Circumventing the SEC allows
Telegram to avoid potentially onerous disclosure requirements that would allow investors to
verify that the “Gram” is not, in fact, snake oil.
The funds raised are intended to finance the development of the new Telegram Open
Network (TON), described in a leaked Technical White Paper authored by Nikolai Durov:
The Telegram Open Network (TON) is a fast, secure and scalable blockchain and
network project, capable of handling millions of transactions per second if necessary, and
both user-friendly and service provider-friendly. We aim for it to be able to host all
reasonable applications currently proposed and conceived. One might think about TON
as a huge distributed supercomputer, or rather a huge “superserver”, intended to host and
provide a variety of services (Durov, 2017, p. 1).
The 132-page document provides a complex technical explanation of TON, heralding its
potential as “a truly scalable general-purpose blockchain project, capable of accommodating
essentially any applications that can be implemented in a blockchain at all” (Durov, 2017, p. 78).
Attempting to evaluate TON on its technical merits would be well beyond the scope of this
project; however, some experts are skeptical. Cryptographer Matt Green, who teaches at Johns
Hopkins University, told The Verge: “So to their credit, Telegram has shown that it can execute
and get software written. That’s actually a big deal when it comes to blockchain projects. That
plus millions of dollars means they could pull something off. But I’ll be honest, the white paper
reads like someone went out on the internet and harvested the most ambitious ideas from a dozen
projects and said ‘let’s do all of those but better!’ It feels unachievable, at least at the scale
they’re aiming for now” (Jeffries, 2018). Indeed, the White Paper “promises an Ethereum-like
ecosystem with apps, services, and a store for digital and physical goods,” as well as “a suite of
blockchain-based products including file storage, a DNS service, and an ad exchange” (Jeffries,
2018). But it isn’t clear that this ambitious scheme will actually come to fruition, and some
suspect that the ICO is primarily a mechanism to generate cash flow for Telegram:
Others have speculated that Durov is not really raising money for a new blockchain-
centric venture, but simply to keep Telegram afloat. Durov was reportedly self-funding
the company with his earnings from selling VK.com, the Russian Facebook clone that he
founded. “With growing user base, he would’ve eventually run out of money. Therefore
he opted for an ICO as a mechanism to raise funds without getting outside investors into
Telegram’s shareholder capital,” Gregory Klumov, CEO of the government blockchain
company Stasis, told Bloomberg (Jeffries, 2018).
Some in the crypto community remain skeptical of TON. “I just think this is the CEO’s
way of monetizing Telegram, basically,” says Jackson Palmer, the founder of early
cryptocurrency Dogecoin (Constine, 2018).
"It really felt like it was one of these start-ups that's burning through cash and needs a
way to bring money in to keep funding their operations," said [Digital Currency Group’s
Travis] Scher. "This is how they decided they're going to do it" (Levy, 2018).
Indeed, the SEC Form D (“Notice of Exempt Offerings of Securities”) lists the intended
“Use of Proceeds” as “the development of the TON Blockchain, the development and
maintenance of Telegram Messenger and the other purposes described in the offering
materials” (Palmer, 2018; SEC Form D submitted by TON Issuer Inc., 2018), and there doesn’t
appear to be an “alpha” version of the TON platform yet (Dale, 2018).
Telegram has ambitious plans for the next 18 months: launching the “Telegram External
Secure ID,” the “Minimal Viable Testing Network of TON,” and Telegram Wallet by the end of
2018, and creating a “TON-based economy in Telegram” as well as launching “TON Services,
TON Storage, and TON Proxy” in the first half of 2019 (Telegram, 2018, p. 15). Whether these
plans are realistic remains to be seen, but what seems clear is that the ICO has generated enough
income to sustain Telegram’s existing activities for the foreseeable future. Whether Telegram’s
“investors” will actually receive any Gram tokens is an open question.
Conclusion
Telegram stands in contrast to the other case studies examined in this dissertation: its
founders hail from outside North America, it lacks any institutional or financial connection to the
U.S. Internet Freedom agenda, and its opaque business model places complete control of the
project in the hands of its founders. Moreover, its ideological commitment to “freedom” is rooted
in libertarian principles rather than a commitment to human rights. This is significant because
Pavel Durov’s brand of cyber-libertarianism recognizes no higher authority than himself: he
defers to neither the laws of nation-states nor international human rights standards. Moreover,
while Telegram claims to be open source, it fails to provide enough information (i.e., code) to
allow others to verify the company’s claims, and cryptography experts have expressed serious
reservations about the security of its Secret Chats in particular.
Yet Telegram meets the definition of “digital rights technology” that I presented at the
beginning of this dissertation: hardware and software tools that allow individuals to better protect
their privacy, access the information they wish to access notwithstanding censorship attempts by
nation-states or other actors, express themselves as they wish in both the public sphere and in
private, or any combination of the above. Telegram was explicitly created to help Pavel and
Nikolai Durov avoid Russian state surveillance, and company documentation frequently
references Telegram’s non-commercial mission and commitment to libertarianism. In Russia,
Iran, and elsewhere, Telegram’s public Channels are used to disseminate news and political
content that would be censored in the traditional media, and Secret Chats (supposedly protected
by “unbreakable” encryption) provide a space for mass mobilization. Acute political and legal
battles over Telegram are a testament to the growing role that the platform plays in the political
life of many societies.
Its mission of providing freedom through encryption places Telegram in the cypherpunk
tradition, even though its patchy disclosure of source code is at odds with that tradition. Like the
original cypherpunks, Durov sees the relationship between the state and its citizens as inevitably authoritarian and oppressive, leaving no room for the idea of democratic, rights-respecting governments constrained by the consent of the governed, as evidenced by Telegram’s avoidance of SEC oversight. Nor does the company recognize civil society as an independent political actor, declining to engage in transparency reporting (and therefore to be held accountable by globally networked non-profit watchdogs) and casting aspersions on other digital rights projects based on
their funding models. In my view, Telegram’s ideology precludes the possibility of political
projects designed for the broader good of society. Durov seems resigned to a vision of politics
and policy-making as a zero-sum game with winners and losers, and it appears that after losing
his battle against the Kremlin for control of VKontakte, he is now determined to remain firmly in
control of his newest venture and, just as importantly, beyond the reach of the state. This is
certainly understandable, from his perspective, but is unlikely to do much for global human
rights, peace, or prosperity. Like many other tech pioneers, Durov seems woefully uneducated
about political philosophy, social theory, and the finer points of policymaking, and his concerted
efforts to place himself beyond all accountability should be cause for concern.
Conclusion
Introduction to the conclusion
This dissertation has examined the emerging geopolitics of information through a study
of the transnational social movement fighting to preserve citizen control of communication
technologies against governmental and corporate regulations and restrictions. This digital rights
movement possesses a broad repertoire of contention, including technology development,
litigation, public awareness campaigns, distributed denial of service attacks, insider advocacy,
and even electoral politics as exemplified by the Pirate Parties in several countries (Croeser,
2012). This research project focuses on four case studies centered on tools that are the subject of
intense public controversy and collectively impact the ability of billions of people worldwide to
exercise their communication rights.
This conclusion first summarizes the key themes and major findings of the seven
substantive chapters comprising the study, before turning to three major analytical tensions that
run throughout the study: the professionalization of the digital rights space, the relationship
between the digital rights movement and the U.S. government’s Internet Freedom agenda, and
the impact of digital rights technologies on geopolitics.
Executive Summary
Chapter 1, Information Controls, Digital Rights, and Resistance, explores the reasons
behind the rapid growth of information controls and the restrictive use of internet infrastructure
by nation-states to affect political outcomes domestically and in the international system — one
of the phenomena that digital rights technology is intended to combat. While the field of Science
and Technology Studies (STS) has a much more nuanced understanding of the coevolving
relationships between technology and society, governments have generally regarded internet
technologies as either inherently liberating or controlling (Aouragh & Chakravartty, 2016). State
efforts to control and surveil online expression are almost as old as the World Wide Web itself,
with information controls pioneers China and Singapore implementing sophisticated
sociotechnical systems to preserve their information sovereignty starting in the early 2000s.
Western governments, especially the United States, initially saw ICTs and particularly social
media as “liberation technologies” (Diamond, 2010) that would inevitably liberate foreign
populations from their repressive governments, thus ushering in a new era of democracy and
prosperity.
A series of “Color Revolutions” in the first decade of the 21st century seemed to support
this view, as did the early stages of the Arab Spring in 2011. Would-be reformers used ICTs—
notably mobile phones and social networking sites—to mobilize protests, document atrocities
committed by the authorities, and communicate with the international community. This reliance
on ICTs, however, also introduced two major weaknesses: it allowed movements to leapfrog over much of the hard work of organizing that builds cohesion and resilience, and it left trails of digital breadcrumbs for threatened governments to follow, thus aiding the brutal crackdowns that would derail almost all of the MENA region’s revolutions (Tufekci, 2017a). Nation-states
around the world developed increasingly sophisticated methods for surveilling and controlling
information flows, largely by relying on technologies developed by Western companies, and
organizing formal mechanisms for knowledge transfer such as the Shanghai Cooperation
Organization. Successive “generations” of information controls evolved from crude technical
censorship to include complex regulatory systems that deputized the private sector to implement
surveillance and censorship, while directing hacking and cyber-espionage against the regime’s
adversaries, including human rights defenders and democracy activists, and developing new
policy interventions designed to embed authoritarian structures in global internet governance
(Deibert et al., 2008, 2010, 2011; Deibert, 2016). As the information wars evolve, new tactics
emerge such as “distributed denial of attention.” Rather than suppress inconvenient truths (a
Sisyphean task), propagandists flood the public sphere with false, misleading, or distracting
information in order to focus the public’s attention elsewhere and deny opponents an audience
(Tufekci, 2017a). The digital rights movement pushes back against these developments to assert
citizen control over communication technologies, often framing communication rights as the
bedrock of all social movements, and therefore positioning freedom of expression and privacy as prerequisites for human rights, social justice, and good governance.
Developing and maintaining what I call digital rights technology is part of the digital
rights movement’s repertoire of contention. After a chapter on research methods and ethical
considerations, Chapter 3, titled The Digital Rights Movement: Portrait of a Social Movement,
investigates the global social movement dedicated to the promotion of digital rights, understood
as the right of individuals to freely access, create, and disseminate content online without
surveillance or reprisal, with narrow legal exceptions that are compatible with human rights.
After engaging with the scholarly literature on social movements, I trace the genealogy of the
digital rights movement to earlier waves of contention over communication rights, including
decolonization, human rights activism, the anti-corporate globalization movement, the
cypherpunks, and hacker culture (Milan, 2013). At the core of the policy debates encompassed
by the digital rights movement is a perpetual negotiation over who has the right to communicate
what, to whom, and under what conditions — specifically, who can extract economic benefit
from the exchange and who will police the entire arrangement. While the sub-currents within the
digital rights space have much in common, they are rooted in very different ideologies. For
example, the cypherpunks’ libertarian commitment to “freedom through encryption,” which sees
electronic privacy as the end-goal, is in tension with the human rights and social justice
movements’ frame, which emphasizes privacy as an intermediary goal in service of society-wide
equity and human rights. This has significant implications for the movement’s strategic priorities,
modes of action, preferred funding models and organizational forms, and for governing
interpersonal relationships within the movement.
The second half of the chapter focuses on gender, diversity and inclusion within the
digital rights movement as one example of intra-movement tensions rooted in ideological
differences. Though gender was not an explicit focus of this study when I began my research, my
fieldwork coincided with significant tensions within the community over gender equality and
sexual assault. Gender is far from being the only tension in the space, but it was the most visible
one to me, as a white cisgender Western woman, during the timeframe of my fieldwork. Several
high-profile men within the community were publicly revealed to be repeat sexual offenders,
prompting a sector-wide reckoning that paralleled the broader “#MeToo movement” taking place
across North America and elsewhere. In the course of my fieldwork, I have found that divergent
attitudes toward gender, diversity and inclusion tend to closely track the “human rights” and
“technology development” sub-currents within the movement. I argue that initiatives to reduce
the potential for sexual violence and to increase diversity and inclusion amount to calls for
professionalization, and that many techies’ resistance to such efforts is grounded in a culture of
toxic masculinity, belief in the meritocracy fallacy, bad faith engagement with appropriate Codes
of Conduct, and desire to retain the fun environment of the “clubhouse” (Dunbar-Hester, 2018)
— as well as more principled concerns about “selling out.”
Somewhat paradoxically, much of the funding that sustains digital rights technologies
flows from the U.S. government under the aegis of Internet Freedom funding and/or public
diplomacy activities. It is likely that many digital rights tools, including three of my case studies
(Psiphon, Tor, and Signal), would not exist without this support. Thus, analyzing the political
economy of digital rights technology requires an understanding of the ideology and policy goals
behind the Internet Freedom agenda’s funding, the bureaucratic politics behind the specific ways that
monies are appropriated and spent, and how Internet Freedom policies and funding intersect
with, and sometimes contradict, other U.S. policy initiatives. Chapter 4 traces the Political
Economy of Internet Freedom Funding through the halls of power in Washington in order to shed
light on the strange relationship between American foreign policy and the techno-activists who
develop digital rights tools. Early in my fieldwork, I discovered that many of my research
participants had only a limited understanding of the Internet Freedom agenda and the funding
streams that sustain many of the technologies they relied on for their day-to-day communication
needs. Similarly, I observed that many of the harshest critics of both the Internet Freedom agenda
and of the digital rights movement profoundly misunderstood the ideologies, power structures,
and institutional relationships at play. This chapter seeks to correct these misunderstandings by
telling the story of Internet Freedom, which is rooted in a longstanding American commitment to human rights, sustained by bipartisan support in Congress, and shaped by insurgent experts (O’Maley, 2015; Shaw, 2011) with backgrounds in digital rights activism who helped shape the implementation of Internet Freedom programs. The public narrative, however,
disguises the contradictions between the Internet Freedom agenda and other concerns of the U.S.
government. Mass surveillance programs such as those revealed by Edward Snowden in 2013, ostensibly undertaken for security and defense reasons, provide a glaring example, and it is undeniable that U.S. domestic policy often fails to meet the standards that the
foreign policy apparatus extolls abroad—e.g., the FCC’s repeal of the 2015 network neutrality
order. These tensions are both emblematic of long-running contradictions in American
governance and of the institutional structuring that layers co-existing policies and regulations
that achieve multiple and sometimes antithetical goals. Though some contend that these
contradictions are a sign that the Internet Freedom agenda is merely window-dressing for
policies intended to extend America’s geopolitical and economic empire, and that it should be
abandoned, I celebrate what the U.S. government gets right and throw a spotlight on what it
needs to change. Human rights advocates should bolster these efforts and resist the impulse to
throw out the good with the bad.
It is against this background that I present my four case studies: Psiphon, Tor, Signal, and
Telegram. The cases were selected based on a combination of factors, including each project’s
connection to the U.S. Internet Freedom agenda, its legal and organizational infrastructure, the
tool’s user base, its implication in public controversies about internet regulation, and researcher
access. While this study cannot claim to represent the full spectrum of digital rights tool
development, the diversity of selected tools provides a nuanced view into the contemporary
digital rights movement. Each of the four following chapters traces the institutional history of the
tool, examines the factors that spurred the tool’s creation and evolution, considers the tool’s
affordances and use cases, and analyzes the tool’s relationship to the emerging geopolitics of
information.
Chapter 5, Psiphon: Software for the Soft War?, offers a window into the geopolitical
considerations behind internet filtering and censorship circumvention. First developed at the
University of Toronto's Citizen Lab, Psiphon is a Canadian company whose business model relies largely on government funding, notably sponsorship contracts with American, British,
and German public diplomacy broadcasters whose target audiences would not be able to access
their online content without circumvention tools. These actors see their activities as consistent
with individual human rights and the principle of the free flow of information on a free and open
internet. Much of Psiphon’s user base is located in Iran, whose current government sees public
diplomacy broadcasters’ mission, as well as Psiphon’s, as part of a “cultural NATO” waging a
“Soft War” designed to undermine the goals and values of the 1979 revolution. In this paradigm,
Psiphon represents an existential threat to the regime and the society whose caretaker it imagines
itself to be, and is identified as part of a foreign apparatus fomenting seditious domestic
opposition in service of Western imperialism. In my fieldwork, I found that Psiphon’s creators
and current staff are cognizant of the geopolitical stakes of their work, but consciously reject the
“Soft War” frame as well as claims that they are “pawns” or “agents” of the U.S. government. As
these “freedom technologists” (Postill, 2014) see it, Psiphon’s business model may be reliant on
the public diplomacy budgets of the U.S., U.K., and Germany, but that doesn’t mean that the
company is doing the bidding of those governments. Rather, they have negotiated an identity for
Psiphon that is grounded in providing a public good to which all human beings are entitled. The
fact that Psiphon is only able to provide that good thanks to financial arrangements linked to
certain countries’ foreign policy objectives is secondary, in their view, and does not diminish
their agency or autonomy. This case study surfaces three divergent frames: the Iranian
government’s Soft War paradigm, rooted in a Westphalian understanding of national sovereignty;
the Internet Freedom agenda, which links the free flow of information to both human rights and
the national interest of the U.S. and its allies while focusing on nation-states as the primary
actors in international politics; and the digital rights frame, which emphasizes the agency of
individual developers, activists, and internet users.
Next, Tor: Peeling the Onion considers one of the technologies at the heart of the digital
rights movement, Tor, and the non-profit organization behind it, the Tor Project. Tor is the core
technology underlying many other digital rights tools, such as the Open Observatory of Network
Interference (OONI) and the whistleblowing platforms SecureDrop and GlobaLeaks. It is a
central node in the contemporary digital rights movement, with a rich history that has yet to be told comprehensively and is often misrepresented. In particular, its institutional and financial ties
to the U.S. government are perennial objects of controversy. After tracing Tor’s intellectual and
organizational trajectory, the case study analyzes the U.S. federal government's ambivalent relationship with the Tor Project as an exemplar of how agencies' divergent missions shape their approaches to digital rights technologies, and examines the political and geopolitical impact of Tor's three key affordances (censorship circumvention, anonymous internet access, and anonymous online publishing) on governments in general — not just the United States — and on the international system as a whole.
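To make these affordances concrete for readers unfamiliar with how applications use Tor in practice, the following minimal sketch (in Python, and not drawn from Tor's own codebase) routes a web request through a locally running Tor client, which by default exposes a SOCKS5 proxy on port 9050; it assumes the third-party requests library is installed with SOCKS support (pip install requests[socks]).

```python
# Minimal sketch: routing an HTTP request through a local Tor client.
# Assumes Tor is running with its default SOCKS5 listener on 127.0.0.1:9050.
import requests

# "socks5h" (rather than "socks5") makes DNS resolution happen inside Tor too,
# so the local network sees neither the destination address nor its hostname.
TOR_PROXY = "socks5h://127.0.0.1:9050"

session = requests.Session()
session.proxies = {"http": TOR_PROXY, "https": TOR_PROXY}

# check.torproject.org reports whether the request arrived via a Tor exit relay.
response = session.get("https://check.torproject.org/api/ip")
print(response.json())  # e.g. {"IsTor": true, "IP": "<address of an exit relay>"}
```

From the destination's perspective the request originates at a Tor exit relay rather than at the user's machine, which is the property underlying both censorship circumvention and anonymous access.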
I argue that the Tor Project, with all its flaws and contradictions, is what happens when
computer scientists try to run a nonprofit whose work puts it at the epicenter of geopolitical
tensions they are ill-prepared to confront. Specifically, an ideology of “programmer
supremacy” (as one of my informants, Tor whistle-blower Karen Reilly, put it) and pervasive
devaluation of so-called “soft skills” (such as management experience and political expertise)
enabled a toxic employee to bully his colleagues, take credit for their work, derail efforts at
professionalization, and engage in a pattern of serial sexual assault against his colleagues and
members of the broader digital rights community. This section of the chapter builds on the
discussion of gender in chapter 3. The employee in question, Jacob Appelbaum, was finally
ousted in 2016 after the Tor Project’s longtime executive director was replaced by Shari Steele, a
woman with extensive experience leading nonprofit organizations. She has devoted much of the
past two years to improving the organization’s governance (notably replacing the previous Board
of Directors, whose negligence enabled Appelbaum’s behavior), diversifying funding streams
away from a reliance on U.S. government grants, and overseeing a cultural shift away from a “programmer supremacy” culture that valued self-promoting “rock stars” over organizational
health. While the Tor Project’s evolution is ongoing, the organization has the potential to serve as
an example of a deeply dysfunctional organization that has managed to right itself, and whose
turnaround strategy could be emulated by other groups in the digital rights space.
Chapter 7, titled Signal: Bringing Encryption to the Masses, looks at how the world’s
most widespread digital rights tool has normalized the use of end-to-end encryption through
pragmatic partnerships with the U.S. government and American tech giants. The Signal Protocol
is the encryption protocol, first developed by Moxie Marlinspike and Trevor Perrin, that
underlies not only the Signal messaging application but also WhatsApp, Facebook Messenger’s
“Secret Conversations,” Google Allo’s “Incognito mode,” and Skype’s “Private Conversations,”
as well as Wire. The case study traces the story of the Signal Protocol’s development and
deployment from 2007 to early 2018, from the use of its precursors TextSecure and RedPhone during the Arab Spring, to Marlinspike's short-lived stint at Twitter and partnerships with the
Open Technology Fund (one of the main conduits for U.S. Internet Freedom agenda funding) and
several American technology companies.
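To clarify what is at stake in these partnerships, the sketch below illustrates the basic property that end-to-end encryption provides: two parties derive a shared secret that the server relaying their messages never learns. This is emphatically not the Signal Protocol itself (which layers X3DH key agreement and the Double Ratchet on top of such primitives), only a minimal illustration using the third-party Python cryptography package.

```python
# Simplified sketch of end-to-end encryption: Diffie-Hellman key agreement
# over X25519, followed by authenticated encryption. NOT the Signal Protocol,
# which adds X3DH, the Double Ratchet, and much else on top of these ideas.
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each party generates a key pair; only the public halves cross the wire.
alice_private = X25519PrivateKey.generate()
bob_private = X25519PrivateKey.generate()

def derive_key(my_private, their_public):
    """Compute the shared secret and stretch it into a 32-byte message key."""
    shared = my_private.exchange(their_public)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"demo-e2ee").derive(shared)

# Both sides arrive at the same key; the relaying server never learns it.
alice_key = derive_key(alice_private, bob_private.public_key())
bob_key = derive_key(bob_private, alice_private.public_key())
assert alice_key == bob_key

# Alice encrypts; only Bob, holding the same key, can decrypt.
nonce = os.urandom(12)
ciphertext = ChaCha20Poly1305(alice_key).encrypt(nonce, b"use Signal, use Tor", None)
print(ChaCha20Poly1305(bob_key).decrypt(nonce, ciphertext, None))
```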
The Signal case displays the same tendency toward “strange bedfellows” that is typical of
the internet freedom/digital rights space more broadly. Founded and led by a cypherpunk crypto-
anarchist, the project is primarily funded by the U.S. government via the Open Technology Fund,
and the majority of the Signal Protocol's users owe their access to end-to-end encryption to Signal's
collaboration with WhatsApp and other American tech giants. Signal occupies an uneasy position
at the intersection of two related ideologies that nevertheless exhibit important differences
concerning the meaning of freedom and the relationship between governments and the governed:
the cypherpunks’ techno-libertarianism and the Internet Freedom agenda, which is linked to the
rhetoric of democracy and human rights. Moreover, there is a major contradiction between
Signal’s commitment to universal communication privacy and its partnerships with companies
whose business model relies on surveillance capitalism (Zuboff, 2015). Signal appears to take a
pragmatic approach to cooperating across ideological chasms. Internet freedom advocates and
cypherpunks alike agree that human rights defenders, journalists, political dissidents, and ordinary
people should have access to encryption, so they work together to provide it. It is then up to
individual users to determine how they will use it. Likewise, it is pragmatism that governs the
relationship between Signal and the behemoth tech companies that have adopted the Signal
Protocol. Marlinspike has indicated in a number of interviews that his goal is to bring end-to-end
encryption to the greatest number of people possible, both for the concrete benefits he believes it
will bring users but also as a way to normalize the use of encryption. Though primary sources
concerning the companies’ motivations for these partnerships are lacking, my hypothesis is that
surveillance-based corporations like Facebook and Google see offering optional end-to-end
encryption, which protects the content of specific conversations but little else, as a device to
obscure the continued monetization of their privacy-conscious users.
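The qualification that end-to-end encryption protects “the content of specific conversations but little else” can be made concrete with a schematic example. Even when the payload is opaque ciphertext, the operator relaying the message still observes who is talking to whom, when, and from what device; the envelope below is hypothetical and simplified, not drawn from any company's actual data model.

```python
# Hypothetical, schematic view of what a relaying server still observes when
# message content is end-to-end encrypted: routing metadata remains legible.
envelope = {
    "sender": "+15550000111",             # who is talking...
    "recipient": "+15550000222",          # ...to whom
    "timestamp": "2018-03-14T09:30:00Z",  # ...and when
    "client_platform": "android",         # ...from what device
    "payload": b"\x8a\x1f\x03...",        # ciphertext, unreadable to the server
}
```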
The final case study, Telegram: From Russia With Crypto, departs from the other three
case studies in that its object lacks ties to the Internet Freedom agenda, whether in terms of funding, a “revolving door” of personnel, attendance at conferences, or participation in mailing lists.
However, its history and ideology overlap with other digital rights technologies. Telegram grew
out of an effort to resist the Russian information control regime, and thwarting state surveillance is central to its ideology. The platform positions itself as an alternative to Silicon Valley platforms and to
tools linked to the U.S. Internet Freedom agenda for users who value “freedom,” particularly
freedom from a heavy-handed state. The documentation supporting Telegram’s early 2018 Initial
Coin Offering (ICO) positions the company as an explicitly libertarian project (Telegram, 2018),
and rhetorically presents its now widely copied entrepreneurial business model as evidence that
the platform is beholden to neither governments nor corporate masters. Telegram thus provides
an interesting contrast to the other three case studies, whose cyber-libertarian leanings are
tempered by the rhetoric of human rights and by the transparency and oversight requirements
associated with nonprofit status, contractual obligations, and/or grant reporting requirements.
CEO Pavel Durov’s disregard for the legitimacy of the nation-state has placed Telegram on a
collision course with powerful governments intent on controlling and surveilling information
flows, resulting in the platform being blocked in both Iran and Russia in recent months.
However, there are a number of technical work-arounds that allow users to access Telegram in
spite of these bans, which may be seen as validating Durov’s decision to thumb his nose at
information sovereignty.
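One such workaround, sketched below under stated assumptions, is pointing the client at a SOCKS5 proxy hosted outside the censoring network (Telegram clients also support the company's own MTProto proxies). The sketch uses the third-party Telethon library; the API credentials and proxy address are hypothetical placeholders rather than working values.

```python
# Hypothetical sketch: connecting a Telegram client through a SOCKS5 relay
# located outside the censoring network (pip install telethon pysocks).
import asyncio

import socks
from telethon import TelegramClient

API_ID = 12345                  # placeholder: per-account ID from my.telegram.org
API_HASH = "0123456789abcdef"   # placeholder

client = TelegramClient(
    "demo_session",
    API_ID,
    API_HASH,
    # All client traffic is tunneled through the relay, past the national block.
    proxy=(socks.SOCKS5, "proxy.example.org", 1080),  # hypothetical relay
)

async def main():
    await client.start()        # prompts for a phone number and login code
    me = await client.get_me()
    print("Connected as:", me.username)
    await client.disconnect()

asyncio.get_event_loop().run_until_complete(main())
```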
Professionalizing the digital rights space
Chapter 3 and the Tor case study both highlighted how much the health and success of digital rights technology projects depend on professional management and on skillsets beyond computer systems design and coding, focusing on gender as one dimension of the tensions surrounding diversity and inclusion in the digital rights space. The fieldwork time
period (2015-2018) coincided with an evolution from a “whisper network” intended to warn
individual women and genderqueer people about specific problematic men, to collective actions
designed to alter social norms and power structures within the space. This evolution was spurred
by three public controversies at the intersection of hacking cultures and the digital rights space:
Jacob Appelbaum’s ouster from the Tor Project in 2016; the 2017 public revelations that security
researcher Morgan Marquis-Boire had been credibly accused of raping dozens of women over
the course of a decade, and that the prominent Toronto-based activist Ali Bangi had been charged
with rape and false imprisonment; and perennial disputes over Codes of Conduct for conferences
and other events, and their “enforcement,” notably concerning the 2017 Chaos Computer
Congress (Ann-King, 2017; Brandom, 2018; Cameron, O'Neill & Larson, 2016; Jeong, 2017a;
Jeong, 2017b; Locker, 2018).
My fieldwork suggests that disputes over gender, diversity, and inclusion in the digital
rights space stem from seemingly irreconcilable cultural differences between the movement’s
social justice and technology development wings. Similarly, the Tor Project’s longtime
dysfunction was rooted in an ideology of “programmer supremacy” that discounted the
contributions and expertise of professionals with other skillsets, such as nonprofit management,
grant-writing, and political acumen. As a result, for roughly a decade the Tor Project was
exclusively led by people — mostly men — with no nonprofit management experience, and
overseen by a Board of Directors comprised of academic computer scientists who were more
interested in examining source code than in holding the organization’s leadership accountable. A
cascade of derelictions of duty allowed Jacob Appelbaum to manipulate the organizational
culture to suit his own needs, sowing chaos and destruction in his wake as he bullied, gaslighted,
and assaulted his colleagues. It remains to be seen how well the organization will fare without
executive director Shari Steele, who is due to retire at the end of 2018. The Tor case study
illustrates the central role that founders and other early leaders play in the creation and
development of organizational cultures. It took a deliberately chosen and empowered
transformative leader to begin to mend the structural flaws in the Tor Project’s organizational
culture. The tenure of the next Executive Director will be a test of the Board of Directors and the
sustainability of Steele’s legacy.
Though the Tor case is perhaps the most dramatic example of the dangers of programmer
supremacy and a lackadaisical approach to management, and certainly the most publicized one, it
is far from unique. In late 2017, the digital rights community was shocked to learn that Ali
Bangi, a founder and co-director of Toronto-based Iranian diaspora group ASL19, had been
charged with rape and false imprisonment — and according to media reports, this was not the
first time he had been charged with a sex crime. An investigation by The Verge — the same
publication that had detailed security researcher Morgan Marquis-Boire’s decades-long pattern of
sexual assault across three continents — found that the group’s stringent focus on secrecy and
operational security, which went so far as to require employees to use pseudonyms and keep their
real identities from one another, had contributed to the culture of silence that protected Bangi for
so long (Brandom, 2018).
Like many others, I was appalled to learn of Bangi’s rape charge, but not surprised. I had
known Bangi for several years by then, and I had heard a number of disturbing stories about him
through the “whisper network.” These stories were too substantively similar to one another to
discount. In fact, when I had last seen Bangi in July 2016, he had tried to offer me a stunning
variety of illegal drugs and groped me repeatedly. Over the coming months, it became clear that
— as in the Appelbaum case — a critical mass of people had known enough about the abusive
behavior to take action, but they did not. By March 2018, when much of the community gathered
at the Internet Freedom Festival (IFF), a critical mass of participants had come together under the “Protect Our Spaces” banner to demand change, specifically professionalization and accountability.
My co-conspirators and I organized a number of sessions dedicated to gender equity, inclusion,
and organizational health, centering the voices of survivors like Karen Reilly, who was fired
from the Tor Project in 2015 after blowing the whistle on Jacob Appelbaum (Bernstein, 2016). I
was gratified by the opportunity to use my presentation (based on chapter 3 of this dissertation)
to break the silence surrounding gender-based violence in the community and set a tone that, I
am told, created space for survivors to say their piece and be heard. Partly because of this
determined focus on raising difficult topics, and partly because of the unseasonably cold weather,
the 2018 edition of the IFF lacked the joyful exuberance of previous years. But this “#MeToo
Internet Freedom” (as some of us started to call it) was an important step in getting
organizational leaders and funders to pay attention to the problem. Several organizations and
funders agreed to sponsor a “Day Zero” event on gender, which I helped organize, to be held the day before RightsCon, the annual policy conference hosted by the digital rights organization Access Now.
Part of the agenda for this event focused on exploring the responsibility of funding
organizations to oversee their grantees’ management policies and practices. An open letter
published by a group of anonymous former ASL19 employees argued that the group’s funders
“have a role to play” in preventing similar abuses, calling on funders to “review appropriate HR
policies of organizations before awarding and issuing funds,” and “ensure that investigations into
complaints regarding inappropriate workplace behavior are conducted by qualified and reputable
HR professionals familiar with local legislation and HR standards.” They also should “ensure
that the organizations to which they are issuing funds are reasonably compliant with local
legislation and regulations,” should “put in place mechanisms to ensure appropriate and
continuous efforts are expended towards increasing diversity and inclusion (such as regular
reports on participation in organizational activities such as conferences), and ensuring a
commitment to pay equity,” and “consider putting in place various mechanisms to help ensure a
safe and healthy work environment in the organizations they support” (Former employees of
ASL19, 2018). These are valuable recommendations that I believe funders ought to take
seriously. However, they may also raise serious questions about grantee autonomy, particularly
for organizations who (for various reasons) fear being perceived as “agents” of their funding
organizations. In ASL19’s case, the group’s issues were compounded by its status as a privately
held company, lacking shareholders or a Board of Directors to hold its management accountable.
As with the Tor project, the absence of meaningful oversight and accountability enabled toxic
and abusive behavior to flourish, thus imperiling the organization’s mission.
While I have no knowledge of similar events at Signal or Telegram, both groups have
been strikingly lacking in transparency, oversight, and accountability. Until a few months ago,
Signal operated as a Limited Liability Company (and a pseudonymous one, at that — its legal
name was Quiet Riddle Ventures LLC, “doing business as” Open Whisper Systems), which
exempted it from the financial disclosure requirements associated with nonprofit tax status.
Signal did have to submit progress reports to its main funder, the Open Technology Fund, and
presumably to any other funders as well, but ultimately the project’s leaders were not
accountable to any higher authority. The 2018 announcement of the creation of the Signal
Foundation, to be led by WhatsApp co-founder Brian Acton, was an encouraging step toward
increased transparency and professional management. As one research participant told me,
“having an adult with actual adult leadership experience will be really good for Signal” (phone interview, March 31, 2018). This
rhetoric of “adult leadership” echoed similar comments made by other research participants
concerning the Tor Project, and points to a colloquial distinction between “children’s” haphazard
approach to technology development and the more professional management characteristic of
“adults.” The current lack of information about the new Signal Foundation’s structure,
governance, or activities does not bode well for the organization’s commitment to transparency
or accountability.
As for Telegram, its libertarian ideology led it to opt for a peculiar business model (for
lack of a better term) that leaves CEO Pavel Durov in complete control of the company, its
activities, and its finances. If I were a betting woman, I would wager that it is only a matter of
time before improprieties of one kind or another come to light.
Strange bedfellows: Digital rights activism and the Internet Freedom agenda
“Digital rights” and “internet freedom” are two closely related concepts that both refer to
the enjoyment of human rights online, specifically privacy, freedom of expression, and access to
information. However, they are not exact synonyms. As I explained in chapter 4, “Internet
Freedom” is a polysemic term that means many different things to many different people,
echoing the myriad meanings of “freedom” identified by Chris Kelty (2014), and has come to be
associated with the U.S. government's Internet Freedom agenda. Even there, principles like anti-censorship and commercial non-regulation co-exist uneasily with the commitment to human
rights underlying the Internet Freedom agenda’s congressional mandate. Meanwhile, “digital
rights” has been embraced by the transnational social movement for communication rights online
to refer to human rights in a digitally networked world.
Though they share many ideological underpinnings, the two concepts co-exist somewhat
uneasily: the digital rights movement is wary of “internet freedom” discourses that many
activists see as advancing American empire and economic interests (Ben Gharbia, 2010; York,
2015), while the goals and tactics of privacy activism in particular — such as the push for
ubiquitous end-to-end encryption — are at odds with U.S. policy objectives beyond the Internet
Freedom agenda itself (Owen, 2015). Moreover, the cultural gap between State Department
“suits” and hoodie-wearing crypto-anarchists makes the idea of an alliance between these worlds
seem completely incongruous. And yet, they are more closely aligned than appearances might
suggest, as we have seen throughout this dissertation.
This section further examines the relationship between the transnational movement for
digital rights (Croeser, 2012) and the U.S. government’s Internet Freedom agenda by focusing on
one specific site where the bureaucrats tasked with implementing the Internet Freedom agenda
encounter the civil society activists fighting for privacy and free expression online: the Internet
Freedom Festival (IFF).
The IFF is an annual “un-conference” held in Valencia, Spain, that gathers activists and
technologists from around the world in a celebration of internet freedom and digital rights, with
an explicit mandate to forge community and encourage collaboration, networking, and solidarity
among the participants. Activist-run and community-oriented, the IFF is partly funded by the
Open Technology Fund (OTF), a key conduit for U.S. government grants related to the Internet
Freedom agenda, and by other public and private funders. This section traces the history of the
IFF as a key site where U.S. government bureaucrats, human rights activists, cypherpunks, and
hackers — and their sometimes sharply divergent ideologies — collide, exploring how these
disparate actors co-create a space that is conducive to exchange and collaboration across
geographic, political, and sectoral boundaries. As mentioned previously, the IFF was one of this
project’s primary field sites. I attended all four iterations of the IFF (2015-2018), interviewed its
organizers and many attendees, and helped curate the Festival’s program in 2017. As I learned
more about the IFF’s particular history and mission, I realized that the festival itself exemplified
the “insurgent experts” (O’Maley, 2015; Shaw, 2011) dynamic that characterizes the relationship
between digital rights activism and the Internet Freedom agenda.
The IFF began in its current form in 2015, but its origins date back several years, to
the creation of the Open Internet Tools Project (OpenITP), an initiative hosted at New America’s
Open Technology Institute between 2013 and 2015. Its remit was twofold: to administer sub-
grants for digital rights technology projects, and to help develop the internet freedom/digital
rights community. To that end, OpenITP organized a series of conferences held around the world
called the “Circumvention Tech Summits,” designed to bring developers together with the human rights defenders who would be using their tools on the front lines. This was a major innovation that would influence the evolution of the
digital rights space by emphasizing tools’ usability and fostering transnational solidarity and
collaboration. For example, human rights defenders emphasized that the digital rights tools
available to them were difficult to use: the user experience was clunky and un-intuitive, and
usually required a decent command of the English language. These interactions helped end users
to understand the difficulties involved in designing software tools, while enabling them to give
feedback to developers. Developers, in turn, had access to OpenITP's in-house usability and
localization experts. This new emphasis on usability and localization filtered up through funding
organizations, resulting in these mandates being written into Internet Freedom policy guidance at
the State Department (interview, March 14, 2018, Washington, DC). Meanwhile, the Summits created opportunities for face-to-face
engagement and relationship-building between groups and individuals who shared a commitment
to common values and were working toward similar goals, but wouldn’t have had the
opportunity to meet otherwise.
For the 2015 edition, the OpenITP team was intent on reaching out to the Spanish-
speaking world and attracting more participants from the Global South. There were a few options
for where to host the summit — the Netherlands, Iceland, and Spain stood out — and OpenITP’s
Outreach Manager, Sandy Ordoñez, started cold-calling local technology co-working spaces to
identify potential partners. There were a lot of dead ends, but one of her calls was answered by
Pepe Borrás, who was in the process of shutting down the Valencia co-working space he owned
at the time. Ordoñez and Borrás would later describe this chance phone conversation as a
“meeting of minds”: both were energized by the prospect of bringing together human rights
defenders and tech developers from around the world, and to do so in Valencia — Borrás’
hometown, and a city where Ordoñez had studied years earlier — was just the icing on the cake.
OpenITP hired Borrás as a contractor, and he, Ordoñez, and other OpenITP staff formed
the core team organizing the event. Renamed the Circumvention Tech Festival, the gathering was
imagined as a “conference of conferences” loosely organized around four main events: Iran
Cyber Dialogues, the Trainers’ Summit, the Tor Public Project Hack Event, and an event focused
on safety for journalists, in addition to the “unconference” itself (I kicked off fieldwork for the project that would become the Tor chapter of this dissertation at the 2015 Circumvention Tech Festival). The event was successful at
attracting participants from the Global South, thanks in part to its Spanish-language track, in
spite of the costs associated with traveling to Valencia.
Shortly after the CTF, the grants sustaining the OpenITP project ended. By then, much of OpenITP's purpose was being fulfilled by the Open Technology Fund, which absorbed
some of OpenITP’s former staff, including Ordoñez. Others would go on to work for different
funding organizations, where they used the expertise and contacts acquired at OpenITP to inform
their new employers’ policy positions and funding priorities.
OpenITP’s closure left the Circumvention Tech Festival without a formal institutional
home, which was unfortunate as the event had been hugely successful, bringing together diverse
groups of activists and technologists working on digital rights in a fun atmosphere that was
conducive to networking and relationship-building. The CTF was unusually diverse for a tech-
oriented event, across a number of dimensions: national origin, ethnicity, ideology, sector,
educational background, gender, gender identity, and sexual orientation. As such, it was a much-
needed counterweight to tech-focused events like the hacker-dominated Chaos Computer
Congress. Staff from several funding organizations, including OTF, intervened to save the
festival, and the international development nonprofit IREX became its fiscal sponsor, processing
the IFF’s funding and hiring Borrás as the IFF’s Creative Director, along with other staff.
By March 2016, the event had been rebranded the Internet Freedom Festival or IFF
(“circumvention tech” was deemed both too narrow and too arcane), with many of the same
organizers (albeit in different professional capacities), notably Ordoñez and Borrás. The second
iteration was more curated than the first, though still in an “un-conference” format: it was
participant-driven, involved “people sitting in circles,” and featured ample down-time for
informal conversations and relationship-building. The organizers made a deliberate effort to be
different from RightsCon — one of the other principal events on the digital rights/internet
freedom circuit — which brings together representatives from government and the private sector
as well as civil society. After three years of attending both events, my observation is that
RightsCon is mainly attended by organizational leaders in formal attire seeking to influence
governments and corporations, while IFF is designed for activists at the working-level to build
relationships with one another in a much more casual atmosphere. As one funder confided to me,
one of the IFF’s weak points is that it isn’t a “super-strong policy event,” leading the leadership
of some policy-focused organizations to discount it. As a result, the IFF’s program focuses on
frontline human rights defenders, their needs, and “the intersection of those folks and
technologists.” The funder saw this as a strength for capacity-building, particularly for activists
who are not on the international conference circuit and/or are relatively early in their careers (interview, February 26, 2018, Washington, DC).
In contrast to many academic and industry conferences that focus on innovation, these activist
conferences have strategic agendas that center on bringing together individuals from different
backgrounds, viewpoints, and skillsets to foster new collaborations. These are liminal spaces with important affordances, such as capacity building and the strengthening of human rights norms and professional
behavior, though conferences struggle just as much as individual organizations do to transition from adolescent “clubhouses” to more “adult” norms of professionalism.
Over the years, a number of informants have told me that they see the IFF as key to the
convergence of the cypherpunks’ techno-libertarianism and the human rights movements’
rhetoric of internet freedom, which combined (along with other sub-currents) to form the digital
rights space described in chapter 3. Given the IFF’s institutional ties to the U.S. Internet Freedom
agenda, this raises important questions about the digital rights space’s authenticity and
legitimacy as a social movement. The insurgent experts dynamic (O’Maley, 2015; Shaw, 2011),
whereby civil society actors purposely take on roles within the state bureaucracy to influence
government policy and direct funding toward projects they view as priorities, points to a different
interpretation. I argue that the relationship between the Internet Freedom agenda and the digital
rights movement is not one of astroturfing, which would indicate a fake grassroots movement,
nor of regulatory capture, which would suggest that the “industry” of digital rights activism
controls the government apparatus in charge of regulating it. Rather, I observed a dialectical
relationship between the top-down Internet Freedom agenda and the bottom-up digital rights
movement, wherein bureaucrats and activists — categories that are not mutually exclusive given
the “revolving door” of government service — co-construct policies and programs in service of
common goals. This relationship is not without tensions, of course, and actors in both groups are
keenly aware that their interests only partially overlap. In particular, consensus is growing within
the digital rights movement that the surveillance capitalism business model that undergirds much
of the internet economy is incompatible with human rights and democracy. Activist interventions
in the policy arena will not only put the digital rights space in direct confrontation with Silicon
Valley, they will also surface long-simmering tensions with the U.S. Internet Freedom agenda, a
core tenet of which is commercial non-regulation, particularly when such regulation could
jeopardize the commercial interests of American tech companies.
Digital rights technologies and the geopolitics of information
This section considers how the digital rights space — and civil society more broadly — is
implicated in the open conflict between the two paradigms about the role of information that are
now in stark competition: an illiberal authoritarian ideology that sees (and uses) information as a
dangerous weapon, and another, a transatlantic alliance whose commitments to the free flow of
information have yet to adjust to the new economics of the social media era and surveillance
capitalism. Highly publicized research into computational propaganda, precision propaganda,
and organized disinformation campaigns began to raise awareness of the new geopolitics of
information among elites and the public (see Ghosh & Scott, 2018; Marwick & Lewis, 2017;
Woolley & Howard, 2017). More recently, the controversy surrounding Cambridge Analytica’s
involvement in the 2016 referendum on the United Kingdom’s membership in the European
Union and in the 2016 U.S. presidential election has focused attention on Facebook, Google, and
the surveillance capitalism business model (Zuboff, 2015).
Between the “Brexit” vote and Donald Trump’s electoral college victory, 2016 was a
watershed year for understanding the geopolitics of information. Nation-states at the
authoritarian, illiberal and anti-democratic end of the spectrum had long seen the internet and
related communication technologies as a threat to the status quo and to elites’ hold on power
(Maréchal, 2017). Liberal democracies largely celebrated this disruption, hailing “Color
Revolutions” and the Arab Spring for harnessing the free flow of information in a continued
march toward a new “end of history” (Fukuyama, 1992, 2012). Skeptics of this “liberation
technology” (Diamond, 2010) discourse have been sounding the alarm almost from the
beginning, cautioning that governments and corporations were quickly learning how to use their
control over the internet’s physical infrastructure to control the flow of information (Deibert,
2013; MacKinnon, 2012; Morozov, 2011). Until recently, one could reasonably interpret the
central struggle in internet governance as one between state sovereignty, corporate profit-seeking
and individual self-determination. The debate was certainly contentious, but there was potential
for compromises that granted most stakeholders their top priorities within a liberal-democratic
governance model.
In parallel, illiberal states led by China and Russia were developing a panoply of tools to
restrict the circulation of undesirable ideas and information while taking advantage of the
internet’s economic opportunities. Barriers to access, content censorship, intermediary liability
regimes, mass surveillance, and chilling effects induced by regular violations of users’ human
rights are deployed in ever inventive ways to muzzle the internet’s revolutionary potential.
Importantly, the Shanghai Cooperation Organization (SCO) provides illiberal regimes with a
policy learning forum through which innovations in repression are diffused (Nocetti, 2015).
Many scholars and policymakers in the world's democracies dismissed these developments as the desperate efforts of despots to maintain their grasp on power, a complacency that in turn allowed those despots to despoil their societies while escaping accountability. The mainstream rhetoric in American policy
and business circles was that the spread of internet technologies and particularly of social media
would inevitably lead to democracy, good governance, and human rights. Faced with geopolitical
opponents like China and Iran who used first generation information controls (Deibert et al.,
2008, 2010, 2011) to restrict information flows across their borders, the United States and its
allies began funding civil society efforts to develop censorship circumvention technologies,
framing this support as grounded in concern for human rights while also justifying the
expenditures as essential for the national interest.
As governments around the world acquired sophisticated tools for surveilling their
citizens and repressing opposition movements, insurgent experts (O’Maley, 2015; Shaw, 2011)
with first-hand experience on the frontlines of digitally empowered social movements leveraged
their new roles within the bureaucracy to direct funding toward encrypted messaging tools,
notably Signal. As was the case with circumvention technology, this funding was couched as
supporting the national interest by helping human rights defenders, democracy activists, ethnic
and religious minorities, and political dissidents abroad in their struggles against repressive
regimes — many of which happened to be U.S. geopolitical adversaries. This Internet Freedom
agenda thus operates both as bona fide support for human rights and as a component of U.S.
foreign policy designed to advance the national interest. Moreover, while Internet Freedom
programs focus on human rights and tend to ignore the principle of commercial non-regulation,
the rhetoric of Internet Freedom was mobilized by elements of the State Department outside of
the Bureau of Democracy, Human Rights and Labor (DRL) to advance U.S. industrial policy by
promoting the economic interests of American tech companies (Powers & Jablonski, 2015).
The tension between human rights and U.S. commercial interests is not a new one, but it
is currently coming to a head. Public discourse and the digital rights space are both increasingly
attuned to the threat that unregulated ICT companies, notably those whose business model hinges
on surveillance capitalism, pose to human rights and democracy. Particularly since the 2013 Snowden disclosures, the rhetoric of digital rights has focused on the threats that nation-states
pose to the free flow of information and to human rights online, framing private sector
companies mainly as tools used by governments to collect data or to censor information, or as
neutral platforms obliged to comply with government demands. This frame was compatible with
the U.S. government’s Internet Freedom agenda, including the commercial non-regulation
principle, as it did not materially challenge the private sector’s economic interests. Advocates
like Rebecca MacKinnon (2012) have long warned about the threats that behemoth tech
companies in particular pose for democracy and human rights, and many civil society
organizations heeded the call to pressure companies to build respect for freedom of expression,
privacy, and human rights more generally into their operations. But government censorship and
surveillance present more immediate and tangible threats than the looming threat of networked
authoritarianism does (MacKinnon, 2011; Maréchal, 2017). Moreover, confronting surveillance
capitalism requires challenging powerful economic interests as well as convincing the public and
policymakers of the dangers posed by the internet’s advertising-based business model. As a
result, activism against corporate surveillance has taken a backseat, but the tide is turning, as
current events continue to demonstrate that surveillance capitalism fundamentally undermines
the democratic project. Civil society organizations in the digital rights space, aided by digital
rights technologies, are well positioned to make this case and to create new economic models for
the internet that are compatible with democracy and human rights. Unlike governments, whose
many subcomponents are driven by competing motivations, and companies, who are motivated
by short-term profits, civil society organizations in the digital rights space center human rights
and related pro-social norms. Input from civil society will be key to moving away from the
misguided focus on narrowly defined national security and financial profit that has driven much
of the story of the internet so far.
Directions for future research
As is true of any dissertation, this study has made empirical and theoretical contributions
to human knowledge while also raising a number of vital questions for future research. On the
empirical level, I have traced the organizational histories of four influential organizations whose
technology products are used by millions of activists and ordinary people worldwide to better
protect their privacy, access information, and express themselves online. The analysis explores
each tool’s impact on the global geopolitics of information, from Psiphon’s role connecting
Western public diplomacy broadcasters to foreign publics living in censored environments to
Telegram’s disruption of nation-state control over information flows, and from Tor’s facilitation
of online anonymous publishing to Signal’s popularization of robust end-to-end encryption.
Theoretically, the study offers important insights into the relationship between the U.S.
government’s Internet Freedom agenda and the transnational social movement for digital rights,
and surfaces deep tensions between the ideological currents undergirding both Internet Freedom
and digital rights.
Beyond the ethnographic research at Signal and Telegram that remains to be done, questions for future research include: How can Silicon Valley behemoths reinvent their business models to eschew targeted
advertising, while preserving the beneficial affordances that keep even critics like myself from
leaving these platforms entirely? What legal and regulatory interventions should governments
consider to disrupt surveillance capitalism’s perverse incentive structures while respecting
freedom of expression and leaving room for innovation? How can founders and early leaders of
digital rights organizations build the foundations for healthy, sustainable, and inclusive
organizational cultures? How can dysfunctional organizations reinvent themselves? These
questions, and others that will arise in the course of attempting to answer them, are essential to
halting our current descent into networked authoritarianism. Let’s get to work.
Appendix A: Interview script
Introductory questions
1. Please look over this informed consent information sheet, and verbally consent to participating in this study. Feel free to ask me any questions at all about the study.
2. Please choose a pseudonym by which I can identify you in my notes.
3. Tell me why you are attending this conference.
4. What does the phrase “circumvention technology” mean to you?
5. What does the phrase “liberation technology” mean to you?
6. What does the phrase “privacy enhancing technology” mean to you?
7. What other phrases come to mind, that refer to this broad set of tools?
Use of digital rights tools
8. Have you ever used digital rights tools?
If no: skip to 6
If yes:
a. How often do you use digital rights tools?
b. For what purpose(s)?
c. Which specific tools do you use?
d. What kind of research do you do before deciding to adopt a tool?
e. What would happen if you didn't have access to digital rights tools?
Development of digital rights tools
6. Are you involved with any digital rights technology projects beyond being a user, such as
a developer or other staff member?
a. Tell me more about your role
If funder: skip to 7
If developer/other:
b. Tell me a bit about your career path
c. How did you come to be involved with this project?
d. What are the main challenges and rewards of this kind of work?
e. Tell me about your project's business model
f. How did the organization select this business model?
g. What do you think about the business model? What are the pros and cons?
h. Is there anything that you'd like to see this organization or project do differently, either from a pure tech standpoint or as an organization?
Funding digital rights technology
7. Are you involved in the funding side of digital rights technology?
a. Tell me more about your role
b. Tell me a bit about your career path
c. How did you come to be involved with this funding organization?
d. Why do you think funding digital rights tech is important to your organization’s
mission?
e. What kind of projects and tools does your organization fund?
f. Tell me about the business model(s) of the project(s) your organization funds
g. What do you think about the business model(s)? What are the pros and cons?
h. How does your organization decide what projects or tools to fund?
i. What are the main challenges and rewards of this kind of work?
The big picture
8. Some people say that control over information is a key part of international relations today. Would you agree or disagree?
9. How does your organization or project fit into that idea?
10. Does your project have any nation-state allies? Adversaries? Tell me more about these dynamics.
11. How do you see the role of the private sector in this struggle for control of information?
12. What about US-based tech giants like Google and Facebook, specifically?
13. Is there anything that you wish outsiders to this community — such as policymakers, academics, journalists, and the general public — understood about the community?
Wrapping up
14. Is there anything that we haven't discussed, that you think I ought to be aware of?
15. Is there anyone, either here in [location] or elsewhere, that you think I ought to talk to for this study?
16. Would it be ok for me to contact you with any follow-up questions? (If yes, exchange private keys or other secure means of contact)
References
Abelson, H. “Hal,” Neumann, P. G., Rivest, R. L., Schiller, J. I., Schneier, B., Specter, M. A., …
Landau, S. (2015). Keys under doormats. Communications of the ACM, 58(10), 24–26.
https://doi.org/10.1145/2814825
Adler, P., Adler, P., & Rochford, B. E. (1986). The politics of participation in field research. Urban Life, 4(4), 363–377.
Alimardani, M. (2018a, January 1). What Telegram owes Iranians. Politico. Retrieved from
https://www.politico.com/magazine/story/2018/01/01/irans-telegram-revolution-216206
Alimardani, M. (2018b, March 8). Evidence says Iran throttled Telegram connections after
January protests. Retrieved April 5, 2018, from https://advox.globalvoices.org/2018/03/08/evidence-says-iran-throttled-telegram-connections-after-january-protests/
Alimardani, M., & Milan, S. (2017). The internet as a global/local site of contestation: The case
of Iran. In R. Celikates, J. de Kloet, E. Peeren, & T. Poell (Eds.), Global Cultures of
Contestation. London, UK: Palgrave MacMillan.
Allison, G. T., & Halperin, M. H. (1972). Bureaucratic politics: A paradigm and some policy
implications. World Politics, 24(Supplement: Theory and policy in international
relations), 40–79.
Amin, S. (1976). Unequal development: An essay on the social formations of peripheral
capitalism. New York, NY: Monthly Review Press.
Anderson, C., & Guarnieri, C. (2018, February 13). Dissidents have been abandoned and
besieged online. Motherboard. Retrieved from https://motherboard.vice.com/en_us/article/bj5jvw/dissidents-abandoned-human-rights-iranian-surveillance-and-hacking?utm_medium=twitter&utm_source=dlvr.it
Angwin, J. (2014). Dragnet nation: A quest for privacy, security, and freedom in a world of
relentless surveillance (First edition). New York, NY: Times Books, Henry Holt and
Company.
Ann-King, C. (2017, November 29). “We never thought we’d be believed”: Inside the decade-
long fight to expose Morgan Marquis-Boire. The Verge. Retrieved from https://www.theverge.com/2017/11/29/16715018/morgan-marquis-boire-sexual-assault-citizen-lab-toronto-cybersecurity
Aouragh, M., & Chakravartty, P. (2016). Infrastructures of empire: Towards a critical geopolitics
of media and information studies. Media, Culture & Society, 38(4), 559–575. https://doi.org/10.1177/0163443716643007
Auchard, E. (2016, March 10). Skype co-founder launches ultra-private messaging, with video.
Reuters. Retrieved from https://www.reuters.com/article/us-dataprotection-messaging-wire/skype-co-founder-launches-ultra-private-messaging-with-video-idUSKCN0WC2GM
Aurora, V., Gardiner, M., & Honeywell, L. (2016, June 21). No more rock stars. Retrieved from
https://hypatia.ca/2016/06/21/no-more-rock-stars/
Barberá, P., Wang, N., Bonneau, R., Jost, J. T., Nagler, J., Tucker, J., & González-Bailón, S.
(2015). The critical periphery in the growth of social protests. PLOS ONE, 10(11),
e0143611. https://doi.org/10.1371/journal.pone.0143611
Barlow, J. P. (1996, February 8). A declaration of the independence of cyberspace. Retrieved
from https://www.eff.org/cyberspace-independence
Beck, U. (1992). Risk society: Towards a new modernity. London, UK; Newbury Park, CA: Sage
Publications.
Ben Gharbia, S. (2010, September 17). The internet freedom fallacy and the Arab digital
activism. Retrieved March 14, 2018, from https://nawaat.org/portail/2010/09/17/the-internet-freedom-fallacy-and-the-arab-digital-activism/
Bennett, C. J. (2010). The privacy advocates: Resisting the spread of surveillance. Cambridge,
MA: MIT Press.
Bernstein, J. (2016, June 9). Sources: Tor Project board knew of allegations against Jacob
Appelbaum for over a year. Buzzfeed. Retrieved from https://www.buzzfeed.com/josephbernstein/sources-tor-project-board-knew-about-allegations-against-jac?utm_term=.tapV3Wo7Wn#.ycW09NxBNl
Botsman, R. (2017, October 21). Big Data meets Big Brother as China moves to rate its citizens.
Wired UK. Retrieved from https://www.wired.co.uk/article/chinese-government-social-credit-score-privacy-invasion
Boyette, C. (2013, August 5). Russia’s Mark Zuckerberg offers Edward Snowden a job. CNN
Tech. Retrieved from http://money.cnn.com/2013/08/05/technology/social/snowden-vkontakte/
Brandom, R. (2018, February 12). How a rock star of Iranian digital activism built a culture of
misogyny and fear. The Verge. Retrieved from https://www.theverge.com/2018/2/12/16995882/asl-19-ali-karimzadeh-bangi-alleged-assault
Bratich, J. Z. (2018). Up all night, down for the count? A compositionist approach to Nuit
Debout. International Journal of Communication, 12, 1–20.
Broadcasting Board of Governors. (n.d.). Who We Are -- History. Retrieved February 26, 2018,
from https://www.bbg.gov/who-we-are/history/
Brower, R. S., & Abolafia, M. Y. (1997). Bureaucratic politics: The view from below. Journal of
Public Administration Research and Theory, 7(2), 305–331.
Brown, I. (2013). The Global Online Freedom Act. Georgetown Journal of International Affairs,
14(1), 153–160.
Bueno de Mesquita, B., & Downs, G. (2005). Development and democracy. Foreign Affairs, 84,
77.
Cameron, D., O'Neill, P. H., & Larson, S. (2016, June 15). Jacob Appelbaum allegedly intimidated victims into silence and anonymity. The Daily Dot. Retrieved from http://www.dailydot.com/layer8/jacob-appelbaum-tor-project-suspension-sexual-misconduct-victims/
Carr, M. (2013). Internet freedom, human rights and power. Australian Journal of International
Affairs, 67(5), 621–637. https://doi.org/10.1080/10357718.2013.817525
Carr, M. (2016). US power and the internet in International Relations. Houndmills, Basingstoke, Hampshire; New York, NY: Palgrave Macmillan.
Carroll, W., & Hackett, R. (2006). Democratic media activism through the lens of social
movement theory. Media, Culture & Society, 28, 83–104.
Carty, V. (2015). Social movements and new technology. Boulder, CO: Westview Press, a
Member of the Perseus Books Group.
Castells, M. (2001). The internet galaxy: Reflections on the internet, business, and society.
Oxford, UK: Oxford University Press.
Castells, M. (2007). Communication, power and counter-power. International Journal of
Communication, 1, 238–266.
Castells, M., & Kiselyova, E. (2005). The collapse of Soviet communism: A view from the
information society. Los Angeles, CA: Figueroa Press.
Chatham House. (2018). Chatham House Rule. Chatham House, The Royal Institute of International Affairs. Retrieved from https://www.chathamhouse.org/about/chatham-house-rule
Chaudhuri, R. (2014). Forged in crisis: India and the United States since 1947. New York, NY:
Oxford University Press, Inc.
Chaum, D. (1985). Security without identification: Transaction systems to make Big Brother
obsolete. Communications of the ACM, 28(10), 1030–1044.
Clary, G. (2016, January 4). The flaw in ISIS’ favorite messaging app. The Atlantic. Retrieved
from https://www.theatlantic.com/technology/archive/2016/01/isiss-favorite-messaging-app-has-a-security-problem/422460/
Clinton, H. R. (2010, January). Remarks on Internet Freedom. The Newseum, Washington, DC.
Retrieved from https://2009-2017.state.gov/secretary/20092013clinton/rm/2010/01/135519.htm
Clinton, H. R. (2011, February). Internet rights and wrongs: Choices and challenges in a
networked world. The George Washington University, Washington, DC. Retrieved from
https://2009-2017.state.gov/secretary/20092013clinton/rm/2011/02/156619.htm
Clinton, W. J. (1999). United States national security strategy. The White House, Washington,
DC.
Clinton, W. J. (2000). Defending America’s cyberspace: National plan for information systems
protection. The White House, Washington, DC.
Coleman, E. G. (2013). Coding freedom: The ethics and aesthetics of hacking. Princeton, NJ:
Princeton University Press.
Coleman, G. (2014). Hacker, hoaxer, whistleblower, spy: The many faces of Anonymous. London,
UK ; Brooklyn, NY: Verso.
Collier, B. (forthcoming). Cyborg infrastructures and resistance: A sociological study of Tor.
Dissertation, University of Edinburgh.
Comey, J. (2014, October). Going dark: Are technology, privacy, and public safety on a collision
course? Brookings Institution, Washington, DC. Retrieved from http://www.fbi.gov/news/speeches/going-dark-are-technology-privacy-and-public-safety-on-a-collision-course
Constine, J. (2018, January 8). Telegram plans multi-billion dollar ICO for chat cryptocurrency.
Tech Crunch. Retrieved from https://techcrunch.com/2018/01/08/telegram-open-network/
Costanza-Chock, S. (2014). Out of the shadows, into the streets! Transmedia organizing and the
immigrant rights movement. Cambridge, MA: The MIT Press.
Couprie, G. (2013, December 17). Telegram, AKA “Stand back, we have math PhDs!” Retrieved
July 12, 2018, from http://unhandledexpression.com/2013/12/17/telegram-stand-back-we-know-maths/
Cox, J. (2015, December 10). Why you don’t roll your own crypto. Motherboard. Retrieved from
https://motherboard.vice.com/en_us/article/wnx8nq/why-you-dont-roll-your-own-crypto
Cox, J. (2018, April 12). Cops around the country can now unlock iPhones, records show.
Motherboard. Retrieved from https://motherboard.vice.com/amp/en_us/article/vbxxxd/
unlock-iphone-ios11-graykey-grayshift-police
Croeser, S. (2012). Contested technologies: The emergence of the digital liberties movement.
First Monday, 17(8). https://doi.org/10.5210/fm.v17i8.4162
Dale, B. (2018, January 16). Big money, murky governance: Kicking the tires of Telegram’s
token sale. Coindesk. Retrieved from https://www.coindesk.com/big-money-murky-
governance-kicking-tires-telegrams-token-sale/
de Souza Abreu, J. (2016). From jurisdictional battles to crypto wars: Brazilian courts vs.
WhatsApp. Retrieved March 30, 2018, from http://jtl.columbia.edu/from-jurisdictional-
battles-to-crypto-wars-brazilian-courts-v-whatsapp/
Deibert, R. (2013a). Black code: Inside the battle for cyberspace. Toronto, ON: McClelland &
Stewart.
Deibert, R. (2013b). Black code: Surveillance, privacy, and the dark side of the Internet
(Expanded edition). Toronto, ON: Signal.
Deibert, R. (2016a). Cyberspace under siege. In L. J. Diamond, M. F. Plattner, & C. Walker
(Eds.), Authoritarianism goes global: the challenge to democracy (pp. 198–215).
Baltimore, MD: Johns Hopkins University Press.
Deibert, R. (2016b). Cyberspace under siege. In Authoritarianism goes global: the challenge to
democracy (pp. 198–215). Baltimore, MD: Johns Hopkins University Press.
Deibert, R. J. (2003). Black code: Censorship, surveillance, and the militarisation of cyberspace.
Millennium - Journal of International Studies, 32(3), 501–530. https://doi.org/
10.1177/03058298030320030801
Deibert, R., Palfrey, J., Rohozinski, R., & Zittrain, J. (Eds.). (2008). Access denied: The practice
and policy of global Internet filtering. Cambridge, MA: MIT Press.
Deibert, R., Palfrey, J., Rohozinski, R., & Zittrain, J. (Eds.). (2010). Access controlled: The
shaping of power, rights, and rule in cyberspace. Cambridge, MA: MIT Press.
Deibert, R., Palfrey, J., Rohozinski, R., & Zittrain, J. (Eds.). (2012). Access contested: Security, identity, and resistance in Asian cyberspace (Information Revolution and Global Politics series). Cambridge, MA: MIT Press.
Della Porta, D. (Ed.). (2007). The global justice movement: Cross-national and transnational perspectives. London, UK: Routledge.
DeNardis, L. (2015). The global war for internet governance. New Haven, CT: Yale University
Press.
Denzin, N. K., Lincoln, Y. S., & Sage Publications (Eds.). (2009). The Sage handbook of qualitative research (3rd edition). Thousand Oaks, CA: Sage.
Diamond, L. (2010). Liberation technology. Journal of Democracy, 21(3), 69–83.
Diani, M. (1992). The concept of social movement. The Sociological Review, 40(1), 1–25.
Dredge, S. (2014, April 3). Major labels sue Russian social network vKontakte for “large-scale”
music piracy. The Guardian. Retrieved from https://www.theguardian.com/technology/
2014/apr/03/major-labels-vkontakte-russia-music-piracy
Dunbar-Hester, C. (forthcoming). Hacking community: The politics of inclusion in open technology cultures. Princeton, NJ: Princeton University Press.
Dunbar-Hester, C. (2008). Geeks, meta-geeks, and gender trouble: Activism, identity, and low-
power FM radio. Social Studies of Science, 38(2), 201–232. https://doi.org/
10.1177/0306312707082954
Dunbar-Hester, C. (2014). Low power to the people: Pirates, protest, and politics in FM radio
activism. Cambridge, MA: MIT Press.
Dunbar-Hester, C. (2018). If “diversity” is the answer, what is the question? Understanding diversity advocacy in open technology projects. In J. Vertesi & D. Ribes (Eds.), digitalSTS handbook. Princeton, NJ: Princeton University Press.
Durov, N. (2017, December 3). Telegram Open Network. Telegram. Retrieved from https://
drive.google.com/file/d/1lqVlrgiztnA5dkOHP7-ENDKT1FgZuCUV/view
Durov, P. (2012, May 18). Музыканты, писатели, журналисты, поэты и другие жители страны о том, что делать: Павел Дуров, основатель «ВКонтакте» [Musicians, writers, journalists, poets and other residents of the country on what to do: Pavel Durov, founder of “VKontakte”]. Afisha. Retrieved from https://www.afisha.ru/article/pavel-durov-vkontakte/
Durov, P. (2018, March 22). 200,000,000 monthly active users. Retrieved April 2, 2018, from
https://telegram.org/blog/200-million
Electronic Frontier Foundation. (n.d.). Glossary. Retrieved March 17, 2018, from https://
ssd.eff.org/en/glossary
Electronic Frontier Foundation. (2017, September 7). Assessing your risks. Retrieved March 17,
2018, from https://ssd.eff.org/en/module/assessing-your-risks
Engels, F. (1978). On authority. In R. Tucker (Ed.), The Marx-Engels reader (2nd edition). New
York, NY: W. W. Norton & Company.
Evans, J. (2014, November 18). WhatsApp partners with Open Whisper Systems to end-to-end encrypt billions of messages a day. TechCrunch. Retrieved from https://techcrunch.com/2014/11/18/end-to-end-for-everyone/
Facebook-WhatsApp data-sharing plan blocked by UK watchdog. (2018, March 14). BBC News.
Retrieved from http://www.bbc.com/news/technology-43404071
Farivar, C. (2016, October 4). FBI demands Signal user data, but there’s not much to hand over.
Ars Technica. Retrieved from https://arstechnica.com/tech-policy/2016/10/fbi-demands-
signal-user-data-but-theres-not-much-to-hand-over/
FBI pushes lawmakers for greater surveillance. (2014, November 7). Government Technology.
Retrieved from http://www.govtech.com/federal/FBI-Pushes-Lawmakers-for-Greater-
Surveillance.html
Fisher, D. R., & Wright, L. M. (2001). On utopias and dystopias: Toward an understanding of the discourse surrounding the Internet. Journal of Computer-Mediated Communication, 6(2). https://doi.org/10.1111/j.1083-6101.2001.tb00115.x
Forbes Staff. (2012, November 22). Kod Pavla Durova: pyat istorii iz zhizni Vkontakte i ee sozdatelya [Pavel Durov’s code: Five stories from the life of Vkontakte and its creator]. Forbes Russia. Retrieved from http://www.forbes.ru/sobytiya-opinion/lyudi/212150-kod-pavla-durova-pyat-istorii-iz-zhizni-vkontakte-i-ee-sozdatelya
Former employees of ASL19. (2018, March 1). Open letter regarding ASL19 and hostile work
environments in the Internet Freedom Community. Retrieved from https://ex-asl19-
staff.github.io/open-letter/
Freedom House. (n.d.). About Freedom on the Net. Retrieved February 17, 2018, from https://
freedomhouse.org/report-types/freedom-net
Freedom House. (2015). Freedom on the Net. Washington, DC: Freedom House. Retrieved from
https://freedomhouse.org/report/freedom-net/freedom-net-2015
Freeman, J. (1972). The tyranny of structurelessness. Berkeley Journal of Sociology, 17, 151–
164.
Fukuyama, F. (1992). The end of history and the last man. New York, NY: Free Press.
Fukuyama, F. (2014, June 6). At the “End of History” still stands democracy. The Wall Street
Journal. Retrieved from http://www.wsj.com/articles/at-the-end-of-history-still-stands-
democracy-1402080661
Galtung, J. (1964). A structural theory of aggression. Journal of Peace Research, 1(2), 95–119.
https://doi.org/10.1177/002234336400100203
Garling, C. (2011a, November 28). Twitter buys some Middle East Moxie. Wired. Retrieved
from https://www.wired.com/2011/11/twitter-buys-moxie/
Garling, C. (2011b, December 20). Twitter open sources its Android Moxie. Wired. Retrieved
from https://www.wired.com/2011/12/twitter-open-sources-its-android-moxie/
Gehl, R. (2018). Weaving the dark web: Legitimacy on Freenet, Tor, and I2P. Cambridge, MA: MIT Press.
Gellman, B., Timberg, C., & Rich, S. (2013, October 4). Secret NSA documents show campaign
against Tor encrypted network. The Washington Post. Retrieved from https://
www.washingtonpost.com/world/national-security/secret-nsa-documents-show-
campaign-against-tor-encrypted-network/2013/10/04/610f08b6-2d05-11e3-8ade-
a1f23cda135e_story.html?utm_term=.9460d226eba8
Ghosh, D., & Scott, B. (2018). Digital deceit: The technologies behind precision propaganda on
the internet. Washington, DC: New America. Retrieved from https://
www.newamerica.org/public-interest-technology/policy-papers/digitaldeceit/
Giffard, C. A. (1989). UNESCO and the media. New York, NY: Longman.
Glennon, M. J. (2015). National security and double government. New York, NY: Oxford
University Press.
Golumbia, D. (2013, September). Cyberlibertarianism: The extremist foundations of ‘Digital
Freedom.’ Talk, Clemson University, Clemson, SC. Retrieved from http://
www.uncomputing.org/wp-content/uploads/2014/02/cyberlibertarianism-extremist-
foundations-sep2013.pdf
Gorny, E. (2007). The Russian internet: Between kitchen-table talks and the public sphere. Art
Margins.
Greenberg, A. (2014, July 29). Your iPhone can finally make free, encrypted calls. Wired.
Retrieved from https://www.wired.com/2014/07/free-encrypted-calling-finally-comes-to-
the-iphone/
Greenberg, A. (2016, July 31). Meet Moxie Marlinspike, the anarchist bringing encryption to all
of us. Wired. Retrieved from https://www.wired.com/2016/07/meet-moxie-marlinspike-
anarchist-bringing-encryption-us/
Greenwald, G. (2014). No place to hide: Edward Snowden, the NSA, and the U.S. surveillance
state. New York, NY: Henry Holt and Co.
Guba, E. G., & Lincoln, Y. S. (2009). Paradigmatic controversies, contradictions, and emerging confluences. In N. K. Denzin, Y. S. Lincoln, & Sage Publications (Eds.), The Sage handbook of qualitative research (3rd edition, pp. 191–215). Thousand Oaks, CA: Sage.
Habermas, J. (1989). The structural transformation of the public sphere: An inquiry into a
category of bourgeois society. Cambridge, MA: MIT Press.
Hackett, R. A., & Carroll, W. K. (2006). Remaking media: The struggle to democratize public
communication. New York, NY ; London, UK: Routledge.
Hakim, D. (2014, December 2). Once celebrated in Russia, the programmer Pavel Durov chooses
exile. The New York Times. Retrieved from https://www.nytimes.com/2014/12/03/
technology/once-celebrated-in-russia-programmer-pavel-durov-chooses-exile.html?
ref=technology&_r=1
Hanson, E. C. (2008). The information revolution and world politics. Lanham, MD: Rowman &
Littlefield.
Hastings, M. (2012, January 18). Julian Assange: The Rolling Stone interview. Rolling Stone.
Retrieved from https://www.rollingstone.com/politics/news/julian-assange-the-rolling-
stone-interview-20120118#ixzz1jqjjK0dJ
He, B., & Warren, M. E. (2011). Authoritarian deliberation: The deliberative turn in Chinese
political development. Perspectives on Politics, 9(02), 269–289. https://doi.org/10.1017/
S1537592711000892
Herman, E. S., & McChesney, R. W. (1997). The global media: The new missionaries of
corporate capitalism. London, UK ; Washington, DC: Cassell.
Hill, K. (2016, August 29). Facebook recommended that this psychiatrist’s patients friend each
other. Splinter. Retrieved from https://splinternews.com/facebook-recommended-that-
this-psychiatrists-patients-f-1793861472
Hill, K. (2017, November 7). How Facebook figures out everyone you’ve ever met. Gizmodo.
Retrieved from https://gizmodo.com/how-facebook-figures-out-everyone-youve-ever-
met-1819822691
Hintz, A., & Milan, S. (2010). Social science is police science: Researching grass-roots activism.
International Journal of Communication, 4, 837–844.
Honeywell, L. (2016, June 7). He said, they said. Retrieved from https://hypatia.ca/2016/06/07/
he-said-they-said/
Howard, P. N., & Hussain, M. M. (2013). Democracy’s fourth wave? Digital media and the Arab Spring. Oxford, UK ; New York, NY: Oxford University Press.
Hughes, E. (1997). A cypherpunk’s manifesto. In The electronic privacy papers (pp. 285–287).
Hoboken, NJ: John Wiley & Sons.
Hussain, M. M., & Howard, P. N. (2013). What best explains successful protest cascades? ICTs
and the fuzzy causes of the Arab Spring. International Studies Review, 15(1), 48–66.
https://doi.org/10.1111/misr.12020
Interdoc. (1984). The Velletri Agreement. Antenna.
International Center for Not-for-Profit Law. (2016). Overview of the package of changes into a
number of laws of the Russian Federation designed to provide for additional measures to
counteract terrorism. Washington, DC: International Center for Not-for-Profit Law.
Retrieved from www.icnl.org/research/library/files/Russia/Yarovaya.pdf
Internet Engineering Task Force. (1998). IETF working group guidelines and procedures.
Retrieved from https://tools.ietf.org/html/rfc2418
Internews. (n.d.). Our history. Retrieved February 17, 2018, from https://www.internews.org/our-
history
Ioffe, J. (2011, December 10). Snow Revolution. The New Yorker. Retrieved from https://www.newyorker.com/news/news-desk/snow-revolution#entry-more
Ioffe, J. (2017, March 1). The state of Trump’s State Department. The Atlantic. Retrieved from https://www.theatlantic.com/international/archive/2017/03/state-department-trump/517965/
Issenberg, S. (2012, December 19). A more perfect union. MIT Technology Review. Retrieved
from https://www.technologyreview.com/s/509026/how-obamas-team-used-big-data-to-
rally-voters/
Jakobsen, J. B. (2015). A practical cryptanalysis of the Telegram messaging protocol (Master’s
thesis). Aarhus University, Aarhus, Denmark. Retrieved from https://cs.au.dk/~jakjak/
master-thesis.pdf
Jardine, E. (2015). The Dark Web dilemma: Tor, anonymity and online policing (Global
Commission on Internet Governance Paper Series No. 21). Waterloo, ON: Centre for
International Governance Innovation. Retrieved from http://papers.ssrn.com/sol3/
papers.cfm?abstract_id=2667711
Jeffries, A. (2018, February 21). Exclusive: Telegram is holding a secretive second pre-ICO sale.
The Verge. Retrieved from https://www.theverge.com/2018/2/21/17037606/telegram-
open-network-app-ico-cryptocurrency-ton
Jeong, S. (2017a, October 13). Sexual assault allegations levied against high profile security
researcher and activist. The Verge. Retrieved from https://www.theverge.com/
2017/10/13/16473996/morgan-marquis-boire-citizen-lab-sexual-assault
Jeong, S. (2017b, November 19). In chatlogs, celebrated hacker and activist confesses countless
sexual assaults. The Verge. Retrieved from https://www.theverge.com/
2017/11/19/16675704/morgan-marquis-boire-hacker-sexual-assault
Juris, J. S. (2008). Networking futures: The movements against corporate globalization. Durham,
NC: Duke University Press.
Kaldor, M. (2003). Global civil society: An answer to war. Cambridge, UK ; Malden, MA: Polity Press.
Kathuria, K. (2011). Bypassing internet censorship for news broadcasters. Presented at the
USENIX Workshop on Free and Open Communication on the Internet, San Francisco,
CA.
Kaye, D. (2015). Report on encryption, anonymity, and the human rights framework (Annual
report of the Special Rapporteur on freedom of expression). New York, NY: United
Nations Office of the High Commissioner on Human Rights. Retrieved from http://
www.ohchr.org/EN/Issues/FreedomOpinion/Pages/Annual.aspx
Kaye, D. (2018). The limits of supply-side internet freedom [Knight First Amendment Institute at
Columbia University]. Retrieved from https://knightcolumbia.org/content/limits-supply-
side-internet-freedom
Keddie, N. R. (2003). The revolution. In N. R. Keddie (Ed.), Modern Iran: Roots and results of
revolution (pp. 214–222). New Haven, CT: Yale University Press.
Kelty, C. M. (2014). The fog of freedom. In T. Gillespie, P. J. Boczkowski, & K. A. Foot (Eds.),
Media technologies: essays on communication, materiality, and society (pp. 195–220).
Cambridge, MA: The MIT Press.
King, G., Pan, J., & Roberts, M. E. (2013). How censorship in China allows government
criticism but silences collective expression. American Political Science Review, 107(02),
326–343. https://doi.org/10.1017/S0003055413000014
Kononov, N. (2013). Kod Durova: Realʹnaia istoriia sotsseti “VKontakte” i ee sozdatelia [Durov’s code: The real story of “Vkontakte” and its creator]. Moscow: Izdatelʹstvo “Mann, Ivanov i Ferber.”
Kononov, N., & Igumenov, V. (2011, June 24). Kod Pavla Durova [Code of Pavel Durov]. Forbes Russia. Retrieved from http://www.forbes.ru/ekonomika/kompanii/69666-kod-pavla-durova
Krishna, S. (2017a, June 26). Telegram will register with Russia but won’t share secure data.
Engadget. Retrieved from https://www.engadget.com/2017/06/28/telegram-not-sharing-
secure-data-russia/
Krishna, S. (2017b, October 16). Telegram fined after refusing to provide user data to Russia.
Engadget. Retrieved from https://www.engadget.com/2017/10/16/telegram-fined-by-
russian-court/
Krishna, S. (2018a, March 20). Telegram loses appeal over encryption keys in Russia. Engadget. Retrieved from https://www.engadget.com/2018/03/20/telegram-encryption-keys-russia-supreme-court/
Krishna, S. (2018b, April 6). Russia is getting closer to banning Telegram. Engadget. Retrieved
from https://www.engadget.com/2018/04/06/russia-block-telegram-filed-lawsuit/
Kubitschko, S. (2015). The role of hackers in countering surveillance and promoting democracy.
Media and Communication, 3(2), 77. https://doi.org/10.17645/mac.v3i2.281
Kumar, P., Prasad, R., & Maréchal, N. (2016). Progress and peril: The role of information and
communication technology in promoting and curtailing human rights. In Human rights
and technology: The 2030 agenda for sustainable development. San Jose, Costa Rica:
United Nations University for Peace.
Larson, S. (2016, June 7). Security engineer goes public about Appelbaum: “I felt afraid and violated.” The Daily Dot. Retrieved from http://www.dailydot.com/layer8/jacob-appelbaum-harassment-woman-comes-forward
Lebert, J. (2003). Wiring human rights activism: Amnesty International and the challenges of
information and communication technologies. In M. McCaughey & M. D. Ayers (Eds.),
Cyberactivism: online activism in theory and practice (pp. 209–232). New York, NY:
Routledge.
Lee, M. (2015, March 2). You should really consider installing Signal, an encrypted messaging
app for iPhone. The Intercept. Retrieved from https://theintercept.com/2015/03/02/signal-
iphones-encrypted-messaging-app-now-supports-text/
Lee, M. (2017, September 28). How to use Signal without giving out your phone number. The
Intercept. Retrieved from https://theintercept.com/2017/09/28/signal-tutorial-second-
phone-number/
Lessig, L. (2006). Code: Version 2.0 (2nd ed.). New York, NY: Basic Books.
Levine, Y. (2014a, July 16). Almost everyone involved in developing Tor was (or is) funded by the US government. Retrieved December 17, 2014, from http://pando.com/2014/07/16/tor-spooks/
Levine, Y. (2014b, November 14). How leading Tor developers and advocates tried to smear me after I reported their US Government ties. Retrieved December 17, 2014, from http://pando.com/2014/11/14/tor-smear/
Levine, Y. (2016, December 9). Spy-funded privacy tools (like Signal and Tor) are not going to protect us from President Trump. Retrieved March 17, 2018, from https://surveillancevalley.com/blog/government-backed-privacy-tools-are-not-going-to-protect-us-from-president-trump
Levine, Y. (2017a, September). The crypto-keepers: How the politics-by-app hustle conquered all. The Baffler, (36). Retrieved from https://thebaffler.com/salvos/the-crypto-keepers-levine
Levine, Y. (2017b, September 7). My story in The Baffler: Pavel Durov, CIA-funded privacy and the paranoid grift of crypto politics. Retrieved March 31, 2018, from https://surveillancevalley.com/blog/baffler-pavel-durov-cia-privacy-crypto-grift-mass-politics-by-app
Levine, Y. (2018). Surveillance valley: The rise of the military-digital complex. New York, NY: PublicAffairs.
Levy, A. (2018, February 15). Secretive messaging app Telegram is selling a $2 billion crypto
dream -- but skeptics smell a “ploy.” CNBC. Retrieved from https://www.cnbc.com/
2018/02/15/telegram-the-2-billion-crypto-offering-thats-dividing-tech.html
Levy, S. (2001). Crypto: How the code rebels beat the government, saving privacy in the digital
age. New York, NY: Penguin Books.
Liao, S. (2018, May 2). Telegram cancels its much-hyped initial coin offering. The Verge.
Retrieved from https://www.theverge.com/2018/5/2/17312046/telegram-initial-coin-
offering-ico-cancelled
Light, B., Burgess, J., & Duguay, S. (2016). The walkthrough method: An approach to the study of apps. New Media & Society. Advance online publication. https://doi.org/10.1177/1461444816675438
Ling, J. (2018, January 9). This app is helping Iranians beat Tehran’s internet censorship.
Motherboard. Retrieved from https://motherboard.vice.com/en_us/article/zmqqn9/
psiphon-app-is-helping-iran-protesters-beat-tehran-internet-censorship-citizen-lab?
utm_campaign=sharebutton
Locker, T. (2018, February 5). Wir haben mit Frauen über sexuelle Belästigung in der Hacker-
Szene gesprochen [We talked to women about sexual harassment in the hacker scene].
Motherboard. Retrieved from https://motherboard.vice.com/de/article/xw5qy4/wir-
haben-mit-frauen-uber-sexuelle-belastigung-in-der-hacker-szene-gesprochen
Lund, J. (2018, January 11). Signal partners with Microsoft to bring end-to-end encryption to
Skype. Retrieved from https://signal.org/blog/skype-partnership/
MacBride, S. (1980). Many voices, one world. Report of the International Commission for the
Study of Communication Problems. Paris, France: UNESCO.
MacKenzie, D. A., & Wajcman, J. (Eds.). (1999). The social shaping of technology (2nd ed).
Buckingham, UK ; Philadelphia, PA: Open University Press.
MacKinnon, R. (2008, June 18). Chinese internet research conference: Getting beyond “Iron
Curtain 2.0.” Retrieved November 14, 2017, from http://rconversation.blogs.com/
rconversation/2008/06/chinese-inter-1.html
MacKinnon, R. (2010). Networked authoritarianism in China and beyond: Implications for
global internet freedom. Hoover Institution and Center on Democracy, Development and
the Rule of Law, Stanford University. Retrieved from http://fsi.stanford.edu/sites/default/
files/evnts/media/MacKinnon_Libtech.pdf
MacKinnon, R. (2011). China’s “networked authoritarianism.” Journal of Democracy, 22(2), 32–
46.
MacKinnon, R. (2012). Consent of the networked: The world-wide struggle for internet freedom
(Paperback edition). New York, NY: Basic Books.
MacKinnon, R., Maréchal, N., & Kumar, P. (2016). Corporate accountability for a free and open
internet (GCIG Paper No. 45). Ottawa, ON and London, UK: Centre for International
Governance Innovation and Chatham House. Retrieved from https://www.cigionline.org/
publications/corporate-accountability-free-and-open-internet
MacKinnon, R. (2010, November 19). No quick fixes for internet freedom. The Wall Street Journal. Retrieved from https://www.wsj.com/articles/SB10001424052748704104104575622080860055498
Mahmoudi, F., & Bashar, F. (2018, January 12). Tech companies are complicit in censoring Iran
protests. Wired. Retrieved from https://www.wired.com/story/tech-companies-are-
complicit-in-censoring-iran-protests/
Marczak, B., Weaver, N., Dalek, J., Ensafi, R., Fifield, D., McKune, S., … Paxson, V. (2015). China’s Great Cannon. Toronto, ON: The Citizen Lab. Retrieved from https://citizenlab.ca/2015/04/chinas-great-cannon/
Maréchal, N. (2013). WikiLeaks and the public sphere: Dissent and control in cyberworld.
International Journal of Technology, Knowledge and Society, 9(4).
Maréchal, N. (2015). Ranking Digital Rights: Keeping the internet safe for advocacy. The
Fibreculture Journal, (26), 302–308. https://doi.org/10.15307/fcj.mesh.009.2015
Maréchal, N. (2017a). Networked authoritarianism and the geopolitics of information:
Understanding Russian internet policy. Media and Communication, 5(1), 29. https://
doi.org/10.17645/mac.v5i1.808
Maréchal, N. (2017b, May 7). The Macron hack was a warning shot. Slate. Retrieved from http://
www.slate.com/blogs/future_tense/2017/05/07/
the_macron_hack_was_a_warning_shot.html
Marlinspike, M. (2013, January 31). Creating a low-latency calling network. Retrieved March
28, 2018, from https://signal.org/blog/low-latency-switching/
Marlinspike, M. (2014, November 18). Open Whisper Systems partners with WhatsApp to
provide end-to-end encryption. Retrieved March 19, 2018, from https://signal.org/blog/
whatsapp/
Marlinspike, M. (2015, November 2). Just Signal. Retrieved from https://signal.org/blog/just-
signal/
Marlinspike, M. (2016a, March 30). Signal on the outside, Signal on the inside. Retrieved March
28, 2018, from https://signal.org/blog/signal-inside-and-out/
Marlinspike, M. (2016b, April 5). WhatsApp’s Signal Protocol integration is now complete.
Retrieved from https://signal.org/blog/whatsapp-complete/
Marlinspike, M. (2016c, May 18). Open Whisper Systems partners with Google on end-to-end
encryption for Allo. Retrieved from https://signal.org/blog/allo/
Marlinspike, M. (2016d, July 8). Facebook Messenger deploys Signal Protocol for end-to-end
encryption. Retrieved from https://signal.org/blog/facebook-messenger/
Marlinspike, M., & Acton, B. (2018, February 21). Signal Foundation. Retrieved from https://
signal.org/blog/signal-foundation/
Marwick, A., & Lewis, R. (2017). Media manipulation and disinformation online. New York,
NY: Data & Society Research Institute. Retrieved from https://datasociety.net/output/
media-manipulation-and-disinfo-online/
May, T. (1995). The crypto anarchist manifesto. Retrieved from http://www.spunk.org/library/
comms/sp000151.html
McCarthy, D. R. (2011). Open networks and the open door: American foreign policy and the
narration of the internet. Foreign Policy Analysis, 7(1), 89–111. https://doi.org/10.1111/j.
1743-8594.2010.00124.x
McCarthy, D. R. (2015). Power, information technology, and international relations theory: The
power and politics of US foreign policy and the internet. New York, NY: Palgrave
Macmillan.
McCarthy, J. D., & Zald, M. N. (1977). Resource mobilization and social movements: A partial
theory. American Journal of Sociology, 82, 1212–1241.
McChesney, R. W. (2013). Digital disconnect: How capitalism is turning the internet against
democracy. New York, NY: The New Press.
Melucci, A., Keane, J., & Mier, P. (1989). Nomads of the present: Social movements and
individual needs in contemporary society. Philadelphia, PA: Temple University Press.
Metz, C. (2016, April 5). Forget Apple vs. the FBI: WhatsApp just switched on encryption for a
billion people. Wired. Retrieved from https://www.wired.com/2016/04/forget-apple-vs-
fbi-whatsapp-just-switched-encryption-billion-people/
Metzl, J. F. (1996). Information technology and human rights. Human Rights Quarterly, 18(4),
705–746. https://doi.org/10.1353/hrq.1996.0045
Milan, S. (2013). Social movements and their technologies: Wiring social change. Houndmills,
Basingstoke, Hampshire ; New York, NY: Palgrave Macmillan.
Miles, M. B., Huberman, A. M., & Saldaña, J. (2014). Qualitative data analysis: A methods
sourcebook (3rd edition). Thousand Oaks, CA: SAGE Publications, Inc.
Moore, J. (2018, April 3). Telegram ICO sells-out 2nd round allocation, closes in on $2bn.
CryptoNews Review. Retrieved from http://cryptonewsreview.com/telegram-ico-sells-
out-2nd-round-allocation-closes-in-on-2bn/
Morozov, E. (2011). The net delusion: The dark side of internet freedom (1st ed). New York, NY:
Public Affairs.
Moussouris, K. (2017, December 17). Serious progress made on the Wassenaar Arrangement for
global cybersecurity. The Hill. Retrieved from http://thehill.com/opinion/cybersecurity/
365352-serious-progress-made-on-the-wassenaar-arrangement-for-global
Mueller, M. (2010). Networks and states: The global politics of internet governance. Cambridge, MA: MIT Press.
Mueller, M. (2017). Will the Internet fragment? Sovereignty, globalization, and cyberspace.
Cambridge, UK ; Malden, MA: Polity Press.
Mumford, L. (1964). Authoritarian and democratic technics. Technology and Culture, 5, 1–8.
Nakashima, E. (2016, April 12). FBI paid professional hackers one-time fee to crack San
Bernardino iPhone. The Washington Post. Retrieved from https://
www.washingtonpost.com/world/national-security/fbi-paid-professional-hackers-one-
time-fee-to-crack-san-bernardino-iphone/
2016/04/12/5397814a-00de-11e6-9d36-33d198ea26c5_story.html?utm_term=.
97ce0dff5852
Newman, L. H. (2018, January 11). Skype’s rolling out end-to-end encryption for hundreds of
millions of people. Wired. Retrieved from https://www.wired.com/story/skype-end-to-
end-encryption-voice-text/
Newton, C. (2016, November 10). Zuckerberg: the idea that fake news on Facebook influenced
the election is ‘crazy.’ The Verge. Retrieved from https://www.theverge.com/
2016/11/10/13594558/mark-zuckerberg-election-fake-news-trump
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York,
NY: New York University Press.
Nocetti, J. (2015). Contest and conquest: Russia and global internet governance. International
Affairs, 91(1), 111–130.
Norton, Q. (2014, December 9). Clearing the air around Tor. Retrieved December 17, 2014, from
http://pando.com/2014/12/09/clearing-the-air-around-tor/
Nye, J. S. (2004). Soft power: The means to success in world politics (1st edition). New York,
NY: Public Affairs.
Obama, B. (2011). International Strategy for Cyberspace. The White House, Washington, DC.
Retrieved from https://obamawhitehouse.archives.gov/sites/default/files/rss_viewer/
international_strategy_for_cyberspace.pdf
Ognyanova, K. (2015). In Putin’s Russia, information has you: Media control and internet
censorship. In M. M. Merviö (Ed.), Management and Participation in the Public Sphere
(pp. 62–78). Hershey, PA: IGI Global. Retrieved from http://services.igi-global.com/
resolvedoi/resolve.aspx?doi=10.4018/978-1-4666-8553-6
OHCHR. (2011). Guiding Principles on Business and Human Rights: Implementing the United Nations “Protect, Respect and Remedy” framework. Geneva: United Nations Office of the High Commissioner on Human Rights. Retrieved from http://www.ohchr.org/documents/publications/GuidingprinciplesBusinesshr_en.pdf
Olson, P. (2014, February 19). Exclusive: The rags-to-riches tale of how Jan Koum built WhatsApp into Facebook’s new $19 billion baby. Forbes. Retrieved from https://www.forbes.com/sites/parmyolson/2014/02/19/exclusive-inside-story-how-jan-koum-built-whatsapp-into-facebooks-new-19-billion-baby/#797557222fa1
O’Maley, D. (2015). Networking democracy: Brazilian internet freedom activism and the
influence of participatory democracy. Dissertation, Vanderbilt University, Nashville, TN.
Ong, T. (2017, September 25). Obama personally warned Mark Zuckerberg to take the threats of
fake news ‘seriously.’ The Verge. Retrieved from https://www.theverge.com/
2017/9/25/16360482/obama-mark-zuckerberg-fake-news-facebook
Owen, T. (2015). Disruptive power: The crisis of the state in the digital age. Oxford, UK ; New
York, NY: Oxford University Press.
Oxford Internet Institute. (2017). Computational propaganda worldwide (Working Paper).
Oxford, UK.
Padovani, C., & Pavan, E. (2009). The emerging global movement on communication rights: A new stakeholder in global communication governance? Converging at WSIS but looking beyond. In C. Rodríguez, D. Kidd, & L. Stein (Eds.), Making our media: Global initiatives toward a democratic public sphere. Cresskill, NJ: Hampton Press.
Palmer, D. (2018, February 19). $850 million raised in ICO so far, says Telegram. Coindesk.
Retrieved from https://www.coindesk.com/850-million-raised-in-ico-so-far-says-
telegram/
Pearce, K. E., & Kendzior, S. (2012). Networked authoritarianism and social media in Azerbaijan. Journal of Communication, 62(2), 283–298. https://doi.org/10.1111/j.1460-2466.2012.01633.x
Penney, J. W. (2017). Internet surveillance, regulation, and chilling effects online: A comparative case study. Internet Policy Review, 6(2). https://doi.org/10.14763/2017.2.692
Peters, B. (2016). How not to network a nation: The uneasy history of the Soviet internet.
Cambridge, MA: MIT Press.
Petrow, S. (2014, August 25). When psychiatrists are on Facebook, their patients can get a case
of TMI. The Washington Post. Retrieved from https://www.washingtonpost.com/national/
health-science/when-psychiatrists-are-on-facebook-their-patients-can-get-a-case-of-tmi/
2014/08/25/ed31e522-110a-11e4-9285-4243a40ddc97_story.html?
utm_term=.aff4b0965e71
Pinch, T. (1988). Understanding technology: Some possible implications of work in the
sociology of science. In B. Elliott (Ed.), Technology and social process. Edinburgh, UK:
Edinburgh University Press.
Poitras, L. (2017). Risk [Documentary film]. Neon.
Pomfret, J. (2010, May 12). U.S. risks China’s ire with decision to fund software maker tied to
Falun Gong. The Washington Post.
Posner, M. H. (2012, January). Internet freedom and the digital earthquake of 2011. Remarks to the State of the Net conference, Washington, DC. Retrieved from https://2009-2017.state.gov/j/drl/rls/rm/2012/180958.htm
Postill, J. (2014). Freedom technologists and the new protest movements: A theory of protest
formulas. Convergence: The International Journal of Research into New Media
Technologies, 20(4), 402–418. https://doi.org/10.1177/1354856514541350
Powers, S. M., & Jablonski, M. (2015). The real cyber war: The political economy of internet
freedom. Urbana, IL: University of Illinois Press.
Preston, W., Herman, E. S., & Schiller, H. I. (1989). Hope and folly: The United States &
UNESCO, 1945-1985. Minneapolis, MN: University of Minnesota Press.
Price, M. E. (2015). Free expression, globalism, and the new strategic communication. New
York, NY: Cambridge University Press.
Price, M. E., & Stremlau, N. (Eds.). (2017). Speech and society in turbulent times: Freedom of
expression in comparative perspective. New York, NY: Cambridge University Press.
Protect Our Spaces. (2017). Open Letter. Retrieved March 12, 2018, from https://
protectourspaces.org/
Quénelle, B. (2016, June 6). Pavel Durov’s war on state power. The World Weekly. Retrieved
from https://www.theworldweekly.com/reader/view/2862/pavel-durovs-war-on-state-
power-
Romanosky, S., Libicki, M. C., Winkelman, Z., & Tkacheva, O. (2015). Internet Freedom
software and illicit activity: Supporting human rights without enabling criminals. Santa
Monica, CA: RAND Corporation.
Rothrock, K. (2016, October 25). Russia is reportedly banning LinkedIn. Global Voices. Retrieved from https://globalvoices.org/2016/10/25/russia-is-reportedly-banning-linkedin/
Said, E. W. (1979). Orientalism (1st Vintage Books ed). New York, NY: Vintage Books.
Said, E. W. (1994). Culture and imperialism (1st Vintage Books ed). New York, NY: Vintage
Books.
Salter, L. (2003). Democracy, new social movements, and the internet. In M. McCaughey & M.
D. Ayers (Eds.), Cyberactivism: online activism in theory and practice (pp. 117–144).
New York, NY: Routledge.
Sargsyan, T. (2016). Data localization and the role of infrastructure for surveillance, privacy, and
security. International Journal of Communication, 10, 17.
Sauter, M. (2014). The coming swarm: DDoS actions, hacktivism, and civil disobedience on the
Internet. New York, NY ; London, UK: Bloomsbury Academic.
Scheer, R. (2015). They know everything about you: How data-collecting corporations and
snooping government agencies are destroying democracy. New York, NY: Nation Books.
Schiller, H. I. (1976). Communication and cultural domination. White Plains, NY: International
Arts and Sciences Press.
Schneiderman, E. T. (2013). A report on arrests arising from the New York City Police Department’s stop-and-frisk practices (p. 86). Albany, NY: New York State Attorney General Civil Rights Division. Retrieved from http://www.ag.ny.gov/pdfs/OAG_REPORT_ON_SQF_PRACTICES_NOV_2013.pdf
Schneier, B. (2015). Data and Goliath: The hidden battles to collect your data and control your
world (First edition). New York, NY: W.W. Norton & Company.
Schonfeld, E. (2006, February 8). Analysis: Yahoo’s China problem. CNN Money. Retrieved
from http://money.cnn.com/2006/02/08/technology/yahoo_china_b20/
Schulze, M. (2017). Clipper meets Apple vs. FBI—A comparison of the cryptography discourses
from 1993 and 2016. Media and Communication, 5(1), 54. https://doi.org/10.17645/
mac.v5i1.805
Scott-Railton, J., & Hardy, S. (2014). Malware attack targeting Syrian ISIS critics. Toronto, ON: The Citizen Lab. Retrieved from https://citizenlab.ca/2014/12/malware-attack-targeting-syrian-isis-critics/
SEC Form D submitted by TON Issuer Inc. (2018). Retrieved from https://www.sec.gov/
Archives/edgar/data/1729650/000095017218000030/xslFormDX01/primary_doc.xml
Shaw, A. (2011). Insurgent expertise: The politics of free/livre and open source software in
Brazil. Journal of Information Technology & Politics, 8(3), 253–272. https://doi.org/
10.1080/19331681.2011.592063
Shelton, M. (2017, September 19). Using Signal without giving your phone number. Retrieved March 19, 2018, from https://medium.com/@mshelton/using-signal-without-giving-your-phone-number-3a575580f652
Shirky, C. (2011). The political power of social media: technology, the public sphere, and social
change. Foreign Affairs, 90(1), 28–41.
Signal. (2018). Privacy policy. Retrieved March 17, 2018, from https://signal.org/signal/privacy/
Skocpol, T., & Williamson, V. (2012). The Tea Party and the remaking of Republican conservatism. New York, NY: Oxford University Press.
Smee, B. (2018, March 19). Queensland bans ministers from using private email and message
apps. The Guardian. Retrieved from https://www.theguardian.com/media/2018/mar/19/
queensland-bans-ministers-from-using-private-email-and-message-apps?
utm_term=Autofeed&CMP=twt_b-gdnnews#link_time=1521464655
Smith-Spark, L. (2010, May 12). US “to give $1.5m to Falun Gong internet freedom group.”
BBC News. Retrieved from http://news.bbc.co.uk/2/hi/8678760.stm
Soldatov, A., & Borogan, I. (2013). Russia’s surveillance state. World Policy Journal. Retrieved
from http://www.worldpolicy.org/journal/fall2013/Russia-surveillance
Soldatov, A., & Borogan, I. (2015). The red web: The struggle between Russia’s digital dictators and the new online revolutionaries (First edition). New York, NY: PublicAffairs.
Soldatov, A., & Borogan, I. (2017). The red web: The Kremlin’s wars on the internet (First Trade Paperback Edition). New York, NY: PublicAffairs.
Sparrow, A. (2017, March 26). WhatsApp must be accessible to authorities, says Amber Rudd.
The Guardian. Retrieved from https://www.theguardian.com/technology/2017/mar/26/
intelligence-services-access-whatsapp-amber-rudd-westminster-attack-encrypted-
messaging
Starosielski, N. (2015). The undersea network. Durham, NC ; London, UK: Duke University
Press.
Steele, S. (2016, June 4). Statement. Retrieved from https://blog.torproject.org/statement
Strauss, A. L., & Corbin, J. M. (1990). Basics of qualitative research: Grounded theory
procedures and techniques. Newbury Park, CA: Sage Publications.
Stubbs, J., & Ostroukh, A. (2018, April 13). Russia to ban Telegram messenger over encryption
dispute. Reuters. Retrieved from https://www.reuters.com/article/us-russia-telegram-
block/russia-to-ban-telegram-messenger-over-encryption-dispute-idUSKBN1HK10B
Telegram. (n.d.). Telegram FAQ. Retrieved April 4, 2018, from https://telegram.org/faq
Telegram. (2018). Telegram Primer. Retrieved from https://gramfoundation.io/whitepaper.pdf
The Citizen Lab. (2015). Citizen Lab Summer Institute. Retrieved from http://citizenlab.org/
summerinstitute/
Thierer, A., & Szoka, B. (2009, August 12). Cyber-libertarianism: The case for real internet
freedom. Retrieved February 24, 2018, from https://techliberation.com/2009/08/12/cyber-
libertarianism-the-case-for-real-internet-freedom/
Tilly, C. (1984). Social movements and national politics. In C. Bright & S. Harding (Eds.), State-
making and social movements: Essays in history and theory. Ann Arbor, MI: University
of Michigan Press.
Tilly, C. (2006). Regimes and repertoires. Chicago, IL: University of Chicago Press.
Touraine, A. (1981). The voice and the eye: An analysis of social movements. Cambridge, UK;
New York, NY; Paris, France: Cambridge University Press ; Maison des sciences de
l’homme.
Tréguer, F. (2017a). Gaps and bumps in the political history of the internet. Internet Policy Review, 6(4). https://doi.org/10.14763/2017.4.714
Tréguer, F. (2017b). Intelligence reform and the Snowden paradox: The case of France. Media
and Communication, 5(1), 17. https://doi.org/10.17645/mac.v5i1.821
Tsai, W.-H. (2016). How ‘networked authoritarianism’ was operationalized in China: Methods
and procedures of public opinion control. Journal of Contemporary China, 25(101), 731–
744. https://doi.org/10.1080/10670564.2016.1160506
Tsotsis, A. (2014, February 24). Telegram saw 8M downloads after WhatsApp got acquired. TechCrunch. Retrieved from https://techcrunch.com/2014/02/24/telegram-saw-8m-downloads-after-whatsapp-got-acquired/
Tufekci, Z. (2017a). Twitter and tear gas: The power and fragility of networked protest. New
Haven, CT ; London, UK: Yale University Press.
Tufekci, Z. (2017b, September). We’re building a dystopia just to make people click on ads. TED Talk, New York, NY. Retrieved from https://www.ted.com/talks/zeynep_tufekci_we_re_building_a_dystopia_just_to_make_people_click_on_ads
Turner, R. H., & Killian, L. M. (1957). Collective behavior (3rd edition). Englewood Cliffs, NJ: Prentice-Hall.
UNESCO. (1976, October). Document COM/MD/38. UNESCO.
U.S. Department of Justice. (2015). Report on the President’s surveillance program (p. 746). Washington, DC: Office of the Inspector General. Retrieved from https://oig.justice.gov/reports/2015/PSP-09-18-15-full.pdf
U.S. Department of State. (2006a). Global Internet Freedom Task Force. Retrieved February 25,
2018, from https://2001-2009.state.gov/g/drl/lbr/c26696.htm
U.S. Department of State. (2006b, December 28). Global Internet Freedom Task Force (GIFT)
strategy: A blueprint for action. Retrieved February 25, 2018, from https://
2001-2009.state.gov/g/drl/rls/78340.htm
U.S. Securities and Exchange Commission. (2017, July 25). Investor bulletin: Initial coin
offerings. Retrieved from https://www.investor.gov/additional-resources/news-alerts/
alerts-bulletins/investor-bulletin-initial-coin-offerings
Usability.gov. (2018). Use cases. Retrieved from https://www.usability.gov/how-to-and-tools/
methods/use-cases.html
Vaidhyanathan, S. (2018). Antisocial media: How Facebook disconnects us and undermines
democracy. New York, NY: Oxford University Press.
Van Rooy, A. (2004). The global legitimacy game: Civil society, globalization, and protest.
Houndmills, Basingstoke, Hampshire; New York, NY: Palgrave Macmillan.
Vkontakte founder Pavel Durov learns he’s been fired through media. (2014, April 22). The
Moscow Times. Retrieved from https://themoscowtimes.com/articles/vkontakte-founder-
pavel-durov-learns-hes-been-fired-through-media-34425
Wallerstein, I. M. (1974). The modern world-system: Capitalist agriculture and the origins of the
European world-economy in the sixteenth century. New York, NY: Academic Press.
Walt, V. (2016, February 23). With Telegram, a reclusive social media star rises again. Fortune. Retrieved from http://fortune.com/telegram-pavel-durov-mobile-world-congress/
West, S. M. (2017). Data capitalism: Redefining the logics of surveillance and privacy. Business & Society. Advance online publication. https://doi.org/10.1177/0007650317718185
West, S. M. (2018). Cryptographic imaginaries and networked publics: A cultural history of
encryption technologies, 1967-2017. Dissertation, University of Southern California, Los
Angeles, CA.
WhatsApp. (2018). About. Retrieved March 29, 2018, from https://www.whatsapp.com/about/
White, S., & McAllister, I. (2014). Did Russia (nearly) have a Facebook revolution in 2011?
Social media’s challenge to authoritarianism. Politics, 34(1), 72–84. https://doi.org/
10.1111/1467-9256.12037
Wildavsky, A. (1992). Political implications of budget reform: A retrospective. Public
Administration Review, 52(6), 594–599.
Winner, L. (1980). Do artifacts have politics? Daedalus, 109(1), 121–136.
Wire. (n.d.). Security & Privacy. Retrieved March 18, 2017, from https://wire.com/en/security/
Wire. (2016, May 17). Axolotl and Proteus. Retrieved March 21, 2018, from https://
medium.com/@wireapp/axolotl-and-proteus-788519b186a7
Wire. (2017a, October 23). Wire privacy white paper. Retrieved from https://wire-docs.wire.com/
download/Wire+Privacy+Whitepaper.pdf
Wire. (2017b, October 25). Wire -- open for business. Retrieved March 21, 2018, from https://
medium.com/@wireapp/wire-open-for-business-2c535033cf9a
Wire. (2017c, December 7). Tech industry veteran Morten Brøgger joins Wire as the new CEO.
Retrieved March 21, 2018, from https://medium.com/@wireapp/tech-industry-veteran-
morten-br%C3%B8gger-joins-wire-as-the-new-ceo-a320b567931d
Wolfson, T. (2014). Digital rebellion: The birth of the cyber left. Urbana, IL: University of
Illinois Press.
Woolley, S. C. (2016). Automating power: Social bot interference in global politics. First
Monday, 21(4). https://doi.org/10.5210/fm.v21i4.6161
WSIS Civil Society Plenary. (2003). Civil society declaration to the World Summit on the
Information Society. Geneva, CH. Retrieved from https://www.itu.int/net/wsis/docs/
geneva/civil-society-declaration.pdf
Wu, J. (2017, November 12). A look inside China’s propaganda bureaucracy. Global Voices
Advox. Retrieved from https://advox.globalvoices.org/2017/11/12/a-look-inside-chinas-
propaganda-bureaucracy/
Xiao, Q. (2011). The battle for the Chinese internet. Journal of Democracy, 22(2), 47–61. https://
doi.org/10.1353/jod.2011.0020
Yaffa, J. (2013, August 7). Is Pavel Durov, Russia’s Zuckerberg, a Kremlin target? Bloomberg
Businessweek. Retrieved from https://www.bloomberg.com/news/articles/2013-08-01/is-
pavel-durov-russias-zuckerberg-a-kremlin-target
Yang, G. (2009). The power of the internet in China: Citizen activism online. New York, NY:
Columbia University Press.
York, J. C. (2013). The internet and transparency beyond WikiLeaks. In B. Brevini, A. Hintz, &
P. McCurdy (Eds.), Beyond WikiLeaks: implications for the future of communications,
journalism and society (pp. 229–235). New York, NY: Palgrave Macmillan.
York, J. C. (2015, February 6). There are other funding options than the USG. Retrieved from
https://jilliancyork.com/2015/02/06/there-are-other-funding-options-than-the-usg/
York, J. C. (2017, August 4). How to use Signal without giving out your phone number: A
gendered security issue. Motherboard. Retrieved from https://motherboard.vice.com/
en_us/article/9kaxge/how-to-use-signal-without-giving-out-your-phone-number-a-
gendered-security-issue
Zeller, T., Jr. (2006, February 16). Web firms are grilled on dealings in China. The New York
Times. Retrieved from http://www.nytimes.com/2006/02/16/technology/web-firms-are-
grilled-on-dealings-in-china.html
Zeskind, L. (2012). A nation dispossessed: The Tea Party movement and race. Critical Sociology, 38(4), 495–509.
Zittrain, J. (2008). The future of the internet and how to stop it. New Haven, CT: Yale University
Press.
Zuboff, S. (2015). Big other: Surveillance capitalism and the prospects of an information civilization. Journal of Information Technology, 30(1), 75–89. https://doi.org/10.1057/jit.2015.5