Copyright 2020 Aalok Mehta
DOMESTIC AND INTERNATIONAL TENSIONS IN ARTIFICIAL
INTELLIGENCE POLICY
by
Aalok Mehta
A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(COMMUNICATION)
August 2020
ACKNOWLEDGMENTS
This dissertation—like many others, I imagine—almost didn’t happen. At least a dozen
times, as progress stalled and frustration mounted, my fingers hovered over the keyboard, ready
to send my musings into digital oblivion. The kind words and continued encouragement from
dozens of colleagues, friends, and loved ones are the only things that made it possible to finish.
My committee deserves boundless gratitude. Profs. Mike Ananny, François Bar, and
most of all, Jonathan Aronson, my chair, provided endless patience, guidance, and wisdom, not
just over substantive issues but also in navigating the many administrative steps necessary to
completing a doctorate.
I also owe much to the fellow members of the Annenberg community. My fellow
students challenged me in profound ways. The many professors and staff I worked with over the
years were also invaluable partners, imparting important skills and knowledge and
helping to shape the worldview I’ve laid out in this document.
The D.C. policy world is at once both vast and surprisingly intimate. I owe a great debt to
the many experts who provided generous guidance when I first entered that space and to the
organizations within and without government that took a chance on hiring me. To the extent that
my words have any weight, that is because they have been through the crucible of hundreds of
conversations with policymakers at the highest levels of government and civil society, whose
effortless insight I can only hope to emulate one day.
No one helped me more through this writing than Victoria. Thank you for your keen
eyes, sharp intellect, and loving reassurance, which helped me persevere even through the
darkest moments. My parents and brother deserve unreserved gratitude. So too do my many
friends, who not only offered kind words but often showed genuine interest in my ideas.
This dissertation was completed in a surreal fashion, as I sat sheltering in place in a
Washington, D.C. rowhouse, in fits and starts amid the furious scramble to piece together
legislation that could address the extraordinary threat of a pandemic. COVID-19 has
fundamentally transformed the world. Nothing will ever be the same. In some ways, no doubt,
this new world will undercut my words. But that is to be expected. All attempts to peer into the
future are fraught with uncertainty. That is no excuse not to try. I hope this work carries within it
some truth, nevertheless.
TABLE OF CONTENTS
ACKNOWLEDGMENTS
LIST OF FIGURES
ABBREVIATIONS
ABSTRACT
PREFACE
CHAPTER 1: INTRODUCTION AND BACKGROUND
1.1 Problem Statement
1.2 Research Questions and Hypotheses
1.3 Significance
1.3.1 Cybersecurity.
1.3.2 Disinformation.
1.4 Definitions
1.5 Methodology
1.6 Assumptions and Limitations
CHAPTER 2: LITERATURE REVIEW
2.1 Overview of AI and Machine Learning
2.2 The Market Failure Theory of Public Policy
2.3 The Many Faces of AI
2.4 AI as General Purpose Technology
2.5 AI and Public Policy
2.5.1 AI and the future of work.
2.5.2 AI and algorithmic bias.
2.5.3 AI ethics and governance.
2.5.4 Great-power competition and the race for technological superiority.
2.6 Gaps in AI Research
CHAPTER 3: ECONOMIC ORGANIZATION OF AI COMPANIES
3.1 Introduction
3.2 Computational Power
3.3 Algorithms and Human Capital
3.4 Data
3.5 Scale Economies in the AI Industry
3.6 Conclusion: Economic Structures in the AI Industry
CHAPTER 4: THE EROSION OF U.S. INDUSTRIAL ADVANTAGES
4.1 Introduction: U.S. Leadership in AI
4.2 Regulation and Antitrust
4.2.1 Deregulation of the “information services” sector.
4.2.2 The consumer welfare approach to antitrust.
4.2.3 Case study: the T-Mobile/Sprint merger.
4.2.4 Case study: the Microsoft antitrust case’s legacy.
4.2.5 Conclusion.
4.3 Privacy
4.3.1 Consumer privacy protection.
4.3.2 The “right to be let alone.”
4.3.3 The consumer privacy resurgence.
4.3.4 Case study: COPPA and YouTube.
4.3.5 Conclusion.
4.4 Online Speech and Content
4.4.1 The 26 words that shaped the modern internet.
4.4.2 230: sword and shield.
4.4.3 Conclusion.
4.5 Conclusion: Implications for the Technology Industry
CHAPTER 5: INSTITUTIONAL CONSTRAINTS ON AI DEVELOPMENT: A COMPARATIVE ANALYSIS OF THE UNITED STATES AND CHINA
5.1 Introduction
5.2 Industrial Policy
5.2.1 Overt and covert policy.
5.2.2 Direct government investment.
5.2.3 Technology and IP transfer.
5.2.4 Case study: 5G competition.
5.3 Data Policy
5.3.1 Government data collection.
5.3.2 Private data collection.
5.3.3 Data labeling industries.
5.4 Absorptive Capacity
5.4.1 Institutional knowledge.
5.4.2 Institutional inertia.
5.4.3 Conclusion.
5.5 Asymmetrical Vulnerability
5.5.1 Misinformation.
5.5.2 Cybersecurity.
5.5.3 Elections.
5.6 Institutional Constraints and Limits on Policy
CHAPTER 6: CONCLUSION
REFERENCES
LIST OF FIGURES
Figure 1: Representational models of a shallow neural network and deep neural network (Lee, Shin, & Realff, 2018, p. 113)
Figure 2: AI research quadrants
Figure 3: Equity investments in AI start-ups by country (OECD, 2018)
Figure 4: Log scale of computing power used by AI programs over time (OpenAI, 2019a)
Figure 5: U.S. AI human capital flows
Figure 6: Chinese international student return home rates
ABBREVIATIONS
5G – “Fifth” Generation Wireless Technology
AG – Attorney General
AI – Artificial Intelligence
APA – Administrative Procedure Act
API – Application Programming Interface
AT&T – American Telephone and Telegraph Company
AWS – Amazon Web Services
BIAS – Broadband Internet Access Service
BIS – Bureau of Industry and Security
CCPA – California Consumer Privacy Act
CDA – Communications Decency Act (Title V of the Telecommunications Act of 1996)
CFIUS – The Committee on Foreign Investment in the United States
COICA – Combating Online Infringement and Counterfeits Act
COPPA – Children's Online Privacy Protection Act of 1998
CPU – Central Processing Unit
CRS – Congressional Research Service
DMCA – Digital Millennium Copyright Act of 1998
DNS – Domain Name System
DOJ – U.S. Department of Justice
DSL – Digital Subscriber Line
ECPA – Electronic Communications Privacy Act of 1986
EAR – Export Administration Regulations
EU – European Union
FCC – Federal Communications Commission
FDI – Foreign Direct Investment
FERPA – Family Educational Rights and Privacy Act of 1974
FISA – Foreign Intelligence Surveillance Act of 1978
FOSTA-SESTA – Allow States and Victims to Fight Online Sex Trafficking Act of 2017
(combining the Fight Online Sex Trafficking Act and the Stop Enabling Sex Traffickers Act)
FTC – Federal Trade Commission
FTCA – Federal Trade Commission Act of 1914
GAO – Government Accountability Office
GDPR – General Data Protection Regulation
GPT – General Purpose Technology
GPU – Graphics Processing Unit
HIPAA – Health Insurance Portability and Accountability Act of 1996
HSR – Hart-Scott-Rodino Antitrust Improvements Act of 1976
ICT – Information and Communication Technology
IOT – Internet of Things
IP – Intellectual Property
ISP – Internet Service Provider
ITU – International Telecommunication Union
LTE – Long Term Evolution
MFJ – Modification of Final Judgment
ML – Machine Learning
NEC – National Economic Council
NGO – Non-Governmental Organization
NPRM – Notice of Proposed Rule Making
NR – New Radio
NSF – National Science Foundation
NTIA – National Telecommunications and Information Administration
OECD – Organisation for Economic Co-operation and Development
OMB – Office of Management and Budget
OS – Operating System
OTA – Office of Technology Assessment
PIPA – Preventing Real Online Threats to Economic Creativity and Theft of Intellectual
Property Act of 2011 (also known as the PROTECT IP Act)
P.L. – United States Public Law
RAN – Radio Access Network
R&D – Research and Development
SOPA – Stop Online Piracy Act
UAS – Unmanned Aerial Systems
USA FREEDOM Act – Uniting and Strengthening America by Fulfilling Rights and Ensuring
Effective Discipline Over Monitoring Act of 2015
USA PATRIOT Act – Uniting and Strengthening America by Providing Appropriate Tools
Required to Intercept and Obstruct Terrorism Act of 2001
USTR – Office of the U.S. Trade Representative
VC – Venture Capital
ABSTRACT
Much ink has been spilled about the global race to “win” dominance over new artificial
intelligence (AI) technologies and accrue spillover economic and national security benefits.
Through detailed investigation of AI competition between the United States and China, this
project develops a theory of emergent tradeoffs among domestic and international technology
policy priorities. The study first develops an economic model of the AI industry, which is used to
disaggregate essential inputs that have allowed U.S. technology companies to dominate
development of commercially useful AI algorithms. Economic, historical, and legal aspects of
three areas of communications policy—antitrust, privacy, and liability protections for online
platforms—are examined to understand their role in facilitating this dominance. The results
suggest that domestic policy trends threaten the ability of these companies to continue rapid
development of AI technologies and undermine U.S. international policy objectives. An analysis
of institutional constraints in the United States and China establishes that such policy tensions
disproportionately impact the U.S. AI industry.
PREFACE
The world is in the midst of a profound technological transformation. New information
technologies, leveraging advances in algorithmic design and development, exponential increases
in the availability of data, and continued improvements to computing power and miniaturization,
promise to drastically reshape every aspect of human life. From health care to transportation,
policing to politics, these changes will leave no arena of modern life untouched.
It would not have been surprising to read those words 25 years ago, at the dawn of the
commercial internet. Indeed, many of the heady predictions from that era have come true.
Advanced networking technologies, digitization, and broadband—especially mobile
connectivity—have radically altered modern life for the majority of humanity that now has
access to modern communications tools.[1]
Cellular technology provides billions of people with
real-time, on-the-go access to vast amounts of knowledge. The world has shrunk, as 24-hour
access to mobile telephony, messaging tools, and online platforms facilitates communications
with family members, friends, medical professionals, and governments across time and space.
Equally profound are the economic impacts, driven by a wave of innovation in network
infrastructure and digital services. E-commerce has dramatically streamlined the process of
searching for, researching, and obtaining goods and services of any kind remotely, while
telecommuting has improved business operations and increased access to job opportunities. And
while no worker has gone untouched by email, instant messaging, online forums, or other
modern digital technologies, more than 40 percent of U.S. workers are now “knowledge
workers” who rely heavily on computing and other technologies for creative problem-solving.[2]

[1] This paper generally equates “modern communications tools” with internet access. As of 2019, 53.6% of the world’s population is actively using the internet (ITU, 2019, p. 1). Not all users may be accessing the internet through a broadband technology; they may be connecting with legacy narrowband wireline or wireless technology.
The current era of transformation both builds on and represents a dramatic break with the
prior era of digitization and networking. Instead of new digital communications tools, this
technological age is driven primarily by developments in automation and algorithms, especially
improvements in artificial intelligence (AI) computational techniques. These advances—
particularly in a subfield known as machine learning (ML)—have allowed programs to gain an
ever-growing capacity to replicate or even exceed aspects of human cognition, including the
capacity to engage in thought-like processes, reasoning, and rational decision-making (Russell &
Norvig, 2016, p.1-5).
This is evolutionary, in the sense that the new era directly leverages technological and
political choices that enabled the internet and fostered novel online platforms and services. For
instance, the growth of technology companies that collect, store, and process enormous amounts
of online consumer data—made possible only by a deliberate set of permissive policy
decisions—provides one of the essential inputs that has enabled ML and other advanced AI
techniques to progress rapidly in recent years. But this transformation is also revolutionary, in
that it is upending the status quo at a pace and scale unprecedented in human history. In
particular, the mass deployment of tools with automated decision-making capabilities will
eliminate the inherent rate-limiting aspect of human cognition from many sectors, from warfare
to cybersecurity, reshaping them in profound ways.

[2] Following the methodology of Dvorkin and Shell (2016), “knowledge worker” is defined as a member of an occupational group engaged in nonroutine cognitive tasks requiring “mental skills and adapting to the project at hand.” More precisely, a knowledge worker is anyone belonging to the “Management, professional, and related occupations” category of the Current Population Survey, Table A-13 (available at https://fred.stlouisfed.org/release/tables?rid=50&eid=3149). The Current Population Survey is a monthly survey of labor force statistics compiled jointly by the U.S. Census Bureau and the U.S. Bureau of Labor Statistics. The proportion of total U.S. employed persons in this category has increased from 31.3% in August 1994 to 40.3% in August 2019 (author’s analysis).
The ascendance of new machine learning capabilities, however, coincides with a drastic
transformation in society’s relationship with technology. In recent years, numerous experts,
organizations, and government officials as well as the general public have raised increasingly
sharp concerns about the political and economic power of large online platform and e-commerce
companies, most notably the “Big Five” firms of Apple, Microsoft, Amazon, Google, and
Facebook. Over the past four years, for instance, the share of Americans who see technology
companies as having a positive impact on the United States has dropped dramatically, from 71%
to 50% (Doherty & Kiley, 2019).
This anti-technological turn is notable for its breadth and depth, not just in the sheer
numbers of people who have voiced anxieties, but also in their diversity. The shift is
international in character, spanning both Western democracies and autocratic regimes. It is
democratic, in the sense that both the general population and elite lawmakers, abroad and in the
United States, have raised the alarm. And it is bipartisan in nature (Doherty & Kiley, 2019), at a
time when politics in the United States and elsewhere have become more divided than at any
point in modern history (Pew Research Center, 2014).
This broad darkening of attitudes toward technology companies highlights three important factors
that complicate U.S. AI policy. First, these attitudinal changes are not a coincidence: They are
rooted in a longstanding set of concerns that the impacts of new technologies are outstripping
society’s ability to manage them. A growing reality of modern life is that many of the world’s
most pressing problems arise from previous rounds of technological innovation. Perhaps the
most visceral example is anthropogenic climate change: an existential threat to humanity
originating directly from advancements in industrial production and engineering technology. But
such effects are equally apparent in the communications realm. The adoption of new digital
communications tools straightforwardly underlies many of society’s contemporary ills: intrusive
robocalls, cyberattacks, misinformation, mass surveillance, automated discrimination, electoral
interference, economic concentration, and more. There are also good reasons to believe that the
pace of technological innovation is increasing, and that each new generation of innovations
creates more serious and substantial negative impacts for society. These factors prime the pump
for more aggressive public intervention to control the trajectory of technological deployment.
Second, there is a fundamental conflict inherent to U.S. policy around artificial
intelligence. The United States intends to maintain technological superiority in AI and related
automation technologies and is leaning heavily on the private sector to help achieve that goal. At
the same time, domestic policies are moving, in fits and starts, toward stronger antitrust
enforcement for the technology sector, greater privacy protections, and erosion of laissez faire
policies around online content. This tension between international and domestic policies may
erode the overall capacity of U.S. firms to obtain the data and other resources needed to train
advanced AI algorithms, undermining the ability of the United States to gain an upper hand on
the global stage.
Finally, the shift highlights deeply entrenched political, economic, and institutional
factors within the United States that set fundamental limits on its ability to creatively manage
such policy tensions—an obstacle that many of the country’s foreign competitors do not face to
the same degree or kind. For example, despite growing concerns about technology companies,
the United States continues to maintain a deep-rooted veneration for innovation culture and
economic dynamism, particularly the high-growth, highly scalable businesses that originate from
Silicon Valley and other hubs of technological invention. This is reflected in the country’s
general approach to economic policy, which includes a strong preference for light-touch
regulation and outright hostility to anything even resembling governmental intrusion into private
markets. These deeply rooted preferences limit the range of tools the U.S. government can
employ to balance national and international objectives. Compared to other nations, the United
States is also poorly equipped to quickly assess and respond to dynamic technological issues.
Constitutional limitations, muddled jurisdictional authority over emerging technologies, a
deliberate policy of starving lawmakers and regulators of resources needed for robust oversight,
the general cumbersome nature of democratic policymaking processes, and other biases against
legislative and regulatory action ensure that U.S. responses to novel technological issues will be
lumbering, at best.
My main argument in this dissertation is straightforward. The United States currently
leads the world in the development of commercially useful AI technologies. This advantage
derives in large part from policy choices the country made around previous generations of
communications technologies, particularly broadband and online services, that have allowed U.S.
technology companies to amass vast stores of data and other scale advantages useful for AI
development. However, new domestic pressures relating to antitrust, privacy, and content
moderation are eroding this permissive policy environment, in turn undermining the U.S. ability
to compete for international AI superiority. Moreover, the United States faces unique and
entrenched institutional factors that further limit its ability to manage tradeoffs between national
and international priorities, biasing the country towards internally inconsistent policy
frameworks, restricting its ability to manage the negative impacts of new technological
developments, and leaving it weakened compared to global competitors.
These ideas are rooted in an underlying framework that sees public policy—especially
around technology—as deeply susceptible to emergent effects. This worldview envisions policy
initiatives as akin to squeezing a balloon: Pressure applied to one area is likely to increase stress
on others in ways that are often unexpected or surprising. My fundamental claim is that domestic
actions related to internet companies cannot be uncoupled from international policy objectives,
and that rational AI policymaking must consciously acknowledge and balance those linkages. To
facilitate this investigation, this dissertation takes full advantage of the wide-ranging nature of
communications research to engage in a broad analysis of AI policy that draws opportunistically
from many disciplines and data sources.
These ideas did not arrive ex nihilo. They are rooted in my experiences over the past ten
years working in civil society and as a public servant, including substantial time spent working
on contemporary technology policy issues at the highest levels of the U.S. government. For
instance, as a staffer at the Office of Management and Budget (OMB), the White House budget
office, I oversaw telecommunications, spectrum, wireless, and export control policy, including
helping to craft and pass major spectrum legislation (Spectrum Pipeline Act of 2015, 2015).[3] As
an official at the National Economic Council (NEC), which advises the White House on
economic policy, I developed and implemented Presidential initiatives on telecommunications,
spectrum, privacy, cybersecurity, international data flows, and emerging technologies ranging
from 5G to AI to unmanned aerial systems (UAS). I worked on regulatory policy and
enforcement around aviation, UAS, national security, public safety, spectrum,
telecommunications mergers, and 5G as a senior advisor in the Federal Communications
Commission’s (FCC) wireless bureau. And as a professional staff member on the House of
Representatives’ Appropriations Committee, I oversee policy, budgetary, and funding issues for
the FCC, the Federal Trade Commission (FTC), and financial regulators, and work closely with
other staff members on election security issues.

[3] Title X of the Bipartisan Budget Act of 2015 (P.L. No. 114–74).
Those experiences provided me with regular exposure to the brightest minds in
contemporary American technology policy. My ideas build upon hundreds of conversations with
policy experts from companies, governments, campaigns, and civil society, both in formal
workplace settings and via the informal gatherings that are a key part of Washington, D.C.’s
policymaking ecosystem. My government and NGO roles also provided me the opportunity to
attend dozens of policy symposia, talks, and roundtables addressing pressing issues around
modern technology, and I am grateful for the time and insights that these speakers provided both
to me individually and to the greater audience of policymakers wrestling with the hardest
problems facing modern society.
Although these experiences in government and the public interest community are
foundational, they are also only suggestive. I discussed portions of this work in these settings to
pressure test their timeliness and soundness, but the primary utility of these conversations was to
direct me to relevant legislative and regulatory initiatives, historical precedents, academic and
civil society research, and business developments. Ultimately, my analysis of those sources is
mine alone. All work was conducted individually, and it in no way reflects the views, official or
otherwise, of any government body or civil society organization. All errors are my own.
AI is a difficult topic to study, both because it potentially touches so many aspects of
modern life and because the underlying technologies are progressing so rapidly as to outstrip
policymaker and researcher capacity to keep pace. Yet those features also make it worthwhile,
even essential, to examine. As a public servant steeped in the regulatory and legislative world, I
am inclined towards research that can inform contemporary policymaking and provide actionable
insights, not just complicate difficult issues. There is still time to influence discussions of AI
policy, and there are many areas ripe for such analysis. My hope is that this work will provide
useful insights for those with the capacity and inclination to shape AI rules and regulations.
CHAPTER 1: INTRODUCTION AND BACKGROUND
1.1 Problem Statement
In recent years, AI technologies have advanced rapidly, driven by the development of
sophisticated new machine learning algorithms; access to large amounts of training data; and
availability of cheap, scalable computational power. In many complex cognitive tasks, such as
image classification and game playing, advanced AI algorithms now exceed human capabilities.
If the same patterns hold as in previous generations of technological advancement, AI is likely to
transform global economics, politics, and public policy on a vast scale, exceeding even that of
digital communications tools.
Seeking to leverage these capabilities for lucrative new commercial markets, private
companies around the world are pouring enormous resources into AI research and infrastructure.
Annual global investment in new AI technologies already exceeds $10 billion and is likely to
grow to more than $150 billion by 2025, with firms from most sectors of the economy eyeing AI
applications to improve efficiency and drive growth.
Government interest is equally intense. Much ink has been spilled over intensifying
international competition to “win” technological superiority over AI and garner economic and
security benefits. The conflict between the United States and China dominates the headlines, but
competition extends well beyond that; at least 26 countries or international unions have
developed detailed plans to leverage new AI capabilities and proactively manage the technology’s social and
economic impacts (Dutton, 2018).
This rapid progress and growing global competition are driving intense interest in the
ethics, law, and regulation of automated systems. Legislators, policymakers, academics,
economists, and advocates have flagged several near-term impacts of AI that could cause
economic and societal stress, including industrial transformations that could render many low-
and middle-skilled jobs obsolete and exacerbate wealth inequality, magnification of racial
disparities and discriminatory practices (particularly via use of facial recognition), increases in
polarization and misinformation, and weaponization of automated decision-making systems for
cyberattacks and national defense.
If one accepts the premise that AI will create an unprecedented level of societal upheaval,
the scope of these concerns might seem surprisingly narrow, however. Indeed, research into the
societal and economic impacts of new AI technologies is in many ways in a nascent state, as
scholars struggle to fully grasp the speed and scale of new developments.
For example, AI technology has progressed more rapidly than expected, generating new
questions and emergent issues at a pace that is overwhelming academics and policymakers. One
notable example is board games, which have long generated interest from AI researchers, many
of whom consider chess “the drosophila of artificial intelligence” (cited in McCarthy, 1990).
Using brute-force methods, AI programs began to defeat human chess champions as early as
1997 (IBM, n.d.). In 2015, however, the AlphaGo ML algorithm defeated a human master at Go,
a board game considered highly resistant to AI approaches due to the large number of potential
board states (Silver et al., 2016; Silver & Hassabis, 2016). This was at least five years ahead of
most experts’ predictions (Gershgorn, 2016). Since then, technologists have developed
sophisticated algorithms for facial and photo recognition, picture and video generation, and text
creation, which are already being deployed commercially or being used as tools of proxy
international and economic conflict. Politicians and watchdog organizations are preparing for
aggressive foreign use of “deepfakes,” convincingly rendered artificial video generated by
sophisticated ML programs, to interfere in the 2020 U.S. Federal elections (Lima, 2020; Parkin,
2019), while experts believe that automation is already enhancing the effectiveness and impact of
cyberintrusion software (Cunningham & Blankenship, 2018).
Another challenge for researchers is the likelihood that the effects of AI will be realized
faster, and be more impactful, than any previous generation of technology. First, innovation
cycles are accelerating; the adoption speed for each new generation of technology has usually
increased, rising dramatically for digital communications tools and very likely to jump again for
AI (Our World in Data, n.d.). Second, AI has been under intense development for some time.
Although the novel deep learning and neural net algorithms that form the basis for contemporary
policy discussions are a relatively recent innovation, existing commercial deployments and
decades of earlier research are likely to facilitate quicker adoption than previous technologies.
Third, the private firms that are leading much AI innovation already have well-established
business models based on generation and manipulation of large data sets, which will ease the
path to deployment and minimize industrial disruption. Finally, a key feature of AI is that some
implementations are self-reinforcing (e.g. AlphaGo’s training included playing itself in simulated
matches), further accelerating their progress and decreasing time to market.
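
To make the self-reinforcing dynamic concrete, consider the minimal sketch below. It is a deliberately simplified, hypothetical illustration written for this discussion, not AlphaGo's actual method (which pairs deep neural networks with Monte Carlo tree search): a program plays tic-tac-toe against itself and nudges a table of position values toward observed game outcomes, so each round of play manufactures the training data for the next.

```python
import random
from collections import defaultdict

# Minimal self-play sketch (hypothetical toy example, not AlphaGo's method):
# a tabular value learner for tic-tac-toe. The agent generates its own
# training data by playing itself, illustrating the self-reinforcing loop.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
             (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

values = defaultdict(float)  # board state -> estimated value for player "X"

def choose_move(board, player, epsilon=0.2):
    moves = [i for i, cell in enumerate(board) if cell == "."]
    if random.random() < epsilon:  # occasionally explore a random move
        return random.choice(moves)
    def after(m):  # board string that results from playing move m
        return "".join(player if i == m else c for i, c in enumerate(board))
    # "X" seeks high-value successor states; "O" seeks low-value ones.
    return (max if player == "X" else min)(moves, key=lambda m: values[after(m)])

def self_play_episode():
    board, player, history = "." * 9, "X", []
    while winner(board) is None and "." in board:
        move = choose_move(board, player)
        board = "".join(player if i == move else c for i, c in enumerate(board))
        history.append(board)
        player = "O" if player == "X" else "X"
    reward = {"X": 1.0, "O": -1.0, None: 0.0}[winner(board)]
    for state in history:  # pull value estimates toward the game's outcome
        values[state] += 0.1 * (reward - values[state])

for _ in range(20000):  # no external labeled data is consumed at any point
    self_play_episode()
print(f"learned value estimates for {len(values)} board states")
```

Even in this crude form, the loop exhibits the property claimed above: because the system supplies its own training examples, throughput is bounded by available computation rather than by human-generated data.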
Finally, AI appears to be a general purpose technology (GPT) with high potential for
rapid improvement, multiple uses across many industrial lines, and numerous spillover effects
(Lipsey, Carlaw, & Bekar, 2005). GPTs create broad and uneven societal impacts across a wide
swathe of industries. The most compelling evidence comes from economic investigations, which
have found that new GPTs cause significant productivity gains for many sectors, although these
gains can lag a GPT’s introduction by several decades (Brynjolfsson, Rock, & Syverson, 2019).
These impacts are unbalanced, however, with low-skill workers often suffering
disproportionately from job displacement. The resulting labor redistribution can lead to social
instability and even violent unrest (e.g. Caprettini & Voth, 2017, who find a direct link between
the introduction of grain-threshing machines and the Captain Swing riots in early 19th-century
England). The broad, multidisciplinary nature of these impacts presents serious institutional
challenges for AI scholars. For instance, the limitations of traditional university departments
mean that academic AI research tends to focus on sector-specific impacts rather than holistic
issues. Legislators and regulators face similar jurisdictional conflicts, leading to policies and
statutory proposals that address only slices of the AI ecosystem.
Understanding the societal and economic implications of AI technologies is therefore a
moving target. Not only do these technologies impact already fraught policy issues—such as social inequality
or misinformation—in unpredictable ways, but the underlying technology is simultaneously
progressing rapidly, further complicating those same issues.
At the same time, the rapidity of AI’s developments, especially in the context of a
shifting contemporary policy landscape, offers an opportunity for new analysis and insights. A
central premise of this work is that current research into AI fails to glean important insights that
are only possible when looking at the topic holistically. Many AI discussions, for example,
assume that technological advances are inevitable and will on balance provide net societal
benefits. Such research therefore takes AI development as an a priori input and focuses on policy
options for mitigating disproportionate impacts of specific commercial applications. However,
such analysis ignores the way that domestic policies that may at first glance have no direct
connection to AI constrain the development of the technology and the overall societal carrying
capacity to respond to its impacts. Assessing these domestic pressures is essential to
understanding the nature of international technological competition, developing insights into a
country’s prospects for “success,” and devising appropriate policy responses.
This dissertation contributes to ongoing discussions about U.S. AI policy by analyzing a
set of interrelated issues that are amenable to analysis from the perspective of communications
theory; developing theoretical insights; and, where appropriate, providing policy
recommendations. First, I present a brief survey of current AI industrial organization, extending
nascent research into the economics of the AI sector and developing a baseline for understanding
the regulatory and economic factors that contribute to development of novel AI applications.
Then, drawing upon historical lessons from the growth of the internet, I assess the extent to
which new domestic pressures are eroding the ability of technology companies to continue their
AI dominance. The analysis concludes by assessing structural and institutional factors that limit
the ability of Western democracies, and the United States in particular, to manage AI policy,
especially in comparison to other nations.
1.2 Research Questions and Hypotheses
More formally, this dissertation examines these questions:
● Why have companies based in the United States, and not in China or other countries,
dominated the development of digital edge services and new AI algorithms?
● How do trends in domestic technology policy impact the ability of U.S. companies to
continue to lead these technologies, vis-à-vis international competitors?
● How do political, institutional, and societal factors affect the United States’ overall ability
to respond to the impacts of AI technology, and how do these factors compare to other
countries such as China?
The basic hypotheses are as follows. AI is a resource-intensive industry that relies on
three main inputs: algorithmic innovation (primarily driven by specialized labor), computing
power, and training data. These all exhibit economies of scale that benefit large companies. At
first glance, this seems to favor the U.S. push to achieve international AI superiority, given that
American firms—leveraging their dominance in digital edge services such as search and social
networking—have strong structural and resource advantages over upstart foreign competition.
However, domestic concerns about the increasing political and social power of
technology firms are eroding the key regulatory and legal mechanisms that allowed U.S.
companies to flourish. In particular, angst over misinformation, electoral interference, extremist
content, and political communications threatens unique American liability protections for online
platforms; renewed interest in antitrust may limit the ability of online companies to leverage
economies of scale; and new privacy regimes driven by growing anxiety over corporate
information collection practices undermine collection of key AI training data. In addition, deeply
embedded legal structures, along with a strong aversion to governmental intervention in private
markets, expose the United States and other democracies disproportionately to the negative
impacts of AI technologies and limit institutional capacity to develop responsive public policy.
This creates feedback loops that exacerbate domestic anxieties over the impacts of technology
while restricting the ability to craft timely national AI policies and properly balance competing
policy priorities.
1.3 Significance
The broad potential impacts of new GPTs such as AI and ML mandate intensive
investigation to develop actionable insights while a window for intervention is open. Such
scrutiny helps ensure that decisionmakers—in the private and public sectors—have sufficient
information to make clear-eyed decisions about topics that can impact billions of people and
trillions of dollars in economic assets.
This project provides unique insights into AI by focusing on the industrial structure of AI
companies and AI’s intersection with contemporary technology policy. This provides a deeper
understanding of previously underappreciated linkages between domestic and international
policy trends and a more holistic understanding of the second- and third-order ramifications of
policy choices, technological advances, and social developments. Regardless of whether the
primary concern is curbing the market and political power of large corporations, ensuring a level
playing field for innovation and competition, reducing the frequency of damaging cyber-
incidents, or ensuring technical and scientific superiority on the global stage, it is essential to
understand the direct and indirect impacts of policy choices before taking action.
The discussion below analyzes two areas that feature complex interactions with AI policy
that this dissertation may help untangle:
1.3.1 Cybersecurity. Cybersecurity is a field likely to be transformed in fundamental
ways by automation. Unlike conventional kinetic weapons, cyber systems have significant
headroom to grow in scale and speed as adversarial learning techniques and autonomous
decision-making capabilities eliminate the rate-limiting factor of human review. This will result
in an arms race, with cyberdefenses becoming equally adaptive and self-directed. Much of
cybersecurity will then consist of high-speed algorithmic work conducted by AI, paving the way
for its use in routine, low-level conflict between government powers via difficult-to-trace proxy
organizations.
This issue highlights tensions between domestic and international AI policy goals in two ways.
First, AI-driven cyber operations may serve as a leveling mechanism between countries.
Information warfare—particularly exfiltration of intellectual property—is already an active area
of indirect international conflict over technology development. However, China and other
countries benefit from such conflict far more than Western democracies, which face legal
limitations on engaging in proxy combat and are disproportionately vulnerable to cyberattacks.
Advances in AI could therefore create new tools for industrial espionage that could undermine
the first-mover advantage of the very firms that develop such technologies.
Second, actions to enhance cybersecurity—and therefore mitigate vulnerability to AI
technologies deployed by rivals—reflect an uneasy internal tension. Currently, U.S. companies
underinvest in data security and monitoring; as a result, high-profile data breaches are
commonplace and often detected months or years after they first began. Enhanced data security
requirements enacted as part of a comprehensive domestic privacy law could block certain types
of industrial espionage. However, the same laws would restrict the collection of consumer
information, reducing the amount of data available for algorithmic training and therefore slowing
technological development in comparison to international rivals. Policymakers can make
conscious tradeoffs and design appropriate interventions only with a holistic view of these
conflicting goals and impacts.
1.3.2 Disinformation. Misinformation, particularly related to electoral campaign
messaging, is another active site of proxy international conflict. Although Russian interference
in the 2016 U.S. presidential election is the best-known example, this behavior extends beyond
the United States and is accelerating at alarming levels. In 2019, at least 70 countries faced
coordinated misinformation campaigns of varying levels of sophistication, up from 48 countries
in 2018 and 28 countries in 2017 (Bradshaw & Howard, 2019).
AI will profoundly affect such disinformation operations. AI algorithms are already
capable of generating convincing artificial faces and bodies, generating news-like text, and
rendering “deepfake” videos, and these abilities are advancing rapidly. Despite devoting an
enormous amount of resources, including tens of thousands of workers, to content moderation
and safety teams, internet platforms already struggle to identify and manage such
misinformation. The ability of foreign adversaries to identify and exploit platform vulnerabilities
at speeds faster than humans can directly manage, combined with instantaneous generation of
sophisticated false messages, will exacerbate those issues.
Here again domestic trends and international policy interact in complex ways.
Incentivizing firms to take more aggressive steps against online disinformation is a major issue
driving U.S. policymakers to reconsider longstanding legal protections for websites and online
platforms—protections that helped facilitate the current U.S. dominance in digital services. As
with new privacy legislation, limiting those protections threatens the ability of U.S. companies to
accumulate consumer data essential to developing new AI technologies.
Disinformation campaigns also highlight the limitations inherent to U.S. society in
managing the problematic consequences of new information technologies. Empirically, false
news travels faster on social media platforms than true news (Vosoughi, Roy, & Aral, 2018). Yet
constitutional limitations on the government’s ability to restrict speech, along with strong
headwinds against any strong intermediary liability regime, stifle creation of a single coherent
Federal policy to manage misinformation. As a result, companies set content policy with national
ramifications on an ad hoc basis, based largely on parochial internal concerns; thus, Facebook
can decline to subject political ads to fact-checking standards, unlike many of its competitors (Abril,
2019). This allows foreign actors to venue shop for an online platform that has the most suitable
policies for mass disinformation campaigns—increasing skepticism about, and bolstering
arguments to take action against, technology firms.
1.4 Definitions
Artificial intelligence here refers to sophisticated computing programs that can replicate
advanced human cognitive capabilities, including decision-making, learning, and reasoning
(Russell & Norvig, 2016). In this dissertation, AI is generally synonymous with machine
learning, a subfield of AI that focuses on using input or training data to develop programs with
the ability to infer patterns and make highly accurate predictions (Mohri, Rostamizadeh, &
Talwalkar, 2018). ML is essentially a synthesis of computer science, biology, and statistical
analysis; it relies on neural nets, deep learning, or other complex algorithms that mirror elements
of the human brain to create systems with the capability of self-learning (Alpaydin, 2020, p. 3-4).
These ML algorithms have progressed rapidly in recent years and are typically the AI programs
with the most potential for near- and medium-term commercial deployment. Some examples in
the marketplace now include image classification, facial recognition, predictive analytics,
recommendation engines, and dynamic pricing.
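
As a concrete illustration of the infer-patterns-from-data paradigm just described, the short sketch below (a hypothetical toy example constructed for this passage, not drawn from the cited sources) fits a simple linear relationship from a handful of training pairs using gradient descent and then predicts an unseen input. Commercial ML systems replace this two-parameter model with neural networks holding millions or billions of parameters, but the train-then-predict loop has the same basic shape.

```python
# Hypothetical toy example of supervised machine learning: instead of
# hand-coding a rule, the program infers one from training data.

# Training data: input-output pairs generated by roughly y = 2x + 1.
data = [(0.0, 1.1), (1.0, 2.9), (2.0, 5.2), (3.0, 6.8), (4.0, 9.1)]

w, b = 0.0, 0.0  # model parameters, initially uninformed
lr = 0.01        # learning rate

for _ in range(5000):
    # Gradients of mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w  # nudge parameters toward a better fit
    b -= lr * grad_b

print(f"learned model: y = {w:.2f}x + {b:.2f}")  # approaches y = 2x + 1
print(f"prediction for unseen input x = 10: {w * 10 + b:.2f}")
```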
This paper uses several terms interchangeably to refer to online content, applications, or
services provided to consumers over the internet, including digital service, edge service, digital
edge service, and edge provider. The FCC provides a comprehensive and useful definition of
edge provider, which derives from residing at the “edge,” as opposed to the network portion, of
the internet: “Any individual or entity that provides any content, application, or service over the
Internet, and any individual or entity that provides a device used for accessing any content,
application, or service over the Internet” (FCC, 2015, p. 284). Online platforms refer to a subset
of edge services that intermediate interactions among multiple sets of users, and include online
marketplaces, search engines, social networking services, repositories for user-generated content,
discussion boards, and similar services. For example, the OECD (2019) defines an online
platform “as a digital service that facilitates interactions between two or more distinct but
interdependent sets of users (whether firms or individuals) who interact through the service via
the Internet.” The technology industry means the set of companies that derive their primary
streams of revenue through the provision of edge services, even though they may have additional
business lines. The “Big Five” technology companies are Apple, Microsoft, Amazon, Google,
and Facebook, which possess several structural similarities relevant to this analysis.
Internet service provider (ISP), network provider, or internet provider refers to an entity
that provides and operates the physical or logical connection between a consumer and the
internet. When preceded by wired, these terms refer to entities that provide access via Digital
Subscriber Line (DSL), cable modem, fiber optic cable, or other physical connections, whereas
wireless providers typically provide access via cellular networks. The underlying service
provided is generally referred to as internet access or modern communications services, or, if
narrowband services are excluded, broadband Internet access service (BIAS). The FCC (2015)
formally defines BIAS as “A mass-market retail service by wire or radio that provides the
capability to transmit data to and receive data from all or substantially all Internet endpoints,
including any capabilities that are incidental to and enable the operation of the communications
service, but excluding dial-up Internet access service [and] any service that the Commission
finds to be providing a functional equivalent … ” (p. 10).[4]
In the past, edge services and internet access might have mapped neatly to the U.S.
regulatory categories of information service and telecommunications service, or enhanced
service and basic service, respectively, but this is no longer the case.[5]
1.5 Methodology
AI presents numerous challenges for academic researchers. First, much valuable data is
off-limits because of the industrial organization of the AI sector and its strong nexus with
national security. In the United States, most AI technologies with potential near-term
applications are being developed by the private sector, where firms are often in intense
competition for talent to drive internal research and to be first to market with lucrative
commercial applications. These companies therefore restrict information on their products,
especially early-stage or experimental technologies under consideration for patent protection.
While there is robust VC investment in smaller AI companies, this presents a similar dilemma:
Investors and firms have strong incentives to keep information about business models and
products in “stealth” mode to minimize leaks to potential rivals, thereby making their companies
more appealing for acquisition. The details of Federal research are often classified or restricted
on national security grounds, making data of interest likewise inaccessible. Similarly, China
and other countries that are also making heavy national investments in AI research keep many
details closely guarded to enhance corporate and international competitiveness.

[4] As with the FCC’s definition, it is not relevant for the purposes of this paper to define BIAS based on minimum standards for speed or latency.

[5] See pages 83-87 for further discussion of this issue.
Second, the rapid nature of AI developments—coupled with the deliberate pace of
academic research—means that some research findings risk being outdated by the time they are
published. Technical innovation in automation and new AI algorithms is proceeding at a clip far
faster than experts in the field had predicted. In the time it takes to develop a research thesis, the
field will likely make advancements that render some findings obsolete. To be clear, this is not a
problem unique to academia; governments face a similar “mismatch between law time and new-
economy real time” (Posner, 2000). In fact, projects involving AI or other active policy areas
face these challenges to a greater degree: The ongoing nature of policymaking may serve as an
endogenous factor that shapes the trajectory of AI as much as it responds to external
developments. This creates potential feedback loops that render investigations into policy-related
areas especially contingent. Such investigations may still be worthwhile—but researchers should
be fully aware of such issues as they structure their inquiries.
These issues are not insurmountable, however. There is a long history of
contemporaneous investigation of technology areas with policy implications. For instance,
legislators often demand such research to help generate new options for action, inform
deliberations, and justify the creation of new government initiatives. Given the strong similarities
between those studies and this project, it is reasonable to adopt a similar analytical approach.
Based on an exhaustive survey of investigatory bodies and research programs that conduct
assessments of emerging areas of technology, this project adapts the general analytical approach
employed by the U.S. Office of Technology Assessment (OTA), because it is flexible, allows for
the incorporation of many types of data, and has a proven track record for providing actionable
insights into cutting-edge technology issues.
OTA was a U.S. Congressional office that operated between 1972 and 1995, with a
mission “to provide early indications of the probable beneficial and adverse impacts of the
applications of technology and to develop other coordinate information which may assist the
Congress” (Technology Assessment Act of 1972, p. 797). Among other statutory requirements,
OTA was tasked to “identify existing or probable impacts of technology or technological
programs,” “ascertain cause-and-effect relationships,” “make estimates and comparisons of the
impacts of alternative methods and programs,” and “identify areas where additional research or
data collection is required” (Technology Assessment Act of 1972, p. 797-798).
Prior to its defunding as part of a wave of Congressional cost-cutting measures, OTA
released approximately 750 assessments, background papers, technical memoranda, case studies,
and workshop proceedings on areas ranging from energy to agriculture to security policy (OTA,
1996, p. 7). Although occasionally criticized for making partisan recommendations, OTA
generally had a reputation for objective, fact-based, comprehensive analysis of difficult technical
topics, which often directly influenced the legislative drafting process. OTA’s work represented
a pioneering approach to the study of emerging technology, and its tools and techniques were
subsequently adopted by numerous other countries, including Austria, Denmark, France,
Germany, the United Kingdom, the Netherlands, Sweden, and others, for their technology
assessment organizations (OTA, 1996, p. 6; Tudor and Warner, 2019, p. 133). As technical topics such
as privacy, cybersecurity, AI, and climate policy consume increasing amounts of legislators’
time, U.S. experts and policymakers have called for resurrecting OTA, reflecting a recognition
that the legislative branch needs to deepen its technical expertise.[6]

[6] See page 183.
OTA used a highly variable approach in its research work, but its studies were not
conducted in an arbitrary fashion. Interviews with former OTA staff and close
analysis of the agency’s published reports reveal that OTA staff generally took the same initial
steps in every investigation: 1) consulting stakeholder networks with relevant expertise, and 2)
conducting a thorough review of relevant literature (with the full expectation that, for many of
the topics OTA covered, the literature would be particularly thin). Further investigative steps
depended on the outcome of that work, consultation with an advisory committee, and
coordination with other Federal expert agencies. To prepare their final reports, OTA staff would
typically employ a combination of workshops, literature reviews, case studies, contractor reports,
legal analysis, and quantitative analysis (OTA, 1993, p. 11-12). As Tudor and Warner (2019)
note, “OTA’s treatment of [technology assessment] methodologies as flexible and highly
inclusive allowed it to be responsive to Congress and command political credibility” (p. 33).
This project employs a similar analytical framework, modified to reflect the features of
an academic dissertation. For instance, in lieu of formal stakeholder outreach, this dissertation
draws upon insights gleaned from hundreds of discussions with thought leaders and public
intellectuals, both inside and outside government. Because this dissertation is written in my
individual capacity, these conversations were not part of any of my official work duties, and I did
not have the benefit of statutory authority or dedicated funding to assemble a formal advisory
committee. Rather, these conversations reflect the access afforded to me, as a result of personal
and professional relationships developed through years of work in government and civil society,
to a robust informal network of Washington, D.C.'s most prominent technology policy thought
leaders. Through one-on-one meetings, conversations at roundtables and conferences, and
impromptu discussions, I have benefited from the analysis and commentary of numerous experts
wrestling with the most difficult and pressing issues facing modern society.
These conversations were largely informal in nature and off the record. They were not
recorded, and I did not conduct structured interviews or panels for this paper. This is for several
reasons. First, many of these experts were or are highly placed government employees or
prominent authorities in civil society, and they are actively engaged on issues that overlap with
portions of this research project. This is exactly why these conversations were so illuminating.
However, it is also the reason why my requests for formal interviews were largely rebuffed, even
when I offered to identify people only by general title or expertise area: Many expressed concerns
that it might limit their ability to conduct their work effectively or be interpreted as prematurely
conveying an official position for their organizations. Second, the productivity of these
conversations was due in large part to their extemporaneous nature. Discussions included
unrefined thoughts on developing events and often touched on areas outside of a person’s
established expertise that they might have felt uncomfortable commenting on in a formal setting,
providing an increased level of multidisciplinary insight. As a result, I was able to gather more
insightful information, and refine my arguments and positions more effectively, by relying on
informal conversations rather than on-the-record interviews.
Although these conversations are not explicitly cited in this paper, they were beneficial in
at least three ways. First, expert conversations facilitated the literature review process, the second
essential component of all OTA reports. Given the recency of many AI innovations, there is a
dearth of relevant research in certain subfields relating to AI policy, economics, and governance.
However, pertinent material exists in other disciplines or the gray literature. My conversations
guided me to appropriate historical documents, analogous examples from other fields, civil
society white papers, and other relevant texts. Second, these conversations helped me calibrate my
interpretation of complex legal and regulatory documents. Although I have gained significant
experience in such analysis through my policy roles, such input was particularly helpful in
situating historical decisions and assessing their significance. Finally, experts would interrogate
portions of my reasoning, helping me develop, expand, and refine my arguments.
After careful assessment of the analytical tools used by OTA and numerous informal
consultations on research methodologies, I selected three main techniques for this analysis. First,
this paper analyzes structural features of the AI sector, using industry data and reports to develop
a conceptual framework of AI industrial organization. The includes a detailed examination of
essential AI inputs and their impacts on market entry, competition, product development and
differentiation, and the policy environment. Second, this paper analyzes recent legislative,
regulatory, and judicial trends that have contributed to the development of the modern AI
industry. This investigation focuses on key policy decisions that have allowed U.S. companies to
dominate in many edge services and interrogates the shifting contemporary policy environment
for each factor. Third, this research uses comparative methods to assess structural and societal
differences between countries investing heavily in AI, namely China and the United States, and
identifies key features that differentially affect the trajectory of their AI industries. The overall
approach not only parallels numerous OTA reports, but also reflects the analytical process used
by a government agency or legislature in determining how to update regulations or laws.
As is typical in technology-oriented studies, this work draws heavily on non-academic
literature. My arguments often involve examination of primary government documents,
including statutes, proposed bills, legislative reports and hearings, court filings, judicial rulings,
agency rulemakings, and legal settlements. In other cases, I rely—with appropriate reservation—
on gray literature produced by think tanks or non-governmental organizations with expertise in
AI topics. However, this dissertation draws only on “publicly” available data and information. In
no way does it reference or incorporate any classified, sensitive, or restricted information
available only to government officials by virtue of their position.
My conversations with experts helped me identify and calibrate my understanding of
relevant primary documents, particularly for key jurisprudence. So did my experience at Federal
agencies and civil society organizations, where my work often involved assessing the reliability
of and synthesizing vast amounts of relevant material, including academic research, economic
reports, stakeholder comments, hearing testimony, white papers, shareholder communications,
Federal rulemakings, and state and Federal statutes. My analysis of judicial rulings and agency
rulemakings leans heavily on my time as a White House and FCC official, particularly the
expertise I gained in administrative, telecommunications, and competition law. Insights on
legislative issues and process draw deeply from my experiences as a Congressional staffer.
Because the core concern of this project is emergent policy impacts, this analysis pulls
extensively from numerous disciplines for theoretical and empirical insights, including
economics, international relations, public policy, communications, economic history, and judicial
theory. This broad-ranging inquiry was essential to tracing the complex interconnections among
various areas of AI policy.
Given the strong nexus between AI and data, it might seem strange that this dissertation
does not centrally feature an empirical analysis. This is not as unusual as it might seem, however.
OTA, despite working exclusively on emerging technology, environmental, or scientific issues,
only employed quantitative analysis in a fifth of its reports (OTA, 1993, p. 12). Often, this is
because, in rapidly developing technology areas, data may be incomplete, suffer from reliability
or consistency issues, or not be directly applicable to the issues under scrutiny. In such instances,
emphasizing quantitative analysis might imply a false level of precision. In other cases, the data-
gathering process would take so long as to render findings ineffectual. AI is prone to these
dynamics, given the intense pressures against disclosure that face commercial investors and
government researchers and the rapid pace of AI development. Accordingly, while this
dissertation incorporates quantitative data where appropriate and available, such data do not serve
as a primary analytical tool, so as to avoid presenting a misleading sense of empirical rigor. Rather, real-world data
is incorporated more often through liberal use of case studies that assess relevant legal and
historical developments in detail. After workshops and literature reviews, case studies were OTA's
most frequently employed analytical tool, allowing researchers to better gauge the validity of conclusions
and the probable impacts of policy alternatives (OTA, 1993, p. 11-12).
1.6 Assumptions and Limitations
This paper focuses on the tensions between domestic and international AI priorities, as
informed by an analysis of AI’s key industrial inputs. The economic outputs of the industry—
namely, the huge range of potential government and commercial applications—are not assessed
in comprehensive detail. Similarly, this dissertation focuses on near-term policy implications and
only lightly considers general artificial intelligence, i.e., an AI with human-like intelligence that
can perform the full range of cognitive tasks of which a human being is capable. Although this
continues to be an area of active research, general AI is unlikely to be developed within the next
several decades, even accounting for the steep acceleration in AI development.
Many countries have developed plans to accelerate their AI industries, but the
international component of this analysis focuses on differences between the United States and
China. This is for pragmatic reasons: These countries are considered to be the front-runners in AI
development and investment by a large margin. It is also partially opportunistic. The United
States and China have vastly different economic and political approaches, which highlights
relevant differences and clarifies theoretical and empirical findings better than a comparison
between more similar regimes.
AI is of intense public and scholarly interest. However, given the pace of development,
research in this area will inevitably lag the latest developments, undermining portions of this
paper. This is unavoidable when assessing fast-developing areas of emergent technology,
especially when faced with complex feedback loops with ongoing policymaking efforts. This is
not, however, to say that the insights presented in this dissertation have no value. No matter the
degree of global change that might result from new AI technologies, they also possess a thread of
continuity with past cycles of technological innovation and disruption. Some of those
connections may actually be deeper than in previous cases, because the speed of development
and fundamentally data-centric nature of AI means that many of the same companies that
dominate edge services are poised to retain that level of control over AI.
This paper does not focus on making detailed policy recommendations; rather, it
develops an analytical framework for assessing AI policy in holistic terms, leaving more
prescriptive proposals for future work. To the extent the paper does include precise suggestions,
it relies on tools known to improve prediction outcomes: domain expertise, a broad and
interdisciplinary approach to potential futures, no prejudgement of outcomes, accurate weighting
of probabilistic data and acknowledgment of appropriate caveats, and repeated revisions in light
of new data or insights (see, e.g., Tetlock, 2017). However, because this is an individual project, I
could only employ these tools imperfectly; most detailed predictive work in the real world is
generally done in small groups to leverage differing expertise and diverse viewpoints.
Finally, this dissertation was written largely before the coronavirus pandemic. The world
is changing dramatically in response, reshaping societal attitudes permanently and unpredictably.
Already, there are conflicting signs for technology policy. The pandemic may increase
nationalism and great-power competition as countries battle for limited supplies or engage in
xenophobic rhetoric, or it may usher in a new age of globalism as nations recognize that pooling
resources and relying on international institutions could provide significant economic and health
benefits. The crisis may have long-term consequences for the resources available to invest in new
technologies, lead to lasting shifts in telecommuting practices and workplace reliance on
automated systems, and permanently increase the power of governments to surveil their
populaces. Perhaps most importantly, the pandemic may soften the plethora of contemporary
concerns aimed at technology companies—or, as dependence on connectivity for daily tasks
increases, highlight deficiencies in privacy law and practice. The answers to these questions will
not reveal themselves immediately. So the pandemic will loom overhead like Damocles’ sword
for my research, and the research of many others.
CHAPTER 2: LITERATURE REVIEW
2.1 Overview of AI and Machine Learning
Because this dissertation begins with an industrial analysis of the AI sector, it is essential
to have a clear understanding of what AI is and why the field has changed so dramatically in
recent years. For this paper, the most relevant AI sub-field is machine learning (ML), which underpins
many of the recent advances in AI technology and is responsible for the most commercially
relevant applications. ML is a branch of statistical computational science that focuses on the
development of mathematical models that can make inferences from a sample, using example
data or past experience (Alpaydin, 2020, p. 3-4). The “machine” refers to a computer program
defined up to some set of parameters and running on a dense processing architecture. “Learning”
is the execution of the program, using a selected set of input data, to optimize the parameters
(Alpaydin, 2020, p. 3-4).
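To make this definition concrete, consider a minimal sketch in Python, with invented toy data; the example is illustrative only and is not drawn from any source cited here. The "machine" is a program defined up to two parameters, and "learning" is the execution of an optimization loop over example data:

```python
import numpy as np

# Toy example data: inputs x and noisy observations y of an unknown relationship.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + 0.5 + rng.normal(0, 0.1, size=100)

# The "machine": a program defined up to the parameters w and b.
def model(x, w, b):
    return w * x + b

# "Learning": executing the program on sample data to optimize the parameters,
# here by gradient descent on the mean squared error.
w, b = 0.0, 0.0
for _ in range(500):
    error = model(x, w, b) - y
    w -= 0.1 * (2 * error * x).mean()  # gradient of the error with respect to w
    b -= 0.1 * (2 * error).mean()      # gradient of the error with respect to b

print(w, b)  # approaches the values (3.0, 0.5) used to generate the data
```

Modern ML systems differ from this sketch mainly in scale: the models described below involve millions of parameters rather than two, but the underlying logic of iterative parameter optimization from example data is the same.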
Since the early days of AI research, scientists have attempted to replicate structural
elements of the human brain in artificial neural network models. These models consist of one or more "hidden layers" of processing
sandwiched between the input and output layers (Figure 1). Each layer consists of parameters
responsible for some pre-specified computational task, and the learning process entails
developing statistical weights for each pair of parameters, the total number of which can run into
the millions (Lee, Shin, & Realff, 2018). ML research accelerated around 2006 due to two
complementary developments. First, drawing inspiration from the hierarchical processing regime
in the mammalian visual system, researchers developed new models reliant on extracting higher
levels of abstraction from each processing layer (Hinton, Osindero, & Teh, 2006; Hinton &
Salakhutdinov, 2006). Termed “deep learning,” these methods essentially consist of “multiple
levels of representation, obtained by composing simple but non-linear modules that each
transform the representation at one level (starting with the raw input) into a representation at a
higher, slightly more abstract level,” and dramatically increased accuracy on classification and
other complex cognitive tasks (LeCun, Bengio, & Hinton, 2015, p. 436).
Figure 1: Representational models of a shallow neural network and deep neural network (Lee,
Shin, & Realff, 2018, p. 113)
Second, the advent of efficient, inexpensive graphics processing units (GPUs), designed to
effectively process large amounts of data in parallel, allowed models to overcome the limits of
single-CPU systems and process millions of free parameters without specialized computing
equipment (Raina, Madhavan, & Ng, 2009).
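The layered structure described above can be sketched schematically as follows (in Python; the dimensions and random weights are illustrative only, and a trained network would instead learn its weights from data):

```python
import numpy as np

rng = np.random.default_rng(1)

def layer(inputs, weights, biases):
    # One processing layer: a linear map followed by a simple non-linearity,
    # transforming one level of representation into a more abstract one.
    return np.maximum(0, inputs @ weights + biases)

# Illustrative dimensions: 64 input features, two hidden layers, 10 output classes.
W1, b1 = rng.normal(0, 0.1, (64, 32)), np.zeros(32)
W2, b2 = rng.normal(0, 0.1, (32, 16)), np.zeros(16)
W3, b3 = rng.normal(0, 0.1, (16, 10)), np.zeros(10)

x = rng.normal(size=(1, 64))   # raw input
h1 = layer(x, W1, b1)          # first level of representation
h2 = layer(h1, W2, b2)         # higher, more abstract level
scores = h2 @ W3 + b3          # output layer: one score per class
print(scores.shape)            # (1, 10)
```

The weight matrices here hold the pairwise parameters described above, and the matrix operations they require are precisely the computations that GPUs parallelize efficiently.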
Modern deep-learning algorithms fall into three major categories (Lee, Shin, & Realff,
2018, p. 111). Unsupervised learning, in which the program analyzes patterns within unlabeled
data, is most often used in clustering or feature extraction tasks. This process resembles
traditional statistical tools and thus has generated limited commercial interest. On the other hand,
supervised learning, in which the program attempts to extract relationships from labeled data, is
used most often for classification and regression and is used in many notable AI applications,
while reinforcement learning—which relies on iterative processing of the program’s own outputs
to determine statistical relationships between an input and outcome—facilitates the self-
correcting algorithms seen for tasks such as self-driving cars and game playing. Importantly, ML
models are task-specific: An algorithm is designed to process only specific types of data and
perform a limited set of calculations. In this sense, AI models are not a singular product, but a set
of diverse tools with some common computational DNA.
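The first two categories can be illustrated with a brief sketch, using synthetic data and the scikit-learn library as one possible toolkit; the specific models chosen here are for demonstration only:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Synthetic data: two groups of points in the plane.
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(3, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)  # labels, used only by the supervised model

# Unsupervised learning: find patterns within unlabeled data (here, clustering).
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)

# Supervised learning: extract relationships from labeled data (classification).
classifier = LogisticRegression().fit(X, y)
print(classifier.predict([[3.1, 2.9]]))  # classify a previously unseen point

# Reinforcement learning (not shown) would instead iterate on the program's own
# outputs, updating parameters according to rewards from an environment.
```

Note that each model is built for one narrow objective; neither can be repurposed for the other's task without redesign, reflecting the task-specificity discussed above.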
Since these advances, ML algorithms have developed at a rapid pace, now matching or
exceeding human performance in numerous tasks. The ImageNet Large Scale Visual
Recognition Challenge, for example, has since 2010 operated as “a benchmark in object category
classification and detection” (Russakovsky et al., 2015, p. 1). ImageNet is a database of more
than 14 million images, sorted into 1000 object classes (Russakovsky et al., 2015). In 2015, a
deep residual neural network performed the ImageNet visual recognition task, which consists of
sorting images into the correct object classes, with a 3.57% top-5 error rate (He, Zhang, Ren, and Sun,
2016, p. 775), while humans have a 5.1% top-5 error rate (Russakovsky et al., 2015, p. 31). ML
algorithms also meet or exceed human expert-level performance in board games such as Go
(Silver et al., 2016) and video games such as Breakout (Mnih et al., 2015) and have developed
sophisticated capabilities for speech recognition (Amodei et al., 2016), text recognition (Khan,
Baharudin, Lee, & Khan, 2010), and text generation (Sutskever, Martens, & Hinton, 2011).
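The ImageNet figures above reflect top-5 scoring, under which a prediction counts as correct if the true class appears among the model's five highest-ranked guesses. A minimal sketch of that computation follows (an illustrative function, not the official benchmark code):

```python
import numpy as np

def top_k_error(scores, labels, k=5):
    # scores: (n_images, n_classes) model confidence for each class.
    # labels: (n_images,) index of the true class for each image.
    top_k = np.argsort(scores, axis=1)[:, -k:]       # k highest-scoring classes
    correct = (top_k == labels[:, None]).any(axis=1)
    return 1.0 - correct.mean()

# Sanity check with random scores over 1000 classes: random guessing is almost
# always wrong, so the error rate approaches 1 - 5/1000 = 0.995.
rng = np.random.default_rng(3)
scores = rng.normal(size=(200, 1000))
labels = rng.integers(0, 1000, size=200)
print(top_k_error(scores, labels))
```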
Uniquely, modern ML algorithms are a “black box.” That is, their operation is opaque,
even to their designers, and largely atheoretical, relying only on the strength of correlations with
no need or ability to understand deeper underlying causal relationships (Mayer-Schönberger and
Cukier, 2013). As a result, many ML algorithms engage in novel approaches to tasks. For
example, although biologists have long known that human gaits have unique characteristics
(Murray, Drought & Kory, 1964), in recent years neural nets have had significant success in
identifying individuals by gait, an ability far exceeding untrained human capabilities (Alotaibi &
Mahmood, 2017; Gadaleta & Rossi, 2018), driving development of commercial applications for
surveillance (Kang, 2018). Game-playing algorithms, unconstrained by convention, have also
developed strategies previously inconceivable to master-level players (Metz, 2016). On the other
hand, AI algorithms are relatively fragile, failing in ways that for humans would be catastrophic;
ML programs can just as easily confuse a cat with a computer as two human faces, and are
susceptible to relatively subtle changes in input data, in some cases on the order of just a few
pixels (Cisse, Adi, Neverova, & Keshet, 2017; Elsayed, Goodfellow, & Sohl-Dickstein, 2018;
Finlayson et al., 2019; Komkov & Petiushko, 2019).
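This fragility can be sketched with a simple gradient-based perturbation of the general kind studied in the work cited above, here applied to a toy linear classifier rather than a deep network; all values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

# A toy linear "image" classifier: a positive score means class A, negative class B.
w = rng.normal(size=784)   # weights over a 28x28 "image"
x = rng.normal(size=784)   # an input the model currently classifies with confidence
score = w @ x

# Adversarial perturbation: nudge every pixel slightly in the direction that most
# reduces the current score. For this linear model, that direction is the sign of w.
epsilon = 0.05             # a small, visually subtle change per pixel
x_adv = x - epsilon * np.sign(w) * np.sign(score)

print(score, w @ x_adv)    # the classification typically flips despite the tiny change
```

Because each pixel moves by only a small amount, the perturbed input looks nearly identical to the original, yet the model's output can change entirely.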
2.2 The Market Failure Theory of Public Policy
From a theoretical perspective, any examination of public policy issues—especially those
relating to technological innovation such as AI—must begin with consideration of economic
organization and market failure. The modern dominance of capitalist theories of policy
intervention necessitates such a starting point, regardless of whether one ultimately agrees with
this underlying premise (Mehta, 2015, p. 2).
In particular, the prevailing theory of social organization in the United States is rooted
deeply in neoliberal economic theory, namely that market-based systems are the preferred
method of structuring most exchanges. This idea is entrenched so profoundly in modern U.S.
culture and politics that it serves as the “baseline” for most contemporary discussions of public
and private action. This is for two reasons. First, market-based systems are highly efficient at
allocating resources under conditions of scarcity; clear pricing signals incentivize innovation in
production and facilitate the transfer of goods and services to their highest and best use
(Schultze, 1977). Second, market-based transactions are considered morally superior to
alternatives. Since markets rely on voluntary, “unanimous consent” interactions requiring no
overt coercion, they not only possess inherent ethical virtues but align most closely with core
democratic virtues of individual liberty and choice (Schultze, 1977). This “market
fundamentalism” (Pickard, 2013) is key to understanding contemporary discussions of the
communication industries and other technology-dependent sectors.
This logic has several implications for public policy. First, it presumes that government
intervention is only justified in clear cases of market failure, such as monopoly or uncaptured
externalities. Second, such action should proceed only when there is demonstrable evidence that
the potential benefits outweigh the inefficacies of government administration. Finally, such
interventions, when they do occur, should focus narrowly on remedying the market failure at
hand, to restore a functioning free market (Stiglitz, 1989). Thus, market fundamentalism
profoundly shapes the type and quality of viable policy interventions. Tariffs protect infant
industries or national supply chains, intellectual property protections transform public goods into
private ones, and disclosure requirements ensure that firms are not able to leverage problematic
information asymmetries over customers.
To be clear, this is not a fully reductive accounting of public policy analysis. Rather, it
acknowledges the pragmatic reality that, regardless of whether one intends to influence policy
issues through traditional means or suggest alternative theories of political organization, one
must first generally engage with the neoliberal presumptions that undergird contemporary
conceptions of democracy before being able to move beyond them. This is especially true for
sectors involving emerging technologies, where neoliberal thinking is perhaps most deeply
entrenched. For instance, legal actions under U.S. antitrust laws generally operate under a
consumer welfare standard, under which plaintiffs suing to enjoin a merger or acquisition must
make a demonstration—usually informed by detailed economic analysis—of higher prices or
lowered product quality (Karsten and West, 2019). To gain traction, alternative theories of
market structure must therefore present not just an affirmative vision of the world but must also
explain the deficiencies of the dominant consumer welfare paradigm. Likewise, cost-benefit
analysis is generally required for any regulation that is "economically significant"[7] (Executive
Order 12866, 1993). Arguments for new regulatory action therefore generally focus on
uncaptured or second-order economic costs and benefits, or they argue that a market-based
approach fails to consider important alternative values such as distributional justice, historical
inequity, or erosion of core democratic values.
There are good reasons to be skeptical of the market-based approach to public policy.
Two categories of concerns are worth flagging. On a micro-level,[8] the neoliberal paradigm
enables significant burden-shifting to consumers. For example, economic theory acknowledges
that “information asymmetry” is a cognizable category of market failure that can serve as a
justification for market intervention (Akerlof, 1978). However, even when complete information
is ostensibly readily available, consumers bear the onus of assessing, parsing, and internalizing
this data. One telling example involves privacy policies: It would take the average consumer 244
hours per year simply to read the policies of websites they visit, with an aggregate national cost
based on value-of-time estimates of nearly $800 billion (McDonald & Cranor, 2008, p. 563-4).
The neoliberal paradigm also often fails to account properly for other significant real-world
constraints on consumer decision-making. In addition to the significant temporal costs of
engaging in thorough price and feature comparison, even with online tools, consumers in time-
constrained environments—notably the health care market—may be forced to purchase services
from the most readily available provider, rendering moot the notion of "voluntary" transaction.

[7] More precisely, this is a regulation that has an "annual effect on the economy of $100 million or more or [will] adversely affect in a material way the economy, a sector of the economy, productivity, competition, jobs, the environment, public health or safety, or State, local, or tribal governments or communities" (Executive Order 12866, 1993).
[8] "Micro level" refers to analysis of the implementation of laws that affect how individual transactions are made, just as microeconomics generally refers to the study of behavior at the individual or firm level. Although the traditional means of regulating at this level is "command and control" prescriptive rules backstopped by enforcement penalties, in recent years scholars have noted increasing use of more flexible "systems based" and "performance based" policy schemes that focus, respectively, on creating systems that can ensure adherence to regulatory goals and on performance objectives themselves (May, 2007).
The primary U.S. mechanisms for consumer protection also feature a burden-shifting
dynamic. The FTC, for instance, relies heavily on ex post enforcement to address prohibited
“unfair or deceptive acts or practices,” necessitating some significant consumer injury before
investigation. Moreover, except in limited circumstances, the FTC is prohibited from
commencing civil action or seeking significant monetary penalties unless a company has
already violated a previous agreement or order with the Commission,[9] meaning that companies
face economic pain only for repeat violations.
In other words, the U.S. economic system is substantially tilted towards corporatist
interests and presents serious inertial obstacles to providing consumers with affirmative
protections. Proponents of pro-consumer policy initiatives—whether inside or outside
government—must expend significant resources to accumulate the economic evidence to
demonstrate market failure, or at least indicate that market-based analysis fails to capture salient
public policy issues. This dynamic, for instance, underlies the present permissive approach to
collection and manipulation of consumer data.

[9] See Sections 5(l), 5(m)(1)(A), and 5(m)(1)(B) of the FTCA, codified at 15 U.S.C. § 45.
At a macro-level, the neoliberal approach presumes that allocative efficiency is, and
should be, the predominant policy concern. In practice, however, lawmakers and regulators
pursue many objectives simultaneously, including promoting social stability, facilitating job
creation (or preventing job loss), and advancing military and diplomatic objectives.
This issue is perhaps best demonstrated by examining the policy dynamics surrounding
economic shocks. Even healthy markets may undergo severe and sustained turbulence during
cyclical changes or under conditions of rapid technological development; such “creative
destruction” accompanies many business cycles, in a “process of industrial mutation that
incessantly revolutionizes the economic structure from within, incessantly destroying the old
one, incessantly creating a new one” (Schumpeter, 2008, p. 81-82). Yet the human cost of market
turmoil often leads to direct intervention to assist incumbent companies or prevent bankruptcies,
particularly when job losses are not evenly distributed geographically or by sector. When entire
markets founder, such as during severe recessions, the multidimensional nature of social policy
becomes even clearer as governments design and implement aggressive economic and social policies,
demonstrating that even policymakers with a rhetorical devotion to neoliberalism are interested,
in practice, in advancing a much broader set of policy objectives that oftentimes interfere
directly with the free market's ability to operate unhindered.
This multi-dimensional nature of policymaking might suggest a broader window for pro-
consumer regulation than a purely market-driven approach might allow. However, in practice, a
large body of empirical research demonstrates that companies influence the policymaking
process in profound ways, driving legislative or regulatory action that favors specific firms or
industries even at the cost of greater market efficiency (Stigler, 1971; Dal Bó, 2006). For
example, structural features of the market, particularly industrial consolidation, often allow
companies to gain substantial political power (either directly or via trade associations) they can
leverage into coercive pressure on consumers, customers, and the government. Companies can
use the potential to grow favored classes of jobs, invest in certain categories of R&D, site a new
facility in a particular location, or shore up domestic supply chains to enact favored policies or
extract significant concessions from legislators. In this case, theory and practice match up in
striking precision. For much of this century, and especially since 2016, firms have reliably been
able to obtain preferred regulatory decisions, stave off pro-consumer and pro-environmental
policy actions, and obtain direct state and Federal assistance, ensuring that they can “privatize
the gains and socialize the losses” even of poor corporate decision-making.
Communications markets demonstrate well this uneven approach to government
intervention. In the early United States, officials were quite unabashed in their use of market
intervention to advance social policy, such as educating the electorate and bolstering
constitutional protections for free speech. Thus, the early government adopted below-cost rates
for newspaper and periodical distribution, subsidized in a “redistributive” fashion by letter
service paid for mostly by the merchant class, to underpin the journalism industry (John, 2009, p.
39-40). Later, media mail subsidized circulation of educational material, on the grounds that the
social benefits of increased knowledge outweighed the taxpayer burdens. Relying on the logic
that television and radio broadcasters receive free use of publicly owned airwaves, the
government has long imposed a “public interest, convenience, and necessity” standard to license
assignment and renewals (47 U.S.C. § 310(d)). Prior to the wave of industry deregulation that
began in the 1980s (Levi, 2008), perhaps no issue in communications policy had spilled more
lawyerly ink than the jurisprudence over how to interpret that vague standard and its use to
justify, at various times, requirements for programming diversity, children’s educational
programming, equal air time for politicians, and other public policy objectives (Pool, 1983).
Contrast this to the current government approach to journalism. The prevailing logic of
“corporate libertarianism” (Pickard, 2013, p. 337) now assumes that the value of the journalistic
enterprise lies solely in its ability to generate revenue in excess of the costs of production, and not
in advancing the social aim of ensuring a well-educated electorate. So even though traditional
news reporting firms face significant headwinds even under traditional market failure analysis
(Cooper, 2011; McChesney and Nichols, 2011), U.S. policy provides per-capita investment in
public media at rates far below other industrialized nations (Nordicity, 2011), continued
deregulation of major media sectors (FCC, 2020),[10] and no substantial interventions to support
journalism employment.
This multifaceted aspect of high-level policymaking is central to this paper’s main
arguments. A key claim is that policymakers and the public, due to structural factors, often
remain unaware of the extent to which they are pursuing divergent outcomes—even when those
objectives are partially or wholly irreconcilable. That conflict reaches its peak tension when an
issue touches on both foreign and domestic policy issues simultaneously.
[10] The television markets first began significant deregulation in the 1980s (Levi, 2008), but the 1996
Telecommunications Act codified the market-driven approach to telecommunications and media markets. Among
other provisions, that law provided the FCC with significant flexibility to forbear from statutory provisions under
certain circumstances, such as “promot[ing] competitive market conditions” (47 U.S.C. § 160). As a result of these
shifts, broadcasters face significantly reduced public interest obligations and have increased flexibility to
consolidate. Deregulation has accelerated under the presidency of Donald Trump. In just the past three years, the
FCC’s “Modernization of Media Regulations Initiative” has resulted in 19 proceedings, the vast majority of these
related to eliminating existing rules or mandates, or otherwise making them less burdensome (FCC, 2020a, p. 9-10).
2.3 The Many Faces of AI
Despite the prevalence of neoliberal thinking and its close association with public policy
analysis, relevant economic investigations of the AI sector have significant deficiencies and
gaps. For example, at the macro-level, some scholars ascribe to AI an unprecedented potential
for revolutionary impact unlike any other technological shock in history. Klaus Schwab (2015)
has termed the current era a “Fourth Industrial Revolution,” driven by advancements in
automation, that is “characterized by a fusion of technologies that is blurring the lines between
the physical, digital, and biological spheres.” Whereas the “[t]he First Industrial Revolution used
water and steam power to mechanize production,” “[t]he Second used electric power to create
mass production,” and “[t]he Third used electronics and information technology to automate
production,” the Fourth Industrial Revolution is distinguished by its “velocity, scope, and
systems impact.” Schwab says that
“[t]he speed of current breakthroughs has no historical precedent. When compared with
previous industrial revolutions, the Fourth is evolving at an exponential rather than a
linear pace. Moreover, it is disrupting almost every industry in every country. And the
breadth and depth of these changes herald the transformation of entire systems of
production, management, and governance.”
There are two issues with this type of approach, however. First, such grand-scale analysis
is abstracted to such an extent that it offers little footing for analysis. Even if one assumes that AI
will completely restructure industrial structures and cause significant social upheaval, it is
important to develop insights into specifically how, when, and why those changes might take
place. Grand statements about the impact of AI do not offer a credible roadmap for doing so.
Second, this type of discussion conflates the nature of AI in problematic ways. ML is a
task-specific technology; there is limited transferability between algorithms working on
divergent objectives. The rhetoric around AI, however, rarely discusses this fine point, allowing
AI products with limited, task-based proficiency to become conflated with a host of unassociated
benefits. Often, this is a deliberate decision by companies and lawmakers to benefit the interests of
industry and stifle questions about government investments. However, grand analyses, including
many with an economic focus, are prone also to such definitional ambiguity.
The notion of “5G” highlights the perils of imprecise technological definitions and
reveals ongoing strategic use of technological argot to exaggerate the promise of new products.
Given the intense governmental and commercial interest in 5G, there is a surprising lack of
definitional clarity about the term. Neither the White House’s “National Cyber Strategy” (White
House, 2018) nor its “National Strategy to Secure 5G” report (White House, 2020) provides a
definition of 5G, for instance. Instead, most media, commercial, and government writings about
5G tend to focus on the most extreme use cases for the technology. A recent Wired article
captures the typical breathlessness used in the popular press, noting that the “promise of a 5G
network seems almost mythic in comparison—a leap of perhaps a hundredfold over 4G speeds,
with virtually no latency and the ability to connect an astonishing number of devices and sensors
to the network” (Graff, 2020). Qualcomm (2017), a major wireless supplier, states that 5G is “a
new global wireless standard … meant to deliver higher multi-Gbps peak data speeds, ultra low
latency, more reliability, massive network capacity, increased availability, and a more uniform
user experience to more users.”
Generally, when a technical definition of 5G is provided (e.g. FCC, 2019b, p. 12; FCC,
2020b, p. 8), it refers to the 5G NR standard developed by the 3GPP standards organization.
3GPP Release 15, which was finalized in mid-2018, is “the first full set of 5G standards” (3GPP,
2019). The upgrades that 5G NR provides mirror the roadmap developed by the ITU (2017, p.
16) for next-generation wireless services, with notable improvements in speed, driven in part by
extension of mobile services to extremely high-frequency “millimeter wave” spectrum bands
with greatly increased data capacity; support for large-scale IoT deployments; and the ability to
offer low-latency communications useful for time-sensitive applications.
This might leave the impression that 5G can “do it all.” However, while there is a single
5G standard that supports all those capabilities, in practice 5G acts as a collection of related
technologies with different functionalities depending largely on the spectrum utilized. Lower-
frequency spectrum offers greater coverage but provides lower data throughput and benefits far
less from repurposing to 5G. For instance, T-Mobile executives note that upgrading to 5G would
lead to efficiency improvements of only 19% in its low-band spectrum (Ray, 2018, p. 16), a far
cry from widely touted tenfold or hundredfold improvements in speed. Testing has borne this
out, revealing that while T-Mobile has the greatest 5G availability in the United States, its
low-band 5G performs similarly to its legacy 4G LTE network (Rootmetrics, 2020).
Such nuances are rarely articulated in discussions about 5G, however. Instead, the term
“5G” is used in a deliberately imprecise manner, to gain the benefits of association with its most
promising theoretical upgrades. Most discussions of 5G, for instance, do not elaborate on
spectrum-dependent quality of service. Firms and policymakers talking about the benefits of 5G
will often tout speed increases only possible with millimeter-wave spectrum, without elaborating
that such deployments are still in early stages and will require extremely dense, expensive
infrastructure buildouts, limiting them to the most profitable urban areas.
The strategic use of “5G” terminology does not end there, however. Just as companies
ignored the technical specifications for 4G service and began labeling their networks with the
moniker regardless (Hamblen & Lawson, 2010), telecom firms leverage confusion around 5G to
upsell existing products. In 2019, for instance, AT&T rebranded its 4G phones with a “5G
Enhanced” logo, prompting lawsuits from competitors about deceptive trade practices (Brodkin,
2019). Firms now indiscriminately deploy the term “5G” in any discussion of future wireless
service to gain its cachet, even when the underlying technology has little to do with 5G NR.
The similarities with AI are striking. As a result of engineered imprecision, the
conversation around “artificial intelligence” has transformed from a technical discussion to a
sociopolitical one, with the term employed strategically to advance political and economic ends.
In many contexts AI is used in a deliberately vague fashion, implicating a gargantuan range of
potential impacts only loosely connected to the actual state of the technology. AI, in a sense,
means both nothing and everything at once. This impoverishes understanding of how AI is
developing and where it might be headed. For instance, when the public confuses AI
functionality in this way, it exaggerates the potential downsides of trailing other countries in
adoption and thereby can engender additional public support for aggressive policy actions. In this
way, the strategic use of AI terminology enhances the sense that international competition is a
zero-sum game and undermines the possibility of pursuing cooperative solutions.
2.4 AI as General Purpose Technology
If high-level economic theory does not present a useful roadmap for AI research, then
more empirical economic investigations might offer a compelling alternative. There is a broad
corpus of quantitative work, for example, on “general purpose technologies” (GPTs) that can
broadly reshape social, political, and economic institutions. Generally, GPTs are considered to
possess four distinct features, all of which seem to apply to AI: They are identifiable as a single
generic product, process, or organizational form; they possess significant scope for improvement
and development; they have multiple uses and eventually can be widely used across the entire
economy; and they enjoy technological complementarities and engender spillover effects
(Lipsey, Carlaw, and Bekar, 2005, p. 109-110). However, even this approach offers limited
guidance on empirical and theoretical approaches to AI research.
Much of GPT theory is rooted firmly in the neoliberal economic framework, assessing
the impacts of technology primarily through macroeconomic measures, particularly productivity and
employment growth (Carlaw & Lipsey, 2006; Helpman, 1998; Lipsey, Carlaw, and Bekar,
2005). The most relevant empirical evidence arises from studies of information and
communications technologies (ICTs),[11] because ICTs represent an innovation that is recent
enough to have developed after governments and other organizations began collecting regular,
detailed economic data and distant enough to already have had significant economic impacts.

[11] As used here, ICTs refers to telecommunications technologies, computing devices, and the integrated services made possible by their interaction.
GPTs do have significant effects on productivity, although the exact impacts vary based
on temporal, geographic, and sector-specific factors. Jorgenson, Ho, and Stiroh (2003) and
Oliner and Sichel (2002), for example, report that information technologies contributed more
than half of United States gains in labor productivity between 1995 and 2000, and 1996 and
2001, respectively. After the “dot com” bust, that effect subsided somewhat as computer
technology prices began to drop less quickly. Jorgenson, Ho, and Stiroh (2008) conclude that
“although information technology has remained an important source of productivity growth,
other factors drove the productivity gains from 2000-2006.” Van Ark and Inklaar (2005) find
ICTs led to substantial gains in labor productivity in market service industries in the United
States, but not in Europe. Investment data provide empirical validation that service-sector
industries are most likely to benefit from ICTs; ICT-induced job growth is most significant for
education, health care, and financial services, and less so in manufacturing industries (Crandall,
Litan, and Lehr, 2007). Overall, the productivity effect of ICTs appears to be significant,
positive, and growing over time (Cardona, Kretschmer, & Strobel, 2013), and ICTs contribute to
productivity growth broadly across sectors based on consumption of technology, further
reinforcing its status as a GPT (Basu & Fernald, 2006).
Researchers have raised a number of concerns with GPT productivity studies; these
include the idea that productivity does not fully capture macroeconomic impacts, that definitions of
productivity are not homogenous and sometimes are incompatible, and that productivity does not
measure technological change (Lipsey, Carlaw, and Bekar, 2005). Perhaps most relevant for AI,
however, productivity gains from GPTs lag on the order of a decade or longer as technologies
slowly become incorporated into the economy, complementary innovations are developed, and
firms overcome switching and learning costs that might cause temporary decreases in
productivity (Lipsey, Carlaw, and Bekar, 2005, p. 111-113). The “paradoxical” stagnation in
productivity growth that was observed simultaneously with early deployments of ICTs highlights
the significant delays in GPT impacts on productivity, along with other potential
mismeasurement and definitional issues (Brynjolfsson, 1993; Brynjolfsson & Hitt, 1998).
Indeed, modern economies face a similar productivity paradox. As in the early days of ICTs,
productivity growth is slowing even as the commercial deployment of new AI applications
would be expected to have a countervailing impact (Brynjolfsson, Rock, & Syverson, 2019).
Even assuming the accelerating nature of technological innovation and commercialization, this
suggests that the impacts of AI are unlikely to be reflected in macroeconomic data for at least
several years and that productivity studies do not represent a viable investigative method for
contemporary AI policy.
Equally important, productivity studies of GPTs are also underdetermined, accounting in
inconsistent or incomplete ways for exogenous factors, particularly policy developments that
hinder or facilitate technological deployments (Biagi, 2013). As Lipsey, Bekar, & Carlaw (1998)
note, a GPT’s “effects are actually determined not only by the new GPT’s characteristics but also
by how it interacts with other existing technologies, the facilitating structure, and public policy"
(1998, p. 196). Thus, with AI commercialization in an early state while also the subject of
intense political interest, explorations of policy issues and relationships with extant industrial
structures are likely to result in particularly cogent insights.
The history of the communications industries highlights the interdependent nature of
policy and innovation, offering lessons for analysis of AI and other emerging technologies.
Telecommunications networks are subject to network externalities that drive towards single-firm
dominance (Oren & Smith, 1981; Rohlfs, 1974). For much of the 20th century, the U.S.
government sanctioned AT&T’s natural monopoly over telephone service in exchange for
various commitments to the long-standing American preference for inter-platform, rather than
inter-firm, competition in communications industries (Starr, 2004). These included divestiture of
the company’s interests in the telegraph business (the 1913 Kinsgsbury commitment, see Starr,
39
2004, p. 209) and the 1956 consent decree restricting AT&T to telecommunications services and
preventing it from leveraging its telephone dominance to control adjacent markets. However,
regulated markets, free from potential competition, offer perhaps the "easiest" path to long-
term supranormal profitability (Robinson, 1988, p. 530), which promotes technological
stagnation. Under monopolist complacency, AT&T in the 1930s discovered, and then
abandoned, commercial development of magnetic recording technology and a working prototype
answering machine, out of fear that recorded messages might lead to a decline in customer
interest in its core synchronous telephony business (Clark, 1993; Saunders & Levine, 2004; Wu,
2010). Similarly, in 1926, AT&T struck a licensing agreement that provided it exclusive licenses
for two-way wireless telephony patents held by GE, RCA, and Westinghouse (Saunders &
Levine, 2004, p. 54). The result was significant delays in the potential development of highly
welfare-enhancing technology. “By amassing and refusing to use its patent rights, and by
building a monopoly position for itself, AT&T suppressed wireless telephony for over four
decades” (Saunders & Levine, 2004, p. 54).
By contrast, regulatory intervention had clear pro-competitive impacts that facilitated
development of new end-user devices and competition for long-distance telecommunications
services (Robinson, 1988). The FCC’s (1968) Carterfone decision allowed end-user devices to
electrically connect to the AT&T network, facilitating a market for customer-premises
equipment, including answering machines, fax machines, and modems for data transmission
essential to development of the internet. This decision also laid the groundwork for the Second
Computer Inquiry that would lead to deregulation of information services and foster innovation
in computing applications (FCC, 1976; 1979; 1980). The FCC’s (1960) authorization of spectrum
for private microwave transmission prompted competition in interstate and long-line telephone
services. The continuing clashes over these policy decisions ultimately culminated in the
Modification of Final Judgment (MFJ) breaking up AT&T (United States v. AT&T Co., 1982),
which led to both lower prices and
greater competition in local services (Hausman, Leonard, & Sidak, 2002) and would eventually
facilitate codification of the information-telecommunications divide in the 1996
Telecommunications Act. Only intervention against monopolist rent-seeking destroyed the
legacy economic structures that had stifled development of data access technologies, internet
services, and other key ICT technologies and prevented their full realization as a GPT. The
results reinforce Lipsey, Bekar, & Carlaw’s (1998) warning that understanding facilitating
structures, as shaped by policy and existing technologies, is essential to properly assessing the
impact of machine learning as a new GPT.
2.5 AI and Public Policy
Even as concerns about AI permeate every level of commerce and government, the areas
of most intensive scrutiny do not align with the blueprint provided by GPT studies or prevailing
economic theory. In fact, the major lines of research might seem surprisingly limited given the
potential scope of AI’s impact.
In part, this is because there is a set of implicit assumptions built into ongoing policy and
economic discussions around AI. This includes a belief that development and deployment of new
AI capabilities and applications is inevitable and will provide net societal benefits, even after
considering the disproportionate impacts on certain populations (such as those working in
occupations highly susceptible to automation). This axiomatic belief drives the narrative that
there is an ongoing "race" to achieve global AI leadership. The natural nexus between AI and
communications, which is dominated by network effects, further reinforces the notion that AI is
a “winner takes all” technological arms race that requires immediate and large-scale support at a
national level. The result is a research agenda that focuses primarily on examining the outputs of
AI industrial production and studying ways to control their impacts.
Conceptually, contemporary research streams fall into four broad categories, depending
on whether they are mainly inward, or domestically, focused or instead take a global perspective, and whether
they focus on economic issues or social policy (Figure 2). The following sections briefly discuss
each quadrant.
Figure 2: AI research quadrants
2.5.1 AI and the future of work. In the domestic sphere, the primary economic policy
concern relating to AI is labor market displacement caused by automation of routine mechanical,
and increasingly, cognitive, work. These disruptions are under intense research scrutiny, but little
consensus has emerged on their scope and distribution.
For example, assessments of the impact of AI on U.S. labor markets conclude, depending
on the methodology employed, that from 9% (Arntz, Gregory, and Zierahn, 2016) to 47% (Frey
& Osborne, 2017) of total U.S. employment is at high risk of automation from new AI
technologies. The estimated impact elsewhere could be greater (Frank et al., 2019). This is
driving concerns over AI’s potential to increase social and economic inequality. For instance,
Brynjolfsson and McAfee (2014, p. 11) note that:
“Rapid and accelerating digitization is likely to bring economic rather than environmental
disruption, stemming from the fact that as computers get more powerful, companies have
less need for some kinds of workers. Technological progress is going to leave behind
some people, perhaps even a lot of people, as it races ahead. … there’s never been a
better time to be a worker with special skills or the right education, because these people
can use technology to create and capture value. However, there’s never been a worse time
to be a worker with only ‘ordinary’ skills and abilities to offer, because computers,
robots, and other digital technologies are acquiring these skills and abilities at an
extraordinary rate.”
Others dispute these claims, noting that, while technological innovation inevitably
disrupts existing labor markets, it simultaneously creates new labor opportunities. As Autor
(2015) notes, “[a]utomation does indeed substitute for labor—as it is typically intended to do.
However, automation also complements labor, raises output in ways that lead to higher demand
for labor, and interacts with adjustments in labor supply.” He adds: “even expert commentators
tend to overstate the extent of machine substitution for human labor and ignore the strong
complementarities between automation and labor that increase productivity, raise earnings, and
augment demand for labor” (Autor, 2015). Moreover, as with productivity, it will take significant
time for firms to fully incorporate AI technology into their internal processes, suggesting that AI
labor impacts will not become apparent for several years at minimum.
Even the distribution of jobs impacted by previous and current waves of automation is
unclear. For example, scholars have observed a potential “hollowing out” in the middle of the
employment expertise curve. These types of jobs involve “routine tasks”–work processes such as
bookkeeping or clerical duties involving highly structured and labeled data that are amenable to
automation because of their explicit and codifiable nature (Autor, Levy, and Murnane, 2003;
Autor and Dorn, 2013). However, while specific tasks may be automated in this way, many of
these jobs also involve numerous tasks that are resistant to automation. As Autor (2015) notes,
“I expect that a significant stratum of middle-skill jobs combining specific vocational
skills with foundational middle-skills levels of literacy, numeracy, adaptability, problem
solving, and common sense will persist in coming decades. My conjecture is that many of
the tasks currently bundled into these jobs cannot readily be unbundled—with machines
performing the middle-skill tasks and workers performing only a low-skill residual—
without a substantial drop in quality. This argument suggests that many of the middle-
skill jobs that persist in the future will combine routine technical tasks with the set of
nonroutine tasks in which workers hold comparative advantage: interpersonal interaction,
flexibility, adaptability, and problem solving.”
2.5.2 AI and algorithmic bias. Domestic social policy concerns over AI tend to focus on
bias in AI systems. AI algorithms are dependent on training data; systematic biases in that data
can lead to partiality in the outcomes of algorithmic decision-making. These input biases include
disparities in the number of data points related to specific subgroups, e.g. a lack of African
Americans included in a facial recognition database, or errors in labeling of data that exacerbate
or reflect societal biases, for example, in phenotypic visual identification of gender (Ali,
Flaounas, De Bie, Mosdell, Lewis, & Cristianini, 2010; Leavy, 2018).
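The mechanics of this failure mode are straightforward to demonstrate. The toy simulation below—a minimal sketch with invented numbers and groups, not drawn from any study cited in this section—trains an ordinary classifier on data in which one subgroup supplies 95% of the examples and follows a different underlying pattern than the other:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, flip):
    # Two subgroups whose feature-label relationships differ; "flip"
    # marks the underrepresented group's divergent pattern.
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] > 0).astype(int)
    return X, (1 - y) if flip else y

X_a, y_a = make_group(9_500, flip=False)  # majority: 95% of training data
X_b, y_b = make_group(500, flip=True)     # minority: 5% of training data

model = LogisticRegression().fit(np.vstack([X_a, X_b]), np.hstack([y_a, y_b]))
print("group A accuracy:", model.score(*make_group(1_000, flip=False)))
print("group B accuracy:", model.score(*make_group(1_000, flip=True)))

Aggregate accuracy, dominated by the majority group, looks strong, while the minority group is systematically misclassified—the same dynamic observed in the commercial systems discussed next.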
Examples abound. A recent analysis of three commercial gender classification systems
found that “male subjects were more accurately classified than female subjects ... and lighter
subjects were more accurately classified than darker individuals,” often by substantial margins.
For instance, some systems had error rates of 0-0.03% for lighter male faces versus 20.8-34.7%
error rates for darker female faces (Buolamwini & Gebru, 2018). And large technological firms
that maintain a corporate culture of fast-paced product rollouts have suffered from high-profile
AI controversies. An Amazon system for automating job applications was discontinued because
it developed systemic biases against female candidates, in part because the training data used by
the system—applications submitted to the company—was imbalanced, with males submitting
most of the résumés (Dastin, 2018). Google was forced to issue a public apology after its Photos
image recognition software identified African Americans as gorillas, again, in part because of a
lack of diversity in its training data and staff (Guynn, 2015).
Despite these high-profile failures, automated systems are being deployed at a rapid pace
and in ways that threaten to reinforce current economic and social disparities. As Eubanks (2018)
notes, across the United States, “poor and working-class people are targeted by new tools of
digital poverty management and face life-threatening consequences as a result.” These tools,
ranging from automated verification mechanisms for social safety net eligibility to predictive
analytics for risk management, “are being integrated into human and social services across the
country at a breathtaking rate, with little or no political discussions about their impacts … they
impact poor and working-class people across the color line.” Amazon, for example, developed a
facial recognition system known as Rekognition and worked with several U.S. state- and
municipal-level police agencies to roll out the technology for law enforcement surveillance and
tracking purposes (American Civil Liberties Union Foundations of California, 2018). Europe
also has a history of deploying such technologies following terrorism or other high-profile
security events (Hautala & Shankland, 2016), even though independent analyses find that such
systems can return more than 98% false returns in a real-world setting (Sharman, 2018).
As a result of concerns over disparate impacts and reliability, the state of California has
temporarily banned use of facial recognition in body-worn camera systems (AB-1215, available
at https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201920200AB1215), as
have San Francisco; Somerville, Mass.; Oakland, Calif.; and other cities (Fadulu, 2019).
The opaque nature of AI algorithms amplifies these concerns. Policymaking is not purely
teleological; process matters, especially in democratic systems. Thus, as deployment of AI
systems increases, there are increasing calls to incorporate a norm of “explainability” into ML
systems, so that policymakers and the public can ensure transparency, accountability, and
fairness in AI decision-making processes.
2.5.3 AI ethics and governance. From a global social perspective, AI researchers have
focused primarily on developing governance mechanisms and codes of ethical conduct for AI
companies and technologies, intended to maximize beneficial outcomes and mitigate potential
harms (Altman, Wood, & Vayena, 2018; Boddington, 2017; Cath, Wachter, Mittelstadt, Taddeo,
& Floridi, 2018; Greene, Hoffmann, & Stark, 2019). The most existential of these concerns focus
on the potentially apocalyptic impacts of a general artificial intelligence capable of replicating
the full range of human cognitive capabilities. Bill Joy (2000), reflecting the worries of many
futurists, warned of the potentially uncontrollable extinction-level threat posed by
technologies with the ability to self-replicate—robotics, genetic engineering, and
nanotechnology. He asks "if our own extinction is a likely, or even possible, outcome of our
technological development, shouldn't we proceed with great caution?"
explored in more detail or proposed methods to address this “AI control problem” (Bostrom,
2014; Yampolskiy, 2012).
Most AI experts agree that general AI remains at least a decade away. However, rapid
advances in AI task-specific capabilities, combined with potential disparities in their societal
impact, have inspired numerous corporations, NGOs, and public-sector organizations to develop
their own ethical frameworks for managing the near-term implications of AI. By 2019, at least
84 such documents had been issued in English or four other European languages alone (Jobin, Ienca, &
Vayena, 2019). Contributing entities reflect the breadth of AI’s potential impacts and the
diversity of interested parties: pan-governmental organizations such as the United Nations and
OECD; global NGOs such as Amnesty International and the Internet Society; dedicated AI
firms; corporations with heavy investments in AI technology; AI-focused think tanks, such as the
Partnership on AI; academic institutions, including the AI Now Institute and the Future of
Humanity Institute at the University of Oxford; government agencies; and other organizations.
The scope of recommendations varies, but five principles—transparency, justice and fairness,
non-maleficence, responsibility, and privacy—represent the areas with the most consensus.
There is, however, “substantive divergence in relation to how these principles are interpreted,
why they are deemed important, what issue, domain or actors they pertain to, and how they
should be implemented” (Jobin, Ienca, & Vayena, 2019).
2.5.4 Great-power competition and the race for technological superiority. A fourth
area focuses on competitive technological development: the use of AI to gain economic and
military superiority over rival countries. Except for direct military conflict, no other area of
research better highlights the continuing relevance of realism-oriented theories of international
relations (Donnelly, 2000). It assumes that a nation-state that gains a substantial advantage in AI
will profit from enhanced trade of goods and services, data-driven governance, sector-specific
industrial production, military affordances, and general economic spillovers. This type of
technological arms race is not unique to AI; it has recurred many times when technologies have a broad
range of potential applications, including potential military uses. Much of the American and
Soviet research investment strategy during the Cold War explicitly reflected concerns that
technological innovation underpinned military advantage, and the ongoing rhetoric over winning
the race for 5G technology also reflects such naked power calculations.
One way to assess great-power competition for AI superiority is the investment of
governmental resources. For example, at least 26 countries or regional unions have or are
developing national strategies around AI development and use (Dutton, 2018). These plans vary
in scope and level of detail, but they usually address some combination of these issues: research
and other direct government investment in AI technologies, workforce development, adoption of
new AI technologies by public and private entities, public-private collaboration, retraining and
support of displaced workers, regulatory and policy reforms, AI governance, and management
of key AI inputs such as data and computational technology.
For example, U.S. Executive Order 13859, released on February 11, 2019, outlines the
American AI Initiative, "the United States' national strategy on artificial intelligence" (White
House, n.d.). It details a "whole-of-government strategy in collaboration and engagement with the
private sector, academia, the public, and like-minded international partners" focused on six key
pillars: promoting AI R&D investment, enhancing access to U.S. federal government data and
resources, reducing barriers to AI innovation, ensuring that Federal standards minimize
vulnerabilities to attacks and facilitate innovation and public trust in AI, training a new
generation of AI researchers and users, and promoting a supportive international environment.
Under this vision, the government has a supporting and convening role; the private sector,
leveraging its existing advantages in data, computational infrastructure, and rapid product
development, is expected to assume the bulk of the burden of developing new AI technologies.
Similarly, the European Commission’s 2020 white paper on AI outlines a strategy that
builds on previous efforts to develop a unified EU approach to technological development. The
plan focuses on excellence along the entire value chain and on engendering consumer trust by
ensuring that European firms comply with protective regulations and directions, such as the
bloc’s strong new privacy regime (European Commission, 2020).
China has set aggressive goals for AI. Under the New Generation Artificial Intelligence
Development Plan, China “will achieve major breakthroughs in basic theories for AI, such that
some technologies and applications achieve a world-leading level and AI becomes the main
driving force for China’s industrial upgrading and economic transformation,” with the eventual
goal of cultivating “a scale of related industries” of nearly $1.5 trillion in economic value
(Webster, Creemers, Triolo, & Kania, 2017; the plan was translated into English by New
America's Cybersecurity Initiative). This plan aligns with the Made in China 2025
initiative to reinvigorate Chinese manufacturing and reduce reliance on foreign countries for key
technologies and advanced equipment such as semiconductors (the Made in China 2025 plan is
available in English at http://www.cittadellascienza.it/cina/wp-content/uploads/2017/02/IoT-ONE-Made-in-China-2025.pdf, accessed April 10, 2020).
Aggregate economic investment in AI research and development offers the most direct
empirical measure of technological competition. The available data, though incomplete,
suggest that the United States maintains a significant advantage in aggregate investment over
China, the EU, and other countries.
Direct government spending—an area where the United States may significantly lag
China—represents a surprisingly small amount of total AI investment. Aggregate U.S. Federal
spending on AI research in 2019 was roughly $644 million (The Networking & Information
Technology Research & Development Program, p. 10), plus an unspecified amount from the
Department of Defense and Defense Advanced Research Projects Agency that likely puts the
total closer to $2 billion. The most exhaustive analyses of Chinese spending suggest that it spent
between $2 billion and $8.4 billion in direct R&D spending, with the lower end of that range
more likely (Acharya & Arnold, 2019). In addition, China has likely invested several billion additional
dollars in AI via guidance funds, which essentially act as state-backed VC funds (Acharya &
Arnold, 2019). These estimates are subject to a great deal of uncertainty because they attempt to
interpolate spending on classified AI programs, an inexact and unverifiable art.
In the venture capital sector, the OECD, an international club of 36 rich countries
(OECD, n.d.), has found significantly greater private investment in AI start-ups in the United
States than elsewhere, although that advantage is declining (OECD, 2018, p. 4). Overall
spending increased nearly 16-fold between 2011 and 2017, from approximately $1 billion to
roughly $17 billion; annual spending is expected to increase even further in 2018 (Figure 3). The
United States constituted nearly all 2011 spending and contributed more than $8 billion in 2017;
Chinese equity investments were second in 2017, at just over $5 billion (OECD, 2018, p. 4).
However, other data suggest that China may have already surpassed other countries in VC
spending. For example, in 2018, Tsinghua University (2018) estimated that China garnered 60%
of global AI VC, although the United States still leads in the number of VC investments and far
exceeds China in total number of AI companies. Regardless of which total is more accurate, it is
clear that the United States and China constitute the lion's share of early-stage AI investments,
with Chinese investment growing faster.
Figure 3: Equity investments in AI start-ups by country (OECD, 2018)
Even if the United States does lag China in VC outlays and direct government spending,
total American investment likely exceeds China's by several multiples due to in-house AI
spending by large technology companies. The Big Five U.S. tech companies spent more than $70
billion in R&D in 2018. Not all this funding is dedicated to AI, and some of it originates in
overseas facilities, but the data suggest that corporate R&D spending by large companies is the
most significant source of AI investment and mostly accrues to U.S. interests. By contrast, the
AI-intensive Chinese companies—Baidu, Alibaba, and Tencent—spent just over $8 billion in
R&D in 2018.[15]
The Big Five are also pursuing the most aggressive AI acquisition strategy.
Since 2010, 431 companies have acquired an average of 1.5 AI firms each (out of 635 total
acquisitions). However, Facebook, Amazon, Microsoft, Google, and Apple alone constitute 59 of
those purchases, which places their acquisition rate at 8 times that of the average acquirer.[16]
2.6 Gaps in AI Research
The review above reveals significant mismatches between prevailing lines of AI inquiry
and the theoretical frameworks best suited to assess disruptive technologies. First, the
presumption of AI’s inevitability has created a bias towards investigation of AI outputs; AI
inputs and the industrial organization of the sector remain seriously understudied. Second,
research into AI has paid insufficient attention to the influence of existing market structures and
the exogenous influence of policy decisions on the development of AI technologies. Third, although
various lines of AI research are not completely disconnected—AI ethical frameworks and
national plans generally include robust discussion of equity issues, for example—they miss key
interlinkages between elements of domestic and international policy. These gaps in the extant AI
research confirm the analytical approach taken in this paper.
[15] Data for R&D spending originates from public filings and was pulled from the Global Innovation 1000 database assembled by strategy&, available at https://www.strategyand.pwc.com/gx/en/insights/innovation1000.html (accessed April 1, 2020).
[16] Data for AI acquisitions was compiled by CB Insights, available at https://www.cbinsights.com/research/top-acquirers-ai-startups-ma-timeline/ (accessed Oct. 31, 2019).
CHAPTER 3: ECONOMIC ORGANIZATION OF AI COMPANIES
3.1 Introduction
This chapter explores the industrial structure of AI companies, specifically those focused
on machine learning. It is insufficient simply to acknowledge that AI is a capital-intensive
industry. Only decomposing AI technologies into their key economic inputs allows for a full
identification of the interactions between the extant industrial structure and the policy
environment, the most noteworthy areas of differential advantage among countries, and potential
pressure points for political intervention.
ML involves the development of computer programs that can infer patterns from complex
data sets and make highly accurate predictions about novel information (Mohri, Rostamizadeh,
& Talwalkar, 2018). Although ML features a dizzyingly large set of technical approaches, the
industry lends itself to a relatively streamlined analytical model. In particular, the ML sector
consists of firms that take as inputs a specific set of data, an algorithm, and computational
resources to produce a trained model. (Much of this framework was developed through lectures
and seminars I attended as part of the Artificial Intelligence Lab presented by the Wilson Center
in Washington, D.C.) Because each model is task-specific, it acts as a differentiated product and
can serve as a substitute, in an economic sense, only with other models that can ingest similar
types of information and perform similar types of processing.
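A minimal sketch, using the scikit-learn library and a synthetic dataset purely for illustration, captures this three-input production model:

# Inputs: data, an algorithm, and (implicitly, via .fit) computation.
# Output: a trained, task-specific model.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Input 1: data (here, a synthetic labeled classification set)
X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Input 2: an algorithm (a simple classifier standing in for any ML approach)
algorithm = LogisticRegression(max_iter=1_000)

# Input 3: computational resources, consumed during training
model = algorithm.fit(X_train, y_train)

# Output: a trained model, useful only for this task and data distribution
print("held-out accuracy:", model.score(X_test, y_test))

The resulting model is valuable only for the task and data distribution on which it was trained, which is what makes each trained model a differentiated product.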
The remainder of this chapter explores these core inputs in greater detail, assessing key
sub-components; their relative availability and importance to development of the final model;
transferability for other ML applications; and notable differences among nations, particularly the
United States and China. The analysis reveals that AI development features high fixed costs,
increasing returns to scale, and network externalities (Goolsbee, 2018), which benefit entrenched
incumbents and limit market entry by new firms.
3.2 Computational Power
Due to the emergent nature of ML models, it is often unclear what algorithmic design and
parametric structure will yield the best performance. Each model, in turn, can have millions of
unspecified parameters. Resolving an ML task therefore requires the use of robust computational
systems, operating in parallel, to rapidly ingest data and conduct simultaneous statistical
analyses. Over the past several years, the increasing availability of powerful, affordable GPUs
and multi-core CPUs, designed for efficient parallel processing, has dramatically bolstered AI
development by supporting ever-increasing model size and complexity. For instance, since 2012,
when the “modern” era of deep learning ML began, total computing power used for speech,
vision, gaming, and language tasks—ML domains that have seen substantial progress—is
doubling roughly every 3.4 months (OpenAI, 2019a). This is roughly seven times as fast as the
pre-2012 era, when the doubling rate was around 2 years, and highlights the ways in which
parallel computing allows model complexity to accelerate beyond the traditional confines of
Moore’s law (Figure 4).
Figure 4: Log scale of computing power used by AI programs over time (OpenAI, 2019a)
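The arithmetic behind these doubling rates, sketched below using the figures just cited, underscores how sharply the post-2012 regime departs from the Moore's-law-era baseline:

# Back-of-the-envelope arithmetic for the doubling rates cited above:
# a 3.4-month doubling time after 2012 versus roughly 2 years before.
months_post, months_pre = 3.4, 24.0

growth_post = 2 ** (12 / months_post)  # ~11.6x growth per year
growth_pre = 2 ** (12 / months_pre)    # ~1.4x growth per year

print(f"post-2012: ~{growth_post:.1f}x/year; pre-2012: ~{growth_pre:.1f}x/year")
print(f"ratio of doubling rates: ~{months_pre / months_post:.1f}x faster")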
High-end computing power is expensive, especially as model complexity increases over
time, but these costs tend to lag expenses related to data and algorithms for several reasons. First,
GPUs are easily adaptable to new ML tasks, without the need for additional capital investment.
Thus, incremental outlays for computing equipment are required primarily to address increased
model complexity or to accelerate model development (since computing power generally has an
inverse relationship with the time it takes to resolve an ML model). Data and algorithms,
however, tend to be narrowly constrained to a single task, with limited transferability. Second,
GPUs are a versatile technology included in most mobile phones, mid- and high-end
personal computers, game consoles, workstations, and servers. This has created significant
economies of scale in the GPU business and driven steady improvements in price-performance.
A notable exception was 2018, when high cryptocurrency prices led to a significant spike in GPU prices as
currency miners bid up units on the expectation of outsize returns, but that pricing pressure has
largely alleviated due to the subsequent plunge in cryptocurrency valuations.
A typical ML task highlights how computational systems are used in practice. In 2019,
scientists developed a new ML algorithm for face swapping—sometimes known as
puppeteering—in which one person's face is analyzed and then seamlessly swapped in for
another face in a video (Nirkin, Keller, & Hassner, 2019). Each model was run on eight Nvidia
Tesla V100 GPUs—a chipset costing roughly $10,000 apiece—and an Intel Xeon CPU, which
can run well north of $1,000. The face-swapping models each converged in two days, except for
one that resolved in six hours. These costs are rather modest for research equipment, although
they are likely out of reach for most regular consumers. As a result, rather than being dependent
on the limited availability of supercomputers, ML experts can use readily available commodity
hardware to process complex computational tasks quickly enough for iterative improvement and
multi-model pruning.
While GPUs are highly efficient at parallel computing, they are designed for general-
purpose graphical tasks and are not optimized for AI calculations. Because of the potential
outsize impacts of even marginal improvements in chip performance, dozens of companies,
including many that are not traditional GPU and CPU makers, have begun designing and
manufacturing specialized AI chipsets and wrappers. Many of these chips focus on data center,
IoT, edge computing, or other specialized big-data services, rather than increasing the scalability
and efficiency of deep learning applications. However, the Big Five are all working on AI-
optimized chipsets, either for deep learning or to support specific in-house products such as
augmented reality, virtual reality, speech processing, or facial recognition (Bright, 2016;
Palladino, 2018; Synced, 2018; Synced, 2019). Google's Tensor Processing Units, for example,
are "custom-developed application-specific integrated circuits (ASICs) used to accelerate
machine learning workloads" (Google Cloud, https://cloud.google.com/tpu/docs/tpus, accessed
April 2, 2020).
While China has not yet had success fostering a homegrown
semiconductor industry, doing so is a key element of its Made in China 2025 initiative. As a
result, in recent years Alibaba and Huawei have both unveiled specialized AI chips (Kharpal,
2019), as part of a coordinated effort to close the gap with U.S. technology companies and meet
China’s strategic plan to in-source production of at least 70% of its semiconductor needs by 2025
(Kharpal, 2019).
The ubiquity of GPUs makes them particularly difficult to control via policy tools. Under
U.S. export control law, the Bureau of Industry and Security (BIS) within the Department of
Commerce is responsible for setting policy for dual-use items (items that have both military and
non-military applications) via its Export Administration Regulations (EAR) (15 C.F.R. § 730 et
seq.). However, because GPUs are embedded into many globally popular devices, particularly
smartphones, gaming consoles, and personal computers assembled in a variety of countries,
controlling access is problematic. Demonstration projects with PlayStation 3 superclusters
illustrate that even commodity hardware, operating in parallel, can support complex AI
applications; a cluster of 1,700 PlayStation 3s was at one point the world's 35th fastest
supercomputer (Griggs, 2019).
However, specialized AI chips are a different story. The United States has already taken
steps to restrict Chinese access to high-end Intel, Nvidia, and AMD CPUs and GPUs over
concerns about their use in nuclear weapons research (Moammer, 2015). In 2018, the Export
Controls Act of 2018 (passed as part of the National Defense Authorization Act for Fiscal Year
2019, P.L. 115–232) became law, authorizing BIS to establish controls on the export of
emerging and foundational technologies, including AI. As a result, BIS has issued a rulemaking
proposal that contemplates adding numerous AI technologies to the EAR, including AI cloud
technologies and chipsets (BIS, 2018).[20]
The likelihood of new export restrictions is a strong
driver for Chinese investment in its domestic semiconductor capacities, and in homegrown AI
chipset capabilities in particular. So long as U.S. companies continue to lead in development of
sophisticated AI chips and export controls limit distribution, Chinese firms will be forced to rely
on less efficient general-purpose GPUs that will slow down product development and testing.
Processing units require minor sub-inputs to operate properly. First, parallel computing
generally needs control software to assign tasks to different processors and collate results.
However, this is a marginal factor. Due to the multi-threaded nature of modern CPUs and GPUs,
they have robust in-built parallel processing firmware, with ML-dedicated chips having
particularly efficient on-board functionality. For more sophisticated needs, the ML algorithm can
perform this function, or the researchers responsible for developing the model can craft
appropriate software.
Likewise, the electricity needed to power processing units is readily available in
industrialized nations. While countries such as India and China do have cheaper unit prices for
electricity, overall the relative costs of electrical power—vis-à-vis processing units and other
inputs—are small.
[20] The full list of potential additions to the EAR in the AI category is as follows: "(2) Artificial intelligence (AI) and machine learning technology, such as: (i) Neural networks and deep learning (e.g., brain modelling, time series prediction, classification); (ii) Evolution and genetic computation (e.g., genetic algorithms, genetic programming); (iii) Reinforcement learning; (iv) Computer vision (e.g., object recognition, image understanding); (v) Expert systems (e.g., decision support systems, teaching systems); (vi) Speech and audio processing (e.g., speech recognition and production); (vii) Natural language processing (e.g., machine translation); (viii) Planning (e.g., scheduling, game playing); (ix) Audio and video manipulation technologies (e.g., voice cloning, deepfakes); (x) AI cloud technologies; or (xi) AI chipsets" (BIS, 2018).
3.3 Algorithms and Human Capital
Although there are complex feedback loops arising from the interactions among the
various AI inputs, the current ML era began largely with the development of sophisticated “deep
learning” algorithms mirroring elements of human neural processing. However, from an
economic perspective, it is essential to understand the limitations of these algorithms. Although
deep learning systems employ variable statistical weighting between parameters based on
analysis of training data, the nature and arrangement of those parameters within a model is fixed.
This is a key point of differentiation between current ML systems and neural architecture, which
can dynamically create new connections or rearrange existing ones in response to stimuli or
injury. This means that a model faces limitations on its ultimate efficacy; it cannot make certain
foundational self-adjustments to optimize its own performance. This has two major implications.
First, ML models designed around a specific task have limited transferability to other
applications; some architectural elements might be useful, but in general ML algorithms need
significant updates when applied to other domains. Second, algorithmic innovation is tied largely
to the expertise to develop and refine elements of ML models. In this way, algorithms as an
industrial input are functionally equivalent to access to human capital, particularly AI scientists
with doctoral-level training in model development.
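A minimal sketch (using PyTorch as an illustrative framework; the layer sizes are arbitrary) makes the distinction concrete—the arrangement of parameters is fixed by a human designer, and training adjusts only the weights within that fixed structure:

import torch
import torch.nn as nn

model = nn.Sequential(      # the architecture is fixed at design time:
    nn.Linear(784, 128),    # layers cannot add, remove, or rewire
    nn.ReLU(),              # themselves in response to the data
    nn.Linear(128, 10),
)

x = torch.randn(32, 784)    # a batch of dummy inputs
model(x).sum().backward()   # training adjusts the weights only

# Adapting this model to a new task typically means a human expert
# redesigning the architecture itself.
print(sum(p.numel() for p in model.parameters()), "trainable parameters")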
The talent pool for AI experts is extremely thin, with fewer than 40,000 estimated AI
experts globally in 2019, based on self-identification data from LinkedIn (ElementAI, 2019).
This, plus insatiable demand from the technology and automotive sectors, has sent salaries sky-
high. For example, “[t]ypical AI specialists, including both Ph.D.s fresh out of school and people
with less education and just a few years of experience, can be paid from $300,000 to $500,000 a
year or more in salary and company stock” (Metz, 2017). AI superstars often receive
compensation packages, including equity, of “single- or double-digit millions over a four- or
five-year period” (Metz, 2017).
High salaries raise the capital costs for ML startups, compared to firms relying on more
readily available labor (even computer science expertise is much cheaper than AI talent). For
example, DeepMind, a UK-based AI firm acquired by Google in 2014 (and developer of
AlphaGo), paid staff costs of more than 400 million pounds in 2018 for fewer than a thousand
staff; this was double its staff costs in 2017 (DeepMind Technologies Limited Report and
Financial Statements, Year Ended 31 December 2018, p. 21, available at
https://beta.companieshouse.gov.uk/company/07386350/filing-history, accessed April 10, 2020).
Matching this level of growth and salary requires
AI startups to aggressively court startup capital from VCs, who in turn apply pressure on these
firms to focus on only the most salable products and develop a clear glide path to profitability.
This extreme demand for AI researchers also has significant policy implications. First, it
affords staff an unusual degree of autonomy and influence, even within large technology
companies. Because most AI researchers are educated at the doctoral level, for instance, they
have imported numerous academic norms into the corporate world. This includes a remarkable
level of data sharing and collaboration, with top experts routinely attending conferences and
openly publishing the latest AI findings. Contemporary policy concerns, including international
competition issues, have done little to erode this practice. For instance, at one point, OpenAI
(2019b) decided to restrict access to its full GPT-2 language model, capable of “generat[ing]
conditional synthetic text samples of unprecedented quality,” due to concerns “about malicious
applications of the technology," an unprecedented decision that at the time generated significant
controversy in the industry. However, OpenAI reversed itself after criticisms mounted that
transparency, which would allow researchers to investigate the model for ways to mitigate
potential harm, is preferable to secrecy (Vincent, 2019). This highlights how deeply rooted the
ethos of academic disclosure remains in the AI sector.
Second, high salaries drain expertise from the Federal government, which cannot come
anywhere close to matching such compensation, limiting its ability to craft appropriate policy
interventions around AI. Rather, to the extent that large companies continue to aggressively
pursue AI experts, these firms have also developed a body of specialized in-house knowledge far
in excess of government officials, increasing corporate influence over technical policy issues.
From a comparative perspective, the United States continues to have a significant
advantage in AI talent. Conference publication is often used as a proxy for intellectual caliber.
On that metric, no other country comes close to the United States, where roughly 44% of a
sample of AI authors received their Ph.D.s (ElementAI, 2019). Authors trained in China were a
distant second, representing just under 11% (ElementAI, 2019). Employment patterns reflect this
distribution as well, with 46% of authors working in the United States and a bit more than 11%
in China (ElementAI, 2019). Importantly, at least for the time being, the United States is
retaining its own talent, with the vast majority of American-trained researchers finding
employment in the country (Figure 5).
Figure 5: U.S. AI human capital flows
The talent gap between the United States and China may be narrowing, however. A 2018
status report on Chinese AI found that China, in addition to having the globe’s second-largest AI
workforce, leads the world in AI papers and highly cited AI papers (when Chinese-language
papers are fully incorporated into the analysis) and has more AI patents than any other country
(Tsinghua University, 2018). A recent analysis of citations and other features in more than two
million academic AI papers reinforces this possibility: “China has already surpassed the US in
published AI papers” and “[i]f current trends continue, China is poised to overtake the US in the
most-cited 50% of papers this year, in the most-cited 10% of papers next year, and in the 1% of
most-cited papers by 2025" (Cady & Etzioni, 2019, emphasis in original). Moreover, the authors
note that citation counts are a lagging indicator of impact, so that the "results may understate the
rising impact of AI research originating in China." Lee (2018) also notes that China possesses
more overall software engineers than the United States. Although they may not be as high quality
as top U.S. performers, this surplus might naturally align with the nature of AI jobs, which are
expected to also create numerous middle-skill computational roles. Meanwhile, the United States
is facing stressors on its educational system. By courting high-performing Ph.D.s and offering
experienced professors salaries several multiples of those offered by universities (Metz, 2017),
AI companies are essentially impoverishing the future pipeline of talent.
The most noteworthy policy interventions related to AI algorithms also involve human
capital. As noted, the United States is considering broad export bans on AI technologies,
including techniques underlying AI algorithms (including neural nets, deep learning, and
reinforcement learning) along with specific applications (ranging from computer vision to speech
processing to natural language processing). Even if such restrictions were put into place,
however, the United States does not have a policy framework for limiting the circulation of
underlying algorithmic expertise to foreign competitors.
In fact, an inward-focused foreign policy regime has led to the opposite result. U.S.
universities remain the center of gravity for cutting-edge AI research and training, but they are
also reliant on a steady supply of graduate students from abroad. China sends more graduate
students to the United States than any other country, comprising more than 35% of the nearly
378,000 international students that studied in the United States in the 2018-2019 academic year
(Institute of International Education, 2019). Yet at the same time, more than 80% of Chinese
students now return home rather than staying in the United States after completing their studies, up
from around 10% at the start of the century (Zhou, 2018) (Figure 6).
Figure 6: Chinese international student return home rates
Some of this is a result of Chinese policies to bolster the number of in-country AI
departments, as well as investment incentives that offer an appealing environment to potential
Chinese entrepreneurs. In large part, however, this change has been driven by difficulties in
navigating the U.S. visa process and other anti-immigrant policy. Foreign students interested in
remaining in the United States must find a company willing to not only hire them but also
sponsor them for the H-1B visa lottery—something many firms are unwilling to do because it
delays hiring and is fraught with uncertainty. In addition, on national security grounds, the
United States has recently started applying more stringent restrictions to Chinese graduate
students studying aviation, robotics, and advanced manufacturing, limiting their renewable
student visa durations from 5 years to 1 year (Mervis, 2019). As with other issues, the U.S.
preference to support research through significant use of foreign student labor is not fully
compatible with a domestic policy that sees immigrants as harmful to national interests. So long
as both persist, the result is a net transfer of AI knowledge from the United States to China.
3.4 Data
Perhaps no statement has been uttered more about the technology industry in recent years
than "data is the new oil." (The first reported use of this phrase is from 2006, by Clive Humby, a
British mathematician, as cited in Arthur, 2013; see also Economist, 2017.) If the intent is to
signal the importance of data as a raw commodity in the modern economy, this statement holds
some rhetorical draw. Even before the modern era of ML, much of commerce had come to
revolve around the collection, processing, and monetization of consumer information, especially
the use of behavioral data for targeted
advertising, just as the industrial era relied on petroleum to fuel factories, engines, and
transportation networks. The “unreasonable” effectiveness of large data sets to train AI
algorithms (Halevy, Norvig, & Pereira, 2009) has reinforced this transformation. As of March
31, 2020, the seven largest public companies in the world, by market capitalization, were the Big
Five, Alibaba, and Tencent, all of which derive their core revenues from data processing and
brokerage, underscoring the extent of the worldwide shift to an information economy.
But in substance, the statement is problematic. Unlike oil, data is not rivalrous, nor is it
consumed upon use. This means that companies tend to accumulate ever larger amounts of data
over time, but also suggests that new entrants into the market are restricted only by the price of
obtaining new data, not by potential resource shortages or supply chain disruptions. Perhaps
most relevant, data is not fungible; it originates from many sources and comes in many forms,
most of which are not readily interchangeable. This latter issue strikes at the crux of the role of
data in machine learning. While algorithms require expertise to design and tweak models, there
is much greater conceptual similarity between various approaches to ML than in the underlying
training data. Although there are technical workarounds, typically a facial recognition program
can ingest only particular images; machine translation programs, only appropriate text selections.
The amount and types of data now routinely being collected, compiled, and processed
that might serve as the raw material for AI is enormous. For example, companies collect a
plethora of information from actual or potential commercial transactions, even when no purchase
is made. This includes financial data, mailing addresses, phone numbers, browsing history,
shopping cart information, and user-generated content such as reviews and questions. Zero-
priced products such as Gmail or Facebook provide another potent data source, as these tools are
generally offered on the condition that users accept explicit data mining practices, including
collection of user behaviors, shopping preferences, and message content.
The data sources extend far beyond that, however. As Zuboff (2015) notes, other
information sources include the exponentially increasing data generated from sensors and other
Internet of Things devices; information in government and corporate databases, “including those
associated with banks, payment-clearing intermediaries, credit rating agencies, airlines, tax and
census records, health care operations, credit card, insurance, pharmaceutical, and telecom
companies, and more”; and video and audio recordings from private and government
surveillance devices (p. 78). The rise of cloud computing services amplifies the mobility and
dissemination of this information, as third parties now store, process, and analyze data that
would previously have resided on local servers.
The needs and capabilities of AI create yet additional categories of data resources. In
many cases, ML algorithms rely on synthetic data, or information artificially generated to
provide additional training inputs to AI algorithms. Self-driving car software, for instance, is
often fed low-probability events, such as an encounter estimated to occur no more than once in
every million road miles, to refine edge-case behavior. AI also provides increased capabilities to
generate imputed data, predictive assessments of information that does not otherwise exist in a
data set, such as deriving characteristics of a person who is not a registered user of a social media
platform by assessing information provided by friends and associates. However, perhaps the
most notable category of this type is self-generated data. Oftentimes, programs will create their
own information sets as an assistive tool in training. This is common in reinforcement learning
applications, such as programs that play games or engage in other tasks with clearly defined
“correct” end states.
Raw data and the type of data that are useful in AI applications differ significantly,
however. Most applications of interest (except for unsupervised learning, which focuses on
pattern recognition and clustering) require “labeled” data, that is, information in which the
salient features that an algorithm will assess are clearly and consistently identified (Roh, Heo, &
Whang, 2019). For autonomous vehicle testing, for instance, it is not enough to simply have
video from on-board cameras. The data must be cleaned, sorted, and appended with classification
data—i.e., frame-by-frame metadata identifying people, vehicles, street signs, traffic signals, and
other relevant features—so that the algorithm has a baseline against which to conduct statistical
analysis and assess the veracity of results. This labeling is a time-intensive task and is reliant on
manual human labor. While ML programs can handle some more straightforward identification
tasks and capabilities in this field are growing, human decision-making (albeit not technical
knowledge) is still required for assessing difficult cases and resolving ambiguity, which are
precisely the cases that provide the most value for model development.
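A hypothetical labeled record of the kind described above illustrates what annotation adds to raw footage (all field names and values here are invented for illustration):

frame_annotation = {
    "frame_id": "cam0_000142",
    "timestamp_ms": 4733,
    "objects": [
        {"label": "pedestrian", "bbox": [412, 180, 460, 310]},
        {"label": "vehicle", "bbox": [600, 220, 890, 400]},
        {"label": "traffic_signal", "bbox": [150, 40, 175, 95],
         "state": "red"},
    ],
}
# The judgment calls—a partially occluded figure, a cyclist versus a
# pedestrian—are where human labelers remain essential, and those hard
# cases are precisely the ones most valuable to model development.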
The labeling process is expensive. When performed in-house, data gathering and
preparation takes, on average, 80% of project time, far exceeding the percentage of time spent on
algorithm development, tweaking, and training. In aggregate, in 2018 the AI industry spent more
than $1 billion on labeling efforts and supporting infrastructure (Cognilytica, 2019). This has
driven the creation of a labeling sub-industry, heavily concentrated around cheap third-party
labor in India and China (Whalen & Wang, 2019; Yuan, 2018). Analysts estimate that the market
for data labeling solutions was $150 million in 2018 and will expand to more than $1 billion by
2023, and that overall data preparation solutions for ML applications will grow from $500
million to $1.2 billion (Cognilytica, 2019).
While the importance of data quality is well-established, questions around the value of
data quantity are more complex. AlphaGo’s success was driven in large part by modeling
improvements; as one of its creators notes, “algorithms matter much more than either computing
or data available” (as cited in Gibney, 2018). From a static standpoint, it makes perfect sense that
data would have diminishing marginal value. A model will eventually converge on a statistical
solution, rendering the contribution of an additional datum virtually nil. However, algorithms
and data also have a complex interdependent relationship, which suggests that neither is more
important than the other. In particular, the most effective ML solutions involve an algorithm
designed proportionally around the amount of data available. On the one hand, if the model is too
simple, the results will be subject to statistical underfitting and provide less accuracy than a
system with more parameters. On the other hand, if the model is too complex, errors and
variances will be fully incorporated in the model, leading to overfitting (Qiu, Ding, & Feng,
2016). As Zhu, Vondrick, Fowlkes, and Ramanan (2016) note, “In some sense, we need ‘better
models’ to make better use of ‘big data.’” Empirical analyses have found that, when model
complexity is removed as an independent variable, accuracy for image recognition tasks
increases logarithmically with volume of data (Sun, Shrivastava, Singh, & Gupta, 2017; Zhu,
Vondrick, Fowlkes, and Ramanan, 2016), suggesting that data can lead to significant
improvements in task performance at the price of exponentially increasing requirements.
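The practical consequence of logarithmic scaling is easy to see with invented coefficients:

import math

# Hypothetical scaling curve: accuracy = a + b * log10(n). The
# coefficients are made up; only the shape of the curve matters.
a, b = 0.50, 0.05
for n in [10_000, 100_000, 1_000_000, 10_000_000]:
    print(f"{n:>10,} examples -> accuracy ~ {a + b * math.log10(n):.2f}")
# Each equal gain in accuracy costs a tenfold increase in labeled data.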
Because digital communications do not respect political borders, controlling data flows
internationally is difficult and the relevant policy initiatives focus on increasing informational
resources available to local firms. Not all countries have the same inherent capacity to generate
quality ML data, however. U.S. firms may generate significant amounts of commercial and user
data due in part to permissive privacy practices, but the country faces at least three significant
limitations vis-à-vis other countries at the Federal level. First, while the U.S. government also is
a prolific data creator—publishing high-quality data sets on everything from economic indicators
to weather sensor collections—under copyright law, all U.S. governmental works are in the
public domain (17 U.S.C. § 105). The recent trend, in fact, has been to make this data even more
accessible through APIs and other accessibility measures. This means that much of this data is as
available to foreign competitors as it is to U.S. firms. Second, the United States is limited by
constitutional protections, notably the Fourth Amendment restriction on unreasonable searches
and seizures, from collecting personal information without judicial review, although this
protection has eroded to a limited extent due to the third-party doctrine (see page 116) and
various national security exceptions. Finally, under the Privacy Act of 1974, the government
cannot disclose personally identifiable information that it collects unless it meets one of twelve
exceptions.
Authoritarian countries face a much smaller set of restrictions, allowing them to cultivate
a greater amount and quality of behavioral and social data and share it with state-sponsored
enterprises. China, for example, conducts significant mass surveillance of its population, both via
online tracking and through ubiquitous facial recognition. The extent of this observation is
highlighted by the “Social Credit System,” a national reputational database that collates a variety
of personal information to rate citizens and apply various penalties or benefits, such as travel
restrictions, based on perceived trustworthiness. China’s 2018 Internet Security Law, meanwhile,
includes among other provisions requirements that firms store information collected in China
within the country (“data localization”) and provide data to authorities upon request. This
increases the vulnerability of sensitive corporate records to intellectual property theft, an ongoing
concern in China; in a recent survey of large companies, 20% reported that Chinese firms had
stolen their IP within just the last year (Rosenbaum, 2019).
Data labeling has also become a prime nexus for international competition over AI. Low
labor costs make China a more appealing home for third-party data processing than U.S.
locations, allowing China to develop labeling expertise and infrastructure that the United States
is not able to match. Because the U.S. labor cost structure is unlikely to trend significantly
downward and the country faces significant institutional obstacles to jumpstarting a homegrown
labeling industry, this may generate a significant differential advantage in AI competition.
3.5 Scale Economies in the AI Industry
The analysis above highlights that all three of the major inputs into ML feature
significant economies of scale, barriers to entry, or increasing returns to scale. The latter
phenomenon rewards first-mover advantages, locks in early technological implementations, and
tends to create natural monopolies (Arthur, 1989). For these reasons, at least in the United States,
major technology companies are likely to maintain their near-term dominance.
First, while GPUs are abundant and relatively affordable, there are significant efficiencies
of scale relating to computing resources. For companies that have “excess,” or slack, computing
resources, the effective cost of AI hardware approaches zero. Such slack resources tend to arise
when a company is large enough to operate its own data center and computing infrastructure
rather than purchase services from a third party, and when demand for these resources is highly
variable and prone to temporary spikes. While there are numerous companies that operate data
centers or other computational facilities, the Big Five tech companies represent the confluence of
firms that are making significant AI investments and have excess computational power.
For example, Amazon is a worldwide leader in on-demand cloud computing services via
its Amazon Web Services (AWS) platform. AWS is highly profitable, responsible for generating
more than $35 billion, or 20%, of Amazon's net sales, and more than $9.2 billion, or 63%, of
Amazon's net income (Amazon 10-K filing for the fiscal year ended December 31, 2019, pp.
24-25, available at https://ir.aboutamazon.com/sec-filings/default.aspx, accessed April 2, 2020).
Google and Microsoft have also entered this line of business and
captured significant market share. Google’s extensive data-center architecture, meanwhile, likely
numbers close to a million servers and makes it one of the world’s top networks by traffic
carried. To streamline and coordinate operations within and across geographically dispersed
centers, Google has developed sophisticated in-house tools, such as MapReduce, TensorFlow,
and Cloud Dataflow, to manage big-data and parallel processing tasks. Although Google has
released code underlying these tools to the open-source community, it remains famously tight-
lipped about its data-center operations and it is probable the company retains more sophisticated
proprietary tools.
Second, the high cost of AI labor favors firms that have the resources to meet or exceed
salary expectations for top-performing experts. As of late 2019, the Big Five tech companies had
an aggregate of more than $450 billion in cash, cash equivalents, and marketable securities on
hand (Stevens, 2019), allowing them to offer not only large compensation packages but also
pursue an aggressive acquisition strategy. Indeed, their collective purchase of nearly 60 AI
companies in the past decade seems to be driven largely by the need to acquire additional AI
talent unavailable in regular labor pools (and perhaps to stifle latent competition; see page 51),
not the desire
to obtain new products. They can also afford to heavily cross-subsidize speculative research—
such as letting DeepMind rack up operating losses exceeding 500 million pounds—due to
extremely profitable core business lines.
Finally, the differentiated nature of ML products, the high costs of data labeling, and the
increasing accuracy possible with additional training information reward companies with large
and diverse data sets. Technology companies are likely the largest holders of data that is not
under serious restrictions; governments face various statutory obstacles to the use of data and
many other large holders are in highly regulated sectors, such as financial services or health care,
with enhanced privacy requirements. Due to their rich collection practices, large user bases, and
longstanding dominance of online platforms, the holdings of these tech companies are vast: They
include data on shopping behavior, location and travel patterns, social relationships, website
surfing behavior, email preferences, demographic information, and more. In 2012, Google alone
processed about 24 petabytes (or 24,000 terabytes) of data each day (Davenport, Barth, & Bean,
2012). Moreover, data collection itself has increasing returns to scale; the more consumer data a
company collects, the greater its ability to create highly targeted products that allow the firm to
attract yet more users. In addition, technology companies generally have good data hygiene,
which drives down their marginal costs for labeling.
The self-reinforcing nature of ML models also exacerbates the influence of size in the AI
industry. In general, the output of a completed ML model, especially when applied to a novel set
of information as would be the case for a commercialized product, can also serve as an input into
subsequent product development. Thus, in addition to simple scale advantages and learning
efficiencies, ML also features a second type of significant increasing returns to scale, as each
model generates data that can be used in future models.
This is mitigated, to some extent, by the differentiated nature of AI products; so long as
data is not fungible for different ML algorithms, AI companies will have increasing advantages
only in areas in which they hold specialized information. There are several factors that mitigate
this, however. First, the costs of obtaining and cleaning data have fostered interest in transfer
learning, which allows some algorithms to be trained on closely related sets of data (Weiss,
Khoshgoftaar, & Wang, 2016). This lowers potential entry costs, but it also allows companies
with more data on hand to leverage such transferability to develop better models, allowing them
to outcompete on product quality. Second, since large technology companies have the ability and
desire to acquire the requisite human talent and large amounts of data on hand, they do not face
significant resource limitations on pursuing multi-product business strategies. Third, these firms
collect rich data profiles on their customers, so their data holdings include information that is
amenable to a large variety of AI tasks.
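To make the transfer-learning mechanism mentioned above concrete, the sketch below (using PyTorch and torchvision, as one illustrative stack) reuses a model pretrained on a large dataset and retrains only a small task-specific head:

import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights="IMAGENET1K_V1")  # pretrained features
for p in backbone.parameters():
    p.requires_grad = False   # freeze the knowledge learned from big data

backbone.fc = nn.Linear(backbone.fc.in_features, 5)  # new 5-class task head
# Only the new head is trained, so far less task-specific data is needed.

The pretrained backbone embodies the value of large data holdings: the entrant's cost falls, but the firm with more data to pretrain on retains the quality advantage.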
It would be reasonable, at this point, to consider whether the AI industry will experience
the “creative destruction” that commonly occurs when new inventions are introduced
(Schumpeter, 2008). After all, the development of a GPT is one of the most vulnerable times for
incumbents, which are not particularly agile even in the face of seemingly minor process or
product improvements.
There is reason to be skeptical that such turmoil will occur in the short term. First, studies
of industrial disruption have found that the failure of incumbent firms to innovate is largely due
to issues posed by organizational structure, not a failure to take new technology seriously: “In
nearly every case, the established firm invested heavily in the next generation of equipment, only
to meet with very little success. Our analysis of the industry's history suggests that a reliance on
architectural knowledge derived from experience with the previous generation blinded the
incumbent firms to critical aspects of the new technology” (Henderson & Clark, 1990, p. 24). In
other words, over time firms embed into their core practices an implicit set of assumptions that
make adaptation to innovations difficult. However, in the case of technology companies, ML—
revolving around sophisticated data manipulation—has significant compatibility with existing
core lines of business, and in fact has been embedded into their data centers and other systems
for many years. This lessens the mismatch between old and new institutional structures. Second,
many of these companies have adopted a deliberately decentralized structure featuring both
stable core units and more experimental incubator-style divisions. Although this structure does not guarantee that core business units will rapidly adopt disruptive ideas, it does allow a firm to maintain greater flexibility. Third, technology companies are
aggressively acquiring AI startups, which has the twin effect of boosting internal AI capabilities
and foreclosing potential competition before it grows to scale. In 2020, regulators began an
inquiry into whether such acquisitions are anticompetitive (see p. 95).
In addition, the network effects and other scale economies inherent to many
communications industries lead to recurrent conditions of monopoly or oligopoly that are
particularly resistant to disruption, in part because this fosters enormous political power. History
is replete with examples of communications firms that leveraged their dominant position to obtain favorable regulation and strangle would-be competition (Wu, 2010). Much of the innovation in these industries became possible only when the burden of such anticompetitive conduct tipped the government into aggressive intervention. The modern internet was developed because regulators broke AT&T’s stranglehold over customer premises equipment (see p. 39), and Google and Facebook exist, in part, because Microsoft—once as dominant in the software industry as AT&T was in its heyday—was forced to stop blocking third-party middleware from its operating system (see p. 104-107). Although a major premise of this paper is that some aspects of U.S. domestic policy
are heading in that direction, the Federal government has not yet made any decisions of this
magnitude in the AI space.
3.6 Conclusion: Economic Structures in the AI Industry
This chapter dissects the industrial production of AI into its constituent inputs and
outputs, an analytical approach that remedies a gap in the existing research literature. The core
components of ML—computing power, algorithms, and data—are currently dominated by large
technology companies and feature increasing returns to scale and other economic features that
reward incumbency. This suggests that the U.S. decision to rely on the technology sector to
advance its international AI objectives is sound. At the same time, these inputs have become
active sites of domestic and proxy economic conflict. The following chapters analyze these
developments in greater detail.
CHAPTER 4: THE EROSION OF U.S. INDUSTRIAL ADVANTAGES
4.1 Introduction: U.S. Leadership in AI
The previous chapter established that the AI industry is dependent on inputs that feature
significant economies of scale, providing substantial advantages to large, established technology
companies. These firms have leveraged dominance in online services such as search, social
networking, cloud computing, and ecommerce—along with wide-ranging and diversified
business models—to accumulate vast and varied stores of data useful for algorithmic training.
These firms’ size and scope allow them to make outsize investments in the expensive labor
underlying AI innovation. And they can harness robust computing and data center
infrastructure—needed to support their primary lines of business—to supply the raw computing
power needed to develop ML technologies.
It is no coincidence that U.S. companies lead globally in the provision of edge services.
The ability of these firms to scale rapidly and expand aggressively into adjacent markets was the
direct result of a unique combination of American policy and regulatory decisions. Three distinct
but interconnected choices by lawmakers—a lack of strong baseline privacy protections for
commercial data, favorable treatment under regulatory and antitrust law, and broad legal liability
protections—directly fostered the development and consolidation of novel online services into a
small number of dominant companies headquartered in the United States. These companies,
which include the Big Five and sectoral leaders such as Twitter, Uber, and Netflix, among
others, are notable not just for their market dominance but for remarkable structural similarities:
They have wide-ranging, diversified business models, with a heavy reliance on collecting and
monetizing user data.
The origin of these companies is not due solely to regulatory action or inaction.
Technological and economic advances played just as important a role. For example, cookies—
information deposited by websites and servers onto internet browsers—are the technology upon
which “the whole of web commerce initially was built” (Lessig, 2006). Cookies allow companies
to track computers persistently despite the bias toward anonymity built into the core internet
protocols. To this day, cookies are essential for mining data on users across visits and browsing
sessions, helping to create the rich, deep profile information that underlies commonplace online
services such as targeted advertising. But it is quite likely that these technologies would not have
been deployed at such wide scales and in the same time frames were it not for a regulatory
environment explicitly structured to favor rapid private development in edge services.
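A minimal sketch, using only the Python standard library, may clarify the cookie mechanism described above (the port, cookie name, and lifetime are hypothetical choices for illustration): the server issues each new browser a persistent identifier via the Set-Cookie header, and the browser returns it on every later request, enabling tracking across visits and sessions.

# Minimal sketch of persistent cookie-based tracking, standard library only.
# The port, cookie name, and lifetime are hypothetical illustration choices.
from http.server import BaseHTTPRequestHandler, HTTPServer
import uuid

class TrackingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        returning = self.headers.get("Cookie")  # sent back by the browser, if any
        self.send_response(200)
        if returning is None:
            # First visit: issue an identifier the browser will present on
            # every subsequent request, across visits and browsing sessions.
            self.send_header("Set-Cookie", f"uid={uuid.uuid4()}; Max-Age=31536000")
        self.end_headers()
        self.wfile.write(b"hello\n")

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), TrackingHandler).serve_forever()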
However, these industrial advantages are now eroding due to growing anxiety over the
political, legal, economic, and societal power of large technology companies. These tensions are
deep and abiding, bipartisan, and global, representing a fundamental restructuring of attitudes
toward technology not just in the United States but across the globe. And they are not restricted
to a single issue but represent deep anxiety over many aspects of industrial organization and
practice. The result is an unprecedented level of scrutiny. As of early September 2019, just four
technology companies—Apple, Amazon, Facebook, and Alphabet/Google—were facing at least
16 distinct investigations by regulators and lawmakers. These include eight Federal, six state and
local, and two Congressional examinations over privacy, antitrust, discrimination,
cryptocurrency, and other issues (The New York Times, 2019).
This is certainly not the first time concerns have been raised over the negative impacts of
new technologies. Indeed, scholars expressed such skepticism over what is in many ways the
first communications technology: writing. In the Phaedrus (360 BCE), Socrates expressed his
trepidation over both the impacts of the technology and the difficulty of inventors in properly
distancing themselves from and assessing the repercussions of their creations. Recounting the myth of Theuth, Socrates has King Thamus tell the inventor of writing that “the parent or inventor of an art is not always the best
judge of the utility or inutility of his own inventions to the users of them. And in this instance,
you who are the father of letters, from a paternal love of your own children have been led to
attribute to them a quality which they cannot have; for this discovery of yours will create
forgetfulness in the learners' souls, because they will not use their memories; they will trust to
the external written characters and not remember of themselves.” Postman (1993) presaged many
contemporary concerns over algorithmic use of data in a biting criticism of the deification of
technology in modern societies, calling out the tendency for modern nation-states to subjugate
human decision-making, moral authority, and ideological meaning to computing technology and
statistical analysis.
The current “techlash” builds upon such criticisms but is also notable for the rapidity of
its onset and the dramatic break it represents from more utopian visions about the role of
emerging technologies in society, particularly narratives of digital economic empowerment and
social liberation. In many ways, this era of mistrust against technology companies traces back to
the disclosures by Edward Snowden, beginning in 2013, of mass global surveillance by the
United States and other Western democracies, primarily on the grounds of homeland security and
terrorism prevention. Snowden’s revelations of mass data-sharing with Federal agencies touched
both traditional telecommunications companies that maintained and operated core Internet
infrastructure (Greenwald, 2013) and the technology firms behind consumer-facing edge
applications such as email, text, and video chat (Gellman & Poitras, 2013).
Immediately prior to that, the discourse around technology platforms was more idyllic.
Despite some missteps, tech companies—indeed, anything associated with Silicon Valley—had
been considered “emissaries of the future” (Zuboff, 2015, p. 85) that retained an aura of
empowerment and human advancement. For example, pro-democratic social media was tied
closely to the Arab Spring, the wave of civil unrest and revolution that struck North Africa and
the Middle East between 2010 and 2012, in journalistic and popular narratives (Harlow &
Johnson, 2011), findings that were later corroborated empirically (Howard, Duffy, Freelon,
Hussain, Mari, & Maziad, 2011). (Notably, the bloody rebellions and failed regime changes that
resulted from this wave of protests mirror the growing cynicism toward technology generally.)
Domestically, 2011 also saw civil society join with large and small technology
companies to generate popular pushback against, and ultimately derail, the Combating
Online Infringement and Counterfeits Act of 2010 (COICA), the Stop Online Piracy Act of 2011
(SOPA), and the Preventing Real Online Threats to Economic Creativity and Theft of
Intellectual Property Act of 2011 (PROTECT IP Act or PIPA). All three bills were attempts by
U.S. legislators to impose intermediary liability on major Internet platforms to help combat
online copyright infringement and the trafficking of counterfeit goods (Benkler, Roberts, Faris,
Solow-Niederman, & Etling, 2015). Such cooperation would seem impossible today; indeed, as
detailed later, the erosion of longstanding protections for online platforms under Section 230 (47
U.S.C. § 230) of the Communications Decency Act (CDA, Title V of the Telecommunications
Act of 1996) is representative of growing popular and political support for imposing additional
duties on Internet intermediaries, rather than preserving broad liability protections and freedom
from government interference.
The Snowden revelations irreversibly darkened America’s attitudes toward technology.
First, they highlighted more starkly than ever before the sheer extent to which society is
dependent on modern technologies, not just to conduct public and government business but also
for the daily activities of regular life. Second, they underscored the visibility and control that a
few large technology and telecommunications companies have over those communications.
Third, they generated a large, persistent distrust between a government focused on exploiting
vulnerabilities in technology for purposes of domestic and international security and private
companies that face serious and substantial operating risks from the existence of such
weaknesses. This makes any public-private cooperation on technology issues within the United
States more difficult, as technology companies face pressure to limit actions that might prioritize
Federal needs over those of consumers. This tension has come to a head in the debate over
encryption, with technology companies expanding the use of strong encryption tools and
lawmakers demanding the creation of backdoor access measures that can make information
accessible if the government presents a legitimate warrant.
The Snowden disclosures coincided with another online development: increasing
concerns about the reach and power provided by social media to online agitators interested in
politicizing social issues and preserving hegemonic world views. Even from the early days,
online media had afforded dominant groups a forum for cyberbullying and abuse; consider the
“cyber-rapes” performed in the multiplayer online game LambdaMOO, in which a player used a
“voodoo doll” program to hijack and virtually violate player avatars (Dibbell, 1993; see also
Lessig, 2006, p. 97-102). By 2014, abusive behavior had become strikingly commonplace,
affecting upwards of 60% of online users (Duggan, 2014). The impacts had also become more
visceral and more costly. Online trolls and cyberbullies took advantage of online anonymity and
asymmetrical power dynamics to amplify minor controversies into major online events such as
GamerGate (Massanari, 2017). The consequences can be severe. Constant rape and death threats,
along with doxxing (the revelation of private personal information, such as home addresses), have sabotaged careers and left many victims in a state of perpetual fear. And instances of “swatting” (falsely summoning an armed police team to a home) have led to the deaths of uninvolved parties.
The 2016 U.S. elections fanned this smoldering discontent into a full-on blaze. Whereas
previous concerns were tied largely to impacts on the commercial and social spheres, the
revelations of Russian disinformation campaigns intended to manipulate U.S. presidential and
Federal elections (Select Committee on Intelligence, 2019) and the rampant spread of false and
misleading news stories on social media platforms (Allcott, Gentzkow, & Yu, 2019; Meserole,
2018; Mitchell, Gottfried, Stocking, Walker, & Fedeli, 2019) disturbed public faith in core
institutions underlying American democracy. These developments firmly demolished any idea
that digital technology was a utopian panacea. Instead, they underscored the extent to which
technology companies were embedded into, and capable of exerting enormous influence over,
every aspect of American life and culture. Worse, those same companies seemed incapable of
managing or mitigating the serious information problems that they had helped create.
The cumulative result of these developments is that the dominant U.S. technology firms
are facing growing threats to their ability to maintain their market size and presence. These
domestic concerns also have significant repercussions for AI policy. The anxiety over the power
of American tech companies is largely directed towards the very feature that has allowed them to
make significant advancements in AI technology: the accumulation, monetization, and
manipulation of consumer data. This is the key tension in current U.S. technology policy:
Domestic undercurrents clash directly with the advantages that the country is leveraging to gain
global supremacy over emerging technologies.
The U.S. political system is far from nimble. While serious reforms are in play for each
of the three major areas underpinning American technological superiority—privacy, antitrust,
and liability protection—partisan disagreements and strong opposition from politically powerful
incumbents muddy the legislative forecast. It is unclear when any of the myriad legislative
proposals directed toward the technology industry might become law, and regulators face
significant legal challenges in shifting the tenor of their enforcement and investigative activities.
Even when change is slow, however, it can still be effective. Historical data reveal that
even exploratory actions by regulators and lawmakers can chill problematic corporate behaviors.
It is precisely this type of latent threat that prevented companies like Google and Facebook from
being “strangled in the crib” by dominant software companies like Microsoft in the earliest days
of the commercial internet (see p. 104-107).
The discussion below traces the origins and impacts of these three areas of U.S.
advantage and discusses the domestic trends that are undermining each. This is followed by
some general conclusions about the impact of domestic trends on the economic organization of
large technology companies. The most compelling lesson is that new policy developments
directly threaten the ability of technology companies to collect and harness the data resources
that serve as a key input into AI technologies.
4.2 Regulation and Antitrust
4.2.1 Deregulation of the “information services” sector. The U.S. technology
companies that are the current target of popular and political ire came of age in a particularly
amenable regulatory environment. In the early 1990s, U.S. politicians took a series of forward-
looking steps that set the stage for online innovation and experimentation, spurring a wave of
investments that led to the “dot com” crash of 2000 but also to eventual U.S. domination of
edge services.
First, spurred by the invention of the World Wide Web and growing popular and
commercial interest in online technologies, the U.S. government moved to privatize internet
infrastructure, transitioning key functions such as naming, numbering, and access point control
to private or quasi-private entities (NSF, 2003). (While the Department of Commerce technically
maintained supervisory control over internet naming and numbering functions until 2016
(Strickling, 2016), it began the privatization process in 1997 under explicit instructions from
then-President Clinton (NTIA, 1998)).
Second, lawmakers used the first major overhaul of the foundational U.S.
communications statute in more than 60 years to expressly nurture the burgeoning online
industry. The Telecommunications Act of 1996 touched numerous areas of communications law
and was explicit about its anti-interventionist philosophy, in service of both economic growth
and the generalized social benefits of bringing online capabilities to Americans. The
accompanying report, for example, described it as a “pro-competitive, de-regulatory national
policy framework designed to accelerate rapidly private sector deployment of advanced
telecommunications and information technologies and services to all Americans by opening all
telecommunications markets to competition” (House of Representatives, 1996).
The Telecommunications Act included numerous deregulatory elements. One key
provision, for example, provided the FCC with the ability to “forbear from applying any
regulation or any provision of this chapter to a telecommunications carrier or
telecommunications service, or class of telecommunications carriers or telecommunications
services, in any or some of its or their geographic markets,” if the Commission found certain
conditions applied (47 U.S.C. § 160). In other words, if the FCC, utilizing its subject-matter
expertise, found that enforcement of a statutory provision was not needed to ensure just,
reasonable, and non-discriminatory behavior; was not necessary to protect consumers; and was
in the “public interest,” the FCC could take further deregulatory actions under its own delegated
authority, without the need to consult Congress or other parts of the executive branch.
Another central element of the Telecommunications Act was the codification into statute
of a longstanding light-touch FCC regulatory framework that favored online services. Since the
1960s, the FCC, in a series of landmark regulatory actions known as the Computer Inquiries
(FCC, 1966; 1971; 1976; 1979; 1980; 1985; 1986),
had wrestled with “the regulatory and policy problems raised by the interdependence of computer technology, its market applications, and communications common carrier services,” particularly the increasing use of data processing services over the monopoly telephone network (FCC 1980, p. 389). (Although the FCC conducted three proceedings that would come to be known as the Computer Inquiries, the discussion here focuses primarily on the Second Computer Inquiry (FCC, 1976) and the associated regulatory decisions (FCC, 1979, 1980), where the key distinctions between enhanced and basic service—concepts core to modern communications debates such as net neutrality—were first elucidated.) To address these issues, the FCC developed a philosophical approach that distinguished between “basic” services regulated as common carriers under Title II of the Communications Act and non-regulated “enhanced” services (FCC, 1980, p. 428), a distinction that served as “a necessary precondition for the success of the Internet” (Cannon, 2002). Basic service was “limited to the common carrier
offering of transmission capacity for the movement of information, whereas enhanced service
combines basic service with computer processing applications that act on the format, content,
code, protocol or similar aspects of the subscriber's transmitted information, or provide the
subscriber additional, different, or restructured information, or involve subscriber interaction
with stored information” (FCC, 1980, p. 387). In justifying this differential approach, the FCC
found that the market for enhanced services was highly competitive, with low barriers to entry,
and therefore both law and policy essentially required these services to remain unregulated
(FCC, 1980, p. 432-435):
The principal limitation upon, and guide for, the exercise of these additional powers
which Congress has imparted to this agency is that Commission regulation must be
directed at protecting or promoting a statutory purpose. In some instances, that means not
regulating at all, especially if a problem does not exist. … We have examined the
extensive record in this proceeding to determine whether a comprehensive regulatory
scheme for enhanced services is necessary to protect or promote some overall objective
of the Communications Act. We find that it is not. (FCC, 1980, p. 433)
The validity of the FCC’s approach in the Computer Inquiries was hard to dispute in the
early days of the privatized Internet. “That approach was wildly successful in spurring
innovation and competition in the enhanced-services marketplace: Government maintained its
control of the underlying transport, sold primarily by regulated monopolies, while eschewing any
control over the newfangled, competitive ‘enhancements’” (Weinberg, 1999, p. 222). In
codifying the basic-enhanced distinction—albeit using alternative telecommunications-
information services vocabulary (47 U.S.C. § 153) hearkening back to the MFJ against AT&T
(United States v. AT&T Co., 1982, p. 179)—the Telecommunications Act locked into place a framework that provides online services essentially no ex ante legal obligations and little regulatory oversight. (Drawing on legislative history, the FCC quickly formalized this reading of the 1996 Act: “Reading the statute closely, with attention to the legislative history, we conclude that Congress intended these new terms to build upon frameworks established prior to the passage of the 1996 Act. Specifically, we find that Congress intended the categories of ‘telecommunications service’ and ‘information service’ to parallel the definitions of ‘basic service’ and ‘enhanced service’ developed in our Computer II proceeding, and the definitions of ‘telecommunications’ and ‘information service’ developed in the Modification of Final Judgment breaking up the Bell system.” (FCC, 1998, para. 21).) Thus, unlike commercial telephone, cable, or wireless companies, digital
edge services have minimal exposure to regulatory fees (47 U.S.C § 159) and face limited legal
costs for regulatory compliance.
The durability of the telecommunications-information services divide is noteworthy.
Rather than eroding over time, this framework has tenaciously resisted any weakening. Indeed,
the FCC’s conclusions in the Computer Inquiries have allowed a nominal information processing
component to “infect” a service that otherwise would appear to be telecommunications,
broadening that category significantly over time. For example, the FCC noted that attempting to
classify enhanced services, which by definition include a basic communications component,
would lead to “troublesome” unintended consequences, such as providers arbitrarily limiting their offerings to “avoid crossing the regulatory boundary” and being captured by Title II common carrier rules
(FCC, 1980, p. 427, 426). As a result, the Commission decided that “all enhanced computer
services should be accorded the same regulatory treatment and that no regulatory scheme could
be adopted which would rationally distinguish and classify enhanced services as either
communications or data” (FCC, 1980, p. 428). Over time, this logic has led the FCC to reclassify
cable modem (FCC, 2002) and DSL (FCC, 2005) broadband access technologies as information
services. Ultimately, this issue also became the primary philosophical point of contention in both
the 2015 and 2017 FCC net neutrality rulemakings (FCC, 2015; 2017). In those highly disputed
rulings and the associated court cases, for instance, much debate revolved around whether
Domain Name System (DNS) services provided by broadband internet service providers
provide “the offering of a capability for generating, acquiring, storing, transforming, processing,
retrieving, utilizing, or making available information via telecommunications” (47 U.S.C. §153)
sufficient to render the entire mixed service an information service (FCC, 2015, para. 365-371;
FCC, 2017, para. 27-40). (DNS is a decentralized naming system for networked computers and servers. It serves as an “address book” for translating between domain names and IP addresses. Broadband internet access service providers typically provide DNS services for their customers. However, such functionality is strictly optional, as free alternative DNS resolution services, such as Google Public DNS, are readily accessible to internet users.)
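For illustration, the following minimal Python sketch performs the lookup described in the note above through whatever resolver the operating system is configured to use (assumptions: network access and the conventional example.com test domain; substituting a public resolver such as Google Public DNS is an operating-system setting rather than a code change):

# Illustrative DNS lookup via the operating system's configured resolver;
# requires network access, and example.com is merely a conventional test domain.
import socket

for family, _, _, _, sockaddr in socket.getaddrinfo("example.com", 80):
    print(family.name, sockaddr[0])  # the IP addresses behind the domain name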
This longstanding tenet of U.S. communications policy may be changing, however, as
policymakers reconsider whether information services should continue to enjoy near-total exemption from regulatory oversight. Whenever technology companies have established market dominance in a particular service, there have been calls to recognize and regulate that power, through utility-style regulation if necessary. As early as 2007, Werbach (2007) noted that “it is
possible for applications to become exclusive platforms with anti-competitive effects similar to
those of exclusive physical broadband networks. Google’s dominant search engine and
MySpace’s massive social networking site might be candidates for such [regulatory] scrutiny at
some point in the future.” And Odlyzko (2009) argues that, in a world with net neutrality rules
for telecommunications providers, other companies, such as Google, will become similar “choke
points” and “it would be wise to prepare to monitor what happens, and be ready to intervene by
imposing neutrality rules on them when necessary” (p. 57). In other words, the recognition that
certain online services, subject to network effects, may be natural monopolies leads logically to
the conclusion that the government might need to impose common carrier or other utility-style
rules over previously unregulated information services. Calls for such regulatory scrutiny are
intensifying in policy circles. For instance, Elizabeth Warren has proposed that large tech
platforms be designated as “platform utilities” subject to structural separation,
nondiscrimination, and fair dealing requirements (Warren, 2019).
4.2.2 The consumer welfare approach to antitrust. Another consequence of
information services deregulation is the narrow intervention aperture that the framework
afforded the government in overseeing online firms. With the FCC affirmatively washing its
hands of any jurisdictional oversight, the government had only industry self-regulatory activity
and an antitrust backstop to address potential competitive harms. Indeed, as legal analysts have
noted, “[t]here are four general approaches to the regulation of broadband network providers vis-
a-vis independent applications providers: structural regulation, such as open access; ex ante non-
discrimination rules; ex post adjudication of abuses of market power, as they arise, on a case-by-
case basis; and reliance on antitrust law and non-mandatory principles as the basis for self-
regulation. At present, the FCC follows the last approach” (Goldfarb, 2005).
Even as antitrust became the primary mechanism for addressing competitive issues
arising in information services, U.S. antitrust doctrine and practice were moving in a direction that would make legal action against online platforms and social media companies difficult to envision,
much less put into practice.
Historically, economic structuralism—“the idea that concentrated market structures
promote anticompetitive forms of conduct”—animated antitrust thought and activity from the
passage of the foundational U.S. federal antitrust laws (Khan, 2016, p. 718). These statutes, the
Sherman Act of 1890, the Clayton Act of 1914, and the Federal Trade Commission Act of 1914
(FTCA), were passed largely in response to the enormous accumulation of power—both
economically and politically—by oil, steel, and railroad companies (Khan, 2016, p. 718).
Beginning in the mid-1970s, however, new economic thinking that challenged that focus, often
termed “Chicago School” economics since many of its leading proponents were based at the
University of Chicago, began to gain traction. Chicago School doctrine, leaning heavily on
neoclassical price and utility maximization theory, argues that only quantifiable economic
impacts—primarily, changes in pricing—were the proper subject of antitrust analysis and of any
potential legal challenges against corporate business practices (Bork, 1978).
Under this logic, market structure in and of itself is not a reason for concern, as the
arrangement of corporations reflects economic dynamics rather than causing them (Bork, 1978).
Chicago School economists reject as irrational—and therefore non-existent—practices such as
predatory pricing and tying arrangements. They also developed a general presumption that
vertical integration, by creating internal efficiencies that are passed down as lower prices, has
pro-consumer impacts (Crane, 2013). These arguments make clear the extent of the break that the
Chicago School represents with prior antitrust regimes. Under a strict interpretation of the
Chicago philosophy, for example, regulators might not have been able to prosecute landmark
communications antitrust cases against highly vertically integrated companies, such as the break-
up of AT&T (United States v. AT&T Co., 1982) and the Paramount Decree that ended the
Hollywood studio system (United States v. Paramount Pictures, Inc., 1948).
This concept of antitrust is now firmly dominant. The Supreme Court has explicitly
established that “the purpose of the antitrust laws is the promotion of consumer welfare”
(Westman Comm'n Co. v. Hobart Int'l, Inc., 1986, p. 1220, also cited in Reazin v. Blue Cross and
Blue Shield of Kansas, 1990, p. 960; Ginzburg v. Mem'l Healthcare Sys., 1997, p. 1015). When it
first developed this line of reasoning (Reiter v. Sonotone Corp., 1979, p. 434), the Court relied
directly on Bork’s (1978) claim that “The Sherman Act was clearly presented and debated [by
Congress] as a consumer welfare prescription” (p. 66). (The Court said the following, internal citations retained: “Nothing in the legislative history of § 4 [of the Clayton Act] conflicts with our holding today. Many courts and commentators have observed that the respective legislative histories of § 4 of the Clayton Act and § 7 of the Sherman Act, its predecessor, shed no light on Congress' original understanding of the terms “business or property.” Nowhere in the legislative record is specific reference made to the intended scope of those terms. Respondents engage in speculation in arguing that the substitution of the terms “business or property” for the broader language originally proposed by Senator Sherman was clearly intended to exclude pecuniary injuries suffered by those who purchase goods and services at retail for personal use. None of the subsequent floor debates reflect any such intent. On the contrary, they suggest that Congress designed the Sherman Act as a ‘consumer welfare prescription.’ R. Bork, The Antitrust Paradox 66 (1978).” (Reiter v. Sonotone Corp., 1979, p. 434, emphasis added).)
Current antitrust guidelines also reflect this line of thinking. For instance, they note that
“non-horizontal mergers are less likely than horizontal mergers to create competitive problems”
(U.S. Department of Justice, 1984, p. 23). Although antitrust guidelines do acknowledge the
possibility that mergers and acquisitions can cause non-price impacts such as reduced
innovation, lowered product variety, or exclusionary impacts (U.S. Department of Justice and
Federal Trade Commission, 2010), “it is fair to say that a concern for innovation or non-price
effects rarely animates or drives investigations or enforcement actions—especially outside of the
merger context” (Khan, 2016, p. 722).
The rejection of the U.S. government’s request to enjoin the merger of AT&T and Time
Warner underscores the extent to which this approach has established deep roots in U.S.
jurisprudence. In that decision, the court stated that while it accepted the general principle that
vertical mergers are not invariably innocuous, it found that, “notwithstanding the proposed
merger’s conceded procompetitive effect,” the government did not meet its burden of proof of
“establishing, through ‘case-specific evidence,’ that the merger of AT&T and Time Warner, at
this time and in this remarkably dynamic industry, is likely to substantially lessen competition in
the manner it predicts” (United States v. AT&T Inc., 2018, p. 59). The Chicago School prima
facie assumption that vertical mergers are welfare-enhancing places an enormous evidentiary
burden on any government prosecution of the antitrust statutes.
Chicago School antitrust doctrine effectively immunizes technology companies from
antitrust concerns. Many products offered by these companies, ranging from search engines to
social networking platforms to email services, are zero-priced, that is, provided at no cost (see,
e.g. Anderson, 2009). Often, this is because they are subsidized by advertising, but it is also
common for such products to be used strategically to grow market share in industries with
substantial supply-side economies of scale. For example, early Google and Facebook provided
free search and social networking services, respectively, to grow their user bases well before they
even conceived of the advertising revenue models that now drive billions in monthly profits
(Auletta, 2009; 2011). The absence of any possibility of negative pricing impact means that
antitrust action would need to meet the much higher legal burden of establishing empirically the likelihood of significant non-price impacts. The result is that these companies have faced few
legal or regulatory obstacles to growth, either organically or via acquisition.
When the FCC began conducting its Computer Inquiries, its primary concern was not the
possible market power of “enhanced” services firms but the near certainty that, absent
government action, AT&T would abuse its market power to undermine or purchase potential
competitors. This is why, even as the FCC moved to deregulate information services entirely,
“There was one major caveat: Ma Bell and her descendants, when seeking to offer enhanced
services, were subject to a set of rules designed to ensure that they did not leverage their
monopoly power” (Weinberg, 1999, p. 221-222). Throughout the Computer Inquiries and
beyond, the FCC has never walked back its finding that the information services market is highly
competitive and dynamic, with few barriers to entry.
The technology sector, however, no longer seems to be the perfectly competitive market
that led the FCC to deregulate; the size, power, and political influence that large technology
companies have accumulated after two decades of unchecked growth have reconstituted
monopolist dynamics. This has created a resurgence of interest in antitrust theory and its
application to modern economies. In particular, economists, civil society organizations, and legal
scholars have raised three major categories of competitive concerns over the contemporary
technology industry.
First, these companies often control essential platform infrastructure used by their actual
competitors, while simultaneously developing first-party products to distribute over those
platforms. This provides an incentive for the companies to engage in anticompetitive blocking or
copying of these products, or to use their control to steer consumers to first-party products.
Examples of such platforms include Amazon Web Services, which hosts much of the world’s
cloud computing data; Amazon’s e-commerce services, which feature third-party products as
well as in-house brands; Google’s Android mobile operating system; and Google’s search
infrastructure. Google, for example, has raised the ire of European antitrust regulators for such
anticompetitive behavior, reflected in three notable antitrust fines between 2017 and 2019. A fine
of 2.42 billion euros levied in June 2017 said that Google abused its dominance as a search
engine by favoring its own product comparison site over competitors providing similar services
(European Commission, 2017); a July 2018 fine of 4.43 billion euros was for “illegal restrictions
on Android device manufacturers and mobile network operators to cement its dominant position
in general internet search” (European Commission, 2018); and a 1.5 billion euro fine in March
2019 was for “imposing a number of restrictive clauses in contracts with third-party websites
which prevented Google's rivals from placing their search adverts on these websites” (European
Commission, 2019).
Another concern is that these companies engage in rampant horizontal and vertical
acquisitions, stifling the development of potential competition. The most commonly cited
examples are Facebook’s acquisitions of WhatsApp and Instagram. However, large technology
companies are highly acquisitive in general, both to obtain new products and intellectual
property and to acquire highly talented personnel. A recent analysis, for example, found that the
Big Five have made more than 720 acquisitions since 1987, only one of which was challenged in
Federal district court (Moss, 2019). (Under the Hart-Scott-Rodino [HSR] Antitrust Improvements Act of 1976, as amended [15 U.S.C. §18(a)], only transactions above a certain size are reportable to antitrust agencies, require payment of HSR premerger filing fees, and are subject to mandatory waiting periods. Many of these technology acquisitions would not have reached the HSR filing level and thus were not reportable transactions that would have received some level of government review.) In addition to the anticompetitive implications, this also
distorts the start-up market: Venture-funded companies face intense pressure to accept lucrative
acquisition offers and provide equity payouts to their investors.
A final set of concerns revolves around the tendency of technology companies to use
below-cost pricing—even zero-cost pricing—to drive growth at the expense of profit, a practice
that has received surprising levels of deference from stockholders (Khan, 2016). This allows
companies to establish dominance in “winner takes all” industries such as search and e-
commerce. A second-order effect is rapid and aggressive entry into related markets—exactly the
type of behavior that led to restrictions on AT&T’s enhanced services offerings. Through this
behavior, technology firms accumulate large advantages in data that can be used as inputs for in-
house consumer targeting or to support a robust tailored advertising business.
Underlying these technical arguments is a larger macroscopic concern that the consumer
welfare dogma in antitrust is problematic from political and historical perspectives. Khan (2017)
calls it “wrong” (p. 739) and Orbach (2013) notes that “[T]he introduction of the consumer
welfare standard sparked a great controversy over the meaning of the term in antitrust and the
desirable goals of antitrust … [and] placed antitrust at war with itself” (p. 2152). Modern
antitrust analysis’s almost exclusive focus on economic considerations misses important social
and political considerations that drove the creation of the foundational antitrust statutes.
Economic power begets political power. This power provides the ability to direct legislative
agendas and influence the political process in ways that could erode democratic institutions and
undermine the stability of the republic. As Pitofsky (1979) notes, “It is bad history, bad policy,
and bad law to exclude certain political values in interpreting the antitrust laws. By ‘political
values,’ I mean, first, a fear that excessive concentration of economic power will breed
antidemocratic political pressures, and second, a desire to enhance individual and business
freedom by reducing the range within which private discretion by a few in the economic sphere
controls the welfare of all” (p. 1051). The nexus between social media platforms and foreign
influence campaigns (Select Committee on Intelligence, 2019) provides perhaps the most
compelling contemporary example: Facebook’s content policies and practices—largely
untouched and untouchable by government regulators—now have outsize influence on the
stability and reliability of Federal and state election infrastructure, and therefore on the future
direction of the nation.
There are two parallel efforts to address these tensions. First, government authorities are
aware of the limits of current precedent and agency practice and are working to revise their
analytic frameworks and assess whether legislative changes to antitrust law are necessary. As an
FTC official noted at a recent antitrust conference, “... our highest priority is to complete and
release a guidance document on the application of the antitrust laws to conduct by technology
platforms. … If we are successful, this document will identify an analytic framework for
identifying, evaluating and remedying conduct by dominant technology platform companies. It
will help us, and the Commission, and interested parties, understand better whether there are
limitations in antitrust law that prevent the agencies from prohibiting or successfully remedying
anticompetitive or unfair conduct” (Sayyad, 2019).
Second, government enforcers have adopted more aggressive interpretations of existing
tools to allow for greater antitrust action against companies, including potential retrospective
review of already consummated vertical and horizontal mergers. There is a flurry of government
activity reflecting this new focus, even if it is too early for this to result in any definite
enforcement action. For example, the FTC—which shares jurisdiction over enforcement of U.S.
antitrust laws with the DOJ’s Antitrust Division—in 2019 established a Technology Task Force
“dedicated to monitoring competition in U.S. technology markets, investigating any potential
anticompetitive conduct in those markets, and taking enforcement actions when warranted”
(FTC, 2019a), which it converted within eight months to a full new competition enforcement
division, the first created in more than a decade (FTC, 2019e).
Although it is the general practice of agencies not to comment on ongoing investigations,
press reports and investor disclosures have confirmed that some specific inquiries are already
underway. For example, the FTC and DOJ have agreed to split antitrust investigations into the
technology industry, with the FTC working on issues related to Facebook and Amazon while
DOJ handles Alphabet and Apple (Kendall & McKinnon, 2019). Facebook has already
confirmed an active FTC antitrust investigation into its acquisition practices, and there is likely a
looming investigation into Amazon over its handling of third-party competitors that make use of
its selling platforms (Statt, 2019). And the DOJ has begun a broad antitrust review to determine
“whether and how market-leading online platforms have achieved market power and are
engaging in practices that have reduced competition, stifled innovation, or otherwise harmed
consumers” (U.S. Department of Justice, 2019). State attorneys general are being equally
aggressive: 47 state AGs are investigating Facebook for potential antitrust violations, including
stifling of competition (James, 2019), while 50 AGs have teamed up to examine Google’s
conduct relating to its search, advertising, and other businesses for antitrust issues (Fung, 2019).
The FTC in February 2020 also announced that it is employing its authority under
Section 6(b) of the FTC Act, which allows it to conduct wide-ranging studies that do not have a
specific law enforcement purpose, to examine “prior acquisitions not reported to the antitrust
agencies under the HSR Act” (FTC, 2020). The FTC’s “orders require Alphabet Inc. (including
Google), Amazon.com, Inc., Apple Inc., Facebook, Inc., and Microsoft Corp. to provide
information and documents on the terms, scope, structure, and purpose of transactions that each
company consummated between Jan. 1, 2010 and Dec. 31, 2019.” This level of retrospective
review, intended to “deepen [the FTC’s] understanding of large technology firms’ acquisition
activity, including how these firms report their transactions to the federal antitrust agencies, and
whether large tech companies are making potentially anticompetitive acquisitions of nascent or
potential competitors that fall below HSR filing thresholds and therefore do not need to be
reported to the antitrust agencies,” is highly unusual and signals a strong interest by antitrust
regulators to expand the scope of their antitrust investigations to issues of market structure. (To
be clear, the FTC is currently engaged in a fact-finding operation, which may or may not lead to
litigation to address particular competitive concerns.)
In many cases, renewed antitrust scrutiny is focused specifically on the potential
anticompetitive impacts of consumer data collection practices. As OECD (2016) researchers
note, “in markets where zero-prices are observed, market power is better measured by shares of
control over data than shares of sales or any other traditional measures.” As a result, antitrust
regulators are increasingly placing data at the heart of their antitrust inquiries. Makan Delrahim,
Assistant Attorney General for the DOJ Antitrust Division, has noted that “The aggregation of
large quantities of data can also create avenues for abuse. ... Today, the extraction of monopoly
rents may look quite different than it did in the early 20th century. Therefore, it is not surprising
that data and its market value as an asset class would raise competition concerns. After all,
antitrust properly understood promotes consumer welfare in all its forms, including consumer
choice, quality, and innovation” (Delrahim, 2019).
However, it would be easy to overstate the significance of these developments. Even if
there is a greater willingness to challenge vertical consolidations and other acquisitions that
create problematic market structures, there are at least four reasons why competition-related
activities—including both antitrust review and conduct remedies—against the large technology
firms are likely to face significant legal obstacles, absent a significant overhaul of the underlying
statutory frameworks.
First, in Ohio v. American Express (2018), the Supreme Court raised the burden of proof
required to prove anticompetitive conduct on two-sided platforms. Two-sided platforms are
businesses that intermediate between multiple sets of users and exhibit indirect network effects,
in which the size of one side of the market affects the value to the other side (p. 1). The U.S.
government and several states had raised concerns over the market for credit card transactions.
American Express employs a business model focusing on member rewards and consequently charges merchants higher processing fees than its competitors. To combat this, some merchants
attempt to “steer” business towards these competitors, but American Express employs
antisteering provisions in its contracts with merchants that block such behavior. The U.S.
government and states sued American Express, claiming that its antisteering provisions violate
antitrust statutes.
The District Court concurred with this argument, “finding that the credit-card market
should be treated as two separate markets—one for merchants and one for cardholders—and that
Amex’s antisteering provisions are anticompetitive because they result in higher merchant fees”
(p. 2). But the Supreme Court agreed with the Second Circuit, which had reversed, finding that:
Applying the rule of reason generally requires an accurate definition of the relevant
market. In this case, both sides of the two-sided credit-card market—cardholders and
merchants—must be considered. Only a company with both cardholders and merchants
willing to use its network could sell transactions and compete in the credit-card market.
And because credit-card networks cannot make a sale unless both sides of the platform
simultaneously agree to use their services, they exhibit more pronounced indirect network
effects and interconnected pricing and demand. Indeed, credit-card networks are best
understood as supplying only one product—the transaction—that is jointly consumed by
a cardholder and a merchant. Accordingly, the two-sided market for credit-card
transactions should be analyzed as a whole. (p. 2)
As a result, the Court says that “The plaintiffs have not carried their burden to show
anticompetitive effects,” elaborating that “Their argument—that Amex’s antisteering provisions
increase merchant fees—wrongly focuses on just one side of the market. Evidence of a price
increase on one side of a two-sided transaction platform cannot, by itself, demonstrate an
anticompetitive exercise of market power. Instead, plaintiffs must prove that Amex’s antisteering
provisions increased the cost of credit-card transactions above a competitive level, reduced the
number of credit-card transactions, or otherwise stifled competition in the two-sided credit-card
market. They failed to do so.” Advertising is the archetypal two-sided market, and most of the
major technology companies have business models that rely heavily on pairing users with
advertisers. This legal precedent sets a high bar for conduct investigations into technology
companies by requiring that enforcers conduct a complex analysis showing that there is a net
anticompetitive effect across both sides of a two-sided platform.
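A toy numerical sketch may clarify the point (the figures are hypothetical and are not drawn from the Amex record): if a platform raises its merchant-side fee but rebates even more of it to cardholders, the net two-sided price falls and output rises, so one-sided evidence of the fee increase would mislead.

# Toy two-sided-market arithmetic; all figures are hypothetical, not from
# the Amex record. A higher merchant fee funds larger cardholder rewards,
# so the platform's net two-sided price falls even as the one-sided fee rises.
merchant_fee_before, merchant_fee_after = 2.0, 2.5  # percent of each transaction
rewards_before, rewards_after = 1.0, 1.8            # percent rebated to cardholders
volume_before, volume_after = 100.0, 120.0          # transaction volume index

net_before = merchant_fee_before - rewards_before   # platform's net take, percent
net_after = merchant_fee_after - rewards_after

print(f"merchant-side price rose: {merchant_fee_before}% -> {merchant_fee_after}%")
print(f"net two-sided price fell: {net_before:.1f}% -> {net_after:.1f}%")
print(f"transaction output rose:  {volume_before:.0f} -> {volume_after:.0f}")
# Under Ohio v. American Express, plaintiffs must show harm on the combined
# measure (higher net price or reduced output), not merely the merchant side.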
Second, courts continue to be skeptical of novel interpretations of the competition
statutes, which will likely be central to litigation enjoining future technology mergers. For
example, in FTC v. Rag-Stiftung (2020), the FTC sought a preliminary injunction against the
merger of two of five North American suppliers of hydrogen peroxide, “a veritable swiss army
knife of chemicals” with numerous and varied industrial uses (p. 1). The FTC relied upon a
theory of supply side substitution, which, “[r]ather than relying on consumers’ ability to
constrain prices … focuses on suppliers’ responsiveness to price increases and their ability to
constrain anticompetitive pricing by readily shifting what they produce” (p. 13). The FTC
proposed conflating the product markets for standard, specialty, and pre-electronics grade
hydrogen peroxide, which differ by the amount of pre-sale purification and processing, into a
general category of “non-electronics” hydrogen peroxide (p. 2), even though suppliers of one
type of non-electronic hydrogen peroxide do not generally market and sell the other categories.
Arguments around supply-side substitution generally focus on product swinging, or the
ability to rapidly enter alternative product markets. The court, however, concluded that the FTC
failed to strictly satisfy the three-pronged test under the merger guidelines that swinging must be
universal, easy, and profitable (p. 13-24). In ruling against the government, the court said that
“the FTC has not met its burden of establishing its prima facie case because it has not identified a
relevant market within which to analyze the merger’s possible anticompetitive effects. That
failure begins and ends with the FTC’s theory of supply-side substitution, or ‘swinging,’ a
substantial departure from the typical way in which a product market is defined” (p. 10-11). In
doing so, the courts signal that they are unlikely to seriously consider hypothetical or future
impacts of a merger, but rather rely on strict interpretations of existing product markets. This is
highly constraining when it comes to technology markets, where products are usually
differentiated in ways that would not satisfy the strict product substitutability test. For example,
even though Facebook and Reddit might both fall under the social networking category, they
offer vastly different features, such that many users have accounts on both services. A merger between the two might therefore not raise anticompetitive concerns under prevailing antitrust doctrine.
Third, if existing market structure is a motivating concern for antitrust activity, it is not
clear what a successful remedy might look like for the technology sector—especially how to be
responsive to the prominent calls to “break up” the large technology firms (e.g. Hughes, 2019).
While Google’s restructuring of its business lines into Alphabet might provide a roadmap for
separating that company, and one can imagine unwinding Facebook’s acquisitions of WhatsApp
and Instagram, it is not clear how one might reduce the market power inherent to their core
search and social networking products, which are subject to strong network effects and in which
they have established a level of dominance that creates enormous barriers to new entrants.
Fourth, the regulatory and judicial approval of the T-Mobile-Sprint merger, subject to
certain conditions, suggests that under current doctrine no merger is per se anticompetitive. Even
in a straightforward four-to-three horizontal consolidation, behavioral remedies were deemed sufficient to address competitive concerns. Regulators thus may be unable to block future mergers outright, instead only seeking conditions—subject to potentially ineffective post-consummation
enforcement—if parties are motivated enough to agree to certain behavioral restrictions.
4.2.3 Case study: the T-Mobile/Sprint merger. On April 1, 2020, the U.S.
telecommunications market began a dramatic transformation. Nearly two years after first
submitting their application to merge, T-Mobile and Sprint announced that they had officially
completed the process to create a single firm, New T-Mobile (T-Mobile, 2020), reducing the
U.S. wireless market from four to three major nationwide firms. A deeper investigation of this
merger is worthwhile, because few developments in recent history more starkly illustrate the
inherent tensions between domestic and international priorities in technology policy.
In the context of domestic antitrust trends, the pursuit of this merger might seem odd. The
union is a straightforward horizontal consolidation, which raises unambiguous questions about
pricing and competition even under Chicago School antitrust doctrine. Empirical evidence
reveals that prices are higher in national markets with only three major wireless carriers
(Rewheel/Research, 2018), suggesting substantial difficulties in arguing the merits under the
consumer welfare standard. And Sprint and T-Mobile had explored a merger several times prior
to 2018 but were rebuffed in informal talks with regulators.
The merger was also subject to enhanced scrutiny. Since both T-Mobile and Sprint have
large foreign shareholders, the merger required national security review by both the Committee
on Foreign Investment in the United States (CFIUS) and an executive-branch national security,
law enforcement, and public safety screen applicable to telecommunications mergers known as
“Team Telecom.” (At the time of the merger, the “Team Telecom” process was an informal committee. On April 4, 2020, Executive Order 13913 formalized this review process, establishing “the Committee for the Assessment of Foreign Participation in the United States Telecommunications Services Sector … the primary objective of which shall be to assist the FCC in its public interest review of national security and law enforcement concerns that may be raised by foreign participation in the United States telecommunications services sector.”)
Mergers involving spectrum licenses are also subject to a multi-party
domestic antitrust review process, requiring approval from DOJ under the Federal antitrust
statutes, from state utility commissions under applicable state laws, and from the FCC under a
“public interest, convenience, and necessity” standard. The FCC has not been shy about using its
review power to stop analogous consolidations. In 2011, AT&T abandoned an acquisition of T-
Mobile in large part due to an FCC staff report finding that the combination raised a “serious
concern” and would lead to price increases of 6% for wireless service (FCC, 2011, p. 2, C-23).
The primary argument in favor of the Sprint/T-Mobile consolidation was advancing U.S.
5G leadership. Sprint and T-Mobile stated that they did not individually possess the scale or
spectrum assets to successfully compete in 5G service with the much larger AT&T and Verizon:
“The transaction will enable New T-Mobile to build a network with distinct advantages over
both the standalone 5G networks planned by T-Mobile and Sprint and will provide a platform for
an unrivaled nationwide 5G mobile service” (Sprint/T-Mobile, 2018, p. 16). The firms stated that
the merger would actually increase coverage and product quality. “This proposed merger is
necessary to accomplish a goal critical to enhancing consumer welfare in this country: the rapid
and widespread deployment of 5G networks in a market structure that spurs rivals to invest in a
huge increase in capacity, and, correspondingly, to drop tremendously the price of data per
gigabyte” (Sprint/T-Mobile, 2018, p. i). The application also included an explicit appeal to
international policy objectives. “New T-Mobile will be able to leverage a unique combination of
complementary assets to unlock massive synergies in order to build a world-leading nationwide
5G network that will deliver unprecedented services to consumers, increasingly disrupt the
wireless industry, and ensure U.S. leadership in the race to 5G” (Sprint/T-Mobile, 2018, p. i). Put
more bluntly, “New T-Mobile will level the playing field with Chinese operators” (Saw, 2019).

[35] At the time of the merger, the “Team Telecom” process was an informal committee. On April 4, 2020,
Executive Order 13913 formalized this review process, establishing “the Committee for the Assessment of Foreign
Participation in the United States Telecommunications Services Sector … the primary objective of which shall be to
assist the FCC in its public interest review of national security and law enforcement concerns that may be raised by
foreign participation in the United States telecommunications services sector.”
At least on paper, the arguments about 5G appear to have swayed regulators. Noting that
“Building leading 5G networks is of critical importance for our nation,” the FCC (2019b, p. 3-4)
states in its approval order “that the transaction, as conditioned, will result in significant public
interest benefits, including encouraging the rapid deployment of a new 5G mobile wireless
network, and improving the quality of the Applicants’ services for American consumers.”
A key premise of this paper is that national and international technology policy priorities
are in tension. Sprint and T-Mobile’s claims are different: that leadership in 5G does not require a
trade-off with domestic policy priorities. Rather, they contended that approving the merger
would allow both goals to advance simultaneously.
There are reasons, however, to be skeptical of this claim. For instance, the DOJ and FCC
placed several conditions on the merger, including commitments to offer current or better rate
plans for at least three years, to cover 97% of the U.S. population with 5G service within three
years and 99% within six years, to build out 5G service to rural communities, and to meet other
5G deployment milestones for mid-band spectrum and in-home broadband (FCC, 2019b, p. 11-
14). Moreover, the transaction includes another highly unusual requirement. The DOJ, finding
that the proposed merger would likely result in competitive harm in some markets, agreed not to
block the transaction only after finalizing a settlement that essentially required the combined
company to create a new wireless competitor, DISH Network (United States of America et al. v.
Deutsche Telekom AG, T-Mobile US, Inc., Softbank Group Corp., Sprint Corporation, and DISH
Network Corporation, 2019). In particular, the settlement requires New T-Mobile to, among
other things, sell its prepaid business and certain spectrum licenses to DISH; provide DISH with
access to cell sites and retail locations; and enter into an expansive mobile virtual network
operator agreement that allows DISH to use New T-Mobile’s wireless network to provide
service, provides DISH the option to construct its own network, and mandates interconnection
between DISH and New T-Mobile.
The oddity of this arrangement cannot be overstated. It essentially capitulates to claims
that Sprint is a “failing firm” that has been left in an untenable competitive position due to a long
history of poor financial and technical choices. Yet the settlement simultaneously accepts at face
value the disciplining competitive pressure of a new entrant with no track record in the wireless
business, dependent on access to assets from a more established rival. The magnitude of the
settlement terms certainly does not lend credence to T-Mobile and Sprint’s narrative that the
merger causes no harm to domestic goals such as improving consumer welfare.
The arrangement also sets a problematic precedent. If an untested new entrant is a
sufficient remedy for clear competitive harms posed by a merger, then no market consolidation is
outright illegal. Some combination of remedies and temporary commitments is enough to
assuage any antitrust harm—despite a lengthy record of poor adherence to such conditions.
There are reasons to temper assessments of the importance of New T-Mobile’s 5G
arguments. The firms proceeded with their merger largely because Republican DOJ and FCC
leadership had adopted extremely business-friendly policy orientations at the time. FCC
Chairman Ajit Pai had made industry deregulation a centerpiece of his tenure (FCC, 2020a),
while court records of text messages show that DOJ Antitrust Division head Makan Delrahim
was actively assisting T-Mobile and Sprint executives with navigating potential FCC and
Congressional objections and helped arrange negotiations between the firms and DISH (Benner
& Kang, 2019).
Still, the final approval of the merger provides important information about AI policy.
Notably, it suggests that rhetorical appeals to new technologies are extremely potent, and
that they can serve as a powerful factor in convincing policymakers to advance international
objectives at the cost of domestic priorities. It also suggests that the pressure to make decisions
that favor technological progress over consumer protection might enfeeble enforcement-based
policy regimes such as antitrust.
4.2.4 Case study: the Microsoft antitrust case’s legacy. At the same time, it is
important not to lose sight of the potential impact that even implied regulatory scrutiny can have.
Historically, the mere possibility of government intervention into unregulated markets has
spurred intense response from industry, ranging from increases in lobbying activity to creation of
advocacy organizations to pre-emptive self-regulation. Many of the self-censorship initiatives in
the communications sector, including the Motion Picture Production Code (Hays Code); the
Comics Code Authority; record advisory labels; and motion picture, television, and video game
content ratings systems, were proactively crafted to head off the potential for more restrictive
government regulation.
A prominent example illustrates both the pitfalls facing modern antitrust action and the
impact it can nonetheless have. In 2000, the United States District Court for the District of Columbia
ruled in favor of the DOJ that Microsoft had monopoly power in the market for Intel-compatible
PC operating systems and “engaged in a concerted series of actions designed to protect the
applications barrier to entry, and hence its monopoly power, from a variety of middleware
threats, including Netscape's Web browser and Sun's implementation of Java” (United States v.
Microsoft Corp., 1999, para. 33, 409). Accordingly, the court ordered the breakup of Microsoft
into an operating systems business and an applications business (United States v. Microsoft
Corp., 2000).
However, on appeal, the decision was partially overturned (United States v. Microsoft
Corp., 2001), leading the DOJ to reach a more modest settlement with Microsoft. The agreement
prohibited the company, for five years, from retaliating against other firms in certain ways,
required disclosure of its application programming interfaces and communications protocols,
required certain changes to the Windows operating system, and mandated certain organizational
changes and compliance procedures (United States v. Microsoft Corp., 2002a, 2002b).
At first glance, this may seem like an antitrust failure. It took more than three years to move from
trial to settlement, a span preceded by many years of preparatory investigation and followed
by additional time navigating appeals. So even though Federal courts that try antitrust cases are
relatively efficient in the judicial context, such litigation in highly dynamic industries creates a
“mismatch between law time and new-economy real time” that threatens to render verdicts
ineffectual and hinders investment unnecessarily (Posner, 2000). Moreover, the settlement itself
displeased many parties that felt it let Microsoft off too lightly for egregious anticompetitive
behavior. And many scholars took issue with flaws in the government’s reasoning that ultimately
weakened its antitrust case, including poor understanding and analysis of the underlying
technologies at issue (e.g. Chin, 2005; Economides, 2001).
However, in many ways, the case continues to have reverberations to this day, including
shaping the modern internet in profound ways. Upstart companies coming to age at that time,
such as Google, would likely not exist, at least in the same form, without the Microsoft
settlement. In particular, the case had a chilling impact on further predatory actions by Microsoft,
which at that time was just as dominant a company as AT&T was in its heyday or the Big Five
are now. As Gary Reback, an antitrust lawyer who represented Netscape, said:
Microsoft had run Netscape out of the browser market. Internet Explorer was literally 98
percent of the market. The only way you could get to Google was through Microsoft.
You had to go onto the Microsoft browser and type www.google.com. Now if you did
that, there’s no reason Microsoft had to send you to Google. They could have just put up
the big red warning screen, saying “Don’t click on this site. It’s a bad site. It takes your
personal information without telling you …” I have long asserted, based on my
conversations with Microsoft people, that they didn’t do it because they were tired of
getting fined a billion euros a pop. It was clear what would happen to them if they killed
Google or the next generation of technology.
The reason they didn’t do that was the threat of antitrust enforcement. Because of
antitrust enforcement, that’s why we have Google. There is no other reason. (as cited in
Luckerson, 2018)
Tim Wu echoes that sentiment: “A whole generation of companies — Google, Facebook,
some of these early companies — they don’t owe everything to antitrust, but they owe a sizable
debt to the antitrust law” (as cited in Patel, 2018).
4.2.5 Conclusion. The current political tides are problematic for big technology
companies. They thrived in large part because of the United States’ longstanding and deeply
embedded laissez faire approach to online services. But some regulation now seems inevitable.
From an economic standpoint, there is a growing recognition that these companies may control
products that, due to network effects, are natural monopolies that may require utility-style
regulation like the power, transportation, and telecommunications industries. This is often paired
with concerns that such monopolies have become so embedded into modern society that they
serve the effective role of an essential utility, e.g. that absence from Facebook deprives a U.S.
resident of full participation in the modern public sphere.
From an antitrust perspective, the reign of the Chicago economists—and the structural
obstacles it posed to intervention into zero-priced markets—also seems to be waning.
Renewed antitrust activity in the technology sector may not lead to the breakup of any company,
but it could cause significant divestitures, and the mere threat of action is likely to consume
resources, stifle future acquisitions, limit entry into new lines of business, and slow the overall
growth of these firms. Notably, this antitrust resurgence comes paired with a corresponding
increase in scrutiny over the economic value of data, striking at the heart of the ability of U.S.
companies to acquire the raw resources needed to develop and deploy new AI technologies.
4.3 Privacy
4.3.1 Consumer privacy protection. A deregulatory approach to online services,
combined with lax antitrust enforcement, may have allowed technology companies to grow
unimpeded by government intervention, but that is insufficient to explain the size and dominance
of U.S. online firms. They also needed to develop business models capable of driving
exponential growth in the first place. That took yet more government action (or more precisely,
inaction): an enduring lack of baseline information privacy rights for consumer data.
Privacy law in the United States follows a generally siloed approach, in which certain
categories of commercial user data—such as financial, student, or health information—are
considered particularly sensitive and provided affirmative baseline protections. The primary
protections for all other forms of consumer data are ex post enforcement action under the
FTCA’s prohibition on “unfair or deceptive acts or practices in or affecting commerce” (15
U.S.C. § 45(a)). As a result, U.S. companies are given extreme deference to collect, retain,
aggregate, manipulate, and monetize user-provided and other consumer information. This
facilitated a supercharged version of “permissionless innovation” that does not require ex ante
consultation with government agencies, perhaps best demonstrated by Facebook’s infamous
former slogan of “move fast and break things.”
The ubiquity, scale, and scope of this data collection is tremendous. Although the
unregulated nature of the industry makes an exact accounting impossible, generally all businesses
with even a modest online presence collect and store consumer information directly or have
third-party services do so on their behalf. For instance, through 2019, almost 5,100 companies
representing almost every sector of the economy had signed up for the E.U.-U.S. Privacy Shield,
a legal framework that allows companies to self-certify compliance with E.U. privacy laws and
export data across the Atlantic (Privacy Shield Framework, 2019). This underrepresents the full
extent of data collection in the United States, as it ignores companies that do not conduct global
business; that use alternative compliance methods such as contract model clauses; or that have
decided to globally comply with Europe’s General Data Protection Regulation (GDPR), a new
European legal regulation for privacy and data security that began implementation in 2018.
The practical limitations of the U.S. ex post enforcement regime for privacy are also
noteworthy. The FTC is a small agency—with just two-thirds of the funding of the FCC, which
regulates only a single industry[36]—with responsibility for overseeing, to some extent, most
sectors of the U.S. economy. In addition, the FTC has joint jurisdiction over both antitrust and
consumer protection issues, further limiting its capacity to police privacy practices. As a result,
the FTC is highly selective in taking enforcement action, preferring cases involving clear deception or
carrying substantial signaling potential for industry, such as blatant misrepresentations in privacy
policies or user settings; solicitation of information under false pretenses; or unambiguous
violations of basic corporate obligations, e.g. the use of spam or spyware (FTC, 2018, p. 3-4).
Despite the thousands of U.S. companies that collect user information, the FTC had brought only
75 general privacy lawsuits through 2018 (FTC, 2018, p. 3). State attorneys general,
who have jurisdiction over state-level consumer protection issues including privacy, face similar
resource constraints and likewise focus on cases with substantial precedential value.
The following discussion addresses privacy and data security as if they are
interchangeable. The link is straightforward: Privacy generally encapsulates the broad notion of
protection against misuse of “private” information, which includes securing such data from
unauthorized access by third parties. The private sector generally considers these issues in
tandem, and privacy legislation, both proposed and passed, usually includes provisions related to
both. The FTC’s oversight of data issues also includes cybersecurity and data security. The
courts have ruled that, without ambiguity, “FTC has authority to regulate cybersecurity under the
unfairness prong of § 45(a)” (FTC v. Wyndham Worldwide Corp., 2015), and the agency has
brought 65 security cases since 2002 (FTC, 2018, p. 5).

[36] In fiscal year 2019, the FTC’s budget was $309.7 million (P.L. 116–6, 133 Stat. 167), while the FCC’s budget
was $469.284 million (of which $339 million was appropriated funding and the remainder from proceeds from the
use of a competitive bidding system under 47 U.S.C. 309(j)).
4.3.2 The “right to be let alone.” At first glance, it might seem odd that the United
States has adopted such a lax approach to consumer privacy. Privacy was amongst the central
concerns of the Founding Fathers. In particular, the Constitutional framers, reacting to “the
[British] abuse of executive search and seizure powers primarily manifested … in the form of
general warrants and writs of assistance,” (Maclin, 1994) were highly concerned with limiting
government infringements of personal privacy (Solove, 2006, p. 4-5). The result was three
portions of the Bill of Rights that place foundational limitations on government actions related to
property and information. The Third Amendment states that “No soldier shall, in time of peace
be quartered in any house, without the consent of the owner, nor in time of war, but in a manner
to be prescribed by law.” The Fourth Amendment says that “The right of the people to be secure
in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not
be violated … .” And the Fifth Amendment declares that, among other things, “No person …
shall be compelled in any criminal case to be a witness against himself, nor be deprived of life,
liberty, or property, without due process of law.” Legislators and courts have continued to focus
on delimiting executive power when dealing with privacy issues, though the relative strength of
protections afforded the citizenry has waxed and waned over the centuries.
A second common theme in the history of American privacy policy has been the tight
coupling between development of new communications technologies and noteworthy changes in
privacy jurisprudence or law: “Frequently, new laws emerge in response to changes in
technology that have increased the collection, dissemination, and use of personal information”
(Solove, 2006, p. 3). The field of privacy therefore provides an archetypal example of Lessig’s
(2006) normative theory of jurisprudence:
The answer could be either, which means that the change reveals what I will call “a latent
ambiguity” in the original constitutional rule. In the original context, the rule was clear
(no generalized search), but in the current context, the rule depends upon which value the
Constitution was meant to protect. The question is now ambiguous between (at least) two
different answers. Either answer is possible, depending upon the value, so now we must
choose one or the other.
For example, mail in the early United States could not be sealed well, leading to anxieties
about the confidentiality of letters, to the point that some correspondence was written in code. In
response, the government took several formal and informal actions, culminating in the passage of
a still-extant statute: “Whoever takes any letter, postal card, or package out of any post office or
any authorized depository for mail matter, or from any letter or mail carrier, or which has been in
any post office or authorized depository, or in the custody of any letter or mail carrier, before it
has been delivered to the person to whom it was directed, with design to obstruct the
correspondence, or to pry into the business or secrets of another, or opens, secretes, embezzles,
or destroys the same, shall be fined under this title or imprisoned …” (18 U.S.C § 1702).
The invention of the telegraph tap, which allowed eavesdropping into electrical
communications, saw a similar dynamic. In that instance, however, Congress sought to obtain
telegraph messages for some of its investigations, and it failed to enact a law prohibiting
wiretapping (Solove, 2006, p. 8). In response, several courts, analogizing telegraphs and letters,
quashed subpoenas for telegraphs, and numerous state legislatures enacted provisions forbidding
the revealing of messages (Solove, 2006, p. 8). The episode foretells several dynamics of today’s
more complex privacy world, especially the continued reliance on jurisprudence to address
thorny privacy issues and the uncertain protections for information held by third parties.
It was also technological advancements—namely, the development of portable cameras
and the transition to a more sensationalistic press—that spurred the most important advance
in U.S. privacy law: the creation of a tort remedy for invasions of the right of privacy.
Writing in “that most influential law review article of all” (Kalven, 1966), “The Right to
Privacy,” Brandeis and Warren (1890) worry about the deleterious impacts of the new inventions
on private life: “Instantaneous photographs and newspaper enterprise have invaded the sacred
precincts of private and domestic life; and numerous mechanical devices threaten to make good
the prediction that ‘what is whispered in the closet shall be proclaimed from the house-tops’” (p.
195). Famously articulating “the more general right of the individual to be let alone” (p. 205),
Brandeis and Warren systematically found the existing branches of common law, including
defamation, contract, and property law, insufficient to address the issue. Accordingly, they
concluded that “The remedies for an invasion of the right of privacy are … An action of tort for
damages in all cases” (p. 219).
Though the article had “little immediate effect upon the law” (Prosser, 1960, p. 384), the
legacy of this argument is enormous. The dean of Harvard Law School would go on to say that
the article “did nothing less than add a chapter to our law” (as quoted in Mason, 1946, p. 68). So
when Abigail Roberson sued in 1902 to enjoin circulation of lithographic prints containing a
portrait made without permission and for damages, the court found that “There is no precedent
for such an action to be found in the decisions of this court” (Roberson v. Rochester Folding Box
Co., 1902, p. 543). But within three years, in another case featuring the unauthorized use of a
personal likeness (Pavesich v. New England Life Ins. Co., 1905), the court followed Brandeis
and Warren’s reasoning and determined that “A right of privacy is derived from natural law,
recognized by municipal law, and its existence can be inferred … It therefore follows from what
has been said that a violation of the right of privacy is a direct invasion of a legal right of the
individual. It is a tort, and it is not necessary that special damages should have accrued from its
violation in order to entitle the aggrieved party to recover.” Over the course of the next 30 years,
there was “continued debate” over the issue, but “in the thirties, with the benediction of the
Restatement of Torts, the tide set in strongly in favor of recognition, and the rejecting decisions
began to be overruled” (Prosser, 1960, p. 386).
The Warren-Brandeis conception of privacy had several other important repercussions for
U.S. privacy law and regulation. First, in tying privacy to the right to be “let alone,” it captured
only a narrow subset of the modern conception of privacy. As Prosser (1960) notes, four general
categories of torts developed out of the case law, “comprising four distinct kinds of invasion of four
different interests of the plaintiff, which are tied together by the common name, but otherwise
have almost nothing in common except that each represents an interference with the right of the
plaintiff, in the phrase coined by Judge Cooley, ‘to be let alone’ ...:
1. Intrusion upon the plaintiff's seclusion or solitude, or into his private affairs.
2. Public disclosure of embarrassing private facts about the plaintiff.
3. Publicity which places the plaintiff in a false light in the public eye.
4. Appropriation, for the defendant's advantage, of the plaintiff's name or likeness.” (p.
389)
But the right to be let alone fails to capture the actual relationship that users now have
with their data. Routine information sharing is a precondition to online participation and
therefore to access to modern digital services and the contemporary public sphere. Information
collection tools are embedded into the common methods of accessing online content: ISPs
engage in traffic monitoring, cookies track all web browsing and can only be defeated by highly
technical users, websites commonly deploy countermeasures against attempts to block
advertising or other user tracking, and registration is required to use many online services.
Moreover, much of the value of these services is dependent on the active sharing of personal
information—from search queries that help sift through databases, to credit card data that completes
transactions, to user-generated content that facilitates discussion and the sharing of ideas. As a
result, users generally have more interest in controlling the secondary use of their data by third
parties than in restricting information sharing entirely. This is more akin to copyright protection,
a tort explicitly rejected by Warren and Brandeis as insufficient to protect the privacy interest
(1890, p. 201).
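To make the information-collection tools just described concrete, consider a minimal, purely
illustrative Python sketch of a third-party tracking pixel. Every name in it (the "uid" cookie, the
profile store, the handler) is hypothetical; real advertising infrastructure is far more elaborate,
but the basic pattern is the same: mint a persistent identifier, store it in a cookie, and log every
page that embeds the tracker.

import uuid
from http.server import BaseHTTPRequestHandler, HTTPServer

# tracking_id -> pages visited; a stand-in for a real profile database
profiles = {}

class TrackerHandler(BaseHTTPRequestHandler):
    # Hypothetical third-party tracker served as an invisible one-pixel image.
    def do_GET(self):
        # Reuse the browser's existing identifier if present; otherwise mint one.
        cookie = self.headers.get("Cookie", "")
        if cookie.startswith("uid="):
            tracking_id = cookie[len("uid="):]
        else:
            tracking_id = str(uuid.uuid4())
        # The Referer header reveals which page embedded this request, so the
        # tracker accumulates a browsing history across every site it appears on.
        page = self.headers.get("Referer", "unknown")
        profiles.setdefault(tracking_id, []).append(page)
        body = b"GIF89a"  # minimal payload; the response content is irrelevant
        self.send_response(200)
        # A one-year Max-Age makes the identifier persist across sessions.
        self.send_header("Set-Cookie", f"uid={tracking_id}; Max-Age=31536000")
        self.send_header("Content-Type", "image/gif")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), TrackerHandler).serve_forever()

Defeating even this toy version requires a user to locate and clear the cookie on every visit,
which is precisely the kind of intervention only highly technical users undertake.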
Case law, then, is insufficient to provide a viable set of privacy rights to online users who
“voluntarily” provide information. Thus, “Legal protection of personal information on the
Internet is generally limited and often incoherent” and “Unless courts expand these torts over
time, which is unlikely, the increasingly routine use of personal information within cyberspace is
likely to fall entirely outside tort protection” (Schwartz, 1999, p. 1632; 1634). This understates
the modern problem, as website privacy policies are generally “take it or leave it,” and agreeing
to such policies limits both a user’s ability to take legal action and the FTC’s ability to launch
enforcement proceedings using its unfair trade practices authority. The commonplace use of
forced arbitration in user agreements (e.g. Dayen, 2019) can remove entirely even the limited
ability for online users to rely on tort protection. (Copyright violations are potentially subject to
both criminal and civil penalties, and so might provide a blueprint for how to structure an
aggressive hybrid U.S. privacy law.)
Congress has recognized that some information sharing is inevitable in modern
economies and taken steps to protect certain classes of data. The guiding legislative theory is
based on sensitivity—an abstract and ill-defined concept under which certain categories of
information, or information pertaining to certain classes of people, might cause social, personal,
or welfare harm if lost, misused, altered, or accessed without permission. For example, the
Health Insurance Portability and Accountability Act of 1996 (HIPAA) protects health records
and other medical information, while the Right to Financial Privacy Act of 1978 and Gramm-
Leach-Bliley Act of 1999 apply to certain kinds of financial information. The Family
Educational Rights and Privacy Act of 1974 governs access to educational information, and the
Children's Online Privacy Protection Act of 1998 (COPPA) restricts collection of information
from persons under the age of 13. Communications information has long fallen into the sensitive
category, with some of the strongest restrictions against “maximum information defaults”
(Schwartz, 1999, p. 1673). The Cable Communications Policy Act of 1984 (47 U.S.C § 551)
protects cable subscriber information, the Video Privacy Protection Act of 1988 limits disclosure
of video rental information, and Section 222 of the Telecommunications Act of 1996 (47 USC
§ 222) protects the privacy and confidentiality of customer proprietary network information by
common carriers.
But Congress has failed to extend such categorical protections to consumer electronic
data, and in fact has actively taken steps to weaken the limited statutory protections it has
provided. In 2016, the FCC, following its reclassification of BIAS as a Title II common carrier
service (FCC, 2015), issued new privacy rules for ISPs under its Section 222 authority that
mandated transparency around privacy practices, opt-in approval for use and sharing of sensitive
information, opt-out mechanisms for other information, and data security and breach
notifications (FCC, 2016a). A new Republican Congress, however, used an arcane statute
known as the Congressional Review Act of 1996 to pass a resolution of disapproval, nullifying
the rule and preventing the FCC from issuing a substantially similar rule (S.J.Res.34, P.L. 115–
22). As a result, the privacy rules applicable to ISPs remained in legal limbo for several months,
until finally the re-reclassification of BIAS as a Title I information service (FCC, 2017) entirely
removed the applicability of Section 222 to ISPs. Congress was apparently fine both with unclear
privacy rules (in the time between passage of the disapproval resolution and the FCC action) and
with no rules at all.
This might seem strange, as regardless of the regulatory classification of ISPs, the
essential nature of modern broadband communications and the sensitivity of the information that
ISPs can access mirror the concerns around telephone subscriber data that led to passage of
Section 222. ISPs manage and track all traffic originating from and terminating into a customer’s
internet connection, a level of insight that far exceeds that of online service providers, which
can access only partial fragments of a user’s total online activity. As the FCC noted when it put
the rules into place, ISPs “have access to vast amounts of information about their customers
including when we are online, where we are physically located when we are online, how long we
stay online, what devices we use to access the Internet, what websites we visit, and what
applications we use” (FCC, 2016a, p. 2).
Additionally, the tort protections under Warren-Brandeis face problems when dealing
with issues raised by online intermediaries and cloud computing services. Under the prevailing
third-party doctrine, “This Court consistently has held that a person has no legitimate expectation
of privacy in information he voluntarily turns over to third parties” (Smith v. Maryland, 1979, p.
743-744). This follows logically if one accepts that privacy is primarily the “right to be left
alone”; if one provides information to a third party, then they have clearly waived their interest
in solitude. This effectively nullifies the tort protections available to users of many online
services. Congress has taken limited efforts to provide additional protections for data held by
third parties. The Right to Financial Privacy Act of 1978, for example, requires the use of a
subpoena or warrant to obtain financial information (12 U.S.C § 3402). But the leverage the
third-party doctrine provides to the government, especially law enforcement, to obtain
information without running afoul of Fourth Amendment limitations is enormous, and therefore
Congress faces intense pressure not to provide additional protections.
The failure of Congress to update the Electronic Communications Privacy Act of 1986
(ECPA) to account for modern realities is a telling example. Title I of ECPA (the Wiretap Act)
currently prohibits interception or disclosure of wire, oral, or electronic communications
(commonly referred to as data in transit) (18 U.S.C § 2511) without judicial authorization (18
U.S.C § 2516). But data stored for 180 days or more is considered abandoned and accessible with
only a subpoena or court order (18 U.S.C § 2703), a relatively trivial process. In other words, the
law fails to account for developments in cloud computing and miniaturization that make long-
term remote electronic storage by third parties cheap, readily accessible, and convenient.
However, legislative proposals to eliminate the 180-day sunset have failed, in large part
due to opposition from law enforcement agencies concerned that such a change will limit their
ability to gather evidence, and from civil agencies that do not have warrant rights and therefore
rely on subpoena power for investigations (Alder-Bell, 2017). Here we might extend Lessig’s
(2006) theory of latent ambiguities to its most forceful articulation: A lack of action when faced
with an ambiguity of Constitutional law, due to social or technological developments, is a
normative choice in and of itself. In other words, the failure to amend ECPA reflects prevailing
U.S. morality, which says that the interests of law enforcement trump the right of citizens to
assert proactive control over data they have voluntarily provided to a third-party service.
The question of why Congress never enacted a consumer privacy bill is another example
of normative decision-making through inaction. It may be that Congress failed to recognize the
true sensitivity of information being collected online, for instance, disinclining legislators to
create a new protected class of information. This is not a satisfying answer, however. Congress
did not lack for awareness of potential privacy issues even in the early days of the internet. In
1998, at the dawn of online commercialization, the FTC reported to lawmakers that, in a survey
of more than 1,400 Web sites,
“industry’s efforts to encourage voluntary adoption of the most basic fair information
practice principle—notice—have fallen far short of what is needed to protect consumers.
The Commission’s survey shows that the vast majority of Web sites—upward of 85%—
collect personal information from consumers. Few of the sites—only 14% in the
Commission’s random sample of commercial Web sites—provide any notice with respect
to their information practices, and fewer still—approximately 2%—provide notice by
means of a comprehensive privacy policy.” (p. ii-iii)
And the President’s communications policy advisors reported that the advent of the
internet had led to a dramatic uptick in privacy concerns. In 1996, 89% of the public were
concerned about threats to their personal privacy (up from 82% in 1995) and 55.5% were very
concerned, up 8% from 1995 (NTIA, 1988).
Rather, the lack of action on commercial privacy was largely intentional, part of the same
deregulatory economic philosophy that led to privatization of the internet and passage of the
Telecommunications Act of 1996. Observing the privacy debate at the time, Schwartz (1999)
notes that “the Clinton Administration and legal commentators increasingly view the role of the
Internet law of privacy as facilitating wealth-creating transmissions of information, including
those of personal data” (p. 1611). For example, the Clinton Administration’s
explicit philosophy for the digital economy was that “The private sector should lead …
Electronic commerce should be a market driven arena not a regulated one … [and] Where
governmental involvement is needed, it should support and enforce a predictable, minimalist,
consistent, and simple legal environment for commerce” (U.S. Government Working Group on
Electronic Commerce, 1998, p. 5).
But other factors also contributed to this hands-off approach to consumer privacy. While
many legal scholars, and even the Clinton Administration, had developed a working
conception of information privacy as the ability to control the terms under which personal
information was acquired, disclosed, and used (Kang, 1998, p. 1205; Westin, 1968, p. 7), others
were skeptical that such a policy was feasible given technological developments and trends in
online social behavior. “The critical problem with the model of privacy-as-control is that it has
not proved capable of generating the kinds of public, quasi-public, and private spaces necessary
to promote democratic self-rule” (Schwartz, 1999, p. 1660). For Schwartz (1999), there are at
least four major issues with this model: first, the fallacy that users have autonomy to make
choices that preserve their control when interacting with online systems; second, the realization
that privacy is at odds with many other public policy objectives that require access to personal
information, including public accountability and bureaucratic rationality; third, that individuals
are not likely to be capable of controlling the use of their information due to a lack of knowledge
of how that information might be captured, used, or sold; and fourth, that the formality of online
consent systems leads to limited instances of “voluntary” consent. Under this logic, the
substantial public benefits arising from online communities that can exist only in a world of
limited information privacy outweigh the advantages of stronger privacy rules.
4.3.3 The consumer privacy resurgence. As with antitrust, the tenability of a light-
touch privacy regime may be fading. International and domestic developments have dramatically
shifted the contemporary privacy landscape, militating against both the economic and social
rationales for failing to enact baseline consumer privacy protections.
On the economic front, legislators are confronting the reality of an industrial sector
dominated by a few extremely large and profitable companies. As the internet has developed, it
has become clear that data itself—not just the underlying networks—creates supply-side
economies of scale. This means that large companies with an information advantage tend to keep
that advantage through products that feature data as both an input and an output. This occurs in
numerous ways. The information these companies accumulate allows them to create targeted
advertising and user experiences, which they use to attract and retain new users. These
companies take advantage of their data advantages to jumpstart expansion into adjacent lines of
business, which they triangulate into broader and richer user profiles. And they have leveraged
the infrastructure that supports their data accumulation into new cloud products and services,
providing them control over third-party data resources. This presents barriers to entry for
potential competitors and undermines the logic that the current consumer privacy regime in the
United States is a driver of net innovation.
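The compounding dynamic described above can be made concrete with a deliberately crude toy
model. The functional forms and coefficients below are assumptions chosen purely to illustrate
the feedback loop, not empirical estimates: quality grows with accumulated data (with
diminishing returns), users are drawn by quality, and every user contributes new data.

def simulate(users: float, data: float, steps: int = 50) -> tuple[float, float]:
    # Toy feedback loop: data -> quality -> users -> more data.
    for _ in range(steps):
        quality = data ** 0.5      # assumed diminishing returns to data
        users += 0.01 * quality    # assumption: better products attract users
        data += users              # each user contributes new observations
    return users, data

# An incumbent's initial data advantage compounds over the same horizon,
# while an entrant starting with little data barely moves.
print(simulate(users=1_000.0, data=1_000_000.0))
print(simulate(users=10.0, data=100.0))

Even in this sketch, the incumbent's absolute lead widens at every step, which is the intuition
behind treating data as both an input and an output of the product.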
Opponents of new privacy legislation often argue that the aggregate economic cost would
be enormous. McQuinn and Castro (2019), for example, estimate that U.S. adoption of a GDPR-
type privacy regime could cost the U.S. economy up to $122 billion per year. Such analyses are
problematic, however. They ignore the deadweight economic drag from the actions that
technology firms take to “obscure their operations” to evade regulatory scrutiny, public sunshine,
and popular backlash “until opposition is encountered, at which point they can use their
substantial resources to defend at low cost what had already been taken” (Zuboff, 2015, p. 85).
These analyses also minimize or ignore impacts that are non-quantifiable or difficult to estimate,
particularly the benefits of providing consumer protections. For instance, light-touch privacy
rules, while they may create wealth, also transfer capital from consumers to corporations in ways
that poorly serve the public interest and exacerbate wealth inequality. Consumers
bear the bulk of costs, monetary and otherwise, when their data is misused or misappropriated.
Consumers also lack the scale advantages and deep expertise of technology firms, so they must
often resort to economically inefficient methods to mitigate the impacts of such misuse. The
commonplace reliance on third parties to hold, process, or monetize data magnifies this issue, as
the transfer process presents an additional point of cyber vulnerability, and third parties often
have poorer data handling and security practices than the primary data collector.
The prevalence of high-profile cybersecurity incidents in recent years underscores the
poor cyber hygiene of holders of large amounts of sensitive personal information and their often
cavalier data sharing practices. A small sampling of these incidents includes a breach of 3 billion Yahoo
user accounts between 2013 and 2014 (Stempel & Finkle, 2017); a Chinese intelligence
infiltration of roughly 500 million Marriott International accounts (Sanger, Perlroth, Thrush, and
Rappeport, 2018); a 2017 data breach of 147 million Equifax customers (Equifax, 2017); a
breach of 40 million Target accounts in 2013 (Krebs, 2018); and the improper collection of
Facebook profile data by analytics firm Cambridge Analytica (Cadwalladr & Graham-Harrison,
2018). The aggregate impact of these breaches has been enormous, costing U.S. consumers tens
of billions of dollars in collective identity theft losses, spawning entire new industries dedicated to
prophylactic protection of consumer data, and creating a general air of resignation around the
handling of consumer information. In fact, such breaches foster a growing mistrust of online
services, which serves as a net drag on innovation and new business formation.
The argument that the current U.S. privacy regime still serves important social policy
purposes is also unwinding. In addition to fostering extremely high levels of corporate political
power, the laissez faire approach to privacy has raised concerns over an unprecedented level of
human commoditization that sacrifices dignity, freedom, and choice in the name of corporate
profit. The “surveillance capitalism” regime created by Big Data and its associated commercial
practices produces “a new global architecture of data capture and analysis that produces rewards and
punishments aimed at modifying and commoditizing behavior for profit” (Zuboff, 2015, p. 85).
There are also more tangible examples of the sociopolitical cost of lax privacy rules. The
Snowden revelations facilitated a dramatic reshaping of the global privacy context by
highlighting the extent to which the U.S. government actively collaborated with American
companies to access and analyze consumer data. The revelations led directly to the dissolution of
the longstanding Safe Harbor framework under which companies could self-certify adherence
to certain privacy principles in order to export data from the European Union to the United
States. This occurred when activist Maximillian Schrems filed a complaint that, “in the light of
the revelations made in 2013 by Edward Snowden concerning the activities of the United States
intelligence services (in particular the National Security Agency (‘the NSA’)), the law and
practice of the United States do not offer sufficient protection against surveillance by the public
authorities of the data transferred to that country” (Court of Justice of the European Union, 2015).
And the GDPR would likely have ultimately been adopted in a substantially weakened form
had the Snowden disclosures not significantly influenced the privacy debate by buttressing the
arguments of privacy proponents (Kalyanpur & Newman, 2019). Put another way, the U.S.
privacy regime is undermining U.S. foreign policy by creating ideological and practical rifts with
otherwise like-minded partners.
While Europe has been more aggressive than the United States in recent years in taking
antitrust action against the technology industry, that pales in comparison to the bloc’s actions on
privacy. GDPR, in many ways, serves as a new global privacy baseline. In part, this is because
national borders are largely porous to modern information flows. Websites, with few exceptions,
are accessible to global audiences; cloud computing infrastructure tends to host data in
geographically dispersed locations for efficiency and redundancy; and most technology
companies, large or small, tend to prioritize global audience growth to achieve economies of
scale and leverage network effects. As a result, given the EU’s market size and economic clout,
it is often more efficient for companies to comply globally with GDPR requirements rather than
geographically segment their treatment of user data.
Deep dissatisfaction with the scope and scale of corporate information collection has also
jumpstarted more intense discussions around baseline consumer privacy protections in the
United States. Although work on consumer privacy was a core part of the technology policy
agenda under the Obama Administration, the key proposal advanced at that time centered around
a “privacy bill of rights” and enforceable codes of conduct developed through multistakeholder
processes (White House, 2012, p. 7). Such processes have created notable successes for some
technology issues, such as the transition of management of the Internet naming and numbering
functions from U.S. control to the global multistakeholder community. However, they rely on
consensus-building by stakeholders, which often leads to a dynamic in which the entity willing
to invest the most time and resources into arguing its position prevails: victory through
exhaustion. Because commercial companies often face significant financial consequences in such
proceedings, they often have an incentive to make exactly these kinds of investments. As a
result, processes that center around voluntary codes of conduct and multistakeholder agreements
are structurally deferential to commercial interests.
Over the past few years, however, the U.S. domestic conversation has shifted
dramatically. The U.S. Congress has signaled, through several hearings and task forces, a strong
interest in establishing baseline privacy legislation that provides some affirmative corporate
duties related to consumer data collection and use (e.g. Blackburn, 2019; Policy Principles for a
Federal Data Privacy Framework, 2019; Protecting Consumer Privacy, 2019). Members of
Congress have introduced at least a half dozen privacy bills to instantiate specific rights and
enforcement regimes.[37] Most recently, Democrats and Republicans on the powerful Senate
Commerce Committee—representing the most realistic path forward for passage of privacy
legislation—released dueling proposals for consumer privacy in the United States: The
Consumer Online Privacy Rights Act and a “discussion draft” of the United States Consumer
Data Privacy Act of 2019, respectively. Numerous business associations, civil society
organizations, and companies have also released privacy principles or proposals.[38]

Interest in Federal privacy legislation, especially from the private sector, has increased
significantly as a result of the most notable domestic privacy development in decades. The
California Consumer Privacy Act (CCPA) (AB-375), passed in 2018, came into force on Jan. 1,
2020.[39] This bill contains many similarities to GDPR, providing consumers with the right to
know what data is being collected, to learn how it is being used, and to request deletion, as well
as other protections. As a result, private companies have begun to support Federal legislation that
would preempt state-level laws such as the CCPA to ensure a single consistent privacy regime
nationwide, reducing overall compliance costs.

(Notably, privacy is one of several issues where California leverages its size and
influence to set national priorities. This dynamic is seen not only on environmental issues such as
vehicle fuel standards and climate change, but also on technology issues, most notably net
neutrality; the California Internet Consumer Protection and Net Neutrality Act of 2018[40] would
enact many of the net neutrality provisions of the FCC’s now repealed Open Internet Order
(FCC, 2015), although California and the Department of Justice have agreed to defer
enforcement of the law pending resolution of Federal lawsuits challenging the FCC’s repeal.)

[37] E.g. Sen. Amy Klobuchar’s Social Media Privacy Protection and Consumer Rights Act of 2019
(https://www.congress.gov/bill/116th-congress/senate-bill/189); Rep. Suzan DelBene’s Information Transparency &
Personal Data Control Act (https://www.congress.gov/bill/116th-congress/house-bill/2013); the Algorithmic
Accountability Act of 2019, introduced separately by Democratic Sen. Ron Wyden
(https://www.wyden.senate.gov/imo/media/doc/Algorithmic%20Accountability%20Act%20of%202019%20Bill%20Text.pdf)
and Rep. Yvette Clarke (https://www.congress.gov/bill/116th-congress/house-bill/2231); Sen. Marsha
Blackburn’s Balancing the Rights of Web Surfers Equally and Responsibly Act of 2019
(https://www.congress.gov/bill/116th-congress/senate-bill/1116); Sen. Edward J. Markey’s Privacy Bill of Rights
Act (https://www.markey.senate.gov/news/press-releases/senator-markey-introduces-comprehensive-privacy-legislation);
and Rep. Anna G. Eshoo and Zoe Lofgren’s Online Privacy Act of 2019
(https://eshoo.house.gov/news-stories/press-releases/eshoo-lofgren-introduce-the-online-privacy-act/).

[38] Corporate examples include Intel (https://usprivacybill.intel.com/legislation/) and Mozilla
(https://www.mozilla.org/en-US/privacy/principles/). Civil society examples include the Center for Democracy and
Technology (https://cdt.org/wp-content/uploads/2018/12/2018-12-12-CDT-Privacy-Discussion-Draft-Final.pdf) and
a joint group of 34 civil rights, consumer, and privacy organizations
(https://www.newamerica.org/oti/press-releases/principles-privacy-legislation/). Trade association examples include
the U.S. Chamber of Commerce
(https://www.uschamber.com/press-release/us-chamber-releases-model-privacy-legislation-urges-congress-pass-federal-privacy-law),
the Internet Association
(https://internetassociation.org/internet-association-proposes-privacy-principles-for-a-modern-national-regulatory-framework/),
and the Computer & Communications Industry Association
(http://www.ccianet.org/wp-content/uploads/2018/11/CCIA_Privacy_Principles.pdf). Numerous companies have
also expressly committed to global compliance with CCPA, GDPR, or both.

[39] California Bill AB-375, available at
https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201720180AB375. The CCPA was amended
after passage but before implementation by California Bill SB-1121, available at
https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201720180SB1121.

[40] California Bill SB-822, available at
https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201720180SB822.
Designing and implementing new privacy legislation is challenging. Profound ideological
disagreements linger over whether such legislation should preempt state laws and whether it
should provide a private right of action, which would allow private citizens to file lawsuits for
privacy violations (Jerome, 2019). The latter provision would effectively deputize the citizenry
to act like state attorneys general, mitigating underenforcement and selective enforcement issues that arise
because “State and local consumer agencies lack sufficient resources to pursue every consumer
fraud vigorously, and so, like the FTC, face strong incentives to confine their activities to cases
likely to have a broad impact” (Sovern, 1991, p. 448). Such debates reflect a fundamental
theoretical disagreement about the overall burdens imposed by a new statute. Conservatives
generally take a more pro-business tack, supporting preemption and opposing a private right of
action that could greatly increase the complexity and cost of implementation. Liberals favor the
stronger privacy regime that is likely to occur when states can impose additional privacy
requirements on companies and firms are exposed to potential legal action for any violations.
A new privacy law would also have complex downstream impacts on markets. Privacy
legislation with high compliance costs might exacerbate market concentration, since compliance
costs generally do not scale linearly with the number of users, and large companies have more
compliance resources available. There is no consensus on how to address this: CCPA, as
amended, exempts companies with annual gross revenues of less than $25,000,000,[41] while
GDPR imposes limited requirements on companies employing fewer than 250 persons.[42]
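The divergence between the two carve-outs is easy to see in code. The following hypothetical
Python check is simplified to the single trigger each instrument is best known for and ignores
the additional triggers both laws contain:

def ccpa_applies(annual_gross_revenue_usd: int) -> bool:
    # CCPA, as amended, exempts businesses under $25,000,000 in annual gross
    # revenue (other statutory triggers, e.g. data-sale volume, not modeled).
    return annual_gross_revenue_usd >= 25_000_000

def gdpr_recordkeeping_applies(employee_count: int) -> bool:
    # GDPR Article 30(5) relieves organisations employing fewer than 250
    # persons of certain record-keeping duties (exceptions not modeled).
    return employee_count >= 250

# A 200-person firm with $30 million in revenue faces CCPA duties but
# generally escapes the Article 30 record-keeping obligation.
print(ccpa_applies(30_000_000), gdpr_recordkeeping_applies(200))  # True False

The mismatch means a mid-sized firm can face full obligations under one regime while enjoying
relief under the other, one reason compliance burdens do not fall neatly along firm-size lines.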
Enforcement activity is also intensifying. The summer of 2019 saw two major settlements
over problematic privacy practices, including “the largest ever [civil penalty] imposed on a
company anywhere for violating consumers’ privacy” (FTC, 2019c). This $5 billion settlement
with Facebook, which also requires corporate restructuring, enhanced external oversight,
increased compliance measures, and other changes, originated over improper collection of user
data by a third-party vendor, Cambridge Analytica, on the Facebook platform; the FTC found
that Facebook knew of the problematic practices and failed to take mitigating actions in violation
of a previous privacy consent decree (FTC, 2019b). In addition to its size, this settlement is
notable because it strikes at the heart of a key advantage of online platforms: their ability to
leverage monopolist control over online infrastructure to accumulate and monetize information
provided via data-sharing agreements with third parties. In the second settlement, YouTube
agreed to a $170 million fine for COPPA violations, the largest COPPA penalty ever obtained by
the FTC (FTC, 2019d). As a sign of the extent of the sea change on this issue, Democratic
Commissioners took the unusual step of dissenting[43] from the Facebook (Chopra, 2019a;
Slaughter, 2019a) and YouTube (Chopra, 2019b; Slaughter, 2019b) settlements. This was not
because they disagreed with the enforcement actions, but because they felt that the settlements
did not go far enough in creating strong enforceable mechanisms to prevent future violations and
did not levy fines large enough to credibly deter future problematic behavior.

[41] California Bill AB-375, as amended (Cal. Civ. Code § 1798.140(c)(1)(A)), available at
https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201720180AB375.

[42] GDPR, Article 30, available at
http://www.privacy-regulation.eu/en/article-30-records-of-processing-activities-GDPR.htm. Article 30 states that
“The obligations referred to in paragraphs 1 and 2 shall not apply to an enterprise or an organisation employing
fewer than 250 persons unless the processing it carries out is likely to result in a risk to the rights and freedoms of
data subjects, the processing is not occasional, or the processing includes special categories of data as referred to in
Article 9(1) or personal data relating to criminal convictions and offences referred to in Article 10” (emphasis in
original).

[43] Although many independent Federal agencies have become more partisan over time, the FTC is primarily an
enforcement agency. Traditionally, most decisions requiring full votes by enforcement agencies are adopted on a
unanimous basis.
4.3.4 Case study: COPPA and YouTube. The YouTube COPPA settlement presents
insights into how corporate behavior may change under a new privacy statute and the complex
ripple effects such changes might have on the U.S. economy. COPPA’s origins trace back to the early
days of the Internet, when website norms were still developing, and few online destinations had
privacy policies in place. This led to “a wide variety of detailed personal information …
being collected online from and about children, often without actual notice to or an opportunity for
control by parents” (FTC, 1998, p. 4). This included data harvested as part of registration with or
use of a kids-oriented site, and personal information solicited by adults in general-interest chat
rooms and forums, which generally did not take steps to verify the age of users (FTC, 1998, p. 4-
5). In 1997, the Center for Media Education requested that the FTC investigate the data
collection practices of a website for children known as KidsCom (FTC, 1997). The FTC
determined that KidsCom “solicited personal information from children, including name, birth
date, e-mail and home addresses, and product and activity preferences” and that “certain
practices of KidsCom likely were deceptive or unfair in violation of Section 5 of the FTC Act”
(FTC, 1997). To avoid enforcement action, KidsCom changed the problematic behavior, and the
FTC issued principles for online collection from children (FTC, 1997), a common tool for
providing guidance to companies on best practices to avoid enforcement action. The KidsCom
investigation and other work by the FTC on children’s safety and privacy, summarized in an
annual report to Congress (FTC, 1998, p. 4-6), ultimately led to passage and signing of COPPA
(Warmund, 2004, p. 194).
Given the particular vulnerability of children and the rampant problematic behavior seen
towards kids online—including sexual advances and solicitation of passwords, mailing
addresses, and other sensitive personal information by adults—COPPA sets out a particularly
strong set of statutory requirements. For instance, the law makes it “unlawful for an operator of a
website or online service directed to children, or any operator that has actual knowledge that it is
collecting personal information from a child, to collect personal information from a child” unless
the operator “provide[s] notice on the website of what information is collected from children by
the operator, how the operator uses such information, and the operator’s disclosure practices for
such information” and “obtain[s] verifiable parental consent for the collection, use, or disclosure
of personal information from children” (15 U.S.C. § 6502). The FTC was provided rulemaking
authority and required to develop regulations to implement these policies, which came into effect
in 2000.[44]

[44] 16 CFR part 312. The rule was amended in 2013.
The key issue in the YouTube settlement was the company’s failure to properly follow
COPPA requirements when serving viewers behavioral advertising on behalf of itself and
content creators using the platform. Behavioral advertising relies on persistent identifiers to track
user browsing habits and provide tailored ads (FTC & New York v. Google & YouTube
Complaint, p. 7). The FTC found that, despite claiming to be a general audience website,
YouTube marketed itself to popular sellers of child-oriented products and services, such as
Mattel and Hasbro, as a top destination for kids (p. 8). YouTube did provide methods to
automatically or manually rate content as youth-oriented, but it did not treat such channels or
videos any differently for data collection purposes (p. 9). In addition, YouTube generally allows
viewers to watch videos without registration; while some features, such as commenting, are
limited to users that create a Google account, the only age verification mechanism used is self-
identification as being 13 or older, which is not “verifiable” under COPPA (p. 6-7). Based on
these factors, the FTC concluded that YouTube was an operator under COPPA, had “actual
knowledge” that it was collecting information from children, and failed to meet COPPA’s notice
and consent requirements (p. 16-17). Under the settlement terms, YouTube is required to
develop, implement, and maintain a system for channel owners to designate whether their
content is directed to children, to provide annual training on COPPA compliance, and to strictly
follow all COPPA notice and consent requirements (FTC & New York v. Google & YouTube
Order, p. 10-12).
The operational changes made by YouTube in response to the settlement demonstrate two
dynamics that U.S. policymakers must navigate as they consider overhauling consumer privacy
in the United States. First, even small changes to policy norms can have drastic downstream
effects. YouTube has crafted new rules that will require content creators to label whether their
work is made for kids and will employ machine learning to “identify videos that clearly target
young audiences” (YouTube, n.d.). Rather than developing a verifiable consent mechanism,
YouTube will entirely stop serving behavioral advertising for content directed towards kids
(indeed, it will cease any information collection at all, including disabling comments that might
allow kids to inadvertently reveal personal information) (YouTube, n.d.). As YouTube notes,
“Overall, viewers will have minimum engagement options with made for kids content on
YouTube.com” (YouTube, n.d.). In addition, even though COPPA is a U.S. law, YouTube will
apply these policies globally (YouTube, n.d.).
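The resulting serving logic can be summarized in a brief sketch. This is a hypothetical reconstruction based only on YouTube’s public description of its changes; the names and structure are illustrative, not actual platform code.

# Hypothetical reconstruction of the serving logic YouTube describes publicly;
# names and structure are illustrative, not actual platform code.
from dataclasses import dataclass

@dataclass
class Video:
    creator_label: bool    # creator self-designates the content as made for kids
    classifier_flag: bool  # ML model flags videos that "clearly target young audiences"

def is_made_for_kids(v: Video) -> bool:
    # Either signal is sufficient to trigger the restricted treatment.
    return v.creator_label or v.classifier_flag

def serve(v: Video) -> dict:
    if is_made_for_kids(v):
        # COPPA-driven path: no persistent identifiers, hence no behavioral
        # ads; comments and similar engagement features are disabled.
        return {"ads": "contextual", "comments": False, "tracking": False}
    # General-audience path: behavioral advertising backed by persistent identifiers.
    return {"ads": "behavioral", "comments": True, "tracking": True}

print(serve(Video(creator_label=False, classifier_flag=True)))
# {'ads': 'contextual', 'comments': False, 'tracking': False}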
YouTube’s annual revenues are $15 billion and growing rapidly year after year (see
Alphabet, 2020).[45] The $170 million FTC settlement represents approximately 1 percent of annual YouTube
revenue. Yet the dramatic response by YouTube signals the seriousness of privacy reform for big
technology companies. Unfettered collection of consumer data is the heart of their business
models, and any restrictions will dramatically cut into their profitability. YouTube’s compliance
changes, designed to maintain strict adherence to its settlement terms and minimize the
emergence of any edge cases that might be ambiguous under current COPPA regulations,
originate from the same mindset driving technology’s slowly growing interest in Federal
privacy legislation: a resignation that change is coming. In that case, the most prudent course of
action is “to make lemonade from lemons” and shape the ensuing changes to do the least harm.

[45] Alphabet has been famously tight-lipped about revenues attributable to its individual business lines. It revealed YouTube-specific revenue for the first time in 2020 (see also Wakabayashi, 2019).
YouTube’s decision to apply its changes globally is equally notable. The porousness of
online borders was once celebrated as a profoundly democratizing force. That narrative has
fallen out of favor as toxic online behavior, misinformation, and electoral interference erode the
viability of online platforms to serve as a true public sphere. Yet GDPR, CCPA, and the
YouTube settlement highlight the ways in which unilateral action can set a new normative and
legal floor around consumer privacy. Geographic segregation of data may be technically feasible,
but in a world of ubiquitous cloud computing and high mobility, it presents significant
administrative costs, increases potential legal liability, hinders data security, and compromises
user experience. At the same time, this dynamic significantly complicates the process of crafting
new privacy policy. Domestic changes cannot be considered in isolation from the international
stage; rules and requirements ostensibly enacted as part of a national policy can lead to
requirements that are mutually exclusive with the laws of other countries, could stifle operation
of online services, and may complicate or restrict international data flows.
A second major impact of the YouTube settlement reveals the deep entrenchment of
major technology companies in the U.S. economy and the profound imbalances of power that
now exist online. Under COPPA, an operator is “any person who operates a website located on
the Internet or an online service and who collects or maintains personal information from or
about the users of or visitors to such website or online service, or on whose behalf such
information is collected or maintained, where such website or online service is operated for
commercial purposes …” (15 U.S.C. § 6501). Thus, channel owners that monetize videos are
operators under COPPA, since YouTube is collecting user information and serving ads on their
behalf (FTC, 2019f). YouTube, in requiring content owners to make their own determinations
about whether content is child-directed, is leveraging its gatekeeper power over online video to
outsource legal liability to its users. Thus, content creators assume the costs and burdens of legal
compliance, even as YouTube collects half of the advertising revenue generated by content
creators (Rosenberg, 2018). Perhaps more problematically, creators have no input into platform
governance or operations; that is YouTube’s sole domain. This combination of statutory
language and corporate practice highlights the deep vulnerability of online content creators vis-à-
vis online platforms: Arbitrary changes by YouTube to optimize revenue or minimize political
risk can threaten the livelihood of thousands of people, who have no legal or practical recourse.
Previous changes to YouTube policies, designed to alleviate public outrage over inappropriate
content, have already disadvantaged small creators by eliminating their ability to monetize
videos and appear in tailored search results (Ehrenkranz, 2018). Thus, as lawmakers move to
increase privacy protections, they must also consider downstream effects on platform users
deeply enmeshed with the dominant tech companies.
4.3.5 Conclusion. In many ways, privacy policy is transforming more rapidly than
antitrust. Domestic concerns around surveillance culture and ubiquitous data collection have
already led to legislative action, including strong new privacy regimes in Europe and California
that in many ways are setting a new global baseline around treatment of consumer data. U.S.
companies have recognized the prevailing winds and shifted from instinctive opposition to
pushing for laws that minimize the threats to their core business models. The central impact
remains the same, however. Loose regulations over the collection and use of consumer data are
key to the enormous informational resources available to the largest technology companies.
Changing that directly weakens their ability to drive domestic AI technology forward. For
instance, the implementation of CCPA and GDPR, combined with increasing privacy
enforcement, is already making firms less likely to engage in data-sharing agreements with
online platforms, undermining the ability of those firms to translate their infrastructure control
into a data advantage.
4.4 Online Speech and Content
4.4.1 The 26 words that shaped the modern internet. The United States was not the
only country to take deregulatory action in the telecommunications, competition, and privacy
arenas. Some smaller countries were quicker on the draw in opening their telecommunications
markets, for instance (Spiller & Cardilli, 1997), and many nations face the same gaps between
technological progress and updates to privacy regulations. So there is merit to claims that these
factors alone cannot explain U.S. dominance in online services.
There is, however, a third regulatory innovation unique to the United States. Section 230
of the CDA states, in part, that “No provider or user of an interactive computer service shall be
treated as the publisher or speaker of any information provided by another information content
provider” (47 U.S.C. § 230(c)(1)). Although Section 230 limits those protections when it comes
to criminal and intellectual property laws, the statute provides enormous freedom to online
companies that host user-generated or third-party content, including a liability shield from
controversial or objectionable material. As the Electronic Frontier Foundation notes, “CDA 230
is perhaps the most influential law to protect the kind of innovation that has allowed the Internet
to thrive since 1996” (Electronic Frontier Foundation, n.d.), and legal scholars have called for
“[t]he twenty-six words that shaped the first two decades of the modern Internet [to] remain in
effect” (Kosseff, 2017). According to technology and legal scholar David Post (2015), “No other
sentence in the U.S. Code, I would assert, has been responsible for the creation of more value
than that one.”
The immediate problem that CDA 230 addressed was a split in judicial treatment of
online services. In the internet’s early days, websites would edit or remove postings that violated
certain speech codes, or would employ filtering technology to screen for problematic or obscene
material (Ehrlich, 2002, p. 402-403). But “the central question is whether this sometimes-
nominal amount of control over content exposes the ISP to liability if some defamatory or
obscene speech slips through the filter” (p. 403). The case law provided few suitable options:
“Prior to passage of the CDA, courts addressed this problem in the defamation context by
placing the ISP in one of the traditional First Amendment categories of publisher, distributor, or
common carrier. The law applies different legal standards to these entities based on their control
over the content they disseminate” (p. 403). Because “ISPs did not fit cleanly into any category,”
(p. 403), the courts came to inconsistent answers. Sometimes the courts would treat a website’s
efforts to scrub content as making it analogous to a publisher, which faces a strict liability
standard in which it is legally responsible even absent a finding of fault or criminal intent
(e.g. Stratton Oakmont, Inc. v. Prodigy Services Co., 1995). Other times, the courts would
determine that the service was more akin to an online library and thus a distributor, which are
liable only upon a showing of knowledge or negligence (Ehrlich, 2002, p. 403-404). “These
cases set up a paradoxical no-win situation: the more an ISP tried to keep obscene or harmful
material away from its users, the more it would be liable for that material” (p. 404). In that
context, Section 230’s statutory language can be read in no other way than as a direct response to
Stratton and a means to resolve the legal gray area created by findings of publisher liability for
online services.
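This paradox can be made concrete with a schematic model. The categories and standards below come from the case law as summarized by Ehrlich (2002); the code itself is purely illustrative.

# Schematic model of the pre-CDA doctrine described above; the categories and
# standards come from the case law summarized by Ehrlich (2002), the code is
# purely illustrative.

def liability_standard(moderates_content: bool) -> str:
    if moderates_content:
        # Stratton Oakmont reasoning: editorial control makes the service a
        # publisher, strictly liable even absent fault or knowledge.
        return "publisher: strict liability"
    # "Online library" reasoning: a passive service is a distributor, liable
    # only upon a showing of knowledge or negligence.
    return "distributor: knowledge-based liability"

# The paradox: the service that tries to filter harmful material faces the
# stricter standard than the one that does nothing at all.
print(liability_standard(True))   # publisher: strict liability
print(liability_standard(False))  # distributor: knowledge-based liability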
Section 230 was passed as part of the Telecommunications Act of 1996, however, and it
was also intended to advance the same twin agenda of economic and social advancement as other
portions of that bill. The statutory language is explicit about this. From an economic perspective,
it is clear that 230 should be considered a core element of the overall Act’s deregulatory
objective. It says, for instance, that “The Internet and other interactive computer services have
flourished, to the benefit of all Americans, with a minimum of government regulation” (47
U.S.C. § 230(a)(4)) and that it is the policy of the United States “to promote the continued
development of [these services] and other interactive media” (47 U.S.C. § 230(b)(1)) and “to
preserve the vibrant and competitive free market that presently exists for [these services],
unfettered by Federal or State regulation” (47 U.S.C. § 230(b)(2)).
From a social perspective, 230 was intended to nurture a digital public sphere and
increase the availability of online information. The statute notes that “The rapidly developing
array of Internet and other interactive computer services available to individual Americans
represent an extraordinary advance in the availability of educational and informational resources
to our citizens” (47 U.S.C. § 230(a)(1)) and that these services “offer a forum for a true diversity
of political discourse, unique opportunities for cultural development, and myriad avenues for
intellectual activity” (47 U.S.C. § 230(a)(3)). Accordingly, the United States should “encourage
the development of technologies which maximize user control over what information is received
by individuals, families, and schools who use these services” (47 U.S.C. § 230(b)(3)).
The breadth and impact of Section 230 is hard to overstate. It allows websites to solicit
and host content from third parties without fear of assuming legal liability (and the resulting
legal bills) for that content. This provides near-unlimited immunity for websites, social media platforms, and
other online services against claims of libel, defamation, distribution of sexually explicit
materials, and many other causes for legal action. Notably, this protection is unusually broad and
is provided without the strict preconditions common to other statutory legal safe harbors.
Consider the contrast between Section 230 and the Digital Millennium Copyright Act of 1998
(DMCA). The DMCA provides four safe harbors for online services against copyright liability—
for transitory digital network communications, system caching, information residing on systems
or networks at the direction of users, and information location tools—but in each case lists a
series of required elements needed to avail oneself of that safe harbor (17 U.S.C. § 512). Section
230’s broad immunity, on the other hand, allows companies, especially startups, to avoid costly
legal reviews or disastrous litigation costs, to focus limited resources on product development,
and to experiment with new business models. Post (2015) notes: “[I]t is impossible to imagine
what the Internet ecosystem would look like today without it. Virtually every successful online
venture that emerged after 1996—including all the usual suspects, viz. Google, Facebook,
Tumblr, Twitter, Reddit, Craigslist, YouTube, Instagram, eBay, Amazon—relies in large part (or
entirely) on content provided by their users, who number in the hundreds of millions, or billions.
… I fail to see how any of these companies, or the thousands more like them, would exist
without Section 230.”
If a deregulatory environment for telecommunications and privacy provided an engine for
online services, then Section 230 was the fuel. It directly unlocked the potential to create large,
dynamic online platforms that focused centrally on user interaction and participation;
dramatically dropped the entry costs for digital services; and removed the most significant
obstacle to hyperscale growth. These factors in turn facilitated the central feature of the modern
internet: the rapid, ongoing, and voluntary transfer of mass amounts of valuable behavioral and
social information from the online public into corporate hands.
Initially, it was not clear that 230 would have that level of impact. Sen. Ron Wyden, who
authored the provision with former Congressman Chris Cox, says: “We thought it was going to
be helpful. We never realized it was going to be the linchpin to generating investment in social
media” (as quoted in Lecher, 2018). The judiciary’s expansive reading of the statute helped. As
Ehrlich (2002) notes, “While Congress tried to strike a balance between promoting decency and
promoting free speech, recent court decisions interpreting Section 230 have emphatically tilted
the scales in favor of free speech by removing distributor liability from ISPs entirely.” In Zeran
v. America Online, Inc. (1997), for instance, the plaintiff sought to hold AOL liable for
defamatory speech by a third party, claiming that Ҥ 230 immunity eliminates only publisher
liability, leaving distributor liability intact.” The court soundly rejected that reasoning and
adopted a broad reading that “Section 230 ... plainly immunizes computer service providers like
AOL from liability for information that originates with third parties.”
The scope of this reading is extraordinary; it effectively immunizes a platform for any
content that it did not itself generate. The court interpreted Section 230 as a provision with an
explicit Constitutional underpinning, saying that “Congress recognized the threat that tort-based
lawsuits pose to freedom of speech in the new and burgeoning Internet medium. The imposition
of tort liability on service providers for the communications of others represented, for Congress,
simply another form of intrusive government regulation of speech.”
4.4.2 230: sword and shield. Even if the authors of Section 230 underestimated its
eventual impact, its importance soon became apparent. Empirical research has found that
“defendants won dismissal on section 230 or other grounds in more than three-quarters of the
cases studied” (Ardia, 2009). While much of the rest of the CDA was struck down as
unconstitutional (Reno v. American Civil Liberties Union, 1997), Section 230 stood untouched
by Congress for many years.
Much of the discussion around 230 focuses on the broad release from liability it provides.
However, 230 provides more expansive powers than that. The key issue Congress set out to
solve with the statute was not simply whether online platforms and services could escape legal
exposure from third-party content, but whether they could do so while still maintaining the
editorial discretion that publishers typically possess to edit, filter, or modify content. Thus, the
statute provides blanket liability protections for “any action voluntarily taken in good faith to
restrict access to or availability of material that the provider or user considers to be obscene,
lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not
such material is constitutionally protected” and “any action taken to enable or make available to
information content providers or others the technical means to restrict access to [such] material”
(47 U.S.C § 230(c)(2)). As Wyden notes, “I wanted to make sure that internet companies could
moderate their websites without getting clobbered by lawsuits” (Wyden, 2018).
These provisions give online platforms extreme flexibility in deciding upon,
implementing, and modifying rules related to what kinds of content they deem acceptable,
without any need to consult with affected parties. All that is needed is a “good faith”
determination by the online provider that the content is objectionable, an easy bar to meet. As
Public Knowledge notes, “One of the most important and basic things to understand about
Section 230 is that it authorizes platforms to exercise editorial discretion with respect to third-
party content without losing the benefit of the law, and that this includes promoting a political,
moral, or social viewpoint” (Bergmayer, 2019, emphasis in original).
This aspect of Section 230—whether companies are properly wielding the sword
afforded them as well as the shield—has attracted the most attention in contemporary discussions
of the technology industry. The growing anxiety over the negative consequences of digital
technology include intense discussions over the kind of content that is acceptable for online
platforms to host, and the impact of that content on the societal, moral, and political fabric of the
United States. These debates touch a broad range of issues: disinformation by foreign actors,
domestic misinformation, political speech, hate speech, political advertising, and more. And the
problematic behavior in question ranges from the merely offensive to the deadly; three
perpetrators of mass shootings posted their plans to the website 8chan in advance (Wong, 2019).
As Wyden (2018) notes:
“The fact is, CDA 230 was never about protecting incumbents. I’ve spent my career
taking on the powerful established interests. And when I wrote this policy, I never
envisioned a Facebook, but I did hope it would give the little guy’s startup a chance to
grow into something big. And bottom line, it worked.
But now, despite the fact that CDA Section 230 undergirds the framework of the
internet as we know it today, it is on the verge of collapse. That is largely because the big
internet companies have utterly failed to live up to the responsibility they were handed
two decades ago. Let me explain what I mean.
For these companies, Section 230 is a sword and shield. It offers protection from
liability, but it also gives companies the power—and the responsibility—to foster the sort
of internet Americans want to be part of.”
At the time, Wyden was speaking out against the controversial Allow States and Victims
to Fight Online Sex Trafficking Act of 2018 (usually called FOSTA-SESTA because it
combined elements of predecessor bills the Fight Online Sex Trafficking Act and the Stop
Enabling Sex Traffickers Act), which nevertheless passed Congress and become law on April 11,
2018. The law—the first amendment to Section 230 in more than 20 years—clarifies that the
statute does not “prohibit the enforcement against providers and users of interactive computer
services of Federal and State criminal and civil law relating to sexual exploitation of children or
sex trafficking” (47 U.S.C § 230(e)(5)).
FOSTA-SESTA reflected a growing frustration over the lax attitude of some websites,
most notably Backpage.com, towards online personals that contributed to sex trafficking. But in
a larger sense, it also reflects a looming cynicism about the societal outcomes of the great laissez
faire experiment with online communities: a growing realization that allowing such communities
to operate with a complete lack of restraint might contribute to societal ills in ways that far
exceed their benefits.
As Goldsmith and Wu (2006) note, the pressure for government to exert influence and
control over internet intermediaries to combat a variety of commercial and social ills is
enormous. This is particularly true in the United States, where the First Amendment restricts the
government from directly restraining public speech. As with the dynamic seen with law
enforcement and electronic information held by third parties, the ability to exert coercive
pressure on online intermediaries to delete or modify content offers the government leverage
over speech that would not be available via direct action. Indeed, the remarkable fact is not that
the United States would turn to such a solution but that it had resisted for so long. The fact that
just years before the enormously influential intellectual property industry could not get SOPA,
PIPA, or COICA passed despite an intense lobbying campaign speaks to the depth of
contemporary concerns and the broad shift in public attitudes about the role and power of online
intermediaries.
The reaction to FOSTA-SESTA has been mixed at best. Many civil society organizations
and firms had opposed passage over concerns that it would both be ineffective at its stated
purpose and have a broadly chilling impact on internet innovation. EFF stated that the law
represented an “unprecedented push towards Internet censorship, and does nothing to fight sex
traffickers” (Mullin, 2018). At least some of those negative consequences have come about as
predicted: “[i]n the immediate aftermath of SESTA’s passage on March 21, 2018, numerous
websites took action to censor or ban parts of their platforms in response—not because those
parts of the sites actually were promoting ads for prostitutes, but because policing them against
the outside possibility that they might was just too hard” (Romano, 2018, emphasis in original).
The lesson is straightforward: Section 230 is so embedded into the DNA of the modern internet
that neither lawmakers nor online platforms can fully grasp the downstream consequences of
changes, either from a social or economic perspective.
FOSTA-SESTA created a chink in the armor of Section 230 that is likely to lead to
further changes in the future. But it is dangerous to generalize from the debate around its
passage. At its core, FOSTA-SESTA addresses an issue upon which there is little substantive
debate or ideological separation. Although some liberal groups object to the current legal status
of sex work, the normative and legal stance condemning sexual exploitation of adults and
children is among the most settled issues in U.S. and global public policy. The controversy over
FOSTA-SESTA was therefore not primarily over the desired outcomes but over its mechanical
workings. Even Wyden’s (2018) opposition to FOSTA-SESTA was not driven by a deep
philosophical opposition to the intent of the bill, just a concern that the particular implementation
“will prove to be ineffective, … have harmful unintended consequences, and … could be ruled
unconstitutional.”
Anxieties over other types of problematic online content are a different beast. For
instance, since 2018, concerns first raised in the context of the 2016 Presidential election about
foreign misinformation campaigns and electoral interference have accelerated. Congressional
investigators have confirmed that “[i]n 2016, Russian operatives associated with the St.
Petersburg-based Internet Research Agency (IRA) used social media to conduct an information
warfare campaign designed to spread disinformation and societal division in the United States,”
and that “[m]asquerading as Americans, these operatives used targeted advertisements,
intentionally falsified news articles, self-generated content, and social media platform tools to
interact with and attempt to deceive tens of millions of social media users in the United States”
(Select Committee on Intelligence, 2019, p. 3). But there are deep political and ideological
divides over how big a problem this represents or what the right solution is.
Conservative skepticism over the need for action on election security is not hard to find.
At a Senate hearing on “Social Media Influence in the 2016 U.S. Elections Exhibits” featuring
representatives from Twitter, Facebook, and Google, for example, Republican Intelligence
Chairman Richard Burr urged caution about the scope of foreign interference. “A lot of folks,
including many in the media, have tried to reduce this entire conversation to one premise; foreign
actors conducted a surgical, executed covert operation to help elect a United States president. I'm
here to tell you this story does not simplify that easily. It is shortsighted and dangerous to
selectively focus on one piece of information and think that that somehow tells the whole story”
(Social Media Influence in the 2016 U.S. Elections, 2017). While it is unlikely that the Russian
influence operation changed the outcome of the election, it did undermine faith in core
U.S. institutions, which was probably its true aim. Whatever the case, the uneasiness with
potential governmental intervention is at least partially ideological: Republicans have repeatedly
blocked passage of election security reform legislation (Balluck, 2019) and object to expansion
of vote-by-mail and other accessibility initiatives using dubious arguments about voter fraud.
The technology sector’s incoherent approaches to political advertising as the 2020
election season begins highlight the complexities online platforms face in addressing the fraught
issue of political speech. Facebook has doubled down on its policy of not only allowing political
content on its platform, but also exempting such speech from its fact-checking initiatives (Kang,
2019). Facebook’s reasoning is that all political speech, in its original form, is newsworthy; that
political speech is already highly scrutinized; and that blocking or preventing such speech would
leave voters less informed and less able to hold politicians accountable. On the other hand,
Twitter and Spotify have banned political ads entirely (Lerman & Ortutay, 2019; Durkee, 2019).
Such bans, however, are difficult to implement, raising tricky questions such as what constitutes
a political advertisement, whether issue ads are allowed, how platforms will monitor for such
content, and what steps they will take to punish violators. These firms will inevitably draw ire
about inconsistent implementation and the stifling of speech perceived as legitimate. As in the
days before CDA 230, online platforms face a Catch-22: There is no solution that can,
even in theory, satisfy all parties.
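A deliberately naive sketch illustrates why such line-drawing resists clean implementation; the keyword list and example advertisements below are hypothetical.

# A deliberately naive filter; the keyword list and example ads are hypothetical.

POLITICAL_KEYWORDS = {"election", "candidate", "vote", "senator"}

def is_political_ad(ad_text: str) -> bool:
    # Flag an ad if any word matches the keyword list.
    return bool(set(ad_text.lower().split()) & POLITICAL_KEYWORDS)

# An issue ad about local parks is flagged, while a coded partisan appeal
# passes untouched; both outcomes invite charges of inconsistency.
print(is_political_ad("Vote for cleaner parks this fall"))   # True
print(is_political_ad("Support families, support freedom"))  # False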
Other primarily domestic issues raise even trickier situations. Any attempt to address
content—including misinformation, hate speech, and in some cases even harassment and
bullying—around a host of issues loosely related to conservative values inevitably gets ensnared
in accusations of anti-conservative bias, complicating the ability of online platforms to develop
clear, consistent, and forceful content moderation policies. For instance, Congressional
Republicans grilled Google’s CEO, Sundar Pichai, at a hearing about “the muting of
conservative voices by internet platforms” (Transparency & Accountability: Examining Google
and its Data Collection, Use, and Filtering Practices, 2018). The merits of these arguments are
close to zero. As boyd (2018) observes, “the core of this narrative is a stunt, architected by media
manipulators, designed to trigger outrage among conservatives and pressure news and social
media to react.” But it has been effective: “Over the last two years, both social media and news
media organizations have desperately tried to prove that they aren’t biased. … [Claims of] anti-conservative
bias are not evaluated through evidence because reality doesn’t matter to them. This is what
makes this stunt so effective. News organizations and tech companies have no way to ‘prove’
their innocence.”
Equally pernicious is the incoherent logic behind such a position. On the one hand,
jurisprudence has long protected the First Amendment free speech rights of corporations,
including the ability to choose what appears in search engine results and in what order (Volokh
and Falk, 2011, p. 6-20). Section 230 was designed expressly to preserve this ability to make
editorial decisions without risk of legal culpability, and conservatives have historically supported
aggressive readings of corporate First Amendment rights (see, e.g. Politico Staff, 2010, citing
Citizens United v. Federal Election Commission, 2010). On the other hand, lawmakers are
appealing to the same Constitutional language to argue that Google, due to its size and
dominance in the search engine market, is bound not to “censor” speech by manipulating how
and where content appears in search engine results. A recent lawsuit by Prager University, a
conservative nonprofit educational and media organization, represents this logic. Prager sued
YouTube over concerns that its moderation policies prevented the fair dissemination of
conservative viewpoints. The appeals court, however, upheld a dismissal of these claims,
reaffirming that “Despite YouTube’s ubiquity and its role as a public-facing platform, it remains
a private forum, not a public forum subject to judicial scrutiny under the First Amendment”
(Prager University v Google LLC, 2020). There is, in other words, no way to square the circle.
There may indeed be good public policy reasons to force online platforms to change their
policies around content moderation, but there appears to be no path there under current law, and
lawmakers would need to resolve wholly divergent worldviews before they can craft new
policies around online speech.
4.4.3 Conclusion. Where does this leave Section 230? There are several reasons to be
bullish about the likelihood of further changes. First, while there is no unifying vision for how to
move forward with changing the statute, there is a strong bipartisan consensus that additional
action is needed—whether by directly modifying the statute or by exerting other means of
pressure to unsheathe 230’s “sword.” As Burr told Twitter, Facebook, and Google executives,
“Your three companies have developed platforms that have tremendous reach and, therefore,
tremendous influence. That reach and influence is enabled by the enormous amount of data you
collect on your users and their activities. … Your actions need to catch up to your
responsibilities” (Social Media Influence in the 2016 U.S. Elections, 2017). And Wyden (2018)
says that tech companies “have become bloated, uninterested in the larger good, and feckless …
we are in a fight for the Internet every day. Our Internet companies are not engaged in this fight,
their only interest is currying favor with the nations where they wish to do business.”
Second, these concerns have permeated many areas of government. Lawmakers continue
to debate the role of 230 now that the industry has come to be dominated by a handful of
extremely large corporations (e.g. Fostering a Healthier Internet to Protect Consumers, 2019).
They also made a last-minute (but ultimately unsuccessful) effort to remove 230-like language
from the U.S.-Mexico-Canada Agreement, the successor to the North American Free Trade
Agreement, to preserve their prerogative to amend the statute in the future (Rodrigo, 2019). And
the changing technological context and the concerns raised by misinformation, hate speech,
harassment, and other problematic online content are also affecting the judiciary. In recent years,
courts have shown “a growing reluctance ... to apply Section 230 in [a] broad manner” and “a
general hesitance to dismiss cases, and to instead allow them to proceed through discovery,
summary judgment, and trial, on the off-chance that the intermediary may have contributed to
the third-party content” (Kosseff, 2017).
Finally, the renewed interest in 230 and other legislative solutions around content issues
has prompted companies to preemptively change their internal policies, even knowing that there
is no solution that will perfectly address the conflicting and inconsistent desires of lawmakers
and the public. Technology companies have hired tens of thousands of new content moderation
and safety staff to increase their capacity to review, monitor, and eliminate problematic content
(Glaser, 2018). Facebook launched a third-party fact-checking program in December 2016, in
which users can flag potentially problematic articles and outside organizations will examine and
report upon their veracity (Mosseri, 2016). (Those efforts have received mixed reviews [Ingram,
2019]; partners have called out the limited scale and transparency of the program in particular
[Full Fact, 2019].) And Google, joining Twitter and others, will limit the types of targeting
available for political advertising (Spencer, 2019).
Section 230, as a unique construction of U.S. law, is the single statute most directly
connected to the success of American companies in the technology industry. It provides
enormous freedom to online platforms and websites to host third-party content without fear of
legal liability and is embedded into the genetic material of the internet in profound ways.
However, just as deeply entrenched are a host of problematic behaviors—ranging from electoral
interference to misinformation—that are accelerating in scope, scale, and impact. As a result,
Section 230 is more vulnerable than it has ever been before. Any changes to the statute would
undermine the ability to host and collect user-generated content that forms the heart of many
online firms’ business models and that underpins their research into new AI technologies.
4.5 Conclusion: Implications for the Technology Industry
In addition to a host of technical innovations, three major policy initiatives—deregulation
of the information sector, a lack of baseline consumer privacy protections, and broad immunity
from liability for third-party content—have shaped the modern internet and helped American
firms gain dominance in the provision of online services. The philosophy underlying these
decisions was twofold. First, lawmakers envisioned the internet as a competitive Platonic ideal,
with low marginal costs and minimal barriers to entry. Accordingly, they developed a set of
policies that focused on jumpstarting innovation online while protecting the nascent industry
from dominant legacy firms in adjacent markets. Second, they envisioned the internet as a new
public sphere that would facilitate unprecedented advances in human freedom and expression. In
this view, digital technologies facilitate new and improved forms of educational opportunity,
political discussion, and economic mobility.
It is hard to argue with the results. Leveraging broad legal protections and freedom from
restrictions on collection and handling of consumer information, U.S. firms experimented
relentlessly, coalescing around business models built upon the manipulation and monetization of
user data. Network effects helped these companies grow at hyperscale speeds and dominate
globally in online search, e-commerce, social networking, online video, microblogging, and
numerous other verticals.
But that era of the internet has ended. In its stead is a growing popular and political
anxiety about the technology industry’s negative impacts on freedom, economic opportunity,
political expression, and human dignity. In response, lawmakers are seriously considering
profound changes to the policy initiatives they once considered vital to a robust and competitive
U.S. internet ecosystem.
Tracing the history and impact of these policies helps clarify the potential ramifications
of new legislative undertakings. We can draw four major conclusions from such an analysis.
First, the economic and social rationales that guided lawmakers’ actions in the early days
of the internet are no longer tenable. The early commercial internet may have offered a blank
slate for competition, but the current technology ecosystem is gated by bottlenecks at every turn.
The dominant technology firms control huge portions of the online infrastructure, including key
systems and services used by direct competitors. These companies have accumulated massive
stores of user data, allowing them to expand into adjacent sectors and create new products that
further their data advantage while retaining control over the resources rival firms need to develop
viable products of similar quality. And these firms are highly acquisitive, regularly engaging in
purchases of new entrants. They are, in other words, just as dominant now as Microsoft was in
2000, when the DOJ proposed breaking up the company.
There is no doubt that online services have provided enormous benefits. Internet users
can now freely access vast stores of knowledge. New services and applications have compressed
time and space, facilitating nearly costless global information and financial flows and increasing
efficiencies in many aspects of modern life. At the same time, however, online communities are
beset by a host of deep and abiding social ills, including electoral interference, hate speech,
political echo chambers, misinformation campaigns, cyberbullying, and poor cybersecurity, that
are deepening political divides and may be exacerbating economic inequality (World Bank,
2016). For many experts and lawmakers, these impacts now outweigh the social benefits and
mandate political intervention.
Second, the types of concerns that most animate lawmakers strike directly at the ability of
technology companies to collect, accumulate, and harness data. Informational advantages, and
their potential anticompetitive effects, are at the heart of several ongoing antitrust investigations
of the technology sector. New privacy legislation would likely place serious restrictions on the
ability of online firms to collect user data. The GDPR, for example, embraces as a central
principle data minimization, such that any personal data collected are “adequate, relevant and
limited to what is necessary in relation to the purposes for which they are processed” (European
Union, 2016, p. 119/35). And Section 230 of the CDA is central to the ability of online firms to
collect and store third-party content at scale. Any alterations of the statute would almost certainly
reduce or restrict the total amount of information available to technology firms.
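A brief sketch shows what the data-minimization principle quoted above implies at the level of data collection; the purposes, field lists, and function here are hypothetical examples rather than any prescribed or regulator-endorsed design.

# Illustrative sketch of data minimization; the purposes, field lists, and
# function are hypothetical examples, not a regulator-endorsed design.

NECESSARY_FIELDS = {
    "order_fulfillment": {"name", "shipping_address", "email"},
    "age_verification": {"birth_date"},
}

def minimized_record(purpose: str, submitted: dict) -> dict:
    # Keep only fields "adequate, relevant and limited to what is necessary"
    # for the declared processing purpose; everything else is never stored.
    allowed = NECESSARY_FIELDS.get(purpose, set())
    return {k: v for k, v in submitted.items() if k in allowed}

raw = {"name": "A. User", "shipping_address": "…", "email": "a@example.com",
       "browsing_history": ["…"], "birth_date": "2001-01-01"}
print(minimized_record("order_fulfillment", raw))
# {'name': 'A. User', 'shipping_address': '…', 'email': 'a@example.com'}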
Third, the broad interest in action against the technology sector suggests that the industry
landscape will shift in significant ways in coming years. Given increasing political gridlock and
lingering ideological divisions, it is reasonable to be skeptical that U.S. Federal lawmakers will
be able to take swift action in any given policy area related to the technology industry. But where
Federal action has stalled, state action continues to move forward. In fact, Federal inaction
strengthens the hand of aggressive state legislators and enforcers to shape the national
conversation around technology. The passage and enactment of the CCPA—which was not
preempted at the Federal level even with 18 months of advance notice—is perhaps the most
prominent example, setting new national precedents for handling of domestic data just as GDPR
did for data held by multinational firms. However, states are also taking robust action on
competition issues, including antitrust investigations of Facebook and Google. Where the action
is, the money is following: Trade groups, lobbying firms, and advocacy organizations—many of
which had previously been fixated on Washington—have shifted hiring practices to devote an
increasing amount of their resources to tracking and influencing state-level legislation.
Finally, historical precedent suggests that even the implied threat of action can lead to
significant changes in corporate behavior. Firms are already making notable adjustments to their
internal policies and practices in response to changing public sentiment and legislative saber-
rattling. Their embrace of new privacy legislation suggests that these companies have
transitioned from a purely oppositional stance to an approach focused on mitigating the deepest
impacts to their core lines of business.
In sum, the contemporary policy environment presents significant headwinds to further
growth by the dominant U.S. companies. As they linger under the public and political
microscope, these firms are forced to devote substantial resources to interfacing with the
government and addressing concerns raised by lawmakers. Meanwhile, recent settlements and
ongoing investigations of problematic corporate behavior increase the due diligence needed to
move forward with new acquisitions and product launches. Addressing domestic concerns over
the impact of technology, in other words, drains the United States of the raw industrial inputs
needed to drive forward and dominate the development of AI technologies.
CHAPTER 5: INSTITUTIONAL CONSTRAINTS ON AI DEVELOPMENT: A
COMPARATIVE ANALYSIS OF THE UNITED STATES AND CHINA
5.1 Introduction
Thus far, this paper has argued that the U.S. AI strategy relies on the private sector,
particularly dominant technology platforms; that those companies, due to historical advantages
and the natural tendency of data-driven industries towards concentration, are currently the
leaders in global AI development; and that current trends in U.S. policy undermine the capacity
of those firms to continue to invest aggressively in AI. However, this dissertation also claims that
the backlash against technology companies that is driving those policy changes crosses broadly
over national boundaries and political ideologies. To fully understand international competition
over new technologies, then, what matters is whether these trends have a differential impact
among countries and whether that potential disparity is large enough to allow other nations to
overcome existing U.S. advantages.
This chapter explores these questions in more detail through a comparative analysis of
institutional factors that affect U.S. and Chinese capacity to manage technological development.
Leaning on the economic model of the AI industry developed earlier, this chapter is concerned
with two broad categories of potential difference. First, countries may possess varying
capabilities to manipulate the inputs of AI, such as the availability of data or cost of computing
resources. This leads directly to disparate trajectories in industrial development. Second, nations
have divergent methods for absorbing the outputs of AI. By influencing both political and
popular sentiment, these factors set indirect economic and regulatory limitations.
The analysis below centers on four sets of issues: industrial policy, data policy,
absorptive capacity, and asymmetrical vulnerability. The discussion so far has touched, at least to
some extent, on all these areas, but this chapter synthesizes those disparate threads into an
overarching theory of competitive advantage for AI. In sum: In each of these areas, the United
States faces certain deeply rooted, fundamental constraints that, while providing benefits in some
arenas of social and economic policy, harm its efforts to control the development of new
technologies such as AI relative to China. To be clear, the combination of these factors will not
necessarily lead to China or other countries surpassing the United States in AI or other advanced
technologies, as other authors have argued (e.g. Lee, 2018), but it does exacerbate the potential
impacts that domestic policy initiatives might have on the U.S.’s international competitiveness.
5.2 Industrial Policy
5.2.1 Overt and covert policy. Since the 1970s, the U.S. approach to industrial issues
has been driven by a neoliberal faith in free markets. Many of the policy decisions underlying
contemporary U.S. dominance in the technology industry have been informed by this
philosophy, including the antitrust focus on consumer welfare, telecommunications deregulation,
the laissez faire approach to consumer privacy, and a reliance on platform competition to curb
natural monopoly. However, the neoliberal approach to economic organization has another
consequence: It undermines the ability of a country to engage in coherent industrial policy.
What is industrial policy? Although exact definitions vary, industrial policy broadly
refers to governmental engineering of a country’s relative sectoral development. For discussions
of emerging technologies that might reshape existing industrial structures and create entirely new
markets, an expansive definition is most useful, so this paper relies on Bingham’s (1998)
conception: “a nation’s official total effort to influence sectoral development and thus, the
national industrial portfolio.” Regardless of the exact term used, however, the underlying
framework is the same. Industrial policy is as paternalistic as neoliberalism is individualistic; it
requires an activist government manipulating and shaping market outcomes.
Every country engages in industrial policy. Partially, this is because countries have
parochial national interests even though modern markets are international in character. This
means that countries often employ industrial engineering to foster sectoral development that
might provide comparative international advantage. Responding when such efforts interfere with
established firms in other countries is perhaps the form of industrial policy most amenable to
neoliberal principles; under the logic of “leveling the playing field,” protectionist tariffs beget
retaliatory fines, while direct government subsidies can engender anti-dumping penalties.[46]

[46] Dumping refers to the export of a product priced below fair market value. Dumping often occurs when a good in a foreign market is priced below the cost of that product in its home market.
However, the impetus for industrial policy extends far beyond that. As noted earlier,
neoliberalism excels at allocative efficiency, but that is not the only, or even the most important,
goal of politicians and stakeholders. Industrial policy facilitates pursuit of other objectives, even
those that might ultimately result in higher domestic prices or limit the international
competitiveness of certain sectors. These goals might include developing a robust defense
industrial base to ensure that a country can manufacture important military items in times of
diplomatic isolation, securing essential supply chains, setting expectations for or otherwise
controlling private behavior, subsidizing infant sectors before they achieve the scale economies
needed to compete internationally, fostering equitable growth, or ensuring the simultaneous
development of industries that produce co-dependent goods.
The United States has relied heavily on industrial policy from the earliest days of the
Republic. Alexander Hamilton (1791) laid out a vision for transitioning the U.S.’s agrarian
economy to one focused on manufacturing, to “render the United States, independent on foreign
nations, for military and other essential supplies,” attract capital investment, and counter the
subsidies foreign countries provided to their companies. Accordingly, the United States evolved
from an agricultural supplier to an industrial powerhouse largely on the back of protectionist
tariffs and other “pragmatic,” Hamiltonian economic policies (Cohen & DeLong, 2016). The
history of intellectual property protection in the United States reflects a similar reliance on
pragmatic policy tools. “[U]ntil approximately the middle of the nineteenth century, more
Americans had an interest in ‘pirating’ copyrighted or patented materials produced by foreigners
than had an interest in protecting copyrights or patents against ‘piracy’ by foreigners” (Fisher,
1999), which essentially acted as a net wealth transfer from foreign IP holders to the United
States. Only after Hollywood and other creative industries had driven “the transformation of the
United States from a net consumer of intellectual property to a net producer” could it be said
that, “[i]n the late twentieth century, … the United States has become the world's most vigorous
and effective champion of strengthened intellectual-property rights” (Fisher, 1999).
The New Deal ushered in new types of industrial policy. Some of the more audacious
plans of that era did not persist, such as the National Industrial Recovery Act of 1933—
designed to “encourage national industrial recovery, to foster fair competition, and to provide for
the construction of certain useful public works”—which was declared partially unconstitutional
two years later. However, developments such as crop insurance, which protects farmers against
both loss of revenue due to natural disasters and from price declines, and the Export-Import
bank, which provides loan assistance to foreign purchasers of U.S. goods, have survived to this
day (Tucker, 2019).
The primary issue under neoliberalism is not an absence of industrial policy in the
United States. Rather, it is that, “because the use of industrial policy measures has been viewed
with suspicion throughout most of the twentieth century” (Herman, 2019), many overt policy
mechanisms are now considered illegitimate. For example, the United States continues to employ
retaliatory tariffs and other tools that correct perceived market distortions caused by foreign
actions. Trade adjustment assistance—compensation and reskilling provided to those
disproportionately impacted by free trade pacts—continues to be a vital device to secure support
for deals that expand export markets. Certain indirect assistance is acceptable, such as loan
guarantees for the mortgage market and small businesses. So are oblique tools, including
complex tax breaks for financiers such as the carried interest loophole or the removal of
regulatory requirements that certain industries, including the automotive, power, and
manufacturing sectors, fully capture the externalities of their business operations. However,
market fundamentalism renders direct forms of assistance problematic, including funding private
business enterprises, influencing firm-level technological choices, or taking equity stakes in
companies. Indeed, many lawmakers continue to call for privatization of the remaining quasi-
governmental organizations in the United States, from Amtrak to the Post Office.
The result is a patchwork of ad hoc policies without any underlying guiding theory. As
Tucker (2019) notes, “the US is no different from any other country in its widespread use of
horizontal levers like industrial policy. What makes it different is the absence of significant or
sustained industrial planning—which has been frustrated by particularities of the US’s political
and economic structure” (p. 17, emphasis in original). So while certain sectors of the U.S.
economy receive significant government support—such as the agricultural and financial
industries—distribution of that aid is distorted by hostility to overt forms of assistance, a lack of
a coordinated planning mechanism, and little discussion of economic trade-offs. Bingham
(1998), summarizing the thoughts of economic and political leaders on the topic, says: “It would
thus not be an understatement to say that America’s industrial policy is not perceived in positive
terms by those who think about it.”
The American AI Initiative (Executive Order 13859, 2019) highlights the difficulties of
this hodgepodge approach. First, because of the decentralized nature of U.S. governance, this
plan represents the vision only of the executive branch; meanwhile, the legislature, which
controls government funding and key statutory levers, is working on various legislative
proposals that reflect an alternative vision for AI. Second, even if that was not the case, the
strategy’s six key elements reflect a rather anemic conception of government’s role. Direct
spending is limited to R&D investment, which is controlled by the legislature; in 2019 this
amounted to less than $700 million in non-defense spending (The Networking & Information
Technology Research & Development Program, p. 10) and likely no more than $2 billion when
defense agencies are included. Only one other section, training a new generation of AI researchers
and users, directly addresses industrial input and output issues, but with an unclear monetary
component. The rest—reducing barriers to AI innovation, enhancing access to U.S. Federal
government data and resources, ensuring that Federal standards minimize vulnerabilities to
attacks and facilitate innovation and public trust in AI, and promoting a supportive international
environment—are standard neoliberal tools focused on leveraging the coordinating and
convening power of the Federal government to remove barriers to private sector investment.
China, by contrast, relies heavily on centralized industrial planning and policy. Unlike the
United States, it is guided by a unified set of economic plans that articulates priorities between
and among sectors. In many cases, Chinese policy tools—such as direct control over monetary
policy and engineered stability in exchange rates—are deployed explicitly to support the nation’s
AI industries in ways that would be institutionally inconceivable in the United States. A
summary of key tools relating to AI is below.
5.2.2 Direct government investment. The prevailing U.S. philosophy is that, for the
most part, potentially profitable enterprises should be privatized. The sovereign should not
control money-making businesses because markets are more efficient at organizing and
operating such firms and the government’s ability to engage in coercive taxation allows it to
subsidize such operations in unfair ways. The major exception is for loans and loan guarantees
that ensure equitable access to services, such as with educational loan markets, or in areas that
would otherwise be undercapitalized due to excessive risk, such as small business lending and
certain types of mortgages.
China, on the other hand, engages in several types of direct investment. First, like many
countries—but not the United States—it operates a sovereign wealth fund, the China Investment
Corporation, which has nearly $1 trillion in gross assets. Such funds, however, generally
aim to provide reliable returns for government operating surpluses, and as such, operate with an
arm’s-length relationship with portfolio companies. Other types of investment are more directly
designed to guide industrial development. There are also an estimated 1,686 guidance funds in
China, operated by a combination of the national and local governments, with a total estimated
value of nearly $700 billion (Ziyi & Xiaoli, 2020). These funds use state money to exert pressure
on, or “guide,” VC investment to high priority national targets that might otherwise fail to attract
capital due to high risks or low returns. Analysts estimate that such guidance funds invest several
billion dollars annually into AI companies (Acharya & Arnold, 2019). The tools farthest afield
from U.S. practice, however, are state-owned enterprises. Especially under current leadership,
China aggressively uses majority ownership in private enterprises to shape industrial policy and
manipulate concentration and competition. For example, China exerts tight control over its
wireless market, with majority ownership of all three of the largest players—China Unicom,
China Telecom, and China Mobile—and actively intervenes with company management to
balance competitive factors and guide industry development.
Critics of state investment argue that governments are susceptible to corruption, have a
poor track record compared to private investors, and distort markets in ways that raise prices and
lower quality. Indeed, there are numerous reports that Chinese government investments are
plagued by bureaucratic infighting, nepotism, fraudulent activity, and administrative
inefficiency. However, the strong private-sector protests that arise whenever such practices are
considered in the United States—for example, 26 states now ban municipal broadband networks
after pressure from telecom firms (Bode, 2019)—suggest that not all effects are negative.
Because state-sponsored enterprises can shed debt easily and face less intense pressure for short-
term profitability than VC-backed or public firms, they can invest more aggressively in risky
products and pursue longer-term growth strategies. Both features apply to AI, where unknowns
remain about the size of commercial markets and the growth trajectory for product quality.
5.2.3 Technology and IP transfer. China also engages in a robust set of practices
designed to pressure foreign companies into divulging their IP to Chinese firms. Several of these
are highly relevant to AI competition.
First, under its “Going Out” strategy, China leverages its state funding to invest directly
in foreign technology companies, including U.S. AI firms (USTR, 2018, p. 64). Although such
expenditures include economic aims such as market expansion, in many cases investments are
made with the explicit objective “to obtain cutting-edge technologies and intellectual property
(IP) and generate large-scale technology transfer in industries deemed important by state
industrial plans” (USTR, 2018, p. 65). Current priority areas for China include, among others,
information technology, robotics, and integrated circuits, all key components of AI (USTR,
2018, p. 150). IP acquisition is one of the primary benefits of foreign direct investment, with
studies showing that in many cases the technology transfer associated with FDI can exceed
the returns from domestic investment (Borensztein, De Gregorio, & Lee, 1998).
The U.S. approach to problematic FDI issues is generally reactive; there is a presumption
that inward investment has a net beneficial effect on the U.S. economy, so the government
assumes much of the burden in assessing whether such actions might be problematic. The
primary mechanism for doing so is the Committee on Foreign Investment in the United States, or
CFIUS, “an interagency committee that serves the President in overseeing the national security
implications of foreign direct investment (FDI) in the economy” (Jackson, 2020, p. 1). Recent
revisions to CFIUS under the Foreign Investment Risk Review Modernization Act of 2018
(Subtitle A of Title XVII of P.L. 115–232) now allow it to examine and potentially block, among
other transactions, non-controlling and controlling investments in U.S. businesses involved in
critical technology or critical infrastructure or collecting sensitive data on U.S. citizens as well as
any transactions in which a foreign government has a direct or indirect substantial interest
(Jackson, 2020, p. 2). Critical technologies encompass “emerging and foundational
technologies,” making CFIUS a frontline fighter in the government’s attempt to retain
supremacy in AI, quantum computing, and advanced telecommunications.
CFIUS activity has increased in recent years. In 2017, the latest year for which data are
available, CFIUS conducted 172 investigations, which was nearly twice as many as in 2016 and
represents more than 30% of all investigations carried out since 2009 (CFIUS, 2019, p. 4). Much
of this increase reflects growing tensions with China; at 143 investments, the country represented
more than a quarter of all covered transactions between 2015 and 2017 (CFIUS, 2019, p. 18).
CFIUS therefore represents a rare moment of cooperation between the U.S. legislative and
executive branches to leverage tools of industrial policy to protect the U.S. AI industry and other
sensitive sectors.
However, CFIUS is a blunt mechanism, presenting regulators with a binary choice.
Contrast that with the Chinese FDI regime, which includes a more nuanced set of levers against
companies considering entering the large and lucrative Chinese market. Although China’s entry
into the World Trade Organization ended its explicit practice of requiring technology transfer as
a condition of market access, its FDI practices remain opaque, with forced technology transfer
continuing in an implicit manner through verbal guidance and the use of administrative barriers
as a coercive tool (USTR, 2018, p. 19-20). Such forced technology transfers have increased
significantly in the past two years (Wernau, 2019). Moreover, under Chinese law, foreign
companies are unable to operate in many sectors, including the basic and value-added
telecommunications sector, without partnering with a Chinese firm in a joint venture or
surrendering significant equity stakes (USTR, 2018, p. 19, 26). In some cases, the Chinese
partner will then execute a “squeeze” on its partner, leveraging unfavorable local laws and
regulations to muscle out the foreign firm while retaining its intellectual property.
China leverages its internal IP regime to similar effect. For example, evidence suggests
that applications for patents essential to a technological standard are treated unfavorably only if
they originate from a foreign firm (de Rassenfosse, Raiteri, & Bekkers, 2018). China also
imposes “mandatory adverse licensing terms” only on foreign firms seeking market-based terms
for IP sharing (USTR, 2018, p. 49). This includes requirements that the foreign firm assume all
indemnity risks and mandatory Chinese ownership of improvements,[47] “a different set of rules
for imported technology transfers originating from outside China, such as from U.S. entities
attempting to do business in China, compared [to those] occurring between two domestic
companies” (USTR, 2018, p. 50). Theoretically, this would allow a Chinese firm to
“legitimately” claim IP protection over a severable improvement, such as a novel AI algorithm
inspired by licensed technology.
The relatively weak IP regime within China may also have an unintended benefit for
local industry. As Lee (2018) argues, U.S. entrepreneurial culture has a strong distaste for
replication, so firms generally steer clear of directly copying competing products. Because
Chinese firms cannot rely on such protections, they tend to focus on physical infrastructure and
vertically integrated organizational forms, which increases experimentation and competition.
China’s 2018 Internet Security Law also acts as a tool of industrial policy. Article 37 of
the law requires a broad range of companies—including essentially any firms that operate their
own email or data networks—to store data in-country and prohibits export of data that would
pose a threat to national security or the public interest (Wagner, 2017). This serves the
simultaneous purposes of enabling government surveillance of companies, facilitating censorship
and information control, and fostering in-country economic activity as foreign firms are forced to
purchase localization services from Chinese vendors. The law also requires firms to share data
with Chinese investigators and includes certification and “spot check” provisions that might
require firms to reveal source code or other core business information (Wagner, 2017).
[Footnote 47] More precisely, the requirement is that the party that makes an improvement retains the rights to that improvement, and the licensor cannot prevent a licensee from making such improvements (USTR, 2018, p. 49-50).
Finally, other common Chinese practices—notably, their cyberespionage activities—fail
to have even the veneer of legitimacy. As the National Counterintelligence and Security Center
(2018) notes, “China has expansive efforts in place to acquire U.S. technology to include
sensitive trade secrets and proprietary information. It continues to use cyber espionage to support
its strategic development goals—science and technology advancement, military modernization,
and economic policy objectives” (p. 5). Although the overall volume of tracked cyberactivity has
dropped since the bilateral September 2015 U.S.-China cyber commitments, China continues to
conduct cyber operations against U.S. firms, with a focus on defense contractors and IT and
communications firms (National Counterintelligence and Security Center, 2018, p. 7) that might
have access to sensitive AI technology. As a result, the United States has ramped up its
investigatory operations, bringing at least nine cases of economic espionage and attempted
economic theft in 2018 alone (BBC, 2018).
In comparison to the Chinese approach, CFIUS’s binary method of assessing FDI—a
covered transaction is either allowed or it is not—lacks nuance. As with antitrust, the default position
is that FDI is acceptable, and that the government must assume the burden of proof to prevent a
transaction from moving forward. In China, however, market access is a privilege, not a right,
and the country is unabashed in extracting value upfront from entering firms. This suggests that
Chinese acquisition of external AI IP might remedy weaknesses in its homegrown AI industry.
5.2.4 Case study: 5G competition. The global race to dominate 5G, with its parallels to
AI competition, offers useful insights into the current boundaries of U.S. industrial policy. For
instance, the U.S. strategy to accelerate domestic rollout of 5G—the FCC’s “Facilitate America’s
Superiority in 5G Technology” Plan, or 5G FAST Plan—mirrors the U.S. approach to 4G. This
approach, however, reflects the continued constraining effects of neoliberalism on industrial
policy that renders the United States vulnerable to foreign competitors that can leverage stronger
industrial planning tools.
In the U.S. policy community, it is taken as a matter of axiomatic faith that the country
“won” the race for 4G. A 2018 report sponsored by the wireless industry notes that “American
companies shifted from lagging in earlier wireless generations to seize global 4G leadership” due
to “industry innovation and investment as well as by smart wireless policymaking” (Recon
Analytics, 2018, p. 1). The United States was among the fastest initial adopters of 4G services
(Recon Analytics, 2018, p. 7), outpacing every other country except Japan. Analysts estimate
that this early growth expanded the wireless industry’s contribution to gross domestic product by
$100 billion compared to pre-4G estimates and secured “roughly $125 billion in revenue to
American companies that could have gone elsewhere if the US hadn’t seized 4G leadership,”
much of that from the online application ecosystem (Recon Analytics, 2018, p. 1).
Three major factors contributed to early U.S. 4G success, all centered on facilitating
private-sector investment. First, the United States moved aggressively to auction spectrum to
carriers on a flexible-use basis. This included the “digital dividend” created by the transition to
digital television broadcasting as well as mid-band Broadband PCS and Advanced Wireless
Services spectrum. The technology-neutral rules for such spectrum allowed carriers to innovate
with wireless services and adopt new technologies without policymaker review. Second, the FCC
established a “shot clock” that mandated rapid tower siting decisions by state and local
authorities, removing regulatory barriers to timely deployment (FCC, 2009). Third, with the
development of Android and iOS, the United States became dominant in mobile operating
systems and application ecosystems.
Copying much of the 4G playbook, the 5G FAST plan also has three major elements:
making more spectrum available for wireless services, updating infrastructure policy by
removing barriers to deployment of small cell transceivers, and modernizing “outdated”
regulations to promote the availability of mobile backhaul (high-quality broadband connections
that link cell towers together) (FCC, 2018). This approach has three noteworthy implications.
First, despite the supposed importance of 5G to U.S. economic and international goals, the plan
continues to envision an extremely limited role for government and a hostility to direct
intervention into the wireless markets. In some ways, the 5G FAST Plan can hardly be called a
plan at all. Its individual elements were already underway when it was announced; it primarily
serves as a wrapper for extant policy, not a bottom-up, coordinated national vision.
Relying on the private sector means the government has little power to shape important
5G outcomes. Except for New T-Mobile, which made binding 5G deployment commitments
during its merger negotiations, no U.S. telecommunications company is under a mandate to
deploy 5G on any particular timeframe or to any particular area, or even to make some minimal
level of monetary investment. While the FAST plan does mention backhaul, the general
approach to market facilitation is to remove regulatory barriers and hope that sufficiently
incentivizes free market investment. That approach may have enabled U.S. leadership in mobile
operating systems, but it has not addressed shortcomings in U.S. capabilities related to network
equipment or devices.
Contrast this with the Chinese approach. With all three major wireless companies under
partial government ownership, the Chinese industry has developed a coordinated 5G rollout plan
with specific milestones and targets. The Ministry of Industry and Information Technology has
set a goal of offering 5G in every Chinese prefecture-level city by the end of 2020, for instance
(Horwitz, 2019). China is also aggressively using policy tools that are not available to U.S.
regulators, such as imposing price controls to ensure that 5G plans are affordable and organizing
a single shared 5G network for China Unicom and China Telecom (Horwitz, 2019).
Second, it is unclear that the United States has adopted a winning strategy by repeating its
4G scheme. For instance, U.S. policy decisions may indeed have facilitated rapid buildout of 4G
networks; the FCC’s swift actions to put unrestricted spectrum on the market during the early
stages of 4G deployment have been widely praised and replicated by countries across the globe.
The deregulation of information services also created an environment that allowed the
development of local mobile operating systems. It is a much harder claim to make, however, that
the eventual global dominance of these OSes was a likely or necessary outcome of U.S. policy;
that appears to have been an accident arising from parochial business bets made by two U.S.
companies. If they had pursued alternative investments, it would have greatly blunted much of
the economic benefit the United States derived from early 4G deployments.
However, even this well-established aspect of the 5G FAST Plan is not proceeding
smoothly. No one can fault the FCC for a lack of ambition in its 5G spectrum vision. In 2016, the FCC’s
Spectrum Frontiers proceeding, which had broad support from wireless companies, laid out a
roadmap for making vast swathes of millimeter-wave spectrum available for fixed and mobile
broadband (FCC, 2016). Yet almost as soon as the first of that spectrum went into the auction
hopper, U.S. industry began to complain that it was actually mid-band spectrum—which is in
scant supply in the United States due to military assignments—that was vital to U.S. leadership
in 5G. The FCC’s infrastructure plan is also in doubt. In another instance of deep conflict
between domestic and international policy priorities, dozens of cities have sued the agency over
its decisions to implement more aggressive cell siting shot clocks and to set limits on local
application fees (Reid, 2020).
The United States faces other disadvantages compared to China’s more focused
approach. For instance, China has a substantial edge in 5G patents and standards proposals
(Strumpf, 2019). It has engaged in a coordinated effort to develop local industries in associated
sectors, such as semiconductors and network equipment. Through its Belt and Road Initiative—
one of the largest infrastructure investment projects in history—the country is generating
economies of scale and opening external markets for its wireless firms. Perhaps most
importantly, China is outspending the United States on wireless infrastructure by significant
margins: $8 billion to $10 billion per year since 2015 in normalized terms (and substantially
more in raw monetary figures) (Deloitte, 2018). By November 2019, China had deployed 86,000
5G base stations and was on track to put 130,000 in service by the end of that year (Si, 2019), far
exceeding the number of U.S. deployments. As one China scholar notes, “The U.S. government
has yet to commit to any funding or national initiatives in 5G that are close to comparable in
scope and scale to those of China, which is dedicating hundreds of billions to 5G development
and deployment” (Kania, 2019).
Third, the benefits of regaining the lead on certain 5G benchmarks are unclear. The
history of wireless networks suggests that first-mover advantages in mobile technology do not
persist, nor are the long-term benefits of such leadership obvious. Europe leveraged its quick
consolidation around a single standard into an early lead in 2G adoption, only to fall behind in
3G due to technology-specific rules for spectrum use. Japan used innovations in online “i-mode”
edge services to ramp up 3G adoption, but lost ground to U.S. application platforms during the
4G era (Recon Analytics, 2018). Even early-cycle dominance is temporary. By 2018, while the
United States remained close to the top of the global charts in 4G coverage, it lagged at least 60
countries in average 4G download speeds (OpenSignal, 2018).
Moreover, because the benefits of 5G are largely centered on new types of applications, it
is not apparent that dominance in network services will necessarily provide the same economic
benefits as 4G. For example, the United States has developed a substantial lead in autonomous
vehicle technology due to its existing automotive industrial base combined with investments by
Silicon Valley. It is unclear why the United States would lose its advantages in this sector if its
rollout of 5G networks lags that of other countries. For example, even if China cements its lead
in 5G deployment, “U.S. companies can dominate the applications and services that run over 5G,
just like they did with 4G. This means a greater focus on software development, especially the
code that links devices to towers” (Segal, 2019).
There is, however, one area where the United States is engaging in aggressive industrial
policy: its scorched-earth attacks against Chinese firm Huawei. As the United States jockeys for
5G leadership, Huawei’s dominance over the telecommunications equipment market—where it
holds a 28% share of the global market (Defense Innovation Board, 2019) and controls supply of
certain key 5G components—has raised concerns from U.S. officials about the potential security
and economic impacts of incorporating Chinese gear into core U.S. networks. This reveals a
significant amount of flexibility in industrial policy that manifests only when national security is
implicated.
Unlike the Chinese wireless companies or ZTE (the other major Chinese telecom
manufacturer), Huawei is not a state-sponsored enterprise. Rather, its 96,000 employees own the
company (Graff, 2020). Huawei is not independent of the Chinese government, however.
Analysis of Huawei’s annual reports and public records indicate that the firm has received as
much as $75 billion in tax breaks, financing, and resources from the Chinese government (Yap,
2019), including grants, subsidized land, and loan assistance to international customers to fund
purchases of Huawei products (McMorrow, 2019). Partly on the back of this support, Huawei has
developed deep expertise in manufacturing 4G and 5G radio access network (RAN) equipment, which coordinates the
communications between cell towers and individual devices.
It is impossible to cleanly separate the origins of the U.S. government’s animus towards
Huawei from the larger trade and diplomatic tensions with China that have been brewing for
years and that accelerated under the Trump Administration. Lawmakers had flagged potential
national security threats and data vulnerabilities posed by Huawei and ZTE as early as 2012
(Rogers & Ruppersberger, 2012). Anxieties intensified after news reports revealed China was
exfiltrating confidential data from the network of the Chinese-built African Union headquarters
in Ethiopia (Fidler, 2018), suggesting that the country was using its “Belt and Road” investment
initiative as a tool of hard power as well as soft power.
When the U.S. government did act, it was with unusual urgency and energy. Early in
2018, AT&T backed out of a deal to distribute Huawei smartphones due to government pressure
(Mozur, 2018a). In May 2018, the Pentagon banned sale of Huawei and ZTE phones in its retail
stores (Liao, 2018). The 2019 National Defense Authorization Act,[48] passed in August of that
year, prohibited the Federal government from purchasing Huawei or ZTE equipment. In May
2019, the Department of Commerce (2019) began adding persons and organizations connected to
Huawei to the Entity List, limiting their ability to receive U.S. exports and undermining the
company’s supply of key subcomponents. The White House also issued an Executive Order
focused on “Securing the Information and Communications Technology and Services Supply
Chain” (Executive Order 13873) that month. The FCC (2019) finalized rules prohibiting the use
of rural broadband subsidies for Huawei and ZTE equipment in November 2019; in February
2020, the Justice Department charged Huawei with racketeering and stealing trade secrets from
U.S. firms over the past two decades (Overly, 2020); and the following month, a new law
authorized the FCC to begin ripping existing Huawei equipment out of U.S. networks (Secure
and Trusted Communications Networks Act of 2019).
[Footnote 48] Section 889 of the John S. McCain National Defense Authorization Act for Fiscal Year 2019 of 2018, Public Law No. 115–232.
U.S. reactions have not been limited to just the domestic sphere. The country has also
applied significant pressure to allies in attempts to limit Huawei’s external market access. This
has met with mixed success. Australia, Japan, and New Zealand took steps to restrict Huawei
equipment, but many European nations have not succumbed to the U.S. hard diplomatic press
(Graff, 2020). Germany moved ahead “with incorporating Huawei into its 5G system” even after
the United States threatened to end a longstanding intelligence-sharing agreement (Graff, 2020).
The vehemence of the U.S. reaction demonstrates that there is some flexibility and vigor
remaining within U.S. industrial policy. Yet it also highlights longstanding weaknesses in the
U.S. free-market approach. The U.S. susceptibility to Huawei arises largely because market
consolidation, boom-and-bust economic cycles, and cost-cutting measures have hollowed out the
country’s equipment manufacturing capacity, leaving it reliant on foreign firms for RAN
equipment (Kania, 2020). The next-largest players in the space after Huawei and ZTE are
Ericsson and Nokia, both European firms, which sell pricier equipment than the Chinese
suppliers and which may be 18 months behind Huawei in technological sophistication (Graff,
2020). (Ironically, Lucent—an American firm that at one time was a major global player in
telecom equipment—was subsumed into Nokia after the 2001 tech crash [Kania, 2020]). This
presents a Catch-22 for policymakers. By blocking Huawei, lawmakers are raising supply chain
costs and reducing equipment quality, thereby slowing U.S. deployment of 5G.
Moreover, even when facing severe external security threats, the United States faces hard
limits on the industrial policy considered acceptable. Consider the “Spalding” incident: In
January 2018, Axios reported that a senior official at the National Security Council, the White
House’s national security policymaking organization, had crafted a memo arguing that “America
needs a centralized nationwide 5G network within three years” (Swan, McCabe, Fried, & Hart,
2018). The official, an Air Force Brigadier General named Robert Spalding on loan to the
Council as its senior director for strategic planning, was particularly concerned by the Chinese
Intelligence Law, which he saw as essentially “requir[ing] that companies like Huawei open their
networks to Chinese intelligence” and therefore creating an unacceptable existential risk to U.S.
national security (Graff, 2020). After considering alternatives and consulting with engineering
experts, Spalding determined that a nationalized network was the only viable option to protect
U.S. industry and government (Graff, 2020). The reaction to the leaked documents was swift and
forceful, however. The telecom industry, FCC Chairman Pai, and others quickly and loudly
decried the proposal, embarrassing the White House (Rogin, 2018). Within days, Spalding had
left his detail to return to the Department of Defense (Rogin, 2018).
The level of vitriol aimed at Spalding might seem like an overreaction. The United States
has nationalized its telecommunication infrastructure once before, taking over AT&T during the
First World War (Janson & Yoo, 2012). There is also a general expectation, verging into the
realm of obligation, that defense officials will contemplate the full option space when crafting
security policy. The memo in question was part of that function: an early draft that essentially
functioned as a thought experiment. Yet the wireless industry’s implicit message was that even
preliminary consideration of such a plan—indeed, any acknowledgment that the private sector is
incapable of solving the country’s security issues—is not just invalid but inconceivable.
The strength of the Spalding reaction also implies that the simplest solution to the
Huawei issue—having the Federal government incubate a U.S. equipment maker—would be an
intolerable intrusion into “well-functioning” telecom markets. To the extent that scholars have
explored this issue, the recommendations largely focus on holding China “accountable” for
“unfair and even illegal” subsidies (Kania, 2020). For now, the backup solution is a growing
interest in the potential of “open RAN,” essentially a virtualized, open-source approach to RAN
communications that would allow any network equipment to fill the role, rather than the end-to-
end proprietary systems currently used by the industry.
This analysis offers several useful lessons for AI policy. First, it suggests that relying on
the private sector to make commercial AI decisions may lead to problematic omissions. Just as
pharmaceutical companies are unlikely, of their own accord, to pursue treatments for
orphan diseases, AI companies beholden to investors face strong pressure to develop only the
most salable products. This may lead to underinvestment in key areas and may therefore prove
inferior to more guided development. Second, policy choices intended to steer the development
of technologies indirectly are prone to failure; lawmakers may therefore benefit from moving
cautiously and developing programs that allow for testing and iteration. Third, this analysis
suggests that national strategies for emergent technologies are incomplete if they do not properly
incorporate associated sectors across the entire value chain. Such a coordinated approach both
ensures a virtuous cycle of co-development among and between economic sectors and mitigates
the potential negative impacts of a “losing” strategy in one area. Finally, while the supply chains
for the U.S. AI industry appear to be robust and well-secured at the moment, that may not be true
for industries that are likely to be transformed by new AI technologies. As the United States
begins to shift its focus from general leadership over AI to specific applications, reducing
reliance on foreign suppliers for key inputs is likely to provide economic and security benefits.
5.3 Data Policy
U.S. companies currently dominate AI development in large part due to a rich trove of
consumer data collected as part of their longstanding dominance of online platforms. However,
the global trend towards increased consumer privacy disproportionately affects these same
companies, in large part because the democratic regimes where those firms are headquartered
have limited access to alternative data sources. China and some other authoritarian states, on the
other hand, have accelerated their use of ubiquitous data collection, which may advantage their
local AI industries in the future. Although this paper has already discussed some of these
issues,[49] this section elaborates on those dynamics.
This is a distinct issue from industrial policy, because the divergent developments around
data practices in the United States and China are not deliberate attempts to shape economic
development. Instead, they are emergent effects arising from the interaction between core
national values and novel policy issues created by new technologies. If China garners an
advantage in AI data, it is largely an accident arising from a profoundly different conception of
the relationship between state and citizenry, albeit one now being deliberately steered for
international advantage.
[Footnote 49] See pages 68-69.
5.3.1 Government data collection. The U.S. government faces significant limitations in
its ability to strategically leverage the information it collects. The Privacy Act of 1974 prevents
disclosure of personally identifiable information that the government obtains as part of its regular
operations, except under a narrow set of circumstances. Meanwhile, because copyright law
places all U.S. governmental works in the public domain and there has been a strong push
towards “open data” principles, the government cannot restrict the benefits of Federal weather,
economic, and other data to only U.S. firms.
However, the most relevant issue for comparison against China is sharply curtailed
government surveillance powers. The Founding Fathers had deep-seated concerns, driven by
abusive British search practices, over the intrusive power of a strong central government. As a
result, the animating philosophy underlying the U.S. government’s ability to collect data on its
citizens is highly restrictive. The Fourth Amendment, for instance, sets inviolable boundaries on
the government’s ability to harvest information by ensuring that the “right of the people to be
secure in their persons, houses, papers, and effects, against unreasonable searches and seizures,
shall not be violated.”
As with other areas of the law, judicial interpretations of the Fourth Amendment have
changed in response to historical events and novel issues posed by technology (Lessig, 2006).
For example, in Katz v. United States (1967), the Supreme Court ruled that citizens have a
reasonable expectation of privacy around their phone calls. In addition, by determining that
Fourth Amendment protections extend to situations that do not involve physical trespass of
“persons, houses, papers, and effects,” the Court set a precedent invoked in thousands of cases
that Constitutional protections extend to intangible communications. In United States v. Jones
(2012), the Court ruled that installing a Global Positioning System tracker to monitor vehicular
movements without a warrant is a violation of the Fourth Amendment, limiting what had
previously been a common form of warrantless surveillance.
The aftermath of the Sept. 11, 2001, terrorist attacks highlights the limited appetite that
Americans possess for expansive government search powers. While the USA PATRIOT Act of
2001 significantly increased the Federal ability to engage in warrantless surveillance, the
Snowden revelations and the passage of time have since circumscribed those capabilities. For
instance, the USA Freedom Act of 2015, which extended several expired provisions of the USA
PATRIOT Act, placed new limitations on the bulk collection of metadata, a practice highlighted
by Snowden. And continuing concerns around errors in warrant applications relating to the FISA
secret court—another area where Snowden released information about controversial U.S.
practices—have left Congress in a deadlock over reauthorizing expiring provisions of the
Foreign Intelligence Surveillance Act of 1978 relating to a records collection program, “lone
wolf” surveillance, and roving wiretaps (Carney, 2020).
The net result of this individualistic philosophy is sharply curtailed government data
collection powers. Most government searches in the United States require an initial collection of
enough evidence to survive judicial warrant review. Nearly two decades after 9/11, lawmakers
have curtailed warrantless bulk data collection, while the courts have extended Fourth
Amendment protections to most digital information. Even when the United States does collect
such data, it is highly limited in its ability to share with third parties.
The Chinese philosophy is the exact opposite. This is not to say that Chinese citizens do
not care about privacy. Rather, the underlying Chinese theory of government is that individual
rights, rather than limiting state action, are curtailed by the needs of the sovereign. Article 51 of
the Chinese Constitution, relating to the “Interest of the State,” encapsulates this communitarian
philosophy: “The exercise by citizens of the People's Republic of China of their freedoms and
rights may not infringe upon the interests of the state, of society, and of the collective, or upon
the lawful freedoms and rights of other citizens.”[50]
[Footnote 50] Translation provided by the World Intellectual Property Organization, available at https://www.wipo.int/edocs/lexdocs/laws/en/cn/cn147en.pdf (accessed April 18, 2020).
As a result, countries such as China have broad discretion to collect, manipulate, and
share data about their citizens. China had already leveraged such powers extensively to monitor
and censor online content as part of its Great Firewall. However, the scope of personal mass
surveillance has accelerated under the new “Social Credit System,” which openly collates
enormous amounts of government-held data, including feeds from omnipresent mass surveillance, to assign
reputational scores to every Chinese citizen. China can also share that information freely with
trusted companies, whereas U.S. agencies can contract with third parties only under highly
restrictive terms governing data sharing and security practices.
Consider the differences between U.S. and Chinese facial recognition programs. China
has an estimated 200 million active surveillance cameras, four times that of the United States
(Mozur, 2018b), allowing it to accumulate a database of facial images that likely numbers over a
trillion. This has allowed China to jumpstart development of a robust “security-industrial
complex” that has excelled internationally, with Chinese firm Yitu in 2018 winning a facial
recognition algorithm contest held by the U.S. government’s Office of the Director of National
Intelligence; other Chinese firms also placed well (Mozur, 2018b). By contrast, the Federal
Bureau of Investigation is estimated to have access to a database of only 641 million facial
images and faces statutory restrictions under the Privacy Act of 1974 and the privacy provisions
of the E-Government Act of 2002 that prevent it from disclosing the contents of that database
freely. Moreover, the increasingly commonplace bans on use of facial recognition at the local
and state levels[51] limit the potential market for private firms working on facial recognition,
further slowing progress.
[Footnote 51] See page 45.
There is one important caveat. ML algorithms are sensitive to data quality, not just
quantity. U.S. technology firms have global audiences that have allowed for accumulation of
information on diverse and dispersed populations. To the extent that Chinese data assets are
restricted primarily to an ethnically and culturally homogeneous population, this may limit the
accuracy of their algorithms. Chinese success in international competitions suggests that they
have at least partially overcome this limitation, but the opaque nature of ML algorithms means
that deficiencies in algorithmic accuracy might arise in other contexts.
5.3.2 Private data collection. This paper has argued that domestic and international
trends have limited how U.S. technology companies might collect consumer data. Already,
GDPR and CCPA have forced companies to institute new privacy practices en masse. However,
there is one additional relevant development. China is not immune to the same pressures other
countries face to increase data security and privacy. Similar concerns over data collection
practices led China, in May 2018, to release a Personal Information Security Specification,[52]
which puts into place a consent-based privacy regime similar to GDPR.
[Footnote 52] Available at https://www.tc260.org.cn/upload/2018-01-24/1516799764389090333.pdf. Translated by New America (Shi, Sacks, Chen, & Webster, 2019).
There are two salient differences, though. First, the Chinese government’s ability to share
data with local firms, particularly trusted state-sponsored enterprises, blunts some of the impacts
of privacy restrictions. Second, Chinese firms benefit from asymmetrical access to foreign
markets. While the United States and Europe may limit foreign FDI entirely, they generally do
not place special restrictions on firms that are allowed market entry. China, by contrast, has
leveraged its FDI regime aggressively to control the online platform market. Indeed, by blocking
Facebook and applying a heavy-handed censorship regime to Google, to the point that it has only
a marginal presence in mainland China, the country has insulated its internal markets sufficiently
to allow local firms such as WeChat (for social networking) and Baidu (for search) to reach
critical economies of scale. The result is that some U.S. technology companies are locked out of
the single largest foreign market, representing nearly a fifth of the world’s population,
impoverishing their potential data collection vis-à-vis Chinese firms that do not face similar
limitations.
5.3.3 Data labeling industries. For the sake of completeness, we briefly note again that
China’s abundant labor pool presents it with a final data advantage: It has developed the world’s most
robust data labeling industry. Data labeling—a time-consuming manual labor process—is
essential to developing ML models, and demand is growing exponentially. To the extent that
Chinese firms can rely on commodity services for data labeling instead of dedicating expensive
AI talent to the task, a hearty local labeling sector mitigates some of the disadvantages Chinese
firms face in the market for AI specialists.
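For readers unfamiliar with the mechanics, a minimal sketch of what this labor produces is below (Python; all file names and labels are hypothetical): each raw item is paired with a human-assigned tag, and those tags become the ground-truth targets that supervised ML training consumes.

# Minimal sketch of data labeling output (file names and labels hypothetical):
# human annotators attach a tag to each raw item, producing supervised targets.
labeled_examples = [
    {"image": "frame_00001.jpg", "label": "pedestrian"},
    {"image": "frame_00002.jpg", "label": "bicycle"},
    {"image": "frame_00003.jpg", "label": "pedestrian"},
]

# Training code then splits the records into model inputs and target labels.
inputs = [ex["image"] for ex in labeled_examples]
targets = [ex["label"] for ex in labeled_examples]
print(f"{len(labeled_examples)} labeled examples; classes: {sorted(set(targets))}")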
5.4 Absorptive Capacity
Thus far this chapter has focused on existing policies, practices, and structures, not
capabilities for future action. This section examines the latter issue by sketching out, briefly, a
theory of U.S. national “absorptive capacity” for new technological developments. Absorptive
capacity typically is applied at the firm level and refers to “an ability to recognize the value of
179
new information, assimilate it, and apply it to commercial ends” (Cohen & Levinthal, 1990, p.
128). Building on Tudor and Warner’s (2019) study that uses absorptive capacity to frame a
proposal to create a Congressional Futures Office, the discussion below applies the concept at
the country level, focusing on institutional structures that limit the United States’ overall
potential to advance national objectives.
5.4.1 Institutional knowledge. The first step in reacting to new technological
developments is to analyze them and assess their potential to affect important national and
economic values. However, since the 1990s, conservative politicians have engaged in a
coordinated effort to portray the bureaucracy as wasteful and have made shrinking the Federal
workplace a top priority, both with some degree of success. From that perspective, the United
States fares poorly in aggregate institutional capabilities in comparison to countries with deeper
civil servant expertise, including China.
Congress’s institutional capacity has dropped significantly in recent years. Since 1995,
the legislative branch has grown only 20% in real terms through 2019,[53] while overall
discretionary government spending has grown roughly 64% (just over 64% if defense spending
is excluded).[54]
This, however, underrepresents the true erosion of Congressional policymaking
capacity. Most of the funding increase between 1995 and 2015 represents “non-legislative
activities such as operations and maintenance of the Capitol Complex” (Tudor and Warner, 2019,
p. 46-47).
[Footnote 53] Author’s analysis of data from OMB Historical Table 5.1—Budget Authority by Function and Subfunction: 1976–2025. Inflationary adjustments made using average annual Consumer Price Index – All Urban Consumers (CPI-U) data from the Bureau of Labor Statistics.
[Footnote 54] Author’s analysis of data from OMB Historical Table 5.6—Budget Authority for Discretionary Programs: 1976–2025. Inflationary adjustments made using average annual Consumer Price Index – All Urban Consumers (CPI-U) data from the Bureau of Labor Statistics.
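The inflation adjustment described in footnotes 53 and 54 is mechanical, and a minimal sketch of the computation is below (Python). The CPI-U values approximate published BLS annual averages, but the budget figures are hypothetical placeholders, not the actual OMB series analyzed above.

# Sketch of the deflation step described in footnotes 53 and 54: restate the
# end-year amount in start-year dollars using average annual CPI-U, then
# compute growth. All inputs here are illustrative placeholders.

def real_growth_pct(nominal_start, nominal_end, cpi_start, cpi_end):
    """Percent growth after deflating the end-year amount to start-year dollars."""
    deflated_end = nominal_end * (cpi_start / cpi_end)
    return (deflated_end - nominal_start) / nominal_start * 100

cpi_1995, cpi_2019 = 152.4, 255.7      # average annual CPI-U (approximate)
nominal_1995, nominal_2019 = 2.6, 5.0  # budget authority, $ billions (hypothetical)

print(f"Real growth, 1995-2019: "
      f"{real_growth_pct(nominal_1995, nominal_2019, cpi_1995, cpi_2019):.1f}%")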
The erosion of staff resources has been especially pronounced. Between 1985 and 2015,
total Congressional staff have declined 32%, and civil service support staff at the Congressional
Research Service (CRS) and Government Accountability Office (GAO) have dropped 29% and
41%, respectively.[55]
The drop in specialized science and technology expertise is especially
problematic for Congress’s absorptive capacity. OTA, Congress’s advisory body for emerging
science and technology issues, was defunded entirely in 1995, while committee staff, which
serve as subject matter experts, have declined by 36% since then.[56]
Committees with jurisdiction
over science and technology issues, including the Senate Committee on Commerce, Science, and
Transportation; the House Committee on Science, Space, and Technology; and the House
Committee on Energy and Commerce have seen staffing drop at least 40% during that time
period (Tudor and Warner, 2019, p. 47). Moreover, these decreases correspond with a rapid rise
in Congressional workload due to new communications tools, an accelerating news environment,
and expanding election cycles. As a result, more staff are either wholly or partially allocated to
communications, constituent correspondence, or other non-policymaking tasks.
Congress also faces a staff quality issue. Congressional salaries substantially lag
executive branch salaries for similar roles. Moreover, while CRS and GAO staff have civil
service protections, most Congressional member and committee staff are employed on an ad hoc
basis and are subject to losing their positions following an unfavorable election. Because the
most knowledgeable staff are typically also most in demand at agencies and the private sector,
the lack of long-term job stability and the temptation of higher salaries continuously drains
institutional knowledge from the legislature.
[Footnote 55] Author’s analysis, based on data from Brookings Institution, Vital Statistics on Congress, available at https://www.brookings.edu/multi-chapter-report/vital-statistics-on-congress/.
[Footnote 56] Author’s analysis, based on data from Brookings Institution, Vital Statistics on Congress, available at https://www.brookings.edu/multi-chapter-report/vital-statistics-on-congress/.
The results are predictable. While some members of Congress have significant
knowledge of technology issues, others have much farther to go. For instance, the press wrote
numerous articles about “weird” (Liao, 2018; Stewart, 2018), “awkward” (Liao, 2018), “strange”
(Gutman, 2018), and “confusing” (Gutman, 2018; Stewart, 2018) questions after a pair of high-
profile hearings featuring Facebook CEO Mark Zuckerberg testifying on the Cambridge
Analytica scandal (Facebook, Social Media Privacy, and the Use and Abuse of Data, 2018;
Facebook: Transparency and Use of Consumer Data, 2018).
From a democratic standpoint, the result of Congressional defunding is a net transfer
of legislative drafting from Congress to the private sector. As early as 2015, total reported
corporate lobbying expenditures had begun to exceed taxpayer funding for Congressional
operations (Klein, 2015). By 2019, that disparity had grown to more than $1 billion, with
Congressional appropriations of approximately $2.2 billion and total lobbying exceeding $3.5
billion.[57]
This underrepresents the total impact of the influence industry, since it does not include
expenditures related to executive branch agencies or indirect activity such as educational
campaigns, which do not have to be disclosed. This has two notable impacts. First, it leads to
legislative positions that are far more favorable to corporate interests. Empirical evidence
suggests that lobbying expenditures are highly cost-effective, such that “the market value
contribution of an additional dollar of lobbying is roughly $200” (Hill, Kelly, Lockhart, & Van
Ness, 2013). The outsize returns from lobbying lead to salaries that are often several multiples
of those offered by Congressional offices, exacerbating staff pressures. Second, reliance on
external organizations for expertise increases polarization. With limited time to process incoming
data, research, and legislative proposals, staff will vet material based on perceived reliability, and
partisan alignment is a favored method for making such assessments. This exacerbates party
divides and makes passage of legislation more difficult (Fagan, 2019).
[Footnote 57] Author’s analysis of CRS and appropriations data. Lobbying data comes from Center for Responsive Politics, Open Secrets, available at https://www.opensecrets.org/federal-lobbying/summary.
Although a similar analysis for the executive branch is less straightforward, it is not
immune from the same pressures on expertise. While real discretionary expenditures for
executive branch agencies have grown much more rapidly than those of Congress, the vast majority of
that is not allocated to policymaking functions. For example, the FCC is currently at its lowest
staffing levels in modern history, down more than 660 staff, or more than 31%, since the
agency’s peak in 1996 (FCC, 2019a, p. 12). Moreover, concerns about the “revolving door”
between agencies and firms are as commonplace for executive branch organizations as they are
for Congress.
There is an additional complicating factor for U.S. absorptive capacity when it comes to
GPTs. Both the legislative and executive branches are arranged in an ad hoc manner that
represents accident and coincidence as much as it does a coordinated plan. However, that
structure is also incredibly sticky, due to the tendency for bureaucracies to expand over time,
constituencies that develop around agencies with niche functions, and general legislative inertia.
As a result, the U.S. government faces severe jurisdictional challenges when dealing with issues
that span historical boundaries, such as AI. Cybersecurity presents a telling example; at least five
Congressional committees claim jurisdiction over portions of the executive branch with
cybersecurity responsibilities, including organizations that work on homeland security, defense,
intelligence, trade, transportation, and commerce issues. The administrative obstacles that such
muddled jurisdiction presents are profound; even non-controversial bills with bipartisan backing
can take years to navigate the process of coordinating between committees and chambers. Yet
asking a committee to voluntarily give up jurisdiction is anathema, especially in a legislature
deliberately designed around a dispersed power structure. The executive branch, ostensibly,
should face this problem to a much lesser degree, since it has a coordinating mechanism in the
White House. However, many of the agencies tasked with overseeing technology issues, such as
the FCC and FTC, are independent entities that have limited accountability to the President.
Moreover, as the U.S. AI strategy reveals, executive branch initiatives are often limited in scope
and impact without supporting legislation.
One solution that has gained significant momentum in recent years—especially after the
recent technology scandals and hearings—is reconstituting OTA to restore institutional
technology and science expertise to Congress (Etzioni, 2020; Majumder, 2019; Tudor and
Warner, 2019). Recent years have seen repeated attempts to include OTA funding, including $6
million to reopen the office in the 2020 House Legislative Branch appropriations bill.[58] None of
these attempts have been enacted, however, due to continuing opposition from conservatives,
who, with limited evidence, associate OTA with partisan analysis. In the meantime, Congress
has blessed an alternative: creating a new Science, Technology Assessment, and Analytics team
within GAO to “combine and enhance [GAO’s] technology assessment functions and [its]
science and technology evaluation into a single, more prominent office” (GAO, 2019). Although
advocates both in and out of Congress generally support any expanded technical expertise within
the legislative branch, they do not consider the new team a full replacement for OTA due to
GAO’s risk-averse culture and other organizational challenges, and are skeptical that the new
office will significantly reduce Congress’s technology knowledge gap.
[Footnote 58] H.R.2779 - Legislative Branch Appropriations Act, 2020, available at https://www.congress.gov/bill/116th-congress/house-bill/2779.
5.4.2 Institutional inertia. Just as important as the ability to assess technological change
is the capability to leverage that information into appropriate and timely responses to emergent
issues. The U.S. government also fares poorly on this mark, as both its legislative and executive
bodies face substantial, and growing, obstacles to swift action.
Some of these inefficiencies are endemic to the hybrid government system enshrined in
the Constitution. The bicameral legislature and tri-branch governmental system was deliberately
designed to slow government activity through its system of checks and balances, to protect
minorities and ensure thorough vetting of policy proposals.[59]
The “vetogate” model of U.S.
lawmaking highlights the obstacles to statutory enactment enshrined into Article I of the
Constitution: “at least nine different points where bills can die,” including opposition from
House or Senate majorities, committee chairs, the Rules Committee in the House, filibustering
minorities in the Senate, House-Senate conference committees, or the President (Eskridge,
2007). “This is an awesome obstacle course for major legislation having large stakes or
generating intense opposition (or both)” (Eskridge, 2007). As Justice Scalia has said, Americans
should “learn to love the gridlock. It's there for a reason” (Constitutional Role of Judges,
2011).
[Footnote 59] See, for example, Federalist Paper 51, available at https://www.congress.gov/resources/display/content/The+Federalist+Papers#TheFederalistPapers-51: “In republican government, the legislative authority necessarily predominates. The remedy for this inconveniency is to divide the legislature into different branches; and to render them, by different modes of election and different principles of action, as little connected with each other as the nature of their common functions and their common dependence on the society will admit. It may even be necessary to guard against dangerous encroachments by still further precautions. As the weight of the legislative authority requires that it should be thus divided, the weakness of the executive may require, on the other hand, that it should be fortified. An absolute negative on the legislature appears, at first view, to be the natural defense with which the executive magistrate should be armed.”
However, modern political dynamics, particularly political partisanship, have imposed
further obstacles on the lawmaking process and created an enduring crisis of legitimacy. Both
public approval of Congress and legislative productivity have dropped in recent years, with the
former reaching its nadir at 9% in 2013[60] and the latter resulting in the 112th Congress (2011-
2013) enacting fewer substantive bills than any other modern session (DeSilver, 2019). As Binder
(2015) notes, “levels of legislative deadlock have steadily risen over the past half century”, with
“Congress still struggl[ing] to legislate when partisan polarization rises and when the two
chambers diverge in their policy views.”
Increasing Congressional gridlock has manifested itself in novel legislative strategies
(Sinclair, 2016). As Gluck, O'Connell, and Po (2015) note: “Unorthodox policymaking is now
often the norm rather than the exception.” Common tools include increasing use of omnibus bills
that serve as vehicles for diverse legislative issues; shortened bill development processes,
including routine bypassing of Congressional committees; greater reliance on states, private
firms, and quasi-private actors to implement policy initiatives; and more complex delegations to
rulemaking agencies (Gluck, O'Connell, & Po, 2015). In using these tools to overcome some of
its productivity challenges, however, the legislature has also weakened its position relative to
other branches of government. Omnibus bills lead to overlapping agency jurisdictions, providing
additional interpretational latitude to the executive branch, while non-Federal organizations are
subject to fewer oversight mechanisms than government agencies.
Executive branch agencies, however, face their own challenges in rapid policy
development. The Administrative Procedure Act of 1946 (APA) requires that agencies delegated
authority to promulgate rules and regulations do so in an open manner, by ensuring
that a “notice of proposed rule making shall be published in the Federal Register” and providing
“interested persons an opportunity to participate in the rule making through submission of
written data, views, or arguments with or without opportunity for oral presentation.”
Over time, this “notice and comment” procedure has developed into an onerous process.
Intra-branch power conflicts led to an “oversight game” (Shapiro, 1994) in which “each of the
three branches has sought to impose its own conception of good regulation (or of good
regulatory process) on the federal bureaucracy, stifling the ability of bureaucrats to regulate on
the basis of technocratic expertise" (Yackee & Yackee, 2011). For instance, agency
rulemaking procedures initially had limited exposure to judicial review. However, as a result of doctrinal
changes in the 1960s and 1970s, “agencies had reason to expect that every major rule would be
the subject of judicial review, usually at the behest of numerous parties with widely disparate
interests” (Pierce, 1996, p. 192). This created a “hard look” regime in which “[r]ulemaking
records … mushroomed from a few hundred pages of comments to tens of thousands of pages of
comments, including many conflicting studies that challenged the major factual predicates for
the proposed rule” (Pierce, 1996, p. 193). In addition, over time, the executive branch has
exercised increasing authority over rulemaking through OMB’s Office of Information and
Regulatory Affairs, which reviews draft regulations for compliance with executive branch best
practices, comportment with Executive Order 12866, and alignment with the President’s
principles. Thus, rulemaking has “ossified” into a rigid, lengthy process in which agencies take
months or even years reviewing public input to ensure that rules can withstand judicial scrutiny
(McGarity, 1991), although at least some studies find limited empirical evidence of such delays
(Yackee & Yackee, 2011).
Personal experience aligns with theory. Under prevailing interpretations of APA
guidelines, agencies are required to examine every public comment received on a rulemaking,
which means that even high-priority items are likely to take at least six months to complete.
Even straightforward final rules can run for hundreds of pages, with copious citations to the
public record. In cases where there is no clear consensus from stakeholders, and no preferred
position from leadership, rulemakings will sit indefinitely. Every aspect of rulemaking is
scrutinized by an agency’s general counsel’s office to minimize judicial vulnerabilities, but even
then, the expectation is that an agency will be sued over every significant regulation. It would not
be an exaggeration to say that less than 10 percent of total effort is spent on policymaking,
with the rest devoted to procedural and administrative issues.
Like the legislative branch, executive branch agencies have adopted “unorthodox”
methods to accelerate their ability to act (Gluck, O'Connell, & Po, 2015). For instance, between
1995 and 2012, agencies exempted approximately half of their rules from the APA notice-and-
comment process, although many of these decisions were overturned later by the courts (Raso,
2015). In addition, agencies have likely issued more guidance documents than regulatory actions,
a “loophole” that may allow agencies to escape public participation and Congressional oversight
(Raso, 2010). This, too, erodes legislative authority. Delegations of rulemaking authority already
represent a concession by the legislature that it does not possess the expertise to draft detailed
rules on a topic, and courts have provided significant flexibility to agencies to interpret statutory
language under Chevron deference (Chevron U.S.A., Inc. v. Natural Resources Defense Council,
Inc., 1984). When agencies deliberately elude notice-and-comment procedures, they eliminate a
window for potential legislative intervention, including preventing Congress from speaking
“directly” on an issue that would otherwise receive Chevron deference.
5.4.3 Conclusion. This analysis presents reasons to be pessimistic about the general
timeliness of U.S. action on emerging technologies. Indeed, the preceding analysis of domestic
technology policy—which has yet to result in major antitrust or privacy reform and has altered
CDA Section 230 only slightly—underscores these points.
Technology issues present serious challenges under such conditions. For instance,
technology companies have increased their lobbying expenditures enormously, spending nearly
$500 million on such activities in the past decade (Romm, 2020) and more than $60 million in
2019 (Newcomer & Brody, 2020). Given low levels of institutional expertise within Congress,
lawmakers rely heavily on these lobbyists to vet potential bills relating to technology issues.
However, researchers have found that “technology entrepreneurs support liberal redistributive,
social, and globalistic policies but conservative regulatory policies” (Broockman, Ferenstein, &
Malhotra, 2019), so much of this input involves pressuring legislators not to take aggressive new
actions, exacerbating legislative inertia.
Jurisdictional issues, including muddled inter-branch, intra-branch, and inter-chamber
authority, present significant challenges to developing coherent national AI policy and may
stall its development altogether. For example, labor market displacement will likely be a major
outcome as more AI products are deployed commercially. Yet it is unclear if the government’s
response will be to address labor issues on a sector-by-sector basis—with say, the Department of
Transportation and the corresponding Congressional authorizing committees developing
parochial policies for affected drivers—or whether Congress might contemplate creating new
safety net programs to address labor disruptions on a national level.
Finally, the relationship between technology platforms, political polarization, and
institutional capacity creates complex feedback loops that might exacerbate American political
inertia. Intelligence agencies have repeatedly warned that the primary purpose of electoral
misinformation campaigns is not to influence voting outcomes but to inflame partisan divides
and undermine faith in U.S. institutions. However, so long as political polarization increases
gridlock and public mistrust limits Congress’s mandate, the institution is less likely to pass
significant legislation that can address social media ills (including attacks on legislative
credibility) or that might increase Congressional capacity and effectiveness in dealing with
technology issues.
Will China do better? It is certainly possible that the difficulty of addressing
technological impacts is rooted in the complexity of the underlying issues. But there are reasons
to think that China’s absorptive capacity is much higher than the United States’. First, China has
made development of a robust civil service an important priority, including competitive exams
intended to attract “the best and the brightest,” and government jobs continue to carry a much
higher level of prestige there than in the United States (Burns, 2007). Second, in recent years the
Chinese government has implemented significant overhauls of its domestic surveillance,
cybersecurity, and privacy regimes while simultaneously crafting a unified national vision for
industrial policy and economic development. That is a level of productivity that seems to far
exceed the pace of regulatory and legislative change in the United States.
5.5 Asymmetrical Vulnerability
A final area of clear separation between China and the United States involves the extent to
which AI technologies are likely to create significant and problematic societal disruptions.
Because this analysis is focused on differential impacts, it does not consider issues such as labor
displacement, which all nations will have to face to some extent; similarly, it does not include
value-driven normative choices, such as restricting the use of facial recognition, that are
amenable to the regular policymaking process. Rather, this section focuses on clear instances of
asymmetrical impacts underlain by rigid institutional constraints, such that nations possess
limited means to address and mitigate the consequences. To the extent that nations are
susceptible to such impacts, that susceptibility might undermine continuing investment in AI
technologies or spur opposition to their deployment. The discussion below addresses three major
areas of asymmetrical U.S. vulnerability: misinformation, cybersecurity, and elections.
5.5.1 Misinformation. From the boisterous partisan press of the early republic to the era
of “yellow journalism,” the history of U.S. media abounds with concerns over the potential
spread of false information, sensationalistic material, or exaggerated stories. However, the
velocity and scale made possible by social media has raised new questions about the impact of
misinformation on democratic institutions and popular opinion, both from intentional foreign
interference to stoke rivalries and disrupt institutions, and from citizens sharing conspiracy
theories and partisan falsehoods.
False news is a ubiquitous phenomenon online. For example, researchers estimate that the
average U.S. adult was exposed to one or several fake news stories in the months before the 2016
election (Allcott & Gentzkow, 2017). In a recent survey, 58% of British social media users
reported encountering news on social media they considered not entirely accurate, and 43% of
users admitted sharing such information (Chadwick & Vaccari, 2019). Such information can
have serious consequences, including inspiring assaults (Kang & Goldman, 2016) and property
destruction (Kelion, 2020). This has prompted significant responses from social media
companies, including large expansions of their content moderation teams and partnerships with
fact-checking organizations.
Ongoing developments in the ML industry suggest that AI will significantly accelerate
and expand misinformation online. ML is advancing rapidly in areas amenable to spreading false
information, including text generation, puppeteering, and face swapping. Deepfake videos have
already been used to attack political figures (Mervosh, 2019), and scholars estimate that they will
become indistinguishable from reality within three years (Mosley, 2019). Moreover, the ability
of AI to counter such technology is limited. As Galston (2020) notes, “The capacity to generate
deepfakes is proceeding much faster than the ability to detect them,” and social media companies
have faced serious challenges in detecting and flagging false news stories even after deploying
large content teams buttressed by sophisticated AI technology.
Fundamentally, however, the U.S. ability to control misinformation faces hard
constitutional limits. False news may be a problematic phenomenon for the health of modern
democratic institutions, but it is also a protected class of speech under the First Amendment in
most circumstances (Manzi, 2018). For example, in United States v. Alvarez (2012), the Supreme
Court struck down a portion of the Stolen Valor Act, which had criminalized false claims about
military medals, noting that “[p]ermitting the government to decree this speech to be a criminal
offense, whether shouted from the rooftops or made in a barely audible whisper, would endorse
government authority to compile a list of subjects about which false statements are punishable.
That governmental power has no clear limiting principle.” As a result, publishers of such
information can freely take liberties with the truth, so long as they avoid historical free speech
carveouts such as incitement, defamation, or conveyance of a grave and imminent threat.
So long as the First Amendment remains a guardrail against government restraint on
speech, the United States is likely to have limited success in combating the spread of
misinformation. Even if Section 230 is reformed, the government cannot compel companies to
exercise their own First Amendment rights in support of preferred government policies; it can
only apply financial incentives or coercive tools. As a result of the Snowden revelations,
however, the relationship between the technology industry and the government has meaningfully
deteriorated. Firms risk significant internal and popular backlash when they cooperate too
closely with Federal authorities, as in the case of facial recognition or encryption backdoors. As
a result, social media platforms have adopted a patchwork set of policies, allowing purveyors of
misinformation to venue shop for an amenable home.
China, on the other hand, has a long history of controlling the spread of information
online, utilizing the Great Firewall and filtering tools to censor content the regime finds
unacceptable. This power is not unlimited, and it is likely employed far more on items that are
“truthful” but politically problematic. But a side effect of developing such authoritarian
capabilities is a much stronger inherent ability to control misinformation as well.
5.5.2 Cybersecurity. Just as AI is rapidly integrating itself into the arsenal of tools
available for misinformation campaigns, it is also becoming an increasingly prominent part of
the cyberattack toolkit. For many years, malicious actors have relied on self-propagating
malware to allow attacks to scale rapidly without human intervention. Such attacks were
“dumb,” however, engaging in low-level penetration testing to find the most vulnerable targets.
Recent examples, such as new iterations of the Emotet trojan, however, deploy AI tools such as
email parsing to generate contextualized spam for potential victims (Dixon & Eagan, 2019). The
result are malware tools that combine rapid scaling with the personalization and higher success
193
rates of spear phishing attacks.
61
Such automated capabilities are likely to increase rapidly in
coming years, as will tools that rely on ML decision-making capabilities to modify attack plans
on the fly and at time scales far exceeding those of human reactions.
As in the case of misinformation, new AI cyber defense tools are unlikely to solve this
problem. Certainly, as attacks become more sophisticated and automated, and accelerate in
number and speed, AI will become an important part of cybersecurity strategy. In general,
however, cyber defense is much harder than cyber offense, and that asymmetry will only grow
as attack tools increase in complexity and unsophisticated IoT devices become more
commonplace. As one researcher notes, “Security is challenging … It’s very difficult to secure
everything and as somebody who is trying to defend, you have maybe 100 holes and maybe you
can cover 99 of them. For an attacker it’s much easier, you only need to find one problem, one
hole to break in” (as cited in Hill, 2019).
There are two reasons why U.S. companies might be more vulnerable to cyber threats
than their Chinese counterparts. First, the United States does not have in place a comprehensive
data security law, which means that companies have adopted ad hoc policies that offer varying
levels of protection. While the FTC’s ability to act against “unfair” practices has led, over time,
to a de facto set of standard practices needed to escape government investigation, increasing
reliance on cloud providers and other third-party services has vastly increased the attack surface
for data; exfiltrating information from the weakest link in the value chain is often as damaging as
if a core system had been hit. China, on the other hand, has a robust cybersecurity law in place
that covers both domestic and foreign firms.
A bigger issue, however, is that cyberattacks have asymmetrical impacts on U.S. firms.
U.S. industrial decision-making, for example, relies in large part on robust IP protections,
including protection of trade secrets and other proprietary information, which are highly
susceptible to exfiltration operations. Chinese firms already plan around weak internal IP
protections by focusing on physical infrastructure and other advantages that cannot easily be
copied (Lee, 2018), rendering these companies less vulnerable to IP theft. Moreover, while the
United States engages in some cyber operations for national security purposes, such as the
Stuxnet operation to sabotage Iranian nuclear centrifuges (Anderson, 2012), it does not appear to
conduct such activities in support of industrial development. Such cyber espionage is a common
tool of Chinese industrial policy (National Counterintelligence and Security Center, 2018).
5.5.3 Elections. Foreign actors chose to target U.S. elections for high-profile
misinformation campaigns for predictable reasons. Fixed election cycles, such as those held in
the United States, essentially advertise, more than a year in advance, the time and location of a
potential attack. Combined with the ever-growing campaign seasons of the modern era, this
presents an appealing target for foreign malefactors deploying both AI-enhanced misinformation
campaigns and cyberespionage tools.
Unlike misinformation with a domestic origin, false news from foreign actors is not likely
to be protected by the First Amendment, and social media platforms are taking aggressive steps
to combat such activity. One major issue, however, is determining the provenance of such
information. As the Select Committee on Intelligence (2019) notes, Russian electoral
interference relied on proxy organizations, such as the Internet Research Agency, to obfuscate
the origin of misinformation and complicate investigations. China has also been known to rely
on quasi-public organizations for its state-sponsored cyberespionage activities (Taylor, 2019).
Internet platforms are not likely to take aggressive action against information with unknown
origins, especially if it contains partisan messaging, for fear of political backlash.
Another complicating factor involves information sharing. Consider Facebook’s policy
not to remove information posted by politicians, on the grounds that the public has an interest in
full disclosure of elected officials’ activity and that public scrutiny is likely to provide sunshine
on problematic content. This might entirely preclude removal of some foreign misinformation
shared by U.S. political figures. It also creates incentives for foreign actors to specifically target
government figures, on the grounds that this will increase the stickiness of false narratives.
Moreover, given that some misinformation campaigns are designed primarily to increase distrust
towards institutions, including the media, sunshine by journalistic organizations might prove an
anemic remedy.
Elections also present tempting cyberattack targets. As the California Secretary of State
notes, “Cyber threats to our elections are the new normal” (Padilla, n.d.). According to Wyden
(2019), this is not a theoretical threat: Russia hacked a major election vendor, VR Systems, in the
summer of 2016. He says: “Far too many of our election systems have the security of Swiss
cheese … hackers are coming in 2020. And the U.S. is not even close to prepared to stop them.”
In addition to facing the same asymmetrical cybersecurity threats as other U.S.
organizations, elections face other vulnerabilities. First, the voting industry has poor cyber
hygiene, and many voting machines are easily hackable (Uchill, 2017). Second, although
election security is not a partisan issue, it has become entangled with a broader partisan divide
over election accessibility. As a result of this dissonance, Congress has repeatedly provided
funding to states to enhance election security but has so far not come to any agreement outlining
minimum election security standards. The result is that states continue to employ a patchwork of
security practices, with some states actively purchasing paperless voting machines
that experts widely agree are highly susceptible to manipulation. Finally, the sensitive
nature of U.S. elections means that even a marginal breach in a single key state is likely to fuel
paranoia over voter fraud and disenfranchisement, amplifying the ability of foreign actors to
undermine faith in U.S. institutions.
Although China does have elections that take place on a regular timetable, it is immune to
many of these pressures. For example, the country possesses robust censorship tools that allow
for swift detection and removal of misinformation. The highest-ranking positions in China are
determined indirectly, by a small number of electors, rendering those positions largely insulated
from large-scale hacking. Thus, the impacts of AI on electoral infrastructure are likely to
disproportionately affect the United States.
5.6 Institutional Constraints and Limits on Policy
This chapter argues that institutional factors constrain the United States’ ability to bolster
its AI industries, develop coherent AI policy, and mitigate AI’s negative impacts vis-à-vis its
international rivals. These factors include limitations on overt industrial policy, restrictions on
government and private data collection, limited capacity to assess and respond to new
technological developments, and asymmetrical vulnerability to AI-enhanced tools. In
combination with domestic pressures on technology companies, these factors provide an opening
for foreign countries to overcome the current U.S. advantage in sophisticated ML applications.
These institutional factors do not all have the same levels of permanence. Issues under
active legislative consideration—such as amendments to CDA Section 230 or new data security
requirements—might become law in the coming years. U.S. institutional values have some
limited malleability; the neoliberal turn in U.S. economic policy came only after many years of
protectionist industrial policy, and Congress at one time was far more willing to loosen its own
purse strings in support of in-house technical expertise. The prognosis for revisions to
Constitutional limitations, on the other hand, is grim, especially in an age of hyperpartisan
divides. Even if some changes are made on the margin, however, the broad picture should raise
alarm bells for policymakers that believe maintaining U.S. superiority in AI remains a primary
goal of U.S. economic and international policy.
Understanding institutional constraints provides a necessary backdrop for properly
assessing key interlinkages between diverse policy areas. It also provides a roadmap for
assessing which legislative and regulatory actions are likely to return the most value. For example,
lawmakers might first consider consolidating jurisdictional authority over AI into a single
committee before proceeding with bills addressing individual sectoral impacts of new ML
technologies. Alternatively, Congress might acknowledge a continuing grim prognosis for
legislative gridlock and delegate authority over AI to an independent regulatory agency, which
would allow it to preserve a maximal oversight role in comparison to allowing the executive
branch to take the lead. Doing nothing is not a valid option, however: Uneven and ad hoc policy,
shaped by self-interested corporate lobbyists, is unlikely to lead to an acceptable balance
between domestic and international policy priorities.
CHAPTER 6: CONCLUSION
At the outset of this paper, I expressed a preference for pragmatic research—that is, work
that can provide usable insights to inform contemporary AI policymaking. This conclusion
addresses a seeming contradiction. In presenting an analytical framework for assessing tensions
in various policies that touch the AI industry, this study seems to achieve the opposite goal: It
identifies a plethora of problems and complicates analysis of AI policy without providing
tangible recommendations for remedial actions.
The answer, in short, is that this is the beginning of a journey—not the end.
What do I mean by that? This work is a necessary precondition to developing a rational
AI policy. It is only possible to make lucid tradeoffs among policy priorities with a clear
understanding of the key interlinkages between domestic and international policy developments.
Previous chapters have articulated and dissected the connective tissue binding U.S.
telecommunications, privacy, antitrust, and content moderation policy with the country’s global
AI objectives. But this work is more than just a new way of thinking about AI policy; it also
serves as a roadmap for improving analysis of AI policies.
First, it suggests that policymakers must dissolve existing jurisdictional boundaries if
they are to have any hope of coherently approaching AI issues. With its potential to impact most
sectors of the economy, its nexus with intricate social and political issues, and its complex
industrial structure, AI ignores disciplinary divides perhaps more than any other topic in history.
Properly grappling with the technology’s social, economic, and political impacts requires an
equally expansive approach. This does not mean that every investigation must approach artificial
intelligence holistically. Developing timely policy will still require deep dives by subject matter
experts that are appropriately circumscribed in time and scope. Rather, an organization or entity
must exist with the authority to aggregate various lines of work, assess tradeoffs explicitly, and
present bundles of options to decision-makers. Otherwise, the United States will continue to
suffer from ad hoc policy that fails to properly foresee the full range of its impacts and threatens
to haphazardly undermine other political priorities.
Second, this research identifies specific areas that might be fruitful targets for policy
interventions. For example, U.S. lawmakers might focus on creating and empowering a new
Congressional AI policy committee to increase absorptive capacity, setting explicit AI
milestones and goals for industry as a condition for avoiding regulation, or creating a new
government funding mechanism for direct investment into AI companies. However, reorienting
AI inquiries to focus on data is probably the most compelling lesson of this investigation.
Although great caution should be applied in taking a reductive approach to AI issues, many
tradeoffs between national and international AI goals can be reduced to considerations of their
impact on the quality and quantity of data available to U.S. technology companies. Placing
industrial AI inputs front and center streamlines the analytical process, better situates AI policy
within the universe of existing policy proposals, and renders AI issues more amenable to existing
policy tools and techniques.
Third, this research elaborates the value of de-escalating global competition over AI and
enhancing international cooperation. It is unclear whether U.S. institutional limitations will
ultimately erode the country’s lead in AI, as they did for wireless technology. However, the
evidence presented in this paper signals that such an outcome is certainly possible. Global
cooperation around AI would not only eliminate the downside risks of AI competition but would
also alleviate domestic pressures and allow the United States to develop more consistent
policies to address its technology industry, unfettered by considerations about global impacts. It
would also allow the country to focus additional resources on existing strengths that are more
resistant to such limitations, such as developing autonomous vehicles and other promising AI
applications where U.S. industry has already made significant advances.
Proposing a collaborative approach may seem naïve, given the overheated rhetoric
around AI and deteriorating relations between the United States and China. However, uncertainty
over the outcome of AI competition is universal. No amount of investment or planning is
sufficient to guarantee success when dealing with novel technologies and applications.
Moreover, AI by its very nature is especially unpredictable, given its ability to operate without
human supervision, its self-learning nature, and the likelihood that it will be weaponized for
conventional and electronic warfare. These features increase the likelihood of unforeseen
outcomes, including potential global catastrophes. Therefore, other countries might also face
strong incentives for de-escalation. There are some promising signs. As Allen (2019) notes,
“Chinese officials and government reports have begun to express concern in multiple diplomatic
forums about arms race dynamics associated with AI and the need for international cooperation
on new norms and potentially arms control.”
Perhaps the most constructive next step, however, is extending the comparative analysis
initiated in this paper to other countries. This paper intentionally focuses on the differences
between the United States and China, because they possess wildly divergent political and industrial
structures; such disparity is a useful tool for identifying areas of differential advantage,
understanding the implications of policy decisions on AI trajectory, and clarifying the nature of
political tradeoffs. For developing detailed policy prescriptions, however, it is far superior to
examine entities that more closely resemble the United States and that are already explicitly
balancing competing policy priorities. Thus, the most natural extension of this work is to
examine in detail the European approach to AI.
Even a cursory examination suggests that the European strategy represents a potential
“third way,” in which domestic actions to control the power of technology companies enhance,
rather than diminish, international competitiveness. For example, the EU sees its strong privacy
regime as an asset to its AI strategy; GDPR allows the EU to confront, on its own terms, the
technology industry, while fostering trust in AI through strong baseline protections for its
citizens and thereby accelerating adoption of new AI applications. And by focusing on the entire
AI supply chain, the EU is hedging its bets, minimizing the negative consequences of losing out
to international rivals in any one AI vertical.
Such a balancing act is difficult. Sprint and T-Mobile, for example, made a similar
claim—that merging would actually increase competition in the U.S. wireless market—but the
evidence suggests that their assertion was more rhetorical than substantive. The EU’s case may
be similarly unreliable. This analysis provides a framework to make a plausible assessment of
such European claims.
How might such an examination work in practice? This paper already identifies the issues
salient to AI policy, so in substance and form, the analytical process could mirror the work
presented thus far. For example, the discussion above already includes an assessment of many of
the EU’s actions on privacy and antitrust. On the other hand, the investigation would require
much new work on the European approach to content moderation, given the uniquely American
nature of CDA Section 230. Similarly, assessing European industrial policy, data policy, and
absorptive capacity would require significant new research, and the unique multinational
governance structure of the EU might mandate entirely new discussions of institutional
constraints unique to the bloc.
Such an analysis would not be easy. But it is the most fruitful path for transitioning from
high-level discussions of policy tradeoffs to detailed policy prescriptions informed by real-world
data. Given the radical potential for AI to reshape economies, governments, and societies, such
effort seems worthwhile.
REFERENCES
3GPP. (2019, April 26). Release 15. https://www.3gpp.org/release-15
Abril, D. (2019, November 22). Google and Twitter changed their rules on political ads. Why
won’t Facebook? Fortune. https://fortune.com/2019/11/22/facebook-google-twitter-political-
ads-policy/
Acharya, A., & Arnold, Z. (2019). Chinese Public AI R&D Spending: Provisional Findings
[CSET Issue Brief]. Center for Security and Emerging Technology.
Administrative Procedure Act of 1946, P.L. 79–404, 60 Stat. 237.
Akerlof, G. A. (1978). The market for “lemons”: Quality uncertainty and the market mechanism.
In Uncertainty in economics (pp. 235–251). Elsevier.
Adler-Bell, S. (2017, February 6). House passes bill to end warrantless email surveillance. The
Century Foundation. https://tcf.org/content/commentary/house-passes-bill-end-warrantless-
email-surveillance/
Ali, O., Flaounas, I., De Bie, T., Mosdell, N., Lewis, J., & Cristianini, N. (2010). Automating
news content analysis: An application to gender bias and readability. Proceedings of the First
Workshop on Applications of Pattern Analysis, 36–43.
Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 election. Journal of
Economic Perspectives, 31(2), 211–236.
Allcott, H., Gentzkow, M., & Yu, C. (2019). Trends in the diffusion of misinformation on social
media. Research & Politics, 6(2), 2053168019848554.
Allen, G. C. (2019, February 6). Understanding China’s AI Strategy.
https://www.cnas.org/publications/reports/understanding-chinas-ai-strategy
Allow States and Victims to Fight Online Sex Trafficking Act of 2017, P.L. 115–164, H.R.1865.
Alotaibi, M., & Mahmood, A. (2017). Improved gait recognition based on specialized deep
convolutional neural network. Computer Vision and Image Understanding, 164, 103–110.
Alpaydin, E. (2020). Introduction to machine learning. MIT Press.
Alphabet. (2020). Fourth Quarter and Fiscal Year 2019 Results.
https://abc.xyz/investor/static/pdf/2019Q4_alphabet_earnings_release.pdf?cache=05bd9fe
Altman, M., Wood, A., & Vayena, E. (2018). A harm-reduction framework for algorithmic
fairness. IEEE Security & Privacy, 16(3), 34–45.
American Civil Liberties Union Foundations of California. (2018, May 21). Emails Show How
Amazon is Selling Facial Recognition System to Law Enforcement. ACLU of Northern CA.
https://www.aclunc.org/news/emails-show-how-amazon-selling-facial-recognition-system-
law-enforcement
Amodei, D., Ananthanarayanan, S., Anubhai, R., Bai, J., Battenberg, E., Case, C., Casper, J.,
Catanzaro, B., Cheng, Q., & Chen, G. (2016). Deep speech 2: End-to-end speech recognition
in English and Mandarin. International Conference on Machine Learning, 173–182.
Anderson, C. (2009). Free: The future of a radical price (1st ed). Hyperion.
Anderson, N. (2012, June 1). Confirmed: US and Israel created Stuxnet, lost control of it. Ars
Technica. https://arstechnica.com/tech-policy/2012/06/confirmed-us-israel-created-stuxnet-
lost-control-of-it/
Ardia, D. S. (2009). Free speech savior or shield for scoundrels: An empirical study of
intermediary immunity under Section 230 of the Communications Decency Act. Loy. LAL
Rev., 43, 373.
Arntz, M., Gregory, T., & Zierahn, U. (2016). The risk of automation for jobs in OECD
countries.
Arthur, C. (2013, August 23). Tech giants may be huge, but nothing matches big data. The
Guardian. https://www.theguardian.com/technology/2013/aug/23/tech-giants-data
Arthur, W. B. (1989). Competing technologies, increasing returns, and lock-in by historical
events. The Economic Journal, 99(394), 116–131.
Auletta, K. (2009). Googled: The end of the world as we know it. Penguin Press.
Auletta, K. (2011, July 4). Can Sheryl Sandberg change Silicon Valley? The New Yorker.
https://www.newyorker.com/magazine/2011/07/11/a-womans-place-ken-auletta
Autor, D. H. (2015). Why are there still so many jobs? The history and future of workplace
automation. Journal of Economic Perspectives, 29(3), 3–30.
Autor, D. H., & Dorn, D. (2013). The growth of low-skill service jobs and the polarization of the
US labor market. American Economic Review, 103(5), 1553–1597.
Autor, D. H., Levy, F., & Murnane, R. J. (2003). The skill content of recent technological
change: An empirical exploration. The Quarterly Journal of Economics, 118(4), 1279–1333.
Balluck, K. (2019, October 23). Senate GOP blocks three election security bills for second day.
The Hill. https://thehill.com/homenews/senate/467130-senate-gop-blocks-three-election-
security-bills-for-second-day
Basu, S., & Fernald, J. (2007). Information and communications technology as a general-purpose
technology: Evidence from US industry data. German Economic Review, 8(2), 146–173.
BBC. (2018, November 1). US says Chinese firm stole trade secrets. BBC News.
https://www.bbc.com/news/world-us-canada-46066537
Benkler, Y., Roberts, H., Faris, R., Solow-Niederman, A., & Etling, B. (2015). Social
mobilization and the networked public sphere: Mapping the SOPA-PIPA debate. Political
Communication, 32(4), 594–624.
Benner, K., & Kang, C. (2019, December 19). How a top antitrust official helped T-Mobile and
Sprint merge. The New York Times. https://www.nytimes.com/2019/12/19/technology/sprint-
t-mobile-merger-antitrust-official.html
Bergmayer, J. (2019, May 15). What Section 230 is and does—yet another explanation of one of
the internet’s most important laws. Public Knowledge.
https://www.publicknowledge.org/blog/what-section-230-is-and-does-yet-another-
explanation-of-one-of-the-internets-most-important-laws/
Biagi, F. (2013). ICT and Productivity: A Review of the Literature. Institute for Prospective
Technological Studies Digital Economy Working Paper.
Binder, S. (2015). The dysfunctional congress. Annual Review of Political Science, 18, 85–101.
Bingham, R. D. (1998). Industrial policy American-style: From Hamilton to HDTV. ME Sharpe.
Bipartisan Budget Act of 2015, P.L. 114–74, H.R.1314.
Blackburn, M. (n.d.). Blackburn Convenes First Judiciary Committee Tech Task Force Meeting.
Retrieved March 7, 2020, from https://www.blackburn.senate.gov/blackburn-convenes-first-
judiciary-committee-tech-task-force-meeting
Boddington, P. (2017). Towards a code of ethics for artificial intelligence. Springer.
Bode, K. (2019, April 18). Report: 26 states now ban or restrict community broadband. Vice.
https://www.vice.com/en_us/article/kzmana/report-26-states-now-ban-or-restrict-community-
broadband
Borensztein, E., De Gregorio, J., & Lee, J.-W. (1998). How does foreign direct investment affect
economic growth? Journal of International Economics, 45(1), 115–135.
Bork, R. H. (1978). The antitrust paradox. Basic Books.
Bostrom, N. (2014). Superintelligence: paths, dangers, strategies. Oxford University Press.
boyd, danah. (2018, September 21). Media manipulation, strategic amplification, and
responsible journalism. Medium. https://points.datasociety.net/media-manipulation-strategic-
amplification-and-responsible-journalism-95f4d611f462
Bradshaw, S., & Howard, P. N. (2019). The Global Disinformation Order 2019: Global
Inventory of Organised Social Media Manipulation (Working Paper 2019.2). Project on
Computational Propaganda. https://comprop.oii.ox.ac.uk/wp-
content/uploads/sites/93/2019/09/CyberTroop-Report19.pdf
Brandeis, L., & Warren, S. (1890). The right to privacy. Harvard Law Review, 4(5), 193–220.
Bright, P. (2016, August 23). Microsoft sheds some light on its mysterious holographic
processing unit. Ars Technica. https://arstechnica.com/information-
technology/2016/08/microsoft-sheds-some-light-on-its-mysterious-holographic-processing-
unit/
Brodkin, J. (2019, February 8). AT&T sued by Sprint, must defend decision to tell users that 4G
is “5G E.” Ars Technica. https://arstechnica.com/information-technology/2019/02/sprint-
sues-att-to-stop-it-from-calling-its-4g-service-5g-e/
Broockman, D. E., Ferenstein, G., & Malhotra, N. (2019). Predispositions and the political
behavior of American economic elites: Evidence from technology entrepreneurs. American
Journal of Political Science, 63(1), 212–233.
Brynjolfsson, E. (1993). The productivity paradox of information technology. Communications
of the ACM, 36(12), 66–77.
Brynjolfsson, E., & Hitt, L. M. (1998). Beyond the productivity paradox. Communications of the
ACM, 41(8), 49–55.
Brynjolfsson, E., & McAfee, A. (2014). The second machine age: Work, progress, and
prosperity in a time of brilliant technologies. WW Norton & Company.
Brynjolfsson, E., Rock, D., & Syverson, C. (2019). Artificial intelligence and the modern
productivity paradox. The Economics of Artificial Intelligence: An Agenda, 23.
Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in
commercial gender classification. Conference on Fairness, Accountability and Transparency,
77–91.
Bureau of Industry and Security. (2018). Review of Controls for Certain Emerging Technologies
(Advance Notice of Proposed Rulemaking, 83 FR 58201).
https://www.federalregister.gov/documents/2018/11/19/2018-25221/review-of-controls-for-
certain-emerging-technologies
Burns, J. P. (2007). Civil service reform in China. OECD Journal on Budgeting, 7(1), 1–25.
CNN Business. (n.d.). Nearly every state is now investigating Google over antitrust. CNN.
Retrieved December 22, 2019, from https://www.cnn.com/2019/09/09/tech/google-antitrust-
state-attorneys-general/index.html
Cadwalladr, C., & Graham-Harrison, E. (2018, March 17). Revealed: 50 million Facebook
profiles harvested for Cambridge Analytica in major data breach. The Guardian.
https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-
us-election
Cady, F., & Etzioni, O. (2019, June 25). China to overtake US in AI research. Medium.
https://medium.com/ai2-blog/china-to-overtake-us-in-ai-research-8b6b1fe30595
Cannon, R. (2002). The legacy of the Federal Communications Commission’s Computer
Inquiries. Federal Communications Law Journal, 55, 167.
Caprettini, B., & Voth, H.-J. (2017). Rage Against the Machines: Labor-Saving Technology and
Unrest in Industrializing England. Centre for Economic Policy Research.
https://cepr.org/active/publications/discussion_papers/dp.php?dpno=11800
Cardona, M., Kretschmer, T., & Strobel, T. (2013). ICT and productivity: Conclusions from the
empirical literature. Information Economics and Policy, 25(3), 109–125.
Carlaw, K. I., & Lipsey, R. G. (2006). GPT‐driven, endogenous growth. The Economic Journal,
116(508), 155–174.
Carney, J. (2020, April 2). Justice IG pours fuel on looming fight over FISA court. The Hill.
https://thehill.com/policy/national-security/490691-justice-ig-pours-fuel-on-looming-fight-
over-fisa-court
Cath, C., Wachter, S., Mittelstadt, B., Taddeo, M., & Floridi, L. (2018). Artificial intelligence
and the ‘good society’: The US, EU, and UK approach. Science and Engineering Ethics,
24(2), 505–528.
Chadwick, A., & Vaccari, C. (2019). News Sharing on UK Social Media: Misinformation,
Disinformation, And Correction, Survey Report. Loughborough University.
https://www.lboro.ac.uk/media/media/subjects/communication-media-
studies/downloads/chadwick-vaccari-o3c-1-news-sharing-on-uk-social-media-1.pdf
Chevron U.S.A., Inc. v. Natural Resources Defense Council, Inc., 467 U.S. 837 (1984).
Children’s Online Privacy Protection Act of 1998, P.L. 105–277, 15 U.S.C. § 6501–6506.
Chin, A. (2005). Decoding Microsoft: A first principles approach. Wake Forest Law Review, 40,
1.
Chopra, R. (2019a). Dissenting Statement of Commissioner Rohit Chopra, In re Facebook, Inc.,
Commission File No. 1823109.
https://www.ftc.gov/system/files/documents/public_statements/1536911/chopra_dissenting_s
tatement_on_facebook_7-24-19.pdf
Chopra, R. (2019b). Dissenting Statement of Commissioner Rohit Chopra, In the Matter of
Google LLC and YouTube, LLC, Commission File No. 1723083.
https://www.ftc.gov/system/files/documents/public_statements/1542957/chopra_google_yout
ube_dissent.pdf
Cisse, M., Adi, Y., Neverova, N., & Keshet, J. (2017). Houdini: Fooling deep structured
prediction models. ArXiv Preprint ArXiv:1707.05373.
Citizens United v. Federal Election Commission, 558 U.S. 310 (2010).
Clark, M. (1993). Suppressing innovation: Bell laboratories and magnetic recording. Technology
and Culture, 34(3), 516–538.
Cognilytica. (2019). Data Engineering, Preparation, and Labeling for AI 2019 (CGR-DE100).
Cognilytica Research. https://www.cognilytica.com/2019/03/06/report-data-engineering-
preparation-and-labeling-for-ai-2019/
Cohen, S. S., & DeLong, J. B. (2016). Concrete economics: The Hamilton approach to economic
growth and policy. Harvard Business Review Press.
Cohen, W. M., & Levinthal, D. A. (1990). Absorptive capacity: A new perspective on learning
and innovation. Administrative Science Quarterly, 128–152.
Combating Online Infringement and Counterfeits Act, S. 3804 (2010).
Committee on Foreign Investment in the United States. (2019). Annual Report to Congress,
Report Period: CY 2016 and CY 2017. https://home.treasury.gov/system/files/206/CFIUS-
Public-Annual-Report-CY-2016-2017.pdf
Congressional Review Act of 1996, 5 U.S.C. § 801-808.
Constitutional Role of Judges: Hearing Before the Judiciary Committee, Senate, 112 Cong. 1
(Testimony of Antonin Scalia) (2011).
Consumer Data Privacy in a Networked World: A Framework for Protecting Privacy and
Promoting Innovation in the Global Digital Economy. (2012). The White House.
https://obamawhitehouse.archives.gov/sites/default/files/privacy-final.pdf
Cooper, M. (2011). The future of journalism: Addressing pervasive market failure with public
policy. In R. McChesney, & V. Pickard (Eds.), Will the last reporter please turn out the
lights: The collapse of journalism and what can be done to fix it (pp. 320–339). The Free
Press.
Court of Justice of the European Union. (2015). The Court of Justice declares that the
Commission’s US Safe Harbour Decision is invalid.
https://curia.europa.eu/jcms/upload/docs/application/pdf/2015-10/cp150117en.pdf
Crandall, R. W., Lehr, W., & Litan, R. E. (2007). The effects of broadband deployment on output
and employment: A cross-sectional analysis of US data. Brookings.
https://www.brookings.edu/research/the-effects-of-broadband-deployment-on-output-and-
employment-a-cross-sectional-analysis-of-u-s-data/
Crane, D. A. (2013). The tempting of antitrust: Robert Bork and the goals of antitrust policy.
Antitrust Law Journal, 79, 835.
Cunningham, C., & Blankenship, J. (2018). Using AI For Evil: A Guide To How Cybercriminals
Will Weaponize And Exploit AI To Attack Your Business.
https://www.forrester.com/report/Using+AI+For+Evil/-/E-RES143162
Dal Bó, E. (2006). Regulatory capture: A review. Oxford Review of Economic Policy, 22(2),
203–225.
Dastin, J. (2018, October 10). Amazon scraps secret AI recruiting tool that showed bias against
women. Reuters. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-
idUSKCN1MK08G
Davenport, T. H., Barth, P., & Bean, R. (n.d.). How ‘big data’ is different. MIT Sloan
Management Review. Retrieved April 12, 2020, from
https://sloanreview.mit.edu/article/how-big-data-is-different/
Dayen, D. (2019, July 10). The biggest abuser of forced arbitration is Amazon. The American
Prospect. https://prospect.org/api/content/e8849c44-3288-5d18-963c-533229f211ad/
de Rassenfosse, G., Raiteri, E., & Bekkers, R. N. A. (2017). Discrimination against foreigners in
the patent system: Evidence from standard-essential patents. 44th Annual Conference of the
European Association for Research in Industrial Economics (EARIE 2017), August 31-
September 2, 2017, Maastricht, The Netherlands.
Deloitte. (2018). 5G: The chance to lead for a decade.
https://www2.deloitte.com/content/dam/Deloitte/us/Documents/technology-media-
telecommunications/us-tmt-5g-deployment-imperative.pdf
Delrahim, M. (2019, November 12). “Blind[ing] Me With Science”: Antitrust, Data, and Digital
Markets. https://www.justice.gov/opa/speech/assistant-attorney-general-makan-delrahim-
delivers-remarks-harvard-law-school-competition
Department of Commerce. (2019, August 19). Department of Commerce Adds Dozens of New
Huawei Affiliates to the Entity List and Maintains Narrow Exemptions through the
Temporary General License. U.S. Department of Commerce.
https://www.commerce.gov/news/press-releases/2019/08/department-commerce-adds-
dozens-new-huawei-affiliates-entity-list-and
DeSilver, D. (2019, January 25). A productivity scorecard for the 115th Congress: More laws than before, but
not more substance. Pew Research Center.
https://www.pewresearch.org/fact-tank/2019/01/25/a-productivity-scorecard-for-115th-
congress/
Dibbell, J. (1993, December 23). A rape in cyberspace. The Village Voice.
https://www.villagevoice.com/2005/10/18/a-rape-in-cyberspace/
Digital Millennium Copyright Act of 1998, Pub. L. 105–304, 112 Stat. 2860.
Dixon, W., & Eagan, N. (2019, June 19). 3 ways AI will change the nature of cyber attacks.
World Economic Forum. https://www.weforum.org/agenda/2019/06/ai-is-powering-a-new-
generation-of-cyberattack-its-also-our-best-defence/
Doherty, C., & Kiley, J. (2019, July 29). Americans have become much less positive about tech
companies’ impact on the U.S. Pew Research Center. https://www.pewresearch.org/fact-
tank/2019/07/29/americans-have-become-much-less-positive-about-tech-companies-impact-
on-the-u-s/
Donnelly, J. (2000). Realism and international relations. Cambridge University Press.
Duggan, M. (2014, October 22). Online Harassment. Pew Research Center: Internet, Science &
Tech. https://www.pewresearch.org/internet/2014/10/22/online-harassment/
Durkee, A. (n.d.). Spotify weighs in on political ad debate by banning them entirely. Vanity Fair.
Retrieved December 30, 2019, from https://www.vanityfair.com/news/2019/12/spotify-bans-
political-ads
Dutton, T. (2018, July 25). An overview of national AI strategies. Medium.
https://medium.com/politics-ai/an-overview-of-national-ai-strategies-2a70ec6edfd
Dvorkin, M., & Shell, H. (2016, April 28). Job polarization. FRED Blog.
https://fredblog.stlouisfed.org/2016/04/job-polarization/
Economides, N. (2001). United States v. Microsoft: A failure of antitrust in the new economy.
ArXiv Preprint Cs/0109069.
Economist. (2017, May 6). The world’s most valuable resource is no longer oil, but data. The
Economist. https://www.economist.com/leaders/2017/05/06/the-worlds-most-valuable-
resource-is-no-longer-oil-but-data
E-Government Act of 2002, P.L. 107–347, 116 Stat. 2899.
Ehrenkranz, M. (2018, January 17). YouTube unveils new monetization rules killing ad revenue
for small creators. Gizmodo. https://gizmodo.com/youtube-unveils-new-monetization-rules-
killing-ad-reven-1822154823
Ehrlich, P. (2002). Communications Decency Act 230. Berkeley Technology Law Journal, 17,
401.
Electronic Communications Privacy Act of 1986, 18 U.S.C. §§ 2510–2523.
Electronic Frontier Foundation. (n.d.). Section 230 of the Communications Decency Act.
Retrieved December 21, 2019, from https://www.eff.org/issues/cda230
ElementAI. (2019). Global AI Talent Report 2019. https://jfgagne.ai/talent-2019/
Elsayed, G. F., Goodfellow, I., & Sohl-Dickstein, J. (2018). Adversarial reprogramming of
neural networks. ArXiv Preprint ArXiv:1806.11146.
Equifax. (2017). Consumer Notice—Cybersecurity Incident & Important Consumer Information.
https://www.equifaxsecurity2017.com/consumer-notice/
Eskridge, W. N. (2007). Vetogates, Chevron, preemption. Notre Dame Law Review, 83, 1441.
Etzioni, A. (2020, January 7). Office of Technology Assessment: It’s time for a second coming
[Text]. TheHill. https://thehill.com/opinion/technology/477025-office-of-technology-
assessment-its-time-for-a-second-coming
Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the
poor. St. Martin’s Press.
European Commission. (2016, August 29). State aid: Irish tax treatment of Apple is illegal.
https://ec.europa.eu/commission/presscorner/detail/en/IP_16_2923
European Commission. (2017, June 26). Antitrust: Commission fines Google €2.42 billion
[Text]. European Commission - European Commission.
https://ec.europa.eu/commission/presscorner/detail/en/IP_17_1784
European Commission. (2018, July 17). Antitrust: Commission fines Google €4.34 billion for
abuse of dominance regarding Android devices [Text]. European Commission - European
Commission. https://ec.europa.eu/commission/presscorner/detail/en/IP_18_4581
European Commission. (2019, March 19). Antitrust: Google fined €1.49 billion for online
advertising abuse [Text]. European Commission - European Commission.
https://ec.europa.eu/commission/presscorner/detail/en/IP_19_1770
European Commission. (2020). On Artificial Intelligence - A European approach to excellence
and trust (White Paper COM(2020) 65 final).
https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-
feb2020_en.pdf
General Data Protection Regulation, Regulation (EU) 2016/679 (2016). https://eur-
lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32016R0679
Executive Order 12866: Regulatory Planning and Review (58 Fed. Reg. 51,735). (1993).
Executive Order 13859: Maintaining American Leadership in Artificial Intelligence. (2019).
Executive Order 13873: Securing the Information and Communications Technology and Services
Supply Chain. (2019).
Executive Order 13913: Establishing the Committee for the Assessment of Foreign Participation
in the United States Telecommunications Services Sector. (2020).
Facebook, Social Media Privacy, and the Use and Abuse of Data, Hearing Before the Committee
on the Judiciary and Committee on Commerce, Science, and Transportation, Senate, 115
Cong. 2, (2018).
Facebook: Transparency and Use of Consumer Data, Hearing Before the Committee on Energy
and Commerce, House of Representatives, 115 Cong. 2, (2018).
Fadulu, L. (2019, September 24). Facial recognition technology in public housing prompts
backlash. The New York Times. https://www.nytimes.com/2019/09/24/us/politics/facial-
recognition-technology-housing.html
Fagan, E. J. (2019, May 15). A polarized Congress relies more on party-aligned think tanks.
LegBranch. https://www.legbranch.org/a-polarized-congress-relies-more-on-party-aligned-
think-tanks/
Family Educational Rights and Privacy Act of 1974, 20 U.S.C. § 1232g, 34 CFR Part 99.
Federal Communications Commission. (1960). Allocation of Frequencies in the Bands Above
890 Mcs. (29 F.C.C. 825).
FCC. (1966). In re Regulatory & Policy Problems Presented by the Interdependence of
Computer and Communication Services & Facilities (First Computer Inquiry), Notice of
Inquiry (7 FCC 2d 11).
FCC. (1968). In the Matter of the Use of the Carterfone Device in Message Toll Telephone
Service (13 F.C.C.2d 420).
FCC. (1971). Regulatory and Policy Problems Presented by the Interdependence of Computer
and Communications Services (First Computer Inquiry), Final Decision (28 FCC2d 267).
FCC. (1976). Amendment of Section 64.702 of the Commission’s Rules and Regulations (Second
Computer Inquiry), Notice of Inquiry and Proposed Rulemaking (61 F.C.C.2d 103).
FCC. (1979). Second Computer Inquiry, Tentative Decision and Further Notice of Inquiry and
Rulemaking (72 FCC2d 358).
FCC. (1980). Second Computer Inquiry, Final Decision (77 FCC2d 384).
http://etler.com/FCC/pdf/Numbered/20828/FCC%2080-189.pdf
FCC. (1985). Amendment of Sections 64.702 of the Commission’s Rules and Regulations. (Third
Computer Inquiry), Notice of Proposed Rulemaking (50 Fed. Reg. 33581).
FCC. (1986). Amendment of Sections 64.702 of the Commission’s Rules and Regulations. (Third
Computer Inquiry), Report and Order (104 F.C.C.2d 958).
FCC. (1998). Federal-State Joint Board on Universal Service, Report to Congress (13 FCC Rcd
11501). https://transition.fcc.gov/Bureaus/Common_Carrier/Reports/fcc98067.pdf
FCC. (2002). Inquiry Concerning High-Speed Access to the Internet Over Cable & Other
Facilities; Internet Over Cable Declaratory Ruling; Appropriate Regulatory Treatment for
Broadband Access to the Internet Over Cable Facilities, Declaratory Ruling and Notice of
Proposed Rulemaking (17 FCC Rcd 4798).
FCC. (2005). Appropriate Framework for Broadband Access to the Internet Over Wireline
Facilities et al., Report and Order and Notice of Proposed Rulemaking (20 FCC Rcd 14853).
FCC. (2009). Petition for Declaratory Ruling to Clarify Provisions of Section 332(c)(7)(B) to
Ensure Timely Siting Review and to Preempt Under Section 253 State and Local Ordinances
that Classify All Wireless Siting Proposals as Requiring a Variance, Declaratory Ruling.
https://docs.fcc.gov/public/attachments/FCC-09-99A1.pdf
FCC. (2011). Staff Analysis and Findings. https://docs.fcc.gov/public/attachments/DA-11-
1955A2.pdf
FCC. (2015). Protecting and Promoting the Open Internet, Report and Order on Remand,
Declaratory Ruling, and Order (30 FCC Rcd 5601).
FCC. (2016a). Protecting the Privacy of Customers of Broadband and Other
Telecommunications Services, Report and Order (31 FCC Rcd 13911).
FCC. (2016b). Use of Spectrum Bands Above 24 GHz For Mobile Radio Services, Report and
Order, Further Notice of Proposed Rulemaking.
https://docs.fcc.gov/public/attachments/FCC-16-89A1.pdf
FCC. (2017). Restoring Internet Freedom, Declaratory Ruling, Report and Order, and Order (33
FCC Rcd 311).
FCC. (2018). The FCC’s 5G FAST Plan. https://www.fcc.gov/document/fccs-5g-fast-plan
FCC. (2019a). Fiscal Year 2020 Budget in Brief. https://docs.fcc.gov/public/attachments/DOC-
356607A2.pdf
FCC. (2019b). Applications of T-Mobile US, Inc., and Sprint Corporation, Memorandum
Opinion and Order, Declaratory Ruling, and Order of Proposed Modification.
https://docs.fcc.gov/public/attachments/FCC-19-103A1.pdf
FCC. (2019c). Protecting Against National Security Threats to the Communications Supply
Chain Through FCC Programs, Report and Order, Further Notice of Proposed Rulemaking,
and Order. https://docs.fcc.gov/public/attachments/FCC-19-121A1.pdf
FCC. (2020a). What the FCC Has Accomplished Under 3 Years of Chairman Ajit Pai’s
Leadership. https://docs.fcc.gov/public/attachments/DOC-362141A1.pdf
FCC. (2020b). Establishing a 5G Fund for Rural America, Notice of Proposed Rulemaking and
Order. https://docs.fcc.gov/public/attachments/DOC-363491A1.pdf
Federal Trade Commission Act of 1914, 38 Stat. 717 (1914).
Fidler, M. (n.d.). African Union Bugged by China: Cyber Espionage as Evidence of Strategic
Shifts. Council on Foreign Relations. Retrieved April 22, 2020, from
https://www.cfr.org/blog/african-union-bugged-china-cyber-espionage-evidence-strategic-
shifts
Finlayson, S. G., Bowers, J. D., Ito, J., Zittrain, J. L., Beam, A. L., & Kohane, I. S. (2019).
Adversarial attacks on medical machine learning. Science, 363(6433), 1287–1289.
Fisher, W. W. (1999). The growth of intellectual property: A history of the ownership of ideas in
the United States. Eigentumskulturen Im Vergleich, 265–291.
Foreign Intelligence Surveillance Act of 1978, P.L. 95–511, 92 Stat. 1783.
Fostering a Healthier Internet to Protect Consumers: Hearing Before the Subcommittee on
Communications and Technology and the Subcommittee on Consumer Protection and
Commerce of the Committee on Energy and Commerce, House of Representatives, 116 Cong.
1, (2019).
Four Things to Know About China’s $670 Billion Government Guidance Funds. (n.d.). Retrieved
April 18, 2020, from https://www.caixinglobal.com/2020-02-25/four-things-to-know-about-
chinas-670-billion-government-guidance-funds-101520348.html
Frank, M. R., Autor, D., Bessen, J. E., Brynjolfsson, E., Cebrian, M., Deming, D. J., Feldman,
M., Groh, M., Lobo, J., & Moro, E. (2019). Toward understanding the impact of artificial
intelligence on labor. Proceedings of the National Academy of Sciences, 116(14), 6531–
6539.
Frey, C. B., & Osborne, M. A. (2017). The future of employment: How susceptible are jobs to
computerisation? Technological Forecasting and Social Change, 114, 254–280.
Federal Trade Commission. (1997, July 16). FTC Staff Sets Forth Principles for Online
Information Collection From Children. https://www.ftc.gov/news-events/press-
releases/1997/07/ftc-staff-sets-forth-principles-online-information-collection
FTC. (1998). Privacy Online: A Report to Congress.
https://www.ftc.gov/sites/default/files/documents/reports/privacy-online-report-
congress/priv-23a.pdf
FTC. (2018). Privacy & Data Security Update: 2018.
https://www.ftc.gov/system/files/documents/reports/privacy-data-security-update-2018/2018-
privacy-data-security-report-508.pdf
FTC. (2019a, February 26). FTC’s Bureau of Competition Launches Task Force to Monitor
Technology Markets. Federal Trade Commission. https://www.ftc.gov/news-events/press-
releases/2019/02/ftcs-bureau-competition-launches-task-force-monitor-technology
FTC. (2019b, July 24). FTC Imposes $5 Billion Penalty and Sweeping New Privacy Restrictions
on Facebook. Federal Trade Commission. https://www.ftc.gov/news-events/press-
releases/2019/07/ftc-imposes-5-billion-penalty-sweeping-new-privacy-restrictions
FTC. (2019c, July 24). FTC’s $5 Billion Facebook settlement: Record-Breaking and History-
Making. Federal Trade Commission. https://www.ftc.gov/news-events/blogs/business-
blog/2019/07/ftcs-5-billion-facebook-settlement-record-breaking-history
FTC. (2019d, September 3). Google and YouTube Will Pay Record $170 Million for Alleged
Violations of Children’s Privacy Law. Federal Trade Commission.
https://www.ftc.gov/news-events/press-releases/2019/09/google-youtube-will-pay-record-
170-million-alleged-violations
FTC. (2019e, October 16). What’s in a Name? Ask the Technology Enforcement Division.
Federal Trade Commission. https://www.ftc.gov/news-events/blogs/competition-
matters/2019/10/whats-name-ask-technology-enforcement-division
FTC. (2019f, November 22). YouTube Channel Owners: Is Your Content Directed to Children?
Federal Trade Commission. https://www.ftc.gov/news-events/blogs/business-
blog/2019/11/youtube-channel-owners-your-content-directed-children
FTC. (2020, February 11). FTC to Examine Past Acquisitions by Large Technology Companies.
Federal Trade Commission. https://www.ftc.gov/news-events/press-releases/2020/02/ftc-
examine-past-acquisitions-large-technology-companies
FTC & New York v. Google & YouTube Complaint, Case No.: 1:19-cv-2642 (D.D.C. 2019).
https://www.ftc.gov/system/files/documents/cases/youtube_complaint.pdf
FTC & New York v. Google & YouTube Order, Case No.: 1:19-cv-2642 (D.D.C. 2019).
https://www.ftc.gov/system/files/documents/cases/172_3083_youtube_coppa_consent_order.
pdf
FTC v. Rag-Stiftung, Civil Action No. 2019-2337 (D.D.C. February 3, 2020).
https://ecf.dcd.uscourts.gov/cgi-bin/show_public_doc?2019cv2337-150
FTC v. Wyndham Worldwide Corp., 799 F.3d 236 (3d Cir. 2015).
Full Fact. (2019). Report on the Facebook Third Party Fact Checking programme.
https://fullfact.org/media/uploads/tpfc-q1q2-2019.pdf
Gadaleta, M., & Rossi, M. (2018). IDNet: Smartphone-based gait recognition with convolutional
neural networks. Pattern Recognition, 74, 25–37.
Galston, W. A. (2020, January 8). Is seeing still believing? The deepfake challenge to truth in
politics. Brookings. https://www.brookings.edu/research/is-seeing-still-believing-the-
deepfake-challenge-to-truth-in-politics/
GAO. (2019, January 29). Our new science, technology assessment, and analytics team.
WatchBlog: Official Blog of the U.S. Government Accountability Office.
https://blog.gao.gov/2019/01/29/our-new-science-technology-assessment-and-analytics-team/
Gellman, B., & Poitras, L. (2013, June 7). U.S., British intelligence mining data from nine U.S.
Internet companies in broad secret program. Washington Post.
https://www.washingtonpost.com/investigations/us-intelligence-mining-data-from-nine-us-
internet-companies-in-broad-secret-program/2013/06/06/3a0c0da8-cebf-11e2-8845-
d970ccb04497_story.html
Georgetown University. (2019, February 28). Largest U.S. Center on Artificial Intelligence,
Policy Comes to Georgetown. Georgetown University.
https://www.georgetown.edu/news/largest-u-s-center-on-artificial-intelligence-policy-comes-
to-georgetown/
Gershgorn, D. (2016, March 12). Google’s AlphaGo beats world champion in third match to
win entire series. Popular Science. https://www.popsci.com/googles-alphago-beats-world-
champion-in-third-match-to-win-entire-series/
Gibney, E. (2017). Self-taught AI is best yet at strategy game Go. Nature News.
https://doi.org/10.1038/nature.2017.22858
Ginzburg v. Mem’l Healthcare Sys., 993 F. Supp. 998 (S.D. Tex. 1997).
Glaser, A. (2018, January 18). Want a terrible job? Facebook and Google may be hiring. Slate.
https://slate.com/technology/2018/01/facebook-and-google-are-building-an-army-of-content-
moderators-for-2018.html
Gluck, A. R., O’Connell, A. J., & Po, R. (2015). Unorthodox lawmaking, unorthodox
rulemaking. Columbia Law Review, 115, 1789.
Goldfarb, C. B. (n.d.). Telecommunications Act: Competition, Innovation, and Reform (CRS
Report for Congress). Congressional Research Service.
Goldsmith, J., & Wu, T. (2006). Who controls the Internet?: Illusions of a borderless world.
Oxford University Press.
Goodwin, G. L. (2019). DOJ and FBI Have Taken Some Actions in Response to GAO
Recommendations to Ensure Privacy and Accuracy, But Additional Work Remains
(Testimony Before the Committee on Oversight and Reform, House of Representatives
GAO-19-579T). https://www.gao.gov/assets/700/699489.pdf
Goolsbee, A. (2018). Public policy in an AI economy. National Bureau of Economic Research.
Graff, G. M. (2020, January 16). Inside the Feds’ battle against Huawei. Wired.
https://www.wired.com/story/us-feds-battle-against-huawei/
Gramm-Leach-Bliley Act of 1999, P.L. 106–102, 113 Stat. 1338.
Greene, D., Hoffmann, A. L., & Stark, L. (2019). Better, nicer, clearer, fairer: A critical
assessment of the movement for ethical artificial intelligence and machine learning.
Proceedings of the 52nd Hawaii International Conference on System Sciences.
Greenwald, G. (2013, June 6). NSA collecting phone records of millions of Verizon customers
daily. The Guardian. https://www.theguardian.com/world/2013/jun/06/nsa-phone-records-
verizon-court-order
Griggs, M. B. (2019, December 3). The rise and fall of the PlayStation supercomputers. The
Verge. https://www.theverge.com/2019/12/3/20984028/playstation-supercomputer-ps3-
umass-dartmouth-astrophysics-25th-anniversary
Gutman, R. (2018, April 10). The 13 strangest moments from the Zuckerberg hearing. The
Atlantic. https://www.theatlantic.com/technology/archive/2018/04/the-strangest-moments-
from-the-zuckerberg-testimony/557672/
Guynn, J. (2015, July 1). Google Photos labeled black people “gorillas.” USA TODAY.
https://www.usatoday.com/story/tech/2015/07/01/google-apologizes-after-photos-identify-
black-people-as-gorillas/29567465/
Halevy, A., Norvig, P., & Pereira, F. (2009). The unreasonable effectiveness of data. IEEE
Intelligent Systems, 24(2), 8–12.
Hamblen, M., & Lawson, S. (2010, November 22). 4G turning into “meaningless” moniker.
PCWorld.
https://www.computerworld.com/s/article/352778/4G_Turning_Into_Meaningless_Moniker
Hamilton, A. (n.d.). Report on the Subject of Manufactures. National Archives.
http://founders.archives.gov/documents/Hamilton/01-10-02-0001-0007
Harlow, S., & Johnson, T. J. (2011). The Arab spring| overthrowing the protest paradigm? How
the New York Times, global voices and Twitter covered the Egyptian revolution.
International Journal of Communication, 5, 16.
Hart-Scott-Rodino Antitrust Improvements Act of 1976, 15 U.S.C. § 18a.
Hausman, J. A., Leonard, G. K., & Sidak, J. G. (2002). Does Bell company entry into long-
distance telecommunications benefit consumers? Antitrust Law Journal, 70, 463.
Hautala, L., & Shankland, S. (2016, March 24). Police turn to digital bloodhounds in modern-
day manhunts. CNET. https://www.cnet.com/news/police-turn-to-digital-bloodhounds-to-in-
modern-day-manhunts/
He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition.
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 770–778.
Health Insurance Portability and Accountability Act of 1996, P.L. 104–191, 110 Stat. 1936.
Helpman, E. (1998). General purpose technologies and economic growth. MIT Press.
Henderson, R. M., & Clark, K. B. (1990). Architectural innovation: The reconfiguration of
existing product technologies and the failure of established firms. Administrative Science
Quarterly, 9–30.
Herman, A. (2019, November 20). America needs an industrial policy. American Affairs
Journal. https://americanaffairsjournal.org/2019/11/america-needs-an-industrial-policy/
Hill, M. (2019, October 10). Defense now far harder than attack, warns security researcher.
Infosecurity Magazine. https://www.infosecurity-magazine.com:443/news/defense-harder-
than-attack/
Hill, M. D., Kelly, G. W., Lockhart, G. B., & Van Ness, R. A. (2013). Determinants and effects
of corporate lobbying. Financial Management, 42(4), 931–957.
Hinton, G. E., Osindero, S., & Teh, Y.-W. (2006). A fast learning algorithm for deep belief nets.
Neural Computation, 18(7), 1527–1554.
Hinton, G. E., & Salakhutdinov, R. R. (2006). Reducing the dimensionality of data with neural
networks. Science, 313(5786), 504–507.
Horwitz, J. (2019, December 23). China plans 5G coverage for all prefecture-level cities by end
of 2020. VentureBeat. https://venturebeat.com/2019/12/23/china-plans-5g-coverage-for-all-
prefecture-level-cities-by-end-of-2020/
House of Representatives. (1996). Telecommunications Act of 1996, Conference Report (No.
104–458).
Howard, P. N., Duffy, A., Freelon, D., Hussain, M. M., Mari, W., & Maziad, M. (2011).
Opening closed regimes: What was the role of social media during the Arab Spring?
Available at SSRN 2595096.
Hughes, C. (2019, May 9). It’s time to break up Facebook. The New York Times.
https://www.nytimes.com/2019/05/09/opinion/sunday/chris-hughes-facebook-
zuckerberg.html
IBM. (n.d.). Deep Blue. Retrieved December 20, 2019, from http://www-
03.ibm.com/ibm/history/ibm100/us/en/icons/deepblue/
Ingram, M. (2019, August 2). Facebook’s fact-checking program falls short. Columbia
Journalism Review. https://www.cjr.org/the_media_today/facebook-fact-checking.php
Institute of International Education. (2019). 2019 Open Doors Report on International Education
Exchange. https://www.iie.org/Research-and-Insights/Open-Doors
International Telecommunication Union. (2017). Guidelines for evaluation of radio interface
technologies for IMT-2020 (ITU-RM.2412-0). https://www.itu.int/dms_pub/itu-r/opb/rep/R-
REP-M.2412-2017-PDF-E.pdf
International Telecommunication Union. (2019). Measuring digital development: Facts and
figures 2019. ITU. https://www.itu.int/en/ITU-
D/Statistics/Documents/facts/FactsFigures2019.pdf
Jackson, J. K. (2020). The Committee on Foreign Investment in the United States (CFIUS) (No.
RL33388). Congressional Research Service. https://fas.org/sgp/crs/natsec/RL33388.pdf
James, L. (2019, October 22). Attorney General James Gives Update On Facebook Antitrust
Investigation. New York State Attorney General. https://ag.ny.gov/press-
release/2019/attorney-general-james-gives-update-facebook-antitrust-investigation
Janson, M. A., & Yoo, C. S. (2012). The wires go to war: The US experiment with government
ownership of the telephone system during World War I. Texas Law Review, 91, 983.
Jerome, J. (2019, October 3). Private right of action shouldn’t be a yes-no proposition in federal
US privacy legislation. Privacy Perspectives. https://iapp.org/news/a/private-right-of-action-
shouldnt-be-a-yes-no-proposition-in-federal-privacy-legislation/
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature
Machine Intelligence, 1(9), 389–399.
John, R. R. (2009). Spreading the news: The American postal system from Franklin to Morse.
Harvard University Press.
Jorgenson, D. W., Ho, M. S., & Stiroh, K. J. (2003). Lessons from the US growth resurgence.
Journal of Policy Modeling, 25(5), 453–470.
Jorgenson, D. W., Ho, M. S., & Stiroh, K. J. (2008). A retrospective look at the US productivity
growth resurgence. Journal of Economic Perspectives, 22(1), 3–24.
Joy, B. (2000). Why the future doesn’t need us. Wired Magazine, 8(4), 238–262.
Kalven Jr, H. (1966). Privacy in tort law—Were Warren and Brandeis wrong? Law and
Contemporary Problems, 31, 326.
Kalyanpur, N., & Newman, A. L. (2019). The MNC‐coalition paradox: issue salience, foreign
firms and the General Data Protection Regulation. Journal of Common Market Studies,
57(3), 448–467.
Kang, C. (2019, October 14). Facebook’s hands-off approach to political speech gets
impeachment test. The New York Times.
https://www.nytimes.com/2019/10/08/technology/facebook-trump-biden-ad.html
Kang, C., & Goldman, A. (2016, December 5). In Washington pizzeria attack, fake news brought
real guns. New York Times. https://www.nytimes.com/2016/12/05/business/media/comet-
ping-pong-pizza-shooting-fake-news-consequences.html
Kang, D. (2018, November 6). Chinese “gait recognition” tech IDs people by how they walk.
Associated Press. https://apnews.com/bf75dd1c26c947b7826d270a16e2658a
Kang, J. (1998). Information privacy in cyberspace transactions. Stanford Law Review, 50, 1193.
Kania, E. (2019, November 7). Securing Our 5G Future.
https://www.cnas.org/publications/reports/securing-our-5g-future
Kania, E. B. (n.d.). Why doesn’t the U.S. have its own Huawei? Politico. Retrieved
April 22, 2020, from https://www.politico.com/news/agenda/2020/02/25/five-g-failures-
future-american-innovation-strategy-106378
Karsten, J., & West, D. M. (2019, January 29). Supreme Court antitrust case bypasses traditional
technology regulators. Brookings.
https://www.brookings.edu/blog/techtank/2019/01/29/supreme-court-antitrust-case-bypasses-
traditional-technology-regulators/
Katz v. United States, 389 U.S. 347 (1967).
Kelion, L. (2020, April 14). Twenty suspected phone mast attacks over Easter. BBC News.
https://www.bbc.com/news/technology-52281315
Kendall, B., & McKinnon, J. D. (2019, June 3). Congress, enforcement agencies target tech.
Wall Street Journal. https://www.wsj.com/articles/ftc-to-examine-how-facebook-s-practices-
affect-digital-competition-11559576731
Khan, A., Baharudin, B., Lee, L. H., & Khan, K. (2010). A review of machine learning
algorithms for text-documents classification. Journal of Advances in Information
Technology, 1(1), 4–20.
Khan, L. M. (2016). Amazon’s antitrust paradox. Yale Law Journal, 126, 710.
Kharpal, A. (2019, September 25). Alibaba unveils its first A.I. chip as China pushes for its own
semiconductor technology. CNBC. https://www.cnbc.com/2019/09/25/alibaba-unveils-its-
first-ai-chip-called-the-hanguang-800.html
Klein, E. (2015, April 20). Corporations now spend more lobbying Congress than taxpayers
spend funding Congress. Vox. https://www.vox.com/2015/4/20/8455235/congress-lobbying-
money-statistic
Komkov, S., & Petiushko, A. (2019). AdvHat: Real-world adversarial attack on ArcFace Face ID
system. ArXiv Preprint ArXiv:1908.08705.
Kosseff, J. (2017). The gradual erosion of the law that shaped the internet. Science and
Technology Law Review, 18.
Krebs, B. (2013, December 18). Sources: Target investigating data breach. Krebs on Security.
https://krebsonsecurity.com/2013/12/sources-target-investigating-data-breach/
Leavy, S. (2018). Gender bias in artificial intelligence: The need for diversity and gender theory
in machine learning. Proceedings of the 1st International Workshop on Gender Equality in
Software Engineering, 14–16.
Lecher, C. (2018, July 24). Senator Ron Wyden reckons with the internet he helped shape. The
Verge. https://www.theverge.com/2018/7/24/17606974/oregon-senator-ron-wyden-
interview-internet-section-230-net-neutrality
LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436–444.
Lee, J. H., Shin, J., & Realff, M. J. (2018). Machine learning: Overview of the recent progresses
and implications for the process systems engineering field. Computers & Chemical
Engineering, 114, 111–121.
Lee, K.-F. (2018). AI superpowers: China, Silicon Valley, and the new world order. Houghton
Mifflin Harcourt.
Lerman, R., & Ortutay, B. (2019, October 30). Twitter bans political ads ahead of 2020 election.
Associated Press. https://apnews.com/63057938a5b64d3592f800de19f443bc
Lessig, L. (2006). Code: Version 2.0. Basic Books.
Levi, L. (2008). The four eras of FCC public interest regulation. Administrative Law Review, 60,
813.
Liao, S. (2018a, April 11). 11 weird and awkward moments from two days of Mark Zuckerberg’s
Congressional hearing. The Verge.
https://www.theverge.com/2018/4/11/17224184/facebook-mark-zuckerberg-congress-
senators
Liao, S. (2018b, May 2). The Pentagon bans Huawei and ZTE phones from retail stores on
military bases. The Verge. https://www.theverge.com/2018/5/2/17310870/pentagon-ban-
huawei-zte-phones-retail-stores-military-bases
Lima, C. (n.d.). “Nightmarish”: Lawmakers brace for swarm of 2020 deepfakes. Politico.
Retrieved February 23, 2020, from https://politi.co/2IgkUbL
Lipsey, R. G., Bekar, C., & Carlaw, K. (1998). The consequences of changes in GPTs. General
Purpose Technologies and Economic Growth, 193–218.
Lipsey, R. G., Carlaw, K. I., & Bekar, C. T. (2005). Economic transformations: General purpose
technologies and long-term economic growth. OUP Oxford.
Luckerson, V. (2018, May 18). ‘Crush them’: an oral history of the lawsuit that upended Silicon
Valley. The Ringer. https://www.theringer.com/tech/2018/5/18/17362452/microsoft-antitrust-
lawsuit-netscape-internet-explorer-20-years
Maclin, T. (1994). When the cure for the Fourth Amendment is worse than the disease. Southern
California Law Review, 68, 1.
Majumder, B. (2019, May 13). Congress Should Revive the Office of Technology Assessment.
Center for American Progress.
https://www.americanprogress.org/issues/green/news/2019/05/13/469793/congress-revive-
office-technology-assessment/
Manzi, D. C. (2018). Managing the misinformation marketplace: The First Amendment and the
fight against fake news. Fordham Law Review, 87, 2623.
Mason, A. (1946). Brandeis: A free man’s life. Viking Press.
Massanari, A. (2017). #Gamergate and the Fappening: How Reddit’s algorithm, governance, and
culture support toxic technocultures. New Media & Society, 19(3), 329–346.
https://doi.org/10.1177/1461444815608807
May, P. J. (2007). Regulatory regimes and accountability. Regulation & Governance, 1(1), 8–26.
Mayer-Schönberger, V., & Cukier, K. (2013). Big data: A revolution that will transform how we
live, work, and think. Houghton Mifflin Harcourt.
McCarthy, J. (1990). Chess as the Drosophila of AI. In Computers, chess, and cognition (pp.
227–237). Springer.
McChesney, R. W., & Nichols, J. (2011). The death and life of American journalism: The media
revolution that will begin the world again. Bold Type Books.
McDonald, A. M., & Cranor, L. F. (2008). The cost of reading privacy policies. I/S: A Journal of
Law and Policy for the Information Society, 4, 543.
McGarity, T. O. (1991). Some thoughts on deossifying the rulemaking process. Duke Law
Journal, 41, 1385.
Mcmorrow, R. (2019, May 30). Huawei a key beneficiary of China subsidies that US wants
ended. Phys.Org. https://phys.org/news/2019-05-huawei-key-beneficiary-china-
subsidies.html
McQuinn, A., & Castro, D. (2019). The Costs of an Unnecessarily Stringent Federal Data
Privacy Law. Information Technology and Innovation Foundation.
https://itif.org/publications/2019/08/05/costs-unnecessarily-stringent-federal-data-privacy-
law
Mehta, A. (2015). The role of values in the U.S. net neutrality debate. International Journal of
Communication, 9, 3460. https://ijoc.org/index.php/ijoc/article/view/3331
Mervis, J. (2018, June 11). More restrictive U.S. policy on Chinese graduate student visas raises
alarm. Science. https://www.sciencemag.org/news/2018/06/more-restrictive-us-policy-
chinese-graduate-student-visas-raises-alarm
Mervosh, S. (2019, May 24). Distorted videos of Nancy Pelosi spread on Facebook and Twitter,
helped by Trump. The New York Times.
https://www.nytimes.com/2019/05/24/us/politics/pelosi-doctored-video.html
Meserole, C. (2018, May 9). How misinformation spreads on social media—And what to do
about it. Brookings. https://www.brookings.edu/blog/order-from-chaos/2018/05/09/how-
misinformation-spreads-on-social-media-and-what-to-do-about-it/
Metz, C. (2016, March 14). How Google’s AI viewed the move no human could understand.
Wired. https://www.wired.com/2016/03/googles-ai-viewed-move-no-human-understand/
Metz, C. (2017, October 22). Tech giants are paying huge salaries for scarce A.I. talent. The New
York Times. https://www.nytimes.com/2017/10/22/technology/artificial-intelligence-experts-
salaries.html
Mitchell, A., Gottfried, J., Stocking, G., Walker, M., & Fedeli, S. (2019, June 5). Many
Americans say made-up news is a critical problem that needs to be fixed. Pew Research
Center’s Journalism Project. https://www.journalism.org/2019/06/05/many-americans-say-
made-up-news-is-a-critical-problem-that-needs-to-be-fixed/
Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A.,
Riedmiller, M., Fidjeland, A. K., & Ostrovski, G. (2015). Human-level control through deep
reinforcement learning. Nature, 518(7540), 529–533.
Moammer, K. (2015, April 13). US government bans Intel, Nvidia and AMD from selling high
end chips to the Chinese government. Wccftech. https://wccftech.com/us-government-bans-
intel-nvidia-amd-chips-china/
Mohri, M., Rostamizadeh, A., & Talwalkar, A. (2018). Foundations of machine learning. MIT
Press.
Mosley, T. (n.d.). Perfect deepfake tech could arrive sooner than expected. WBUR. Retrieved
April 20, 2020, from https://www.wbur.org/hereandnow/2019/10/02/deepfake-technology
Moss, D. (2019). The Record of Weak U.S. Merger Enforcement in Big Tech. American Antitrust
Institute.
Mosseri, A. (2016, December 15). Addressing Hoaxes and Fake News. About Facebook.
https://about.fb.com/news/2016/12/news-feed-fyi-addressing-hoaxes-and-fake-news/
Mozur, P. (2018a, January 9). AT&T drops Huawei’s new smartphone amid security worries.
The New York Times. https://www.nytimes.com/2018/01/09/business/att-huawei-mate-
smartphone.html
Mozur, P. (2018b, July 8). Inside China’s dystopian dreams: A.I., shame and lots of cameras.
The New York Times. https://www.nytimes.com/2018/07/08/business/china-surveillance-
technology.html
Mullin, J. (2018, February 27). House Vote on FOSTA is a Win for Censorship. Electronic
Frontier Foundation. https://www.eff.org/deeplinks/2018/02/house-vote-fosta-win-censorship
Murray, M. P., Drought, A. B., & Kory, R. C. (1964). Walking patterns of normal men. JBJS,
46(2), 335–360.
National Counterintelligence and Security Center. (2018). Foreign Economic Espionage in
Cyberspace. https://www.dni.gov/files/NCSC/documents/news/20180724-economic-
espionage-pub.pdf
National Industrial Recovery Act of 1933, P.L. 73–67, 48 Stat. 195, 15 U.S.C. § 703 (1933).
National Telecommunications and Information Administration. (1998, June 5). Statement of
Policy on the Management of Internet Names and Addresses.
https://www.ntia.doc.gov/federal-register-notice/1998/statement-policy-management-
internet-names-and-addresses
Newcomer, E., & Brody, B. (2020, January 22). Amazon and Microsoft are among the tech
giants opening wallets in newly hostile Washington, D.C. The Seattle Times.
https://www.seattletimes.com/business/amazon-and-microsoft-are-among-the-tech-giants-
opening-wallets-in-newly-hostile-washington/
Nirkin, Y., Keller, Y., & Hassner, T. (2019). FSGAN: Subject agnostic face swapping and
reenactment. Proceedings of the IEEE International Conference on Computer Vision, 7184–
7193.
Nordicity. (2011). Analysis of Government Support for Public Broadcasting and Other Culture
in Canada. http://www.nordicity.com/de/cache/work/85/CBC-
Analysis%20of%20Government%20Support%20for%20Public%20Broadcasting%20and%2
0Other%20Culture%202011.pdf
National Science Foundation. (2003, August 13). A Brief History of NSF and the Internet.
https://www.nsf.gov/news/news_summ.jsp?cntn_id=103050
Odlyzko, A. (2009). Network neutrality, search neutrality, and the never-ending conflict between
efficiency and fairness in markets. Review of Network Economics, 8(1), 40–60.
Organisation for Economic Co-operation and Development. (n.d.). Our Global Reach. Retrieved
December 20, 2019, from https://www.oecd.org/about/members-and-partners/
OECD. (2016). Big Data: Bringing Competition Policy to the Digital Era
(DAF/COMP(2016)14). https://one.oecd.org/document/DAF/COMP(2016)14/en/pdf
OECD. (2018). Private Equity Investment in Artificial Intelligence. https://www.oecd.org/going-
digital/ai/private-equity-investment-in-artificial-intelligence.pdf
OECD. (2019a). What is an “online platform”? In An Introduction to Online Platforms and Their
Role in the Digital Transformation. OECD Publishing. https://doi.org/10.1787/19e6a0f0-en
Office of Technology Assessment. (1993). Policy analysis at OTA: A staff assessment.
https://ota.fas.org/reports/PAatOTA.pdf
OTA. (1996). Annual Report to the Congress: Fiscal Year 1995.
https://ota.fas.org/reports/9600.pdf
Ohio v. American Express Co., 585 U.S. ___ (2018).
Oliner, S. D., & Sichel, D. E. (2002). Information technology and productivity: Where are we
now and where are we going? Divisions of Research & Statistics and Monetary Affairs,
Federal Reserve.
OpenAI. (2019a). AI and Compute. OpenAI. https://openai.com/blog/ai-and-compute/
OpenAI. (2019b, February 14). Better language models and their implications. OpenAI.
https://openai.com/blog/better-language-models/
OpenSignal. (2018). The state of LTE. https://www.opensignal.com/reports/2018/02/state-of-lte
Orbach, B. (2013). Foreword: Antitrust’s pursuit of purpose. Fordham Law Review, 81(5), 2151.
Oren, S. S., & Smith, S. A. (1981). Critical mass and tariff structure in electronic
communications markets. The Bell Journal of Economics, 467–487.
Our World in Data. (n.d.). Technology adoption in US households. Our World in Data. Retrieved
December 20, 2019, from https://ourworldindata.org/grapher/technology-adoption-by-
households-in-the-united-states
Overly, S. (2020, February 13). U.S. charges Huawei with decadeslong theft of U.S. trade
secrets. Politico. https://www.politico.com/news/2020/02/13/us-charges-huawei-with-
racketeering-and-theft-114912
Padilla, A. (n.d.). Election Cybersecurity. California Secretary of State. Retrieved April 20,
2020, from https://www.sos.ca.gov/elections/election-cybersecurity/
Palladino, V. (2018, February 13). Fast talker: Alexa may offer speedier answers with Amazon-
made AI chips. Ars Technica. https://arstechnica.com/gadgets/2018/02/fast-talker-alexa-may-
offer-speedier-answers-with-amazon-made-ai-chips/
Parkin, S. (2019, June 22). The rise of the deepfake and the threat to democracy. The Guardian.
http://www.theguardian.com/technology/ng-interactive/2019/jun/22/the-rise-of-the-deepfake-
and-the-threat-to-democracy
Patel, N. (2018, September 4). It’s time to break up Facebook. The Verge.
https://www.theverge.com/2018/9/4/17816572/tim-wu-facebook-regulation-interview-curse-
of-bigness-antitrust
Pavesich v. New England Life Ins. Co., 122 Ga. 190, 50 S.E. 68 (1905).
Pew Research Center. (2014, June 12). Political Polarization in the American Public. Pew
Research Center for the People and the Press. https://www.people-
press.org/2014/06/12/political-polarization-in-the-american-public/
Pickard, V. (2013). Social democracy or corporate libertarianism? Conflicting media policy
narratives in the wake of market failure. Communication Theory, 23, 336–355.
Pierce Jr, R. J. (1996). Rulemaking and the Administrative Procedure Act. Tulsa Law Journal,
32, 185.
Pitofsky, R. (1978). Political Content of Antitrust. University of Pennsylvania Law Review, 127,
1051.
Plato. (ca. 360 B.C.E.). Phaedrus. http://classics.mit.edu/Plato/phaedrus.html
Policy Principles for a Federal Data Privacy Framework in the United States, Hearing Before
the Committee on Commerce, Science, and Transportation, Senate, 116 Cong. 1, 116th
Congress (2019). https://www.commerce.senate.gov/2019/2/policy-principles-for-a-federal-
data-privacy-framework-in-the-united-states
Politico Staff. (2010, January 21). Pols weigh in on Citizens United decision. Politico.
https://www.politico.com/news/stories/0110/31798.html
Pool, I. de S. (1983). Technologies of freedom. Harvard University Press.
Posner, R. A. (2000). Antitrust in the new economy. Antitrust Law Journal, 68, 925.
Post, D. (2015, August 27). A bit of Internet history, or how two members of Congress helped
create a trillion or so dollars of value. Washington Post.
https://www.washingtonpost.com/news/volokh-conspiracy/wp/2015/08/27/a-bit-of-internet-
history-or-how-two-members-of-congress-helped-create-a-trillion-or-so-dollars-of-value/
Postman, N. (1993). Technopoly: The surrender of culture to technology. Random House.
Prager University v. Google LLC, No. 17–06064 (9th Cir. February 26, 2020).
https://www.courtlistener.com/opinion/4730161/prager-university-v-google-llc/
Preventing Real Online Threats to Economic Creativity and Theft of Intellectual Property Act, S.
968 (2011).
Privacy Act of 1974, P.L. 93–579, 88 Stat. 1896 (1974).
Privacy Shield Framework. (2019). https://www.privacyshield.gov/list
Prosser, W. L. (1960). Privacy. California Law Review, 48, 383.
Protecting Consumer Privacy in the Era of Big Data, Hearing Before the Subcommittee on
Consumer Protection and Commerce, Committee on Energy and Commerce, House of
Representatives, 116 Cong. 1, 116th Congress (2019).
https://energycommerce.house.gov/committee-activity/hearings/hearing-on-protecting-
consumer-privacy-in-the-era-of-big-data
Qiu, J., Wu, Q., Ding, G., Xu, Y., & Feng, S. (2016). A survey of machine learning for big data
processing. EURASIP Journal on Advances in Signal Processing, 2016(1), 67.
Qualcomm. (2017, July 25). What is 5G? Everything you need to know about 5G.
https://www.qualcomm.com/invention/5g/what-is-5g
Raina, R., Madhavan, A., & Ng, A. Y. (2009). Large-scale deep unsupervised learning using
graphics processors. Proceedings of the 26th Annual International Conference on Machine
Learning, 873–880.
Raso, C. (2015). Agency avoidance of rulemaking procedures. Administrative Law Review,
67(1), 65–132.
Raso, C. N. (2010). Strategic or sincere? Analyzing agency use of guidance documents. The Yale
Law Journal, 782–824.
Ray, N. (2018, August 16). Proposed Merger of T-Mobile and Sprint.
https://ecfsapi.fcc.gov/file/1082182518443/Letter%20to%20M.%20Dortch%20re%20N.%20
Ray%20FCC%20Presentation%20-%20Redacted%20Erratum.pdf
Reazin v. Blue Cross and Blue Shield of Kansas, 899 F.2d 951 (10th Cir. 1990).
Recon Analytics. (2018). How America’s 4G Leadership Propelled the U.S. Economy.
https://api.ctia.org/wp-content/uploads/2018/04/Recon-Analytics_How-Americas-4G-
Leadership-Propelled-US-Economy_2018.pdf
Reid, J. (n.d.). 5G Infrastructure fight between cities, FCC to continue in 2020. Retrieved April
21, 2020, from https://news.bloomberglaw.com/tech-and-telecom-law/5g-infrastructure-
fight-between-cities-fcc-to-continue-in-2020
Reiter v. Sonotone Corp., 442 U.S. 330 (1979).
Reno v. American Civil Liberties Union, 521 U.S. 844 (1997).
Rewheel/Research. (2018). The state of 4G pricing—2H2018 (Digital Fuel Monitor).
http://research.rewheel.fi/downloads/The_state_of_4G_pricing_DFMonitor_10th_release_2H
2018_PUBLIC.pdf
Right to Financial Privacy Act of 1978, 12 U.S.C. ch. 35.
Roberson v. Rochester Folding Box Co., 171 N.Y. 538, 64 N.E. 442 (1902).
Robinson, G. O. (1988). The Titanic remembered: AT&T and the changing world of
telecommunications. Yale Journal on Regulation, 5(2), 11.
Rodrigo, C. M. (2019, December 10). Tech legal shield included in USMCA despite late Pelosi
push. TheHill. https://thehill.com/policy/technology/473905-tech-legal-shield-
included-in-usmca-despite-late-pelosi-push
Rogers, M., & Ruppersberger, D. (2012, October 8). Investigative Report on the U.S. National
Security Issues Posed by Chinese Telecommunications Companies Huawei and ZTE.
https://republicans-intelligence.house.gov/sites/intelligence.house.gov/files/documents/huawei-
zte%20investigative%20report%20(final).pdf
Rogin, J. (2018, February 2). National Security Council official behind 5G memo leaves White
House. The Washington Post. https://www.washingtonpost.com/news/josh-
rogin/wp/2018/02/02/national-security-council-official-behind-5g-memo-leaves-white-house/
Roh, Y., Heo, G., & Whang, S. E. (2019). A survey on data collection for machine learning: A
big data-AI integration perspective. IEEE Transactions on Knowledge and Data
Engineering.
Rohlfs, J. (1974). A theory of interdependent demand for a communications service. The Bell
Journal of Economics and Management Science, 16–37.
Romano, A. (2018, April 13). A new law intended to curb sex trafficking threatens the future of
the internet as we know it. Vox. https://www.vox.com/culture/2018/4/13/17172762/fosta-
sesta-backpage-230-internet-freedom
Romm, T. (2020). Tech giants led by Amazon, Facebook and Google spent nearly half a billion
on lobbying over the past decade, new data shows. The Washington Post.
https://www.washingtonpost.com/technology/2020/01/22/amazon-facebook-google-
lobbying-2019/
RootMetrics. (2020). 5G expansion in the US: A game changer for the end-user connected
experience? http://rootmetrics.com/en-US/content/5g-expansion-in-the-US
Rosenbaum, E. (2019, March 1). 1 in 5 corporations say China has stolen their IP within the last
year: CNBC CFO survey. CNBC. https://www.cnbc.com/2019/02/28/1-in-5-companies-say-
china-stole-their-ip-within-the-last-year-cnbc.html
Rosenberg, E. (n.d.). How YouTube Ad Revenue Works. Investopedia. Retrieved December 29,
2019, from https://www.investopedia.com/articles/personal-finance/032615/how-youtube-ad-
revenue-works.asp
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A.,
Khosla, A., & Bernstein, M. (2015). ImageNet large scale visual recognition challenge.
International Journal of Computer Vision, 115(3), 211–252.
Russell, S. J., & Norvig, P. (2016). Artificial intelligence: A modern approach. Pearson
Education Limited.
Sanger, D. E., Perlroth, N., Thrush, G., & Rappeport, A. (2018, December 11). Marriott data
breach is traced to Chinese hackers as U.S. readies crackdown on Beijing. The New York
Times. https://www.nytimes.com/2018/12/11/us/politics/trump-china-trade.html
Saunders, K. M., & Levine, L. (2004). Better, faster, cheaper-later: What happens when
technologies are suppressed. Michigan Telecommunications and Technology Law Review, 11, 23.
Saw, J. (2019, April 4). Winning the Global Race to 5G. Sprint Newsroom.
https://newsroom.sprint.com/winning-global-race-to-5g.htm
Sayyad, B. (2019). Keynote, 2019 Global Antitrust Enforcement Symposium.
https://d2l6535doef9z7.cloudfront.net/Uploads/s/k/y/georgetownfinaldraftoralremarksasprep
aredfordelivery_602755.pdf#page=1&zoom=auto,-265,798
Schultze, C. L. (1997). The public use of private interest. Brookings.
Schumpeter, J. A. (2008). Capitalism, socialism and democracy. Harper Collins.
Schwab, K. (2015, December 12). The fourth industrial revolution. Foreign Affairs.
https://www.foreignaffairs.com/articles/2015-12-12/fourth-industrial-revolution
Schwartz, P. M. (1999). Privacy and democracy in cyberspace. Vanderbilt Law Review, 52,
1607.
Secure and Trusted Communications Networks Act of 2019, H.R.4998, P.L. 116–124.
Segal, A. (2019, November 3). China is moving quickly on 5G, but the United States is not out of
the game. Council on Foreign Relations. https://www.cfr.org/blog/china-moving-quickly-5g-
united-states-not-out-game
Select Committee on Intelligence. (2019). Report of the Select Committee on Intelligence United
States Senate on Russian Active Measures Campaigns and Interference in the 2016 U.S.
Election: Volume 2: Russia’s Use of Social Media With Additional Views. United States
Senate.
Shapiro, S. A. (1994). Political oversight and the deterioration of regulatory policy.
Administrative Law Review, 46, 1.
Sharman, J. (2018, May 13). Metropolitan Police’s facial recognition technology wrong in 98
per cent of cases. The Independent. https://www.independent.co.uk/news/uk/home-
news/met-police-facial-recognition-success-south-wales-trial-home-office-false-positive-
a8345036.html
Shi, M., Sacks, S., Chen, Q., & Webster, G. (2019, February 8). Translation: China’s Personal
Information Security Specification. New America.
https://www.newamerica.org/cybersecurity-initiative/digichina/blog/translation-chinas-
personal-information-security-specification/
Si, M. (2019, November 1). Nation ushers in 5G commercial service era.
http://english.www.gov.cn/statecouncil/ministries/201911/01/content_WS5dbb69dac6d0bcf8
c4c1620b.html
Silver, D., & Hassabis, D. (2016, January 27). AlphaGo: Mastering the ancient game of Go with
machine learning. Google AI Blog. http://ai.googleblog.com/2016/01/alphago-mastering-
ancient-game-of-go.html
Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., van den Driessche, G., Schrittwieser,
J., Antonoglou, I., Panneershelvam, V., Lanctot, M., Dieleman, S., Grewe, D., Nham, J.,
Kalchbrenner, N., Sutskever, I., Lillicrap, T., Leach, M., Kavukcuoglu, K., Graepel, T., &
Hassabis, D. (2016). Mastering the game of Go with deep neural networks and tree search.
Nature, 529(7587), 484–489. https://doi.org/10.1038/nature16961
Sinclair, B. (2016). Unorthodox lawmaking: New legislative processes in the US Congress. CQ
Press.
Slaughter, R. K. (2019a). Dissenting Statement of Commissioner Rebecca Kelly Slaughter, In the
Matter of FTC vs. Facebook.
https://www.ftc.gov/system/files/documents/public_statements/1536918/182_3109_slaughter
_statement_on_facebook_7-24-19.pdf
Slaughter, R. K. (2019b). Dissenting Statement of Commissioner Rebecca Kelly Slaughter, In the
Matter of Google LLC and YouTube, LLC.
https://www.ftc.gov/system/files/documents/public_statements/1542971/slaughter_google_y
outube_statement.pdf
Smith v. Maryland, 442 U.S. 735 (1979).
Social Media Influence in the 2016 U.S. Elections: Hearing Before the Select Committee on
Intelligence, Senate, 115 Cong. 1 (Testimony of Richard Burr), (2017).
Solove, D. J. (2006). A brief history of information privacy law. Proskauer on Privacy, PLI.
https://scholarship.law.gwu.edu/cgi/viewcontent.cgi?article=2076&context=faculty_publicati
ons
Sovern, J. (1991). Private actions under deceptive trade practices acts: reconsidering the FTC Act
as rule model. Ohio State Law Journal, 52, 437.
Spectrum Pipeline Act of 2015, P.L. 114–74, 129 Stat. 621 (2015).
https://www.congress.gov/bill/114th-congress/house-bill/1314
Spencer, S. (2019, November 20). An update on our political ads policy. Google.
https://blog.google/technology/ads/update-our-political-ads-policy/
Spiller, P. T., & Cardilli, C. G. (1997). The frontier of telecommunications deregulation: Small
countries leading the pack. Journal of Economic Perspectives, 11(4), 127–138.
Sprint/T-Mobile. (2018). Joint Application for Consent to Transfer Control of International and
Domestic Authority Pursuant to Section 214 of the Communications Act of 1934, As
Amended. https://ecfsapi.fcc.gov/file/1061884849864/Joint%20Domestic-
Intl%20214%20Application%20061818.pdf
Starr, P. (2004). The creation of the media: Political origins of modern communications. Basic
Books New York.
Statt, N. (2019, July 24). Facebook confirms new FTC antitrust investigation after posting strong
earnings. The Verge. https://www.theverge.com/2019/7/24/20726371/facebook-ftc-antitrust-
earnings-q2-2019-privacy-regulation-mark-zuckerberg
Stempel, J., & Finkle, J. (2017, October 3). Yahoo says all three billion accounts hacked in 2013
data theft. Reuters. https://www.reuters.com/article/us-yahoo-cyber-idUSKCN1C82O1
Stevens, P. (2019, November 7). Here are the 10 companies with the most cash on hand. CNBC.
https://www.cnbc.com/2019/11/07/microsoft-apple-and-alphabet-are-sitting-on-more-than-
100-billion-in-cash.html
Stewart, E. (2018, April 10). Lawmakers seem confused about what Facebook does—And how to
fix it. Vox. https://www.vox.com/policy-and-politics/2018/4/10/17222062/mark-zuckerberg-
testimony-graham-facebook-regulations
Stigler, G. J. (1971). The theory of economic regulation. The Bell Journal of Economics and
Management Science, 3–21.
Stiglitz, J. E. (1989). Markets, market failures, and development. American Economic Review,
79, 197–203.
Stop Online Piracy Act, H.R. 3261 (2011).
Stratton Oakmont, Inc. v. Prodigy Services Co., 1995 WL 323710 (N.Y. Sup. Ct. 1995).
Strickling, L. E. (2016, August 16). Letter to Mr. Goran Marby, President and CEO, Internet
Corporation for Assigned Names and Numbers.
https://www.ntia.doc.gov/files/ntia/publications/20160816marby.pdf
Strumpf, D. (2019, February 27). Where China dominates in 5G technology. Wall Street Journal.
https://www.wsj.com/articles/where-china-dominates-in-5g-technology-11551236701
Sun, C., Shrivastava, A., Singh, S., & Gupta, A. (2017). Revisiting unreasonable effectiveness of
data in deep learning era. Proceedings of the IEEE International Conference on Computer
Vision, 843–852.
Sutskever, I., Martens, J., & Hinton, G. E. (2011). Generating text with recurrent neural
networks. Proceedings of the 28th International Conference on Machine Learning (ICML-
11), 1017–1024.
Swan, J., McCabe, D., Fried, I., & Hart, K. (2018, January 28). Scoop: Trump team considers
nationalizing 5G network. Axios. https://www.axios.com/trump-team-debates-nationalizing-
5g-network-f1e92a49-60f2-4e3e-acd4-f3eb03d910ff.html
Synced. (2018, September 13). AI chip duel: Apple A12 Bionic vs Huawei Kirin 980. Medium.
https://medium.com/syncedreview/ai-chip-duel-apple-a12-bionic-vs-huawei-kirin-980-
ec29cfe68632
Synced. (2019, March 14). Facebook releases a trio of new AI hardware designs. Synced.
https://syncedreview.com/2019/03/14/facebook-releases-a-trio-of-new-ai-hardware-designs/
Taplin, J. (2017, April 22). Is it time to break up Google? The New York Times.
https://www.nytimes.com/2017/04/22/opinion/sunday/is-it-time-to-break-up-google.html
Taylor, J. (2019, August 8). Chinese cyberhackers “blurring line between state power and
crime.” The Guardian. https://www.theguardian.com/technology/2019/aug/08/chinese-
cyberhackers-blurring-line-between-state-power-and
Technology Assessment Act of 1972, P.L. 92–484, H.R. 10243 (1972).
https://www.govinfo.gov/content/pkg/STATUTE-86/pdf/STATUTE-86-Pg797.pdf
Telecommunications Act of 1996, P.L. 104–104, 110 Stat. 56.
Tetlock, P. E. (2017). Expert political judgment: How good is it? How can we know? (New ed.).
Princeton University Press.
The Networking & Information Technology Research & Development Program. (2019).
Supplement to the President’s FY2020 Budget. https://www.whitehouse.gov/wp-
content/uploads/2019/09/FY2020-NITRD-AI-RD-Budget-September-2019.pdf
The New York Times. (2019, September 9). 16 Ways Facebook, Google, Apple and Amazon
Are in Government Cross Hairs. The New York Times.
https://www.nytimes.com/interactive/2019/technology/tech-investigations.html
T-Mobile. (2020, April 1). T-Mobile Completes Merger with Sprint to Create the New T-Mobile.
https://www.t-mobile.com/news/t-mobile-sprint-one-company
Transparency & Accountability: Examining Google and its Data Collection, Use, and Filtering
Practices, Hearing Before the Committee on the Judiciary, House of Representatives, 115
Cong. 2, (2018).
Tsinghua University. (2018). China AI Development Report 2018. China Institute for Science
and Technology Policy at Tsinghua University.
http://www.sppm.tsinghua.edu.cn/eWebEditor/UploadFile/China_AI_development_report_2
018.pdf
Tucker, T. (2019). Industrial Policy and Planning: What It Is and How to Do It Better. Roosevelt
Institute. https://rooseveltinstitute.org/wp-content/uploads/2019/07/RI_Industrial-Policy-and-
Planning-201707.pdf
Tudor, G., & Warner, J. (2019). The Congressional Futures Office: A Modern Model for Science
& Technology Expertise in Congress. Belfer Center for Science and International Affairs.
https://www.belfercenter.org/sites/default/files/2019-
06/PAE/CongressionalFuturesOffice.pdf
Uchill, J. (2017, July 29). Hackers breach dozens of voting machines brought to conference.
TheHill. https://thehill.com/policy/cybersecurity/344488-hackers-break-into-voting-
machines-in-minutes-at-hacking-competition
United States of America et al., v. Deutsche Telekom AG, T-Mobile US, Inc., Softbank Group
Corp., Sprint Corporation, and DISH Network Corporation, Proposed Final Judgment, 1:19-
cv–02232 (D.D.C. July 26, 2019). https://www.justice.gov/atr/case-
document/file/1187771/download
United States v. Alvarez, 567 U.S. 709 (2012).
United States v. AT&T Co., 552 F. Supp. 131 (D.D.C. 1982).
United States v. AT&T Inc., 310 F. Supp. 3d 161 (D.D.C. June 12, 2018).
https://www.dcd.uscourts.gov/sites/dcd/files/17-2511opinion.pdf
United States v. Jones, 565 U.S. 400 (2012).
United States v. Microsoft Corp., 97 F. Supp. 2d 59 (D.D.C. 2000).
United States v. Microsoft Corp., 231 F. Supp. 2d 144 (D.D.C. 2002).
United States v. Microsoft Corp., 253 F.3d 34 (D.C. Cir. 2001).
United States v. Microsoft Corp., No. 98-CV–1232 (D.D.C. 2002).
United States v. Microsoft Corp., No. 98-CV–1232, 98-CV–1233 (D.D.C. 1999).
United States v. Paramount Pictures, Inc., 334 U.S. 131 (1948).
U.S. Department of Justice. (1984). U.S. Department of Justice Merger Guidelines.
https://www.justice.gov/atr/page/file/1175141/download
U.S. Department of Justice. (2019, July 23). Justice Department Reviewing the Practices of
Market-Leading Online Platforms. https://www.justice.gov/opa/pr/justice-department-
reviewing-practices-market-leading-online-platforms
U.S. Department of Justice and Federal Trade Commission. (2010). Horizontal Merger
Guidelines. https://www.justice.gov/atr/horizontal-merger-guidelines-08192010
U.S. Department of the Treasury. (2016). The European Commission’s Recent State Aid
Investigations of Transfer Pricing Rulings (p. 26). https://www.treasury.gov/resource-
center/tax-policy/treaties/Documents/White-Paper-State-Aid.pdf
U.S. Government Working Group on Electronic Commerce. (1998). First Annual Report.
USA FREEDOM Act of 2015, P.L. 114–23, 129 Stat. 268.
USA PATRIOT Act of 2001, P.L. 107–56, 115 Stat. 272.
USTR. (2018). Findings of the Investigation into China’s Acts, Policies, and Practices Related
to Technology Transfer, Intellectual Property, and Innovation Under Section 301 of the
Trade Act of 1974. https://ustr.gov/sites/default/files/Section%20301%20FINAL.PDF
Van Ark, B., & Inklaar, R. (2005). Catching Up Or Getting Stuck?: Europe’s Troubles to Exploit
ICT’s Productivity Potential. Groningen Growth and Development Centre.
Video Privacy Protection Act of 1988, P.L. 100–618, 102 Stat. 3195, 18 U.S.C. § 2710.
Vincent, J. (2019, November 7). OpenAI has published the text-generating AI it said was too
dangerous to share. The Verge. https://www.theverge.com/2019/11/7/20953040/openai-text-
generation-ai-gpt-2-full-model-release-1-5b-parameters
Volokh, E., & Falk, D. M. (2011). Google: First Amendment protection for search engine search
results. The Journal of Law, Economics & Policy, 8, 883.
Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science,
359(6380), 1146. https://doi.org/10.1126/science.aap9559
Wagner, J. (2017, June 1). China’s Cybersecurity Law: What You Need to Know.
https://thediplomat.com/2017/06/chinas-cybersecurity-law-what-you-need-to-know/
Wakabayashi, D. (2019, July 24). YouTube is a big business. Just how big is anyone’s guess.
The New York Times. https://www.nytimes.com/2019/07/24/technology/youtube-financial-
disclosure-google.html
Warmund, J. (2000). Can COPPA work-an analysis of the parental consent measures in the
Children’s Online Privacy Protection Act. Fordham Intellectual Property, Media and
Entertainment Law Journal, 11, 189.
Warren, E. (2019, March 8). Here’s how we can break up Big Tech. Medium.
https://medium.com/@teamwarren/heres-how-we-can-break-up-big-tech-9ad9e0da324c
Webster, G., Creemers, R., Triolo, P., & Kania, E. (2017, August 1). Full Translation: China’s
“New Generation Artificial Intelligence Development Plan.” New America.
https://www.newamerica.org/cybersecurity-initiative/digichina/blog/full-translation-chinas-
new-generation-artificial-intelligence-development-plan-2017/
Weinberg, J. (1999). The Internet and telecommunications services, universal service
mechanisms, access charges, and other flotsam of the regulatory system. Yale Journal on
Regulation, 16, 211.
Weiss, K., Khoshgoftaar, T. M., & Wang, D. (2016). A survey of transfer learning. Journal of
Big Data, 3(1), 9.
Werbach, K. (2007). Only connect. Berkeley Technology Law Journal, 22, 1233.
Wernau, J. (2019, May 20). Forced tech transfers are on the rise in China, European firms say.
Wall Street Journal. https://www.wsj.com/articles/forced-tech-transfers-are-on-the-rise-in-
china-european-firms-say-11558344240
Westin, A. F. (1997). “Whatever Works”: The American Public’s Attitudes Toward Regulation
and Self-Regulation on Consumer Privacy Issues. In NTIA, Department of Commerce,
Privacy and Self-Regulation in the Information Age, Chapter 1: Theory of Markets and
Privacy. https://www.ntia.doc.gov/page/chapter-1-theory-markets-and-privacy
Westin, A. F. (1968). Privacy and Freedom. Atheneum.
Westman Comm’n Co. V. Hobart Int’l, Inc., 796 F.2d 1216 (10th Cir. 1986).
Whalen, J., & Wang, Y. (2019, September 26). Hottest job in China’s hinterlands: Teaching AI
to tell a truck from a turtle. The Washington Post.
https://www.washingtonpost.com/business/2019/09/26/hottest-job-chinas-hinterlands-
teaching-ai-tell-truck-turtle/
White House. (n.d.). Artificial Intelligence for the American People. The White House. Retrieved
December 21, 2019, from https://www.whitehouse.gov/ai/
White House. (2018). National Cyber Strategy of the United States of America.
https://www.whitehouse.gov/wp-content/uploads/2018/09/National-Cyber-Strategy.pdf
White House. (2020). National Strategy to Secure 5G of the United States of America.
https://www.whitehouse.gov/wp-content/uploads/2020/03/National-Strategy-5G-Final.pdf
Wong, J. C. (2019, August 5). 8chan: The far-right website linked to the rise in hate crimes. The
Guardian. https://www.theguardian.com/technology/2019/aug/04/mass-shootings-el-paso-
texas-dayton-ohio-8chan-far-right-website
World Bank. (2016). World Development Report 2016: Digital Dividends.
Wu, T. (2010). The master switch: The rise and fall of information empires. Vintage.
Wyden, R. (2018, March 21). Floor Remarks: CDA 230 and SESTA. Medium.
https://medium.com/@RonWyden/floor-remarks-cda-230-and-sesta-32355d669a6e
Wyden, R. (2019, September 17). Wyden Remarks at “In Defense of American Democracy” on
Election Security and Vote-By-Mail. https://www.wyden.senate.gov/news/press-
releases/wyden-remarks-at-in-defense-of-american-democracy-on-election-security-and-
vote-by-mail
Yackee, J. W., & Yackee, S. W. (2011). Testing the ossification thesis: An empirical
examination of federal regulatory volume and speed, 1950–1990. George Washington Law
Review, 80, 1414.
Yampolskiy, R. V. (2012). Leakproofing singularity artificial intelligence confinement problem.
Journal of Consciousness Studies, 19, 194.
Yap, C.-W. (2019, December 25). State support helped fuel Huawei’s global rise. Wall Street
Journal. https://www.wsj.com/articles/state-support-helped-fuel-huaweis-global-rise-
11577280736
YouTube. (n.d.). Upcoming changes to kids content on YouTube.com. YouTube Help. Retrieved
December 29, 2019, from https://support.google.com/youtube/answer/9383587?hl=en
Yuan, L. (2018, November 25). How cheap labor drives China’s A.I. ambitions. The New York
Times. https://www.nytimes.com/2018/11/25/business/china-artificial-intelligence-
labeling.html
Zeran v. America Online, Inc., 129 F.3d 327 (4th Cir. 1997).
Zhou, Y. (2018, July 27). More Chinese students are returning home after studying abroad.
Atlas. http://www.theatlas.com/charts/rJ3L4kYVQ
Zhu, X., Vondrick, C., Fowlkes, C. C., & Ramanan, D. (2016). Do we need more training data?
International Journal of Computer Vision, 119(1), 76–92.
Zuboff, S. (2015). Big other: Surveillance capitalism and the prospects of an information
civilization. Journal of Information Technology, 30(1), 75–89.