Political Decision-Making
in an Uncertain World
Simon Blessenohl
A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(PHILOSOPHY)
May 2021
Acknowledgements
In his book Thanks A Thousand, A. J. Jacobs documents his attempts to thank everyone involved in producing his morning cup of coffee. He thanks not only the barista but also more than a thousand other people, such as the inventor of corrugated cardboard used in packaging cups and the miners who provide iron ore for the steel used in the coffee machine.

A similar exercise in gratitude could be carried out for this dissertation. Just as countless people other than the barista contributed to Jacobs's cup of coffee, so did countless people other than myself contribute to the chapters that follow. I can only name a few of them below, but I feel indebted to many more.

During my five years at USC, I would have gotten much less done if it were not for Natalie Schaad and her stellar team in the office. They helped me quickly navigate through administrative mazes and were always there for a friendly chat.
My fellow graduate students were not only a source of personal support but also of
philosophical insights. Among the many helpful graduate students who provided feedback
on my work, I am particularly grateful to Paul Garofalo and Vishnu Sridharan.
Faculty members were extremely generous with their time to me. I often entered a meeting with half-baked ideas and left with respectable arguments. Even those not on my dissertation committee were willing to sacrifice many hours to help me think and write better. Among them, I am especially thankful to John Hawthorne and Sharon Lloyd.
The advice and guidance I received from my committee while I worked on the dissertation and related projects was of immense value. Jacob Nebel often showed me ways in which I could reframe or extend my arguments to make them more compelling. Ralph Wedgwood discussed my ideas with me since my first days at USC, and he frequently drew my attention to related literature that I could fruitfully engage with. Jeffrey Russell and Jonathan Quong co-chaired the dissertation committee. Jeff not only thought through my arguments with me at a depth that I could not have reached on my own. He also made my papers much more readable by taking the time to identify innumerable ill-chosen words, confusing paragraph transitions, and many other shortcomings of my writing. From Jon, I learned much of the political philosophy that I know. I could not have asked for a more insightful, kind, and patient teacher. Few of the better moves in my arguments would be there without his help.

Finally, I am thankful to my family for encouraging me to pursue my interest in philosophy and supporting me in my endeavors, even if these involved being 9,000 kilometers away from them. And, of course, I am much indebted to Rose, who stepped into my life one sunny afternoon in the Hoose Library and brightened up my days ever since.
Contents

Acknowledgements
Abstract

1 Introduction

2 Risk Attitudes and Social Choice
  2.1 Introduction
  2.2 Risk Attitudes and Statewise Reasoning
    2.2.1 Preliminaries
    2.2.2 An Impossibility Result
    2.2.3 REU Theory for Individuals and (Individual Preference Divergence)
  2.3 Giving up (Ex Ante Pareto)
    2.3.1 Justifiability to Individuals
    2.3.2 Reasons
  2.4 Giving up (Dominance)
    2.4.1 Dominance Principles for Social Observers
    2.4.2 Social Choice without (Dominance)
  2.5 Conclusions

3 Should States Be Catastrophe-Averse?
  3.1 Introduction
  3.2 The Argument from Aggregate Future Welfare
  3.3 The Argument from People's Interest in Meaningful Lives
  3.4 The Argument from Duties of Intergenerational Justice
  3.5 Conclusions

4 Explore and Exploit
  4.1 Introduction
  4.2 Lessons for the Future
  4.3 The Risk-Based Objection to Exploration
  4.4 The Distribution-Based Objection to Experimentation
    4.4.1 Experiments without Reasonable Disagreement
    4.4.2 Experiments with Reasonable Disagreement
    4.4.3 Compensation and the Significance of Distributive Injustice
  4.5 Is Exploration through Devolution Immune to the Distribution-Based Objection?
  4.6 Conclusions

5 Science Advice: Making Credences Accurate
  5.1 Introduction
  5.2 The Case for (Accuracy)
    5.2.1 Only Epistemic Values Should Guide Science Advice
    5.2.2 Communication Style Should Be Context-Sensitive
  5.3 The Case against (Accuracy)
  5.4 Extensions
    5.4.1 Communicating to Multiple Policy-Makers
    5.4.2 Communication to the Public
  5.5 Ideals and Reality
  5.6 Conclusions

6 Between Democracy and Epistocracy
  6.1 Introduction
  6.2 Indicator Depistocracy
  6.3 Instrumental Arguments
  6.4 Non-Instrumental Arguments
    6.4.1 Equal Respect and Mutual Justifiability
    6.4.2 Equal Advancement of Interests
    6.4.3 Equal Status
    6.4.4 Friendship-Like Equality
  6.5 Arguments against Factoring
  6.6 Conclusions

Bibliography

A An Impossibility Result for Strict Preferences
Abstract
Political decision-makers cannot perfectly foresee the consequences of their actions. For example, when they consider raising the income tax, they do not know exactly what effect that would have on consumer spending or the distribution of wealth. When political decision-makers are uncertain about the consequences of the available options, which options should they choose? And which procedures should societies implement to ensure that expert knowledge best informs political decisions? This dissertation tackles aspects of these two questions. Regarding which options political decision-makers should choose, I discuss to what extent they should take people's risk attitudes into account, to what extent they should decrease risks of catastrophically bad events, and to what extent they should experiment with policies to get a better understanding of policy consequences. Regarding how we should best bring expert knowledge to bear on policy decisions, I discuss how scientists should inform political decision-makers and whether we should transfer some power from elected politicians to experts.
Chapter 1
Introduction
All political decisions are decisions under risk.[1] From the perspective of political decision-makers, choosing one option rather than another is not a choice between different outcomes. Rather, it is a choice between different probabilities for a vast array of possible outcomes.[2] What will happen if we cut taxes for the rich, introduce a fine for emitting certain chemicals, or go to war with a neighboring state? All we can say is that a variety of outcomes are possible and that our decisions will affect their probabilities of occurring.

[1] Political decisions are decisions made by individual or collective actors in their role of participants in the political system. In the following, a decision under risk is a decision in which the decision-maker's evidence does not determine a single possible outcome for each option. Sometimes, the term 'decision under risk' is reserved for situations in which agents know exact objective probabilities for the various possible outcomes. I use the term in a broader sense.

[2] I use the terms 'probability' and 'likelihood' interchangeably. The probability of p is the credence in p that a rational agent would have given the relevant body of evidence. It will usually be clear from the context of the statement what the relevant body of evidence for a probability statement is. When we discuss the decisions of a particular person, the relevant evidence is the evidence that that person possesses. When we discuss the decisions of a group, it is less clear what the relevant evidence is. But for the purposes of this dissertation, a detailed account of the relevant evidence in such situations is not necessary.

One subset of the possible outcomes in political decision-making is of particular interest: outcomes in which a non-negligible proportion of the population would be harmed. Call such outcomes social risks.[3] These risks differ along many dimensions. Tsunamis harm a small proportion of the population, while a transition to an autocratic regime might harm almost everyone. A terrorist attack usually harms people within a single state, while bioengineered pandemics additionally harm people outside their countries of origin. Crop failures could occur this year, while catastrophic climate change could most likely only happen decades from today. Volcano eruptions are of natural origin, while mass unemployment is man-made. What unifies these diverse outcomes is that a non-negligible proportion of the population would be harmed if they occurred, and that, though unlikely, it is possible that they occur.

[3] Note that I use an evidence-relative sense of 'risk'. Roughly, for there to be a risk of an earthquake just is for our evidence not to rule out the possibility of an earthquake.
Many political decisions are not only decisions under risk; they are also decisions about social risks. That is, they affect what kinds of social risks exist and how large those risks are.[4] Some political decisions are explicitly framed as decisions about social risks. In 2005, Congress directed NASA to find at least 90% of potentially hazardous near-earth objects and assess how such an object could be diverted if it headed towards earth. The stated motivation for this decision was to decrease the risk of an asteroid impact. But even among political decisions that are not explicitly framed as being about social risks, most of them are. For example, increasing welfare spending might decrease the risk of extremist parties being elected, increase the risk of fiscal collapse, and so on.

[4] Strictly speaking, almost every political decision under risk will also affect the likelihoods of social risks and hence count as a decision about social risks. It might be more useful to think of a spectrum of political decisions, ranging from being entirely about social risks to being not about social risks at all.

Once one recognizes that all political decisions are decisions under risk, and that many are decisions about social risks, too, it is natural to ask what values and norms should guide such political decisions. This question contains two subquestions, which, in turn, can be broken up into many more specific questions. First, we can ask what values and norms should guide which option is chosen. We could try to answer this question in full generality, by giving a theory that, for any set of options and any context, tells us which option the political actor in that context ought to choose. For example, some might claim that political actors should choose the option that maximizes the expected sum of the happiness of present and future citizens. But we could also try to answer narrower subquestions that might inform such a grand theory. For example, we could ask whether political actors should be indifferent between any two options that differ only along some specified dimension, such as the identities of the people who are exposed to risks. I call questions about which option an actor should choose substantive questions. Second, we can ask what values and norms should guide how such decisions are made. Again, we could try to answer this question by giving a full description of procedures we ought to adopt in a variety of realistic contexts. But we could also normatively assess specific institutions that we currently find in the world and discuss possible incremental changes to them. I call questions about how decisions under risk should be made procedural questions.
The first part of this dissertation addresses several substantive questions. That is, it tries to improve our understanding of which options a political decision-maker should choose in various situations. Chapter 2 asks to what extent political decisions should be sensitive to a feature of the people affected by the decision: their risk attitudes. In particular, if some parts of the population are more risk-averse than others, should decision-makers try to shift the high-risk, high-payoff prospects to the more risk-seeking individuals? I show that if we deny that, then we violate an ex ante Pareto principle, and that if we accept it, then we violate a plausible statewise dominance principle. I then argue that this is troublesome for the view that agents might rationally have different risk attitudes.

Chapter 3 asks to what extent political decisions should be sensitive to the fact that some policies reduce risks that would kill many individuals at the same time, causing the breakdown of civilization as we know it, while other policies reduce risks that kill many individuals at different times. I discuss three reasons why states should prioritize reducing catastrophic mortality risks over non-catastrophic mortality risks, other things equal: reducing catastrophic mortality risks leads to a greater increase in aggregate future welfare, promotes the satisfaction of current citizens' interest in the continued existence of human civilization, and is required by intergenerational duties of justice.

Chapter 4 asks to what extent political decisions should be sensitive to the fact that some policies resolve more uncertainty about policy consequences than others. For example, implementing a universal basic income and observing its economic and social consequences would be quite informative. In contrast, scaling up a tried-and-true welfare program is less informative because we have less uncertainty about its consequences. I suggest that, in many cases, a central argument for choosing the more exploratory option is weaker than one might think. Moreover, I discuss the objections that exploratory policies are riskier and that one particular kind of exploratory option, policy experiments, often sets back distributive justice.
The second part of this dissertation turns to procedural questions about what mechanisms political communities should put into place to make decisions under risk. The two chapters in this part of the dissertation deal with the procedural ramifications of the fact that some members of society have more reliable judgments about policy consequences than others. In most actual societies, there is an established mechanism that is supposed to make decisions responsive to these more reliable judgments: political decision-makers invite experts to communicate their insights to them, either verbally, such as in congressional briefings, or in written reports. Chapter 5 asks how scientists should communicate their knowledge to political decision-makers in such contexts. In particular, when may scientists make claims such as 'X is toxic' even though their evidence does not conclusively show that X is toxic? The chapter discusses the view that scientists should say what makes the decision-makers' credences most accurate. It argues that this view has significant advantages over two existing views from the literature: that scientists always ought to make uncertainty explicit, and that scientists ought to say what has the best policy consequences.

The existing mechanism for letting those with more reliable judgments about policy consequences inform political decision-making has obvious issues. For example, political decision-makers have incentives to ignore expert advice if doing so is politically advantageous. Chapter 6 considers an imaginary alternative mechanism. We could set up a political system in which the role of elected representatives is confined to defining desiderata, such as minimizing poverty, that should be attained through policy-making. Then, non-democratically selected experts provide binding assessments of which policies are most likely to achieve those desiderata. In such a system, the extent to which political decisions are guided by the judgments of those with more reliable assessments of policy consequences is not dependent on the goodwill of elected representatives. Rather, expert judgments inform political decisions 'by design'. But clearly, this system would be a departure from democratic systems as we know them. In fact, it might seem non-democratic in a problematic way. Thus, one might expect that arguments for democracy could explain why we should not implement this system. But I suggest that many recent arguments fail to do so.
To situate these chapters of my dissertation within the existing literature, it is worth sorting the comprehensive, interdisciplinary body of related work into three categories. First, there is work outside philosophy on political decision-making under risk. Second, there is work on decisions under risk within ethics. Third, there is literature in political philosophy on political decision-making under risk. I will now explain how my work draws on and goes beyond these different bodies of work.

Many readers will be familiar with an influential school of thought in sociology which sees risk management as the central aspect of modern society and tries to better understand how it is done.[5] Relatedly, researchers in anthropology and political psychology examine how different social structures are intertwined with different ways of perceiving risks.[6] In contrast to such work, my dissertation focuses on normative questions. It aims to better understand how we should deal with political decisions under risk. My work is therefore more closely related to literature outside philosophy that focuses on such normative questions. Here, the work of social choice theorists, be they economists, computer scientists, or scholars in different disciplines, is particularly relevant. Social choice theorists have developed formal models of collective decisions and normative principles meant to guide such decisions.[7] The chapter on risk attitudes most heavily draws on tools from social choice theory. But it differs from much of the social choice literature by emphasizing philosophical arguments for and against various principles.

[5] See Beck (1992) and Giddens (1999).

[6] See Douglas (1970). See also Jasanoff (1999) for cross-cultural comparisons of notions of risk.

[7] See Sen (1970) and Mongin and Pivato (2016).
Let me now turn to relevant work in subdisciplines of philosophy other than political philosophy. Such work addresses questions about decision-making under risk in non-political settings. For example: How shall we understand various ethical theories about what individuals ought to do in the context of decisions under risk?[8] Is risk imposition bad even if the risk does not materialize, and if so, why?[9] Is it morally better to impose harm on an unidentified person rather than on a person with a known identity?[10] What duties do companies have when they expose workers to risks?[11] What options would it be instrumentally rational to choose in a decision under risk given some set of goals?[12]

[8] See Hansson (2013) and Lazar (2018b).

[9] See Ferretti (2010) and Oberdiek (2017).

[10] See Cohen et al. (2015).

[11] See Anderson (1988).

[12] See Savage (1954), Jeffrey (1956), and Buchak (2013).

Answers to those questions are likely to also shed light on questions about political decision-making under risk. But on their own, they cannot fully answer the questions about political decision-making under risk that I consider. These substantive or procedural questions in the political setting involve features that should make us skeptical whether the correct answer to them is just the correct answer of the closest analogous question in the non-political setting. Take the issue of how scientists should communicate their findings to policy-makers, for example. The closest analogue to this question in the non-political realm might be how experts should advise decision-makers in companies or non-profits. It is easy to see that special constraints might apply in the political setting. For example, one pressing question in the political setting is how to reconcile the democratic idea that people should have an equal opportunity to influence collective decisions with the increased influence that experts wield. This question does not arise in the non-political setting because we do not think that people should have an equal opportunity to influence the decisions of, say, Walmart or Greenpeace. Except for chapter 2 on the normative significance of risk-aversion, all chapters in this dissertation include questions and arguments which only apply to the political context.
Finally, in political philosophy, political decision-making under risk only relatively recently attracted sustained attention.[13] A significant share of this work attempts to understand how some general value or normative concept should be applied to situations that involve risk.[14] While I draw on this work at times, my approach is often more applied. Much of the existing work asks how a particular consideration, such as distributive equality, plays out in all possible decisions. In contrast, I often focus on a specific class of decisions, such as decisions of scientists about how to communicate their findings or decisions in which affected individuals have different risk attitudes. Answers to more abstract questions, such as how the value of distributive equality plays out in risky decisions, are often relevant for more applied work, as they entail judgments about these particular decisions. But focusing on specific kinds of decisions nevertheless leads to an emphasis on different considerations. First, one can discuss any consideration that might bear on how to make those decisions, whether that consideration is tied to a more general view, such as a view about distributive equality, or is relatively free-floating in theoretical space. Second, one can ignore differences between views about general questions if they do not affect what these views entail about the concrete decisions at hand.

[13] That said, early theorizing about political decision-making under risk can be found at least as far back as the Renaissance. In The Prince, Machiavelli writes:

    I judge that it might be true that fortune is arbiter of half of our actions, but also that she leaves the other half or close to it, for us to govern. [...] [T]he prince who leans entirely on his fortune comes to ruin as it varies. I believe, further, that he is happy who adapts his mode of proceeding to the qualities of the times [...] (Machiavelli, 1998 [1532], ch. XXV)

Machiavelli recognizes that luck partly determines the outcome of political decisions and draws the normative conclusion that political decision-makers ought to be ready to change what they do in response to changing circumstances. For a recent study of the treatment of risk in early modern political thought, see Nacol (2016).

[14] In the context of rights-based theories, there is a debate on the connection between rights and risks that goes back at least to Nozick's Anarchy, State, and Utopia, but is still ongoing (see Nozick (1974) and Oberdiek (2017)). There is recent work on how to understand Scanlonian contractualism (a theory of what actions are morally permissible) for decisions under risk (see Frick (2015)). There is also intensive work, usually from a welfarist perspective, on how the value of distributive equality should be understood in the context of risk. In particular, shall we aim for equality of ex ante welfare, or for equality of ex post welfare, or for some combination thereof (see Diamond (1967), Hammond (1983), Broome (1984), and Gajdos and Maurin (2004))? Recently, it has also been asked how distributive equality should be understood in situations in which we only have imprecise probabilities (see Rowe and Voorhoeve, 2019).
Of course, in addition to literature in political philosophy that is less applied, there also exists much research on more applied questions. Sometimes, this research tackles the very same questions that the chapters in this dissertation deal with. Here, the differences between the existing work and my work are more nuanced and will be discussed in the respective chapters. The purpose of the preceding discussion was merely to give a broad overview of how this dissertation relates to major categories of existing work.

This dissertation is written in the hope that, despite the extensive body of existing work, further explorations of political decision-making under risk are worth our time and attention. The subject matter is complex and gives rise to a myriad of questions, at least some of which remain to be adequately elucidated. Also, from a more practical point of view, political decisions about risks from the financial system, climate change, and, at the time of writing, the COVID-19 pandemic illustrate that humanity is still at the very beginning of figuring out how to collectively make decisions under risk. Maybe philosophy can help us handle risks better. There are at least two ways in which it might do so. First, it might clarify considerations for and against various views on how to make political decisions under risk. For this purpose, it is often useful to accentuate the differences between views. The chapter on science advice in this dissertation takes this approach: it attempts to isolate three distinct views on how scientists should communicate their findings to policy-makers and offers arguments for and against them. But sometimes, this mode of thought can stand in the way of another, maybe more important contribution philosophy can make. Philosophy can explain what different normative views agree on rather than what they differ in. The chapter on catastrophic risks is one of the chapters that tries to do this, by showing how a preference for reducing risks to people from catastrophic rather than non-catastrophic sources might be justified from a variety of perspectives. Thus, this dissertation contains attempts to clarify but also to conciliate, where possible. While these attempts might often fail, I hope that higher aspirations are visible from time to time.
Chapter 2
Risk Attitudes and Social Choice
Abstract
How should we choose on behalf of groups of agents who violate expected utility theory by
being risk-averse or risk-seeking? Unfortunately, we sometimes have to either choose acts
that everyone disprefers or acts that are sure to turn out worse than another act. This
observation is particularly troubling for risk-expected utility theorists: neither option sits
comfortably with their view.
2.1 Introduction
Carla won a ticket for a concert in April and a ticket for a concert in August. The concert
in April lasts for two hours, but there is a 50% chance that it will be canceled due to bad
weather. The concert in August only lasts for one hour, but it is certain that it will take
place. Call the ticket for the two-hour concert in April the 'risky ticket', and the ticket for the one-hour concert in August the 'safe ticket'. Carla decides to give away the tickets to
her two friends, Alice and Bob. Carla wonders: Should I give Alice the risky ticket and Bob
the safe ticket, or vice versa?
Carla knows that Alice and Bob equally enjoy going to concerts. If Alice went to the one-hour concert, she would be at the same welfare level as Bob if he went to the one-hour concert. Similarly, if Alice went to the two-hour concert, she would be at the same welfare level as Bob if he went to the two-hour concert. Furthermore, Carla knows that Alice and Bob benefit twice as much from a two-hour concert as from a one-hour concert. We can summarize this information in a table that maps each combination of an act (who gets which ticket) and a state (whether it rains in April) to a vector (x, y) that represents the welfare level x of Alice and the welfare level y of Bob. We fix units by arbitrarily assigning the number 0 to the welfare level of not going to a concert and the number 1 to the welfare level of going to a one-hour concert.
                                Sunny April    Rainy April
Risky ticket for Alice,
  safe ticket for Bob             (2, 1)         (0, 1)
Safe ticket for Alice,
  risky ticket for Bob            (1, 2)         (1, 0)
Carla reasons as follows:
Either April will be sunny or rainy. If April is sunny, then no matter how I give
out the tickets, one person is at welfare level 2 and one person is at welfare level
1, and so, since I should be impartial, I have no reason to prefer one way of giving
out the tickets over the other. If April is rainy, then no matter how I give out
the tickets, one person is at welfare level 1 and one person is at welfare level 0
and so, again, I have no reason to prefer one way of giving out the tickets over
the other. So, I ought to be indifferent between the two ways of giving out the tickets.
After concluding that she should not prefer one of the two acts, Carla flips a coin, and gives the safe ticket to Alice and the risky ticket to Bob. Carla's reasoning seems unobjectionable.
But now suppose that Carla has an additional piece of information. She knows that Alice and Bob differ in their attitudes towards risk. Alice is venturesome: she is inclined to take risks. In particular, Alice would prefer the risky ticket to the safe ticket if offered a choice between the two. In contrast, Bob is cautious: he is inclined to play safe. Bob would prefer the safe ticket to the risky ticket if offered a choice between the two.

Given that Carla knew this, it seems that she maybe did not do the right thing, after all. It seems that Carla ought to have given Alice the risky ticket and Bob the safe ticket, given that Alice is venturesome and thus prefers the risky ticket and Bob is cautious and thus prefers the safe ticket. On the other hand, her statewise reasoning sounded compelling as well.

A small modification of the example shows that the clash between respecting individual preferences and statewise reasoning is even more dramatic. Suppose that Alice likes the color of the safe ticket and Bob likes the color of the risky ticket. Therefore, they would get a small additional benefit, call it ε, if Carla gave the safe ticket to Alice and the risky ticket to Bob.
                                Sunny April         Rainy April
Risky ticket for Alice,
  safe ticket for Bob              (2, 1)              (0, 1)
Safe ticket for Alice,
  risky ticket for Bob         (1 + ε, 2 + ε)      (1 + ε, 0 + ε)
Alice still prefers the risky ticket, and Bob still prefers the safe ticket. The small additional benefit of getting a ticket with a color pleasing to their eyes is not enough to change their preferences. But Carla might now reason that, in each state, giving the safe ticket to Alice and the risky ticket to Bob is strictly better than, rather than equally good as, giving the risky ticket to Alice and the safe ticket to Bob. For example, it is strictly better if one individual is at welfare level 1 + ε and another at welfare level 2 + ε than if one individual is at welfare level 2 and another at welfare level 1.[1] Thus, statewise reasoning tells her to strictly prefer the act which Alice and Bob unanimously disprefer: giving the safe ticket to Alice and the risky ticket to Bob.

[1] (1 + ε, 2 + ε) is clearly better than (1, 2): both individuals are better off in the former outcome. Also, it seems that impartiality requires that one is indifferent between (1, 2) and (2, 1). Thus, it seems that one should prefer (1 + ε, 2 + ε) to (2, 1).

These examples illustrate a general dilemma: if we choose on behalf of agents who have different risk attitudes, we must either choose acts which they all disprefer or acts which are sure to turn out worse, or, at least, no better, than some alternative. This chapter explores how those who think that Alice and Bob are perfectly rational might respond to the dilemma.

This is worth doing for several reasons. First, it gives us a better sense of the moral quandaries one gets oneself into if one assumes that rational agents may have different risk attitudes. I will suggest that if one makes this assumption, grasping either horn of the dilemma raises formidable challenges. Second, actual people have risk attitudes that deviate from risk-neutrality.[2] Thus, even if one does not accept that rational agents may have different risk attitudes, the question of how to choose on behalf of agents with different risk attitudes does not go away. Although this chapter discusses possible responses by someone who thinks that Alice and Bob are rational, someone who rejects this assumption will hopefully find this chapter interesting, too. The exposition of the dilemma is independent of whether Alice and Bob are rational or not, and the general structure of the arguments might still be suitable under the assumption that Alice and Bob are irrational. Third, the question might also have practical significance for policy-making. This chapter focuses on the simple example of Carla deciding on behalf of Alice and Bob, but the considerations in that case could potentially be transferred to decision-making on behalf of entire societies. Actual policy-makers often evaluate acts that increase or reduce risks for citizens, such as approving the testing of self-driving cars on public roads or implementing government-run deposit insurance systems. To develop better frameworks for evaluating such policies, it might be helpful to better understand how we could incorporate individual risk attitudes into social choice. Fourth, although the problem is framed as how to choose on behalf of groups, this chapter can also be read as an investigation into what groups ought to do. After all, it is plausible that, in many circumstances, what an ideal observer should choose on behalf of the group coincides with what the group ought to do.

[2] For an empirical study of risk attitudes, see Abdellaoui (2000).
While there is a significant body of work on social choice under risk, it usually assumes that individual preferences satisfy the axioms of expected utility theory, and hence that they are risk-neutral with respect to the relevant notion of welfare or utility.[3] The recent wave of interest in the normative dimensions of risk and individual risk-aversion makes it timely to investigate social choice theory without assuming individual risk-neutrality.[4]

[3] Harsanyi's (1955) social aggregation theorem is one of the central contributions in the field of social choice under risk. For an excellent overview of classical results and more recent developments in this field, see Mongin and Pivato (2016).

[4] Buchak explores the consequences of the assumption that rationality does not require risk neutrality for social choice under certainty (see Buchak, 2017). In contrast, I explore the consequences of this assumption for social choice under risk.

Before we dive into the substantial discussion, let me make some clarifications about the examples. First, the numbers in the tables are supposed to reflect all welfare consequences of the outcomes for Alice and Bob. One might think that Bob prefers the safe ticket and Alice prefers the risky ticket because Bob would have more intense feelings of disappointment than Alice if his concert were canceled. But one should not understand the examples in that way. Since both Alice and Bob are at welfare level 0 in the case of getting the risky ticket and April being rainy, and these numbers reflect all welfare consequences, Bob is not saddened by bad fortune any more than Alice is. He prefers the safe ticket not because he tends to feel intensely disappointed but because he prefers to pursue his welfare in a cautious manner in situations of risk.

Second, one might think that, whatever is meant by 'welfare' in the above examples, it does not capture everything that Alice and Bob care about when they evaluate outcomes. In particular, one might think that Bob dislikes outcomes in which he had been exposed to risks. Thus, he evaluates the difference between the outcome of staying home and the outcome of going to a two-hour concert after having been exposed to a 50% chance of staying home as less than twice as large as the difference between the outcome of staying home and the outcome of going to a one-hour concert after not having been exposed to a chance of staying home, even though the former outcome offers an improvement twice as large in terms of welfare. But one should not understand the case in this way, either. I stipulate that Alice and Bob only care about their own welfare when they evaluate outcomes. They are indifferent between any two outcomes that give them the same welfare level, no matter what risk of ending up at a different welfare level they had been exposed to in that outcome.
Third, some expected utility theorists might say that the case I described is conceptually impossible. I said that Alice prefers a 50% chance of a two-hour concert to a 100% chance of a one-hour concert. I also said that Alice's welfare gain from a two-hour concert rather than staying home is twice as high as the welfare gain from a one-hour concert, and that Alice evaluates outcomes purely according to the welfare level she enjoys at those outcomes. Some expected utility theorists might say that it is conceptually impossible that these two statements are both true. The only way to make sense of a quantitative notion of welfare, used in statements such as 'Alice's welfare gain from a two-hour concert is twice as high as from a one-hour concert', is to understand it as that quantity, whatever it is, that Alice maximizes the expectation of. Thus, if Alice prefers a 50% chance of a two-hour concert to a 100% chance of a one-hour concert, then the only meaningful notion of 'welfare' is such that Alice's welfare gain is more than twice as high for a two-hour concert as for a one-hour concert.[5] For the purposes of this chapter, I will set aside this worry. I will assume a notion of welfare that is not constructed from preferences in this way. This assumption is shared, in particular, by those who hold that different risk attitudes are rational, which is the perspective from which I look at the dilemma.

[5] Broome, for instance, argues that a quantitative notion of 'goodness for a person' receives its meaning from comparisons between risky alternatives (see Broome, 1993).

Of course, not all expected utility theorists insist on understanding quantitative notions of welfare in terms of expected utility maximization. Those who are fine with other notions would say that the examples are conceptually coherent but involve Alice and Bob being irrational. If Alice and Bob were rational, then they would both be indifferent between the risky and the safe ticket, since both tickets have an expected welfare of 1, and we assumed that they only care about their own welfare. As I said above, I will evaluate different responses to the dilemma under the assumption that the agents are rational, but I hope that this chapter will also be of interest to those who reject this assumption.
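Spelling this out with the chapter's own numbers (a restatement of figures given above, not an addition to the argument):

\[
\mathbb{E}[\text{risky ticket}] = 0.5 \cdot 2 + 0.5 \cdot 0 = 1
\qquad\text{and}\qquad
\mathbb{E}[\text{safe ticket}] = 1 \cdot 1 = 1,
\]

so an expected-welfare maximizer is indeed indifferent between the two tickets.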
Having made those clarifications, I will proceed as follows. In section 2.2, I give a precise statement of the clash between two ideas: taking individual risk attitudes into account and preserving the validity of statewise reasoning. I then discuss how someone who holds that rational agents may have different risk attitudes might respond to this clash. To make things concrete, I will focus on one such view: Lara Buchak's risk-expected utility (REU) theory.[6] In section 2.3, I discuss whether REU theorists can plausibly claim that Carla does not have to take individual risk attitudes into account and may override Alice's and Bob's unanimous preference for giving Alice the risky ticket and Bob the safe ticket. In section 2.4, I discuss whether REU theorists can plausibly give up the broadly consequentialist approach to social choice which licenses Carla's statewise reasoning. I conclude with some general reflections in section 2.5.

[6] See Buchak (2013). REU theory generalizes the expected utility formula in the same way as rank-dependent utility theory (see Quiggin, 1982). Besides extending rank-dependent utility theory with a representation theorem that allows inferring beliefs, utilities, and risk attitudes from preferences over prospects, Buchak's contribution is to defend it as a normative decision theory.
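For orientation, the rank-dependent form of the REU formula runs as follows (my paraphrase of Buchak's proposal for prospects with finitely many outcomes; the notation is mine, not the chapter's): for welfare levels $x_1 \leq \dots \leq x_n$ received with probabilities $p_1, \dots, p_n$, utility function $u$, and risk function $r$,

\[
\mathrm{REU} = u(x_1) + \sum_{j=2}^{n} r\!\left(\sum_{k=j}^{n} p_k\right)\bigl(u(x_j) - u(x_{j-1})\bigr).
\]

Expected utility theory is the special case $r(p) = p$; a risk-avoidant agent has $r(p) \leq p$ (for example, $r(p) = p^2$), and a risk-inclined agent has $r(p) \geq p$.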
2.2 Risk Attitudes and Statewise Reasoning
2.2.1 Preliminaries
Putting the dilemma Carla faces in more formal terms will help structure the subsequent
discussion, clarify what exactly is at stake, and elucidate how it relates to existing work. I
will now introduce the theoretical framework and some background assumptions.
I assume as given a set of individuals, a set of states of the world, and a set of outcomes. Furthermore, I assume as given a set of social acts. A social act is a function from states of the world to outcomes. Social acts are the formal representation of the alternatives among which the impartial observer decides. For example, I model Carla's alternative to give Alice the risky ticket and Bob the safe ticket as the social act that maps the state 'Sunny April' to the outcome 'Alice goes to a two-hour concert and Bob goes to a one-hour concert' and the state 'Rainy April' to the outcome 'Alice stays at home and Bob goes to a one-hour concert'.

Often, it is unknown which state of the world obtains and therefore to which outcome a social act will lead. Then, individuals and the observer assign probabilities to the various states of the world. I limit the scope of this chapter to cases in which each individual and the observer all share the same probabilities, with the exception of a short discussion in section 2.3.1.
I make the substantive assumption that only individual well-being matters for the evaluation of outcomes. More precisely, I assume that individuals' preferences and the observer's preferences over outcomes supervene on the individuals' well-being in those outcomes.[7] For instance, Carla's preferences over outcomes are not sensitive to other features of the outcome of the acts, such as the exact arrangement of gravel on the road leading to the concert venue, which might differ depending on who goes to which concert, but which is irrelevant to how well off Alice and Bob are. Given this assumption, we can identify outcomes with welfare profiles. A welfare profile is a vector which contains one welfare level for each individual. The welfare level indicates how well off the individual is in the outcome. I do not assume any particular theory of what welfare is.[8] Since outcomes are identified with welfare profiles, social acts are functions from states to welfare profiles.[9]

[7] The assumption that the observer's preferences depend only on the welfare consequences of the different social acts is plausible in decisions between different ways of distributing goods to people who have not produced the goods and do not stand in any existing cooperative relations to each other. Assuming the absence of such relations makes the restriction to welfare consequences more plausible because such relations could ground claims to goods, which might be reasons beyond welfare consequences to prefer one distribution over another (see Rawls, 1999, p. 77). The examples in this chapter do not involve such relations.

[8] Prominent options include measuring the welfare of an individual by the satisfaction of her desires, or by the extent to which she has pleasurable experiences, or by the attainment of things on a list of (alleged) constituents of well-being, such as pleasure, knowledge, and friendship.

As indicated by talk of 'how well off individuals are', I make the substantive assumption that quantified interpersonal welfare information is available. More precisely, I assume that individual welfare is measured on a common interval scale. To illustrate, recall that Alice is at welfare level 2 rather than 0 if she goes to the two-hour concert rather than staying home, and Bob is at welfare level 1 rather than 0 if he goes to the one-hour concert rather than staying home. Under the assumption of a common interval scale, this tells us exactly the following (and nothing more): Alice gains twice as much from going to a two-hour concert rather than staying home as Bob gains from going to a one-hour concert rather than staying home, and they are equally well off when staying home.[10]

It is worth emphasizing that the way I have set up the problem builds in the controversial[11] but standard[12] assumption that the welfare levels of different individuals can be meaningfully compared and that such interpersonal welfare information can be treated as given in a theory of social choice. The point of setting up the problem in this way is to focus our attention on the interesting issues that remain for those who accept this assumption: how to use information about how well off individuals would be in various outcomes to make social decisions.
[9] Often, an act is defined as a function from states to outcomes understood as, for instance, possible worlds. Then, a separate function is assumed that maps such outcomes to welfare profiles. I omitted such a separate function by directly defining outcomes as welfare profiles merely to simplify the presentation; nothing substantial hinges on this notational matter.

[10] In general, if $w \geq w'$, then an individual at $w$ is at least as well off as an individual at $w'$, and, furthermore, if $w_1 - w_2 = a(w_3 - w_4)$, then the welfare gain of being at $w_1$ rather than $w_2$ is $a$-times as large as the welfare gain of being at $w_3$ rather than $w_4$. That is, I assume that we can compare which of two individuals is better off, and I also assume that we can quantify how large one difference between two welfare levels is relative to another difference of two welfare levels. However, I do not assume that the welfare level 0 has any special significance; one can use it for the welfare of an arbitrarily chosen individual in an arbitrarily chosen outcome. All other outcomes will receive positive or negative welfare levels relative to it. Since I only assume that the welfare numbers lie on a common interval scale, the particular numbers 2, 1, and 0 are to some extent arbitrary. We could apply any positive linear transformation to them without changing the interpersonal welfare information they contain.

[11] Among others, Arrow (1951) famously rejected the assumption of such interpersonal welfare information.

[12] Many theorists after Arrow defended assuming interpersonal welfare information (see, e.g., Sen, 1970). Much of the literature I engage with assumes interpersonal utility information as given. For example, Buchak makes this assumption without further argument (see Buchak, 2017, p. 629).

I define a prospect as a function from states to single welfare levels. I write $A_i$ for the prospect in social act $A$ for individual $i$. $A_i$ is defined as the prospect that maps each state to the welfare level that individual $i$ would be at if the social act $A$ was chosen. That is, $A_i$ is defined by $A_i(s) = (A(s))_i$. Here, $A(s)$ is the welfare profile that results from performing $A$ in state $s$, and $(A(s))_i$ is the $i$th element of that welfare profile. One can think of a prospect $A_i$ as the 'upshot' of social act $A$ for individual $i$. For example, if $A$ is the act of giving Alice the risky ticket and Bob the safe ticket, then $A_{\text{Alice}}$ would be the prospect that maps the state 'Sunny April' to 2 and the state 'Rainy April' to 0.
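To make the notation concrete, here is a minimal Python sketch of these definitions; the data representation and all names are my own illustrative choices, not part of the dissertation's formal framework.

    # A social act maps states to welfare profiles (one welfare level per
    # individual); a prospect maps states to single welfare levels.
    SocialAct = dict[str, tuple[float, ...]]
    Prospect = dict[str, float]

    def prospect(act: SocialAct, i: int) -> Prospect:
        """A_i, defined by A_i(s) = (A(s))_i: the welfare level that
        individual i would be at, in each state, if the act were chosen."""
        return {s: profile[i] for s, profile in act.items()}

    # The act giving Alice (index 0) the risky ticket and Bob (index 1)
    # the safe ticket, from the example above:
    act_a: SocialAct = {"Sunny April": (2, 1), "Rainy April": (0, 1)}
    print(prospect(act_a, 0))  # {'Sunny April': 2, 'Rainy April': 0}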
I assume as given an individual preference relation $\succeq_i$ on prospects, for each individual $i$. Strict preference $\succ_i$ and indifference $\sim_i$ are defined in terms of weak preference $\succeq_i$ in the usual way.[13] I interpret those relations as modeling evaluative attitudes of the individual. $P \succ_i Q$ means that individual $i$ strictly prefers prospect $P$ to prospect $Q$, and $P \sim_i Q$ means that the individual is indifferent between prospect $P$ and prospect $Q$.

[13] $A \succ_i B$ is defined as $A \succeq_i B$ and $B \not\succeq_i A$, and $A \sim_i B$ is defined as $A \succeq_i B$ and $B \succeq_i A$. At various points in my discussion, I rely on the standard assumption that $\succeq_i$ is complete, transitive and reflexive, although this is not needed for the central result.

The final notion in the theoretical framework is the social preference relation $\succeq$ on social acts. Social preferences express what an ideal impartial observer would choose for the group. $A \succ B$ means that the observer would choose $A$ rather than $B$ on behalf of the group, and $A \sim B$ means that the observer would be indifferent between choosing $A$ or $B$. The central question is how social preferences over social acts relate to welfare consequences and individual preferences over prospects.

2.2.2 An Impossibility Result

I will now formulate desiderata for social preferences which formalize the different ideas that came up in the case of Carla. Each of them seems plausible, but their conjunction turns out to be inconsistent with theories that permit individuals to have different risk attitudes. This gives rise to the dilemma. For ease of understanding, I will focus here and in the rest of the chapter on the first example I gave, in which statewise reasoning led Carla to be indifferent between the two ways of giving out the tickets.
We encountered the idea of letting individual risk attitudes influence social decisions. Due to their risk attitudes, Alice prefers the risky ticket and Bob the safe ticket. We were inclined to conclude that Carla should give the risky ticket to Alice and the safe ticket to Bob. After all, both Alice and Bob prefer their prospect in that act (that is, in that way of giving out the tickets) to their prospect in the alternative act. This kind of influence of individual risk attitudes on social decisions follows from a version of the Pareto principle which requires the observer to prefer $A$ to $B$ if all individuals prefer their prospect in $A$ to their prospect in $B$.

(Ex Ante Pareto) For all social acts $A, B$: if $A_i \succ_i B_i$ for all individuals $i$, then $A \succ B$.

It is easy to see that the Pareto principle forces the observer to be responsive to individual risk attitudes: the Pareto principle forces the observer to be responsive to individual preferences over prospects, and those preferences are in turn influenced by individual risk attitudes.
We also encountered the idea of statewise reasoning: comparing two social acts by comparing their outcomes in each state of the world. To capture this, we need to speak of social preferences over outcomes, that is, over welfare profiles. So far, I defined social preferences only over social acts, which are functions from states to outcomes. But we can easily define preferences over outcomes in terms of preferences over acts. For instance, for the observer to prefer (3, 2) to (1, 1) just is for the observer to prefer the act which yields the welfare profile (3, 2) in each state to the act which yields the welfare profile (1, 1) in each state. In general, I take preferences over outcomes to be preferences over such constant acts. I will write $(w_1, \dots, w_n)$ for the constant act that yields the welfare profile $(w_1, \dots, w_n)$ in each state.

Equipped with a notion of social preferences over outcomes, we can now formulate a principle which entails that, in each state, Carla is indifferent between the outcomes of the two ways of giving out the tickets. Carla was indifferent between the outcomes of the two acts if April is sunny, because in both outcomes, one individual was at welfare level 2 and one individual was at welfare level 1. Similarly, Carla was indifferent between the outcomes of the two acts if April is rainy, because in both outcomes, one individual was at welfare level 0 and one individual was at welfare level 1. The principle which underlies these preferences over outcomes is that the observer is indifferent between outcomes that involve exactly the same welfare levels and only differ in who is at which welfare level.
(Constant Anonymity) For all welfare profiles $(w_1, \dots, w_n)$ and permutations $\pi$ of individuals, $(w_1, \dots, w_n) \sim (w_{\pi(1)}, \dots, w_{\pi(n)})$.

This principle is such a plausible constraint on an impartial observer that I will treat it as a fixed point in my discussion. An example of a violation of this principle would be if Carla preferred an act which, in each state, makes Alice end up at welfare level 2 and Bob at welfare level 0 over an act that makes Alice end up at welfare level 0 and Bob at welfare level 2. Such a preference clearly fails to be impartial: Carla would favor Alice's well-being over Bob's well-being.[14]

[14] One might reject (Constant Anonymity) in cases in which individuals are to some extent responsible for the welfare level they end up with in an outcome. For example, one might hold that an observer should not be indifferent between an outcome in which a criminal is at a low welfare level, and an outcome in which an innocent person is at the same low welfare level. But this worry about (Constant Anonymity) does not motivate a rejection of the principle in the case of Alice and Bob.
Carla then concluded that she ought to be indifferent between the two acts, given that she is indifferent between their outcomes in each state. This inference seems very reasonable: if you are sure that $A$ will turn out at least as well as $B$, then it would seem unreasonable to strictly prefer $B$ to $A$. More perspicuously, the principle which licenses such statewise reasoning says that if the observer weakly prefers the outcome of act $A$ to the outcome of act $B$ in each state, then the observer weakly prefers $A$ to $B$.

(Dominance) For all social acts $A, B$: if $A(s) \succeq B(s)$ for all states $s$, then $A \succeq B$.

Since indifference between two acts is defined as weak preference in both directions, this principle entails that if Carla is indifferent between the outcomes of the two acts in each state, then she is indifferent between the two acts.
The last idea we need to formalize is that individuals have different risk attitudes. The important consequence of this is that Alice and Bob have different preferences over the same pair of prospects. Alice is risk-seeking, so she prefers the risky ticket to the safe ticket. In contrast, Bob is risk-averse, so he prefers the safe ticket to the risky ticket. This violates expected utility theory, which requires that all individuals rank prospects in exactly the same way, namely, by their expected utility.

(Individual Preference Divergence) For some individuals $i, j$ and prospects $P, Q$: $P \succ_i Q$ and $Q \succ_j P$.
The central observation can now be stated as follows:[15]

Proposition. (Ex Ante Pareto), (Constant Anonymity), (Dominance), and (Individual Preference Divergence) are inconsistent if the number of individuals is two.
Proof. By (Individual Preference Divergence), there are prospects P and Q such that P ≻_1 Q and Q ≻_2 P. Consider the social acts A = (P, Q) and B = (Q, P).[16] Then, A_1 ≻_1 B_1 and A_2 ≻_2 B_2. By (Ex Ante Pareto), A ≻ B. However, since A_1 = B_2 and A_2 = B_1, we have that for any state s, (A(s))_1 = (B(s))_2 and (A(s))_2 = (B(s))_1. Thus, the welfare vectors in each state are permutations of each other. Hence, by (Constant Anonymity), A(s) ∼ B(s) for all s. Thus, by (Dominance), A ∼ B. Contradiction.
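To make the permutation step concrete, here is a minimal sketch (my own illustration, not part of the dissertation) that checks it on the ticket example, with P the risky ticket and Q the safe ticket:

```python
# Hypothetical check of the proof's key step on the ticket case.
# A prospect maps states to welfare levels; a social act pairs prospects.
P = {"sunny": 2, "rainy": 0}  # risky ticket: two-hour concert or nothing
Q = {"sunny": 1, "rainy": 1}  # safe ticket: one-hour concert for sure

A = {s: (P[s], Q[s]) for s in P}  # Alice gets P, Bob gets Q
B = {s: (Q[s], P[s]) for s in P}  # Alice gets Q, Bob gets P

# In each state, A's welfare profile is a permutation of B's, so
# (Constant Anonymity) forces statewise indifference, and (Dominance)
# then forces indifference between the acts themselves.
assert all(sorted(A[s]) == sorted(B[s]) for s in P)
```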
15. To my knowledge, the fairly obvious clash between respecting individual preferences and statewise reasoning if individuals have different risk attitudes has so far not been discussed in depth from a normative perspective. However, it has certainly been noticed by others, albeit in slightly different settings. For example, consider Hammond's investigation of commodity markets in situations of risk in Hammond (1981, pp. 239–242). He works in a standard expected utility model in which an individual's attitude towards risk is captured by the shape of her utility function over different quantities of a good. This leads to slightly different results, but some of them are motivated by the same basic observation: if different individuals have different risk attitudes, then an ex ante Pareto condition and a statewise dominance condition can come into conflict. Thanks to Kacper Kowalczyk for bringing this paper to my attention.

McCarthy et al. (2016) derive an interesting result in a similar framework. However, they focus on the problem of how a social observer's preference relation over prospects can be turned into a social preference relation over social acts. Making a single preference relation over prospects the basis for social decisions rules out that different risk attitudes of individuals can be taken into account in such decisions.

Finally, in work done independently of this chapter, Nebel (2020, pp. 103–104) shows that even if individuals have the same risk-averse preferences, one cannot have an ex ante Pareto principle and a statewise dominance principle.
16. I use the notation A = (P, Q) for the social act A so that A_1 = P and A_2 = Q.
This result makes two unrealistic simplifying assumptions: that there are only two affected individuals, and that the observer faces a choice between acts A and B so that Alice's prospect in A is exactly the same as Bob's prospect in B. Neither of these assumptions is essential. Versions of this result show that the clash between Pareto reasoning and statewise reasoning also occurs in cases with arbitrary numbers of individuals[17] and in cases in which Alice's prospect in A is not exactly the same as Bob's prospect in B.[18] I focus on the case of two individuals and exact equality merely because it strikes me as the most straightforward illustration of the conceptual issues of interest.

In the introduction, I presented a modification of the example in which statewise reasoning led to a strict preference for the act that is unanimously dispreferred. The corresponding impossibility is stated in the appendix.

The result shows that anyone who is willing to accept (Individual Preference Divergence) must give up either (Ex Ante Pareto) or (Dominance); they could instead give up (Constant Anonymity), but as I noted earlier, I will not consider this option. This dilemma can be broadly applied.
17. To obtain cases with more than two individuals, one could, for example, add individuals who have the same welfare independently of whether A or B is chosen. An inconsistency can then be formally derived if we strengthen (Ex Ante Pareto) as follows:

(Strong Ex Ante Pareto) For all A, B, if A ⪰_i B for all i, and A ≻_i B for at least one i, then A ≻ B.
18. For example, in the variant of the case in which Alice and Bob get a small extra benefit, Alice's prospect in A is not exactly the same as Bob's prospect in B. See the appendix for a version of the impossibility that applies to such cases.

One could make explicit the background assumption that the domain of social acts contains A = (P, Q) and B = (Q, P) by modifying (Individual Preference Divergence) as follows:

(Individual Preference Divergence') There are social acts A = (P, Q) and B = (Q, P) so that P ≻_i Q and Q ≻_j P for some individuals i, j.

Put that way, it is easy to see that there is a variant of the impossibility result with a strictly weaker version of this assumption that does not assume exact equality:

(Weak Individual Preference Divergence') There are social acts A = (P, Q) and B = (Q⁺, P⁺) so that
1) for all s, P⁺(s) ≥ P(s) and Q⁺(s) ≥ Q(s), and
2) P ≻_i Q⁺ and Q ≻_j P⁺ for some individuals i, j.

This is strictly weaker than (Individual Preference Divergence'), which assumes that P = P⁺ and Q = Q⁺. I omit the corresponding impossibility result for reasons of space.
Anyone who thinks that individuals may have different preferences between the same pair of prospects (that something in addition to the prospects themselves may influence an individual's preference) faces the dilemma. Among those who need to grapple with this challenge are REU theorists.
2.2.3 REU Theory for Individuals and (Individual Preference Divergence)
The point of REU theory is to allow individuals to have different risk attitudes that influence their preferences over prospects. Therefore, it should not come as a surprise that REU theory entails that two rational individuals might have different preferences over the same pair of prospects. I will now give a brief description of Buchak's REU theory and explain how it applies to the case at hand.[19]
According to REU theory, rational individuals rank prospects by their risk-expected utility. Risk-expected utility is a generalization of expected utility. Instead of weighting each utility increment that an act might produce by the probability p of getting at least that utility increment, the increments are weighted by r(p), where r is a risk function.[20] The risk function captures the individual's risk attitude. If it is convex, such as r(p) = p², then the individual places disproportionately high weight on the bad outcomes of an act and will therefore be risk-averse. If it is concave, such as r(p) = √p, then the individual places disproportionately small weight on the bad outcomes of an act and will therefore be risk-seeking. If it is the identity function, r(p) = p, then the individual is risk-neutral. In that case, the expression for risk-expected utility reduces to the expression for expected utility. Thus, REU theory is a strict generalization of expected utility theory: it holds that expected utility maximization is rationally permitted but not required.
19. For a more comprehensive summary of REU theory, see Buchak (2014).

20. A risk function is a function r : [0, 1] → [0, 1] such that
1. r is non-decreasing,
2. 0 ≤ r(p) ≤ 1 for all p ∈ [0, 1],
3. r(0) = 0 and r(1) = 1.
I write REU_{r,Pr}(P) for the risk-expected utility of a prospect P with respect to the risk function r and the probability function Pr.[21] The claim that rational agents are REU maximizers can then be stated as follows:

(Individual REU) For each rational agent i, there exists a risk function r_i such that for all prospects P, Q: P ⪰_i Q iff REU_{r_i,Pr}(P) ≥ REU_{r_i,Pr}(Q).
Clearly, (Individual REU) allows Alice to prefer the risky ticket to the safe ticket and Bob to prefer the safe ticket to the risky ticket. To derive this conclusion, assume that Alice's (or Bob's) utility function assigns to an outcome the welfare level of Alice (or Bob) in that outcome. This corresponds to our stipulation that Alice and Bob evaluate outcomes purely by their welfare in those outcomes. REU theory does not make any substantial assumptions about what rational individuals value and thus does not rule out that some rational individuals only care about their welfare.[22]
Suppose that Alice's risk function is r_Alice(p) = √p and Bob's risk function is r_Bob(p) = p². Alice is risk-seeking and Bob is risk-averse. Then, Alice's risk-expected utility for getting the risky ticket, that is, for a 50% chance of a welfare gain of 2, is approximately 1.41, while her risk-expected utility for getting the safe ticket is only 1.[23] In contrast, Bob's risk-expected utility for getting the risky ticket is only 0.5, while his risk-expected utility for getting the safe ticket is 1.[24] Hence, Alice prefers the risky ticket and Bob the safe ticket.
21. REU_{r,Pr}(P) is defined as follows:

REU_{r,Pr}(P) = P(s_0) + Σ_{k=1}^{n} r(Pr(s_k ∨ … ∨ s_n)) · (P(s_k) − P(s_{k−1})),

where the states are indexed such that P(s_0) ≤ … ≤ P(s_n).
22. Note also that I could have stated the impossibility in terms of the broader notion of `utility', understood as a measure of how good things are according to whatever influences the individual's preferences over outcomes, rather than in terms of the narrower notion of `welfare'. Then, one would assume that an individual ranks social acts by the risk-expectation of her utility, without assuming that utility and welfare coincide. This would allow, for instance, that an individual cares about the well-being of others, too, even if it has no effect on her own well-being.

That said, if one uses a broad notion of utility rather than a narrow notion of welfare, some of the principles used to derive the impossibility might lose their appeal (see Ng, 2000, section 4.1).
23. Alice's risk-expected utility for getting the risky ticket is given by 0 + √(1/2) · 2 ≈ 1.41.
24. Bob's risk-expected utility for the risky ticket is given by 0 + (1/2)² · 2 = 1/2.
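These calculations can be checked mechanically. Here is a minimal sketch (mine, not Buchak's) implementing the footnote-21 formula for prospects given as (probability, welfare) pairs:

```python
def reu(prospect, r):
    # Footnote-21 formula: start from the worst welfare level, then add
    # each increment weighted by r(probability of getting at least that much).
    outcomes = sorted(prospect, key=lambda pw: pw[1])  # worst to best
    total = outcomes[0][1]
    for k in range(1, len(outcomes)):
        p_at_least = sum(p for p, _ in outcomes[k:])
        total += r(p_at_least) * (outcomes[k][1] - outcomes[k - 1][1])
    return total

r_alice = lambda p: p ** 0.5  # concave risk function: risk-seeking
r_bob = lambda p: p ** 2      # convex risk function: risk-averse

risky = [(0.5, 2), (0.5, 0)]  # 50% chance of welfare 2, else 0
safe = [(1.0, 1)]             # welfare 1 for certain

print(reu(risky, r_alice), reu(safe, r_alice))  # ~1.41 and 1.0
print(reu(risky, r_bob), reu(safe, r_bob))      # 0.5 and 1.0
```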
Thus, (Individual REU) rationalizes the preferences described in (Individual Preference Divergence). Two rational individuals may have opposite preferences between the same pair of prospects.

It is worth emphasizing that the strictly stronger restrictions on individual preferences imposed by EU theory rule out such preferences. If Alice and Bob were EU maximizers, they would both be indifferent between the risky ticket and the safe ticket since both tickets have an expected utility of 1. Thus, EU theorists deny (Individual Preference Divergence), at least in cases in which the individuals are rational and assign the same probabilities to the states of the world.[25] The impossibility result hence gives rise to an objection against generalizing EU theory to REU theory: the move to REU theory would force us to give up one of several plausible principles of social choice even in situations in which all agents are rational and have the same beliefs. To assess how forceful this objection is, we need to get a better sense of whether there is a plausible way out of the dilemma for REU theorists.
2.3 Giving up (Ex Ante Pareto)
Taking (Constant Anonymity) for granted, REU theorists need to give up either (Ex Ante Pareto) or (Dominance). This section and the next will outline potential costs of each of those responses.

Suppose REU theorists responded to the dilemma by denying that Carla ought to give the risky ticket to Alice and the safe ticket to Bob, even though Alice prefers the risky ticket and Bob prefers the safe ticket. This would require REU theorists to give some argument why (Ex Ante Pareto) may be violated in the particular case that gives rise to the dilemma.[26]
25. In cases in which the individuals assign different probabilities to the states of the world, EU theorists face the dilemma, but they might also be able to offer a plausible solution, which I will discuss in section 2.3.1.
26. At first glance, rejecting (Ex Ante Pareto) might seem to be a natural choice for REU theorists. After all, they reject the following principle for individual decision-making:

(Eventwise Dominance) If, for some event E, an agent prefers A to B conditional on E and conditional on ¬E, then the agent prefers A to B unconditionally.

This principle for individual decision-making is structurally analogous to (Ex Ante Pareto) for social decision-making: preferences conditional on different events take the place of preferences of different individuals. Given that REU theorists reject (Eventwise Dominance), one might think that it is natural for REU theorists to also reject (Ex Ante Pareto).

But (Eventwise Dominance) and (Ex Ante Pareto) are also disanalogous in important respects. In particular, one principle concerns aggregating across events and the other aggregating across people, and this might well make a normative difference. Additional arguments would be needed to explain why accepting failures of (Eventwise Dominance) should make one accept a failure of (Ex Ante Pareto) in the social choice problem at hand.
Various arguments have been given that the Pareto principle may be violated in some circumstances. The question is whether any of those justifications, or variants thereof, can be adopted by REU theorists to defend violations of (Ex Ante Pareto) in cases such as that of Carla. Some of the well-known justifications for violations of the Pareto principle are clearly inapplicable. For instance, some have argued that if the unanimously preferred act leads to outcomes with higher welfare inequality than the dispreferred act, then it is sometimes permitted to choose the dispreferred act.[27] This justification cannot justify Carla overriding Alice's and Bob's preference because the outcomes of the two ways of giving out the tickets do not differ in their degree of welfare inequality. Also, according to REU theorists, Alice and Bob are perfectly rational in preferring the risky and the safe ticket, respectively. Hence, REU theorists cannot say that Alice's and Bob's preferences are irrational and hence may be overridden, which is a response to the dilemma that EU theorists might be attracted to.

I will now consider two candidate justifications that might initially look more promising. I will argue that neither can be used to justify violations of the Pareto principle in the cases under discussion.
2.3.1 Justifiability to Individuals
It has been argued that violations of the Pareto principle are permissible if different individuals assign different probabilities to the states of the world.[28] As they stand, such justifications do not help REU theorists because in the case giving rise to the dilemma, Alice's, Bob's, and Carla's probabilities are the same.
27. See Fleurbaey and Voorhoeve (2013).
28. Gilboa et al., for example, restrict the Pareto principle to exclude cases in which individual beliefs differ, thereby permitting the observer to override unanimous individual preferences (see Gilboa et al., 2004, p. 934).
The source of Alice's and Bob's preferences is different risk attitudes, not different subjective probabilities. However, REU theorists might hope to adopt a justification of a similar shape.

Here is how a violation of the Pareto principle might be justified in cases of different probabilities. Suppose that Alice and Bob assign a probability of 0.9 and 0.1, respectively, to April being sunny. Suppose further that Alice and Bob are expected utility maximizers. Then, Alice will prefer the risky ticket, because she is very confident that the concert will not be canceled, while Bob will prefer the safe ticket, because he is very confident that the concert will be canceled. As before, there is pressure to violate the Pareto principle: (Constant Anonymity) and (Dominance) require Carla to be indifferent, no matter what her own subjective probabilities happen to be. This illustrates the well-known fact that EU theorists have to choose between (Ex Ante Pareto) and (Dominance), too, when individuals assign different probabilities to states of the world.[29]

A natural explanation for why Alice and Bob assign different probabilities is that they have different evidence. Indeed, let us assume the so-called Uniqueness Thesis (that for any body of evidence E and proposition p, there is a unique subjective probability in p that E supports), according to which they must have different evidence.[30] Given the Uniqueness Thesis, Carla knows that if Alice and Bob shared their evidence, then their subjective probabilities would be the same. Moreover, we assumed that their opposite preferences were due to different subjective probabilities. Thus, Carla also knows that if Alice and Bob shared their evidence, they would either both want the risky ticket, both want the safe ticket, or both be indifferent between them. Certainly, they would not both want Alice to get the risky and Bob to get the safe ticket. Given that Carla knows that their unanimous preference for that way of giving out the tickets is due to insufficient sharing of evidence, it seems permissible for her to override it.[31]
29. For example, Mongin (1995) shows that, given some technical background assumptions, the following are inconsistent in the case of differing individual beliefs: 1) Savage's EU axioms for individuals, 2) Savage's EU axioms for the observer (which entail a dominance principle), 3) a Pareto principle.

30. For a critique and a defense of the view that a body of evidence may rationally permit different belief states, see White (2005) and Schoenfield (2014).
31. For arguments in favor of the view that social decisions should be guided by hypothetical informed preferences rather than actual preferences, see Harsanyi (1996) and Ng (2000, p. 34).
In particular, it seems that Carla could convincingly justify her decision to Alice and Bob. Suppose Alice and Bob complained to Carla about her not having chosen their preferred way of giving out the tickets. In response, Carla could share the bits of evidence that at least one of them was unaware of. For instance, she could say: `Bob, you did not look at the historical weather data, which suggest that there is a 90% chance of sunshine in April'. After Carla justified her decision in this way, Alice and Bob would have the same evidence about the matter at hand, and hence the same probabilities. Consequently, they would update their preferences and now either both be indifferent between the two tickets or both prefer the same ticket. In both cases, they would drop their complaint against Carla that she did not give Alice the risky ticket and Bob the safe ticket. They would thus accept her justification. Since Carla has a justification for overriding their unanimous preferences that they would accept, it seems that she may permissibly do so.[32] Thus, the Pareto principle fails if individuals have different probabilities.
Could REU theorists give an analogous argument to justify that Carla may override Alice's and Bob's preferences in the case of different risk attitudes but same probabilities? It does not seem so. If Alice and Bob have the same subjective probabilities, then there is no reason to suppose that they have different evidence. Therefore, there is no reason to suppose that if they shared their evidence, they would no longer unanimously favor giving Alice the risky ticket and Bob the safe ticket. But then, Carla cannot justify overriding their unanimous preferences by pointing out that their `improved' preferences would not be unanimous. This is an important difference between preferences based on different subjective probabilities and preferences based on different risk attitudes. Individuals recognize that their subjective probabilities and preferences based on them can be improved by taking more evidence into account.
32. This argument assumes that justifiability to all affected individuals is a strong indicator for the moral permissibility of a decision. In particular, the argument appeals to the justifiability of prospects to individuals who discount outcomes based on their probabilities, and is therefore congenial to ex ante contractualism (see Frick, 2015).
However, risk attitudes and preferences based on them are not the kind of thing that can be improved upon. That is why, if Alice's and Bob's preferences are based on different risk attitudes, Carla cannot appeal to some potential change of preferences that they would both, by their own lights, regard as improving their preferences.

Before I move on, note that the justification for violations of the Pareto principle in cases of different subjective probabilities only works if the probabilities are based on different evidence. If we reject the Uniqueness Thesis, then there will be cases in which Alice and Bob have exactly the same evidence but nevertheless assign different probabilities. Then, Carla may not be able to justify overriding their preferences by appealing to their improved preferences after sharing evidence. Thus, EU theorists who reject the Uniqueness Thesis but accept the dominance principle seem to be committed to violations of the Pareto principle in cases in which the agents have exactly the same evidence. Some might take this as an argument in favor of the Uniqueness Thesis.
2.3.2 Reasons
REU theorists might try to give a different justification for why Carla may override Alice's and Bob's unanimous preference, based on an argument by Philippe Mongin.[33] In the interest of space, I will not go into the details of the argument; giving the main idea suffices for our discussion.
Mongin argues that permissible violations of the Pareto principle occur in cases in which the individuals' probabilities are the same, and, indeed, even in cases in which there is no relevant uncertainty at all. Suppose that all individuals prefer building a bridge (act A) to the alternative of not building a bridge (act B). Act A has two consequences. One group thinks that consequence 1 is good and consequence 2 is bad, and they prefer act A because they consider the goodness of consequence 1 to outweigh the badness of consequence 2. The other group thinks that consequence 1 is bad and consequence 2 is good, and they prefer act A because they consider the goodness of consequence 2 to outweigh the badness of consequence 1. Mongin argues that in some such cases, the bridge should not be built even though all individuals prefer building the bridge. In short, his argument is that the reasons of the two groups of individuals are opposed to each other, so the observer cannot derive a coherent set of reasons for building the bridge from the individual reasons. Thus, in the absence of any other reasons in favor of building the bridge, the observer should not decide to build it. Note that this argument crucially relies on the assumption that the mere fact that all individuals prefer A to B would not be a reason for the observer to also prefer A to B.[34]

33. See Mongin (2016).
However one might assess Mongin's argument about the cases he considers, it is hard to see how REU theorists could use the argument for the case of Carla. REU theorists might try to argue that Carla cannot derive a coherent reason to prefer A to B from Alice's and Bob's reasons because these reasons involve different risk attitudes. Thus, Carla should not prefer A to B, even if Alice and Bob do.

But this argument seems weak. Suppose that Alice likes apples and Bob likes bananas, and Carla therefore gives Alice an apple and Bob a banana rather than vice versa. It would be implausible to object to Carla's choice by saying that Alice's and Bob's unanimous preferences for Alice getting the apple and Bob getting the banana are based on reasons that involve different tastes. Why is it any more plausible to object to Carla giving Alice the risky ticket and Bob the safe ticket by saying that Alice's and Bob's unanimous preferences are based on reasons that involve different risk attitudes?
Importantly, a social observer does not need to adopt the risk attitudes or tastes of different individuals if she follows their unanimous preferences. Carla's reason for preferring A is not `a 50% chance of a two-hour concert is worth giving up a safe one-hour concert (so I will give Alice the risky ticket) and a 50% chance of a two-hour concert is not worth giving up a safe one-hour concert (so I will give Bob the safe ticket)'. Such an adoption of different risk attitudes would indeed be incoherent.

34. See Mongin (2016, p. 518).
Rather, Carla's reason for preferring A is `Alice is risk-seeking and Bob is risk-averse, so giving Alice the risky ticket and Bob the safe ticket better aligns prospects with risk attitudes'. This seems to be a good reason to prefer A, at least if one thinks that those risk attitudes are perfectly rational. It is also consistent with granting that the mere fact that Alice and Bob prefer their prospect in A to their prospect in B is not a reason for Carla to prefer A to B. According to the suggested reason, it is the facts underlying these preferences (facts about Alice's and Bob's risk attitudes) rather than the preferences themselves that provide Carla with a reason to prefer A to B. It is hard to see how REU theorists could reject this line of argument. Act B distributes the packages of risks and potential benefits across individuals so that Bob is exposed to a risk (having to stay home) which he would not be willing to take in exchange for the potential gain associated with it (a two-hour concert), given that he could also have a one-hour concert for certain. On the other hand, A distributes the same packages of risks and potential benefits across individuals in a way which does not expose any individual to a risk that that individual would not be willing to take, given her alternatives. Plausibly, exposing individuals to risks they would not be willing to take, given their alternatives, should be avoided if this is possible at zero cost.[35] Thus, that A better aligns prospects with individual risk attitudes than B seems to be a good reason for Carla to prefer A to B.
In fact, Buchak endorses the following principle about choosing on behalf of an individual REU maximizer:

When making a decision for an individual, choose under the assumption that he has the most risk-avoidant attitude within reason unless we know that he has a different risk attitude, in which case, choose using his risk attitude.[36]
35. Buchak points out that if one exposes individuals to a risk they would not be willing to take, then one will not have a justification for them if things turn out badly (see Buchak, 2017, p. 636). If Carla gave Bob the risky ticket and it is raining on the day of the concert, Bob might ask why she has not given him the safe ticket, given that she knows he likes to play it safe. It is unclear what Carla could say in response. However, things are different with individuals who would have been willing to take the risk themselves. Alice cannot complain if she gets the risky ticket and April turns out to be rainy. Carla could tell her that she would have chosen the risky ticket herself.

36. Buchak (2017, p. 632).
According to this principle, the risk attitude of an individual constitutes a reason for the observer to choose one way rather than the other on behalf of that individual. But then, surely, the risk attitudes of two individuals constitute reasons for the observer to choose one way rather than the other on behalf of those two individuals.

In summary, giving up (Ex Ante Pareto) and saying that Carla should override Alice's and Bob's preferences does not seem to be a promising way for REU theorists to respond to the dilemma. Well-known justifications for violating the Pareto principle in other contexts do not seem to carry over to the case at hand. It appears that Carla could not justify overriding Alice's and Bob's unanimous preferences to them. It also seems that she has a good reason to give out the tickets in the way that aligns with Alice's and Bob's risk attitudes, given that this is possible at no extra cost. Thus, we should take a closer look at the other possible response to the dilemma.
2.4 Giving up (Dominance)
2.4.1 Dominance Principles for Social Observers
Dominance principles are very plausible for individuals. If you take act A to lead to at least as good an outcome as act B in each state of the world, then how could it be rational for you to strictly prefer act B? After all, you are certain that A will have at least as good consequences as B.[37] You have no reason to strictly prefer B to A.[38] Even decision theories which impose substantially weaker requirements on individuals than expected utility theory, such as weighted utility theory or cumulative prospect theory, preserve the dominance principle.[39] Indeed, REU theory requires individuals to satisfy the dominance principle.[40]
37. For a version of this motivation for dominance, see Fleurbaey (2010, p. 655). Note we assume throughout that the subjective probabilities of the states are independent of which act gets chosen. If not, the dominance principle famously loses its appeal (see Jeffrey, 1983, p. 9).

38. Buchak calls this idea betterness-for-reasons (see Buchak, 2013, p. 75).

39. See Fishburn (1983); Tversky and Kahneman (1992).

40. See Buchak (2013, p. 100).
If the dominance principle is plausible for individuals, does it follow that it is also plausible for a social observer? At first glance, this seems hard to deny:

Surely, when we act on behalf of other people, let alone when we act on behalf of society as a whole, we are under an obligation to follow, if anything, higher standards of rationality than when we are dealing with our own private affairs.[41]
Anyone who wants to give up (Dominance) for the social observer needs to explain why an observer is exempt from a principle of rationality that clearly applies to individuals.

Our discussion in the last section foreshadowed an argument for why (Dominance) might be implausible from the perspective of REU theorists. We have modeled outcomes as welfare profiles. Thus, (Dominance) says that if the welfare consequences of A are certain to be at least as good as the welfare consequences of B, then the observer does not prefer B to A. However, as I have just argued, it seems that REU theorists would say that an observer should not only care about the welfare consequences of acts but also about whether acts align prospects with risk attitudes. Thus, the observer might prefer B to A even though in terms of welfare, A is sure to turn out at least as good as B. Hence, one might think that REU theorists have a good motivation to reject (Dominance).
Here is a different spin on the same idea. Consider a principle (Prospect Anonymity) which says that the observer should be indifferent between two social acts that distribute the same set of prospects and only differ in which individual faces which prospect.[42] REU theorists might say that they have a good motivation to reject (Prospect Anonymity): contrary to the principle, the observer should sometimes have strict preferences between different ways of distributing the same prospects.
41. Harsanyi (1975, p. 69).

42. More precisely:

(Prospect Anonymity) For all social acts A and permutations of individuals σ, A ∼ σ(A).

Here, σ(A) is the act which is just like A with the exception of the individuals being permuted. That is, ((σ(A))(s))_i = (A(s))_{σ(i)}: the ith element of the welfare profile (σ(A))(s) is the σ(i)th element of A(s).
She should sometimes strictly prefer ways of distributing prospects that allocate risky prospects to risk-seeking individuals and safe prospects to risk-averse individuals. But one can show that (Dominance) entails (Prospect Anonymity), given (Constant Anonymity).[43] Hence, REU theorists might say that they also have a good motivation to reject (Dominance): it entails a principle that they have a good motivation to reject.
One might wonder whether REU theorists could capture the concern for the alignment of prospects with risk attitudes without giving up (Dominance). After all, REU theorists might say that even if individuals rank outcomes only by welfare levels, the observer does not. They could propose to model outcomes so that for each individual, an outcome specifies both the welfare level and a risk attitude alignment score. This score would reflect how well the act that led to this welfare level aligned the individual's prospect with her risk attitude. Giving Alice the risky ticket and Bob the safe ticket would lead to outcomes with higher risk attitude alignment scores than giving Alice the safe ticket and Bob the risky ticket. Carla would strictly prefer the outcomes of giving Alice the risky and Bob the safe ticket to the outcomes of giving Alice the safe and Bob the risky ticket. Thus, if outcomes are modeled in this non-welfarist way, Carla can follow Alice's and Bob's unanimous preferences without choosing an act whose outcomes she disprefers in each state of the world.

But if REU theorists endorsed this redescription strategy for outcomes in social choice theory, should they not also endorse it in individual decision theory? If they did, then there would be no need for REU theory anymore: Alice and Bob could be represented as maximizing expected utility.
43. Consider two arbitrary acts that distribute the same collection of prospects over individuals, that is, the same functions from states to welfare levels. Then, the welfare profiles they lead to must, in each state, be permutations of each other. Hence, by (Constant Anonymity), the observer is indifferent between the outcomes of the two acts in each state. By (Dominance), the observer is indifferent between the two acts. Thus, (Prospect Anonymity) follows from (Constant Anonymity) and (Dominance).

On its own, (Prospect Anonymity) does not entail (Dominance) or (Constant Anonymity). Hence, (Prospect Anonymity) is strictly weaker than their conjunction. But (Ex Ante Pareto), (Prospect Anonymity), and (Individual Preference Divergence) are inconsistent. Thus, we can strengthen the impossibility result by substituting (Prospect Anonymity) for the stronger conjunction of (Dominance) and (Constant Anonymity). I focused on the logically weaker version of the impossibility because I find its principles more intuitive.
principles more intuitive.
34
a one-hour concert after having been given the safe ticket. Hence, a 50% chance of the former
outcome would have a higher expected utility than a 100% of the latter outcome. EU theory
would be an adequate model for Alice's preference for the risky ticket. Unsurprisingly, REU
theorists decidedly reject this strategy of making outcomes more ne-grained in individual
decision theory.
44
It would take us beyond the scope of this chapter to review the debate
about whether making outcomes more ne-grained is problematic or not. It suces to say
that REU theorists who want to make risk alignment part of the outcomes in social choice
theory would have to explain why this does not undermine the case for REU theory in the
rst place.
Thus, REU theorists might stick to modeling outcomes as welfare profiles and justify giving up (Dominance) by saying that aligning prospects with individual risk attitudes matters. But then, what theory of social choice could REU theorists endorse?
2.4.2 Social Choice without (Dominance)
The first thing to note is that if REU theorists give up (Dominance), they cannot use their own theory for individuals, REU theory, as a theory for the social observer. This observation is somewhat surprising. At first glance, one might have expected REU theorists to say something like this about social choice: The observer should first compute the `social utility' of each outcome by summing the individual welfare levels in that outcome. She should then compute the `social risk function' by averaging the risk functions of the individuals. Then, the observer should rank social acts by their risk-expected social utility. Call this rule (REU of Sum).

However, while a rule such as (REU of Sum) would seem congenial to REU theorists, it entails (Dominance). If the observer maximizes any kind of risk-expected utility, as she does according to (REU of Sum), then she will satisfy (Dominance). REU theory entails the dominance principle, whether we apply it to an individual or to a social observer.
44. See Buchak (2013, chapter 4).
If the observer is an REU maximizer, and she prefers the outcome of A to the outcome of B in each state, then she must assign a higher utility to the outcome of A than to the outcome of B in each state, and thus A must have a higher risk-expected utility than B for the observer, whatever her risk and probability functions are. Thus, REU theorists who give up (Dominance) must reject their own theory for social decisions. Even if REU is the correct theory of individual decisions, it is not the correct theory of social decisions.
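The ticket example makes this concrete. Summing welfare across individuals, A and B induce exactly the same social-utility prospect, so any observer who maximizes a risk-expectation of social utility must be indifferent between them, whatever her risk function. A small sketch (my own illustration, not from the dissertation):

```python
A = {"sunny": (2, 1), "rainy": (0, 1)}  # Alice risky, Bob safe
B = {"sunny": (1, 2), "rainy": (1, 0)}  # Alice safe, Bob risky

social_A = {s: sum(A[s]) for s in A}  # {'sunny': 3, 'rainy': 1}
social_B = {s: sum(B[s]) for s in B}  # {'sunny': 3, 'rainy': 1}

# Identical social-utility prospects: (REU of Sum) is indifferent
# between A and B, even though Alice and Bob both strictly prefer A.
assert social_A == social_B
```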
If REU theorists cannot use their theory for individuals as a theory for the social observer, what alternative might they put forth? A natural suggestion would be that Carla ought to evaluate a social act by using the risk-expected utilities that the different individuals assign to that act. For example, REU theorists might say that Carla should rank social acts by the sums of their individual risk-expected utilities.

(Sum of REU) A ⪰ B iff Σ_{i=1}^{n} REU_{r_i,Pr}(A_i) ≥ Σ_{i=1}^{n} REU_{r_i,Pr}(B_i).
For example, suppose that Alice's risk function is r_Alice(p) = √p and Bob's risk function is r_Bob(p) = p². Consider our main example again:

            Sunny April    Rainy April
A           (2, 1)         (0, 1)
B           (1, 2)         (1, 0)

Alice's risk-expected utility of her prospect in A is approximately 1.41, and Bob's risk-expected utility of his prospect in A is 1.[45] In contrast, Alice's risk-expected utility of her prospect in B is 1, and Bob's risk-expected utility of his prospect in B is 0.5.[46] Since the sum of 1.41 and 1 is larger than the sum of 1 and 0.5, this rule generates the desired verdict that Carla prefers A to B. That is, she prefers giving Alice the risky ticket and Bob the safe ticket.
45. 0 + r_Alice(1/2) · 2 = √(1/2) · 2 ≈ 1.41.

46. 0 + r_Bob(1/2) · 2 = (1/2)² · 2 = 0.5.
In fact, Carla will always respect unanimous preferences if she uses (Sum of REU) to rank social acts: (Sum of REU) entails (Ex Ante Pareto). If all individuals prefer their prospects in A to their prospects in B, then the risk-expected utility of A is greater than the risk-expected utility of B for each individual, and therefore the sum of the individual risk-expected utilities of A is greater than the sum of the individual risk-expected utilities of B.
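A short sketch (mine, using the per-prospect values just derived) reproduces the comparison:

```python
r_alice = lambda p: p ** 0.5
r_bob = lambda p: p ** 2

sum_A = r_alice(0.5) * 2 + 1.0  # A: Alice risky (~1.41), Bob safe (1.0)
sum_B = 1.0 + r_bob(0.5) * 2    # B: Alice safe (1.0), Bob risky (0.5)
print(sum_A > sum_B)            # True: (Sum of REU) ranks A above B
```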
Should REU theorists, then, respond to the dilemma by giving up (Dominance) and advocating for rules that aggregate individual risk-expected utilities? This is not at all clear. It might seem plausible to strictly prefer distributing prospects so that prospects are aligned with the different risk attitudes of individuals if there is a way to make everyone happy, that is, if there is an act that every individual prefers. But rules based on individual risk-expected utilities go further than that. They also let individual risk attitudes affect decisions in cases in which individual preferences conflict. This leads to questionable decisions.

To illustrate, consider the following counterexample against (Sum of REU). Suppose that both Alice and Bob are ill, and Carla must decide between act C and act D. In C, Carla gives Alice a drug that will ease some of her symptoms (welfare level 1). In D, Carla flips a coin and gives either Alice or Bob a drug that will completely cure her or him (welfare level 2). We can write the two acts as follows:

            Heads      Tails
C           (1, 0)     (1, 0)
D           (2, 0)     (0, 2)
Clearly, Carla should prefer curing one of Alice or Bob to merely easing Alice's symptoms, no matter what Alice's and Bob's risk attitudes are. To see why, note first that Carla should clearly prefer curing Alice with certainty (act C⁺) to merely easing some of her symptoms with certainty (act C): C⁺ ≻ C. But also, Carla should prefer flipping a coin to determine whether to cure Alice or Bob (act D) to curing Alice with certainty (C⁺). At least, she should not disprefer flipping a coin to decide who gets cured: D ⪰ C⁺.[47] But then, by the transitivity of Carla's preferences, D ≻ C. Thus, Carla should prefer D to C, no matter what Alice's and Bob's risk attitudes are.[48]
But (Sum of REU) does not support this verdict. Assume that Alice and Bob are both risk-averse with a risk attitude r(p) = p³. Act C gives Alice a 100% chance of ending up at welfare level 1. Its risk-expected utilities for Alice and Bob are 1 and 0, respectively. Thus, its sum of individual risk-expected utilities is 1. Act D gives both Alice and Bob a 50% chance of ending up at welfare level 2. Its risk-expected utility is 0.25 for both of them.[49] Therefore, the sum of individual risk-expected utilities for act D is 0.5. Since 1 > 0.5, (Sum of REU) entails that C ≻ D. This is implausible, and, therefore, (Sum of REU) is implausible.
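The arithmetic of this counterexample, in the same style (my own sketch, using the stipulated risk function):

```python
r = lambda p: p ** 3  # both individuals risk-averse

sum_C = 1.0 + 0.0         # C: Alice at welfare 1 for sure, Bob at 0
sum_D = 2 * (r(0.5) * 2)  # D: each faces a 50% chance of welfare 2
print(sum_C, sum_D)       # 1.0 and 0.5: (Sum of REU) implausibly picks C
```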
As a second counterexample, suppose that both Alice and Bob are ill and that Carla has to decide whether to give a drug that has a 50% chance of working to Alice (act E) or to Bob (act F). If she gave it to Alice and it worked, then it would only ease her symptoms, whereas if she gave it to Bob and it worked, it would fully cure him. We can write the two acts as follows:

            Drug works    Drug does not work
E           (1, 0)        (0, 0)
F           (0, 2)        (0, 0)
It seems that Carla should give the drug to Bob, given that it would do more good for Bob than for Alice if it worked. That seems plausible irrespective of Alice's and Bob's risk attitudes.
47. Ex ante egalitarians would strictly prefer D to C⁺ (see Diamond, 1967). Others think that such forms of egalitarianism are misguided (see Broome, 1984). However, even those who do not accept such forms of egalitarianism should agree that it does not make an act worse that it spreads the chances of something good equally across the population instead of concentrating them in one person.
48. Note that in act D, there is no risk on the social level because in every state, one individual gets 2 and one individual gets 0. However, there is risk on the private level because each individual has a 50% chance of getting 0. For a structurally similar case, albeit in a different framework, see Arrow and Lind (1970, p. 377).
49. 0 + r(1/2) · 2 = (1/2)³ · 2 = 0.25.
Here is one way to vindicate this verdict. First, Carla should strictly prefer giving Alice a drug that would cure her if it worked (act E⁺) rather than merely ease her symptoms: E⁺ ≻ E. Second, Carla should be indifferent between giving Alice or Bob a drug that would cure the recipient if it worked: F ∼ E⁺. Put differently, the aforementioned principle (Prospect Anonymity) should hold, at least in this case: Carla should be indifferent between different ways of giving out the same prospects. By transitivity, it follows that F ≻ E.
But again, (Sum of REU) does not support this verdict. Suppose that Alice is risk-seeking with a risk function r_Alice(p) = √p, and Bob is risk-averse with a risk function r_Bob(p) = p². Then, act E gives Alice a risk-expected utility of 0.71 and Bob a risk-expected utility of 0.[50] Act F gives Alice a risk-expected utility of 0 and Bob a risk-expected utility of 0.5.[51] Thus, (Sum of REU) entails that E ≻ F.
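Again, a short sketch (mine) makes the sums explicit:

```python
r_alice = lambda p: p ** 0.5  # risk-seeking
r_bob = lambda p: p ** 2      # risk-averse

sum_E = r_alice(0.5) * 1 + 0.0  # E: Alice ~0.71, Bob 0
sum_F = 0.0 + r_bob(0.5) * 2    # F: Alice 0, Bob 0.5
print(sum_E > sum_F)            # True: (Sum of REU) gives the drug to Alice
```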
This second counterexample applies not only to (Sum of REU) but to any plausible rule that ranks social acts purely based on individual risk-expected utilities. Any plausible such rule will prefer act E, which has individual risk-expected utilities 0.71 and 0, to act F, which has individual risk-expected utilities of 0 and 0.5. If the rule did not do so, it would weakly disprefer any act with risk-expected utilities 0.71 and 0 to any act with risk-expected utilities 0 and 0.5. In particular, it would weakly disprefer act G to act H.

            Any state
G           (0.71, 0)
H           (0, 0.5)

But clearly, act G should be strictly preferred to act H. If an impartial observer could either improve Alice's welfare or improve Bob's welfare to a lesser extent, then an impartial observer should improve Alice's welfare.[52]
50. r_Alice(1/2) · 1 = √(1/2) ≈ 0.71.
51. r_Bob(1/2) · 2 = (1/2)² · 2 = 0.5.
52. This is true at least in the case in which Alice and Bob are currently at the same welfare level, so that there is no prioritarian reason to prefer a small improvement for Bob over a larger improvement for Alice.
Thus, any plausible rule that ranks acts based on individual risk-expected utilities must prefer G to H. But then, it would also prefer E to F, which is implausible.
Both counterexamples involved an observer who had to decide which of two individual preferences to satisfy. Rules which use individual risk-expected utilities to resolve such conflicts will give priority to whomever assigns a higher risk-expected utility to their preferred prospect. This, in turn, depends on the individuals' risk attitudes. For example, Alice was given the drug even though the drug would do more good for Bob if it worked. The reason for that was that Alice is more risk-seeking than Bob. More generally, if Carla needs to decide who gets a chance of something good and who gets nothing, those rules will prefer giving chances of something good to risk-seeking individuals. Such influence of risk attitudes on who gets their preferences satisfied seemed questionable in the cases I presented; whether it is questionable in general merits a comprehensive discussion on its own.[53]
In summary, if REU theorists respond to the dilemma by denying (Dominance), then they must deny REU theory for social decision-making. The most obvious alternatives are theories that rank social acts based on individual risk-expected utilities. However, those theories have potentially unappealing consequences in cases in which individual preferences conflict. I leave it as an open challenge for REU theorists who think that risk attitudes should influence social decisions to give a plausible theory of social choice.
2.5 Conclusions
If different people have different attitudes towards risk, we must either choose acts which they all disprefer or acts which are sure to turn out worse (or, at least, no better) than their alternatives.
53. Here are two observations that might pertain to that question. On the one hand, one might worry that letting risk attitudes influence social decision-making will put risk-averse agents, such as Bob, at a systematic disadvantage when chances of good outcomes are distributed. On the other hand, one might note that Bob has a lower willingness to pay for the drug than Alice, assuming Alice and Bob have the same marginal utility for money. Standard cost-benefit analysis, using ex ante willingness to pay, would therefore favor giving Alice the drug.
Either (Ex Ante Pareto) or (Dominance) has to go when we aggregate the preferences of such individuals. This is particularly troubling for those who think that such individuals are perfectly rational.

While discussing how REU theorists might respond to the dilemma, I attempted to shed some light on the normative significance of risk attitudes for social decision-making under the assumption that different risk attitudes can be rational. Two observations are worth emphasizing. First, in cases in which different packages of potential benefits and risks are distributed and there is an allocation which all individuals strictly prefer to all other allocations because of their different risk attitudes, it seems that an impartial observer should choose the favored distribution. Carla should prefer giving the risky ticket to Alice and the safe ticket to Bob rather than vice versa. But, second, it seems less plausible that the impartial observer should be sensitive to the different risk attitudes of individuals in decisions about which single individual gets a chance of something good. That Alice is less risk-averse than Bob does not seem to be a good reason for giving her a drug even though that drug would help Bob more if it worked. Going forward, proponents of the view that different risk attitudes are rational might either reject one of the two verdicts or propose a decision rule for the observer that accommodates both verdicts. As I have shown, REU theory for the observer fails to get the first verdict right, whereas rules based solely on individuals' risk-expected utilities face trouble with the second verdict.
Throughout this chapter, I focused on discussing the dilemma from the perspective of REU theorists. However, I mentioned that if one thinks that individuals such as Alice and Bob are irrational, as EU theorists do, a natural response to the dilemma might be to say that we should choose acts that all agents (irrationally) disprefer. I did not mean to suggest that this is obviously the right thing to say for those who think that Alice and Bob are irrational. There is no straightforward entailment from the irrationality of non-neutral risk attitudes to the claim that such risk attitudes have no normative significance in social decision-making.
I hope that this chapter illustrates how social choice theory and individual decision theory are mutually illuminating. One cannot have both the conjunction of (Ex Ante Pareto) and (Dominance) in social choice theory and REU theory in individual decision theory. Arguments for (Ex Ante Pareto) and (Dominance) are arguments against REU theory, and vice versa. The questions of how to pursue the good collectively and the good individually are intertwined.
Chapter 3

Should States Be Catastrophe-Averse?
Abstract

What do seat belt laws and asteroid deflection missions have in common? They both reduce our risk of premature death. However, while seat belt laws reduce risks that would strike different people at different times, asteroid deflection missions reduce risks that would strike most of us at the same time and thereby cause a catastrophic breakdown of human civilization. This chapter identifies reasons why states should prioritize reducing catastrophic over non-catastrophic mortality risks.
3.1 Introduction
Truly terrible things could happen. A new infectious disease, much deadlier than the Spanish flu or COVID-19, could quickly wipe out a large part of the global population. An asteroid could hit the earth, causing an impact winter that leads to mass starvation. Runaway global warming could make large parts of the earth uninhabitable and lead to rapid population decline and widespread violent conflict. For the purposes of this chapter, let a catastrophe be an event that kills most but not all human beings and leads to the breakdown of civilization as we know it. A catastrophic risk is a risk of such an event.[1]
States are in a unique position to reduce catastrophic risk. Catastrophic risk reduction is a public good: if provided, everyone can benefit from it, and it is difficult to exclude people from its positive effects. As with other public goods, private actors are prone to generate less catastrophic risk reduction than would be desirable from the social perspective. If a private spaceflight company develops asteroid deflection technology, it benefits everyone, but it has to pay the costs on its own. States can use their coercive power to force every person to pay their share for catastrophic risk reduction by raising taxes to finance such programs. Relatedly, catastrophic risk reduction often involves coordination problems, which states can help solve. Each bioengineering company might prefer no company engaging in research which increases catastrophic risks over all companies engaging in such research. But whatever other companies do, each company might also prefer engaging in dangerous research to refraining from it because the research would increase its competitiveness. A legal prohibition on dangerous research would solve this coordination problem.[2]
An obvious reason for states to adopt catastrophic risk-reducing policies is that they reduce the mortality risks to which people are exposed.[3] For the purposes of this chapter, the mortality risk of an individual is that individual's annual risk of dying prematurely.[4]
1. See Bostrom (2013, p. 17) for a taxonomy of related kinds of risks.
2. One might object that if a state prohibited certain kinds of research, then labs would continue their research in countries with fewer regulations. In general, one might say that single states can do little to decrease catastrophic risks. Really, it is groups of states or maybe transnational actors such as NGOs that must reduce them (see Beck, 2009, pp. 183–186).

But this position is implausible. Clearly, a single state can substantially reduce non-anthropogenic risks, such as the risk of an asteroid impact. For risks which need international regulation, single states are the actors which have to push for such regulation. There is no global actor which could reliably enforce, say, a global prohibition on certain research. Hence, focusing on individual states as central actors in catastrophic risk reduction is defensible.
3. For the purposes of what follows, `the state' can simply be taken to refer to the group of people who has the relevant legal powers to pass laws, issue regulation, and so on. That the state has reasons to do something means that a particular group of people has reasons to do something. If one feels uncomfortable with such reason-talk, one can read this chapter as a discussion of various arguments for the conclusion that states may permissibly prefer catastrophic to non-catastrophic mortality risk reduction.
4. An individual's mortality risk is an annual risk. It is not an individual's risk of dying at some point, which is close to 100% independently of what policies the state adopts. Also, an individual's mortality risk is only their risk of dying prematurely. To die prematurely is to die from something other than aging-associated diseases.
If a policy decreases the annual risk of a catastrophic pandemic by 0.01% for a time of, say, ten years, it thereby decreases the mortality risk for all people by about 0.01% for ten years. That is clearly a reason for the state to adopt the policy. But that reason in favor of catastrophic risk-reducing policies also applies to some policies that do not decrease catastrophic risk. For example, it also applies to a policy which prescribes wearing seat belts.

Are there reasons in favor of catastrophic risk-reducing policies that do not also count in favor of policies that reduce non-catastrophic mortality risk? Put differently, are there reasons to prefer catastrophic risk reduction over mortality risk reduction from non-catastrophic sources? This chapter tries to identify and assess such reasons. Let us call them reasons for catastrophe-aversion.
It is worthwhile to investigate such reasons. First, they would support the interesting view that, with limited resources, states should increase spending to reduce mortality risk by reducing catastrophic risks beyond the point at which this is the most cost-effective way to decrease mortality risk. Put differently, states should sometimes adopt policies that leave mortality risk higher than it could have been, if that helps decrease the risk of catastrophic outcomes. For example, imagine that the state could either reduce the mortality risk for all individuals by 0.1% through reducing non-catastrophic risks (seat belt laws) or by 0.09% through reducing catastrophic risks (pandemic preparedness), at the same cost, for a period of 50 years. Reasons for catastrophe-aversion would suggest that states should choose the latter, even if that means that the risk of dying prematurely is higher for each present and future individual who is alive at some point within the next 50 years.
Second, identifying reasons for catastrophe-aversion can contribute to the development of policy analysis tools for catastrophic risk reduction. One might apply cost-benefit analysis in a naive way to catastrophic risk-reducing policies, evaluating them by the amount of mortality risk reduction per individual per year, multiplied by the total number of people whose mortality risks are reduced and the number of years for which mortality risk is reduced. Applying cost-benefit analysis in this way amounts to assuming that there are no reasons for catastrophe-aversion: catastrophic risk reduction is treated as just another kind of mortality risk reduction. Recently, some legal scholars and economists have started to take steps towards developing instruments for policy evaluation that are sensitive to the difference between catastrophic and non-catastrophic mortality risks.[5] Philosophers can contribute to this important project by shedding light on the normative considerations that these instruments should ideally capture.
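As a toy rendering of that naive measure (my own sketch; the population figure is an assumption for illustration, and the product only roughly approximates expected premature deaths averted):

```python
def naive_benefit(annual_risk_reduction, people, years):
    # Risk reduction per person per year, times people, times years.
    return annual_risk_reduction * people * years

POPULATION = 8e9  # assumed global population, for illustration only

# Seat belt laws (0.1%) vs pandemic preparedness (0.09%), 50 years each:
print(naive_benefit(0.001, POPULATION, 50))   # 4.0e8
print(naive_benefit(0.0009, POPULATION, 50))  # 3.6e8: the naive measure
# favors the seat belt policy; catastrophe-aversion might favor the latter.
```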
Third, catastrophes seem to have attracted less attention than extinction events.[6] They have also attracted less attention than non-catastrophic but bad futures, such as moderate global warming scenarios.[7] This might be justified. Extinction events might be more theoretically interesting than catastrophes. Non-catastrophic bad futures are more likely and might therefore seem more practically important than catastrophes. That said, this chapter hopefully demonstrates that there are also interesting things to say about catastrophes, which occupy a middle position between the very bad but unlikely extinction events and the less bad but more likely bad futures.
To identify reasons for catastrophe-aversion, the following observation is a natural starting
point. Clearly, there is no reason to expect the world to be in a vastly worse state in the
year 2100 if 900,000 rather than 300,000 people died in car crashes each year between 2020
and 2070. In contrast, we should expect the world in the year 2100 to be in a vastly worse
state if a global pandemic killed most human beings at some time between 2020 and 2070.
This suggests that we could base reasons for catastrophe-aversion on the long-term negative effects that catastrophes, but not temporally dispersed deaths, have.
This chapter will discuss three such reasons. They are inspired by arguments from the
literature on extinction and non-catastrophic bad futures. The first two arguments in this chapter (that catastrophic risk reduction increases aggregate future well-being and that it promotes people's interest in meaningful lives) are based on arguments from the literature on
[5] See Posner (2004) and Bernard et al. (2018), for example.
[6] For some work on extinction, see Lenman (2002), Bostrom (2013), Finneron-Burns (2017), Frick (2017), and Greaves and MacAskill (Ms).
[7] For a survey on the climate change literature, see Gardiner (2004).
extinction. The third argument is based on a well-known case for preventing non-catastrophic bad futures: that it would be unjust to future people not to do so. While none of the
arguments that I discuss in this chapter are radically new, I hope that the reader will still
find value in the things I have to say about their application to catastrophes, their potential
to justify state action rather than private action, and their status in situations in which
we are quite unsure whether our actions would make the difference between a catastrophe
occurring or not occurring.
3.2 The Argument from Aggregate Future Welfare
Some economists and legal scholars evaluate policies that reduce extinction risks with mod-
els which take into account that there will be less future welfare if fewer people are born.
For example, Richard Posner uses expected cost-benet analysis to determine which policies
states should adopt to reduce extinction risks. In his calculations, the cost of human extinc-
tion includes the cost of future people not being born.[8] Similarly, Yew-Kwang Ng argues for
policies that reduce annual human extinction risk on the basis of the expected gain in total
utility generated by the larger number of future people.[9]
Such a welfarist argument also gives rise to a reason for catastrophe-aversion. In ex-
pectation, a catastrophe would reduce aggregate future welfare more than losing the same
number of lives over a longer time period. This is so for two reasons. First, in expectation,
a smaller total number of people will be born if a catastrophe happens. There will be far fewer people for some number of generations after the catastrophe. Second, in expectation,
the average welfare across all future people is lower in the case of a catastrophe than in the
case of temporally dispersed deaths. This is because a catastrophe, by definition, leads to
a collapse of civilization as we know it. This means not only that there will be people who
will live with less technology to cure diseases, but also that, in the long run, the curve of
[8] See Posner (2004), in particular pp. 169-170.
[9] See Ng (2016).
technological advancement is shifted so that, at any time in the future, the people living
at that time would be less technologically advanced if a catastrophe occurred at some time
between now and then. Since both the total number of people and the average welfare of
those people are lower in expectation for catastrophes than for losing the same number of
lives over a longer time period, catastrophic risk reduction increases the expected aggregate
welfare of future humans more than non-catastrophic mortality risk reduction. Some might
say that this is a reason for catastrophe-aversion: if a policy increases expected aggregate
future welfare more than another, then this is a reason to prefer it.
There are several well-known worries about this argument. Some people doubt that a
world with more people is better than a world with fewer people even if having more people
does not decrease average welfare. On this view, expected future well-being is not a desirable
maximand.[10] Relatedly, some reject that merely possible people should be considered in
policy-making. Thus, that some policy would lead to more people being born should not be
considered a reason in favor of it, even if it would make the world go better.[11]
But there is also a less discussed worry which is especially relevant in the context of state
action. The worry is that the argument for catastrophic risk reduction relies on a certain
premise about what constitutes a good life. This feature of the argument makes it unsuited
for justifying state action.
To identify the potentially problematic premise, note that, on some conceptions of the
good life, technological setbacks would increase aggregate future well-being. To take a simple
example, one might think that a good life must center around religious worship and that
advanced technology is likely to distract future people from religious worship. Thus, one
might hold that the expected value of future lives decreases at levels of technology above
our own. On this view, having more people at lower levels of technology increases aggregate
welfare. Thus, catastrophe might be a good thing from the perspective of aggregate future
[10] See Frick (2017), for example. See Greaves (2017) for an overview of theories of population axiology.
[11] Goodin, for instance, advocates for utilitarianism as the guiding moral theory for policy-making but rejects the view that policy-makers should take the utility of possible future people into account, on the grounds that a 'population explosion' would seem undesirable (see Goodin, 1995, p. 14).
welfare. Of course, one must still concede that some generations immediately after the
catastrophe would live terrible lives. But the welfare gain from having a huge number of
people at each time in the future live at lower technological levels plausibly outweighs this
welfare loss.
The reliance on such an assumption about what constitutes a good life might make
the argument unsuited to justify political action, at least according to some views.[12] On those views, arguments for policies in pluralist societies should avoid relying on assumptions
about the good life that contradict some 'reasonable' conception of the good life.[13] Just as
we should not justify policies by saying that getting more people to play classical violin will
make those people's lives go better, so we should not justify policies by saying that getting
more people to live under levels of technology far above our own will make those people's
lives go better.
But maybe we can rescue a weakened version of the argument which is not susceptible to
this objection. After all, the views would be implausibly restrictive if they prohibited arguing
against a policy on the grounds that it threatens to set back technology to medieval levels. Clearly, the
set of conceptions of the good life which an argument may not contradict must only include
views which agree that one's ability to live a good life is significantly diminished if one only
has access to medieval technology. But then, we can argue as follows: Catastrophes would
make the lives of people living in the decades immediately after the catastrophe go less well
because in those years technology would be much below current levels. An equivalent number
of deaths that are spread out in time will not decrease future well-being in this way. Hence,
this is a reason for states to be catastrophe-averse. Granted, the weakened version of the
argument might be less impressive than the original argument about the aggregate welfare
of all future people. But by being more modest, it might be able to avoid the objection
discussed above.
[12] See Quong (2011), for example.
[13] I treat 'reasonable' as a technical term in such theories. It lies beyond the scope of this chapter to discuss how best to define its meaning.
3.3 The Argument from People's Interest in Meaningful Lives
In a different contribution from the literature on extinction, Samuel Scheffler suggests that we have a strong interest that humanity does not go extinct, at least not in the foreseeable future.[14] He gives several arguments for this claim, but I shall focus on just one of them:
that the value of many of our activities depends on humanity not going extinct.[15]
It is easy to see how the value of some of the things we do hinges on the continued
existence of humanity. Some of our activities, such as political activism or cancer research,
derive part of their value from improving the world for future people. Some of our activities,
such as education and conservation, derive part of their value from transmitting values and
practices to future people. Some of our activities, such as journalism, derive part of their
value from being part of something temporally larger than ourselves. The value of all of
these activities would be diminished if humanity came to an end in the foreseeable future.
But catastrophes would also threaten the value of at least some of these activities. For
example, all activities that are geared towards improving social institutions would lose just as
much value from the collapse of those institutions as from human extinction. Similarly, cancer
research would become much less valuable if most medical knowledge and infrastructure were
lost in a catastrophe.
The fact that we find many of our activities valuable only if there is no catastrophe in the
near future might provide the basis for a reason for catastrophe-aversion. The argument goes
as follows. Avoiding catastrophe is a widely shared interest (because the value of many of our activities depends on it), and states are in a unique position to ensure that it is satisfied. Whenever states are in a unique position to satisfy a widely shared interest, or at least increase the chances of that interest being satisfied, that gives them a strong reason to do so.[16]
[14] See Scheffler (2013, 2018). For related ideas, see Lenman (2002, pp. 263-265).
[15] See Scheffler (2018, ch. 2).
Furthermore, decreasing non-catastrophic mortality risks does not advance this interest
because deaths which are spread out in time do not threaten the value of the aforementioned
kinds of activities to the same extent. The value of improving institutions, for example, is
not threatened if many more people die in car accidents each year. Thus, this is a reason for
catastrophe-aversion.
In fact, the interest in avoiding catastrophe can be seen as part of a broader interest:
the interest in living in conditions that allow us to form and pursue a plan of life that we find valuable, and that we find valuable not just because we hold false empirical beliefs. This interest would be set back by a catastrophe because, if there were a catastrophe, many people would pursue a plan of life that they find valuable, but only because they are under the illusion that there will be no catastrophe. Most of us have a strong interest in living a life that we find valuable not only because we have false empirical beliefs. States should try
to make sure that no catastrophe occurs to further this interest.
Even if one is often skeptical that the state's coercive force should be used to promote
any old widely shared interest, the interest in living under conditions which help people
live meaningful lives, even if they differ in exactly what they take to be a good life, is
plausibly among those interests that the state should promote. At least, many liberal political
philosophers argue that it is. Those philosophers should therefore be sympathetic to the idea
that states have a reason to be catastrophe-averse, stemming from the fact that evaluations
of our plans of life are entangled with the future of human civilization.
For example, John Rawls argues that the state should preserve conditions in which indi-
viduals can form and pursue a plan of life that they consider valuable.[17] This motivates his
concern for goods such as basic liberties and income, which enable individuals to form and
[16] By 'interest', I do not mean 'preference'. If many people preferred racially segregated public facilities, that itself would not constitute a reason for the state to segregate public facilities (see Sunstein, 1991). Rather, your interests, as I use the word, correspond to your enlightened preferences: the preferences you would have if you were deliberating under properly specified ideal conditions.
[17] More specifically, Rawls uses a conception of citizens as having the capacity to form and pursue a plan of life that they consider valuable, that is, a conception of their own good. They also have an interest in exercising that capacity (see Rawls, 1993, pp. 71-77). This assumption is also shared by some of those who have further developed the Rawlsian approach (see Quong, 2011, p. 182).
pursue a plan of life that they consider valuable.[18] One additional good that Rawls perhaps
should have been concerned with on similar grounds is the continued existence of human
civilization. It is a good that, like income, helps individuals form and pursue a plan of
life that they consider valuable. Thus, Rawlsians might treat catastrophe-avoidance as a
primary good and affirm that the state should be concerned with its provision.
Granted, as commonly formulated, our interest is only to live in conditions in which
we can form and pursue a plan of life that we find valuable. We can do so as long as we
believe that there is no catastrophe, whether or not there actually is one.[19] Thus, strictly
speaking, the interest that many liberal philosophers are concerned with is not promoted
by avoiding catastrophe. But our interest is plausibly stronger than usually formulated:
we have an interest that our perception of the value of our projects is not based on false
empirical premises. Thus, those who think that states have a duty to see to the satisfaction
of the interest as it is usually formulated should also accept the analogous claim about the
stronger interest.
How strong is this reason for prioritizing the reduction of mortality risks from catastrophic
sources over the reduction of mortality risks from non-catastrophic sources? The reason is
tied to people considering the value of their lives as dependent on avoiding catastrophe. Thus,
it is natural to think that the strength of the reason depends on the strength of people's
attitudes. It is likely that people would differ in how much they consider the meaning of their lives to depend on avoiding catastrophe.[20] A first, albeit somewhat crude, approximation of
the strength of this conviction would be how much people would be willing to pay to avoid
catastrophe after their own death. One aspect that would presumably come to light in such
an empirical investigation would be that this reason for catastrophic risk reduction might get
[18] Rawls (1993, pp. 75-76).
[19] Relatedly, Finneron-Burns (2017) suggests that contractualists would object to a principle that permits letting humanity go extinct on the basis of the psychological trauma that results from the belief that humanity will go extinct. This reason would not count in favor of policies that reduce catastrophic risks in current circumstances. Currently, few if any people suffer from psychological trauma because of the levels of catastrophic risk we are exposed to.
[20] See Frankfurt (2013, pp. 134-136) for some arguments why people might not consider the value of their lives to depend strongly on there not being a catastrophe in the future.
weaker the further away in the future the risks are.[21] Someone might plausibly find many
things they do much less valuable if human civilization came to an end shortly after they
died. But learning that human civilization would perish in a few millennia from now seems
unlikely to lead to a comprehensive re-evaluation of what activities one considers worthwhile.
Hence, this argument is particularly strong for policies which reduce catastrophic risks in the
next few decades and weaker for policies which reduce catastrophic risks in the far future.
3.4 The Argument from Duties of Intergenerational Justice
Let me finally turn to a well-known argument for preventing non-catastrophic bad futures,
such as those involving moderate climate change. The argument is that duties of intergener-
ational justice require us to ensure that future people do not have to live in such bad futures.
This argument would apply to preventing catastrophes, too. After all, if a catastrophe hap-
pens, future people born in the decades after the catastrophe would face even more hardship
than people in non-catastrophic bad futures. They would be born into a world with much
lower standards of living and without stable institutions that reliably protect basic rights.
Temporally dispersed deaths (from car accidents, for example) do not similarly threaten
that future people will have to live in such apocalyptic circumstances. Thus, this argument
from duties of intergenerational justice provides a reason for catastrophe-aversion.
Of course, it is controversial whether there are duties of intergenerational justice at all.
Some are skeptical of such duties because those particular people in the future who would
have bad lives would not have existed had we not performed an action that made their lives
go badly. This might seem to undermine the thought that we violated a duty owed to them.
For the purposes of this chapter, I will assume that this worry is mistaken and side with
[21] See also Frick (2017, p. 345).
those who defend the view that we have duties of intergenerational justice.[22]
If there are any duties of intergenerational justice at all, then there are, at the very least,
duties to ensure that future people have 'enough' in some suitably specified sense. It is clear that, however we spell out 'enough', we will violate that duty if we allow future generations to
be born into a post-catastrophic world. For example, some common proposals are that we owe
future generations a world in which they can live above some welfare threshold, maintain just,
stable institutions, and, relatedly, be collectively self-determining.[23] In a post-catastrophic
world, attaining moderate welfare levels and maintaining stable, just institutions which allow
people to collectively self-determine will be difficult if not impossible.[24]
The above argument sounds simple, but only because I left out an important complica-
tion. I talked as though we faced the choice between either letting a catastrophe happen
or bearing some burden to avert it. Then, it is easy to see that if there are duties of in-
tergenerational justice, they would require us to avert the catastrophe. The literature on
averting non-catastrophic bad scenarios, such as moderate climate change, usually frames
the problem in this way, taking it to be very likely that such non-catastrophic bad scenar-
ios will happen if we do not take action. But while this approach might be justified for non-catastrophic bad scenarios, it is not justified for catastrophes. It is quite unlikely that a catastrophic risk-reducing policy will make the difference between future people having
enough and future people not having enough. To illustrate, consider investments in asteroid
deflection missions. We do not know whether an asteroid is on course to hit the Earth in
the next few millennia. We also do not know whether the technology we could develop before
such an asteroid approaches the Earth would suffice to deflect it. Thus, it is far from clear
whether such investments make the difference between future people having enough and
[22] See Caney (2018) for an overview of the relevant discussion.
[23] See Meyer and Roser (2009), Reiman (2007), Barry (1997), Rawls (1999, pp. 251-259), and Thompson (2010).
[24] The interest in having just institutions and achieving self-determination is often ascribed to collective entities rather than to individuals. However, it is plausible that the value of collectives achieving self-determination is derived from members of collectives having certain interests (see Stilz, 2016, for a defence of this view). For the purposes of this chapter, it therefore seems appropriate to focus on the interests of individuals rather than collectives.
future people not having enough. To understand the extent to which sufficientarian duties of
intergenerational justice would give us a reason to adopt catastrophic risk-reducing policies,
we must tackle the question of how those duties work in situations of uncertainty.
One answer to this question would be that our duties require us to do what actually
averts catastrophe. When we do not know for sure how we could satisfy our duties, we should
maximize expected choiceworthiness.[25] First, assign a 'deontological' value on an interval
scale to any combination of an act and a state of the world, representing how choiceworthy
that act would be if that state of the world was actual. For example, we would assign low
value to states in which we do not invest in asteroid deflection technology and this happens to make the difference between future people having enough and them not having enough.
Second, assign probabilities to the various states of the world. Finally, rank each act by its expected choiceworthiness: the sum, across all states of the world, of the probability of each state multiplied by the act's 'deontological' value in that state. On this view, the strength of the reason to choose a catastrophic risk-reducing policy over a non-catastrophic mortality risk-reducing policy is proportional to the product of the weight of the intergenerational duty at stake and the probability of that policy making the difference between violating and satisfying the duty.
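To make the procedure concrete, here is a minimal sketch in Python; the acts, states, probabilities, and 'deontological' values are hypothetical placeholders of mine, not figures defended in the text:

    # A sketch of ranking acts by expected choiceworthiness.
    # All numbers are hypothetical placeholders.
    probs = {
        "deflection needed, would succeed": 0.01,
        "deflection needed, would fail": 0.01,
        "deflection not needed": 0.98,
    }
    # 'Deontological' value of each act in each state (interval scale).
    values = {
        ("invest", "deflection needed, would succeed"): 100,
        ("invest", "deflection needed, would fail"): -80,
        ("invest", "deflection not needed"): -5,
        ("do not invest", "deflection needed, would succeed"): -100,
        ("do not invest", "deflection needed, would fail"): -80,
        ("do not invest", "deflection not needed"): 0,
    }

    def expected_choiceworthiness(act):
        # Probability-weighted sum of the act's value across the states.
        return sum(p * values[(act, state)] for state, p in probs.items())

    best_act = max(["invest", "do not invest"], key=expected_choiceworthiness)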
A different approach to answering what our duties of intergenerational justice require
in such situations of uncertainty is based on the following premise: how much we have to
reduce catastrophic risks to satisfy our duties of intergenerational justice depends on what
self-interested agents in a suitably specified original position would choose to do. The general
form of argument is as follows:
1. The principles of intergenerational justice are those that would be chosen by a set of
individuals who
(a) are purely self-interested,
(b) do not know which generation they belong to,
[25] Lazar (2018a) suggests such an approach to decisions under uncertainty for deontologists.
(c) know that all generations will follow the principles they choose.
2. In such an original position, principle P would be chosen.
3. Therefore, P is a principle of intergenerational justice.
Rawls uses an argument of this form to elucidate how to trade off present consumption and saving for future people so that they can set up and maintain just institutions. If one is attracted to this approach to determining how much to sacrifice to enable future people to maintain just institutions, one should apply the same approach to determine how much to sacrifice to reduce catastrophic risk. After all, catastrophic risk is a risk of future people not being able to maintain just institutions.[26]
One reason why this approach is attractive is that making parties ignorant about which
generation they belong to can be seen as capturing the value of fairness. Just as it would
be unfair to assign benefits and burdens based on race or sex, so it would be unfair to assign benefits and burdens based on temporal position.[27] If one is skeptical of this form
of argument, one might nevertheless agree that the principles that tell us how we ought to
trade off various considerations should be such that one could want every generation to follow
them. This universalizability condition is the key assumption underlying the argument that
follows.[28]
To apply this style of argument to the question at hand, suppose that you are in the
original position. Rawls argues that your first concern is to make sure that your basic
[26] One can think of catastrophic risk reduction as a form of probabilistic saving: reducing the annual catastrophic risk amounts to reducing the annual risk of erasing most of the capital stock. It would be strange if justice required us to save 1% rather than 0.5% for future generations but did not require us to also reduce the risk of leaving the future generation with nothing from, say, 2% to 1%. Thus, anyone who accepts a duty to build up the capital stock should also accept duties not to run high risks of erasing most of the capital.
[27] Reiman (2007) suggests such a line of argument.
[28] Tonn (2009) uses choice in an original position to determine universalizable principles regulating extinction risks. However, his approach differs from the approach I take here in that agents in his original position include all possible people. It is questionable why principles chosen by all possible people should have normative force. The normative force of these principles is not easily grounded in the value of fairness, unless one cares about fairness to merely possible people who never exist. See Barry (1989, p. 195) for further criticism.
liberties are protected. Your second concern is to have as much wealth and other general-
purpose resources as possible. With these concerns in mind, you now need to pick principles
regulating how much to sacrifice for the sake of reducing catastrophic risk, and you do so
in a purely self-interested manner. You know that the more catastrophic risk reduction you
prescribe, the more you will have to pay for reducing catastrophic risks. On the other hand,
the more catastrophic risk reduction you prescribe, the more other generations will pay to
keep catastrophic risks small, and so the less likely you are to be born into a post-catastrophic
world in which life is hard and humanity struggles to maintain just institutions. At what
level of catastrophic risk would these two motivations equilibrate?[29]
You probably would not choose the principle 'do not run an annual catastrophic risk higher than 1%'. A 1% annual catastrophic risk would amount to a 63% probability of catastrophe
in a random 100-year time span. In contrast, a 0.1% annual catastrophic risk would amount
only to a 10% probability of catastrophe in a random 100-year time span. Assume that if
a catastrophe happened 100 or fewer years before you were born, you would live in a world
where you would not have enough. Clearly, you would prefer a prospect in which you only
have a 10% chance of not having enough to a prospect in which you have a 63% chance of
not having enough, even if the first prospect involves considerable costs to you in terms of your lifetime consumption.[30] Hence, this argument implies that running a 1% annual risk
is intergenerationally unjust. A principle permitting such a tradeoff in favor of avoiding
the costs of catastrophic risk reduction would not be chosen in the original position. This
establishes an upper bound on the level of intergenerationally just catastrophic risk.
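The figures in the last paragraph come from compounding the annual risk. A quick check, assuming for illustration that the annual risk is constant and independent across years:

    # Probability of at least one catastrophe in a 100-year span,
    # given a constant, independent annual catastrophic risk.
    def prob_catastrophe(annual_risk, years=100):
        return 1 - (1 - annual_risk) ** years

    prob_catastrophe(0.01)   # ~0.63, the 63% figure above
    prob_catastrophe(0.001)  # ~0.10, the 10% figure above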
The exact tradeoff will depend on exactly how one takes agents in the original position
to assess the costs of living in a post-catastrophic world relative to the increasing costs of
[29] Strictly speaking, an agent would not choose between single levels of catastrophic risk for all generations, to be attained at whatever cost this imposes for each generation. Rather, they would choose between more complex schemes that assign required amounts of risk reduction based on a society's level of prosperity, the types of risk it faces, and so on.
[30] The reader might wonder whether the fact that post-catastrophe generations will be smaller than pre-catastrophe generations should be captured in the original position, by reducing the agent's likelihood of ending up in post-catastrophe generations. I assume that this is not the case. This assumption is in the contractualist, nonaggregative spirit of a Rawlsian theory of justice. I will not defend this approach here.
catastrophic risk reduction. The costs that we would incur to reduce catastrophic risks
below their current levels mainly fall into the category of slowing down economic growth
and possibly technological progress. It is plausible that, as Rawls suggests, agents would
give much more weight to the preservation of just institutions than to increasing material
prosperity beyond the level currently attained in developed countries.[31] Thus, it would seem
that they would be happy to choose principles that demand sacrificing even a large part
of economic growth in many countries in exchange for lower chances of catastrophes. This
suggests that many countries fail to give the normatively appropriate weight to reducing the
risk of future people not having enough. Intergenerational justice demands that they should
pay much higher costs in order to increase the likelihood that future people can have decent
lives under just institutions.[32]
Note that even if one rejects duties of justice to future people, one should still accept a
version of the argument which is limited in scope to current people. After all, if catastrophe
strikes tomorrow, not only will people born after the catastrophe not have enough but also
those of us who survive the catastrophe. Even if one rejects that I owe future people a duty to
protect them from not having enough, one should still accept that I owe my contemporaries a
duty to protect them from not having enough. I violate that duty towards my fellow citizens
if I am unwilling to pay costs to protect them from having to live in a post-catastrophic
world. Hence, it is not only sufficientarian duties to future people but also sufficientarian
duties to current people which provide a reason to be catastrophe-averse.
Even if one has a restrictive view of the kinds of activities that states should be engaged in, seeing to it that duties of justice are fulfilled is surely among them. Hence, if there is a
duty of justice that requires catastrophic risk reduction, it would give states a reason to do
so.
[31] See Rawls (1999, p. 257).
[32] See Mulgan (2011, pp. 181-184) for further discussion of why, on a Rawlsian picture, drastic sacrifices are required for the sake of the future.
3.5 Conclusions
I discussed three candidate reasons for states to be catastrophe-averse, that is, to prioritize
reducing mortality risks from catastrophic sources over reductions of mortality risks from
non-catastrophic sources. All of these reasons hinged on negative long-term effects caused by catastrophes but not by deaths that are sufficiently spread out in time. First, mortality risks from catastrophic sources are more prone to negatively affect the aggregate welfare of
future people. Second, current citizens have an interest in there not being a catastrophe in
the future since that would threaten the value of some of the activities they are engaged in.
Third, intergenerational duties of justice demand that we decrease the risk of future people
having to live in a post-catastrophic world.
My discussion has several implications for policy analysis. First, if one treats catastrophic
risk-reducing policies as simply another kind of mortality risk-reducing policy in one's cost-benefit
analysis, then one fails to capture reasons to favor catastrophic risk reduction over non-
catastrophic mortality risk reduction. Second, extending cost-benefit analysis so that it is
sensitive to the loss of future lives, as suggested by some economists and legal scholars,
takes one candidate reason for catastrophe-aversion into account. However, it should be
noted that that reason is somewhat controversial. An alternative incremental improvement
on naive cost-benefit analysis might be to use individuals' willingness to pay for avoiding
catastrophe rather than just their willingness to pay for mortality risk reduction. Such a cost-
benefit analysis would, at least heuristically, take into account people's interests in the future
of humanity that go beyond their interest in their personal survival. For instance, it might
capture the interests having to do with the value of some of people's projects being entangled
with the future of humanity. Third, there might be an imperative of intergenerational justice
to keep catastrophic risks low. In that case, it is doubtful whether cost-benefit analysis, which is confined to broadly welfarist reasons for policies, is an adequate framework to assess how
choiceworthy policies are that reduce catastrophic risks.
I focused on one narrow set of reasons in this chapter: reasons to prioritize catastrophic
risk reduction over non-catastrophic mortality risk reduction. It is important to be clear
about what I have not done. First, I have not discussed what reasons there are against
being catastrophe-averse. For example, one might note that for any non-catastrophic death
that occurs, much suffering is caused among bereaved relatives and friends. In contrast, most deaths in a catastrophe will not cause this kind of suffering because, if they occur, relatives and friends are likely to be dead too.[33]
As a different example, one might note
that ex post inequality aversion might generate a reason against being catastrophe-averse.
After all, catastrophic risks are risks that make us sink or swim together.[34] Second, I have
only discussed the choice between reducing mortality risks from catastrophic and from non-
catastrophic sources. Of course, there are many potential policies states could adopt that do
neither. Hence, it might be that, all-things-considered, states should not reduce catastrophic
risks or non-catastrophic mortality risks. Rather, they should do something else entirely. In
fact, maybe some of the arguments I discussed also entail that some third kind of policy is most preferable. For example, there might be more potent ways to increase aggregate
future well-being than reducing mortality risks, be it from catastrophic or non-catastrophic
sources.
[33] See Elster (1979, p. 391).
[34] See Bernard et al. (2018).
Chapter 4
Explore and Exploit
Abstract
Political decision-makers often face the choice between `exploring' and `exploiting'. On the
one hand, they could implement policies with highly uncertain consequences. By doing so,
they would gain valuable knowledge about policy consequences. On the other hand, they
could exploit the knowledge they already have and choose policies which they know work
reasonably well. This chapter discusses moral considerations that are relevant for making
such explore-exploit decisions on the policy level.
4.1 Introduction
You are at an Italian restaurant, deciding between the two daily specials: savory chestnut pie
and eggplant rotini. You have never had chestnut pie before, and you do not know whether
you would like it or not. You might like it much less or much more than the eggplant rotini.
In either case, by choosing the chestnut pie, you would learn whether you like chestnut pie
or not, and that information might be useful in the future. On the other hand, eggplant
rotini is a long-time favorite of yours. You have had it many times, and ordering it would
guarantee that you will enjoy your dinner. But you would not learn anything new about
which dishes you like.
Pondering whether to order the chestnut pie or the eggplant rotini, you face the explore-
exploit tradeoff. You could either choose an option about whose consequences you know very
little. You would then learn something new by observing its consequences. Alternatively,
you could choose an option which you know will have reasonably good consequences. Since
you already know that, you will not learn much by choosing that option. Such decisions
between exploring and exploiting commonly occur in our lives. For example, we decide
between meeting new people or hanging out with old friends; between listening to music
that someone recommended to us or putting on an album by our favorite artist; between
giving skiing a shot or continuing to snowboard for the seventh winter in a row; and so on.
Clearly, both exploring and exploiting have their merits and drawbacks in such situations.
We need to understand what these merits and drawbacks are and then weigh them against
each other to make good decisions.
But we do not only make such decisions in our private lives. We are confronted with the
same tradeo in policy decisions. For example, consider the decision between implementing a
universal basic income or increasing funding for an existing food stamp program. That, too,
is a decision between exploring and exploiting. We know very little about the consequences
of a universal basic income scheme, and we would therefore learn much by implementing it.
On the other hand, the eects of increased funding for an existing food stamp program are
much more predictable. We can be quite confident that it will go reasonably well, but we
will not learn much from it. Just as it is important that we understand the advantages and
disadvantages of exploring and exploiting in our private decisions, so it is important that
we understand the advantages and disadvantages of exploring and exploiting on the policy
level. This chapter tries to shed some light on political explore-exploit decisions.
After having gestured at the problem this chapter is about, let me restate it more perspic-
uously. It was somewhat misleading to speak as though there are two categories of policies, exploratory and exploitative, and we must sometimes decide between policies from those different categories. Actually, no matter what policy we choose, we are somewhat uncertain
about its consequences and we would learn something if we implemented it. This is because
we have never observed how the policy plays out in the future state of the world in which
it would be implemented if we chose it today. And no matter which policy we choose, we
will to some extent exploit existing knowledge which indicates that things will not turn out
terribly if we choose that policy.
It is therefore clearer to replace the dichotomy by a comparative notion. I will say that
policy A is more exploratory than policy B if policy A resolves more uncertainty about policy consequences.[1] For example, implementing a universal basic income is more exploratory
than increasing funding for an existing food stamp program. Implementing a universal basic
income would lead to a wealth of new insights. In addition to learning about the outcomes of
this particular policy in this particular context, we might also gain some transferable insights
about how variants of this policy would perform. We might even get some information about
the effects of completely unrelated policies by learning, for example, how important financial
incentives are for people's willingness to seek employment. In contrast, increasing funding for
an existing food stamp program resolves much less uncertainty about policy consequences.
We already have a good idea of what would happen if we chose that policy.
The topic of this chapter, then, is the reasons for and against choosing more exploratory over less exploratory options. I will loosely talk of 'exploratory options' and 'exploitative options' at various points, but what I mean by that is 'more exploratory than the salient alternatives' and 'less exploratory than the salient alternatives'.
To get a clearer picture of the considerations that bear on decisions between more ex-
ploratory options and less exploratory options, it is important to be aware that there are
[1] Given our uncertainty about the consequences of policies, we could think of the consequences of a policy in a specific context of implementation as a random variable. The distribution of the random variable would be defined by our credences in the various possible consequences. Implementing a policy and observing its effects would change our credences about the effects of the implemented policy and various related policies. This suggests measuring the extent to which a policy is exploratory by the reduction in the entropies of these random variables. The more spread out the probability distribution over the possible values of the random variables (i.e., the more uncertain we are about the policy's effects), the higher the entropy (see Shannon, 1948). Thus, the more 'peaked' a policy makes our credence distributions over the consequences of various policies, the more exploratory it is.
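To illustrate the entropy measure sketched in footnote 1, here is a minimal example in Python; the credence distributions are hypothetical placeholders of mine:

    # Shannon entropy of a credence distribution over possible policy
    # consequences; a more exploratory policy yields a larger entropy drop.
    import math

    def entropy(credences):
        return -sum(p * math.log2(p) for p in credences if p > 0)

    prior = [0.25, 0.25, 0.25, 0.25]  # maximally uncertain: 2 bits
    posterior = [0.7, 0.1, 0.1, 0.1]  # more 'peaked' after observation
    uncertainty_resolved = entropy(prior) - entropy(posterior)  # ~0.64 bits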
different kinds of exploratory options to which different considerations may apply. Some ex-
ploratory options consist in implementing a single policy with fairly unknown consequences,
such as a universal basic income. Other exploratory options consist in synchronously im-
plementing multiple different policies. Such policy heterogeneity can be achieved either in a
top-down or a bottom-up manner. The U.S. federal government might mandate that North
Dakota and South Dakota implement two dierent agricultural policies. This option would
be more exploratory than implementing the same policy in both states because it resolves
uncertainty about two policies at the same time. On more local levels of authority, a city
might mandate that a new approach to policing is employed in a randomly chosen subset
of the neighborhoods. If a top-down implementation of different policies uses random as-
signment and possibly other techniques familiar from scientic experiments, such as careful
data collection, I will call it a policy experiment. A political authority can bring about syn-
chronous implementation of multiple policies in a more bottom-up fashion as well. It can do
so by devolving decision-making power to lower levels of authority. It is foreseeable that if
power is devolved to lower levels of authority, then different decision-makers at lower levels of authority will make different decisions. We can then observe the consequences of many
policies at the same time. Thus, devolving authority is an exploratory option, too. It will
often resolve more uncertainty about policy consequences than its alternatives.[2]
Political decision-makers are aware that they sometimes choose between policies that re-
solve more uncertainty and policies that resolve less uncertainty. When Commodore Matthew
Perry arrived at the shores of Japan in 1853, he delivered a letter from President Fillmore,
in which the president suggests that Japan could open up for a few years and thereby resolve
uncertainty about the consequences of a less isolationist foreign policy:
If your imperial majesty is not satisfied that it would be safe altogether to ab-
rogate the ancient laws which forbid foreign trade, they might be suspended for
[2] The link between exploration and devolution is sometimes also discussed in the context of individual conceptions of the good life. For example, through his famous idea of 'experiments of living', Mill (2003 [1859]) links devolution of authority over what conception of the good life an individual pursues with the idea of exploration.
five or ten years, so as to try the experiment.[3]
In more recent times, policy experiments have become a popular way to resolve uncertainty
about policy consequences. For example, in the U.S., policy-makers have conducted large
policy experiments in important domains such as education and agriculture.[4] In China, local
experimentation is systematically used to learn about various possible policies before scaling
up the most promising candidate to the national level.[5]
While it is obvious that we often face the choice between more and less exploratory
policies, the reasons for and against choosing more exploratory options have received little
scholarly attention.[6] Often, it is simply assumed (not implausibly) that being more exploratory is a reason to choose a policy over its alternatives. For example, proponents of
federalism sometimes argue for devolution by arguing that by allowing different policies to be implemented in different federal states, we can learn much about which policies work, usually quoting Justice Brandeis's famous line describing federal states as "laboratories of democracy".[7] Moreover, there is widespread approval of making policies more evidence-based, and
choosing more exploratory policies appears to be a natural component of evidence-based
policy-making.[8] While it is plausible that, other things equal, becoming more knowledge-
able about the consequences of policies is a desirable side-effect of a policy, it is worth
thinking a bit more carefully about why that is and what potential problems with more
exploratory policies might be.
In section 4.2, I discuss what I take to be the main argument for preferring more ex-
ploratory policies: by resolving uncertainty about policy consequences, we can improve fu-
[3] As cited by Kissinger (2014, ch. 5). Of course, it is questionable whether Fillmore was actually concerned with how opening up would benefit Japan.
[4] See Wiseman and Owen (2018).
[5] See Heilmann (2008).
[6] Notable exceptions include discussions of policy experiments by Kukla (2007) and MacKay (2018, 2020).
[7] See New State Ice Co. v. Liebmann, 285 U.S. 262 (1932).
[8] Some libertarians argue that part of the value of devolution of authority to individuals is that it will allow voluntary associations of like-minded citizens (religious citizens, feminists, ...) to explore different systems of social rules. Those rules that appeal to wider parts of the population can then be voluntarily adopted more widely (Gaus, 2016, p. 186). While related, this view is different from an endorsement of exploration on the level of formal policy-making.
ture policy decisions. I suggest that this argument is weaker in many contexts than it might
initially seem. In section 4.3, I spell out a reason against choosing more exploratory policies,
namely, that they are often riskier than less exploratory alternatives. I offer a rebuttal to
this objection: while exploratory policies might impose risks on us, they also reduce risks on
future people. In sections 4.4 and 4.5, I turn to an objection that specifically targets policy experiments. I contend that by conferring different benefits and burdens on equally situated
citizens, they might often be less distributively just than alternative policies.
4.2 Lessons for the Future
The most obvious argument for choosing more exploratory policies is that by exploring today,
we can get a better sense of which policies work, and this will enable us to make smarter
decisions in the future.[9]
It is undeniable that the goal of improving future political decisions is valuable, given
that the stakes in such decisions are often quite high. Moreover, information about policy
consequences surely can improve future political decisions. This beneficial effect of explo-
ration may also materialize in jurisdictions other than the one that was choosing exploratory
options. If Sweden implemented a universal basic income and things went badly there, that
might help Denmark make better decisions in the future. Thus, we might think of choosing
exploratory options as making an investment which will pay off not only in our own future but also in the future of other countries. By exploring, we produce a global public good (information about policy consequences) which can potentially benefit people all over the
globe.
But in many contexts, there are important concerns about giving this reason much weight.
These concerns do not cast doubt on whether improving future political decisions is an end
[9] A version of this argument is often given in discussions of federalism. The idea is that if a federal state implements an exploratory policy, other federal states can observe its effects and use the obtained information in their own policy-making. In particular, they can choose to also adopt the policy in the future if it turned out well in a different state. See Bednar (2011) and Iyer (2012) for some discussion.
worth pursuing. Rather, they question whether exploration improves future decisions. First,
knowledge gained from observing policy consequences becomes less useful the more different the context is. We know that even carefully designed randomized controlled trials about the impact of, say, deworming programs do not allow us to reliably forecast the impact of such programs in a different country or even a different region of the same country.[10]
This should make us cautious about the idea that what we learn in one country (federal
state, municipality, ...) tells us much about the consequences of policies in another country
(federal state, municipality, ...).[11] Moreover, it should make us cautious about the idea
that what we learn today will tell us much about policy consequences in the future. As
social circumstances (such as technology, informal social norms, and the level of economic development) change, so do the consequences of policies. Moreover, new policy options
become available with time, so policies which we implement and learn about today might
not even be considered in future decisions. For these reasons, it is questionable how useful
information from exploration is for improving future decisions, even if decision-makers were
making optimal use of it. This weakens the argument for exploration.
Second, even if knowledge from past explorations would give us reliable evidence about
how the various options we will consider in the future would play out, political decision-
makers in the future might not make optimal use of that information. Maybe they will
mainly base their decisions on what benefits them or on what fits with their ideology. If
information generated by exploration has a decent chance of never being used, then this
further weakens the argument for exploration.
In addition to these two worries, it is also worth noting that we might be prone to
overestimate the strength of the reason to explore (that it resolves uncertainty about policy consequences) because of a tempting but misleading analogy. The argument for exploration
in policy-making is in some respects analogous to a compelling argument for exploration in
[10] See Vivalt (forthcoming). For a more optimistic take on the generalizability of findings in these contexts, see Bates and Glennerster (2017).
[11] See Al-Ubaydli et al. (2019) for a discussion of the difficulties of scaling up successful programs. Gaus (2016, sec. 4.1) mentions the same worry about social experimentation.
our private lives. Many people consider it advisable to pick exploratory options, at least
from time to time. If you only think about this evening, then trying new food might be
less enjoyable in expectation than eating your favorite dish. But you should also take into
account the information you gain by trying new food, which might help you decide what to
eat in the future and therefore pay off handsomely in the long run.[12]
This argument for individual exploration is compelling. But as the aforementioned con-
cerns should have made clear, the individual and the collective case are relevantly disanalo-
gous. When you try new food, you can be fairly certain that your future self will nd it tasty
if and only if you nd it tasty. Moreover, you can be fairly certain that your future self will
use the information you generate. Thus, the two objections to the case for exploration apply
much less to the individual case than to the collective case. Hence, one might worry that
we find the argument for exploration compelling because it sounds similar to a compelling
argument we are familiar with from individual decision-making, even though the two settings
are relevantly disanalogous.[13]
I raised these worries partly to caution against putting too much weight on the suggested
reason to choose exploratory policies. Maybe the fact that a policy resolves uncertainty
about policy consequences is often a rather weak reason for choosing it. But I also raised the
worries because they can help us identify kinds of exploratory policies for which this reason
is particularly strong. These would be policies which generate information that is likely to
be relevant in the future and to be taken into account by future decision-makers.
One candidate class of such policies consists of policies enacted by executive agencies. Executive
agencies can operate in a more evidence-based manner than legislative assemblies, which are
arguably subject to strong forces that pressure them away from doing what a careful exami-
[12] ...unless you are near the end of your life. In fact, there is evidence that we have a biologically hard-wired tendency to choose less exploratory options as we get closer to the end of our lives, tracking the declining weight of the reason to explore (see Mata et al., 2013).
[13] As a third disanalogy, note that individual exploration, if it is worse in the short term than some less exploratory alternative, involves intrapersonal trade-offs whereas collective exploration involves interpersonal trade-offs. Not all people alive today might benefit from the information generated by exploring today. As discussed later, this gives rise to distinctive worries about collective exploration.
nation of the best evidence suggests.[14] Moreover, these agencies often work continuously on
the same problem for a long time, such as the provision of public housing. Hence, it is likely
that they will make use, in the future, of information obtained by choosing exploratory options today. To give a concrete example: it is more believable that the Federal Housing Agency will beneficially use insights from an experiment involving different ways of administering public housing projects than that Congress will beneficially use insights from implementing
a new kind of medical insurance scheme.
To sum up, maybe the most obvious reason in favor of exploratory policies is that they
might improve future decision-making by generating information about policy consequences.
The weight of this reason depends on the likelihood that the information generated today
will be relevant in the future and that future decision-makers will use it to improve their
decisions. In many cases, there is no reason to think that this likelihood is particularly
high. That said, there might be contexts, such as the creation of regulatory law by executive
agencies, in which the suggested reason for choosing exploratory policies is relatively strong.
There might be other reasons to favor a more exploratory policy in virtue of it being more
exploratory. Maybe the generated information will be benecially used by non-governmental
actors, such as companies and NGOs. Maybe choosing exploratory options is valuable be-
cause it instills a culture of 'doing what works', which will pay off in spheres of social life
other than politics. Maybe exploring is valuable because knowledge is intrinsically good. I
focused on improvements to future policy decisions because that argument for exploration
has been advanced in the literature and it strikes me as most natural and promising.[15]
4.3 The Risk-Based Objection to Exploration
There is a close connection between exploration and risk. Policies that are more exploratory
involve higher uncertainty about outcomes. Often, the uncertainty affects morally important
[14] In fact, in the U.S., the Foundations for Evidence-Based Policymaking Act of 2018 legally requires federal agencies to foster evidence-based decision-making.
[15] See the references in footnote 9.
dimensions of outcomes, such as how many people suffer from various hardships. For exam-
ple, if a government implements a new kind of health insurance scheme rather than sticking
to one which they know works moderately well, they expose the population to the risk of
inadequate medical care. The new scheme might turn out to work much better than the well-
known alternative. But it might also turn out to be a failure, causing much more suering
than the well-known alternative. Here, as in many other cases, the more exploratory alterna-
tive is riskier than a less exploratory alternative. In such cases, the riskiness of exploration
is a reason to exploit rather than to explore.
When I talk of 'risk' in this section, I refer to subjective risk. To be precise, a risk is a harm that has a non-zero subjective probability of occurring, relative to our evidence.[16] The
magnitude of a subjective risk is the subjective probability of the harm multiplied by the
gravity of the harm. For example, the magnitude of my risk of dying from an untreated
heart disease is the probability of me dying from an untreated heart disease relative to our
evidence multiplied by the magnitude of the harm of dying from an untreated heart disease.
Note that the magnitude of a subjective risk does not depend on its objective chance of
materializing but on our subjective probabilities. Even if I were genetically immune against
all heart diseases, I would still face a subjective risk of dying from an untreated heart disease
because our evidence does not show that I am immune.
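In symbols, the definition can be stated compactly as follows (a restatement of the prose above, writing $\Pr(h \mid E)$ for the subjective probability of a harm $h$ on our evidence $E$ and $g(h)$ for the gravity of $h$):

\[ \mathrm{magnitude}(h) \;=\; \Pr(h \mid E) \cdot g(h). \]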
Policy choices influence who is subjected to which risks and what the magnitudes of these risks are. For example, more generous state-provided health care decreases my risk of dying from an untreated heart disease. This follows from the above definition because if the policy is chosen, the subjective probability of my dying from an untreated heart disease decreases. Clearly, every risk that someone faces if we choose policy A but not if we choose policy B is a reason against choosing A over B.[17] Equally clearly, this reason might
[16] I will leave unspecified exactly whose subjective probabilities I talk about or relative to which body of evidence these probabilities are computed. The question of whose probabilities matter when we assess reasons for and against policies does not need to be tackled for the purposes of this discussion.
[17] Hayenhjelm and Wolff (2012) provide an overview of the relevant literature. See also Frick (2015) and Oberdiek (2017) for some recent work.
well be outweighed by other considerations in favor of policy A. The point I am making is
only that exploratory policies often have more uncertain consequences with larger risks of
harm for individuals than less exploratory alternatives.[18]
There seems to be little room for a proponent of exploration to deny that exploratory
options often introduce many risks of greater magnitude than less exploratory options. It is
also hard to deny that such risks constitute reasons against the more exploratory options.
The argument must then be made that the reasons in favor of the exploratory options
outweigh the reasons generated by the possibility that things go wrong. That said, in many
cases, there is another response available that might weaken the risk-based objection against
exploration.
The response is based on a special feature of many exploratory policies which sets them
apart from many other risky policies: by generating risks on us, exploratory policies often
reduce risks on future people. A risk on a future person is a harm to a future person
that is not ruled out by our evidence. Exploring reduces risks on future people because we
consider it less likely, based on our evidence, that future people will suffer various harms if we
bequeath better information about policy consequences to them. For example, suppose that
a few decades from now, a novel coronavirus starts to spread. If more policy exploration had happened during the COVID-19 pandemic, future governments would have a better idea of
the consequences of various policy responses to the new pandemic. We should then take it
to be less likely that future people will be harmed in a variety of ways, given that they will
decide based on better information.
Many policies which are risky but not exploratory do not reduce risks on future people.
For example, an agricultural policy whose outcome depends heavily on the weather in the
next season is subjectively risky, but implementing it will not help us predict the effects of
agricultural policies in the future. The risks on us associated with that policy do not help
to reduce risks on future people.
[18] There are difficult questions about how to aggregate various risks imposed on different individuals to determine whether one policy is overall riskier than another. I will not address these issues here.
It is important to clarify what I mean when I say that, often, 'exploring reduces risks on future people'. I mean that there are certain risks, such as the risk of dying in a pandemic, and that those risks will be smaller for the people who would exist in the future if we explored more today than they would be for the people who would exist in the future if we explored less today. I do not mean that there is any particular person who will exist in the future and who would face smaller risks if we explored more today. Since our policy choices affect the identities of future people, there might be no such person.[19]
The upshot of this discussion is that we must qualify the risk-based objection against exploratory policies. They might impose more risks on us than less exploratory alternatives, but they also often decrease risks on future people. If an exploratory policy only increased risks on us, this would unambiguously count against it. But if it also reduces risks on future people, it is less clear whether the risk-based objection is compelling. On a broadly utilitarian account, a risk-imposition might be impermissible because it decreases people's expected welfare. But if the risk-imposition allows future people to face smaller risks, then this might at least partly offset the badness of risks on us, depending on the magnitude of the future risk reduction and the extent to which we discount future utility relative to present utility.[20] On a broadly contractualist account, a risk-imposition might be impermissible because any principle which permits imposing such risks in the relevant circumstances is reasonably rejectable.[21] There are clearly complaints against risky exploration that can be raised on behalf of present people whose well-being is threatened by the risk-imposition. But if exploration also decreases risks on future people, then this can offset the complaints of present people within a contractualist framework, at least according to some contractualists.[22] Contractualists disagree about exactly how this would go. They cannot say that future people could raise
[19] See Parfit (1984), for example.
[20] See Greaves (2017) on the question of discounting.
[21] See Scanlon (1998). How exactly to cash out the contractualist slogan in the context of risk is the subject of an ongoing debate. See Frick (2015) and Horton (2017), for instance.
[22] Other contractualists might deny that we should weigh risks for present people against risks for future people within a contractualist theory. Rather, they might say that their theory only captures a subset of all moral considerations, and that considerations having to do with future people lie beyond the scope of their theory.
complaints about their well-being being exposed to greater risks because we chose not to
explore. After all, for any particular future person, that person would not have existed at all
had we chosen to explore. That said, various ways of incorporating concerns about future
people have been proposed by contractualists, and these could be applied to the problem at
hand.[23]
Needless to say, the extent to which this defense of exploration against the risk-based
objection works depends on the magnitude of the risk increase for present people and the
risk decrease for future people that exploration would bring about. In particular, the as-
sumption that an exploratory policy removes risks from future people will usually rely on the
assumption that exploration improves future decisions, which, as I discussed in the previous
section, is only sometimes plausible.[24] Also, whether the risks introduced by exploration
are a reason to exploit rather than explore depends on whether there are less exploratory
alternatives available with fewer associated risks.
4.4 The Distribution-Based Objection to Experimentation
Some exploratory policies are policy experiments. In policy experiments, different citizens are randomly exposed to different policies. Such experiments might lead to inequalities. If I get exposed to a different policy than you, then I might have to pay lower taxes than you, for instance, and therefore end up richer. In this section, I will argue that policy experiments are often less distributively just than non-experimental alternatives, at least according to standard egalitarian theories of distributive justice.
[23] See Kumar (2009).
[24] Throughout this section, I have focused on the reduction of risks on future people. But if exploration would likely improve decisions in, say, ten years from now, then many of the beneficiaries are already alive today. In that case, the same people who are exposed to new risks through exploration in the next decade might also face fewer risks after that. Exploration might then not be riskier on us, all risks considered, than a less exploratory option.
4.4.1 Experiments without Reasonable Disagreement
A distribution-based objection is most natural against those policy experiments in which it is clear even before the experiment is conducted that people in one treatment arm will be better off than people in another. For example, in the Jobs-Plus Community Revitalization Initiative for Public Housing Families ('Jobs-Plus'), residents of randomly chosen public-housing projects in the U.S. received free extra benefits, such as employment services at on-site job centers.[25]
Suppose that Amber and Brooke are residents of public housing projects and that they have the same employment history, family status, financial resources, marketable talents, and so on. Suppose further that residents of Amber's but not Brooke's project were chosen to receive additional benefits. Then, Amber receives more benefits than Brooke, but she does not have to make any extra sacrifices in return, such as paying higher taxes. Thus, Amber receives a larger share of society's resources than Brooke. She also has more opportunity for welfare, assuming that by using the free employment services, she can make her life go better. And she might be better off than Brooke along other normatively relevant dimensions as well.
Many egalitarian theories of distributive justice appear to entail that this inequality sets back distributive justice. Such theories endorse
(Ecumenical Egalitarianism) Policy A is less distributively just than policy B if choosing policy A rather than policy B leads to more inequality in the currency of egalitarian justice that cannot be justified as
a) the result of unequal choices or
b) compensation for unequal natural endowments.
Egalitarian theories differ in which currency of egalitarian justice they espouse. Prominent candidates include opportunity for welfare and resources.[26] But those differences do not
[25] See https://www.hud.gov/program_offices/public_indian_housing/jpi (accessed December 2020).
[26] See Arneson (1989) and Dworkin (1981). Cohen (1989) argues for 'advantage' as the correct currency of egalitarian justice, which is slightly broader than welfare. All the arguments could be made in terms of that notion, too.
matter for the distribution-based objection against policy experiments. After all, experiments such as Jobs-Plus bring about inequalities along all such dimensions. Since Brooke does not get the services Amber gets, she has less opportunity for welfare than Amber, fewer resources than Amber, and so on. These inequalities furthermore cannot be justified by choices Amber and Brooke made. For example, we can assume that Brooke never chose to work less hard than Amber did. Moreover, by assumption, Brooke has the same natural endowments as Amber. For instance, she does not have more marketable talents than Amber. Hence, Jobs-Plus causes inequalities along several dimensions in the absence of different choices or different natural endowments. It therefore seems prone to being less distributively just than alternative policies we could enact.
Before I discuss the effect of policy experiments on distributive justice in more depth, it should be noted that (Ecumenical Egalitarianism) is controversial. Proponents of policy experiments might therefore respond to the objection by rejecting the principle on which it rests. First, they might endorse a sufficientarian view of distributive justice.[27] On that view, as long as Amber, Brooke, and all other people are above some threshold, the policy experiment maximally realizes distributive justice, no matter how large the inequalities above the threshold are. Second, they might appeal to a pure ex ante view of distributive justice, on which inequalities do not set back distributive justice if the parties had equal expected shares at some point in the past and then, due to brute luck, unequal distributive shares came about.[28] Since Amber and Brooke had equal chances of their housing project being chosen for increased benefits, it would not set back distributive justice if Amber were now better off than Brooke. Finally, proponents of policy experimentation might say that the concern with distributive justice, thus construed, is misguided. We have reason to bring about relational equality (the absence of social hierarchies), and distributive equality only
[27] See Frankfurt (1987).
[28] For some criticism of a pure ex ante view, see Arneson (1999, pp. 489–493) and Lippert-Rasmussen (1999, pp. 482–484), for instance. Of course, one can reject a pure ex ante view but still say that ex ante equality makes the objection weaker than it would otherwise be. While this is plausible, it is worth noting that some argue that ex ante equality does not reduce the degree of unfairness of ex post inequality (see Hyams, 2017).
matters insofar as it is conducive to relational equality.[29] On this view, the distribution-
based objection to policy experiments does not work. The distributive inequality Jobs-Plus
introduces does not seem particularly prone to generating social hierarchies. Those who
receive more resources will not thereby be able to boss around those who do not or become
their social superiors in other respects. But while some might disagree with (Ecumenical
Egalitarianism) in various ways, many are attracted to it or to closely related principles. It is
therefore worth exploring to what extent such views are in tension with policy experiments.
It is important to note that a natural way of modifying (Ecumenical Egalitarianism) rarely helps with defending policy experiments: adding that, if no one is worse off under policy A than under policy B, then policy A is at least as distributively just as policy B, no matter whether it features more inequalities or not.[30] This modification of (Ecumenical Egalitarianism) is appealing because, unlike (Ecumenical Egalitarianism), it never endorses making some people worse off and no one better off for the sake of distributive justice. However, modifying (Ecumenical Egalitarianism) in this way would hardly help with defending actual policy experiments. In the case of Jobs-Plus, there is an alternative policy in which the distribution of shares is likely to be more egalitarian and in which Brooke is strictly better off than under the more unequal distribution of Jobs-Plus: we could have just given all public housing residents a small amount of extra benefits rather than giving a select few residents a large amount of extra benefits. Jobs-Plus makes some worse off than this policy, so the modification of (Ecumenical Egalitarianism) will still entail that Jobs-Plus is more distributively unjust than that policy.
Noteworthy exceptions are cases in which a superior policy cannot be rolled out to
everyone at the same time due to logistical constraints. In that case, rolling out the policy
to only one part of the population might be a Pareto improvement over all alternatives with
more equal distributions. According to the weakened version of (Ecumenical Egalitarianism),
[29] On distributive and relational egalitarianism, see Anderson (1999).
[30] See Rawls (1999) for a related view. Since it contradicts the view that all inequalities due to brute luck set back distributive justice, luck egalitarians tend to disagree with this view (see Cohen, 2008, for instance).
it would then not be more distributively unjust than those alternatives. Policy experiments
are sometimes conducted in such circumstances.[31]
Let us now turn to replies to the distribution-based objection which do not give up (Ec-
umenical Egalitarianism). One natural reply starts from the observation that what matters
for distributive justice is the overall set of inequalities across all people's shares. Maybe
Jobs-Plus introduces inequalities between Amber and Brooke, but it also reduces inequal-
ities between Amber and other people in the society. To illustrate, if Amber and Brooke
both have much less than almost all other people in the society, and this is distributively
unjust, then bumping up Amber's share might make the distribution of shares more just. It
introduces an inequality between Amber and Brooke, but it also decreases many inequalities
between Amber and other people.[32] Thus, quite often, a policy experiment might not set
back distributive justice.
But this only shows that, relative to the status quo, the experiment might promote rather
than set back distributive justice. It might still be that there is some other alternative which
promotes distributive justice even more. In particular, a policy experiment will only make
the distribution more just if the share of those who are randomly chosen to receive extra
benefits is brought closer to the share of those who unjustifiably have more. But then, giving extra benefits to those who are not chosen would also reduce such inequalities. After all, since they are randomly sampled from the same population as those who are chosen to receive extra benefits, they will have roughly equal distributive shares. For example, suppose residents of public housing projects currently have unjustifiably few resources relative to some
[31] For example, Wood et al. (2020) evaluate the efficacy of a specific type of police training in Chicago. It was plausibly infeasible to implement the training program in all neighborhoods at the same time. Thus, the only alternative to implementing it in different neighborhoods at different times is not to implement it at all, which is arguably worse for everyone than a staggered roll-out.
Even if one wanted to stick to the unmodified version of (Ecumenical Egalitarianism), such policy experiments seem likely to be most choiceworthy, all things considered. They only introduce distributive injustices temporarily, until we can roll out the policy to everyone, and they allow us to eventually make everyone better off.
[32] I assume here that the degree to which distributive justice is realized is determined by some aggregate of all the unjustified inequalities. Alternatively, one might say that it is determined by the total distance between individual shares and their ideal shares if the distribution were perfectly just. The discussion in this section could be framed in these terms, too.
other group of people, and Jobs-Plus thus reduces an unjustified inequality for some public housing residents. Then, an even better alternative from the perspective of distributive justice would be to just give these benefits to all residents. Hence, Jobs-Plus would still be
less choiceworthy from the point of view of distributive justice than some less exploratory
alternative, even though the relevant alternative would not be the status quo, but rather the
alternative of implementing the intervention for everyone in the target population.
A class of situations in which policy experiments might seem not to set back distributive justice relative to any available alternative is the following: situations in which we do not know whose shares are too high and whose shares are too low, but we have good reason to believe that there are still unjustifiable inequalities.[33] We might end up in such a situation if we tried as best we could to fix all inequalities that are distributively unjust. Then, we should still believe that things are not perfect because, for example, we cannot obtain perfect knowledge of people's actual shares and the shares they ought to have. But we would not be able to identify any particular inequalities which we could fix by enacting appropriate policies. In such a situation, there is no alternative which seems better than the
status quo from the perspective of distributive justice.
It is tempting to think that in such situations with irremovable background noise of
inequalities, policy experiments which modify the distributive shares indiscriminately across
social groups would, in expectation, be at least as distributively just as the status quo.
It would then follow that they would also be at least as distributively just as any other
alternative. To give a simple example, suppose that 10,000 randomly chosen citizens get a
cash transfer of $1,000 and 10,000 randomly chosen citizens must pay an extra $1,000 in
taxes. In that case, some of the recipients who are made better off would have a pre-transfer
share below their fair share, while others would have a pre-transfer share above their fair
share. Similarly, some of the people who would have to pay extra taxes are above their fair
[33] Our societies are arguably not like that. Thus, this response to the distribution-based objection is most convincing in worlds closer to an ideal world, in which the obvious distributive injustices from an egalitarian perspective have been removed.
share while some are below that share. Thus, one might think that the gains and losses
in distributive justice cancel out in expectation. Then, the experiment would not set back
distributive justice relative to the status quo.
But this reasoning is mistaken. Adding random perturbations to people's distributive
shares will, in expectation, increase their distance from the target value. To illustrate, if you
have ten glasses, each of which is likely to be slightly above or below its target content of
300ml, then blindly adding 10ml to or removing 10ml from each glass will, in expectation,
increase the deviation from their target content of 300ml.[34] Similarly, policy experiments set
back distributive justice relative to the status quo, even if we acknowledge that they occur
against an irremovable background of inequalities.
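To see the point numerically, here is a minimal simulation sketch in Python (the function name and the specific standard deviations are illustrative assumptions of mine; the setup mirrors the normal-distribution model in footnote 34):

import random

def mean_abs_deviation(sigma_d, sigma_e, trials=100_000):
    # D: existing deviation of a share from its ideal value (background noise)
    # E: random perturbation added by the blind experiment
    before = after = 0.0
    for _ in range(trials):
        d = random.gauss(0, sigma_d)
        e = random.gauss(0, sigma_e)
        before += abs(d)      # deviation without the experiment
        after += abs(d + e)   # deviation with the experiment
    return before / trials, after / trials

print(mean_abs_deviation(10, 10))

Since the expected absolute value of a normal variable with mean 0 and standard deviation sigma is sigma times the square root of 2/pi, the output should be roughly (8.0, 11.3): the blind perturbation raises the expected deviation by a factor of about the square root of 2, just as the glasses example suggests.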
Before we move on, it is worth discussing a defense of policy experiments against an objection that is related to the distribution-based objection I discussed in this section. Douglas MacKay examines the worry that policy experiments such as Jobs-Plus violate the government's duty not to treat people differently on the basis of morally arbitrary grounds.[35] This worry is different from the distribution-based objection. That said, 'bringing about that they have unequal distributive shares' could be seen as a specific way of 'treating differently', and 'not justified based on unequal choices or natural endowments' might be a necessary condition for being 'based on morally arbitrary grounds'. Thus, MacKay's objection merits discussion at this point.
MacKay argues that differential treatment does not violate the government's duty not to treat people differently on the basis of morally arbitrary grounds as long as the differential treatment 1) "is expected to significantly advance the realization of one of the government's purposes", 2) this purpose is "more valuable than the purposes frustrated by the differential
[34] Let $D$ be the current distance of an individual's share from the ideal share that that individual should have. Assume that $D$ is normally distributed with an expected value of 0 and a standard deviation of $\sigma_1$, for some $\sigma_1 > 0$. Performing the experiment means that the distance of the individual's share from the ideal share is $D + E$, where $E$ is normally distributed with an expected value of 0 and a standard deviation of $\sigma_2 > 0$. $D + E$ will also be normally distributed, but with standard deviation $\sqrt{\sigma_1^2 + \sigma_2^2}$. Since $D + E$ has a higher standard deviation than $D$, $|D + E|$ has a higher expected value than $|D|$. Hence, performing the experiment increases the expected deviation between people's shares and their target values.
[35] See MacKay (2020).
treatment", and 3) \there is no nondierential treatment by which the government may
signicantly advance the realization of its purpose that would not result in undue burdens
on it".
36
According to MacKay, some policy experiments might satisfy these conditions. For
example, he discusses a program in Ontario in which randomly chosen people were paid a
basic income. He points out that
Ontario was aiming to produce knowledge that is relevant to raising the living
standards of low-income residents. This purpose is very valuable since Ontario
arguably has a duty of justice to realize it.
At the same time, assuming that the government has no duty to implement a universal basic income for everyone (because it would be unsustainably expensive, for instance), no Ontarian has a claim of justice to it. Thus, the 'purposes' which were frustrated by the experiment, namely the interests of those who were not chosen to receive a basic income, do not weigh as heavily as the aim of the government, as the latter but not the former involves duties of justice.
This argument is at odds with (Ecumenical Egalitarianism). Assume that Brooke has no independent claim of justice to the level of benefits conferred on Amber. MacKay appears to assume that withholding the benefits from Brooke only sets back her interest in receiving employment services, which she has no independent claim of justice to. But on an egalitarian view, that assumption is false. On that view, Brooke has a claim of justice not to be made worse off than someone else who has made equal choices and has equal natural endowments. That claim is frustrated by differential treatment. Hence, the purposes frustrated by the policy experiment involve duties of justice, too.
As an aside, note that even if one rejected the distributive egalitarian view, MacKay's
argument that actual policy experiments might satisfy his three conditions seems shaky. In
particular, the third condition would also often seem to be violated. It is true that "random assignment may be necessary to produce high-quality evidence and other study designs are also likely to involve differential treatment".[37] But the government's valuable purpose (the purpose that it has a duty of justice to realize) is to increase the living standard of the poor.
It is not to produce high-quality evidence. Thus, as long as there are other ways to satisfy the duty of justice to improve the living standard of the poor (ways which do not involve differential treatment), the third condition is violated. It is far from clear that there are no other ways of raising the living conditions of the poor than running the policy experiment. Thus, accepting MacKay's conditions for when differential treatment is justifiable does not seem to help much in defending actual policy experiments against the charge that the differential treatment is morally problematic.
4.4.2 Experiments with Reasonable Disagreement
Proponents of policy experiments could agree that Jobs-Plus is susceptible to the objection that it is less distributively just than some available alternatives. But they could point out that Jobs-Plus is, in one respect, unlike many other policy experiments. As I said, it was clear before Jobs-Plus was launched which arm of the experiment was associated with higher distributive shares. By 'clear', I meant that any reasonable assessment of the empirical evidence combined with any reasonable way to assess distributive shares yields the verdict that one arm gives people higher distributive shares than another. But for many policy experiments, there is reasonable ex ante disagreement about which arm gives people higher distributive shares. This disagreement might be partly or wholly empirical. The evidence might be sparse and ambiguous, so that different reasonable people come to different conclusions about which arm will benefit people most. The disagreement might also be partly or wholly normative. For example, people might disagree about which arm offers greater opportunity for welfare because they disagree about what high welfare consists in.
As an example of a case with ex ante disagreement, consider a variant of Jobs-Plus in
which Amber must accept a rent increase in exchange for employment services. We could
[36] MacKay (2020, p. 24).
[37] MacKay (2020, p. 22).
set the size of the rent increase so that reasonable people would disagree about whether
the employment services are worth the rent increase. That is, they would disagree over
whether Amber has more of the relevant currency of distributive justice than Brooke. Call
this imaginary program 'Jobs-Plus-Minus'.
One might claim that Jobs-Plus-Minus is less problematic. This claim would be parallel
to a common claim in the ethics of clinical trials: giving patients different treatments in a clinical trial is permissible if there is reasonable ex ante disagreement among experts about which treatment is better.[38] If experts disagree whether drug A or drug B would be best for patients in a particular population, we might as well give some patients drug A and other patients drug B. The same idea could apply to Jobs-Plus-Minus. If there is reasonable disagreement about whether increasing someone's rent and giving them employment services is better for a public housing resident than offering cheaper rent without employment services,
then we might as well randomize between the two schemes.
But mere reasonable disagreement over which arm of a policy experiment is associated
with higher shares is not enough to defuse the distribution-based objection. Suppose that
different reasonable views disagree about which arm of an experiment is likely to be associated with larger distributive shares. Even then, it is still possible that there is an alternative which all views agree is more distributively just than the experiment. For example, all reasonable views might agree that the status quo is more distributively just than Jobs-Plus-Minus. Some reasonable views hold that Jobs-Plus-Minus would make Amber better off than Brooke, while others hold that it would make Brooke better off than Amber. But they all agree that Jobs-Plus-Minus introduces inequalities which cannot be justified based on different choices or natural endowments. In other words, the disagreement is merely over who gains and who loses from a policy, not over whether the policy makes people's shares more distributively unjust. In such cases, it is not obvious why the reasonable disagreement over which arm of an experiment is better weakens the distributive objection against the policy experiment.[39]
[38] This is commonly called the 'equipoise principle'. See Freedman (1987).
[39] A similar argument could apply to clinical trials. Even if some think that drug A works better than drug B and others think that drug B works better than drug A, everyone might agree that randomizing will lead to unequal outcomes. Whether that is the basis for an objection to clinical trials depends on whether we have reason to avoid inequalities in patient outcomes.
This observation also casts doubt on related claims sometimes made in the literature on
policy experiments.[40]
I pointed out that there can be reasonable agreement that the experiment is less dis-
tributively just than the status quo or some other alternative, even if there is reasonable
disagreement about which arm of an experiment is associated with larger distributive shares.
Thus, it is mistaken to think that no concerns of distributive justice arise in the presence
of the latter kind of reasonable disagreement. Another kind of disagreement would be more
potent in warding off the distribution-based objection: if there is no alternative which all reasonable views consider more distributively just than the policy experiment, then the experiment might be 'as good as it gets' from the perspective of people in a pluralist society who try to promote distributive justice together. But reasonable disagreement over which treatment arm is best is neither necessary nor sufficient for this kind of disagreement.
4.4.3 Compensation and the Significance of Distributive Injustice
At this point, the proponent of policy experiments could concede that experiments such
as Jobs-Plus and Jobs-Plus-Minus are often more distributively unjust than alternatives
but point out that we might be able to derive variants of these experiments which are
not. In particular, we could pay compensation after the experiment to those who are in
[40] For example, MacKay claims that random assignment to policies is permissible "when there is reasonable disagreement within the social science community regarding which [of the policies A and B] is superior for the realization of [target] outcomes" (MacKay, 2020, p. 10). Simplifying a bit, MacKay argues that people have a claim to be subject to the best feasible policies. Now suppose that it is unclear which policy will best realize target outcomes. Then, if we conduct an experiment, "no participant is subject to a policy for which there are sufficiently strong reasons to judge that it will realize the relevant target outcomes to a greater or lesser degree than the BPA [best feasible] policy" (MacKay, 2020, p. 11). Thus, MacKay suggests, random assignment is permissible.
Let us grant that, because of the reasonable disagreement over which treatment arm is best, no person in the experiment can claim that they are subjected to a policy that realizes desirable outcomes less than some alternative. Even then, we might all know that the random assignment will cause more distributive injustice than some other alternative, such as the status quo. Clearly, this can make random assignment impermissible even if there is no particular individual who has an uncontroversial basis for the claim that they get the short end of the stick in the experiment.
the disadvantageous arm of the experiment and thus eradicate any inequalities in lifetime
distributive shares that the experiment might otherwise cause. For example, in Jobs-Plus, we
could pay Brooke some amount of money after the experiment concluded. In cases in which
there is ex ante disagreement about which arm of the experiment is better, we can determine
the compensation scheme after the experiment has concluded, at which point we hopefully have
a better idea about the benefits and burdens associated with the different arms.
But compensated variants of policy experiments fail to fully address the worry. There
will be some people who were in the worse arm of the experiment and who do not live long
enough to be compensated. The compensated policy experiment has the same effect as the uncompensated policy experiment on their distributive shares. This set of people grows larger the longer the policy experiment runs before compensation is paid. Moreover, compensation introduces new problems which might make the policy experiment unattractive, all things considered. Compensation makes the experiment less informative because the prospect of compensation might change people's behavior, and it might do so differently across different arms of the experiment. Then, the observed differences between groups in different arms are not only the differences generated by the different policies but also include the differences generated by different expectations of compensation. Moreover, long-term follow-ups after the experiment ends would be distorted by the actual compensation people received. Policy experiments would, in effect, only give information about the merits of a policy relative to cash transfers. While this might be exactly the information we need in some contexts, it will often be less useful than information about the effect of the policy relative to a control group which is not exposed to any intervention, be it policy changes or cash transfers.
Alternatively, proponents of policy experiments could say that while policy experiments might set back distributive justice relative to other available policies, they are often justified, all things considered. To make this plausible, they might question how weighty the distribution-based objection against policy experiments really is. They might point out that we usually do not worry about small distributive inequalities, even if they cannot be justified by different choices or different natural endowments. For example, suppose that Amber and Brooke live in different neighborhoods and that the state builds a public employment office in Amber's neighborhood. Let us assume that building more than one such office would be unjustifiable given the small size of the town. By building the office in Amber's neighborhood, the state confers a greater benefit on Amber than on Brooke. Amber can access employment services at a smaller cost than Brooke. This plausibly constitutes an inequality (albeit a small one) in several suggested currencies of egalitarian justice, such as resources or opportunity for welfare. But it would be absurd to say that this is a weighty reason against building the public employment office. Thus, even if the distribution-based objection against policy experiments is sound, the reason it provides is easily outweighed by other considerations.
As I said, it is not my goal in this section to show that policy experiments are always impermissible, all things considered. Thus, I am happy to concede that an inequality-inducing policy is sometimes the most desirable option. This is particularly likely to be the case if there are no attractive alternatives to the inequality-inducing policy, such as in the case of the public employment office just described. I also agree that the objection might be relatively weak if the differences between the arms of the experiment are small.
However, it is false that real policy experiments only operate with small benefits. The actual Jobs-Plus program, for instance, conferred an array of benefits on residents of randomly chosen public housing projects, including financial incentives to seek employment. As a different example, consider the widely publicized policy experiment in Stockton, California, which involves paying randomly chosen citizens $500 a month. People in the treatment arms of such experiments receive considerably larger benefits than their similarly situated fellow citizens in the control arms, while bearing the same burdens. If one believes in the importance of distributive justice, construed along the lines of (Ecumenical Egalitarianism), why would one dismiss such violations of its demands as negligibly weak reasons against policy experiments?
To sum up, many policy experiments appear to be less distributively just than available
alternatives. This is because they confer unequal distributive shares (as measured by a wide variety of proposed currencies of distributive justice) in the absence of different choices or different natural endowments. If the differentials between the arms of the experiment are small, this might not matter too much. But often, they are large, and then it is hard to see why concerns of distributive justice are easily outweighed by the benefits of the experiment.
This objection is not necessarily any weaker in cases in which there is reasonable disagreement
about the desirability of being in various arms of the policy experiment.
It is worth noting that the distribution-based objection does not apply to all exploratory
policies. In particular, the objection is less likely to apply to a non-experimental but ex-
ploratory policy which might either turn out very well for everyone or very badly for ev-
eryone. For example, a policy experiment in which some randomly chosen people receive
additional employment services might be problematic from the perspective of distributive
justice, while a policy which gives everyone these additional services for a short time might
not be problematic.
4.5 Is Exploration through Devolution Immune to the Distribution-Based Objection?
In the introduction, I distinguished two ways in which a political community might syn-
chronously implement multiple policies. One way is to have some authority mandate that
different populations are exposed to different policies. Another way is to have some authority devolve decision-making powers to lower levels of authority. This foreseeably leads to authorities at lower levels implementing different policies. For instance, one federal state might decide to grant certain employment services to its public housing residents, while another federal state might not. Or, to take a different example, one city might decide to pay a UBI while another city might not.[41] Given that decision-makers at higher levels of authority can
foresee such policy heterogeneity, we must ask whether their decisions to devolve authority
are susceptible to the same distribution-based objection as decisions to conduct top-down
experiments.
One reason to take less issue with experimentation through devolution is that inequalities
across federal subunits might be seen as at least partly the result of the decisions of the
affected parties. Suppose Amber lives in Austin, where residents of public housing projects receive employment services, whereas Brooke lives in Bakersfield, where they do not. Suppose that, as a result, a greater share of society's resources is invested into Amber's life than into Brooke's, that Amber has greater opportunity for welfare, and so on. But while these inequalities are similar to inequalities that top-down experiments might lead to, they are different in that Brooke could get what Amber gets if she moved to Austin. She is worse off only because she chooses to stay in Bakersfield. Hence, the inequality between Amber and Brooke is due to their choices and so does not set back distributive justice according to
(Ecumenical Egalitarianism).
But in most cases, the possibility of moving will do little to defuse the egalitarian objec-
tion.[42] This is because moving is costly. To illustrate, suppose that moving to Austin would cost Brooke several thousand dollars. Moving to Austin and getting high benefits is not as good as Amber's option of staying in Austin and getting high benefits; it is several thousand dollars worse. No matter whether Brooke chooses to stay in Bakersfield or move to Austin, she would be worse off in terms of resources or opportunity for welfare than Amber. Hence, Brooke is worse off not because of her choices but merely because of the bad brute luck of living in a place where authorities have decided to confer fewer benefits on public housing
[41] In the UK, local authorities decide which health care services are provided for people in a particular region. This leads to inequalities. For example, some patients receive certain cancer drugs while others are denied those drugs purely because they live a few miles down the road. While the decision to devolve authority over healthcare spending might not have been motivated by a desire to learn about which spending patterns are most beneficial, the same egalitarian worry that I discuss in this section applies.
[42] Relatedly, Fabre (2005) argues that inequalities between people in different countries often cannot be justified by their choices.
residents than somewhere else. Hence, the inequalities generated by devolving authority do
set back distributive justice.
The possibility of moving merely caps the choice-independent disadvantage of those under the worse policy regime. Brooke's choice-independent disadvantage cannot be higher than the disadvantage (in terms of resources or opportunity for welfare) of having to move, because by moving, she gets all the benefits that Amber gets. The less costly moving is in terms of the relevant currency of justice, the smaller the choice-independent disadvantage that Brooke can suffer. But since the costs of moving are generally substantial, the possibility of choosing to move and then enjoying the same benefits as people in other jurisdictions will rarely weaken the distribution-based objection.
Alternatively, the inequality might be said to be due to Amber's and Brooke's choices
because California and Texas each choose their policies democratically. Thus, Amber and
Brooke (and, more generally, Californians and Texans) might be said to have chosen their different levels of provision. The trouble with this response is that either it justifies only a few inequalities, or it employs a notion of 'choice' that cannot plausibly justify inequalities. If Amber and Brooke directly chose the different policies by voting for them in a referendum, then one might say that the inequalities between them are due to their choices and hence unproblematic. But in real cases, many inequalities will involve people who voted against the policies that cause the inequalities. Or, more likely, these people might never have had the opportunity to vote on the policies. Of course, in some sense of 'choice', we could say that even if someone voted against a policy or never voted on it directly, they have nevertheless 'chosen' the policy in virtue of being a participant in the democratic process that selected this policy. But then, we employ a weak sense of 'choice', and it is far from clear why inequalities should be unproblematic as long as people have 'chosen' them in that
sense of the term. Indeed, it would follow that any inequalities that democracies produce are
unproblematic from the perspective of distributive egalitarians. This interpretation would
clearly misconstrue their view.
Maybe bottom-up experimentation is less problematic because weaker requirements of
distributive justice apply across subunits of authority. For example, inequalities between
Californians and Texans might be less problematic than inequalities between Californians.
Similarly, inequalities between Angelenos and San Franciscans might be less problematic than
inequalities between Angelenos. Then, bottom-up experimentation would be less problematic
because it ensures that within a subunit of authority (such as one federal state) everyone is subjected to the same treatment arm.[43]
Of course, not everyone agrees with the view that requirements of distributive justice
are weaker across different subunits of authority.[44] But even if we grant that they are
somewhat weaker, it seems hard to see how they could be much weaker. In other words, it
is hard to see why we should be very worried about inequalities between Californians but
barely worried about inequalities between Californians and Texans. For example, maybe one
thinks that requirements of distributive justice are grounded in coercive relations. Since there
are stronger coercive relations within subunits of authority than across them, requirements
of distributive justice across subunits of authority tend to be weaker. But it is unclear
whether there is much less coercion between people in California and people in Texas than
between people in California. In any case, there is clearly a fair amount of coercion between
Californians and Texans, exercised through their shared federal government. Thus, if one
thought that requirements of distributive justice are grounded in coercive relations, then
there should still be reasonably strong requirements of distributive justice between people in
different federal states. While the distribution-based objection might be somewhat weaker in bottom-up experiments, it is hard to see how it could cease to be a major concern.[45]
[43] Note that the U.S. federal government could conduct an experiment in which all citizens of the same federal state get assigned to the same treatment arm. Thus, this is not only an argument for why the distribution-based objection is weaker for bottom-up experiments but also for why it is weaker for top-down experiments that assign citizens to treatment arms in a certain way.
[44] Some even think that requirements of distributive justice are not significantly weaker between citizens of different countries than between citizens of the same country (see Caney, 2001).
[45] See Føllesdal (2001) for some further arguments that egalitarian demands apply across federal subunits. Of course, in states in which the different subunits of authority have very loose ties (much looser ties than U.S. federal states, for example), it is more plausible that requirements of distributive justice are significantly weaker across federal states than within them.
Another reason for thinking that the distribution-based objection is weaker against ex-
perimentation by devolution is that while devolution might introduce some distributive in-
equalities, it might also remove more distributive inequalities along the relevant dimension
of distributive justice than top-down experiments. For example, geographically concentrated
minorities might have less opportunity for welfare because they can never change political
decisions to conform with their reasonable conception of justice. They might always be over-
ruled by people with more widely held conceptions of justice. Devolution might decrease
the inequality of opportunity for welfare by allowing geographically concentrated minorities
to modify some of the rules that apply to them in the light of their conception of justice.
Thus, devolution might in some ways set back equality of opportunity for welfare, but it also
promotes it in other ways. Top-down experiments might have less potential to also reduce,
say, inequalities of opportunity for welfare. Therefore, the distribution-based objection is
weaker in the context of devolution.
I agree that if devolution also decreased distributive inequalities in some ways, then this
would weaken the objection. However, while it is easy to find examples in which distributive inequalities are created through different policy regimes in different jurisdictions, it is much
less clear that devolution removes inequalities to a similar extent by empowering minorities.
The challenge for the defender of devolution is to make a convincing case that it does.
Not only is it unclear whether the distribution-based objection would be weaker for
bottom-up rather than top-down experiments. It should also be noted that the reason for
policy exploration is clearly weaker for bottom-up experiments. The information obtained
from bottom-up experiments is less useful because there are many differences between, say, California and Texas, and it will therefore be difficult to tease out the causal contribution of a policy implemented in Texas but not California by comparing outcomes in both states. Top-down randomized controlled trials are much more potent tools for learning about the effects of policies. This casts doubt on the argument for devolution based on policy exploration. The main reason for exploration concerns the improvement of future decisions, and devolution is less likely to achieve this while being similarly problematic from the perspective of distributive justice.[46]
4.6 Conclusions
Policy-makers frequently face the explore-exploit tradeoff. They must choose between policies with uncertain consequences which would resolve much uncertainty about policy consequences and policies that they know will turn out moderately well. While explore-exploit decisions are well studied in a variety of contexts and disciplines, the moral considerations bearing on how to make explore-exploit decisions at the policy level seem relatively neglected.
This chapter started to fill this gap. It took a closer look at the argument that more
exploratory options are more choiceworthy because they improve future decisions. It also
discussed a worry that often applies to more exploratory policies, namely, that they are riskier
than less exploratory alternatives. Finally, it discussed the worry that policy experiments
are often less distributively just than non-experimental alternatives.
Of course, any policy decision involves many reasons for and against the available alter-
natives. The reasons stemming from some policies being more exploratory than others are
just one subset of those reasons. Sometimes, they will be insignificant relative to other con-
siderations. But at other times, it will be important that we have an accurate understanding
of them to make good decisions.
[46] To be clear, devolving authority might still be the most choiceworthy option in many cases. There are many reasons to devolve authority that have nothing to do with exploration.
Chapter 5
Science Advice: Making Credences Accurate
Abstract
Policy-makers often rely on scientists to inform their decisions. When advising policy-makers,
may scientists make claims such as 'X is toxic' even if their evidence does not conclusively
show that X is toxic? One view says that scientists always ought to make their uncertainty
explicit. Another view says that scientists ought to take moral consequences of policies into
account when they decide what to say. We propose a third view: scientists should say what
maximizes the expected accuracy of policy-makers' credences.[1]
5.1 Introduction
Many decisions that policy-makers face involve difficult empirical questions. For example,
the decision whether to regulate a certain substance involves the empirical question whether
the substance is toxic. The decision whether to prohibit the construction of new nuclear
power plants involves empirical questions concerning the safety of nuclear reactors. Since
policy-makers need to make those decisions but cannot be experts in all relevant domains,
[1] This chapter is based on joint work with Deniz Sarikaya.
states have developed mechanisms to bring policy-makers into contact with scientists. For example, a policy-maker might invite a scientist to an expert briefing in which the scientist
reports on the toxicity of a substance or the safety of nuclear reactors.
This chapter explores views about what scientists should say when they advise policy-
makers. One view holds that scientists should say what they have a high credence in. For
example, they should not assert 'X is toxic' if they only have a credence of 0.7 that X is toxic. Rather, they should make their uncertainty explicit and say something weaker that they have a high credence in, such as 'it is likely that X is toxic'.[2] Another view holds that
scientists should say what they expect to have the best policy consequences.[3] For example, scientists should not make their uncertainty explicit if they know that policy-makers would then not regulate the sale of X and scientists think that regulation would be desirable given
the likelihood of X being toxic and the social costs of failing to regulate X if it is toxic.
Neither of these two views is fundamentally concerned with the policy-maker. The first view is fundamentally concerned with the honesty of the scientist. The second view is fundamentally concerned with policy consequences and hence only derivatively concerned with the policy-maker, insofar as it is through them that science advice exerts a causal influence on policy decisions. While proponents of these views discuss the role and characteristics of policy-makers, it is somewhat surprising that their views are not fundamentally concerned with the policy-maker. After all, it is natural to think that the point of science advice is to make policy-makers more informed. This suggests considering a third view, which is fundamentally concerned with the epistemic benefit to the policy-maker. According to this view, scientists should say what they expect to make the policy-makers' credences most accurate.[4] That is, if a scientist has a credence of 0.7 in X being toxic, taking that to be
the accuracy-maximizing credence, then they should say whatever brings the policy-maker's
[2] See Betz (2013) and further references in footnote 25.
[3] See Steele (2012) and further references in footnote 10.
[4] We are not aware of any comprehensive defense of this view in print. The underlying idea of focusing on the policy-maker's epistemic benefit is shared by John (2018). Large parts of our discussion of this idea go beyond his paper. Our arguments might help assess the merits of his view as well, although his specific proposal differs from the accuracy-focused view we discuss here.
credence close to 0.7. If this requires not making uncertainty explicit or saying things they
know will bring about suboptimal policy consequences, so be it.
For ease of reference, let us state the three views as follows:
(Honesty) Advising scientists ought to say what they have a high credence in.
(Policy) Advising scientists ought to say what maximizes the expected value of the policy
consequences of their utterance.
(Accuracy) Advising scientists ought to say what maximizes the expected accuracy of their
addressee's credences in the target propositions.
Before we start discussing these views, we must clarify some central concepts. First, advising
scientists are scientists who communicate their findings to policy-makers.[5] For the purposes
of this chapter, a policy-maker is someone with the formal power to make laws. For instance,
a policy-maker might be an elected member of the legislative branch, such as a member of
the U.S. Congress. Alternatively, a policy-maker might be an employee who writes secondary
legislation at an executive agency, such as the U.S. Environmental Protection Agency. To
keep our discussion focused, we only consider norms for science advice in democracies.
Second, various accuracy measures have been proposed in the literature, and nothing in our arguments hinges on which particular measure is used.[6] For example, one could measure the accuracy of a credence Cr(p) in some proposition p by the squared distance between Cr(p) and 1 if p is true, or 0 if p is false. Thus, for example, if p is true, then the agent's credence is maximally accurate if it is 1, fairly inaccurate if it is 0.5, and maximally inaccurate if it is 0.
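Using the quadratic (Brier-style) score, which is one standard measure fitting this description, the idea can be written out as follows; we use it only for concreteness in the illustrative formulas below:
\[ A(\mathrm{Cr}, p) \;=\; -\bigl(\mathrm{Cr}(p) - v(p)\bigr)^{2}, \qquad \text{where } v(p) = 1 \text{ if } p \text{ is true and } v(p) = 0 \text{ if } p \text{ is false}. \]
Accuracy is maximal (namely 0) when the credence matches the truth value, and minimal (namely -1) when it misses it by the whole unit interval.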
5. Science advice can occur in various institutional contexts. We focus on cases in which policy-makers commission scientists to report on particular questions. Another type of science advice occurs when organizations such as the Parliamentary Office of Science and Technology of the U.K. proactively research important technologies and write reports to raise awareness among policy-makers. The goals and norms might be different for this form of science advice. For an overview of the various institutions that facilitate science advice, see Weingart and Lentsch (2009). Note also that there are processes other than science advice through which scientists might influence policy-making. Christiano (2012), for example, construes the role of scientific experts as providing a pool of theories on the basis of which policy-makers can develop legislation.
6. See Pettigrew (2016) for an overview of different measures of accuracy.
Third, the expectations in (Accuracy) and (Policy) are taken with respect to the scientist's credences. In particular, if the scientist has a credence of 0.7 in X being toxic, then the expected accuracy of the policy-maker's credence in X being toxic is maximal if the policy-maker has a credence of 0.7, too.[7] So, (Accuracy) says that scientists ought to make their addressees' credences close to their own credences.
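For the quadratic measure above, this claim is easy to verify. By the scientist's lights, the expected accuracy of the policy-maker ending up with credence x in p is
\[ \mathrm{Cr}(p)\,\bigl(-(x-1)^{2}\bigr) \;+\; \bigl(1-\mathrm{Cr}(p)\bigr)\,\bigl(-x^{2}\bigr), \]
and setting the derivative with respect to x to zero gives x = \mathrm{Cr}(p). So with a credence of 0.7 in X being toxic, the scientist maximizes expected accuracy by inducing a credence of exactly 0.7.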
Fourth, we use the term target propositions to single out the credences that (Accuracy) is concerned with. To fix ideas, here are three central properties of target propositions. First, target propositions must be relevant to what the scientist was asked. If a scientist is asked whether X is toxic, propositions about climate change are not target propositions because they are irrelevant in that conversational context. Thus, (Accuracy) would say that scientists should not decide what to say based on what makes the policy-maker's credences about climate change more accurate, even if they correctly judge that climate change is a much more morally important issue than the toxicity of X.[8] Second, target propositions are non-normative. The norm (Accuracy) does not require that the scientist make the credence in the proposition that X should be banned accurate. If (Accuracy) required that, it would threaten to collapse into (Policy). Third, target propositions are not propositions about epistemic matters. The norm (Accuracy) does not require maximizing the accuracy of the policy-maker's credences about what the scientist believes or what evidence there is. If asked whether `X is toxic', the target proposition is the proposition that X is toxic, not the proposition that X should be banned or that the scientist believes that X is toxic.
Fifth, what do we mean by `ought'? Like other activities, science advice might be governed by different kinds of norms which employ different senses of `ought'. Maybe there is a moral norm, which specifies what scientists morally ought to say, and an epistemic norm, which specifies what scientists epistemically ought to say. Then, the above norms might not be mutually exclusive: they might simply employ different senses of `ought'. But we read all these norms in a specific sense of `ought': the moral sense. We ask what scientists morally ought to say when they communicate their findings to policy-makers.
7. Here, we assume that the accuracy measure is chosen such that the expected accuracy in p relative to a credence function Cr is maximized by the probability Cr(p). The aforementioned accuracy measure has this property.
8. Grice's Maxim of Relation requires speakers to `be relevant' (see Grice, 1975). We intend to use `relevant' in the same sense as Grice. For the purposes of this chapter, we need not give a theory of when a proposition is relevant in a given conversational context. In principle, the reader can plug their favorite account into our definition of target propositions (see Wilson and Sperber, 2012, for instance). That said, it is worth noting that on any plausible account of relevance, what is relevant can come apart from what was explicitly asked. For example, if you were bitten by a snake and asked a biologist `is this snake poisonous?', the target proposition is the proposition that the snake is venomous. A snake is poisonous if it kills you when you eat it, which is clearly irrelevant in the described situation.
The three views are simplified versions of more plausible views. In particular, the views say that advising scientists should always be guided by one particular concern. This is implausible. To give a simple example, in a case in which the scientist would be killed if they followed one of the three norms, it is likely to be false that the scientist ought to follow any of them. But instead of making the views more complicated by introducing qualifications to deal with pathological cases, we hereby restrict the scope of our discussion to everyday cases of science advice that lack such exotic features. We are interested in what should guide scientists in such cases: honesty, policy, or accuracy.[9]
In the next section, we raise an objection to (Policy) and an objection to (Honesty) and explain how (Accuracy) avoids those objections. In contrast to (Policy), (Accuracy) does not permit scientists to skew their advice based on their moral assessment of policies. This is an advantage because such influence would be procedurally unjust. In contrast to (Honesty), (Accuracy) allows scientists to assert simplified claims that they have a low confidence in, if doing so would make the addressee's credences accurate. This is an advantage because it often seems implausible that scientists have to stick to saying what they have a high credence in even if that would be epistemically unhelpful for the policy-maker. In section 5.3, we turn to a problem for (Accuracy). It seems to require that scientists tell lies if this happens to maximize the expected accuracy of the policy-maker's credences. Up to this point, we keep things simple by presenting our arguments in the context of the somewhat rare cases in which an advising scientist's statements are received by a single policy-maker. In section 5.4, we then discuss how one could generalize the norm to cover cases of multiple addressees. In section 5.5, we assess actual science advice in the light of (Accuracy) to give a better idea of what (Accuracy) would amount to in practice. We conclude in section 5.6.
9. The norm (Honesty) is simplified in a further way: having a high credence in p cannot be a sufficient condition for it to be the case that one ought to say p. Otherwise, the norm would recommend saying trivial things such as `X is toxic or not'. Since our discussion of this norm only concerns high credences as a necessary condition, we do not need to complicate matters by adding a condition to rule out trivial claims.
5.2 The Case for (Accuracy)
5.2.1 Only Epistemic Values Should Guide Science Advice
Various writers argue that a scientist's moral judgments ought to at least partially guide their decisions about what empirical claims to make. For example, Katie Steele, reviving a well-known argument by Richard Rudner, considers a scientist who decides whether to report that they have medium confidence or high confidence in the claim that the state of the world S obtains.[10] She writes that[11]
They would need to guess what policy decisions advice of high confidence or medium confidence would lead to and consider the impacts if, in either case, the state S obtains or does not obtain. [...] Scientists [in making a decision about what to report] commit to an evaluation of the desirability of outcomes.
This is how (Policy) would tell the scientist to decide whether to report medium or high confidence: assess the moral value of the possible policy consequences of saying various things and then say what maximizes expected value. In contrast, (Accuracy) would tell the scientist to report medium or high confidence depending on which of the two will give the policy-maker more accurate credences in the proposition that the state S obtains. The scientist should not consult their moral judgments about the desirability of having this or that policy in place.
10. See Steele (2012) and Rudner (1953). See also Churchman (1948) and Shrader-Frechette (1994) for related views. Heather Douglas (2000, 2008, 2009) also defends a view on which moral judgments about policy consequences should influence a scientist's decision what to say. However, her view appears to be that only the scientist's moral assessment of the consequences of saying something false, but not of the consequences of saying something true, should guide their decisions (see Douglas, 2008, p. 14). We suspect that none of these writers would endorse (Policy) in full generality. For example, they would probably hold that the set of potential statements among which scientists should choose based on their moral judgments is restricted on the basis of considerations other than policy consequences (see footnote 41). Pace (Policy), scientists should not just say whatever maximizes expected policy consequences, possibly disregarding truth entirely. However, these questions do not matter for the arguments in this section. The arguments apply to any view that holds that scientists should in standard cases of science advice decide what to say based on their moral judgments about policy consequences, whether that decision is made from a set of options that was already narrowed down or not.
11. Steele (2012, pp. 898–899).
A problem with (Policy) becomes apparent once we appreciate that what we morally ought to do when we act within a political system is sometimes not what has the best policy consequences. This is because, in addition to bringing about good policy consequences, it also matters that we follow just procedures. For example, you might find yourself in a situation in which you could bring about better policy consequences by conducting electoral fraud. Nevertheless, except in extraordinary circumstances, you should not conduct electoral fraud because doing so would undermine procedural justice.[12]
The problem with (Policy) is that it tells scientists to act in a way which undermines procedural justice, under a broadly democratic understanding of this ideal. For example, a policy-maker is more likely to pass a policy banning X if a scientist says `X is toxic' than if they do not say that. If an advising scientist is more likely to say `X is toxic' if they have certain moral views, as (Policy) implies, then a policy-maker will be more likely to ban X if the advising scientist has certain moral views. But it is procedurally unjust that the advising scientist's moral views influence policy decisions in this way.[13]
Why think that this influence is procedurally unjust? After all, the mere fact that some non-elected person's moral judgments exercise a large influence on a policy-maker's decision might not be procedurally unjust. For example, suppose that a member of an NGO advocates for a particular policy position in the presence of the policy-maker. In that case, the moral views of a non-elected person (the member of an NGO) exert a strong influence on the decisions of a policy-maker. But this, one might say, is entirely unproblematic from the point of view of procedural justice. It is therefore unclear why the influence of scientists' moral judgments should be problematic.
12. There is a large literature on why following just (and, in particular, democratic) procedures matters in addition to arriving at just outcomes. For some recent work, see Christiano (2008), Estlund (2009), and Kolodny (2014).
13. The thought that there is something anti-democratic about allowing scientists to use moral judgments when they communicate has been articulated before (see, for instance, Pielke (2007), Betz (2013), and de Melo-Martín and Intemann (2016); on the tension between democratic ideals and governance by scientists, see Sartori (1987, pp. 434–439)). But often, this objection is made rather quickly without a careful explanation of what exactly is anti-democratic about scientists using moral judgments when they decide what to say about empirical matters. We hope that our discussion will help to sharpen the argument.
We agree that procedural justice might sometimes be consistent with a non-elected person's moral judgments having a large influence on policy decisions. But we contend that procedural justice requires that, in such situations, policy-makers can viably deny the person's moral judgment that influence. That is, the policy-maker's decision to give weight to a non-elected person's moral judgments must not be due to the policy-maker being put in a situation in which not giving weight to the non-elected person's moral judgments would be very costly. Only if that condition is satisfied can we say that by choosing to let that person's moral judgment influence their policy decisions, rather than merely resigning themselves to the influence because there is no viable alternative, the policy-maker legitimized that influence. If the policy-maker cannot viably deny the non-elected person's moral judgment the influence on their policy decisions, then this influence is undemocratic and procedurally unjust.
This condition is satisfied in the case of the NGO. The democratically legitimized policy-maker had the viable option to refrain from changing their decisions based on the arguments given by the member of the NGO. They freely decided to give the NGO member's moral judgments more weight and thereby legitimized this influence. The condition also explains why other cases are problematic. Suppose that a blackmailer credibly threatens to harm the policy-maker if they do not make policy decisions according to the blackmailer's moral judgments. The policy-maker decides to comply with the blackmailer's demands. Clearly, this influence of the blackmailer's moral judgments is in tension with democratic ideals. The condition we proposed generates this verdict. The condition is violated because it is not a viable option for the policy-maker to deny the blackmailer's moral judgments influence on policy decisions: the costs of doing so would be prohibitively high. Hence, the policy-maker's decision to give weight to the blackmailer's moral judgments does not make the influence unproblematic from the point of view of procedural justice.
If scientists followed (Policy), science advice would violate this constraint: policy-makers would often lack a viable alternative to letting scientists' moral judgments influence their decisions. Thus, the influence of the scientists' moral judgments would be procedurally unjust. They would lack a viable alternative because, first, the only alternative would be to not let their decisions be influenced by what the scientist says, and, second, that is not a viable alternative.
First, if policy-makers changed what they do based on the scientist's empirical claims, then they would give the scientist's moral judgments weight. For example, if they let their decisions be influenced by the statement `X is toxic', they could not help but let their decision be influenced by the scientist's moral judgments, in the sense that a scientist with different moral judgments might have made different empirical statements, which would have led the policy-maker to do something else. Thus, to deny the scientist's moral judgments influence on their policy decisions, the policy-maker would have to avoid changing what they do based on the scientist's empirical claims. If (Policy) were adopted, policy-makers would be put in a situation in which they have to decide between ignoring scientists altogether or having their policy decisions be influenced by the scientist's moral judgments.
But, second, ignoring scientists is not a viable option. The policy-maker knows that they would impose severe risks on others if they made decisions about policies that crucially depend on empirical matters without getting informed. In such a situation, many policy-makers would rightly accept that the scientist's moral judgments influence their decisions, but they might only accept it because the alternative is out of the question. Thus, the moral judgments of some non-elected person would get a greater influence on a policy-maker's decisions, but only because the policy-maker cannot viably deny them that influence. That, we contend, is problematic from the point of view of a broadly democratic conception of procedural justice.
If scientists followed (Accuracy), then their moral judgments about policy consequences would not get a special influence on policy decisions. Scientists would decide what to say based on empirical judgments about what makes the policy-maker's credences accurate. They would not use moral judgments about policy consequences to decide what to say. Thus, if the policy-maker changes their policy decisions based on the empirical claims a scientist makes, they do not also have to accept that their decisions will thereby be sensitive to the scientist's moral judgments. Hence, (Accuracy) does not undermine procedural justice in the way (Policy) does.
As a response to this objection, a proponent of (Policy) might say that an advising scientist should explain to the policy-maker how their moral judgments shape what empirical claims they make.[14] Then, the policy-maker could correct for the influence of the scientist's moral judgments on the scientist's empirical claims. In effect, the policy-maker could infer which empirical judgments and moral judgments the scientist combined in order to arrive at the statements they made. They would then have the viable option to permit the scientist's empirical judgments but not their moral judgments to influence policy decisions.[15]
But proponents of (Policy) cannot plausibly make this move. They insist that scientists cannot and should not try to communicate the complex empirical judgments underlying the less nuanced claims one encounters in science advice.[16] This is inconsistent with also saying that if scientists present both the less nuanced claims and the moral judgments, then policy-makers can infer the underlying empirical judgments and rely on them without also having their decision be influenced by the scientist's moral views. In that case, scientists would be able to communicate their complex underlying empirical judgments, albeit in a somewhat roundabout way.[17]
14. See Douglas (2008).
15. See Elliott and McKaughan (2014).
16. We will discuss and further support this view in the next section.
17. See de Melo-Martín and Intemann (2016, p. 512) and Elliott and Richards (2017b, p. 272) for further worries about this response.
Alternatively, proponents of (Policy) might say that while being transparent about one's values does not enable policy-makers to rely on the empirical advice without also having their decision be influenced by the scientist's normative views, policy-makers could at least choose to listen to scientists whose moral views coincide with their own moral views. After all, while policy-makers cannot viably ignore all scientists, they can viably ignore some scientists.
But for a policy-maker to avoid the choice between not listening to any scientists or accepting that their policy decision be influenced by whatever moral judgments the advising scientist happens to make, there must always be many scientists with different moral views for each policy-relevant empirical question. Otherwise, a policy-maker might be unable to find any scientist with moral views that match their own. They would then again face the choice between ignoring all scientists or letting their policy decision be influenced by moral judgments they do not approve of. Given the diversity of moral views that policy-makers in pluralistic societies represent, it is unrealistic that for any representative and any policy question that requires science advice, there will be a matching scientist with the required combination of moral views and empirical expertise.[18]
As a different response, the proponent of (Policy) might defend an `institutional' version of (Policy). According to this view, rather than using their own moral judgments about policy consequences to decide what to say, scientists should rely on norms which capture such moral judgments. Much as there are norms about which strengths of evidence you need to convict someone of a crime or of a civil offense, there could be norms about which strength of evidence you need to declare a substance safe that would kill people if it was unsafe or to declare a substance safe that would cause mild headaches if it was unsafe. If scientists followed such norms when they decided what to say, they would not use their own moral judgments about the moral disvalue of people being killed or experiencing headaches. Rather, they would be guided by the moral judgments that underlie the norms. Furthermore, the norms and the underlying moral judgments could be democratically selected. Maybe they would not be directly voted on by elected representatives, but they could plausibly be defined by appointed officials who are accountable to the legislature.[19]
18. See de Melo-Martín and Intemann (2016, p. 511) for a similar objection in the context of the idea that diverse advisory boards should scrutinize how values are used in scientific studies.
The institutional version of (Policy) faces a dilemma. Either the norms are sufficiently precise to single out a narrow set of possible statements for any question a scientist is asked to give advice on and any evidential situation they might find themselves in. In that case, precise value trade-offs must underlie these norms. Even if these are democratically approved, many elected representatives will disagree, just as many elected representatives disagree with other democratic decisions. It seems highly undesirable that officials are presented with the choice of either not listening to scientists or accepting moral views that they reject. Representatives should be able to decide in an informed way based on their own moral judgments or those of the constituency they represent, even if those moral judgments are not shared by the majority. Moreover, one might worry that spelling out norms to a degree of precision that leaves scientists with little freedom about what to say in any epistemic situation is impractical. The other option is that the norms leave scientists much freedom about what to say. But in that case, the proponent of the institutional version of (Policy) would presumably say that scientists should then fall back on their own moral judgments to decide what to say. But if scientists' moral judgments largely drive what empirical claims they make and norms only give rough guidance, the original objection applies. The institutional version of (Policy), whether it assumes precise or imprecise norms, fails to offer an appealing response to the objection.
Rather than trying to argue that (Policy) is consistent with the alleged demands of democratic ideals, one might suggest that (Accuracy) also requires moral judgments about policy consequences. Then, (Accuracy) would seem to be just as inconsistent with the demands of procedural justice as (Policy). One might worry, in particular, that (Accuracy) requires scientists to make moral judgments about policy consequences in situations in which there is more than one target proposition.
19. This institution-centered way of thinking about values in science has been proposed by Wilholt (2013) and Steel (2016), for instance. Steel (2016) argues that there are already such norms in place, and that scientists have no choice but to stick to them because, otherwise, they face the danger of having their work ignored or not receiving funding. We agree that concerning some decisions that scientists make, such as how to run an experiment, scientists do not have the freedom to pick whatever alternative they judge to have the best consequences. But when it comes to evidential thresholds in science advice, the norms we actually have clearly leave scientists with some freedom to make their own decisions (see also Steele, 2012, p. 899). Even if there were such precise norms, it is unclear how reliably they would be enforced given that some science advice happens behind closed doors.
Roughly speaking, we defined target propositions as the propositions that are relevant in the conversational context. It is clear that there will be more than one target proposition in many situations of science advice. For example, if one of the target propositions is that X is toxic, presumably that X is toxic for adults and that X is toxic for children will also both be target propositions. After all, if it is relevant whether X is toxic, then it is also relevant whether X is toxic for adults and whether X is toxic for children. Suppose that one way of presenting the scientist's findings makes the credences in one target proposition accurate, while another way makes the credence in another target proposition accurate. To give helpful advice in such situations, (Accuracy) must specify how to trade off the accuracy of credences in different target propositions.
The objection contends that proponents of (Accuracy) could only give one plausible answer to the question of how to trade off accuracy in different propositions: scientists should prioritize the accuracy of credences according to how morally bad the consequences of leaving them inaccurate would be. For example, if it is morally more important for a policy-maker to have accurate credences about the effects of X on children than about the effects of X on adults, then scientists should say what makes credences about the effects of X on children more accurate rather than what makes credences about the effects of X on adults more accurate, if they cannot make both credences accurate. But if that is how (Accuracy) should be understood in cases with multiple target propositions, then scientists have to make moral judgments about policy consequences to decide what to say, after all. The norm (Accuracy) would be just as susceptible as (Policy) to the objection we raised.
But it is false that proponents of (Accuracy) must deal with cases of multiple target propositions in this way. In fact, the `morally-weighted' version of (Accuracy) would be somewhat difficult to motivate. To determine which credences an advising scientist should make accurate, it first says that only credences in target propositions matter, that is, in propositions that are relevant in the given conversational context. But it then says that among those propositions, moral considerations about policy consequences determine their relative weight. This mixes two criteria, relevance and the moral value of consequences, to determine which credences should be made more accurate in a given situation. It might be difficult to find a consistent justification for such a hybrid view. In any case, there is a more straightforward version of (Accuracy) that can deal with the problem of multiple target propositions without requiring scientists to resort to moral judgments.
On this view, relevance determines not only which credences should be considered at all, but also what their relative priority should be. If a scientist cannot make the policy-maker's credences in all relevant propositions accurate, they should prioritize propositions according to how relevant they are. How relevant p is depends on what has been said before in the conversation (have I been asked about p?), on mental states of my addressee (what are my addressee's goals for this conversation, and is p important to those goals?), and on further considerations.[20] For example, the accuracy of the proposition that X is toxic for children and the proposition that X is toxic for adults might receive roughly equal weight because they are equally relevant in a conversational context in which the speaker is asked whether X is toxic. In contrast, if regulation of X for adults would be impossible for political reasons, then the proposition that X is toxic for adults would be less relevant in the context, assuming that the policy-maker's goal for the conversation is to acquire action-relevant information. Finally, suppose that the policy-maker communicated that they are interested primarily in the effects of X on children. Then, according to the view we describe, the scientist should prioritize the accuracy of the credence in the proposition that X is toxic for children over that of the proposition that X is toxic for adults, even if they correctly judged that it would be more important, morally speaking, if the policy-maker had accurate credences about the effects of X on adults.
20. See footnote 8 on the notion of relevance.
This way of dealing with multiple target propositions uses relevance throughout rather than mixing different criteria. Moreover, it avoids the objection that (Accuracy) requires moral judgments about policy consequences in cases with multiple target propositions. One can usually figure out how relevant various propositions are in a given conversational context without making moral judgments, as relevance is determined by non-moral matters such as what has been said in the conversation and what the goals of one's addressee are.[21]
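One way to make this relevance-weighted version of (Accuracy) precise, on an illustrative formalization of our own (the discussion above fixes no particular functional form), is to have the scientist choose
\[ S^{*} \;=\; \arg\max_{S \in \mathcal{S}} \; \sum_{j} r_j \,\mathbb{E}_{\mathrm{Cr}}\bigl[ A\bigl(\mathrm{Cr}^{S}_{\mathrm{PM}}, p_j\bigr) \bigr], \]
where \mathcal{S} is the set of candidate statements, the p_j are the target propositions, r_j \geq 0 is the relevance weight of p_j in the conversational context, \mathrm{Cr}^{S}_{\mathrm{PM}} is the policy-maker's credence function after hearing S, and the expectation is taken with respect to the scientist's credences, as before. The moral importance of the p_j appears nowhere in this formula; only their relevance does.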
Instead of trying to identify specific ways in which (Accuracy) might require judgments about policy consequences, a proponent of (Policy) might say that it is simply inevitable that scientists explicitly make or at least implicitly commit to moral judgments when they decide what empirical claims to assert.[22] Scientists must decide how to map their complex empirical judgments to less nuanced statements for policy-makers, and there is no appropriate basis on which to make this decision other than moral judgments about the consequences of saying various things. Maybe this is unfortunate from the point of view of procedural justice, and we should therefore try to reduce the discretion that scientists have when they make these decisions. But as long as we do not want to abandon the practice of science advice altogether, we will just have to accept that scientists' moral judgments have an increased influence on policy decisions.
However, (Accuracy) casts doubt on this argument. There is a different possible basis for deciding what to say, namely, a concern for the accuracy of the policy-maker's credences. To be clear, a proponent of (Accuracy) can affirm that when a scientist decides to say S, they implicitly commit to and may even explicitly make the moral judgment that S is the morally right thing to say. They can affirm that scientists, like all of us, inevitably commit to moral judgments when they make decisions. They disagree with proponents of (Policy) merely in the claim that the moral choiceworthiness of saying something in a situation of science advice is determined by the moral value of its potential policy consequences. Instead, according to (Accuracy), the moral choiceworthiness of saying something in a situation of science advice is determined by the epistemic benefit it bestows upon the policy-maker. Thus, to figure out what they should say, in the moral sense of `should', scientists do not need to confront ethical questions about how bad it would be if this or that policy were implemented. Instead, they need to consider empirical questions about how saying various things will affect the credences of the policy-maker.[23]
21. In particular situations, it might be that judgments about the relevance of propositions should be informed by moral judgments. For example, if the addressee asked one to describe the world's most pressing problems, then the relevance of various propositions depends on the moral question of what the most pressing problems are. But we do not see how the relevance of empirical propositions in cases of science advice could depend on moral matters.
22. See Steele (2012) for an argument of this flavor.
Strictly speaking, (Accuracy) is consistent with scientists' moral judgments influencing policy decisions via science advice. Scientists must make many decisions, not just decisions about what to say in a situation of science advice, but also, for example, about what research methodology to use.[24] The norm (Accuracy) does not say anything about how scientists should make those other decisions. Thus, it is consistent with (Accuracy) that all of these other decisions are made based on the scientist's moral judgments. But these decisions influence what the scientist will tell policy-makers in situations of science advice. For example, if a scientist decides not to conduct a certain experiment, then this will influence what they can and will tell the policy-maker. The norm (Accuracy) allows that the decision whether to conduct the experiment was based on the scientist's moral judgments about policy consequences. Thus, (Accuracy) on its own does not rule out that scientists' moral judgments influence the policy-maker's decisions through science advice.
But even if we concede this, the reason to prefer (Accuracy) over (Policy) still applies. Even if scientists made all other decisions based on their moral values, the scientists' moral judgments would have less influence on policy decisions if (Accuracy) is followed than if (Policy) is followed. After all, (Accuracy) would at least remove the influence of scientists' moral judgments from the decision of how to communicate the results of the research they decided to conduct. If we are right that this influence is problematic from the perspective of procedural justice, then the fact that (Accuracy) diminishes it relative to (Policy) is a reason in its favor.
23. Of course, scientists should be guided by moral judgments about policy consequences in pathological situations, which we explicitly set aside in our discussion, in which moral atrocities would be committed if scientists made a particular statement. Even the staunchest defender of value-free science should accept this.
24. See Douglas (2000, p. 565) and the contributions in Elliott and Richards (2017a, part 3).
5.2.2 Communication Style Should Be Context-Sensitive
A common claim in the literature on science advice is that scientists should make their uncertainty explicit when they communicate their findings. For example, Gregor Betz proposes that "[a]llegedly [...] value-laden decisions can be systematically avoided [...] by making uncertainties explicit and articulating findings carefully".[25] The imperative to make uncertainty explicit would be supported by a norm such as (Honesty).
The view that scientists should always make uncertainty explicit is in tension with research on how people change their credences when confronted with language describing uncertainty. Qualitative specifications of uncertainty, such as `it is unlikely that', are interpreted as corresponding to different probabilities depending on the utility of the event whose likelihood is at issue.[26] This makes it hard to anticipate how the addressee will interpret what one says. Quantitative specifications are less ambiguous but might have other undesirable effects. First, they might simply be ignored by the policy-maker. For example, in a recent study, Eva Vivalt and Aidan Coville find that policy-makers tend to ignore specifications of the variance of estimates.[27] Second, and more problematically, quantitative specifications might cause the policy-maker to misunderstand what the scientist says.[28] In such cases, it seems implausible that a scientist should make their uncertainty explicit, given that it leads to the policy-maker misunderstanding or ignoring what they say.[29]
25. Betz (2013, p. 209). Betz's view builds on Jeffrey's (1956) well-known response to Rudner (1953).
26. See Weber and Hilton (1990).
27. See Vivalt and Coville (ms).
28. For example, Johnson and Slovic (1995) observed that "people seem to see lower risk estimates (10^-6, as opposed to 10^-3) as less credible". Fischhoff (1995) suggests that if experts tried to just give their addressees `the numbers', then "[c]onfused recipients of such raw materials may add some of their own uncertainty to that expressed by the analysts. Suspicious recipients may adjust risk estimates upward or downward to accommodate (what they see as) likely biases."
We contend that such cases favor (Accuracy) over (Honesty). The norm (Honesty) has the implausible implication that scientists should always make uncertainty explicit, independently of its effect on the listener. In contrast, (Accuracy) entails that if descriptions of uncertainty would cause confusion, scientists should not make their uncertainty explicit. Uncertainty should be made explicit only insofar as this is accuracy-conducive. Thus, the degree to which uncertainty ought to be made explicit depends on how receptive the addressee is to language describing uncertainty. This strikes us as an appealing implication of (Accuracy).
In addition to sometimes confusing the addressee, only saying things one has high confidence in, and thus always making uncertainty explicit, might rule out other accuracy-conducive communication strategies. For example, studies suggest that strategically framing the facts one tries to communicate can positively influence one's addressee's ability to recall them later.[30] A policy-maker might not remember much if the scientist carefully lays out the evidence about the effects of climate change on ecosystems. But if the scientist told a story about why the policy-maker's hayfever lasts longer each year, the policy-maker might be able to recall much more of the scientific insights that the scientist intended to communicate. Framing scientific information in compelling ways might in some cases be inconsistent with (Honesty), if a compelling narrative requires making claims one has less confidence in, such as `climate change makes your hayfever last longer', rather than their high-confidence alternatives, such as `there is strong evidence that...'. In such cases, (Honesty) would not recommend employing such framing devices, even if they increased recall. In contrast, (Accuracy) would recommend doing so since they are conducive to making the policy-maker's credences accurate. Thus, for those who think that scientists should strategically use framing to communicate more effectively even if that requires making claims they are somewhat uncertain about, this speaks in favor of (Accuracy) over (Honesty).[31] Analogous points could be made for other ways in which only saying things one has a very high confidence in stands in the way of inducing the most accurate credences.[32]
29. Frank (2017) and John (2018) also point out that making uncertainty explicit can be confusing and attack views such as (Honesty) on these grounds. Relatedly, Elliott (2010) argues that advising scientists have a duty to promote their addressee's ability to make informed decisions, and to fulfill this duty, they must make sure that their addressees understand what they say. A document by the OECD on science advice emphasizes that "[c]ommunicating scientific advice in ways that maximise shared understanding (and minimise misunderstandings) is of key importance for scientific advisory bodies [...] [a] common mistake is for advice to be written in long, very technical reports [...] There is a balance to be achieved between oversimplification and the incomprehensibility of science language" (OECD, 2015, pp. 21–22).
30. See Valkenburg et al. (1999) and Jaspaert et al. (2011). We thank an anonymous referee for prompting us to discuss framing.
While it strikes us as an appealing feature of (Accuracy) that it recommends that scientists tailor their communication style to suit their addressees, we also think that it would be desirable to have an explanation of why, at least in some cases, scientists may permissibly simplify, omit mentioning uncertainties, and thus make assertions they think are not very likely to be true. After all, it generally seems morally dubious to assert things one does not have a high confidence in.
One possible explanation is that there is a mutual understanding between the scientist and the policy-maker. It is understood that the scientist will assert simplified claims in which they do not have a high confidence when that is accuracy-conducive.[33] Hence, a policy-maker cannot reasonably expect scientists to only say things in which they have a high confidence when simplifying or glossing over uncertainties is required for making credences in relevant propositions accurate. Rather, the policy-maker should expect that the scientist will communicate in this way. Thus, the scientist does not deceive the policy-maker. They do not represent themselves as having high confidence in the simplified and unqualified claims they make. This explains why it is morally unproblematic for scientists to say things they are not very confident in, even though in other contexts, such as at academic conferences, it would be deceptive and morally wrong to do so.
31. See Nisbet (2009), Bolsen et al. (2014), and Yang and Hobbs (2020) for further discussion of framing in science communication.
32. For example, Vivalt and Coville (ms) find that policy-makers tend to change their credences more in response to good news, such as that a program they want to implement is more effective than they thought, than in response to bad news. Hence, it might sometimes be more accuracy-conducive for scientists to emphasize uncertainties if the result is good news from the perspective of the policy-maker. After all, this might prevent the policy-maker from changing their credences too much. In contrast, if the result is bad news, it might be accuracy-conducive not to mention uncertainty. In general, for any of the cognitive biases that policy-makers have, there is a communication strategy that aims to mitigate the effect of the bias and thereby promotes accurate credences (see also Akin and Landrum, 2017).
33. This explanation is inspired by Shiffrin (2016, ch. 1).
An argument in favor of this explanation is that the same phenomenon of partially relaxed norms of communication occurs in contexts other than science advice. Suppose you attend a public lecture about black holes on the open day of an astrophysics department. For the sake of conveying a few basic facts about black holes to the audience, the lecturing astrophysicist will simplify claims and gloss over uncertainties. They will assert claims in which they have low confidence and even claims they believe to be flat-out false, as asserted, if saying more complicated, qualified claims would confuse the audience and distract from more basic facts they intend to convey. In this situation, you could not reasonably expect that the scientist will refrain from simplifications and only say things they are very confident in, even if that would hinder the communication of the central facts. You could not reasonably expect this because you know that the point of a public lecture on black holes is to help laypeople understand some basics about black holes, and maybe get some teenagers excited about studying astrophysics, and this requires partially relaxing norms which normally require people to only assert things they are confident in. According to the explanation we suggest, the same happens in situations of science advice. The point of science advice is to give policy-makers more accurate credences. Thus, norms that would otherwise require scientists not to assert claims they have low confidence in are partially relaxed, and scientists are therefore morally permitted to assert such claims.
5.3 The Case against (Accuracy)
In the previous subsection, we claimed that in most cases, it seems right that scientists ought to simplify and gloss over uncertainties if that is accuracy-conducive. But now imagine the following situation. A scientist advises a policy-maker who has much too little confidence in X being toxic and who seems to be resistant to changing their credences. Also, most evidence suggests that X is toxic, although there exists some weak counterevidence. After the scientist has laid out the evidence for the toxicity of X, the policy-maker asks in a skeptical tone of voice whether there is any counterevidence. In that situation, it might be most conducive to the accuracy of the policy-maker's credence in X being toxic to falsely claim that there is no counterevidence. Granted, this would make the policy-maker's credence in the proposition that the scientist has counterevidence inaccurate. But since the target proposition is that X is toxic, (Accuracy) would entail that the scientist ought to deny that they are aware of counterevidence.
Such an utterance would constitute a lie. In particular, by explicitly asking for any coun-
terevidence, the policy-maker has brought about a context in which they have a reasonable
expectation that the scientist will make any uncertainty explicit. The norms are no longer
relaxed in a way that would permit not making uncertainty explicit. Falsely denying that
one has counterevidence, even if this is the only way to get the policy-maker to have accurate
credences, seems morally wrong.
To press this objection further, one might argue that (Accuracy) would end up recommending lying quite often. Policy-makers' credences are exposed to a variety of forces, and some of those forces do not push credences towards higher accuracy. In particular, interest groups might try to make policy-makers adopt certain credences not because those credences are rational given the evidence but rather because they speak in favor of policies that the interest groups want to see enacted. For example, tobacco companies tried to make policy-makers believe that environmental tobacco smoke is harmless, even though a balanced review of the scientific literature would have suggested otherwise.[34] Scientists must counteract such opposing forces when they attempt to make policy-makers' credences accurate.[35] Counteracting opposing forces requires exerting a stronger influence on the policy-makers' credences. One might suspect that, at least in some contexts, exerting such a stronger influence will often require telling lies.[36]
34. Among other things, the tobacco industry set up the Center for Indoor Air Research, which funded not only peer-reviewed research but also research selected by tobacco industry executives. This research was more likely to confirm the tobacco industry's positions than peer-reviewed research projects (see Barnes and Bero, 1996).
There are different ways to derive an objection from the observation that (Accuracy) will sometimes recommend lying. First, one might object that (Accuracy) undermines the very concern that motivated it in the first place: a concern for the accuracy of policy-makers' credences. By telling scientists to greedily optimize for accuracy in each case of science advice, it ends up undermining scientists' ability to influence policy-makers' credences. This is because, as explained above, optimizing accuracy in a particular case sometimes requires telling lies. But some of those lies will be exposed, and policy-makers might then lose trust in scientists.[37] Loss of trust in science makes scientists unable to influence policy-makers' credences, leaving them inaccurate. Hence, (Accuracy) undermines the concern for the accuracy of policy-makers' credences, whether today or in the future.
This is a good objection. In response, one could modify (Accuracy) so that it tells scientists to say what maximizes the expected accuracy of policy-makers' credences in the long run. This modified norm would almost always prescribe not telling lies because, in order to maximize the accuracy of policy-makers' credences in the long run, scientists should avoid undermining trust in science. The recommendations of the modified version of (Accuracy) would therefore more often coincide with the recommendations of (Honesty). That is, it would follow that (Honesty) is extensionally correct in many cases.[38] But the view would still be importantly different from (Honesty). It would entail that (Honesty) fails to latch onto the normative consideration that underlies the imperative not to lie in many circumstances, which is a concern about the long-term accuracy of addressees' credences. Moreover, (Accuracy) would still extensionally differ from (Honesty) in at least two kinds of cases. First, simplification and omission of uncertainty in the cases discussed in the previous section would still be endorsed by this modified version of (Accuracy) but not by (Honesty). Long-term accuracy is less likely to be jeopardized in such cases because, as explained above, there is a mutual understanding that scientists will simplify what they say, so no loss of trust has to be feared from it `coming to light' that a scientist simplified their statements and omitted mentioning uncertainty. Second, even though the modified version of (Accuracy) will prescribe telling lies a lot less often than the original version, there might still be situations in which one can promote long-term accuracy by lying. If a scientist can be fairly sure that the lie will never be exposed, and they think that it will make the policy-maker's credences more accurate, then (Accuracy) entails that they morally ought to tell the lie.
35. Of course, a scientist will only know in some cases that the net effect of other people's influence on the addressee's credences is such that it would pull them, say, below the optimally accurate credence. In many other cases, as far as the scientist knows, a policy-maker might be subjected both to epistemic forces that lead them to increase their credence above what the scientist takes to be the most accurate credence and to forces that lead them to decrease their credence below what the scientist takes to be the most accurate credence. If these two possibilities are about equally likely, then the influence is zero in expectation. In such cases, the scientist can maximize expected accuracy as if there were no other forces applied to the addressee's credences.
36. In fact, one might worry that if scientists followed (Accuracy), then an unfortunate dynamic would unfold between scientists and their epistemic antagonists. Once scientists start lying, epistemic antagonists might respond by lying even more. In response, (Accuracy) requires scientists to engage in more lying to compensate for the increased misleading influence. It seems that if scientists followed (Accuracy), then the only equilibrium would be one in which both scientists and their epistemic antagonists vastly overstate their case in an attempt to compensate for unwanted influence from the other side. If scientists followed (Honesty), this problem would not occur. Thus, in the long run, (Accuracy) might end up recommending lying quite often.
37. See Nisbet (2009, p. 59).
As a second objection, one might say that lying is always at least somewhat morally bad, and that this moral badness can surely sometimes outweigh the moral goodness of making the policy-maker's credences in target propositions more accurate, in particular if the accuracy gains obtained by lying are small.[39] In those cases, the scientist should not lie, but (Accuracy), even in its modified form, would entail that they should. Thus, (Accuracy) is false.
One response to this objection is to reject its premise: that lying is morally wrong in these cases. This would require claiming that the imperative to make policy-makers' credences accurate always outweighs the moral wrong of lying. If falsely denying that one has any counterevidence makes the policy-maker's credences more accurate and will not undermine trust in science in the long run (e.g., because it is certain never to be revealed that one had counterevidence), one should go ahead and tell a lie. Science advice, on this picture, is not for wimps. If you accept the invitation to advise policy-makers, you might be brought into situations in which you morally ought to lie for the sake of making the policy-maker's credences about the relevant questions accurate.
38. As an aside, note that such an `extensional collapse' of (Accuracy) into (Honesty) might also happen in a society with extremely scientifically literate policy-makers. In such a society, scientists might communicate in the most accuracy-conducive manner if they carefully describe the scientific evidence, specify uncertainty, and so on.
39. See Kupfer (1982) for some discussion of the inherent wrongness of lying.
To make the implication that scientists should sometimes lie more palatable, note that the lies that the norm would require scientists to tell seem importantly different from paradigmatic cases of lying. While the lies that the norm prescribes are intended to give the hearer inaccurate credences in some propositions, these inaccurate credences are caused only in order to give the listener accurate credences in the target propositions. Put differently, the inaccurate credences are not the point of the lie; they are mere epistemic collateral damage.[40]
This response tries to soften the blow of the objection without modifying (Accuracy). Alternatively, one could concede that telling a lie is not what one should morally do in those cases and modify (Accuracy) to accommodate this verdict. For instance, one could add a hard side-constraint to (Accuracy) which excludes questionable means of maximizing accuracy, such as lying. For example, Heather Douglas imposes such a restriction on her norm to avoid the conclusion that scientists ought to deceive their addressees.[41] The same restriction could be added to (Accuracy).
As another alternative, one could endorse a balancing view. When we presented the three norms, we noted that all of them have counterexamples: when following them would foreseeably lead to the scientist being killed for no good reason, the scientist obviously should not follow them. We could take this thought a step further and say that even in standard cases, there are multiple relevant considerations that bear on what the scientist ought to say. Three potentially relevant considerations are that one should say what one is certain of, bring about good policy consequences, and make the policy-makers' credences accurate.[42] What the scientist should say is determined by a balance of these considerations. On this `balancing view', our discussion argues that the first consideration often does not apply (making statements that the scientists are not confident in is often unproblematic) and that the second consideration is usually suspended for the sake of procedural justice. This is compatible with saying that in cases in which those considerations would be severely set back for small gains in accuracy, they outweigh the accuracy-related consideration, and the scientist should not maximize accuracy.
In sum, while the implication that scientists sometimes ought to lie is a reasonable objection against (Accuracy), there are ways either to soften the blow of this objection or to accommodate it while retaining the core idea underlying the accuracy-focused view.
40. See also John (2018, p. 83).
41. See Douglas (2009, p. 81). Relatedly, see Havstad and Brown (2017, footnote 14). As Douglas notes, imposing a hard side-constraint that rules out deception still leaves plenty of options for scientists to choose from. For example, scientists can "choose whether or not to emphasize the importance of [...] uncertainties".
5.4 Extensions
So far, we have only considered cases in which the statements of a scientist are received by a single person, and that person is a policy-maker. After illustrating (Accuracy) and various lines of argument in this comparatively simple case, we now discuss how one could generalize (Accuracy) to cover a broader range of cases.
5.4.1 Communicating to Multiple Policy-Makers
Most importantly, one might wonder how to extend the norm so that it applies to cases in which more than one policy-maker receives the statements of the advising scientist. If we want to stay true to the spirit of (Accuracy), then we should extend it as follows: scientists should say what maximizes the expected accuracy of the group's credences in the target propositions. This norm is strictly more general than (Accuracy). It collapses into (Accuracy) if there is only a single addressee but also covers cases with two or more addressees.
42. Other considerations might include, for instance, whether the scientist has promised not to deceive the policy-maker. For example, in most U.S. Congressional committees, witnesses can be sworn in.
For cases involving a single addressee, we said that we will largely stay agnostic about how
exactly accuracy is measured. There is a sophisticated literature on the relative advantages
and disadvantages of different accuracy measures, and while picking one such measure is
required to fully flesh out (Accuracy), this is unnecessary for the purposes of our discussion.
Similarly, a comprehensive discussion of the various ways of measuring the accuracy of a
group's credences would be out of place here. In principle, one can insert one's favorite
notion of the accuracy of the group's credences into the extended version of (Accuracy),
whatever that notion might be.
That said, to make more concrete how (Accuracy) could be extended to cases with
multiple policy-makers, let us present a natural suggestion: the accuracy of the group's
credences is obtained by computing the accuracy of each individual's credences and then
averaging those accuracy scores. The extended version of (Accuracy) would then say that
scientists ought to say what maximizes the expectation of the average accuracy of their
addressees' credences.
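To make this averaging proposal more concrete, here is one way it could be written down. This formalization is only illustrative: it uses the Brier score as a placeholder accuracy measure, whereas our discussion stays neutral on the choice of measure. Let $c_i$ be addressee $i$'s credence function over the target propositions $P$, and let $w(p)$ be 1 if $p$ is true and 0 otherwise. Then one could set
\[
A(c_i, w) \;=\; -\sum_{p \in P} \bigl(c_i(p) - w(p)\bigr)^2, \qquad A_{\mathrm{group}} \;=\; \frac{1}{n} \sum_{i=1}^{n} A(c_i, w),
\]
and the extended norm would direct the scientist to make the statement whose expected value of $A_{\mathrm{group}}$, computed over the credences the addressees would have after hearing it, is highest.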
The discussion of how to aggregate individual accuracy into collective accuracy will struc-
turally mirror the well-known discussion of how to aggregate individual welfare into collective
welfare.[43] For example, one can search for general desiderata and then assess concrete rules
by which desiderata they satisfy. The averaging view satisfies two such desiderata: If there
exists a statement which is expected to be accuracy-maximizing for all addressees indi-
vidually, then the averaging view entails that the scientist ought to make that statement.
Moreover, if the scientist has to decide between saying something that makes the credences
of n of their addressees accurate and saying something that makes the credences of strictly
less than n potentially different addressees accurate, the averaging view entails that the
[43] See Sen (1970) for a comprehensive treatment of welfare aggregation and social choice more generally.
scientist should say what benefits the greater number.[44]
The averaging view is therefore a
promising candidate for how to extend (Accuracy). That said, there might well be other
tenable notions of the accuracy of the group's credences.
One particular class of cases with multiple addressees is worth discussing in more depth,
as it gives rise to an objection against (Accuracy). Suppose that a scientist gives advice that
will be received by a senator and a town council member. Then, it might seem that the
scientist should give more weight to making the credences of the senator accurate because
it is more important, morally speaking, that a senator has accurate credences than that
a town council member has accurate credences. Far fewer people are affected if a town
council member makes decisions based on inaccurate credences than if a senator does. But
if (Accuracy) required using moral judgments in this way, then this would cast doubt on its
purported advantage over (Policy). It would seem to give scientists' moral judgments
influence on democratic decisions just as much as (Policy) does.[45]
There are two responses to this worry that a proponent of (Accuracy) could give. First,
they could push back against the claim that scientists should prioritize the credences of
more important policy-makers. If a scientist is asked to advise a committee on which there
happens to be both a senator and a town council member, is it so implausible to say that
effects on the town council member's credences should influence what the scientist says just
as much as effects on the senator's credences?
Second, one could concede that the senator's credences should get more weight based
on their moral importance. This might sound as though it is undermining the purported
advantage of (Accuracy) over (Policy), namely, that (Accuracy) but not (Policy) is consistent
with democratic procedural justice. But, in fact, it undermines this advantage only to a small
extent, if at all. The problematic implication of (Policy) is that it permits the scientist's moral
judgments about the desirability of policy A and policy B to influence what the scientist says
about empirical questions and thus what policies are chosen by policy-makers. In contrast,
[44] Both of those would be captured by an analogue of Suppes' Grading Principle (Suppes, 1966).
[45] We thank an anonymous referee for bringing this worry to our attention.
(Accuracy) would give a different kind of moral judgment such influence: judgments about
the moral stakes involved in the decisions that a policy-maker at a particular level of authority
makes. Empirical statements would not be modified to bring about specific policy outcomes
that align with the scientist's moral judgments, but only to ensure that those policy-makers
who make decisions with high moral stakes end up with accurate credences. The latter
influence of moral judgments might seem less problematic from the perspective of democratic
procedural justice. Even if one thought that these two ways in which moral judgments could
influence what empirical claims the scientist makes are equally in tension with procedural
justice, it is unclear how commonly science advice is not only received by policy-makers at
different levels of authority, but the advising scientist also has good reasons to believe that
saying one thing would be more accuracy-conducive for those making more consequential
decisions and saying another thing would be more accuracy-conducive for those making less
consequential decisions. Hence, the advantage of (Accuracy) over (Policy) would likely be
retained in many cases of science advice even if one agreed with all the assumptions that the
objection makes.
5.4.2 Communication to the Public
In addition to broadening the scope of (Accuracy) to cover cases with more than one policy-
maker, one might wonder whether it is plausible to further broaden it to cover cases in which
the scientist's statements are received by ordinary citizens, either exclusively or in addition
to being received by policy-makers. Such an extended version of (Accuracy) would say, for
example, that when a scientist appears in the evening news to report on the dangers of
nuclear energy, they should decide what to say by asking themselves what would make the
credences of the viewers most accurate.
We are reluctant to broaden the scope of (Accuracy) to include any communication
between scientists and the general public. The reason for this is that the role that scientists
play when they communicate to the public seems to be more multi-faceted and less clearly
delineated than the role that they play when they are invited to advise policy-makers on
specific questions.[46] For example, scientists might sometimes justifiably appear in public
as advocates for their own agenda and not primarily as informants. Then, an extension of
(Accuracy) might not be the appropriate norm to follow.
That said, (Accuracy) might be a promising candidate as one component of a more
complex theory of how science communication ought to be conducted. It is worth noting,
for instance, that several of the arguments from the discussion of science advice might have
analogues in the context of science communication to the public. Consider the arguments
against scientists using moral judgments to decide what empirical claims to make. Just as in
the case of science advice, one might say that ordinary citizens should not be forced to choose
between not listening to scientists at all and subjecting their preferences for policies to
the influence of the moral judgments of scientists, whether they agree with those moral
judgments or not. If we can give citizens the opportunity to let their preferences be shaped
by scientists' empirical judgments but not by their moral judgments, it seems desirable to
do so. Moreover, the argument against only saying things that one has high confidence in
(requiring one to articulate uncertainties carefully) applies to cases of science communication
to the public, too. After all, it seems implausible to tell scientists to say something that will
foreseeably confuse their addressees, or to tell scientists that they should abstain from using
framing and other effective communicative strategies if that would necessitate simplifying
and glossing over uncertainties. The idea that scientists should tailor their communication
style to their addressees might be even more compelling when they communicate to members
of the public than when they communicate to policy-makers.[47]
These are reasons to favor an
accuracy-focused norm of communication to the public over norms that permit the scientist's
[46] See the contributions in Bucchi and Trench (2008) for an overview.
[47] One potential difference between science communication to the public and science advice is that the
former often involves a larger number of addressees of more diverse backgrounds, who might react quite
differently to the same statements (on different uptake of the same information among recipients of science
communication, see Hart and Nisbet (2012)). This might mean that following (Accuracy) requires considering
a larger number of distinct groups of addressees when scientists communicate to the public than when they
communicate to policy-makers.
moral judgments to influence what they say and over norms insisting that scientists should
always make uncertainty explicit. Therefore, (Accuracy) might be a plausible norm in at
least some contexts of science communication to the public.
5.5 Ideals and Reality
We now assess the gap between actual and ideal science advice according to (Accuracy).
Whether reality conforms to a suggested norm is, on its own, irrelevant for the plausibility
of the norm. Nevertheless, it is interesting to ask whether advising scientists actually say
what could plausibly be thought to maximize expected accuracy and, if not, what concrete
suggestions for improvement (Accuracy) makes. This should also make a bit more concrete
what (Accuracy) entails in practice.
Let us first look at reports for policy-makers written by scientists. When one surveys
such reports, one will quickly find that most of them carefully indicate uncertainty. The
norm (Accuracy) is more likely to recommend careful descriptions of uncertainty in contexts
of written rather than verbal science advice, assuming that descriptions of uncertainty are
more likely to be understood when presented in written form. That said, lengthy reports
in which many claims are specified as `low confidence' or `moderate confidence' might not
make policy-makers' credences accurate. One might suspect that most policy-makers who
are interested in the topic will not take the time to work through such reports, let alone pay
close attention to specifications of uncertainty. They might simply ignore claims that are
tagged with `low confidence' or `moderate confidence'. If this empirical assumption is correct,
(Accuracy) would recommend making reports more accuracy-conducive by not mentioning
low-confidence claims and flat-out asserting high-confidence claims.
While it seems that (Accuracy) is critical of current practices of carefully reporting un-
certainty, two qualifications must be made. First, reports which carefully make uncertainty
explicit often begin with a one-page summary of the key findings. In those summaries, one
encounters language that sacrifices careful descriptions of uncertainty for greater effect on the
reader's beliefs. For example, a summary in one report for policy-makers includes the claim
that `[r]educing CO2 emissions is the only way to minimise long-term, large-scale risks'.[48]
After the summary, the full report follows, in which uncertainty is made explicit. There,
one finds sentences such as `[i]f CO2 emissions continue on the current trajectory, coral reef
erosion is likely to outpace reef building sometime this century [High Confidence]'.[49] This
communicative strategy resolves the accuracy-based reservations against making uncertainty
explicit. If the report makes uncertainty explicit only in the full report, but provides legisla-
tors with more digestible, less perspicuously qualified claims that are likely to change their
credences, then the report is perfectly consistent with (Accuracy).
Second, in cases in which the stakes are very high, policy-makers are more likely to
spend the time and energy on understanding the nuanced epistemic situation of current
science. One example of such a case is climate change. Consider the Summary for policy-
makers of the Fifth Assessment Report of the Intergovernmental Panel on Climate Change
(hereafter referred to as the IPCC Report). The report presents uncertainty by giving explicit
definitions for terms such as `very likely' in terms of numerical probabilities and then using
these terms in its claims:
Anthropogenic influences have very likely contributed to Arctic sea ice loss since
1979.[50]
The norm (Accuracy) vindicates that uncertainty is carefully made explicit in the particular
case of the IPCC. Given how much is at stake in climate policy, and given how much
importance was placed on the IPCC report, it was reasonable for the scientists to assume
that most policy-makers reading the report would pay attention to qualifiers such as `very
likely'. Thus, it would have been reasonable to think that the best way to make the policy-
makers' credences accurate was to make the scientists' uncertainty explicit.
[48] IGBP et al. (2013, p. 1).
[49] IGBP et al. (2013, p. 16).
[50] IPCC (2014, p. 19, their emphasis).
In sum, (Accuracy), together with the empirical claim that explicit statements of uncer-
tainty can lead to science advice being misunderstood or ignored, cautions against carefully
stating uncertainty in all contexts. However, reports which include simplied summaries
or are expected to receive a lot of attention by policy-makers can do so without violating
(Accuracy).
It is worth mentioning that (Accuracy) can not only guide the decision about whether to
indicate uncertainty or not. It can also guide detailed decisions about exactly how to commu-
nicate uncertainty. For example, a natural worry about the IPCC's current communication
strategy is that some readers may, possibly unintentionally, revert to using their natural
language understanding of terms such as `very likely'. Ordinary meanings of these terms
might come apart from the IPCC's explicit definition of them. As a result, policy-makers
might have inaccurate credences after reading the report.[51] Thus, a more accuracy-conducive
strategy might have been to supplement words with numbers: `anthropogenic influences have
contributed to Arctic sea ice loss since 1979. [very likely, 90%-100%]'.[52]
If this is so, and
(Accuracy) is correct, then the IPCC ought to present their uncertainty in this way. This
illustrates that (Accuracy) provides a clear, applicable criterion that can guide even fine-
grained decisions about how to present scientific findings.
Scientists also advise policy-makers in conversations. In those contexts, (Accuracy) is less
likely to recommend making uncertainty explicit. After all, it seems likely that descriptions
of uncertainty lead to confusion more often in spoken than in written language.
It will not surprise the reader that there are many examples of spoken science advice
which indeed seem less careful than written science advice. For example, in congressional
hearings, one can nd statements such as:
Many strands of mutually supporting evidence are woven into the confident
knowledge that loss of land ice and warming of the ocean are driving sea-level
[51] Patt and Schrag (2003) note this problem.
[52] The findings by Budescu et al. (2009) suggest that this indeed makes readers more likely to correctly
understand the authors' intended probabilities.
rise.[53]
While the speaker was involved in the compilation of the IPCC report, and thus certainly
knew how to articulate uncertainty more precisely, it is consistent with (Accuracy) to use here
the less careful phrase `confident knowledge that p'. Such a phrase is arguably more likely
to make the policy-maker adopt a high credence in p than careful, probabilistic statements,
such as `we have 95% confidence that p'. Hence, if the speaker also believed that the policy-
maker's credence in p should be high in order to be accurate, (Accuracy) would endorse the
choice of a less careful phrase.[54]
5.6 Conclusions
We explored the view that advising scientists should maximize the expected accuracy of
policy-makers' credences. This view is appealing because it does not undermine the value of
procedural justice in scientically informed policy-making. Moreover, it yields the plausible
verdict that in many cases, there is nothing wrong with simplifying and glossing over uncer-
tainties if that is required to let policy-makers benefit epistemically from the conversation.
While it overcomes some crucial issues of other proposed norms, it faces the objection that
it recommends lying in situations in which lying seems hard to justify. We offered some
responses to this objection and possible modifications of the norm which avoid it. In our
discussion of how the norm would apply to real-world cases of science advice, we emphasized
that it is able to guide fine-grained decisions about how to communicate uncertainty.
As an important clarification, the norm (Accuracy) and the other norms we discuss spec-
ify what the right thing to say is, not how to figure out what the right thing to say is. On
its own, (Accuracy) does not entail anything about how scientists should attempt to figure
[53] This is taken from Richard Alley's statement in the congressional hearing titled Earth's Thermometers:
Glacial and Ice Sheet Melt in a Changing Climate on July 11, 2019.
[54] To be clear, we do not claim that the reason why real scientists follow the norm is that scientists are
concerned with the accuracy of policy-makers' credences. It might simply be more difficult to carefully
communicate uncertainty when one speaks than when one writes. All we claim is that simplification is, for
whatever reason, more common in verbal science advice, as required by (Accuracy).
out which statements would be accuracy-conducive in a given situation.[55]
In particular,
(Accuracy) does not entail that scientists should make explicit predictions about how accu-
rate various credences of their addressees would be conditional on scientists making various
statements. Just as trying to do explicit welfare calculations might be a bad way to maxi-
mize welfare, so trying to do explicit accuracy calculations might be a bad way to maximize
accuracy. It is an interesting question what good strategies would be for figuring out which
statements are accuracy-conducive in a given situation. Maybe gauging the scientific exper-
tise of one's addressees would be advantageous because it allows one to better anticipate how
they might understand various claims. Also, getting some acquaintance with common ways
of misunderstanding quantitative information might be helpful for communicating in a more
accuracy-conducive manner. But we do not feel sufficiently confident about these questions to
say much more about how to best figure out what would be accuracy-conducive in a given
situation.
The point of this chapter was to put an alternative view on the table and assess its
plausibility. The general idea we explored was that scientists should communicate in a
way that maximizes the epistemic benet of the addressee. In order to keep our discussion
focused, we used a very specific notion of epistemic benefit: making credences accurate.
While some have argued for accuracy as the sole fundamental epistemic value by which to
evaluate how good or bad a given set of credences is, others might find the focus on accuracy
or on credences overly narrow.[56]
Instead of requiring scientists to say what maximizes
expected accuracy, one might prefer a norm which requires scientists to say what maximizes
the policy-maker's expected ability to reason well about the subject matter. Somewhat
accurate credences might be just one of several ingredients that constitute such an epistemic
state. We expect that much of our discussion would carry over to such variants of (Accuracy).
In that way, we hope that even those who reject the focus on accurate credences will find
[55] In other words, (Accuracy) is a criterion of rightness, not a decision procedure (see Railton, 1984).
[56] See Pettigrew (2016, e.g. pp. 5-7) for the view that all epistemic virtues that credences have reduce to
accuracy.
our arguments worth considering when developing their own views.
Chapter 6
Between Democracy and Epistocracy
Abstract
Arguments for democracy aim to show that democracy is preferable to any non-democratic
system. In this chapter, I suggest that many recent arguments for democracy fail to attain
this aim. I describe a non-democratic system, which combines a democracy for values with
an epistocracy for facts. I then argue that recent arguments for democracy struggle to show
that democracy is preferable to it.
6.1 Introduction
Alyssa opposes universal health care. She thinks that it would be very expensive to imple-
ment. She also thinks that there is nothing problematic about the status quo, in which access
to health services depends on one's employment and wealth. Brandon favors universal health
care. He thinks that it would not be as expensive as its critics claim. He considers it unjust
that access to health services depends on one's employment and wealth. Alyssa's and Bran-
don's disagreement is grounded in different non-normative judgments (how costly universal
health care would be) as well as in different normative judgments (whether equal access
to health services is an important part of a just society). Many disagreements about which
policies should be adopted seem to be the product of underlying disagreements about both
normative and non-normative questions. This intuition is supported by empirical evidence.[1]
Conventional democracies have two important features with respect to how they resolve
such disagreements. First, they are unfactored decision procedures: when it comes to making
a decision, they ask individuals for their policy preferences, not for their normative and non-
normative judgments which underlie their preferences. In a referendum on universal health
care, for example, you would be asked to vote either for or against universal health care.
You would not be asked about the normative and non-normative judgments on the basis
of which you arrived at your preference.[2]
Second, they are democratic decision procedures:
they give each individual's input equal weight.
Epistocracies, as commonly understood, are also unfactored. When policy decisions are
made, the relevant inputs are preferences over policies, not normative or non-normative judg-
ments. However, epistocracies are undemocratic. They give the inputs of non-democratically
selected `experts' more weight.[3]
For example, in an epistocracy, people with an advanced
degree might get twice as many votes as people without an advanced degree. In general, an
expert is non-democratically selected if she is not selected by popular vote or appointed by
elected representatives.
Given that disagreements about policies are the product of disagreements about norma-
tive and non-normative matters, the following hybrid of democracy and epistocracy comes
to mind: separate normative from non-normative inputs, give everyone an equal say over the
normative inputs, but let non-democratically selected experts provide binding non-normative
[1] A recent large-scale study by Alesina et al. (2018) finds that preferences concerning redistributive poli-
cies correlate strongly with non-normative judgments about the economic contribution of immigrants, for
instance. Thus, even in what might seem to be foundational questions of justice (the extent to which we
redistribute wealth), individuals' preferences depend on non-normative judgments. The study also provides
some evidence that changing people's non-normative judgments changes their policy preferences, suggesting
that there is, as one might hope, a causal link from non-normative judgments to policy preferences.
[2] For most policy questions, only elected representatives are asked to provide their policy preferences.
But the same general point applies: when it comes to making a policy decision, the input consists in policy
preferences, not in normative or non-normative judgments underlying policy preferences.
As a second clarification, I am not denying that democracies allow participants to discuss both normative
and non-normative questions relevant to the policy decision. But once it comes to making a decision, each
participant only reveals a preference between policies. That is all that is meant by saying that democracies
are `unfactored'.
[3] Mill (1977 [1861]) famously advocated for such a system.
inputs. In a slogan: `democracy for values, epistocracy for facts'. This system is factored
because it processes normative and non-normative inputs separately.[4] It is undemocratic
because non-democratically selected experts provide inputs which affect which policy is cho-
sen. These inputs cannot be overridden by the people or their elected representatives. Even
if everyone disagreed with the experts' non-normative input and the policy decision resulting
from it, there is no legal way to stop the policy from going into eect. The experts have
the last word about decision-relevant non-normative input. As a hybrid of democracy and
epistocracy, call this system a depistocracy.
A system is only a depistocracy if decision-relevant expert input cannot be overridden.
This is what distinguishes depistocracy from existing institutions in democratic societies
which might, at rst glance, appear depistocratic. For example, when the U.S. Environmen-
tal Protection Agency makes decisions based on risk-cost-benet analysis, the non-normative
input is often provided by experts. But these experts are democratically selected, in the sense
dened above. They can be red by a head of an agency who was appointed by represen-
tatives that were elected by the people. Their non-normative input or the alternative that
would be chosen based on it can be overridden if suciently many people oppose it.
This chapter sketches a particular institutional implementation of depistocracy and ar-
gues that, perhaps surprisingly, prominent arguments for democracy do not give us a reason
to prefer representative democracy to that system. The motivation behind this chapter is
not to argue for depistocracy. Rather, I intend to show, first, that prominent arguments for
democracy are deficient. These arguments fail to give us a reason to prefer representative
democracy to any non-democratic system, including depistocracy. Second, I aim to draw
attention to depistocracy as a system which deserves to be carefully reflected on. Depistoc-
racy poses a more challenging test case for arguments for democracy than epistocracy and is
therefore useful for evaluating such arguments. It is also interesting in its own right. Its cru-
[4] Factored decision procedures are already in use in politics. For example, in the U.S. executive branch,
some decisions are made based on risk-cost-benefit analysis. This is a factored decision procedure: it explicitly
keeps normative judgments about the value of outcomes apart from non-normative judgments about the
probabilities of those outcomes.
cial feature is that it resolves normative and non-normative disagreements in different ways.
Thinking about whether this is defensible sheds light on how the content of a disagreement
affects what procedures are morally desirable for resolving it.
The plan for the rest of this chapter is as follows. In section 6.2, I describe one possible
way of institutionalizing the depistocratic idea of giving power over non-normative questions
to non-democratically selected experts. I do not intend to argue for that way of institu-
tionalizing depistocracy. But it will be helpful to have a concrete system in mind when we
investigate what arguments for democracy say about depistocracy. In section 6.3, I argue
that instrumental arguments for democracy fail to establish a preference for democracy over
depistocracy. In section 6.4, I argue that four recent non-instrumental arguments for democ-
racy fail to do so, too. In section 6.5, I note that there might be reasons to favor democracy
over depistocracy that are not grounded in the larger share of power that depistocracies give
experts. These reasons might fly under the radar of most arguments for democracy. I end
with some general reflections in section 6.6.
6.2 Indicator Depistocracy
There are many ways in which one might try to institutionalize a transfer of power over non-
normative questions to experts. To make things concrete, I will now spell out one particular
institutional setup, which I call indicator depistocracy.
In an indicator depistocracy, democratically elected representatives define a Normative
Index (NI). The NI represents the normative input to policy decisions. It is intended to
measure the extent to which the desiderata that citizens regard as normatively important in
policy-making are in fact achieved by the current set of policies. The NI would be a function
of a range of social indicators, such as crime rates, indicators of gender and race discrimina-
tion, income inequality, GDP per capita, unemployment rate, self-reported life satisfaction,
average life expectancy, and so on. It might also include indicators that directly refer to the
existence of a particular policy, such as universal health care. After all, some citizens might
consider it intrinsically valuable to have universal health care, independently of whether it
improves health outcomes.
Note that in specifying how various indicators are combined to compute the NI, repre-
sentatives need to specify not only which indicators are relevant but also what their relative
importance is. Put differently, they need to specify which trade-offs between various indica-
tors are desirable. This is certainly no easy task. A starting point for a definition of the NI
might be existing welfare indices, such as Bhutan's Gross National Happiness Index, which
incorporates 33 indicators concerning health, education, and other domains.[5]
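For illustration only (nothing here commits the proposal to a particular functional form), the NI could be a weighted sum of normalized indicators:
\[
\mathrm{NI}(x_1, \dots, x_m) \;=\; \sum_{k=1}^{m} w_k \, f_k(x_k),
\]
where $x_k$ is the measured value of indicator $k$, $f_k$ rescales it so that higher values are better, and the non-negative weights $w_k$ encode the relative importance, and hence the acceptable trade-offs, that representatives settle by vote.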
The value of the NI is calculated and published at regular intervals by an official
agency which gathers the statistics necessary to compute it. It is important that this agency
faithfully measures the NI and is not corrupted by actors who would benefit from fiddling
with the numbers. This should not be too hard to achieve, given that there are already
institutions, such as the Bureau of Economic Analysis in the U.S., which seem to faithfully
measure the kind of indicators that would be part of the NI.
It is to be expected that the definition of the NI would be regularly amended by the
representatives. New indicators might be introduced as new social problems arise. The
values of the citizenry will change over time, and this needs to be captured by adjusting the
weights of the different indicators that comprise the NI. For example, if many citizens feel
that the preservation of the environment is a pressing issue that should carry more weight in
policy decisions, representatives have an incentive to increase the relative weight of ecological
indicators in order to be re-elected. The procedural rules for changing the definition of the
NI might be very similar to the procedural rules that govern legislative assemblies such as the
U.S. Congress. For the sake of concreteness, suppose that all representatives can propose
amended definitions of the NI, and a new definition goes into effect if a majority of the
representatives vote in favor of it.
[5] See Ura et al. (2012).
Crucially, elected representatives only have power over the definition of the NI. They
cannot directly enact policies. Instead, policies are chosen as follows. Citizens, either through
petitions or through their representatives, can put issues on the agenda. Within a certain
time period, anyone can submit policy proposals that address the issue to a group of experts.
When the time to decide has come, each expert issues a prediction about the value of the
NI at various points in the future conditional on the various alternatives being enacted.[6]
They also predict the NI for the alternative of not enacting any of the proposed policies.
Then, the alternative that maximizes the average of the experts' predictions becomes law.
There is no way for representatives or the public to override the experts' predictions and the
resulting policy decision. The experts' input is binding. This legislative process is limited
by a constitution guaranteeing basic liberal rights.
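To fix ideas, the core of this decision rule can be sketched in a few lines of Python. All names and numbers below are hypothetical, and the predictions of the NI at multiple future points are collapsed into a single number per policy for simplicity:

    # Hypothetical sketch: each expert team predicts the NI conditional on
    # each proposal (including the status quo); the proposal with the
    # highest average prediction becomes law.
    def choose_policy(proposals, expert_predictions):
        """proposals: list of policy labels, including the status quo.
        expert_predictions: dict mapping each expert team to a dict from
        policy label to that team's predicted future NI."""
        def average_predicted_ni(policy):
            values = [preds[policy] for preds in expert_predictions.values()]
            return sum(values) / len(values)
        return max(proposals, key=average_predicted_ni)

    predictions = {
        "team_a": {"reform_1": 71.2, "reform_2": 69.8, "status_quo": 68.0},
        "team_b": {"reform_1": 70.1, "reform_2": 70.5, "status_quo": 67.4},
    }
    print(choose_policy(["reform_1", "reform_2", "status_quo"], predictions))
    # prints "reform_1", the proposal with the highest average predicted NI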
The experts are not elected by the public or appointed by elected representatives. Instead,
they are selected based on the accuracy of their past predictions of the NI. Each prediction
is evaluated against the actual development of the NI, as officially measured. Everyone can
make predictions on a public website, and if a current expert becomes less accurate than the
best candidate on the list of contenders, the candidate replaces the current expert. Since
successful prediction of the NI will require expertise in different areas, it is most likely that the
best performance will be achieved if the selection scheme is structured around organizations
of experts rather than individual experts.[7]
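The selection mechanism can be sketched in the same hypothetical style: score every team's past forecasts against the officially measured NI values, and replace an incumbent team whenever a contender has been more accurate. Mean squared error is used here only as one natural scoring rule:

    # Hypothetical sketch: rank forecasters by mean squared error against
    # the officially measured NI; swap the worst incumbent for the best
    # contender if the contender has been more accurate.
    def mse(forecasts, realized):
        """forecasts, realized: equal-length lists of NI values by date."""
        return sum((f - r) ** 2 for f, r in zip(forecasts, realized)) / len(realized)

    def update_experts(incumbents, contenders, realized):
        """incumbents, contenders: dicts mapping team names to their past
        NI forecasts; returns the updated set of official expert teams."""
        inc = {name: mse(f, realized) for name, f in incumbents.items()}
        con = {name: mse(f, realized) for name, f in contenders.items()}
        worst_incumbent = max(inc, key=inc.get)  # highest error
        best_contender = min(con, key=con.get)   # lowest error
        if con[best_contender] < inc[worst_incumbent]:
            return (set(incumbents) - {worst_incumbent}) | {best_contender}
        return set(incumbents)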
For the sake of illustration, consider the following toy example. Suppose that in some
country, people think that all that matters in policy-making is keeping government expen-
diture low and increasing life expectancy. Accordingly, elected representatives settle on a
definition of the NI that is a function of government expenditure and life expectancy only.
The NI maps each combination of these two indicators to a number representing the desir-
[6] To clarify, experts do not predict the causal contribution of a particular policy on the NI. Rather, they
predict the total value of the NI in the future, which will be influenced by various other future events,
including future policies that go beyond the proposal in question.
[7] To avoid perverse incentives, one should presumably prohibit elected representatives from also being
experts. If someone is both an elected representative and an expert, then she would have an incentive to
push for definitions of the NI that she can predict particularly well.
ability of that combination, thereby specifying what trade-offs between the two desiderata
are deemed acceptable. To take a crude example, if the NI were life expectancy minus govern-
ment expenditure in USD divided by 100 billion, then this would mean that a decision that
increased life expectancy by at least one year for at most $100 billion would be considered
worth undertaking.
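Restating the arithmetic of this toy example, with LE for life expectancy in years and E for government expenditure in dollars: the index is $\mathrm{NI} = \mathrm{LE} - E/10^{11}$, so a reform with $\Delta \mathrm{LE} \geq 1$ and $\Delta E \leq 10^{11}$ (that is, \$100 billion) yields
\[
\Delta \mathrm{NI} \;=\; \Delta \mathrm{LE} - \frac{\Delta E}{10^{11}} \;\geq\; 1 - 1 \;=\; 0,
\]
and hence never lowers the index.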
Now suppose that representatives put the issue of health care reform on the agenda. A
three-month period to submit policy proposals begins. Think tanks, interest groups, and
other organizations submit proposals for health care reform. After the three months, the
proposals are presented to twenty different teams of experts. Each team predicts the value of
the NI at multiple points in the future for each policy. To obtain a prediction of how the NI
would develop if a particular policy were chosen, they predict how government expenditure
and life expectancy would develop and then plug those estimates into the formula of the NI.
After all groups have made their predictions, the proposed health care reform with the highest
average predicted NI gets enacted. The proposals are also made public so that any group
aspiring to become official experts can make predictions, hoping to outperform some of the
groups currently in charge.
The experts might predict that policy A will do best even though elected representatives
and the general public are convinced that policy A will do much worse than an alternative
policy B. Moreover, the experts might indeed be wrong in this case. But there is no (legal)
way for the elected representatives or the public to prevent policy A from being chosen,
no matter how vigorously they disapprove of it. They cannot fire the experts or veto the
experts' decision.
Indicator Depistocracy is inspired by a system proposed by Robin Hanson.[8] The main
difference is that in Hanson's system, prediction markets play the role of experts. Indica-
tor Depistocracy is also related to Thomas Christiano's model of the division of labor in
democracies, in which citizens choose ends and policy-makers and experts together choose
[8] See Hanson (2013).
the policies that achieve those ends.[9] The main difference is that in Christiano's model,
policy-makers have the ultimate say over which policies are chosen. The experts' input is
merely advisory. In Christiano's model, policy B rather than policy A would be chosen in
the example above. I am not aware of any in-depth discussion of depistocratic distributions
of power, although the general idea has been mentioned in print before.[10]
Indicator depistocracy might not be the most attractive version of depistocracy. Also,
many details about the system are left unspecified. These would be problems if the purpose
of this chapter was to argue for an alternative to democracy. But it is not. Rather, the
purpose of this chapter is to show that many recent arguments for democracy do not es-
tablish a preference for democracy over a depistocratic system. For that, a suboptimal and
incompletely specified system will do.
Moreover, indicator depistocracy only partially realizes the depistocratic idea of a transfer
of power over non-normative inputs to experts. For example, to decide whether the unem-
ployment rate should be part of the NI, elected representatives should consult their non-
normative beliefs about the causal relationship between unemployment and mental health,
say. Thus, some non-normative judgments that influence policies are still made democrati-
cally. Moreover, under realistic conditions, representatives would probably try to undermine
some of the transfer of power to experts. For example, they might pretend that they con-
sider a certain policy to be of large intrinsic normative value and make the existence of this
policy part of the NI, even if they actually think that it should be chosen for instrumental
reasons. By doing so, representatives can be sure that experts will predict that that policy
will strongly boost the NI. But this, too, is not an issue for our discussion. The crucial
feature of the system is that it removes the power over some non-normative inputs from
democratic control and transfers it to experts.
While it is unproblematic for the rest of this chapter that less power is transferred to
[9] See Christiano (2012).
[10] Winner (1977, p. 146) mentions the possibility that `the populace could voice its desire for the goals
and kind of distribution that a system run by experts would obtain' without discussing it any further.
experts than in an ideal depistocracy, it would be problematic for some of my arguments
if more power was transferred: I must assume that the experts' normative judgments do
not influence their predictions and thus policy decisions. This assumption follows from
the stronger assumption that experts make sincere predictions. A sincere prediction about
the value of some indicator, as measured by some agency, will not be influenced by one's
normative judgments.
Lest one thinks that this is a very unrealistic assumption, note that experts who are
repeatedly insincere lose their job. This follows from two plausible assumptions. First,
assume that individual experts do not always know which policy will end up being chosen
by the aggregate prediction of all the experts. This is crucial because predictions for policies
that do not end up being chosen will never be compared to reality. We cannot know how
the NI would have developed had a dierent policy been chosen. Thus, if an expert knew
that policy A would be chosen if she made a sincere prediction for policy A but an insincere,
very low prediction for all other policies, then she could be insincere without risking her
job because her insincere predictions would never be compared to the actual values of the
NI. But if she does not know which policy will be chosen, her insincere predictions will be
compared to reality from time to time. Second, assume that there is enough competition
for being an expert so that, for any expert, there is an only marginally less accurate non-
expert who tries to get a job as an expert and makes sincere predictions. Then, if an expert
repeatedly makes insincere predictions, her predictions will be revealed over the long run
as less accurate than those of the marginally less accurate non-expert who makes sincere
predictions. The expert would then lose her job. Thus, given the competitive nature of
indicator depistocracy, it is plausible to assume that experts will sincerely predict the NI
rather than skew their predictions to bring about policies that their own normative views
favor.
Before I move on, let me make a terminological clarification. Some people might say that if
experts obey the rules of depistocracy and only control non-normative inputs, depistocracies
are perfectly democratic. In the case of indicator depistocracy, roughly speaking, if the
people say `reduce income inequality!', then experts sincerely pick policies that they think
reduce income inequality. Hence, what experts do is under the people's control. They are
mere executors of the will of the citizens and do not have independent decision-making power.
But other people might say that while an expert is not free to make decisions in whatever
way she wants (she must be guided by her own sincere judgment), it is still the experts'
sincere non-normative judgments that determine what becomes law, and not the judgments
of elected representatives, as in actual democracies. Even if everyone thinks the experts are
wrong in a particular case, there is no way for the people to bring it about that a different
policy is chosen. This feature (that the citizenry does not have the power to stop a decision
made by a small group of experts) makes it less attractive to characterize indicator depistocracies
as a form of democracy.
There is no point in quibbling over terminology. I will continue to call a system such as
the one described above `depistocracy' to distinguish it from a system in which binding non-
normative input to policy decisions is provided by individuals who are elected or appointed by
elected representatives. Nothing substantial rests on this terminological choice. The question
of interest is whether the arguments of proponents of democracy generate a reason to prefer
democracy to the system I call `depistocracy', which is in important respects different from
the democracies that we usually think about.
6.3 Instrumental Arguments
A recent wave of work in democratic theory defends the view that one of the key advantages
of democracy is its tendency to choose good policies.[11]
Proponents of such instrumental
arguments might reject depistocracy based on the claim that it would perform worse than
democracy. Furthermore, even proponents of non-instrumental arguments for democracy
could rely on an instrumental argument to reject depistocracy, even if their main, non-
[11] See Schwartzberg (2015) for an overview of this literature.
instrumental arguments fail to rule it out.[12]
But pending further empirical evidence, the assumption that depistocracies would per-
form worse than democracies is questionable. First, while it is easy to come up with a
myriad of ways in which depistocracies could lead to bad policy-making, it is also easy to
come up with a myriad of ways in which democracies could lead, and have led, to bad
policy-making. Second, a transition to a depistocracy would constitute a radical change
to political institutions with many unforeseen effects on politics. The aggregate effect of
these consequences on the quality of policy decisions is hard to predict. Even if some of
the impediments to good policy-making in depistocracies that we identify in advance would
materialize, their impact on the performance might well be swamped by all the unforeseen
effects such a radical change would bring about.
that depistocracy would perform worse than democracy rests on brittle foundations.
It is important to stress that neither democrats nor depistocrats should rely on the claim
that their system will perform better. At no point in this chapter will the following argument
be used: Ordinary people are uninformed about non-normative questions. Therefore, systems
in which policy decisions depend on the non-normative judgments of experts rather than
ordinary people must improve the quality of policy-making.[13] Such reasoning is tempting
but flawed. It ignores that profound institutional changes have all sorts of other consequences
on policy-making whose aggregate effects are hard to predict.
I conclude that while instrumental arguments might be used to argue that democracy is
preferable to, say, choosing policies randomly, they cannot be used to argue that democracy is
preferable to depistocracy. It is an open question which system would perform better.
[12] Proponents of democratic arguments already use instrumental arguments to rule out other non-democratic
systems. See Kolodny (2014, pp. 313-314), for example.
[13] As an aside, if one thinks that many citizens are also incompetent in moral matters, then one would be
tempted to make an analogous argument for full epistocracy (see Brennan, 2016, p. 210).
6.4 Non-Instrumental Arguments
I will now argue that four non-instrumental arguments for democracy fail to establish a pref-
erence for representative democracy over depistocracy. The arguments I discuss include some
of the most prominent contributions to democratic theory of the last decade, covering both
public reason and relational egalitarian approaches.[14] Some of the problems that surface
when those arguments are applied to depistocracy are likely to also affect other arguments
for democracy. In the interest of space, I will only sketch the parts of the arguments relevant
to the discussion at hand.
6.4.1 Equal Respect and Mutual Justifiability
Laura Valentini argues that democracy settles disagreements about what policies justice
requires `in a way that best captures the ideal of equal respect for persons as rational and
autonomous agents'.[15]
She gives two reasons why democracy best captures this ideal. I will
argue that neither of the reasons establishes that democracy captures the ideal better than
depistocracy.
First, she argues that in democracies, we deliberate with each other and thereby express
equal respect for each other as rational persons. This would establish a preference for democ-
racy over depistocracy if there was less deliberation in depistocracies than in democracies.
But this is far from clear. On the one hand, there is less incentive to exchange arguments
about non-normative questions in depistocracies because most people's non-normative judg-
ments have no impact on which policies are chosen. On the other hand, maybe people would
discuss non-normative questions more often due to the `gamified', competitive character of
providing non-normative input to policy-making. Moreover, since policy decisions are fac-
[14] One prominent argument for democracy which I will not discuss is that provided by Estlund (2009).
The explanation for why Estlund's argument seems to fail to generate a reason to prefer democracy to
depistocracy is the same as Quong's (2010) explanation for why it fails to generate a reason to prefer
democracy to epistocracy. I have little new to add to Quong's criticism. Thus, in the interest of space, I will
refrain from discussing Estlund's argument at length.
[15] See Valentini (2013, p. 192).
tored into normative and non-normative questions, public discourse would probably also
consider those questions separately. In particular, there would be a separate debate on how
to define the NI. Here, deep expertise in non-normative questions is not required to make
important contributions. This might engage many more people in public deliberation than
in democracies, in which people might feel insufficiently informed about non-normative ques-
tions to contribute to the often highly technical policy debates. Hence, there is no reason to
suspect that depistocracy would lead to less expression of equal respect for people as rational
agents through deliberation.
However, one might say that independently of how much deliberation there is, the fact
that my non-normative judgments have no effect on policy decisions in a depistocracy ex-
presses disrespect for me as a rational agent. It is disrespectful if an expert's judgment about
the effects of policies on the NI counts in policy-making, while my judgment does not.
But whether the lack of influence of my non-normative judgments on policy decisions
expresses disrespect for me as a rational agent depends on the reasons why they are
ignored. If my non-normative judgments do not count because I have not demonstrated
my knowledge in the relevant field, then I am not disrespected as a rational agent. For
example, it does not express a lack of respect for me as a rational agent if it is an ecologist's
rather than my judgment that determines whether a construction project must be stopped
because it threatens to destroy the habitat of an endangered species of frogs. The ecologist
has demonstrated her ability to make good judgments in this area whereas I have not.[16]
Similarly, it does not express disrespect for me as a rational agent if someone with a better
track record in predicting the NI gets to provide binding non-normative judgments about
the impact of policies on the NI.[17]
[16] In fact, Valentini (2013, pp. 182, 187) agrees that in disagreements over non-normative questions, such
as which policy will most boost average happiness, conferring decision-making authority on someone based
on their established expertise is not in tension with the ideal of equal respect for persons as rational and
autonomous agents.
[17] Other authors have observed that the normative status of discounting someone's judgment depends on
the reasons for which someone's judgment is discounted. For example, Fricker (2007) defines testimonial
injustice to occur when prejudice causes a hearer to give deflated credibility to a speaker. Using markers
such as past performance to decide how much credibility to give to a speaker need not constitute a testimonial
injustice.
Second, Valentini argues that equal respect requires mutual justifiability of the rules that
govern society, in the sense that the rules should be acceptable to all. If we cannot find an
alternative that everyone finds acceptable, democracies best approximate the ideal of mutual
justifiability. This is because deliberation before the vote makes it more likely that we will
at least partially converge on a preferred policy, and majority voting selects the policy which
is `accepted by as large a number of the populace as possible'.[18] The idea seems to be that
if 10% of the population are in favor of policy A and find policy B unacceptable, and 90%
are in favor of policy B and find policy A unacceptable, then a majoritarian vote would
choose policy B. This procedure minimizes the number of people who find the chosen policy
unacceptable.
The corresponding concern about depistocracy would be that among a set of options, it
is less likely than democracy to choose the option that maximizes the number of people who
find the option acceptable, based on their normative and non-normative judgments. In the
case above, the depistocratic procedure might choose policy A even if 90% find that option
unacceptable. For example, this might happen because the experts might believe that policy
A will boost the NI, whereas the public might disagree with this prediction.
This concern for maximizing the number of people to whom the chosen policy is justifiable
plausibly favors a majoritarian direct democracy over depistocracy. But it is unclear whether
it favors representative democracy over depistocracy.[19] After all, representative democracies
also often choose policies different from those that most people endorse.[20] Whether they
do so more or less often than depistocracies is unclear. Hence, this line of argument does
not generate a reason to prefer democracy, at least in its common and most practicable
form, to depistocracy.
[18] Valentini (2013, p. 195).
[19] As an aside, there is also plenty of room to question the assumption that maximizing the number of
people who find a policy acceptable, based on their non-normative and normative beliefs, is required by a
concern for equal respect. More generally, there is plenty of room to question that this is the right goal when
setting up a political decision procedure, whether or not one is concerned with equal respect.
[20] See Canes-Wrone (2015) for a review of the empirical literature on the relationship between mass pref-
erence and policy in the U.S.
In sum, a concern for equal respect for persons as rational and autonomous agents, as
understood by Valentini, fails to establish a preference for representative democracy over
depistocracy. Depistocracies do not show less equal respect for persons as rational and
autonomous agents.
6.4.2 Equal Advancement of Interests
Thomas Christiano proposes a complex argument for democracy.[21] For the purpose of this
chapter, it suffices to provide a rough sketch of his argument. The core idea is that systems
which exclude some people from decision-making and deliberation will foreseeably advance
the included people's interests more than the excluded people's interests. In particular, such
systems will fail to equally advance four specific interests (discussed below) whose satisfaction
is a prerequisite to advancing many other interests. Given that the system is structured to
not aim to advance everyone's interests equally, someone who is excluded may `legitimately
infer that his or her moral standing is being treated as less than that of others', in the sense
that their interests are treated as though they had less moral weight.[22] But social justice
requires that no one is publicly treated as having less moral standing than others. So, social
justice requires democracy: a system in which no one is excluded from decision-making and
deliberation.
I will now argue that this argument has no force against depistocracies because depistoc-
racies do not fail to aim to equally advance those four `fundamental interests that ground
democracy'.[23] The first fundamental interest is the interest in being able to correct for
others' cognitive biases. For example, if poor people do not get a say in politics, they cannot
correct for rich people's cognitive biases. Consequently, policy decisions in a system that
disenfranchises poor people are prone not to advance their interests as much as the interests
of rich people. Even if rich people in power try to advance poor people's interests as much
[21] See Christiano (2008).
[22] Christiano (2008, p. 88).
[23] Christiano (2008, p. 88).
as the interests of everyone else, cognitive biases will make rich people fail to understand
what the poor people's interests are.
But this is not an issue in depistocracies. Others' cognitive biases might affect normative
or non-normative judgments. In depistocracies, one can correct for others' cognitive biases
in their normative judgments just as much as in democracies since everyone gets an equal
say on normative questions. On the other hand, one cannot directly correct for biases of
experts in their non-normative judgments. For instance, one cannot directly correct for
biases of experts in their predictions about which policies will promote your mental health,
which might be relevant to policy decisions if mental health indicators are part of the NI.
Unless one is an expert, one does not get a direct input on those questions. However, biased
judgments that other people can correct are weeded out in depistocracies. After all, experts
who are systematically biased in how policies will affect those parts of the NI that have to
do with the interests of you and other people in similar situations will be outcompeted by
teams that have eliminated this bias. Depistocracies therefore contain a mechanism that
corrects other people's biases in their non-normative judgments about your interests. Thus,
there is no reason to think that in depistocracies, someone's interests will be advanced less
because of biased judgments about their interests.
Of course, in realistic circumstances, depistocracies will feature biased non-normative judgments that one cannot easily correct. Expert organizations might not hire members of minorities even if that would allow them to make more accurate predictions. They might nevertheless not get outcompeted because, for instance, it might be difficult to start a new expert organization.[24] That said, even if the correction of cognitive biases is imperfect in realistic depistocracies, so is the correction of cognitive biases in realistic democracies. This is enough to show that this consideration cannot ground an argument for why democracy is preferable to depistocracy.

[24] We know that if competition is imperfect, firms with discriminatory hiring practices are by no means guaranteed to be outcompeted. After all, plenty of firms that discriminate in hiring prosper in our economy (see Bertrand and Mullainathan, 2004).
The second interest is the interest of having one's equal moral standing publicly recognized. According to Christiano, to not give someone an equal say is to "fail to acknowledge that person's capacity for moral judgment and to treat her like a child or an animal".[25] In depistocracies, everyone's capacity for moral judgment is acknowledged, since everyone gets an equal say on normative questions.

[25] Christiano (2008, p. 93).
The third interest is the interest of learning about matters of social importance. If some are excluded from decision-making, then others will not have an incentive to debate their views. But, as mentioned in the discussion of Valentini's argument, it is difficult to judge whether there would be more or less deliberation in depistocracies on normative and non-normative questions. In addition, note that depistocracies might make the public more informed even if there were less deliberation. In a depistocracy, there is a public repository of expert judgments about the effects of policies on things we care about, and experts providing those judgments have a strong incentive to be accurate. The availability of a canonical source of aggregated expert judgments might well make the public more informed.
The fourth interest is the interest in being at home in the world. This is understood as the interest of a person in having "a sense of fit with the world around him, a sense of connection with that world, a sense that this world makes some sense to him".[26] If some people have more power than others, then

[t]he ruling class will be able to form a world in which they experience a sense of fit, connection and meaningfulness. They deprive those who have no say of such a sense of being at home in the world.[27]

But, in contrast to experts in epistocracies, experts in depistocracies do not constitute a ruling class. They cannot choose policies that make sense to them and build themselves a world in which they feel at home. All they can do is make sincere non-normative judgments about the effects of policies on various indicators. It is far from clear why providing this non-normative input makes it more likely that experts feel at home in the world than non-experts. Of course, experts might be more likely to experience a sense of fit, connection, and meaning with respect to the world around them because they are more intimately involved in the policy-making process than non-experts. But the same can be said of representative democracy, which also lets a small group of people be much more intimately involved with the policy-making process.

[26] Christiano (2008, p. 90).
[27] Christiano (2008, p. 92).
In sum, Christiano's argument fails to give us a reason to prefer representative democracy
to depistocracy. From the perspective of the equal advancement of Christiano's four interests,
depistocracies are on a par with representative democracies.
6.4.3 Equal Status
Another class of arguments for democracy holds that non-democratic systems undermine the ideal of relational equality. Daniel Viehoff provides a helpful reconstruction of the two strands of relational egalitarian arguments for democracy, on which I will rely in the following two sections.[28] I will argue that neither kind of relational egalitarian argument manages to establish a preference for representative democracy over depistocracy.

[28] See Viehoff (2019).
The first type of relational egalitarian argument for democracy holds that non-democratic systems involve objectionable status inequality.[29] Status inequalities exist in a society if there are norms that put people into groups and then order these groups in a hierarchy by conferring advantages on some of the groups. The society thereby treats some people as 'social superiors' and others as 'social inferiors'. These advantages can take a variety of forms. For example, there might be norms that require social inferiors to obey the commands of social superiors in a variety of situations. Or there might be norms that require social inferiors to bow before social superiors when they meet them. Caste societies are a paradigm example of status inequality.

[29] See Kolodny (2014), for example.

One might suspect that depistocracies involve morally problematic status inequality, with the experts being social superiors and the non-experts being social inferiors. To flesh out this objection, one must provide an account of status inequality on which depistocracies involve status inequality and status inequality is plausibly morally problematic. To structure the discussion, note that depistocracies can involve status inequality in two ways. First, the depistocratic regime (the way decision-making power is distributed) might constitute status inequality. Experts might be social superiors simply in virtue of having the power to provide non-normative input to policy decisions. Second, the depistocratic regime might cause further norms to arise in social life that constitute status inequality. Let us consider each possibility in turn.
To determine whether the depistocratic distribution of power itself constitutes morally problematic status inequality, we must understand better what objectionable status inequality consists in. On Viehoff's account, a central feature of objectionable status inequality is that society justifies giving some people special advantages by the assumption that those people's interests are morally more important.[30] There are at least two reasons why this is plausibly a central feature of objectionable status inequality. First, it provides the beginning of an explanation of what makes status inequality intrinsically objectionable: our interests are in fact equally important, but status inequality treats us as if they were not. Second, it explains why some norms that confer advantages on some people are unproblematic. To take an example from Viehoff, there are norms in our society that permit doctors on duty to park their cars where they want. This looks suspiciously similar to a society in which a certain group of people, the Lords, are permitted to park their carriages where they want. But surely, only the Lords' and not the doctors' privilege is objectionable. Viehoff's account generates this verdict. Our society justifies the doctors' advantage by reference to the benefits this rule brings to other people, not by assuming that the doctors' interests are more important than the interests of others. But the latter society, at least on the most plausible way of imagining it, justifies the more extensive rights of the Lords by holding that the interests of Lords are more important than the interests of commoners.

[30] See Viehoff (2019, p. 19).
Since norms that confer advantages on groups of people constitute status inequality in a society only if the society justifies them in a certain way, whether a suggested legal norm constitutes status inequality depends on the particular society in which it is instituted. Thus, it is difficult to make general claims about which legal systems constitute status inequality. But, in any case, the obvious justification for why experts have more power over non-normative inputs in a depistocracy is that they have given more accurate non-normative inputs than others in the past, and, other things equal, we would rather have our policy decisions rest on accurate non-normative judgments. This justification does not rely on the claim that the experts' interests are more important. The experts are not treated like Lords whose interests have special priority. Rather, the justification is perfectly consistent with considering everyone's interests equally important. Thus, on this account of status inequality, depistocratic power differentials need not constitute objectionable status inequality.
But maybe one would favor a different account of problematic status inequality, which does not require that society justifies advantages for some group of people by the greater importance of their interests. In particular, one might propose an account on which paternalistic arrangements constitute status inequality: paternalizers have higher social status than paternalizees. This would apply to depistocracies because depistocracies institute a paternalistic arrangement. They give one group, the experts, authority to decide what policies are most likely to satisfy democratically determined desiderata. This power asymmetry is justified by the claim that competitively chosen experts know better than the people which policies will satisfy those desiderata. According to the proposed account of status inequality, the experts have higher status in virtue of occupying the position of the paternalizer in a paternalistic scheme.[31]

[31] To be clear, depistocracies are not necessarily paternalistic in the sense of passing laws whose content is paternalistic, such as bans on certain drugs or subsidies for opera houses. The point is, rather, that the procedure by which laws are chosen is paternalistic because of the way in which non-experts are excluded from decision-making.

The challenge for this objection to depistocracy is to explain why status inequality, defined such that paternalistic arrangements count as status inequality, is morally problematic.
Maybe the most promising route to argue for the moral wrongness of status inequality, thus defined, would be to directly appeal to standard accounts of what is wrong with paternalism.[32] For example, some say that paternalism is bad because people know best what is good for them.[33] But in the present context, this boils down to the claim that democracies make better decisions than depistocracies, which I set aside for reasons described in section 6.3. Others object to paternalism because it seems to treat the paternalizees as though they did not have an equal capacity to plan, revise, and rationally pursue a conception of their own good.[34] But depistocracies only treat the paternalizees as though they did not have equal capacity to make accurate non-normative judgments about which policies will promote a given set of desiderata.[35] There does not seem to be anything problematic about being treated in this way, since having the specialized knowledge required to make such judgments is not plausibly part of what it is to be a full agent or citizen. This is related to a more general point: depistocracy is an instance of a means-paternalistic rather than an ends-paternalistic scheme. The experts do not impose their judgment of what constitutes a good society on non-experts. Rather, they only choose means for ends which the non-experts have chosen themselves. Thus, any argument for why paternalism is wrong that hinges on paternalism overriding an agent's non-instrumental normative judgments cannot show that there is something wrong with depistocracy.

[32] In that case, the notion of status inequality does not seem to do much work in one's objection against depistocracy; one might as well directly object to depistocracy for being paternalistic. Nevertheless, it is natural to discuss this objection at this point.
[33] See Mill (2003 [1859]).
[34] See Quong (2011, pp. 100–107).
[35] See also the related remarks in section 6.4.1.

Finally, possibly the most compelling argument for the wrongness of status inequality is unavailable to accounts of status inequality that apply to depistocracy, whether they rest on notions of paternalism or not. The argument is simply that paradigm cases of status inequality, such as societies in medieval Europe, in which feudal lords get to arbitrarily boss around peasants and are treated as though they were something better, are clearly wrong. But such paradigm cases are far away from a transfer of power over non-normative inputs to experts who lose their job when they exercise their power in accordance with their own policy preferences. Thus, judgments about paradigm cases do little to motivate why status inequality, defined sufficiently broadly to be instantiated in depistocracy, is morally worrisome.
Consider second the objection that the depistocratic arrangement would cause rather than constitute status inequality, by triggering the emergence of further norms and attitudes which make experts social superiors. For example, one might think that non-experts would start acting subserviently towards experts. They might start letting them jump the queue at restaurants. Conversely, experts might get into the habit of requesting special treatment and expecting to be shown signs of respect in various social situations. The experts' power might seem far more likely than the doctors' parking privilege to give rise to such norms and attitudes.

In response, it is worth remembering the restricted nature of the experts' authority: an expert cannot just pick whatever policy she wants. Therefore, people have no incentive to be especially nice to experts. If you flatter an expert, she cannot return favors by picking policies that benefit you. If the expert does not act on her sincere judgment about which policy will boost the NI most, and instead chooses policies that benefit you, she will be outcompeted and lose her job. Thus, the experts' power does not seem prone to cause further norms that constitute status inequality.
To sum up, depistocracies seem to neither constitute nor cause status inequality. Thus,
democratic arguments based on status equality, at least as they are currently presented, do
not provide a convincing case against depistocracy.
6.4.4 Friendship-Like Equality
Another kind of relational egalitarian argument starts from the observation that friendships are governed by certain egalitarian norms. Among those norms is the requirement that friends must have equal power over what is done in the domain of the relationship.[36] Importantly, the norm of friendship requires equal power even in cases in which power inequality is justified by other, weightier reasons. For example, suppose you are stranded on an island with a friend, and you exercise more power than they do because that is the only way for the two of you to survive: you decide which berries to eat from the bushes, and you will not allow them a say in this, even if they think they know better. The power inequality is justified, given that your survival depends on it, but nevertheless there is an outweighed moral reason against it: the relation between you and your friend now violates one of the norms of friendship.

[36] See Viehoff (2014).
Proponents of this argument then claim that political relations between citizens are relevantly like relations between friends, so that the same egalitarian norms apply to them. In particular, fellow citizens must have the same power over the norms that govern their political relations. Thus, non-democratic systems are problematic because they undermine the fulfillment of egalitarian norms that apply to relations between fellow citizens.
This relational egalitarian argument generates an objection against depistocracy. Viehoff defines having equal power as having an equal opportunity to influence decisions. Clearly, people in a depistocracy do not have equal power according to this definition. Experts have more opportunity to influence political decisions. Thus, depistocracies prevent the political relation between an ordinary citizen and an expert from satisfying the egalitarian norm which requires equal power.[37] This is a reason against depistocracy, independently of whether we can justify the power inequality on other grounds or not.

[37] To clarify, the point is not that depistocracies do not allow friendships to fully realize the egalitarian ideal of friendship. I can have an egalitarian friendship with someone because we have equal power over the norms governing, for instance, where we go out for dinner, even if our political relationship is not one between equals because my friend has more power over who pays how much in taxes.

But the argument generates the same objection against representative democracies. They too give some people greater power, in the sense of more opportunity to influence decisions. Representative democracies prevent the political relation between an ordinary citizen and a representative from satisfying the egalitarian norm which requires equal power. Thus, from the perspective of the norm of equal power, depistocracies and representative democracies are on a par.
One might object that in representative democracies, there is more equal opportunity for influence on political decisions than in depistocracies, and that therefore the argument generates a preference for representative democracy over depistocracy. But this is far from clear.
First, Viehoff emphasizes that what matters is having equal opportunity to influence political decisions, not having had equal opportunity to acquire influence in the past.[38] Thus, when representatives decide whether to institute universal health care, you do not have equal opportunity to influence that political decision in the relevant sense. That you had an equal opportunity to acquire this influence at some point in the past by running for office is irrelevant.

[38] Viehoff (2019, fn. 3).
Second, suppose that one instead held that the equal power norm is satisfied if people have an equal opportunity to acquire the opportunity to influence political decisions.[39] Again, depistocracies fulfill that norm to the same extent as representative democracies. Depistocracies guarantee equal formal opportunity to acquire influence on political decisions. Since each person has the right to be a candidate in elections of representatives, each person has an equal formal opportunity to influence the normative input to political decisions. Since each person has the right to compete to become an expert, each person has an equal formal opportunity to influence the non-normative input to political decisions. Also, the informal opportunity to influence political decisions in depistocracies does not seem any more unequal than in democracies. Resources and talents give one better chances of becoming an expert or representative in a depistocracy. But in democracies, too, resources and talents give one better chances of becoming a representative. Thus, depistocracies feature no less equality of formal and informal opportunity to influence political decisions than democracies.

[39] As an aside, I think that it is implausible that friendships are regulated by this weaker norm. When two friends flip a coin to decide who gets to decide over the course of a year where to go for dinner, their friendship plausibly deviates from the egalitarian ideal. But it does not violate the suggested norm.
Thus, arguments for democracy based on the desideratum that relationships between citizens satisfy norms of equal power fail to generate a reason to prefer representative democracy to depistocracy. Those egalitarian norms are violated no more in depistocracies than in representative democracies.
6.5 Arguments against Factoring
Before I conclude, an important clarification must be made. Suppose that the reasons for giving people an equal say are compatible with depistocracy just as much as with representative democracy. From this, it does not follow that we have no reason to prefer democracy to depistocracy. After all, depistocracy differs from democracy not only in being undemocratic (in the sense that some people's inputs are given a larger weight than other people's inputs) but also in being factored. Maybe the reasons against depistocracy are not reasons against being undemocratic but rather against factoring.

In particular, factored decision procedures have the peculiar feature that, while everyone can suggest policies, no one is ever asked which policy out of a set of alternatives they prefer. Rather, people are asked general questions about what makes a policy choiceworthy, and some of them are then asked further questions about how suggested policies might impact various indicators. But no one is asked, for instance, which of various proposals to reform the health care system appeals to them. One might feel uneasy about this gap between the input that one provides and the laws one is subjected to. The depistocratic procedure that bridges this gap is transparent and reasonable, so the worry cannot be that citizens do not understand why a particular policy was chosen. But there are other ways in which the gap might give rise to an objection to depistocracy.
For example, one might worry that the gap causes citizens to have attitudes of estrangement from the rules they live under. They might feel that the rules are not their own, even if they understand the relationship between their inputs and the decisions. Just as division of labor in manufacturing (and the resulting gap between the worker's input and the final product) might alienate workers from the product, so division of labor in policy-making (and the resulting gap between one's inputs and the chosen policies) might alienate citizens from the policies they live under.[40] It is unclear to me how plausible this empirical conjecture about the inner life of citizens under a depistocratic system is.[41]

[40] See Marx (1975 [1844]).
[41] Such feelings of estrangement would mean that the citizens' interest in 'being at home in the world' would be set back. But that does not mean that Christiano's argument can, after all, establish a preference for democracy over depistocracy. His argument is focused on how non-democratic systems allow some people to feel at home in the world at the expense of others. It is about publicly treating some people's interests as more important than those of others. But in depistocracies, the problem runs deeper: experts and non-experts alike might feel estranged from the rules they live under.
This worry about depistocracy has nothing to do with the fact that experts have more power than non-experts. It also applies to a factored democracy in which half of the population is randomly chosen to provide normative inputs and the other half provides non-normative inputs. Here, too, there would be a potentially worrisome gap between the input that one provides and the rules one lives under. Thus, the worry targets the fact that depistocracies are factored, not that they are undemocratic.

While the above worry might not strike the reader as particularly compelling, it should suffice to illustrate that the factored nature of depistocracies represents a second source of potential arguments for why conventional, unfactored democracies are preferable to depistocracies.
6.6 Conclusions
Depistocracy lies between democracy and epistocracy. It combines a democracy for normative questions with an epistocracy for non-normative questions. By deviating less from democracy than a full-blown epistocracy, it provides a more challenging test case for arguments for democracy. An argument for democracy might successfully draw the line between the good and the bad systems somewhere between democracy and epistocracy yet fail to draw it exactly between democracy and all other non-democratic systems, including depistocracy. Indeed, several recent arguments for democracy fail the depistocratic challenge. The concerns they appeal to (whether it be equal respect, equal advancement of interests, equal status, or friendship-like equality) do not generate a reason to prefer democracy, at least in its most common, representative form, to depistocracy.
Some might take this result to suggest that those arguments have not gotten to the
heart of the special value of democracy. Others might take it to be further evidence that
there is no special value of democracy. On this view, if there is anything objectionable about
depistocracy, it has nothing to do with the undemocratic aspects of the system. As discussed
above, among the other potentially objectionable aspects of depistocracy is the separation
of normative and non-normative inputs to policy decisions.
Two limitations of this chapter should be stressed. First, it only discussed whether arguments for democracy can establish a reason to bring about and sustain democratic rather than depistocratic institutions. Some proponents of arguments for democracy provide additional arguments for why the democratic pedigree of a decision contributes to citizens having a duty to obey the decision and to the government being permitted to enforce the decision. Maybe some proponents of democracy could argue that while their arguments justify depistocratic institutions just as much as democratic institutions, they establish authority and legitimacy for democratic decisions but not for depistocratic decisions. Thus, the tie between the two systems would be broken in favor of democracy. This is a possibility yet to be explored.

Second, while this chapter attempted to cover several of the most prominent types of arguments for democracy, maybe other arguments for democracy succeed in establishing a preference for representative democracy over depistocracy.[42] Furthermore, this chapter only briefly considered objections against depistocracy that target its separation of normative and non-normative questions in policy-making. A more thorough examination of the idea of division of labor in policy-making is left for future work.

[42] For example, some arguments for democracy emphasize the character-shaping effect of democratic decision-making. Maybe depistocracies do not "foster in people the ability intelligently and creatively to control their lives" to the same extent as democracies (see Bowles and Gintis, 1986, ch. 5). This is a natural worry, given the tendency of 'high-modernist' schemes to deskill people by transferring power to experts. See Scott (1998) for a study of this and other failure modes of high-modernist schemes.
I hope that readers will find depistocracy useful as an alternative system to keep in mind when pondering the normative status of democracy. I also hope that the insights that can be obtained from thinking about depistocracy will serve as a motivation to more carefully study other parts of the space between democracy and epistocracy. Future work could consider systems in which normative inputs are delegated to experts (according to some standard of normative expertise) and non-normative inputs are determined democratically. This could yield further insight into the extent to which arguments for democracy treat normative and non-normative disagreements asymmetrically. Moreover, as machine learning systems begin to outperform humans in predicting various social indicators, one could explore the normative status of outsourcing non-normative inputs to algorithms rather than human experts.[43]

[43] In particular, one might examine whether any of the worries raised by Danaher (2016) about outsourcing entire policy decisions to algorithms also apply to only outsourcing non-normative inputs.
Bibliography
Mohammed Abdellaoui. Parameter-free elicitation of utility and probability weighting functions. Management Science, 46(11):1497–1512, 2000.
Heather Akin and Asheley Landrum. A recap: Heuristics, biases, values, and other challenges to communicating science. In Kathleen Hall Jamieson, Dan Kahan, and Dietram Scheufele, editors, The Oxford handbook of the science of science communication. Oxford University Press, 2017.
Omar Al-Ubaydli, John List, and Dana Suskind. The science of using science: Towards an understanding of the threats to scaling experiments. Technical report, National Bureau of Economic Research, 2019.
Alberto Alesina, Armando Miano, and Stefanie Stantcheva. Immigration and redistribution. Technical report, National Bureau of Economic Research, 2018.
Elizabeth Anderson. Values, risks, and market norms. Philosophy & Public Affairs, 17(1):54–65, 1988.
Elizabeth Anderson. What is the point of equality? Ethics, 109(2):287–337, 1999.
Richard Arneson. Equality and equal opportunity for welfare. Philosophical Studies, 56(1):77–93, 1989.
Richard Arneson. Equality of opportunity for welfare defended and recanted. Journal of Political Philosophy, 7(4):488–497, 1999.
Kenneth Arrow. Social choice and individual values. Wiley, 1951.
Kenneth Arrow and Robert Lind. Uncertainty and the evaluation of public investment decisions. The American Economic Review, 60(3):364–378, 1970.
Deborah Barnes and Lisa Bero. Industry-funded research and conflict of interest: An analysis of research sponsored by the tobacco industry through the center for indoor air research. Journal of Health Politics, Policy and Law, 21(3):515–542, 1996.
Brian Barry. Theories of justice. University of California Press, 1989.
Brian Barry. Sustainability and intergenerational justice. Theoria, 44(89):43–64, 1997.
Mary Ann Bates and Rachel Glennerster. The generalizability puzzle. Stanford Social Innovation Review, 2017:50–54, 2017.
Ulrich Beck. Risk society: Towards a new modernity. Sage, 1992.
Ulrich Beck. World at risk. Polity, 2009.
Jenna Bednar. Nudging federalism towards productive experimentation. Regional & Federal Studies, 21(4-5):503–521, 2011.
Carole Bernard, Christoph Rheinberger, and Nicolas Treich. Catastrophe aversion and risk equity in an interdependent world. Management Science, 64(10):4490–4504, 2018.
Marianne Bertrand and Sendhil Mullainathan. Are Emily and Greg more employable than Lakisha and Jamal? A field experiment on labor market discrimination. American Economic Review, 94(4):991–1013, 2004.
Gregor Betz. In defence of the value free ideal. European Journal for Philosophy of Science, 3:207–220, 2013.
Toby Bolsen, James Druckman, and Fay Lomax Cook. How frames can undermine support for scientific adaptations: Politicization and the status-quo bias. Public Opinion Quarterly, 78(1):1–26, 2014.
Nick Bostrom. Existential risk prevention as global priority. Global Policy, 4(1):15–31, 2013.
Samuel Bowles and Herbert Gintis. Democracy and capitalism. Basic Books, 1986.
Jason Brennan. Against democracy. Princeton University Press, 2016.
John Broome. Uncertainty and fairness. The Economic Journal, 94(375):624–632, 1984.
John Broome. Utilitarian metaphysics. In Jon Elster and John Roemer, editors, Interpersonal comparisons of well-being. Cambridge University Press, 1993.
Massimiano Bucchi and Brian Trench, editors. Handbook of public communication of science and technology. Routledge, 2008.
Lara Buchak. Risk and rationality. Oxford University Press, 2013.
Lara Buchak. Risk and tradeoffs. Erkenntnis, 79(6):1091–1117, 2014.
Lara Buchak. Taking risks behind the veil of ignorance. Ethics, 127(3):610–644, 2017.
David Budescu, Stephen Broomell, and Han-Hui Por. Improving communication of uncertainty in the reports of the intergovernmental panel on climate change. Psychological Science, 20(3):299–308, 2009.
Brandice Canes-Wrone. From mass preferences to policy. Annual Review of Political Science, 18(1):147–165, 2015.
Simon Caney. Cosmopolitan justice and equalizing opportunities. Metaphilosophy, 32(1-2):113–134, 2001.
Simon Caney. Justice and future generations. Annual Review of Political Science, 21(1):475–493, 2018.
Thomas Christiano. The constitution of equality: Democratic authority and its limits. Oxford University Press, 2008.
Thomas Christiano. Rational deliberation among experts and citizens. In John Parkinson and Jane Mansbridge, editors, Deliberative systems. Cambridge University Press, 2012.
Charles West Churchman. Statistics, pragmatics, induction. Philosophy of Science, 15(3):249–268, 1948.
Gerald Cohen. On the currency of egalitarian justice. Ethics, 99(4):906–944, 1989.
Gerald Allan Cohen. Rescuing justice and equality. Harvard University Press, 2008.
Glenn Cohen, Norman Daniels, and Nir Morechay Eyal. Statistical versus identified persons. In Glenn Cohen, Norman Daniels, and Nir Morechay Eyal, editors, Identified versus statistical lives: An interdisciplinary perspective. Oxford University Press, 2015.
John Danaher. The threat of algocracy: Reality, resistance and accommodation. Philosophy & Technology, 29(3):245–268, 2016.
Inmaculada de Melo-Martín and Kristen Intemann. The risk of using inductive risk to challenge the value-free ideal. Philosophy of Science, 83(4):500–520, 2016.
Peter Diamond. Cardinal welfare, individualistic ethics, and interpersonal comparison of utility: Comment. Journal of Political Economy, 75(5):765–766, 1967.
Heather Douglas. Inductive risk and values in science. Philosophy of Science, 67(4):559–579, 2000.
Heather Douglas. The role of values in expert reasoning. Public Affairs Quarterly, 22(1):1–18, 2008.
Heather Douglas. Science, policy, and the value-free ideal. University of Pittsburgh Press, 2009.
Mary Douglas. Natural symbols: Explorations in cosmology. Pantheon, 1970.
Ronald Dworkin. What is equality? Part 2: Equality of resources. Philosophy & Public Affairs, pages 283–345, 1981.
Kevin Elliott. Hydrogen fuel-cell vehicles, energy policy, and the ethics of expertise. Journal of Applied Philosophy, 27(4):376–393, 2010.
Kevin Elliott and Daniel McKaughan. Nonepistemic values and the multiple goals of science. Philosophy of Science, 81(1):1–21, 2014.
Kevin Elliott and Ted Richards, editors. Exploring inductive risk: Case studies of values in science. Oxford University Press, 2017a.
Kevin Elliott and Ted Richards. Exploring inductive risk: Future questions. In Kevin Elliott and Ted Richards, editors, Exploring inductive risk: Case studies of values in science. Oxford University Press, 2017b.
Jon Elster. Risk, uncertainty and nuclear power. Social Science Information, 18(3):371–400, 1979.
David Estlund. Democratic authority: A philosophical framework. Princeton University Press, 2009.
Cecile Fabre. Global distributive justice: An egalitarian perspective. Canadian Journal of Philosophy, 35:139–164, 2005.
Maria Ferretti. Risk and distributive justice: The case of regulating new technologies. Science and Engineering Ethics, 16(3):501–515, 2010.
Elizabeth Finneron-Burns. What's wrong with human extinction? Canadian Journal of Philosophy, 47(2-3):327–343, 2017.
Baruch Fischhoff. Risk perception and communication unplugged: Twenty years of process. Risk Analysis, 15(2):137–145, 1995.
Peter Fishburn. Transitive measurable utility. Journal of Economic Theory, 31(2):293–317, 1983.
Marc Fleurbaey. Assessing risky social situations. Journal of Political Economy, 118(4):649–680, 2010.
Marc Fleurbaey and Alex Voorhoeve. Decide as you would with full information! In Nir Eyal, Samia Hurst, Ole Norheim, and Daniel Wikler, editors, Inequalities in health: Concepts, measures, and ethics. Oxford University Press, 2013.
Andreas Føllesdal. Federal inequality among equals: A contractualist defense. Metaphilosophy, 32(1-2):236–255, 2001.
David Frank. Making uncertainties explicit. In Kevin Elliott and Ted Richards, editors, Exploring inductive risk: Case studies of values in science. Oxford University Press, 2017.
Harry Frankfurt. Equality as a moral ideal. Ethics, 98(1):21–43, 1987.
Harry Frankfurt. How the afterlife matters. In Death and the afterlife. Oxford University Press, 2013.
Benjamin Freedman. Equipoise and the ethics of clinical research. New England Journal of Medicine, 317, 1987.
Johann Frick. Contractualism and social risk. Philosophy & Public Affairs, 43(3):175–223, 2015.
Johann Frick. On the survival of humanity. Canadian Journal of Philosophy, 47(2-3):344–367, 2017.
Miranda Fricker. Epistemic injustice: Power and the ethics of knowing. Oxford University Press, 2007.
Thibault Gajdos and Eric Maurin. Unequal uncertainties and uncertain inequalities: An axiomatic approach. Journal of Economic Theory, 116(1):93–118, 2004.
Stephen Gardiner. Ethics and global climate change. Ethics, 114(3):555–600, 2004.
Gerald Gaus. The tyranny of the ideal: Justice in a diverse society. Princeton University Press, 2016.
Anthony Giddens. Risk and responsibility. The Modern Law Review, 62(1):1–10, 1999.
Itzhak Gilboa, Dov Samet, and David Schmeidler. Utilitarian aggregation of beliefs and tastes. Journal of Political Economy, 112(4):932–938, 2004.
Robert Goodin. Utilitarianism as a public philosophy. Cambridge University Press, 1995.
Hilary Greaves. Discounting for public policy: A survey. Economics & Philosophy, 33(3):391–439, 2017.
Hilary Greaves and Will MacAskill. The case for longtermism. Ms.
Herbert Grice. Logic and conversation. In Peter Cole and Jerry Morgan, editors, Syntax and semantics. Academic Press, 1975.
Peter Hammond. Ex-ante and ex-post welfare optimality under uncertainty. Economica, 48(191):235–250, 1981.
Peter Hammond. Utilitarianism, uncertainty, and information. In Utilitarianism and beyond. Cambridge University Press, 1983.
Robin Hanson. Shall we vote on values, but bet on beliefs? Journal of Political Philosophy, 21(2):151–178, 2013.
Sven Ove Hansson. The ethics of risk: Ethical analysis in an uncertain world. Palgrave Macmillan, 2013.
John Harsanyi. Cardinal welfare, individualistic ethics, and interpersonal comparisons of utility. Journal of Political Economy, 63(4):309–321, 1955.
John Harsanyi. Nonlinear social welfare functions: Do welfare economists have a special exemption from Bayesian rationality? Theory and Decision, 6(3):311–332, 1975.
John Harsanyi. Utilities, preferences, and substantive goods. Social Choice and Welfare, 14(1):129–145, 1996.
Sol Hart and Erik Nisbet. Boomerang effects in science communication: How motivated reasoning and identity cues amplify opinion polarization about climate mitigation policies. Communication Research, 39(6):701–723, 2012.
Joyce Havstad and Matthew Brown. Inductive risk, deferred decisions, and climate science advising. In Kevin Elliott and Ted Richards, editors, Exploring inductive risk: Case studies of values in science. Oxford University Press, 2017.
Madeleine Hayenhjelm and Jonathan Wolff. The moral problem of risk impositions: A survey of the literature. European Journal of Philosophy, 20:26–51, 2012.
Sebastian Heilmann. Policy experimentation in China's economic rise. Studies in Comparative International Development, 43(1):1–26, 2008.
Joe Horton. Aggregation, complaints, and risk. Philosophy & Public Affairs, 45(1):54–81, 2017.
Keith Hyams. On the contribution of ex ante equality to ex post fairness. In David Sobel, Peter Vallentyne, and Steven Wall, editors, Oxford studies in political philosophy, volume 3. Oxford University Press, 2017.
IGBP, IOC, and SCOR. Ocean acidification summary for policymakers: Third symposium on the ocean in a high-CO2 world. International Geosphere-Biosphere Programme, 2013.
IPCC. Climate change 2014: Synthesis report. Contribution of working groups I, II and III to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. IPCC, 2014.
Subash Iyer. Resolving constitutional uncertainty in affirmative action through constrained constitutional experimentation. NYU Law Review, 87(4), 2012.
Sheila Jasanoff. The songlines of risk. Environmental Values, 8(2):135–152, 1999.
Koen Jaspaert, Freek Van de Velde, Geert Brône, Kurt Feyaerts, and Dirk Geeraerts. Does framing work? An empirical study of simplifying models for sustainable food production. Cognitive Linguistics, 22(3):459–490, 2011.
Richard Jeffrey. Valuation and acceptance of scientific hypotheses. Philosophy of Science, 23(3):237–246, 1956.
Richard Jeffrey. The Logic of Decision. University of Chicago Press, 1983.
Stephen John. Epistemic trust and the ethics of science communication: Against transparency, openness, sincerity and honesty. Social Epistemology, 32(2):75–87, 2018.
Branden Johnson and Paul Slovic. Presenting uncertainty in health risk assessment: Initial studies of its effects on risk perception and trust. Risk Analysis, 15(4):485–494, 1995.
Henry Kissinger. World order. Penguin Books, 2014.
Niko Kolodny. Rule over none II: Social equality and the justification of democracy. Philosophy & Public Affairs, 42(4):287–336, 2014.
Rebecca Kukla. Resituating the principle of equipoise: Justice and access to care in non-ideal conditions. Kennedy Institute of Ethics Journal, 17(3):171–202, 2007.
Rahul Kumar. Wronging future people. In Axel Gosseries and Lukas Meyer, editors, Intergenerational justice. Oxford University Press, 2009.
Joseph Kupfer. The moral presumption against lying. The Review of Metaphysics, 36:103–126, 1982.
Seth Lazar. In dubious battle: Uncertainty and the ethics of killing. Philosophical Studies, 175(4):859–883, 2018a.
Seth Lazar. Limited aggregation and risk. Philosophy & Public Affairs, 46(2):117–159, 2018b.
James Lenman. On becoming extinct. Pacific Philosophical Quarterly, 83(3):253–269, 2002.
Kasper Lippert-Rasmussen. Arneson on equality of opportunity for welfare. Journal of Political Philosophy, 7(4):478–487, 1999.
Niccolo Machiavelli. The prince. University of Chicago Press, 1998 [1532].
Douglas MacKay. The ethics of public policy RCTs: The principle of policy equipoise. Bioethics, 32(1):59–67, 2018.
Douglas MacKay. Government policy experiments and the ethics of randomization. Philosophy & Public Affairs, 2020.
Karl Marx. Economic and philosophical manuscripts. International Publishers, 1975 [1844].
Rui Mata, Andreas Wilke, and Uwe Czienskowski. Foraging across the life span: Is there a reduction in exploration with aging? Frontiers in Neuroscience, 7, 2013.
David McCarthy, Kalle Mikkola, and Teruji Thomas. Utilitarianism with and without expected utility, 2016. MPRA Paper No. 79315.
Lukas Meyer and Dominic Roser. Enough for the future. In Axel Gosseries and Lukas Meyer, editors, Intergenerational justice. Oxford University Press, 2009.
John Stuart Mill. Considerations on representative government. In John Robson, editor, Essays on politics and society. Routledge, 1977 [1861].
John Stuart Mill. On liberty. In Utilitarianism and On Liberty. Blackwell, 2003 [1859].
Philippe Mongin. Consistent Bayesian aggregation. Journal of Economic Theory, 66(2):313–351, 1995.
Philippe Mongin. Spurious unanimity and the Pareto principle. Economics & Philosophy, 32(3):511–532, 2016.
Philippe Mongin and Marcus Pivato. Social evaluation under risk and uncertainty. In Matthew Adler and Marc Fleurbaey, editors, The Oxford handbook of well-being and public policy. Oxford University Press, 2016.
Tim Mulgan. Ethics for a broken world: Imagining philosophy after catastrophe. Routledge, 2011.
Emily Nacol. An age of risk: Politics and economy in early modern Britain. Princeton University Press, 2016.
Jacob Nebel. Rank-weighted utilitarianism and the veil of ignorance. Ethics, 131(1):87–106, 2020.
Yew-Kwang Ng. Efficiency, equality and public policy: With a case for higher public spending. Palgrave Macmillan, 2000.
Yew-Kwang Ng. The importance of global extinction in climate change policy. Global Policy, 7(3):315–322, 2016.
Matthew Nisbet. The ethics of framing science. In Brigitte Nerlich, Richard Elliott, and Brendon Larson, editors, Communicating biological sciences: Ethical and metaphorical dimensions. Routledge, 2009.
Robert Nozick. Anarchy, state, and utopia. Basic Books, 1974.
John Oberdiek. Imposing risk: A normative framework. Oxford University Press, 2017.
OECD. Scientific advice for policy making: The role and responsibility of expert bodies and individual scientists. OECD Science, Technology and Industry Policy Papers, 2015.
Derek Parfit. Reasons and persons. Oxford University Press, 1984.
Anthony Patt and Daniel Schrag. Using specific language to describe risk and probability. Climatic Change, 61(1-2):17–30, 2003.
Richard Pettigrew. Accuracy and the laws of credence. Oxford University Press, 2016.
Roger Pielke. The honest broker: Making sense of science in policy and politics. Cambridge University Press, 2007.
Richard Posner. Catastrophe: Risk and response. Oxford University Press, 2004.
John Quiggin. A theory of anticipated utility. Journal of Economic Behavior & Organization, 3(4):323–343, 1982.
Jonathan Quong. The distribution of authority. Representation, 46(1):35–52, 2010.
Jonathan Quong. Liberalism without perfection. Oxford University Press, 2011.
Peter Railton. Alienation, consequentialism, and the demands of morality. Philosophy & Public Affairs, pages 134–171, 1984.
John Rawls. Political liberalism. Columbia University Press, 1993.
John Rawls. A theory of justice, revised edition. Belknap, 1999.
Jeffrey Reiman. Being fair to future people: The non-identity problem in the original position. Philosophy & Public Affairs, 35(1):69–92, 2007.
Thomas Rowe and Alex Voorhoeve. Egalitarianism under severe uncertainty. Philosophy & Public Affairs, 46(3):239–268, 2019.
Richard Rudner. The scientist qua scientist makes value judgments. Philosophy of Science, 20(1):1–6, 1953.
Giovanni Sartori. The theory of democracy revisited. Chatham House, 1987.
Leonard Savage. The foundations of statistics. Wiley, 1954.
Thomas Scanlon. What we owe to each other. Harvard University Press, 1998.
Samuel Scheffler. Death and the afterlife. Oxford University Press, 2013.
Samuel Scheffler. Why worry about future generations? Oxford University Press, 2018.
Miriam Schoenfield. Permission to believe: Why permissivism is true and what it tells us about irrelevant influences on belief. Noûs, 48(2):193–218, 2014.
Melissa Schwartzberg. Epistemic democracy and its challenges. Annual Review of Political Science, 18:187–203, 2015.
James Scott. Seeing like a state: How certain schemes to improve the human condition have failed. Yale University Press, 1998.
Amartya Sen. Collective choice and social welfare. Holden-Day, 1970.
Claude Shannon. A mathematical theory of communication. Bell System Technical Journal, 27:379–423, 1948.
Seana Shiffrin. Speech matters: On lying, morality, and the law. Princeton University Press, 2016.
Kristin Sharon Shrader-Frechette. Ethics of scientific research. Rowman & Littlefield, 1994.
Daniel Steel. Climate change and second-order uncertainty: Defending a generalized, normative, and structural argument from inductive risk. Perspectives on Science, 24(6):696–721, 2016.
Katie Steele. The scientist qua policy advisor makes value judgments. Philosophy of Science, 79(5):893–904, 2012.
Anna Stilz. The value of self-determination. In David Sobel, Peter Vallentyne, and Steven Wall, editors, Oxford studies in political philosophy, volume 2, pages 98–127. Oxford University Press, 2016.
Cass Sunstein. Preferences and politics. Philosophy & Public Affairs, 20(1):3–34, 1991.
Patrick Suppes. Some formal models of grading principles. Synthese, 6:284–306, 1966.
Dennis Thompson. Representing future generations: Political presentism and democratic trusteeship. Critical Review of International Social and Political Philosophy, 13(1):17–37, 2010.
Bruce Tonn. Obligations to future generations and acceptable risks of human extinction. Futures, 41(7):427–435, 2009.
Amos Tversky and Daniel Kahneman. Advances in prospect theory: Cumulative representation of uncertainty. Journal of Risk and Uncertainty, 5(4):297–323, 1992.
Karma Ura, Sabina Alkire, Tshoki Zangmo, and Karma Wangdi. A short guide to gross national happiness index. 2012.
Laura Valentini. Justice, disagreement and democracy. British Journal of Political Science, 43(1):177–199, 2013.
Patti Valkenburg, Holli Semetko, and Claes De Vreese. The effects of news frames on readers' thoughts and recall. Communication Research, 26(5):550–569, 1999.
Daniel Viehoff. Democratic equality and political authority. Philosophy & Public Affairs, 42(4):337–375, 2014.
Daniel Viehoff. Power and equality. In David Sobel, Peter Vallentyne, and Steven Wall, editors, Oxford studies in political philosophy, volume 5. Oxford University Press, 2019.
Eva Vivalt. How much can we generalize from impact evaluations? Journal of the European Economic Association, 5, forthcoming.
Eva Vivalt and Aidan Coville. How do policymakers update their beliefs? Ms.
Elke Weber and Denis Hilton. Contextual effects in the interpretations of probability words: Perceived base rate and severity of events. Journal of Experimental Psychology: Human Perception and Performance, 16(4):781, 1990.
Peter Weingart and Justus Lentsch, editors. Scientific advice to policy making: International comparisons. Barbara Budrich, 2009.
Roger White. Epistemic permissiveness. Philosophical Perspectives, 19(1):445–459, 2005.
Torsten Wilholt. Epistemic trust in science. The British Journal for the Philosophy of Science, 64(2):233–253, 2013.
Deirdre Wilson and Dan Sperber. Meaning and relevance. Cambridge University Press, 2012.
Langdon Winner. Autonomous technology. MIT Press, 1977.
Hannah Wiseman and Dave Owen. Federal laboratories of democracy. UCD Law Review, 52:1119, 2018.
George Wood, Tom Tyler, and Andrew Papachristos. Procedural justice training reduces police use of force and complaints against officers. Proceedings of the National Academy of Sciences, 117(18):9815–9821, 2020.
Yang Yang and Jill Hobbs. The power of stories: Narratives and information framing effects in science communication. American Journal of Agricultural Economics, 102:1271–1296, 2020.
Appendix A

An Impossibility Result for Strict Preferences
In this section, I state the principles that capture the conflict between respecting individual preferences and statewise reasoning in cases in which statewise reasoning leads to a strict preference for an act which is dispreferred by all individuals. In the introduction to Chapter 2, I presented such a case:

                                              Sunny April                       Rainy April
Risky ticket for Alice, safe ticket for Bob   $(2, 1)$                          $(0, 1)$
Safe ticket for Alice, risky ticket for Bob   $(1+\epsilon, 2+\epsilon)$        $(1+\epsilon, 0+\epsilon)$

If Alice is risk-seeking and Bob is risk-averse, Alice will prefer the risky ticket and Bob the safe ticket, despite the small additional benefit $\epsilon$ that Alice and Bob would receive if the tickets were allocated differently. But plausible statewise reasoning leads to a strict preference for giving out the tickets in the way dispreferred by the individuals, because an outcome in which one individual is at welfare level $2+\epsilon$ and another at welfare level $1+\epsilon$ is plausibly strictly better than an outcome in which one individual is at welfare level 1 and another at welfare level 2.
The principle which captures Alice's and Bob's preferences says that two individuals have opposite preferences between prospects $P$ and $Q$ even if you 'sweeten' the respectively dispreferred prospect by marginally increasing the welfare level it leads to in each state.

(Individual Preference Divergence Sweetened) For some individuals $i, j$ and prospects $P$, $P^+$, $Q$, $Q^+$:
1) for all $s$, $P^+(s) > P(s)$ and $Q^+(s) > Q(s)$, and
2) $P \succ_i Q^+$ and $Q \succ_j P^+$.
Individuals will have such preferences if they are risk-averse or risk-seeking so that they are willing to forego some expected utility in exchange for a preferred risk profile. In particular, REU theory permits such preferences.
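To illustrate the last point, here is a minimal sketch of how REU theory generates such preferences. The specific risk functions, the equiprobable states, the identity utility function, and the sweetening $\epsilon = 0.1$ are my own illustrative assumptions, chosen to match the ticket case above; for a two-outcome prospect, REU equals the worse outcome's utility plus the risk-weighted gap to the better outcome.

```python
import math

def reu(prospect, r):
    """Risk-weighted expected utility of a two-state prospect
    (welfare if sunny, welfare if rainy) with equiprobable states
    and utility equal to welfare: REU = low + r(1/2) * (high - low)."""
    low, high = sorted(prospect)
    return low + r(0.5) * (high - low)

def risk_seeking(p):   # r(p) = sqrt(p) >= p
    return math.sqrt(p)

def risk_averse(p):    # r(p) = p^2 <= p
    return p ** 2

eps = 0.1
P = (2.0, 0.0)                    # risky ticket (for Alice)
Q_plus = (1.0 + eps, 1.0 + eps)   # sweetened safe ticket
Q = (1.0, 1.0)                    # safe ticket (for Bob)
P_plus = (2.0 + eps, 0.0 + eps)   # sweetened risky ticket

print(reu(P, risk_seeking), reu(Q_plus, risk_seeking))  # ~1.41 > 1.1
print(reu(Q, risk_averse), reu(P_plus, risk_averse))    # 1.0 > 0.6
```

With these settings, Alice strictly prefers $P$ to $Q^+$ and Bob strictly prefers $Q$ to $P^+$, which is exactly the pattern (Individual Preference Divergence Sweetened) describes.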
From the result in the main text, we keep the ex ante Pareto principle, which requires the observer to follow unanimous individual preferences.

(Ex Ante Pareto) For all social acts $A, B$, if $A_i \succ_i B_i$ for all individuals $i$, then $A \succ B$.
We also keep the principle that the observer is indifferent between outcomes that involve exactly the same welfare levels and only differ in who is at which welfare level.

(Constant Anonymity) For all welfare profiles $(w_1, \ldots, w_n)$ and permutations of individuals $\sigma$, $(w_1, \ldots, w_n) \sim (w_{\sigma(1)}, \ldots, w_{\sigma(n)})$.
In addition, we require that the observer strictly prefers an outcome if every individual is better off in that outcome.

(Ex Post Pareto) For all welfare profiles $(w_1, \ldots, w_n)$ and $(w'_1, \ldots, w'_n)$, if $w_i > w'_i$ for all $i$, then $(w_1, \ldots, w_n) \succ (w'_1, \ldots, w'_n)$.
Finally, if the outcome of act $A$ is strictly preferred to the outcome of act $B$ in all states, then $A$ is strictly preferred to $B$.

(Strict Dominance) For all social acts $A, B$, if $A(s) \succ B(s)$ for all states $s$, then $A \succ B$.
These five principles are inconsistent.

Proposition. (Individual Preference Divergence Sweetened), (Ex Ante Pareto), (Constant Anonymity), (Ex Post Pareto), and (Strict Dominance) are inconsistent if the number of individuals is two.
Proof. By (Individual Preference Divergence Sweetened), there are prospects $P$, $P^+$, $Q$, $Q^+$ so that for all $s$, $P^+(s) > P(s)$ and $Q^+(s) > Q(s)$, and $P \succ_1 Q^+$ and $Q \succ_2 P^+$. Consider the social acts $A = (P, Q)$ and $B = (Q^+, P^+)$. We have that $A_1 \succ_1 B_1$ and $A_2 \succ_2 B_2$. By (Ex Ante Pareto), $A \succ B$. Now consider $A' = (Q, P)$. Since $A'_1 = A_2$ and $A'_2 = A_1$, we have that for any state $s$, $(A'(s))_1 = (A(s))_2$ and $(A'(s))_2 = (A(s))_1$. Thus, the welfare vectors in each state are permutations of each other. Hence, by (Constant Anonymity), $A'(s) \sim A(s)$ for all $s$. By (Ex Post Pareto), $B(s) \succ A'(s)$ for all $s$. Thus, by transitivity, $B(s) \succ A(s)$ for all $s$. Hence, by (Strict Dominance), $B \succ A$. Contradiction.
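The proof can also be checked numerically on the ticket case. The following sketch is my own illustration, not part of the appendix's formalism: it reuses the illustrative REU settings above (equiprobable states, utility equal to welfare, $r(p) = \sqrt{p}$ for Alice, $r(p) = p^2$ for Bob, $\epsilon = 0.1$) and confirms that ex ante Pareto favors $A = (P, Q)$ while the three statewise principles jointly favor $B = (Q^+, P^+)$.

```python
import math

eps = 0.1
# Social acts as state -> (Alice's welfare, Bob's welfare).
A = {"sunny": (2.0, 1.0), "rainy": (0.0, 1.0)}   # A = (P, Q)
B = {"sunny": (1.0 + eps, 2.0 + eps),
     "rainy": (1.0 + eps, 0.0 + eps)}            # B = (Q+, P+)

def reu(prospect, r):
    # Two equiprobable states, utility = welfare.
    low, high = sorted(prospect)
    return low + r(0.5) * (high - low)

# Horn 1 (ex ante Pareto): each individual strictly prefers A.
alice_prefers_A = (reu([A[s][0] for s in A], math.sqrt)
                   > reu([B[s][0] for s in B], math.sqrt))
bob_prefers_A = (reu([A[s][1] for s in A], lambda p: p ** 2)
                 > reu([B[s][1] for s in B], lambda p: p ** 2))

# Horn 2 (anonymity + ex post Pareto + dominance): in every state,
# B's welfare vector strictly dominates a permutation of A's.
B_dominates = all(b > a for s in A
                  for a, b in zip(sorted(A[s]), sorted(B[s])))

print(alice_prefers_A, bob_prefers_A, B_dominates)  # True True True
```

All three checks come out true, so the two horns issue opposite strict verdicts on the same pair of acts, which is the inconsistency the proposition asserts.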
Abstract
Political decision-makers cannot perfectly foresee the consequences of their actions. For example, when they consider raising the income tax, they do not know exactly what effect that would have on consumer spending or the distribution of wealth. When political decision-makers are uncertain about the consequences of the available options, which options should they choose? And which procedures should societies implement to ensure that expert knowledge best informs political decisions? This dissertation tackles aspects of these two questions. Regarding which options political decision-makers should choose, I discuss to what extent they should take people's risk attitudes into account, to what extent they should decrease risks of catastrophically bad events, and to what extent they should experiment with policies to get a better understanding of policy consequences. Regarding how we should best bring expert knowledge to bear on policy decisions, I discuss how scientists should inform political decision-makers and whether we should transfer some power from elected politicians to experts.