Copyright 2024 Pragya Arya
Perceived Social Consensus: A Metacognitive Perspective
by
Pragya Arya
A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(PSYCHOLOGY)
August 2024
Acknowledgements
First, I would like to thank my advisor, Norbert Schwarz, for his invaluable support and
guidance throughout my graduate career. I could not have asked for a better mentor and I’m
deeply grateful for all I have learned from him.
I would also like to thank Madeline Jalbert, for being an excellent collaborator, role
model, and friend, and for being excited about every new research idea I threw at her.
I would like to thank Daphna Oyserman, Wandi Bruine de Bruin, Mark Lai, and Gale
Sinatra for serving on my dissertation committee and providing feedback on my research as I
developed my dissertation.
Additionally, I would like to thank my lab group, Andrew Gorenz, Lynn Zhang, Steve
Carney, and Maansi Dalmia, for their help, academic support, and friendship.
Finally, I’d like to express my appreciation for my family and friends, who have
supported and uplifted me through this entire journey and without whom this PhD would not
have been the same. I thank my parents and my brother for their never-ending encouragement
and for always believing in me. I thank my fiancé, Ian Anderson, for his endless patience, care,
and support. Amabel Jeon, Sarah Hennessy, Ellen Herschel, Karissa Patel: I’m grateful for your
support, advice, and love through the ups and downs of graduate school.
Last but not least, I want to give a big shoutout to my mom for all the dubious “health
facts” she has forwarded to me on WhatsApp and Instagram over the years (despite not believing
them herself)—thanks for giving me the inspiration to research those.
Table of Contents
Acknowledgements………………………………………………………………………………..ii
List of Tables……………………………………………………………………………………...iv
List of Figures……………………………………………………………………………………..v
Abstract…………………………………………………………………………………………...vi
Chapter I: Introduction…………………………………………………………………………….1
Chapter II: Do 70% Support Or 30% Oppose? Focusing On Vaccine Opposition Undermines
Vaccination Efforts……..…………………………………………………………......5
Chapter III: Nyquil Chicken: Can News Reporting On Harmful Social Media Challenges Spread
Them Further? ……………………………………………………………………31
Chapter IV: The Illusory Consensus Effect: Repeated Exposure To Health Information Increases
Estimates Of Scientific Consensus…………………………………………………50
Conclusion……………………………………………………………………………………….76
References………………………………………………………………………………………..79
Appendices……………………………………………………………………………………….88
Appendix A……………………………………………………………………………....88
Appendix B……………………………………………………………………………....94
List of Tables
Table 3.1. Main effect of news exposure and descriptive statistics for all dependent
measures………………………………………………………………………………………….38
Table 3.2. News Headlines in the Warning + Trend exposure and Warning exposure
conditions………………………………………………………………………………………...42
Table 4.1. Results from a 2 (repetition) x 2 (source relevance) x 2 (time of judgment) mixed
model ANOVA…………………..……………………………………………………………... 69
List of Figures
Figure 2.1. Example News Headline in The Vaccinated Focus (Top) and Unvaccinated
Focus (Bottom) Condition (Experiment 1)……………………………………………………...11
Figure 2.2. Norm Perception of Covid-19 Vaccination in Andorra, by Experimental
Condition………………………………………………………………………………………...13
Figure 2.3. Norm Perception of the Second COVID-19 booster in the United States, by
Experimental Condition……………………………………………………………………...…..18
Figure 2.4. Mediation Model……………………………………………………………………20
Figure 2.5. Norm Perception of the Monkeypox Vaccine in the United States, by
Experimental Condition..………………………………………………………………………...24
Figure 2.6. Mediation Model……………………………………………………...…………….26
Figure 2.7. Effect size analyses for norm perception……………………………………………27
Figure 3.1. TikTok trend headlines in the with trend language condition (left) and the
without trend language condition (right)……………………………………...…………………35
Figure 3.2. Effects of news exposure and news content on participants’ judgments of
descriptive norms, peer norms, likelihood of trying, and perceived effectiveness….....……...…39
Figure 3.3. Participants’ behavioral intentions across the warning + trend exposure,
warning exposure, and no exposure conditions……………………..………………………..….46
Figure 4.1. Descriptions and logos for the relevant source (left) and non-relevant source
(right)…………………………………………………………………………………………….61
Figure 4.2. Perceived scientific consensus in repeated and new claims from a relevant
source (left) and non-relevant source (right)………………………………………….………....64
Figure 4.3. Perceived scientific consensus in repeated and new claims by source
relevance and time of judgment conditions……………………………………………………...70
Figure 4.4. Estimates of unstandardized effect sizes for the simple effects of claim
repetition within source conditions for Experiments 2 and 3 using a single-paper meta-analysis………72
Abstract
In this dissertation, I explore aspects of media influence, namely how exposure to a
behavior or opinion in mainstream media or social media influences people’s perception of
social consensus. Perceived social consensus figures prominently in decision-making and
influences what people consider true and normal (Lewandowsky et al., 2013; Miller & Prentice,
2016). To date, research into the impact of perceived consensus has typically provided people
with explicit consensus information, i.e., direct information about how many others engage in a
given behavior (descriptive norms) or agree on a given opinion. This research strategy bypasses
a crucial issue: Does mere exposure to behaviors and opinions in mainstream or social media
influence perceived consensus with downstream consequences on epistemic and behavioral
decisions? To fill this gap, my experimental studies address how media exposure influences
perceived consensus and explore its downstream consequences. I present three sets of studies
(nine total experiments) that investigate various contextual influences on perceptions of
descriptive norms and scientific consensus. In Chapter II, I examine how the focus of news
headlines can influence perceived descriptive norms of vaccination, and whether this has
downstream effects on individuals’ vaccination intent (Arya, Jalbert, & Schwarz, 2024). In
Chapter III, I investigate how exposure to news reporting on risky social media trends may
influence perceptions of how widespread those trends are. In Chapter IV, I turn to perceptions of
scientific consensus and investigate how repeated exposure to information influences judgments
of scientific consensus on that information.
Chapter I: Introduction
According to the American Press Institute, one of the key roles of journalists is to be
“committed observers” and draw attention to issues that need resolution, while providing the
public with the information to make the best possible informed decisions about their lives,
societies, and governments (American Press Institute, n.d.). As a result, audiences not only learn
factual information from the news but also learn about the importance of certain topics by how
the media emphasizes or fails to emphasize them (McCombs, 1997). The observation that the
media set the agenda of public discourse and shape public opinion is consistent with a basic
principle of social cognition: human judgment is based on the subset of potentially relevant
information that is most accessible at the time of judgment (Iyengar, 1990; Schwarz, 2007).
In this dissertation, I explore aspects of media influence, namely how exposure to a
behavior or opinion in mainstream media or social media influences people’s perception of
social consensus, that is, how many others do so or think so. Perceived social consensus figures
prominently in epistemic and behavioral decision-making and influences what people consider
true, desirable, risky, normal, and so on (Kahan et al., 2011; Lewandowsky et al., 2013; Miller &
Prentice, 2016; Schultz et al., 2007; Tankard & Paluck, 2016). To date, research into the impact
of perceived consensus has typically provided people with explicit consensus information, i.e.,
direct information about how many others engage in a given behavior or agree on a given
opinion. This research strategy bypasses a crucial issue: Does mere exposure to behaviors and
opinions in mainstream or social media influence perceived consensus with downstream
consequences on epistemic and behavioral decisions? To fill this gap, my experimental studies
address how media exposure influences perceived consensus and explore its downstream
consequences.
What others are doing: perceived descriptive norms
In behavioral decision-making, perceptions of how many others engage in a particular
behavior serve as descriptive norms, which can be more influential than injunctive norms, i.e.,
appeals to what one should do (Cialdini et al., 2006). Descriptive norms reflect perceptions of
behavioral frequency and injunctive norms involve social consensus: they reflect the majority
opinion on what is approved or disapproved within a certain group or culture (Reno et al., 1993).
Perceived descriptive norms have been shown to influence behavior in a variety of domains,
including pro-environmental behavior, healthcare decisions, and vaccination intentions, and
research suggests that they are often more influential than injunctive norms in predicting
behavior (Bruine de Bruin et al., 2019; Goldstein et al., 2008; Strough et al., 2022). However, an
aligned combination of descriptive and injunctive norms can result in larger behavioral changes
than either norm in isolation, indicating that perceived behavioral consensus and opinion
consensus within a group are both important (Cialdini, 2003).
Our perceptions of descriptive norms are subjective and context-sensitive. Individuals
construct estimates of the frequency of a behavior as and when needed to make particular
judgments and draw on the information that is most salient in their context to make their
estimates (Miller & Prentice, 1996; Prentice, 2018). Given the proliferation of online
communication channels, often what is most salient is media: online, news, and social (Prentice,
2018). Indeed, for many social issues, the primary source of information on descriptive norms is
media exposure (Gunther et al., 2006; McCombs, 1997; Paluck, 2009). Most people are
frequently exposed to news on social media, even when they are not seeking them out (Fletcher
& Nielsen, 2018). What we see in the news shapes our opinion by influencing what issues are
considered important as well as what others think about said issues (McCombs & Shaw, 1993).
However, media communications, both news and social media, are biased and often
sensationalized (Brown et al., 2016; Molek-Kozakowska, 2013). These biases in what
individuals are exposed to can make their way into norm-construction processes, and lead to
inaccurate perceptions of descriptive norms.
What experts think: Perceived Scientific Consensus
People look to others to form their own beliefs (Festinger, 1954). This becomes
particularly relevant when one looks to experts within the area in which one must make a
decision. Perceptions of scientific consensus, i.e., the extent to which scientists agree on a
scientific issue, predict individuals’ beliefs about the issue and are crucial in public acceptance of
science, agreement with scientific insights, and adherence to health recommendations (Kerr &
Wilson, 2018; Lewandowsky et al., 2013; Maibach & Van Der Linden, 2016; van der Linden et
al., 2015). Studies on climate change and vaccination show that the more individuals believe that
there is consensus on a scientific issue among scientists, the more they believe in the scientific
issue and support public action for it (Bertoldo et al., 2019; Sturgis et al., 2021; van der Linden et
al., 2019).
Construction of norm and consensus estimates
Most research on descriptive norms and scientific consensus has focused on studying the
impacts of providing explicit norm or consensus information to individuals (Tankard & Paluck,
2016). We do not understand much about how individuals come to form these estimates in the
absence of explicit information. However, we do know that individuals are often inaccurate at
estimating both norms and consensus. Studies on the false consensus effect illustrate that people
who engage in a given behavior themselves estimate that behavior to be more common than it
actually is (Mullen et al., 1985; Ross et al., 1977). Cognitive biases such as the fundamental
attribution error and the correspondence bias may also impact the process of norm construction.
When a group participates in a given behavior, people tend to believe that every member of the
group privately accepts and supports it even when they privately reject the behavior themselves,
a phenomenon known as pluralistic ignorance (Prentice & Miller, 1996).
Outside of these few demonstrated biases in descriptive norm and consensus perception,
there has been no systematic testing of contextual variables that may influence the construction
of these estimates. In this dissertation, I address how exposure to a given behavior or opinion can
influence perceptions of descriptive norms and scientific consensus.
Overview of this dissertation
I present three sets of studies (nine total experiments) that investigate various contextual
influences on perceptions of descriptive norms and scientific consensus. In Chapter II, I examine
how the focus of news headlines can influence perceived descriptive norms of vaccination, and
whether this has downstream effects on individuals’ vaccination intent (Arya, Jalbert, &
Schwarz, 2024). In Chapter III, I investigate how exposure to news reporting on risky social
media trends may influence perceptions of how widespread those trends are. In Chapter IV, I
turn to perceptions of scientific consensus and investigate how repeated exposure to information
influences judgments of scientific consensus on that information.
Chapter II: Do 70% support or 30% oppose? Focusing on vaccine opposition undermines
vaccination efforts
During the COVID-19 vaccination efforts, health authorities and mainstream media
emphasized that all eligible citizens should get vaccinated, thus conveying a clear injunctive
norm of vaccination. Indeed, most U.S. residents supported vaccination, and opposition to
COVID-19 vaccination was a minority position (Funk & Tyson, 2021). Nevertheless, media
coverage often focused on those who were not (yet) vaccinated, presumably to highlight a
persistent problem and to encourage the unvaccinated minority to join the vaccinated majority.
Headlines like “Fauci Sounds Alarm Over Low Covid Vaccination Rates” (Belluck, 2021) or
“The 6 Reasons Americans Aren’t Getting Vaccinated” (Lopez, 2021), illustrate this focus.
Readers scanning these headlines may infer that vaccine opposition dominates. From this
perspective, the media sent a mixed message. On the one hand, their reporting emphasized the
injunctive norm that people should get vaccinated – and without the rationale supporting this
norm, there would have been little reason to alarm the public about limited compliance. On the
other hand, the media focus on the unvaccinated conveyed a descriptive norm that many people
do not comply with the injunctive norm and refuse vaccination. Unfortunately, descriptive norms
(what our peers do) are often more influential than injunctive norms (what all ought to do) in
guiding attitudes and behaviors (Cialdini et al., 1991; Miller & Prentice, 2016). Hence, the
descriptive norms conveyed by problem-focused headlines may inadvertently counteract the
injunctive norm underlying the message, potentially impairing readers’ vaccination intention. We
explore this possibility and report three experiments that test the impact of problem-focused
news headlines on readers’ inferences about the prevalence of the problematic behavior and their
potential impact on readers’ behavioral intentions.
Problem-Focused Headlines and Descriptive Norms
The news media’s overemphasis on the unvaccinated minority is consistent with the
adage, “If it bleeds, it leads.” Indeed, negativity in the news grabs attention and increases news
consumption (Robertson et al., 2023). While grabbing attention may be a useful way to alert the
public to issues that require attention, it can also come at the expense of inviting erroneous
conclusions or spreading misinformation (Andrew, 2007; Molek-Kozakowska, 2013).
In defense of problem-focused headlines, one might assume they alert the public to issues
that require attention. Emphasizing that 30% of the U.S. public are still not fully vaccinated
against COVID-19 may be more likely to alert readers of a continuing health threat than the
more comforting framing that 70% of the U.S. public are fully vaccinated. However, attention-grabbing headlines about anti-vaccination attitudes and behaviors may also mislead the public to
believe that these attitudes and behaviors are more common than they are, changing the public’s
perception of the descriptive social norms regarding vaccination. Because people consider
descriptive norms in forming their own attitudes and behavioral intentions (Miller & Prentice,
2016), this may, in turn, impair their own intention to get vaccinated.
Social norm research shows that descriptive norms (e.g., “75% of hotel guests reuse their
towels”) are often more influential in inducing behavior change than injunctive norms (e.g.,
“Help save the environment by reusing your towels”; Goldstein et al., 2008).
Descriptive norm information about what others are doing can influence engagement in desirable
as well as problematic behaviors. For example, experimental interventions have successfully
used descriptive norm information to increase engagement in desirable behaviors, such as energy
and water conservation (Goldstein et al., 2008; Nolan et al., 2008; Schultz et al., 2007), and to
decrease the frequency of undesirable behaviors, including littering (Kallgren et al., 2000) and
drinking and driving (Perkins et al., 2010). These interventions typically give participants
descriptive norm information about the proportion of people who engage in a desired behavior
(e.g., 75% of hotel guests reuse their towels) or the frequency with which they do so. This
information increases the likelihood that participants themselves engage in the behavior.
Descriptive norm information about undesirable behaviors can similarly increase engagement in
these behaviors. For example, in a study at Arizona’s Petrified Forest National Park, Cialdini and
colleagues (2006) found that drawing attention to others’ problematic behavior (“many past
visitors have removed the petrified wood from the park”) led more visitors to remove petrified
wood from the park.
Consistent with the observed influence of descriptive norms across many domains,
studies have found strong positive associations between perceived social norms and vaccination
behavior (Brewer et al., 2017; Bruine de Bruin et al., 2019; Jaffe et al., 2022; Nyhan et al.,
2012). More importantly, providing participants in 23 countries with accurate descriptive norm
information about COVID-19 vaccination (e.g., “X% of people in your country say they will
take a vaccine if one is made available”) increased willingness to receive a COVID-19 vaccine.
This causal effect was particularly strong in countries where descriptive norms were higher
(Moehring et al., 2023). However, all these experiments manipulated the social norm by
providing participants with explicit information about what others are doing, i.e., an explicit
descriptive norm. Less is known about how individuals infer descriptive norms in the absence of
explicit information.
Norms And Media Representation
Key to the influence of descriptive norms are people’s perceptions of what others are
doing, not what others are actually doing (Tankard & Paluck, 2016). Accordingly, most social
norm interventions attempt to change perceived norms by providing explicit information about
others’ behavior. Less is known about how people arrive at estimates of norms in the absence of
such explicit information.
One source of relevant information is personal experience (Miller & Prentice, 1996;
Paluck & Shepherd, 2012). But for many issues, the primary source of information is media
exposure (Brossard, 2013; McCombs, 1997; McCombs & Shaw, 1972; Su et al., 2015).
Consistent with general information accessibility principles (Iyengar, 1990; Tversky &
Kahneman, 1973), media exposure has been found to influence a broad range of probability and
frequency judgments, from the assessment of health risks (Chia & Lee, 2008; Gunther et al.,
2006) to the perception of descriptive norms (Gunther et al., 2006; Paluck, 2009). High media
coverage of an issue further conveys that the issue is an important problem that poses a threat to
society. Hence, a reader may infer from extensive reporting about the unvaccinated that
vaccination refusal is high and that the resulting low vaccination rates present a major threat.
These considerations converge on predicting that a media focus on the unvaccinated biases the
perception of vaccination rates and hence the perception of descriptive norms about vaccination.
Present Studies
In three studies, we test whether exposure to news stories that focus on anti-vaccination
attitudes and behaviors (as opposed to pro-vaccination attitudes and behavior) reduces the
perceived descriptive norm of vaccination (Experiments 1-3) and whether this perception, in
turn, reduces vaccination intent (Experiments 2-3). Throughout, the news stimuli implicitly
convey an injunctive norm that people should be vaccinated and vary merely in whether their
headlines focus on the vaccinated or the unvaccinated. We test whether this differential focus of
a news headline can influence perceptions of the descriptive norm, even when the headlines are
substantively equivalent.
In each study, participants located in the United States were asked to read news headlines
that focused either on the unvaccinated or on the vaccinated. News headlines across both
conditions had the same substantive meaning and underlying message but varied in who they
focused on. All headlines and excerpts were formatted and presented in the style of Google
News. In Experiment 1, participants read headlines about COVID-19 vaccination in another
country and estimated the descriptive norm of vaccination in that country. In Experiments 2 and
3, participants viewed news headlines about the COVID-19 booster vaccine (Experiment 2) or
monkeypox vaccine (Experiment 3) in the United States. They then estimated the future rates of
vaccine uptake and reported their own intentions to receive the vaccine.
Open Practices Statement
Experiment 1 was not pre-registered; the pre-registration for Experiment 2 can be
accessed at https://aspredicted.org/XPB_QH3, and the pre-registration for Experiment 3 can be
accessed at https://aspredicted.org/QDX_8BS. De-identified data files and analytic scripts for all
experiments are available at the OSF repository for this article:
https://osf.io/trsyw/?view_only=7a46596e7fad4870ae4b2cf11d2bf915. All experimental
materials are posted in the OSF repository as well.
Experiment 1
Experiment 1 tested whether viewing news headlines that focus on anti-vaccination (vs.
pro-vaccination) sentiment around COVID-19 would reduce participants’ estimates of the
prevalence of COVID-19 vaccination. To minimize the impact of factual knowledge of COVID-19 vaccination in this initial study, the materials pertained to the country Andorra (a small
European country in the Pyrenees), which was unfamiliar to most participants. We predicted that
participants who read news headlines focused on the unvaccinated in Andorra would infer that
fewer people in the country were vaccinated against COVID-19 compared to participants who
read news headlines focused on the vaccinated. Experiment 1 was conducted in March 2022.
Method
Participants
For the initial study, we aimed to collect data from a convenience sample of about 200
undergraduate participants. Recruitment and data collection started on March 11, 2022, and
ended on April 2, 2022. In total, 215 undergraduate psychology students from the University of
Southern California (18-31 age range, Mage = 20, 62% women, 37% men, 1% non-binary)
completed the study online for course credit. According to a power analysis conducted using the
pwr package on R (Champely et al., 2017), a sample size of N = 215 would be sufficient to detect
a difference between two independent means of effect size d = 0.38 (alpha = 0.05,
power = 0.80, two-tailed).
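The required sample size for such a design can be approximated in closed form. The sketch below is a minimal Python illustration, not the authors’ pwr code; it uses the standard normal approximation for a two-tailed, two-sample t-test, so its answer lands within a participant or two of what pwr’s noncentral-t calculation reports.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sample, two-tailed t-test
    via the normal approximation: n = 2 * ((z_{1-a/2} + z_{power}) / d)^2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = .05
    z_power = z.inv_cdf(power)           # ~0.84 for power = .80
    return 2 * ((z_alpha + z_power) / d) ** 2

n = n_per_group(0.38)
print(ceil(n), 2 * ceil(n))  # ~109 per group, ~218 total, close to N = 215
```

The small gap between 218 and the reported N = 215 reflects the difference between this normal approximation and the exact noncentral-t computation used by pwr.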
Design and Materials
Participants were randomly assigned to one of two between-subjects news focus
conditions: unvaccinated focus (N = 107) or vaccinated focus (N = 108). All participants were
shown five target and two filler news headlines and short excerpts about COVID-19 in Andorra.
All news materials were formatted and presented in the style of Google News (Figure 1) and
were presented together on one page. Target headlines were matched in substantive meaning
across conditions but varied in whether they focused on the vaccinated (e.g., “Local vaccination
drives going well: increase in vaccine uptake.”) or the unvaccinated (e.g., “Local vaccination
drives not reaching the doubtful unvaccinated.”). Full materials for all experiments can be found
in Appendix B.
Figure 2.1. Example News Headline in The Vaccinated Focus (Top) and Unvaccinated Focus
(Bottom) Condition (Experiment 1)
Our primary dependent variable, norm perception, was assessed with the question “Out of
100 people who are eligible for the COVID-19 vaccine in this country, how many do you think
are vaccinated against COVID-19?”. Participants used a sliding scale to respond with a number
between 0 and 100. Participants also responded to “How close do you think this country is to
reaching its goal of full COVID-19 vaccination?” on a 1 (not at all) - 5 (extremely) response
scale. We additionally assessed participants' perceptions of polarization, protests, agreement
among the unvaccinated, and reasons behind vaccine refusal; full results for the impact of news
focus condition on these measures are reported in the online supplement available in the OSF
repository. All experiments reported in this article were approved by the University of Southern
California Institutional Review Board.
Procedure
Before beginning the survey, participants were shown an informed consent document and
indicated their written consent to participate in the study. Participants were informed they were
participating in a study about how people form impressions of countries from the news. For this
purpose, they would be shown trending news stories about COVID-19 from a randomly selected
country and be asked about their impressions of the country. To support the cover story,
participants
first completed a filler task in which they rated how familiar they were with each of ten
European countries, including the country of Andorra. This also served to check our assumption
that our participants were not very familiar with Andorra (M = 1.28 on a 1 (not at all familiar) -
5 (extremely familiar) scale, with 80% reporting “not at all familiar”). We then told participants
they would now be seeing news headlines from Andorra, which had allegedly been randomly
chosen from the list they just rated. Participants then answered the dependent measures followed
by demographic questions including age, gender, ethnicity, nationality, and political orientation.
Results and Discussion
To examine the impact of news focus, we conducted independent-sample t-tests for each
of our dependent variables. As predicted, participants who read news headlines that focused on
the unvaccinated believed that fewer eligible people were vaccinated against COVID-19 (M =
61.84, SD = 19.22) than participants who read news headlines that focused on the vaccinated (M
= 67.18, SD = 17.70), t (213) = 2.12, p = .04, Cohen’s d = 0.29, mean difference = 5.33, 95% CI
[0.37, 10.30] (Figure 2). Participants in the unvaccinated focus condition further believed that
Andorra was less close to reaching its goal of full COVID-19 vaccination coverage (M = 3.08,
SD = 0.81) than participants in the vaccinated focus condition (M = 3.67, SD = 0.76), t (213) =
5.50, p < .001, Cohen’s d = 0.75, mean difference = 0.59, 95% CI [0.38, 0.80]. In this
experiment and in the following experiments in this paper, we also examined whether these
results were moderated by political orientation. A full report of these analyses can be found in
the online supplement available in the OSF repository.
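The reported test statistics can be reproduced from the summary values alone. The following is an illustrative Python sketch, not the authors’ analysis script; the means, SDs, and ns are taken from the text, and the critical value t(213) ≈ 1.971 is hard-coded as a labeled constant, since Python’s standard library has no t distribution.

```python
from math import sqrt

# Summary statistics for norm perception in Experiment 1 (from the text)
m1, sd1, n1 = 61.84, 19.22, 107   # unvaccinated focus condition
m2, sd2, n2 = 67.18, 17.70, 108   # vaccinated focus condition

# Pooled SD across the two independent groups
sp = sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))

diff = m2 - m1                    # mean difference
se = sp * sqrt(1 / n1 + 1 / n2)   # standard error of the difference
t = diff / se                     # independent-samples t statistic
d = diff / sp                     # Cohen's d

t_crit = 1.971                    # approx. two-tailed .05 critical value, df = 213
ci = (diff - t_crit * se, diff + t_crit * se)

print(round(t, 2), round(d, 2))   # 2.12 0.29, matching the reported values
```

The interval comes out at roughly [0.37, 10.31]; the paper’s [0.37, 10.30] differs only because it was computed from unrounded means.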
Figure 2.2
Norm Perception of Covid-19 Vaccination in Andorra, by Experimental Condition
* p < .05.
Note. Norm Perception was assessed with the question “Out of 100 people who are eligible for
the COVID-19 vaccine in this country, how many do you think are vaccinated against COVID-19?”. Error bars show standard errors.
Experiment 2
Experiment 2 tested the same hypotheses on perceptions of the COVID-19 booster in the
United States (participants’ own country of residence). We were interested in whether news
focus had similar effects on norm perception even in participants’ own country, for which they
were likely to have some preexisting knowledge about vaccination rates. We were also interested
in measuring participants’ vaccination intentions and assessed whether changes in norm
perception would carry over to an individual's own intent to receive a vaccine. Experiment 2 was
conducted in May 2022, soon after the second COVID-19 booster was made available in the
United States to adults aged 50 and older. Given this real-world context and the described aims,
our measures of norm perception and vaccination intention referred specifically to the second
COVID-19 booster in the United States. Experiment 2 was preregistered at
https://aspredicted.org/XPB_QH3.
Method
Participants
According to a power analysis using the pwr package on R (Champely et al., 2017) and
the smallest effect size found in Experiment 1, a sample size of N = 375 (N = 187 per between-subjects condition) was required to detect the difference between two independent means with
effect size d = 0.29, alpha = 0.05, power = 0.80, two-tailed. We slightly over-recruited from this
number to account for potential missing data and collected data from N = 393 (Mage = 38.63, age
range 19-76, 41% women, 58% men, 0.3% non-binary) participants using the online survey
platform CloudResearch. We limited participation to participants currently located in the United
States using the location filter on CloudResearch. Participants were paid $1.20 for their
participation in a 5-minute survey. As pre-registered, we excluded 15 participants who did not
complete the survey, and a total of N = 378 participants were included in the analysis.
Recruitment and data collection started on May 2, 2022, and ended on May 4, 2022.
Design and Materials
Participants were randomly assigned to one of two between-subjects news focus
conditions: vaccinated focus (N = 190) or unvaccinated focus (N = 188) and were shown a set of
six news headlines (five target headlines, one filler headline). Target headlines were made
specific to coverage of the first COVID-19 booster in the United States, which was prominent at
the time, and dependent measures were made specific to the second COVID-19 booster, which
had just become available to certain segments of the population. Additionally, target headlines
across conditions in Experiment 2 were edited to be exact mirrors of each other (e.g., “59% of
vaccinated adults have received first COVID-19 booster” vs. “41% of vaccinated adults have not
received first COVID-19 booster”; “Who’s getting COVID-19 booster shots?” vs. “Who’s
failing to get COVID-19 booster shots?”). All headlines and excerpts were formatted and
presented in the style of Google News.
We assessed participants’ current expectations about the future acceptance of the new
booster. We asked participants, “Out of 100 people who will be eligible for the second COVID19 booster, how many do you think will have been boosted a second time by the end of 2022?”.
This question served as the measure of the perceived descriptive norm. Participants indicated
their response on a sliding scale from 0 - 100. Participants also responded to “How close do you
think this country is to reaching its goal of full COVID-19 vaccination?” on a 1 (not at all) - 5
(extremely) response scale. We then assessed participants’ booster intentions. We first asked
participants to report their own vaccination status. Participants who had already received the first
booster dose were asked “The CDC and FDA have recently authorized a second COVID-19
booster shot for Americans aged 50 and older. If you became eligible to get the second COVID-19 booster shot, what would you do?”. Participants responded on a scale from 1 (would
definitely not get boosted) to 6 (would definitely get boosted). Our primary interest was in
participants who had already received the first booster and their intentions to receive a second
booster dose. However, we also assessed intentions to get any booster doses from participants
who had not received the first booster. Participants who had not yet received a booster dose were
asked “The CDC and FDA have recently authorized a second COVID-19 booster shot for
Americans aged 50 and older. Do you intend to get any booster shots?” with a response scale
from 1 (would definitely not get boosted) to 6 (would definitely get boosted). Our analyses focus
on the first measure, i.e., on participants’ intentions to receive the second booster dose.
We asked participants additional questions related to their perceptions of polarization,
protests, agreement among the unvaccinated, and reasons behind non-vaccination. Full questions
and results showing the impact of the news focus condition on these measures are reported in the
online supplement available in the OSF repository.
Procedure
Before beginning the survey, participants were shown an informed consent document and
indicated their written consent to participate in the study. Participants were told they were
participating in a study about how people form perceptions of countries from the news.
Participants then saw a set of headlines relevant to their experimental condition and answered the
dependent measures, followed by demographic questions including age, gender, ethnicity,
nationality, party identification, and political orientation.
Results
We predicted that participants who read news headlines focusing on the unvaccinated and
unboosted would believe fewer people in the United States would get a second COVID-19
booster and would report lower intent to get a second booster themselves than participants who
read news headlines focusing on the vaccinated and boosted. We additionally predicted that
perceptions of the prevailing vaccine norm would mediate the effect of news focus on
participants’ vaccine intentions.
Norm Perception
Supporting our predictions, participants in the unvaccinated focus condition believed that
fewer people in the United States would get a second booster (M = 46.31, SD = 19.71) than
participants in the vaccinated focus condition (M = 54.18, SD = 21.65), t (376) = 3.69, p < .001,
Cohen’s d = 0.38; mean difference = 7.87, 95% CI [3.68, 12.05] (Figure 2.3). Participants in the
unvaccinated focus condition also believed that the United States was currently further away
from its goal of full vaccination and booster coverage (M = 2.53, SD = 1.07) than participants in
the vaccinated focus condition (M = 3.07, SD = 0.88), t (376) = 5.38, p < .001, Cohen’s d = 0.55,
mean difference = 0.54, 95% CI [0.34, 0.74]. These results indicate that the focus of the news
headlines shifted participants’ perceptions of prevailing descriptive norms regarding COVID-19
vaccination and booster coverage.
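For readers who want to verify the reported statistics, the t value and Cohen's d can be recovered from the group summary statistics alone. A short Python sketch of the pooled-variance calculation (for checking only; the original analyses were not run in Python):

```python
from math import sqrt

def t_and_d(m1, s1, n1, m2, s2, n2):
    """Pooled-variance two-sample t statistic and Cohen's d computed
    from group means, SDs, and sample sizes."""
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)  # pooled variance
    se = sqrt(sp2 * (1 / n1 + 1 / n2))                           # SE of mean difference
    return (m2 - m1) / se, (m2 - m1) / sqrt(sp2)

# Norm-perception summary statistics reported above
# (unvaccinated focus: n = 188; vaccinated focus: n = 190)
t, d = t_and_d(46.31, 19.71, 188, 54.18, 21.65, 190)
print(round(t, 2), round(d, 2))  # 3.69 0.38, matching the reported values
```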
Figure 2.3
Norm Perception of the Second COVID-19 booster in the United States, by Experimental
Condition
Note. *** p < .001.
Note. Norm Perception was assessed with the question “Out of 100 people who will be eligible
for the second COVID-19 booster, how many do you think will have been boosted a second time
by the end of 2022?”. Error bars show standard errors.
Booster Intentions
Contrary to our prediction, we did not detect a mean difference between news focus
conditions on participants’ second booster intentions. Participants in the unvaccinated condition
(M = 5.11, SD = 1.14) and participants in the vaccinated condition (M = 5.06, SD = 1.18)
reported similar intentions to get a second booster, t (196) = -0.31, p = .76, Cohen’s d = -0.04,
mean difference = -0.05, 95% CI [-0.38, 0.28]. However, the group means revealed possible
ceiling effects: in both conditions, participants who had already received the first booster
reported high intentions to get a second booster.
Participants who had not yet received a first booster were asked about their intentions to
receive any booster shots. These participants also reported similar intentions to receive any
booster shots across both experimental conditions (Mvaccinated = 2.55, SD = 1.68 vs. Munvaccinated =
2.53, SD = 1.82), t (178) = 0.10, p = .92, Cohen’s d = 0.02, mean difference = 0.03, 95% CI [-
0.49, 0.54]. The group means in both experimental conditions were fairly low, indicating that
participants who had still not received a first booster by the time of the study were unlikely to
want any booster doses.
We next tested our hypothesis that norm perception would mediate the effect of news
focus condition on participants’ booster intentions using the PROCESS macro on SPSS, Model 4
(Hayes, 2012). We used the norm perception measure “Out of 100 people who will be eligible
for the second COVID-19 booster, how many do you think will have been boosted a second time
by the end of 2022?”. We calculated 95% confidence intervals for the direct and indirect effect
(mediated through norm perception) of news focus on second booster intentions using 5,000
bootstrapped samples. Consistent with the t-test results, news focus did not have a significant
direct effect, b = 0.23, 95% CI [-0.09, 0.54], p = .16, or total effect, b = .05, 95% CI [-0.28,
0.38], p = .76, on second booster intentions. However, norm perception did function as a
mediator, as indicated by a significant indirect effect of news focus on second booster intentions
through norm perception, b = -0.18, 95% CI [-0.33, -0.06] (Figure 2.4). The negative effect
indicates that reading news focused on the unvaccinated contributes to reducing booster
intentions. Despite the absence of a detectable total effect of news focus on second booster
intentions, the indirect effect of norm perception on intentions indicates an important path of
influence through which people may form their vaccination intentions (Hayes, 2009).
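The indirect effect above was estimated with the PROCESS macro (Model 4). The same percentile-bootstrap logic can be sketched in Python; all data below are synthetic and purely illustrative (invented effect sizes, not the study data):

```python
import numpy as np

rng = np.random.default_rng(0)

def indirect_effect(x, m, y):
    """a*b indirect effect: slope of x -> m (path a) times the slope of
    m -> y controlling for x (path b), as in a simple-mediation model."""
    a = np.polyfit(x, m, 1)[0]
    X = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]
    return a * b

# Synthetic data loosely in the spirit of Experiment 2 (hypothetical values)
n = 378
x = rng.integers(0, 2, n).astype(float)   # 0 = unvaccinated focus, 1 = vaccinated focus
m = 46 + 8 * x + rng.normal(0, 20, n)     # perceived norm (0-100 scale)
y = 4 + 0.02 * m + rng.normal(0, 1, n)    # booster intention (1-6 scale)

boot = []
for _ in range(5000):                     # percentile bootstrap, 5,000 resamples
    idx = rng.integers(0, n, n)           # resample participants with replacement
    boot.append(indirect_effect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(indirect_effect(x, m, y), (lo, hi))
```

An indirect effect is taken as significant when this 95% bootstrap interval excludes zero.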
Figure 2.4
Mediation Model
Note. ** p < .01 *** p < .001
Experiment 3
Experiments 1 and 2 confirmed that problem-focused news headlines can shift
perceptions of descriptive norms. Here, news headlines focused on the unvaccinated minority
increased perceptions of how many people are unvaccinated. Both previous experiments
pertained to COVID-19 vaccinations, a topic that had already received extensive news coverage.
Results from Experiment 2 suggest that participants who have already been vaccinated -- or
refused vaccination -- have developed solidified intentions that may limit the impact of
descriptive norms on future vaccination intent. We therefore took advantage of a newly emerging
health threat, i.e., monkeypox, in Experiment 3, which was conducted in July 2022. At that time,
news outlets were just beginning to report about the spread of monkeypox in the United States
and the availability of a monkeypox vaccine. Experiment 3 was preregistered at
https://aspredicted.org/QDX_8BS.
Method
Participants
A power analysis using the pwr package in R (Champely et al., 2017) and the smallest
effect size found in Experiment 1 showed that a sample size of N = 375 (N = 187 per between-subjects condition) was required to detect the difference between two independent means with
effect size d = 0.29, alpha = 0.05, power = 0.80, two-tailed. We again slightly over-recruited to
account for potential missing data and collected data from N = 400 (Mage = 39.06, age range 19-
72, 37% women, 61% men, 0.3% non-binary) participants using the online survey platform
CloudResearch. We also again limited participation to participants currently located in the
United States. Participants were paid $1.20 for their participation in a 5-minute survey. As pre-registered, we excluded participants who did not respond to the entire questionnaire, resulting in
N = 396 participants being included in the analysis. Participant recruitment and data collection
started on July 7, 2022 and ended on July 14, 2022.
Design and Materials
We again manipulated news focus and randomly assigned participants to one of two
experimental between-participants news focus conditions: unvaccinated focus (N = 198) or
vaccinated focus (N = 198). All participants were shown a set of five headlines with excerpts
(four target, one filler) about monkeypox in the United States. Target headlines were matched in
substantive meaning, but varied across experimental conditions in their focus on those who
wanted the vaccine (vaccinated focus) vs. those who did not (unvaccinated focus). In contrast to
COVID-19, monkeypox vaccine uptake and availability was quite low at the time. Given this, we
focused our headlines on demand and preferences rather than vaccination rates (e.g., “Demand
for monkeypox vaccine surges” vs. “Little demand for monkeypox vaccine”; “National polls
show 57% of people say they would get the monkeypox vaccine” vs. “National polls show 27%
of people say they would not get the monkeypox vaccine”). All headlines and excerpts were
formatted and presented in the style of Google News.
We assessed norm perception with the question “How many eligible adults do you think
will get the monkeypox vaccine?” with a response scale from 1 (almost nobody) - 7 (almost
everybody). We followed up the scale measure with a count measure of norm perception, “More
specifically, out of 100 eligible adults, how many do you think will get the monkeypox vaccine?”.
Participants answered the count measure by sliding a scale to choose a number between 0 and
100.
We assessed monkeypox vaccine intention with the question “How likely is it that you
would get a vaccine for monkeypox once it is widely available and the CDC and WHO
recommend it to you?” on a scale from 1 (extremely unlikely) to 6 (extremely likely).
Additionally, we asked participants to complete the Vaccine Hesitancy Scale (Shapiro et al.,
2018), and similar measures of perceptions of polarization, protest, and agreement among the
unvaccinated as in Experiments 1 and 2. Results showing the impact of the news focus condition
on these measures are reported in the online supplement available in the OSF repository.
Procedure
Before beginning the survey, participants were shown an informed consent document and
indicated their written consent to participate in the study. Participants were informed that they
were participating in a study about how people form perceptions of countries from the news.
Participants were shown the news headlines relevant to the condition they were assigned to.
Before the headlines, all participants were shown the following description of monkeypox: There
has recently been an unusual spread of monkeypox in the U.S. and Europe. Monkeypox is a
disease caused by infection with the monkeypox virus, a member of the same family of viruses as
smallpox. Vaccines have been proven effective at preventing monkeypox infection.
Participants were then asked to look ahead to the likely future of the monkeypox vaccine
and imagine that the CDC and WHO make the vaccine widely available and recommend that all
eligible adults get it. Participants were asked to answer the dependent measures keeping this
scenario in mind. Finally, we asked participants to answer demographic questions, including age,
gender, nationality, ethnicity, party identification, political orientation, and COVID-19 vaccine
status.
Results
We predicted that participants who read news headlines focusing on the unvaccinated and
low vaccine demand would infer that fewer people in the United States would get a monkeypox
vaccine and would report lower intent to get a monkeypox vaccine themselves than participants
who read news headlines focusing on the vaccinated and high vaccine demand. We additionally
predicted that vaccine norm perception would mediate the effect of news focus on vaccination
intentions.
Norm Perception
Supporting our predictions, participants in the unvaccinated focus condition believed that
fewer eligible people in the United States would get a monkeypox vaccine (M = 3.77, SD = 1.34)
than participants in the vaccinated focus condition (M = 4.34, SD = 1.22), t (394) = 4.40, p <
.001, Cohen’s d = 0.44; mean difference = 0.57, 95% CI [0.31, 0.82]. The count measure of
norm perception confirmed this shift. Participants in the unvaccinated focus condition believed
that a smaller number of eligible people in the United States would get a monkeypox vaccine (M =
44.49, SD = 21.85) than participants in the vaccinated focus condition (M = 54.16, SD = 20.10), t
(394) = 4.58, p < .001, Cohen’s d = 0.46, mean difference = 9.67, 95% CI [5.52, 13.81] (Figure
2.5). These results replicate Experiments 1 and 2 in a new domain and reflect similar shifts in norm
perception.
Figure 2.5.
Norm Perception of the Monkeypox Vaccine in the United States, by Experimental Condition
Note. *** p < .001. Norm Perception was assessed with the question “More specifically, out of
100 eligible adults, how many do you think will get the monkeypox vaccine?”. Error bars show
standard errors.
Vaccination Intent
Supporting our predictions, participants in the unvaccinated focus condition reported
lower intent to receive a monkeypox vaccine themselves if it was recommended to them (M =
3.47, SD = 1.82) than participants in the vaccinated focus condition (M = 3.91, SD = 1.75), t
(394) = 2.42, p = .016, Cohen’s d = 0.24, mean difference = 0.43, 95% CI [0.08, 0.79].
We next tested our hypothesis that norm perception would mediate the effect of news
focus on participants’ vaccination intentions using the PROCESS macro on SPSS (Hayes, 2012).
We calculated 95% confidence intervals for the direct and indirect effect (mediated through norm
perception) of news focus on vaccination intentions using 5,000 bootstrapped samples. We
used the count measure of norm perception in our model to be consistent with Experiment 2
(“Out of 100 Americans…”). Replicating prior t-test results, news focus had a total effect on
vaccination intent, b = -0.43, 95% CI [-0.79, -0.08], p = 0.016. The direct effect of news focus
on vaccination intent was non-significant, as evidenced by the 95% confidence interval including
zero, b = 0.01, 95% CI [-0.31, 0.30], p = 0.93. However, the analyses revealed a significant
indirect effect of news focus on vaccination intentions through the mediator of norm
perception, b = -0.45, 95% CI [-0.65, -0.25] (Figure 2.6). Reading news headlines focused on the
unvaccinated reduced perceptions of the descriptive norm of vaccination, which, in turn, reduced
vaccination intentions. A significant indirect effect in the absence of a direct effect signals
complete mediation of the effect of news focus on vaccination intent by norm perception.
Figure 2.6.
Mediation Model
Note. *** p < .001
Effect Size Analyses
We additionally meta-analyzed the measure of norm perception across our three
experiments using Comprehensive Meta-Analysis Software (version 4) to get an overall estimate
of effect size corrected for small-sample bias (Borenstein et al., 2009). Because a random-effects model revealed no evidence of heterogeneity across studies, τ² < 0.001, Q (2) = 1.038, p
=.595, the model was reduced to a fixed-effect model. We found evidence for a significant mean
difference between news focus conditions across all three experiments, such that participants
who saw news headlines focused on the vaccinated rather than the unvaccinated thought that
more people were vaccinated or would get vaccinated, d = 0.39, 95% CI [0.27, 0.52], p < .001
(Figure 2.7).
We did not conduct effect size analyses for our vaccination intention measure. We only
assessed vaccine intent in Experiments 2 and 3, which both used slightly different measures of
vaccine intent. Given the limitation of only two effect sizes and non-equivalent measures, an
overall effect size analysis would not be meaningful.
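For illustration, the inverse-variance pooling performed by meta-analysis software can be sketched in Python with Hedges' small-sample correction. Only the two effects reported in this excerpt are included (Experiment 1's input is omitted here), so the pooled value differs slightly from the d = 0.39 reported above:

```python
from math import sqrt

def hedges_g(d, n1, n2):
    """Small-sample corrected effect size (Hedges' g) and its variance."""
    df = n1 + n2 - 2
    J = 1 - 3 / (4 * df - 1)                             # correction factor
    v_d = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)) # variance of d
    return J * d, J**2 * v_d

def fixed_effect(effects):
    """Inverse-variance weighted fixed-effect estimate with a 95% CI."""
    w = [1 / v for _, v in effects]
    g_bar = sum(wi * g for wi, (g, _) in zip(w, effects)) / sum(w)
    se = sqrt(1 / sum(w))
    return g_bar, (g_bar - 1.96 * se, g_bar + 1.96 * se)

# The two norm-perception effects reported in this excerpt (Experiments 2 and 3)
effects = [hedges_g(0.38, 188, 190), hedges_g(0.46, 198, 198)]
print(fixed_effect(effects))  # pooled g ≈ 0.42 for these two effects alone
```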
Figure 2.7.
Effect size analyses for norm perception
General Discussion
Consistent with the old adage, “If it bleeds, it leads”, good news is often seen as
synonymous with no news (van der Meer & Hameleers, 2022). Negative news attracts more
audience attention and receives more clicks, thus incentivizing the media to emphasize potential
problems (Lengauer et al., 2012; Soroka & McAdams, 2015; Robertson et al., 2023). In news
reporting about COVID-19 vaccination, this regularity was reflected in a frequent emphasis on
the unvaccinated minority of the population rather than the vaccinated majority. As a result, the
media sent a mixed message by conveying the injunctive norm that all should be vaccinated,
while highlighting the conflicting descriptive norm that many people are not. Unfortunately,
people’s perception of what others do is an important input into their own judgments and behavioral
decisions (Cialdini et al., 1991, 2006; Miller & Prentice, 2016; Tankard & Paluck, 2016). Hence,
the descriptive norm conveyed by problem-focused news reporting may ironically undermine the
injunctive norm the report wants to advocate. Even when news reports attempt to convey a pro-vaccination norm, a focus on the unvaccinated may hurt this intention, leading readers to
overestimate the prevalence of nonvaccination. Our studies provided a first test of this
possibility.
In three experiments, participants read snippets presented in the format of Google News.
Depending on conditions, the headlines focused either on the unvaccinated or on the vaccinated.
The results consistently supported the hypothesized adverse impact of problem-focused
headlines. First, participants who read problem-focused headlines inferred that fewer people
were — or would get — vaccinated compared to participants who read headlines focused on the
vaccinated. This was even the case when the headlines provided logically equivalent information
(e.g., “41% of vaccinated adults have not received first COVID-19 booster” vs. “59% of
vaccinated adults have received first COVID-19 booster”). Second, mediation analyses indicated
that this shift in perceived descriptive norms reduced participants’ reported willingness to receive
a recommended vaccine (Experiments 2-3).
The observed adverse effects of problem-focused headlines have potentially important
implications that extend beyond the issue of vaccination: a focus on problematic behaviors
increases the perceived prevalence of those behaviors, reducing the normative barriers to
engaging in them. To reduce this risk, news reports should preferably highlight the desirable
behavior and refrain from a disproportionate emphasis on the undesirable behavior. At the least,
reports about the undesirable behavior should clearly, and repeatedly, emphasize that it is
uncommon and highlight that most people do the right thing. In journalistic practice, this is
easier said than done, given that news organizations are incentivized to prioritize audience size
and engagement, which encourages sensationalism and focusing on the negatives (Molek-Kozakowska, 2013; Nelson, 2021).
Limitations and Future Directions
Several limitations are worth noting. Although we found similar indirect effects of news
focus on vaccination intentions in Experiment 2 and Experiment 3, we only found a statistically
significant mean difference in vaccination intentions in Experiment 3 (monkeypox). This may
reflect that people had already developed solidified intentions to receive or not receive a
COVID-19 vaccine by the time of Experiment 2, whereas monkeypox vaccination was a novel
issue at the time of Experiment 3. Future experiments could outline the boundary conditions
under which perceived descriptive norms are or are not likely to influence vaccination intentions.
We used hypothetical scenarios in our materials and dependent measures, which may not
generalize to consequential real-world decisions. All three experiments were conducted with
participants living in the United States and results may differ for other populations. Additionally,
our results are limited to the domain of COVID-19 and monkeypox vaccination; future research
may extend the analysis to different vaccines and other societal issues (e.g., news reporting on
climate change, voting behavior, etc.). Additional research may investigate the impact of
focusing on problematic or desirable behaviors in different contexts, and on outcomes beyond
perceived norms and behavioral intent. For example, one might expect there to be contexts in
which focusing on desirable behaviors could backfire if it leads people to infer that their own
efforts are no longer needed, possibly encouraging free-riding behavior. It may also be that
focusing on problem behaviors can have positive outcomes in some instances, such as increasing
the perceived importance of the issue or increasing public support for issue resolution. Future
research may illuminate the conditions under which different news foci may lead to desired (vs.
undesired) outcomes.
Conclusion
Across three experiments, we found that news coverage that focuses on anti-vaccination attitudes
and behavior — as opposed to pro-vaccination attitudes and behavior — has the unintended
effect of reducing perceptions of norms around vaccination, with adverse consequences for
readers’ reported intent to vaccinate. Our results suggest that reporting on problem behaviors in
the news can backfire by shifting perceived descriptive norms.
Chapter III: NyQuil Chicken: Can news reporting on harmful social media challenges
spread them further?
In addition to anti-vaccination sentiment, news media reporting may help spread other
undesirable behaviors. News stories report on and warn readers about social media trends or
challenges, which have recently included risky behaviors such as eating candy wrappers or
chugging a cup of salt. Reporting on these potentially risky trends often includes warnings
against doing them, but also includes language that communicates that the behavior is
widespread (e.g., “viral”, “trend”, “many users”). In this chapter, I present two experiments that
test whether news reporting that warns against risky social media trends can increase judgments
of how common the behavior is (descriptive norms) and participants’ intent to engage in the
behavior. I predict that drawing attention to these trends may make them seem more common
than they are, and encourage more people to participate in them.
Social media challenges or trends rapidly spread on popular social media platforms, such
as TikTok and Instagram. While most of these trends are harmless, some can seriously harm the
health and well-being of those participating in them. One infamous recent example is
the “Tide pod challenge”, which involves biting down on a laundry detergent pod and ingesting
its contents. Unsurprisingly, medical experts warn against ingesting laundry detergent as it can
be extremely harmful to health (Kriegel et al., 2021). This challenge went viral online in late
2017 and early 2018, and the American Association of Poison Control Centers reported over 200
poisoning cases involving teenagers in early 2018 (American Association of Poison Control
Centers, 2018). Moreover, the mainstream news media reported on the phenomenon widely (e.g.,
Abad-Santos, 2018; Bever, 2018; Chokshi, 2018).
These challenges could remain contained within the pockets of social media they
originate from, but mainstream media news reports can spread them to a much broader audience,
and may inadvertently increase engagement in them. Research on misinformation suggests that
people most frequently learn fake news from mainstream news media, even though most fake
news originates from social media sites (Tsfati et al., 2020). When the mainstream news picks up
on fake news trending on social media and covers it, it spreads the misinformation to audiences
that might not otherwise have been exposed to it, and may inadvertently increase belief in it.
News reporting on obscure social media trends can be expected to act similarly.
A very recent real-world example of this phenomenon involves the “Chicken NyQuil”
challenge on TikTok. This trend challenged users to cook raw chicken in the over-the-counter
cough and cold medicinal syrup NyQuil and eat it. The FDA grew concerned about the possible
harmful effects of the trend and issued a public warning, which was reported on by many news
outlets (Adams, 2022). Ironically, the videos posing this challenge were not widely seen before
the FDA warning: TikTok confirmed that there were only five searches for it before the FDA’s
statement. However, after the FDA warning, the number of searches went up to 7000 in one day,
and the challenge began to trend on TikTok (Schulz, 2022).
In addition to increasing viewership of such trends, mainstream news reporting may also
increase participation in them. Seeing trends frequently reported on in the news may make them
feel more common than they are, leading viewers to infer a higher descriptive norm. Greater
familiarity with the trend may also make them feel more “true” or effective and less risky. These
factors may lower the barriers to participation, and in fact, encourage more people to participate
in them. In this chapter, I present two studies that investigate this phenomenon. I predict that
exposure to news stories warning against social media trends will make participants perceive
them as more common and increase participants’ willingness to participate in the trend
themselves. I also investigate whether these effects vary by the content of the news stories,
specifically, whether the reports include language that communicates that the behavior is
widespread or not.
Experiment 3.1
In Experiment 3.1, we test whether exposure to news headlines about potentially harmful
social media trends impacts participants’ judgments of descriptive norms, peer norms, and their
reported likelihood of participating in the trend. We also explore impacts on participants’
perceptions of the riskiness and effectiveness of the behavior in the trend. In addition to news
exposure, we also test whether the content of the news moderates these effects, i.e., we compare
headlines that warn about the trend with or without explicit indicators of its popularity on social
media. We predict that when participants are exposed to news headlines warning about a social
media trend, they will perceive the trend as more common (i.e., perceive a higher descriptive
norm) both generally and amongst their peers. Additionally, we predict “backfire” effects of this
exposure beyond changes in perceived descriptive norms: we expect that participants will be
more likely to want to try the trend when they have been exposed to a news headline about it. We
also explore whether exposure influences participants’ perceived effectiveness and riskiness of
the trend.
Method
Participants
We recruited N = 500 (Mage = 35.84, 43% women, 54% men, 3% non-binary) participants
using the online platform Prolific. We prescreened for and selected participants who were
located in the United States, had greater than a 95% approval rating, and self-reported that they
used TikTok at least once a month. Participants were paid $2 for their participation in a 10-
minute survey.
Design
We manipulated news exposure (exposure vs. no exposure) within subjects, such that
participants saw a headline about one of two possible key trends, and later made judgments about
both trends – one they were exposed to, and one they were not exposed to. We also manipulated
news content (with trend language vs. without trend language) between subjects as follows. The
headlines participants saw about the social media trend either included explicit information that
it was trending or popular on social media (with trend language condition) or did not include any
information about it trending or being popular on social media (without trend language
condition).
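As a concreteness aid, one hypothetical way to implement this 2 (news exposure: within) × 2 (trend language: between) counterbalancing is sketched below; the condition names are invented for illustration and are not the study's actual labels:

```python
import random

TRENDS = ("magic_eraser_teeth", "garlic_in_nose")
LANGUAGE = ("with_trend_language", "without_trend_language")

def assign(rng=random):
    """Randomly counterbalance which trend a participant sees a headline
    about (within-subjects exposure) and which headline version they get
    (between-subjects trend language). Names are hypothetical."""
    exposed = rng.choice(TRENDS)                   # headline actually shown
    unexposed = TRENDS[1 - TRENDS.index(exposed)]  # judged later but never shown
    return {"exposed": exposed,
            "unexposed": unexposed,
            "language": rng.choice(LANGUAGE)}      # between-subjects factor
```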
Materials
All participants saw a series of one target and four filler news headlines and excerpts,
presented in a randomized order. Target headlines were created to be about real social media
trends that may be harmful to the health or well-being of individuals who engage in them. To
minimize spreading potentially harmful information, we only used social media trends that
already existed online and our headlines mirrored real news headlines about those trends. We
selected two key trends for this study: using a Mr. Clean Magic Eraser to whiten one’s teeth, and
putting garlic in one’s nose to relieve congestion. Headlines described the trend and contained a
warning against its possible harm (e.g., “Doctors warn against…”). Headlines were created to be
in the format of Google News and included a headline and a brief article description (Figure 3.1).
We created two versions of headlines for each of these trends: one that contained trend language
and one that did not (Figure 3.1). Both headlines mentioned the behavior being on social media,
but headlines in the trend condition additionally included indicators of popularity such as “taking
over TikTok” and “amassed hundreds of thousands of views”. We additionally created four filler
headlines that were related to health but did not mention anything about social media (e.g., “How
to exercise and train during this winter’s extremes”).
Figure 3.1.
TikTok trend headlines in the with trend language condition (left) and the without trend
language condition (right)
Measures
Our key dependent variables were participants’ perceptions of descriptive norms and peer
norms. We were additionally interested in participants’ likelihood to try the behavior as well as
their perceptions of the risk and effectiveness of the behavior. We assessed perceptions of
descriptive norms with the question “How common is it to try this behavior?” measured on a
scale from 1 (not at all common) to 7 (extremely common). We assessed peer norms with “How
common is it for your peers to try this behavior?”, using the same scale. Next, we measured
participants’ likelihood to try the behavior with “How likely are you to try this?” on a scale from
1 (not at all likely) to 7 (extremely likely); their perceived risk with “How risky do you think this
is?” on a scale from 1 (not at all risky) to 7 (extremely risky), and perceived effectiveness with
“How well do you think this works?” on a scale from 1 (not well at all) to 7 (extremely well).
Procedure
Participants were asked to complete the survey on a computer, not a mobile device or a
tablet. Participants who were using a mobile device or tablet were not allowed to begin the
survey. Participants started the study by reading an information sheet and indicating their consent
to participate.
Participants were told we were interested in people’s perceptions of health-related news.
Participants were then shown a series of five news headlines (four filler, one key) and asked to
answer the question “How interesting do you find this news story?” for each of them. Headlines
were presented in a random order and one at a time. Key headlines (warnings about the garlic or
magic eraser trends) were counterbalanced across participants such that participants saw one of
two possible headlines, presented either in the with trend language or without trend language
condition.
Next, we began the judgment task. Participants were shown a list of six health-related
behaviors and were told they would be asked to answer questions about each of them. This list
included both key behaviors: the one participants were exposed to and the one they had not been
exposed to, as well as four filler behaviors unrelated to any of the content they’d seen so far.
Participants made the first dependent judgment for all six behaviors and then moved on to the next dependent judgment, again answering for all six behaviors. This continued until participants had made all five dependent judgments for all of the behaviors. The
dependent measures were presented in a random order for each participant. Next, we asked
participants if they had heard of either of our key trends before as well as some demographic
questions including age, gender, ethnicity, TikTok use frequency, and political orientation.
Results
We predicted that when participants are exposed to news headlines about a social media
trend, they will think it is more common (i.e., perceive a higher descriptive norm) in general and
amongst their peers and be more likely to want to try it, as compared to a trend they have not
been exposed to. We also investigated whether these effects differed when the news article
mentioned that the trend was popular on social media (with trend language condition) as
compared to when the article did not contain any indicators of popularity (without trend language
condition).
To test these predictions, we conducted 2 x 2 ANOVAs with news exposure (exposure
vs. no exposure) as a within-subjects variable and news content (with trend language vs. without
trend language) as a between-subjects variable for each of the five dependent measures. We
report the results for each of these variables below.
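The logic of this mixed design can be illustrated with a small sketch. This is not the authors' analysis code, and the data values are invented; with equal group sizes, the within-subjects main effect corresponds to testing per-participant difference scores (exposure minus no exposure) against zero, and the interaction corresponds to comparing those difference scores across the two between-subjects groups.

```python
# Illustrative sketch of the 2 (within) x 2 (between) mixed-design logic.
# Assumed equivalence (exact with equal group sizes): the within-subjects
# effect reduces to per-participant difference scores, and the interaction
# reduces to a between-groups comparison of those difference scores.
import statistics

def difference_scores(exposed, not_exposed):
    """Per-participant difference scores (exposure minus no exposure)."""
    return [a - b for a, b in zip(exposed, not_exposed)]

# Hypothetical 7-point ratings for two between-subjects groups
group_with_trend = difference_scores([3, 2, 4], [2, 2, 3])
group_without_trend = difference_scores([2, 1, 2], [3, 2, 3])

# Main effect of news exposure ~ mean of all difference scores
overall = statistics.mean(group_with_trend + group_without_trend)

# Interaction ~ gap between the groups' mean difference scores
interaction_gap = (statistics.mean(group_with_trend)
                   - statistics.mean(group_without_trend))
```

An omnibus F test adds error terms on top of this decomposition, but the cell means it compares are exactly these quantities.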
News exposure. Looking at the main effects of news exposure across different
judgments, we found that when participants were exposed to a news story, they thought more
people were participating in the trend, i.e., perceived a higher descriptive norm, consistent with
our expectations (Table 3.1). However, this did not extend to perceptions of peer norms, meaning
that participants’ judgments of how common the trend was amongst their peers did not differ
between news exposure conditions. Contrary to our predictions, participants were less likely to
want to try the trend they had been exposed to as compared to a trend they were not exposed to.
Additionally, participants believed the trend they were exposed to was more risky and less
effective than a trend they were not exposed to. In short, warnings conveyed in the news were
effective and increased perceived risk. Table 3.1 shows descriptive statistics for the main effects
of news exposure across all our judgments.
Table 3.1.
Main effect of news exposure and descriptive statistics for all dependent measures
Measure                   Exposure          No exposure       F (1, 499)   p            η²
                          M       SD        M       SD
Descriptive norm          2.29    1.47      2.08    1.41      9.37         .002**       .018
Peer norm                 1.96    1.45      1.89    1.41      1.36         .244         .003
Likelihood to try         1.48    1.12      1.70    1.37      14.66        < .001***    .029
Perceived effectiveness   1.97    1.38      2.22    1.47      12.73        < .001***    .025
Perceived risk            5.53    1.59      4.85    1.88      36.67        < .001***    .068
Note. *** p < .001, ** p < .01
News content. News content (with vs. without trend language) did not moderate the
effects of news exposure, i.e., there were no significant interaction effects, for judgments of
descriptive norms, peer norms, perceived effectiveness, and perceived risk, all F’s < 2.17, p’s >
.14. News content did moderate the effect of news exposure on participants’ likelihood of trying
the trend, F (1, 499) = 5.36, p = .024, η² = 0.011. We followed up the significant interaction by
looking at the simple effects of news exposure for each of the two news content conditions.
News exposure significantly reduced participants’ likelihood to participate in the trend when
headlines did not contain any trend or popularity information, mean difference = -0.36, 95% CI
[-0.52, -0.20], p < .001. When headlines did contain trend information, news exposure did not
have a significant effect on participants’ likelihood of trying the trend, mean difference = -0.09, 95% CI [-0.24, 0.07], p = .285.
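The simple-effects follow-up above reports a within-subjects mean difference with a 95% CI. A minimal sketch of that computation is below; this is not the authors' analysis code, the ratings are invented, and the normal critical value of 1.96 is an approximation that is adequate only for large samples like the one reported here.

```python
# Illustrative sketch: within-subjects mean difference with an
# approximate 95% CI (normal critical value; fine for large N).
# Data values are made up for demonstration.
import math
import statistics

def paired_mean_diff_ci(condition_a, condition_b, t_crit=1.96):
    """Mean of per-participant differences (a - b) and its approx. 95% CI."""
    diffs = [a - b for a, b in zip(condition_a, condition_b)]
    m = statistics.mean(diffs)
    se = statistics.stdev(diffs) / math.sqrt(len(diffs))
    return m, (m - t_crit * se, m + t_crit * se)

# Hypothetical 7-point likelihood ratings from the same participants
exposed = [1, 2, 1, 3, 2, 1, 2, 1]
not_exposed = [2, 2, 2, 3, 3, 1, 3, 2]
diff, (lo, hi) = paired_mean_diff_ci(exposed, not_exposed)
```

A CI that excludes zero, as in the without-trend-language condition above, corresponds to a significant simple effect.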
The main effects of news content varied across different judgments. News content had significant effects only on judgments of how common the trend was, F (1, 499) = 4.87, p = .028, η² = 0.01, and how effective the trend was, F (1, 499) = 6.29, p = .012, η² = 0.012.
Participants thought the trend was less common when the article contained trend language (M =
2.06, SD = 1.21), as compared to when it did not contain trend or popularity indicators (M =
2.30, SD = 1.22). Participants also thought the trend was less effective when the article contained
trend language (M = 1.96, SD = 1.17) as compared to when it did not contain trend language (M
= 2.23, SD = 1.17). News type did not have statistically significant effects on judgments of peer
norms, perceived risk, and the likelihood of trying the trend, all F’s < 2.08, p’s > .15.
Figure 3.2.
Effects of news exposure and news content on participants’ judgments of descriptive norms, peer
norms, likelihood of trying, and perceived effectiveness
Note. **p < .01, ***p < .001. All measures were assessed on a 1 - 7 scale. Error bars represent
95% confidence intervals.
Discussion
In Experiment 3.1, we found that news exposure to potentially harmful social media trends
increases participants’ perceptions of how common that trend is, or the descriptive norm of the
behavior, supporting our hypothesis. However, this effect was limited to a general norm
judgment; participants’ perceptions of the norms of their peers were not impacted by news
exposure. Contrary to our predictions, news exposure did not have a “backfire” effect and had
the opposite effect on participants' likelihood of trying the trend– participants were less likely to
try a trend when they were exposed to news stories about it, particularly when the news
headlines did not contain information about its popularity on social media. News exposure also
increased participants’ judgments of how risky the trend was and reduced its perceived
effectiveness, suggesting that the warning in the news headline was effective.
Although we found evidence for the descriptive norm shift we predicted, we did not find
evidence for any corresponding adverse behavioral intentions. We decided to run another
experiment testing different behavioral intentions and using more items in the exposure phase to
avoid possible item effects.
Experiment 3.2
In Experiment 3.2, we conducted a similar experiment but made some important changes.
First, we more directly tested specific content of the news coverage – we changed headlines in
the without-trend language condition such that they warned about a behavior but did not have
any indication that the behavior was on social media. We were interested in testing if there was a
better way to report on risky behavior and warn readers against it without having an adverse
effect on descriptive norms and subsequent intentions. Next, we decided to focus only on the
dependent measures that were most important for understanding potential “backfire” effects: participants’ behavioral intentions. We expanded our measures of behavioral intentions to
include participants’ likelihood to recommend the behavior to others and their likelihood to seek
more information about the behavior, in addition to their own intentions to engage in that
behavior. We also used a greater number of news stories and trends to control for possible item
effects.
Method
Participants
We recruited N = 361 (Mage = 20.41, 62% women, 36% men, 1% non-binary)
undergraduate participants from the Psychology subject pool at the University of Southern
California. Participants completed the survey for course credit. We aimed to collect data from as
many students as possible before the end of the semester.
Design
We had three within-subjects news exposure conditions: warning + trend exposure,
warning exposure, and no exposure. Headlines in the warning + trend exposure condition
included information about a health-related behavior trending on social media and a warning
against doing it (e.g. “The “garlic in nose” trend is supposed to ease congestion, but doctors warn
against it.”). Headlines in the warning exposure condition only included a warning against doing
the behavior, but no indication that the behavior was on social media (e.g., “Doctors warn
against putting garlic in your nose to ease congestion”). All participants saw a series of four
target (two trend+warning, two warning) and four filler news headlines about health-related
behaviors, presented in a randomized order. At test, participants answered questions about six
target behaviors: the four behaviors participants had seen headlines about before (two trends, two
warnings), and two new behaviors (no exposure).
Materials and Measures
We collected six possible health-related behaviors that were present on social media and
created two versions of news headlines (warning + trend and warning) about each of them. Each
of the behaviors we selected seemed innocuous but could be slightly risky to people participating
in them. For example, one of the behaviors was drinking a lot of salted water to cleanse one’s
colon. Not only is this trend not effective, but doing this could cause dehydration and stomach
distress. Table 3.2 shows the headlines we created for each of these trends. All headlines were
presented in the format of Google News. We then created three counterbalances for these items,
to pseudo-randomize which trends appeared in which news exposure condition (trend, warning,
no exposure) for each participant.
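The counterbalancing described above can be sketched as a simple rotation. The exact assignment rule used in the study is not reported, so the rotation below is an assumption that satisfies the stated constraints: each of the three versions assigns two trends per condition, and every trend appears in every condition across versions.

```python
# Hypothetical sketch of the three-version counterbalancing scheme:
# six trends rotated through three exposure conditions in pairs.
# The rotation rule is an assumption; only the constraints are from the text.
TRENDS = ["garlic in nose", "salt water flush", "onion water",
          "healthy coke", "mouth taping", "raw potato juice"]
CONDITIONS = ["warning + trend", "warning", "no exposure"]

def counterbalance(version):
    """Return {trend: condition} for counterbalance version 0, 1, or 2."""
    return {trend: CONDITIONS[(i // 2 + version) % 3]
            for i, trend in enumerate(TRENDS)}
```

Assigning each participant one of the three versions at random then pseudo-randomizes which trends fall in which exposure condition.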
Table 3.2.
News Headlines in the Warning + Trend exposure and Warning exposure conditions
Warning + Trend: The “garlic in nose” trend is supposed to ease congestion, but doctors warn against it. Videos and anecdotes of people putting raw garlic cloves up their nose are trending, but doctors warn it won’t relieve symptoms.
Warning: Doctors warn against putting garlic in your nose to ease congestion. Doctors say putting garlic cloves up your nose will not help a stuffy nose, warning against it.

Warning + Trend: Social media influencers are doing salt water flushes to remove toxins and cleanse their colon, but experts say the cleanse is unnecessary. There is no research to support that salt water flushes improve health, and the viral trend could lead to dehydration.
Warning: Research finds no benefit of using salt water flushes to cleanse colon, experts say it is unnecessary. There is no research to support that salt water flushes improve health, and it may lead to dehydration.

Warning + Trend: Influencers say viral onion water trend is going to keep you from getting sick. Doctors disagree. Onion water is the latest health trend, with users claiming it helps relieve cold symptoms like congestion, but doctors say it doesn’t help and might cause stomach distress.
Warning: Doctor says onion water home remedy won’t help your cold symptoms. Drinking onion water does not relieve cold symptoms like congestion, and might cause stomach distress.

Warning + Trend: ‘Healthy coke’ is trending as an alternative to soda, but dentists say it can damage your teeth. Influencers are making ‘healthy coke’ with balsamic vinegar and sparkling water to reduce their soda intake, but dentists warn of tooth enamel erosion and acid reflux.
Warning: Dentist warns soda-alternative drinks made with vinegar can damage your teeth. Adding vinegars such as balsamic vinegar to drinks can make them highly acidic, putting you at risk for tooth enamel erosion and acid reflux.

Warning + Trend: Wellness influencers claim mouth taping is the secret to a good night’s sleep, but studies disagree. Influencers claim it helps prevent snoring and dry mouth, but medical providers warn of health risks related to the viral trend, such as blocked airflow.
Warning: Study shows taping your mouth for better sleep may be risky. Medical providers warn of health risks related to mouth taping, including blocked airflow.

Warning + Trend: Doctors warn against online trend of drinking raw potato juice to treat throat infections. Influencers claim it cures bacterial throat infections, but doctors have dismissed the claims and said it can cause unwanted GI symptoms.
Warning: Doctors warn against drinking raw potato juice. Raw potato juice contains compounds that can be difficult to digest and cause unwanted GI symptoms.
Our key dependent variables were three behavioral intention measures: likelihood to try, likelihood to recommend, and interest in seeking more information. We made the first two questions specific
to the symptoms that the trend was supposed to help with. For example, for the garlic easing
congestion trend, we asked, “If you were congested, how likely would you be to try putting
garlic in your nose?”. In general, the questions followed the format “If you were [symptom], how
likely would you be to try [behavior]?” and “If your friend was [symptom], how likely would
you be to recommend [behavior] to them?”, measured on a scale from 1 (not at all likely) to 5
(extremely likely). We assessed interest in information with “How interested would you be in
seeking more information about this behavior?” on a scale from 1 (not at all) to 5 (extremely).
Procedure
Participants were asked to complete the survey on a computer, not a mobile device or a
tablet. Participants who were using a mobile device or tablet were not allowed to begin the
survey. Participants started the study by reading an information sheet and indicating their consent
to participate.
Participants were told we were interested in people’s perceptions of health-related news.
Participants were then shown a series of eight news headlines (four filler, four key) and asked to
answer the question “How interesting do you find this news story?” for each of them. Headlines
were presented in a random order and one at a time. Of the four key headlines participants saw,
two were from the warning + trend condition and two were from the warning condition. Key
headlines were counterbalanced across participants such that participants saw two of six possible
headlines in the trend condition and a different two in the warning condition.
Next, we began the judgment task. Participants were shown a list of eight health-related
behaviors and were told they would be asked to answer questions about each of them. This list
included six key behaviors: four that participants were exposed to and the two they had not been
exposed to, as well as two filler behaviors. Behaviors were presented in a random order for each
participant. Participants were shown the first behavior and asked to answer the three dependent
judgments related to that behavior, presented in a random order. Then they moved on to the next
behavior, until participants had made all three dependent judgments for all of the behaviors.
Next, we asked participants if they had heard of our key trends before as well as some
demographic questions including age, gender, ethnicity, TikTok use frequency, and political
orientation.
Results
We tested whether exposure to news stories warning people against social media trends
could increase participants' likelihood of wanting to try the trend, recommending the trend to a
friend, or seeking more information about the trend. We predicted that participants in the
warning + trend exposure condition would be more likely to want to try, recommend, and seek
information about the trend as compared to participants in the warning and no exposure
conditions.
To test these predictions, we conducted one-way ANOVAs for each of our three
dependent variables.
News exposure did not impact participants’ likelihood of wanting to try the behavior
(Figure 3.3). Participants in the warning + trend (M = 2.11, SD = 0.98), warning (M = 2.05, SD =
1.01), and no exposure (M = 2.12, SD = 1.07) conditions reported similar likelihood to try, F (2, 361) = 0.92, p = .39, η² = 0.003. News exposure did not impact participants’ likelihood of recommending the behavior to a friend either. Participants in the warning + trend (M = 2.28, SD = 1.07), warning (M = 2.27, SD = 1.07), and no exposure (M = 2.31, SD = 1.09) conditions reported similar likelihood to recommend the behavior to a friend, F (2, 361) = 0.44, p = .65, η² = 0.001.
News exposure did have a statistically significant impact on participants’ intentions to seek more information about the trend, F (2, 361) = 4.40, p = .013, η² = 0.012. Participants in the warning condition (M = 2.43, SD = 1.04) reported lower interest in seeking information about the trend than participants in the no exposure condition (M = 2.59, SD = 1.11), mean difference = -0.16, 95% CI [-0.27, -0.05], p = .004. Participants in the warning condition also reported lower interest in seeking information about the trend than participants in the warning + trend condition (M = 2.50, SD = 1.08); however, this difference was not significant, mean difference = -0.08, 95% CI [-0.17, 0.02], p = .14.
Figure 3.3.
Participants’ behavioral intentions across the warning + trend exposure, warning exposure, and
no exposure conditions
Note. *p < .05. All measures were assessed on a 1 - 5 scale. Error bars represent 95% confidence
intervals.
Discussion
In Experiment 3.2, we again found no adverse “backfire” effects of news exposure on
participants’ behavioral intentions. Exposure to news stories about social media trends did not
impact participants' intentions to try the behavior or recommend the behavior to a friend.
Moreover, contrary to our predictions, when participants were warned about a behavior, they
were significantly less interested in seeking more information about it.
General Discussion
We investigated whether exposure to news stories warning against harmful social media
trends impacts individuals’ perceptions about how widespread participation in the trends is, i.e.,
the perceived descriptive norm of the behavior, as well as their subsequent behavioral intentions.
In Experiment 3.1, we found support for our primary hypothesis: news exposure to
potentially harmful social media trends does increase participants’ perceptions of how common
that trend is. However, we only observed significant increases in participants’ perceptions of the
descriptive norm among the general public– news exposure did not impact participants’
perceptions of the norms among their peers. We also found that exposure to warnings about these
trends increased participants’ judgments of how risky the trend was, indicating that participants
were paying attention to the warnings in the headlines. However, contrary to our predictions, we
did not find any adverse “backfire” effect of news exposure on participants’ subsequent
perceptions and behavioral intentions – participants perceived the trend to be less effective and
were less likely to try it when they were exposed to news stories about it.
In Experiment 3.2, we examined only participants’ behavioral intentions to investigate
whether news exposure has negative impacts beyond changing perceptions of the norm. Similar
to Experiment 3.1, we did not find any “backfire” effects: news exposure did not impact
participants' intentions to try the behavior or recommend the behavior to a friend. Participants
were also significantly less likely to seek more information about a behavior when they had been
warned about it.
Across both experiments, the specific content of the news report mattered relatively little. Results from
Experiment 1 suggest that news articles that do not contain indicators of the trend’s popularity
lead to lower intentions to engage with the trend. Experiment 2 suggests similarly: news articles
that simply warn about a behavior without indicating that it is on social media may be best in
preventing interest in the behavior. However, these effects were small and may not be
meaningful.
Overall, our results suggest that exposure to news stories warning against harmful
behavior can increase perceptions of how common that behavior is. However, contrary to our
predictions, the warnings seem to work as intended, and individuals are not more likely to want
to engage in the behavior despite the increase in perceived norms. The shift in norms does not
correspond to a shift in perceptions of the issue or behavioral intentions, in contrast to results
from Chapter II. Although norms are predictive of behavior generally, whether perceptions of a
social norm affect behavior in a given situation is context-dependent (Prentice, 2018). In the
context of an online survey questionnaire, thinking that a social media trend is popular may not
be enticing enough to want to participate in it. However, in other contexts, such as social events,
participating in a social media challenge may be more enticing when one remembers that it may
be quite common. Another possible reason for the differing results between Chapter II and this
chapter is the context surrounding the issue: vaccination was a controversial issue, and one that individuals were exposed to and had likely considered participating in in the recent past. The unfamiliar social media trends in this chapter likely did not resonate with
participants and were not similar to things they may have encountered or considered doing in
their own lives. Perceived descriptive norms may be more likely to have a behavioral impact in a
more familiar context like vaccination, for a behavior that individuals are more likely to have
considered participating in. Some data from this study suggest that this is the case: participants’ reported likelihood of trying a trend was around 2 on a 7-point scale (with higher numbers indicating greater likelihood) for both trends they had and had not been exposed to.
This suggests that participants in our sample were generally not interested in trying these trends,
so news exposure did not make an impact on them. However, results might be different if we
were to sample individuals who do engage in and are interested in participating in online trends.
We might see significant adverse effects of news exposure in this vulnerable population that we
were not able to capture with our current recruitment methods.
Our results suggest that reporting on risky or potentially risky trends may increase the
perception that a greater number of individuals are engaging in the trend, but it may not have
adverse impacts beyond that. This is good news. However, as discussed above, this may be a
limitation of our sample. When news coverage reaches millions of viewers, including those who are more likely to engage with such behaviors, even small effects of news exposure would imply that some people become more likely to engage in potentially harmful behavior.
There are several design considerations to keep in mind that may limit our results. We
used relatively few items in both experiments, and the results we found may be limited to the
social media trends we used. The trends we chose had clear harmful effects and the headlines we
constructed explicitly warned about them. News headlines that don’t explicitly warn about risk,
or headlines about trends that are not as risky as the ones we chose, may act as we initially
predicted. Additionally, the hypothesized effects may be very small, and our sample may not have been large enough to detect them.
Chapter II and Chapter III addressed the role that news reporting plays in people’s
perception of descriptive norms. In the following chapter, we turn to another important aspect of
social consensus: perceived consensus amongst scientists.
Chapter IV: The Illusory Consensus Effect: Repeated exposure to health information
increases estimates of scientific consensus
The internet gives us unprecedented access to health information, and people frequently
search online to learn about their health and find health advice (Chou et al., 2018; Rutten et al.,
2006). However, online health advice is often inaccurate and can be dangerous (Mathur et al.,
2005; Scullard et al., 2010). As documented by several meta-analyses, the quality of online
health information is problematic and poses a significant public health concern (Eysenbach et al.,
2002; Suarez-Lledo & Alvarez-Galvez, 2021; Swire-Thompson & Lazer, 2020; Zhang et al.,
2015). In short, people can easily be exposed to false information when searching for health
advice or when browsing social media, where false claims are often shared more and spread
farther and faster than true ones (Vosoughi et al., 2018). Exposure to false information is
particularly problematic when the information seems credible, e.g., when readers conclude that
the health information they read represents the consensus of health experts. As shown in the
preceding chapters, repeated exposure to a behavior or opinion is sufficient to increase the
perception that the behavior or belief is widely shared and enjoys public consensus. Building on
this work, this chapter explores whether the same regularity applies to scientific
consensus. Across three experiments, we test whether repeated exposure to health-related claims
increases estimates of scientific consensus in the claims. We also test whether the veracity of the
claim (Experiment 1), the source of the information (Experiments 2 and 3), and the time that
passes between exposure and judgment (Experiment 3) moderate the effect of exposure.
Scientific Consensus
People commonly look to others to form and evaluate their own beliefs (Festinger, 1954).
When it comes to health or scientific issues, experts play a critical role in guiding belief
formation and behavioral decisions (Jucks & Thon, 2017; Thon & Jucks, 2017; Vraga & Bode,
2017). Perceptions of scientific consensus, i.e., the extent to which scientists agree on a scientific
issue, predict individuals’ beliefs about the issue, the public acceptance of science, and
adherence to health recommendations (Bertoldo et al., 2019; Kerr & Wilson, 2018;
Lewandowsky et al., 2013; van der Linden et al., 2015; Maibach & van der Linden, 2016). For example,
studies on climate change and vaccination show that when individuals believe that a claim has
greater scientific consensus, they are more likely to believe the claim and to support public
action (Kerr & Wilson, 2018; van der Linden et al., 2015). On the other hand, perceptions of
scientific dissent reduce public support for policies addressing the issue (Aklin & Urpelainen,
2014). A pre-registered meta-analysis of 43 experiments about vaccination, climate change, and
genetically modified food found that exposure to scientific consensus messaging consistently
increased belief in scientific facts regarding contested scientific topics (van Stekelenburg et al.,
2022).
Although perceptions of scientific consensus play a key role in guiding beliefs,
individuals are often inaccurate at estimating consensus. For example, despite 97% of climate
scientists concluding that human activities are the primary driver of climate change, only 12% of
Americans correctly estimate scientific agreement at 90% or higher (Leiserowitz et al., 2014).
Individuals are rarely exposed to information in a way that is representative of true expert
consensus, and this biased exposure can lead to misperceptions of consensus estimates. For
example, journalists often spend equal time reporting on both sides of an issue, even when only
one side is believed by a majority and backed by expert support and evidence (Dunwoody,
1999). Exposure to such false balance reporting can reduce perceptions of expert consensus
(Koehler, 2016). Even when the reporting communicates source and evidentiary information,
individuals may not distinguish between true consensus, where a majority claim is supported by
many independent sources, and false consensus, where a single source’s claim is repeated
multiple times (Yousif et al., 2019). Given that people are often more sensitive to the frequency
of exposure than to its source, incidental exposure to misleading health information on social
media may lead to misperceptions of scientific consensus (Weaver et al., 2007). The present
chapter addresses this possibility.
To date, empirical research on perceived scientific consensus has relied on directly
informing participants about the real or alleged scientific consensus to assess the impact of
different consensus perceptions. While these studies document the importance of consensus, as
discussed above, they tell us little about how people construct estimates of scientific consensus
in the absence of such explicit information. The present studies aim to fill this gap.
Mere Repetition and Consensus
The limited evidence available on how people may estimate consensus suggests an
important effect of frequency of exposure, or familiarity. For example, repeatedly hearing an
opinion increases perceptions of how widespread that opinion is, even when the repetitions come
from the same source (Weaver et al., 2007). The authors traced this effect to processing fluency–
repeated exposure to an opinion increased the accessibility of the opinion, and the resulting
subjective feeling of familiarity is likely what drove the increase in perceptions of popularity.
The finding that people are more sensitive to the subjective feeling of familiarity from repetition
than the number of sources the repetition comes from has been replicated in different contexts.
Yousif et al. (2019) found that repeatedly hearing the same argument in different articles, even
when the articles cited the same source, increased individuals’ confidence in that information.
Similarly, repeated claims from a single eyewitness were considered just as credible as the same
claim being corroborated by several different eyewitnesses (Foster et al., 2012). Other evidence
has shown that mere exposure, i.e., repeated and incidental exposure to stimuli, makes the
stimuli feel more familiar to oneself and also increases one’s perceptions of how familiar the
stimuli must be to others, or the perceived norms of familiarity (Kwan et al., 2015). People also
tend to think repeated statements come from more credible sources, and are more likely to want
to share repeated statements online (Fragale & Heath, 2004; Vellani et al., 2023).
However, these experiments do not directly measure the impact of mere repetition on
consensus, and the measures of consensus are limited to specific contexts. This raises the
question of whether merely hearing about or reading a claim could be sufficient to increase the perception
that many scientists support it, and whether this perception could generalize beyond the specific
context and source. Moreover, it is unclear if claims must come from a source relevant to the
target of judgment in order to impact estimates of consensus– could claims from a relevant
source (e.g., medical experts) and claims from a non-relevant source (e.g., parents) both increase
estimates of scientific consensus? Findings from Weaver et al. (2007) and Foster et al. (2012)
suggest that people are not sensitive to the number of sources, but both these experiments used
sources relevant to the target judgment and do not bear on the question of source relevance.
Simmonds et al. (2023) tested the impact of both the number of sources and source expertise on
belief in health claims encountered online. They found that people were somewhat sensitive to
source cues and were more persuaded by medical professionals than by non-experts, but a
greater frequency of non-expert messages was just as persuasive as an individual expert message.
These findings suggest that exposure to scientific or medical sources may be more influential in
the construction of scientific consensus estimates than exposure to low-expertise sources, but
exposure to information from low-expertise sources may nevertheless be sufficient to shift
perceived scientific consensus.
Present research
In the present research, we investigate whether mere repetition can increase perceptions
of scientific consensus about a claim (Experiments 1, 2, and 3). Across three experiments, we
expose participants to true and false health-related claims (e.g., disposable chopsticks contain
carcinogens) and later ask them to judge scientific consensus in those claims and new claims,
i.e., claims they have not been exposed to. We expect that participants will judge that there is
greater scientific consensus in a claim when they have seen the claim before than when they have
not. We then investigate whether this effect is moderated by source relevance (Experiments 2
and 3) and time delay between exposure and judgment (Experiment 3). We expect (1) that
repetition of a claim increases perceived scientific consensus in the claim. We further expect (2)
that this increase is larger when the repetition comes from a source related to the relevant
scientific community rather than an unrelated source. We further test (3) whether repetition from
an unrelated (lay) source is nevertheless sufficient to increase perceived scientific consensus.
Finally, (4) we expect that the influence of repetition on perceived scientific consensus can be
observed even after a multi-day delay.
Experiment 1
In Experiment 1, we investigated the effect of repetition on participants’ judgments of
scientific consensus of true and false health-related claims. Participants were not given any
specific information about the source of the claims. Experiment 1 was pre-registered at
https://researchbox.org/2839&PEER_REVIEW_passcode=BWSEDP. We predicted that
participants would infer greater scientific consensus in repeated claims as compared to new
claims (H1). We additionally explored whether this effect varied by claim veracity.
Method
Design
We used a 2 (claim repetition: repeated vs. new) within-subjects design. All participants
were exposed to 18 claims (9 true, 9 false) in the exposure phase and judged 36 claims (9 new &
true, 9 repeated & true, 9 new & false, 9 repeated & false) in the test phase.
Participants
For our primary variable of interest (repetition), we found a within-subjects effect size of
d = 0.28 in a pilot study. According to G*Power (Faul, Erdfelder, Lang, & Buchner, 2007), a
sample of 136 participants would be required to detect this effect in a repeated measures design
with α = .05, power (1-β) = .90, and a two-tailed test. We decided to slightly over-recruit and collected
data from N = 150 (Mage = 37.5, age range = 18-77, 48% women, 47% men, 3% non-binary)
participants using the online survey platform Prolific. We limited participation to people located
in the United States. Participants were paid $2.75 for their participation in a 10-minute survey,
per California’s minimum hourly wage.
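As an illustrative check (not part of the original analysis), the same sample-size calculation can be approximated in Python with statsmodels; the effect size, alpha, and power values come from the text above, and small numerical differences from G*Power's implementation are possible:

```python
from math import ceil

from statsmodels.stats.power import TTestPower

# Paired-samples design: the power analysis reduces to a one-sample t-test
# on difference scores with d = 0.28, alpha = .05, power = .90, two-tailed.
n = TTestPower().solve_power(effect_size=0.28, alpha=0.05,
                             power=0.90, alternative="two-sided")
print(ceil(n))  # approximately the 136 participants reported by G*Power
```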
Materials and Measures
Stimuli. To create our stimuli, we compiled 52 claims (half true, half false) from the
CDC, the fact-checking website Snopes, Ecker et al. (2020), and Swire et al. (2022). All claims
were related to health and medicine, e.g., “Disposable chopsticks contain carcinogens”,
“Testosterone treatment helps older men retain their memory”, “Ticks can cause paralysis”. We
then pre-tested these claims on ratings of familiarity and believability with a sample of N = 108
undergraduate participants. We selected 18 true claims and 18 false claims from this set that
were relatively unfamiliar to participants and created 2 counterbalances of 18 claims (9 true, 9
false) each. The mean believability of true claims and of false claims was similar within and
across counterbalances. Full counterbalance sets with pre-test data and overall means can be
found in Appendix A.
Measures. We assessed perceptions of scientific consensus with an unnumbered 7-point
rating scale that asked participants whether they agreed with the statement “There is clear
consensus in the scientific community that [claim]” (“strongly disagree” to “strongly agree,”
coded 1 to 7 for analysis). Additionally, we asked participants to complete the 12-item Need for
Cognition Scale (NFC; Cacioppo and Petty, 1982) to capture individual differences in the
tendency to engage in elaborative thinking. Participants were asked to indicate to what extent
each of the NFC statements was characteristic of them on a 5-point scale from “Extremely
uncharacteristic” to “Extremely characteristic”. This scale was administered as a filler task for
exploratory purposes.
Procedure
Participants were asked to complete the survey on a computer; those using a mobile
device or tablet were not allowed to begin the survey. Participants started the study by reading an
information sheet and indicating their consent
to participate before they moved on to the exposure phase.
In the exposure phase, participants were told that they would see a series of claims for
approximately three minutes and were asked to read the claims carefully. Participants were
randomly assigned to view one of the two counterbalances of 18 health-related claims. The
claims were presented in a random order for each participant, with each claim appearing on the
screen for 5 seconds before auto-advancing to the next claim.
After the exposure phase, participants completed the 12-item NFC scale. This served as a
brief filler task in between exposure and test. Next, participants were told they would see another
series of claims appear on the screen, some of which they may have seen before. Participants
were asked to answer whether there was clear consensus in the scientific community about each
statement on a scale from “strongly disagree” to “strongly agree”. Of note, the claims were
not presented separately from the question but were embedded in the question itself. For example,
participants saw the statement “There is clear consensus in the scientific community that honey
bee stings can help treat arthritis” and indicated their agreement or disagreement on a 7-point
bipolar scale. Participants rated the scientific consensus for 36 claims, 18 of which they had seen
in the exposure phase (repeated claims), and 18 new claims, which they had not seen before. All
participants rated the same 36 claims appearing in a random order, but which claims were
repeated and which were new varied according to the counterbalance condition. For participants
who saw claims from Counterbalance 1 during the exposure phase, the new claims were 18
claims from Counterbalance 2, and vice versa. After completing the test phase, participants
answered a few demographic questions about their age, gender, and ethnicity and then were
shown a debriefing statement.
Results and Discussion
To test our primary hypothesis that participants would judge repeated claims as having
greater scientific consensus than new claims, we conducted a paired samples t-test comparing the
mean scientific consensus ratings of claims that had been repeated to the mean scientific
consensus ratings of new claims. Supporting our predictions, participants judged repeated claims
as enjoying greater scientific consensus (M = 3.90, SD = 0.88) than new claims (M = 3.53, SD =
0.72), mean difference = 0.37, 95% CI [0.25, 0.50], t(148) = 5.88, p < .001, d = 0.48.
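This paired-samples analysis can be sketched as follows; the data here are synthetic values generated for illustration only, not the actual ratings from Experiment 1:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic per-participant mean consensus ratings (illustration only)
repeated = rng.normal(3.90, 0.88, size=149)
new = rng.normal(3.53, 0.72, size=149)

t, p = stats.ttest_rel(repeated, new)   # paired-samples t-test
diff = repeated - new
d = diff.mean() / diff.std(ddof=1)      # Cohen's d for paired designs
print(f"t({len(diff) - 1}) = {t:.2f}, p = {p:.3f}, d = {d:.2f}")
```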
Exploratory analyses
We next conducted a 2 (repetition: repeated vs. new) x 2 (claim veracity: true vs. false)
repeated measures ANOVA to test whether the effect of repetition on scientific consensus
differs for true and false claims. Given that our claims were normed to be unfamiliar and similar
in believability, we did not expect that the predicted effect would vary by claim veracity.
Consistent with our t-test, we found a significant main effect of repetition, F (1, 149) = 34.61, p
< .001, partial eta2 = .188. We also found a significant main effect of claim veracity, F (1, 149) =
66.79, p < .001, partial eta2 = .310, such that true claims (M = 3.93, SD = 0.71) were judged to
have greater scientific consensus than false claims (M = 3.49, SD = 0.84). As expected, we did
not find an interaction effect between claim veracity and repetition, F (1, 149) = 0.99, p = .319,
partial eta2 = .007. The effect of repetition on perceived scientific consensus was similar for true
claims and false claims.
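A minimal sketch of this 2 x 2 repeated measures ANOVA, again on synthetic data rather than the actual ratings, using AnovaRM from statsmodels (which expects long-format data with one observation per participant per cell):

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(7)
rows = []
for subj in range(40):  # synthetic participants, illustration only
    for rep in ("repeated", "new"):
        for ver in ("true", "false"):
            cell_mean = 3.5 + 0.4 * (rep == "repeated") + 0.4 * (ver == "true")
            rows.append({"subject": subj, "repetition": rep, "veracity": ver,
                         "consensus": cell_mean + rng.normal(0, 0.6)})
long_df = pd.DataFrame(rows)

# Two within-subjects factors: claim repetition and claim veracity
res = AnovaRM(long_df, depvar="consensus", subject="subject",
              within=["repetition", "veracity"]).fit()
print(res.anova_table)
```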
Finally, we explored whether individual differences in NFC scores moderated the effects
of repetition on judgments of scientific consensus. Full NFC scores were calculated by reverse-scoring the appropriate items and then summing across all items, giving a possible range of
scores from -36 to 36. The scale had a Cronbach’s α of .941, with a mean score of 7.23 (SD =
14.61) for our sample. We found no significant interaction of NFC and repetition, F (1, 148) =
0.14, p = .705, partial eta2 = 0.001, indicating that the impact of repetition on judgments of
scientific consensus did not significantly vary across individual differences in tendency to
engage in elaborative thought. We also did not observe a significant main effect of NFC, F (1,
148) = 1.71, p = .192, partial eta2 = 0.011.
Experiment 2
In Experiment 1, participants received health-related claims that lacked source
information. However, in the real world, individuals may encounter health information from
sources with health-related expertise (a cue that the source is relevant for judging scientific
consensus) and sources without health-related expertise (a cue that the source is not relevant for
judging scientific consensus). Hence, we tested the influence of claim source in Experiment 2.
Would previous exposure to a claim increase its perceived scientific consensus even if the claim
came from a non-expert source? Or would perceived scientific consensus only be influenced
when the claims are attributed to a source with expertise in the health sciences? We chose
medical experts as a relevant source for health science claims and concerned parents as a lay
group that is not representative of medical expertise. Of interest is whether the respective source
moderates the emergence and size of the illusory consensus effect observed in Experiment 1.
Experiment 2 was pre-registered at
https://researchbox.org/2839&PEER_REVIEW_passcode=BWSEDP. In line with results from
Experiment 1, we predicted a main effect of repetition, such that participants would infer greater
scientific consensus in repeated claims as compared to new claims (H1). We additionally
predicted an interaction: the effect of repetition on perceived scientific consensus would be
stronger when original exposure to the repeated claims came from a relevant source as
compared to a non-relevant source (H2).
Method
Design
We used a 2 (repetition: repeated vs. new) x 2 (source relevance: relevant vs. non-relevant) mixed design. Repetition was manipulated within subjects and source relevance was
manipulated between subjects. We used the same claim exposure and test paradigm as
Experiment 1; that is, all participants saw 18 claims during the exposure phase and judged
36 claims (18 new, 18 repeated) at test.
Participants
We found an effect size of d = 0.48 for the within-subjects effect of repetition on
scientific consensus in Experiment 1. A total sample of 59 participants would be required to
detect the effect in a repeated measures design with α = .05, power (1-β) = .95, and a two-tailed
test, according to G*Power (Faul, Erdfelder, Lang, & Buchner, 2007). Given that we did not have
an estimate of how this effect size would vary by source, we decided to over-recruit relative to
this estimate and aimed to recruit 100 participants per source relevance condition, making our
total recruited sample 200 participants.
We collected data from N = 200 (Mage = 35.5, age range = 18-76, 49% women, 48% men,
3% non-binary) participants using the online survey platform Prolific. We restricted participation
to Prolific workers located in the U.S. who had greater than a 95% approval rating. Participants
were paid $3.50 for their participation in a 12-minute survey per California’s minimum wage
requirements.
Materials and Measures
Stimuli. Experiment 2 used the same health-related claims and counterbalances as
Experiment 1. To manipulate source relevance, we created logos and descriptions for two
imaginary groups: a closed discussion group for the Massachusetts Journal of Medicine (relevant
condition) and a Facebook group called Parents Supporting Parents (non-relevant condition) (see
Figure 4.1).
Figure 4.1.
Descriptions and logos for the relevant source (left) and non-relevant source (right)
Measures. We used the same measure of scientific consensus as Experiment 1, “There is
clear consensus in the scientific community that [claim]”. Participants answered this question for
each claim on an unnumbered 7-point scale from “strongly disagree” (coded as 1) to “strongly
agree” (coded as 7). We again measured individual differences in elaborative processing by
asking participants to complete the 12-item Need for Cognition Scale (NFC; Cacioppo and Petty,
1982). Lastly, as a manipulation check for source relevance, we asked participants to provide
credibility and trustworthiness ratings for the source they saw, using the question “How
credible/trustworthy do you find [source] as a source of health information?” assessed on a
scale from 1 (not at all) - 7 (extremely).
Procedure
The procedure was similar to Experiment 1, except that participants were randomly
assigned to a between-subjects source condition (relevant or non-relevant). Before participants
began the exposure task, we informed them that the statements they would see came from the
respective source and showed them the description and logo of the group.
Participants then saw a series of 18 health-related claims, each presented along with the
logo of the respective group. The claims appeared in a random order for each participant and
remained on the screen for 5 seconds before the screen auto-advanced to the next claim.
After the exposure phase, participants were asked to complete the 12-item NFC scale.
This served as a brief filler task in between exposure and test. Next, we started the test phase.
Participants were told they would see another series of claims appear on the screen, some of
which they may have seen before. For each claim, participants indicated whether there was clear
consensus in the scientific community using the same scale as in Experiment 1. Participants
answered this measure for 36 claims, 18 of which they had seen in the exposure phase (repeated
claims), and 18 new claims that they had not seen before. No logos appeared with these claims
and claims were presented in a randomized order. After completing this test phase, participants
answered some individual difference variables collected for exploratory purposes, and a few
demographic questions about their age, gender, and ethnicity before being shown a debriefing
statement.
Results and Discussion
As a manipulation check, we examined source credibility ratings for the relevant and
non-relevant sources. Supporting our manipulation, participants rated the relevant source, the
Massachusetts Journal of Medicine discussion group, as a fairly credible source of health
information (M = 4.21, SD = 1.68) and the non-relevant source, the Parents Supporting Parents
Facebook group, as a less credible source of health information (M = 2.18, SD = 1.73).
To test our hypotheses that participants would judge repeated claims to have
greater scientific consensus than new claims, and that the effect of repetition on perceived
scientific consensus would be stronger when the claims come from a relevant source (medical
experts) compared to an irrelevant source (a group of concerned parents), we conducted a 2 x 2
mixed model ANOVA using repetition as a within-subjects factor and source relevance as a
between-subjects factor. As predicted, we found a main effect of repetition, such that repeated
claims (M = 4.01, SD = 1.14) were judged to have greater scientific consensus than new claims
(M = 3.40, SD = 0.86), F (1, 199) = 55.71, p < .001, partial eta2 = 0.219. We also found a
significant main effect of source relevance, such that, overall, participants in the relevant source
condition gave higher mean consensus ratings across repeated and new claims (M = 3.93, SD =
0.75) compared to those in the non-relevant source condition (M = 3.48, SD = 0.81), F (1, 199) =
16.73, p < .001, partial eta2 = 0.078. These main effects were qualified by a significant
interaction between repetition and source, F (1, 199) = 22.92, p < .001, partial eta2 = 0.103. We
followed up the significant interaction with simple effect analyses across source conditions,
using a Bonferroni correction for multiple comparisons, to test (i) whether the effects of
repetition on consensus were significant within the relevant and the non-relevant source
conditions and (ii) whether the effects of repetition were stronger in the relevant vs. non-relevant
source conditions.
The follow-up analyses revealed that the effect of repetition on scientific consensus was
stronger for claims from a relevant source as compared to a non-relevant source. Repeated
claims from a relevant source (M = 4.42, SD = 1.18) were judged to have significantly greater
scientific consensus than new claims (M = 3.43, SD = 0.89), mean difference = 0.99, 95% CI
[0.76, 1.2], p < .001. Repeated claims from a non-relevant source were also judged as having
greater scientific consensus (M = 3.59, SD = 0.94) than new claims (M = 3.37, SD = 0.82), but
this effect was only marginally significant, mean difference = 0.22, 95% CI [-0.01, 0.44], p =
0.06 (Figure 4.2).
Figure 4.2.
Perceived scientific consensus in repeated and new claims from a relevant source (left) and non-relevant source (right)
Note. Scientific consensus was assessed with the question “There is clear consensus in the
scientific community that [claim]”. Participants were asked to rate their agreement on a scale
from 1 (strongly disagree) to 7 (strongly agree). Error bars represent standard errors.
These results replicate the key finding of Experiment 1: Repetition increases judgments
of scientific consensus. This illusory consensus effect was moderated by source: repetition
significantly increased judgments of scientific consensus when claims came from a relevant
source, a group of medical experts. When claims came from a group of parents, the effect of
repetition was in the same direction, but only marginally significant. This may reflect that the
experiment was not adequately powered to detect a smaller true effect, as indicated by a power of
0.46 for this simple effect. It could also be the case that individuals take the relevance of the
source into account when evaluating scientific consensus, especially when individuals make
judgments immediately after source exposure. We wondered whether the influence of source
relevance would decrease with a longer delay between exposure and judgment.
Experiment 3
In Experiments 1 and 2, there was only a brief delay between the exposure phase and
when participants made their consensus ratings. In Experiment 3, we tested whether the effects
of repetition and source would persist after a 3-5 day delay. We predicted that participants would
judge repeated claims to have greater scientific consensus than new claims, both when making
this judgment immediately and when making it after a delay. Once again, we expected that the
effect would be stronger when the original exposure to the repeated claims comes from a
relevant (vs. non-relevant) source. We further predicted that the effect of source relevance would
be stronger at initial judgment, and might or might not persist after a delay. Experiment 3 was
pre-registered at https://researchbox.org/2839&PEER_REVIEW_passcode=BWSEDP.
Method
Design
We used the same 2 (repetition: repeated vs. new) x 2 (source relevance: relevant vs.
non-relevant) design and experimental paradigm as Experiment 2, but added a third
independent variable: time of judgment (immediate vs. delayed), manipulated between subjects.
Participants either made the key dependent judgments immediately after the claim exposure and
a brief filler task, or after a 3-to-5-day delay.
Participants
We collected data from N = 600 (Mage = 41.2, age range = 18-86, 49% women,
49% men, 2% non-binary) participants using the online survey platform Prolific. We restricted
participation to Prolific workers located in the U.S. with a greater than 95% approval rating, who
had not participated in any of the prior experiments reported in this chapter. Participants were
paid $2.00 for their participation in Part 1 of the survey, which took 10 minutes to complete, and
$1.34 for their participation in Part 2, which took 5 minutes. Of all participants, 96% completed
Part 2 of the study. As pre-registered, we excluded participants who did not complete both parts
of the study, leaving us with a total of N = 579 participants (300 immediate judgment, 279
delayed judgment).
Materials and Measures
We used the same claims, source descriptions and logos, and experimental set-up as
Experiment 2. We also used the same measure of scientific consensus as Experiments 1 and 2,
“There is clear consensus in the scientific community that [claim]”.
In place of the dependent judgments in Part 1, participants in the delayed judgment
condition answered a few general questions about the claims that they saw.
Participants answered the questions “How many statements do you think you saw?”, “How
interesting did you find the statements you saw?” and “How easy were the statements to
understand?”. All participants completed the NFC as a filler task in Part 1 of the study.
Procedure
Participants were asked to complete the survey on a computer, and those using a mobile
device or tablet were not allowed to begin the survey. Participants were informed that this was a
two-part study and that they would take a 10-minute survey today, and a 5-minute survey after a
72-hour (3-day) period.
Participants began the study by giving their informed consent and were then randomly
assigned to a between-subjects source condition (relevant or non-relevant) and a between-subjects time of judgment condition (immediate vs. delayed). Next, participants were told that
they would see a series of claims for approximately three minutes. The claim exposure phase was
identical to Experiment 2; participants saw a series of 18 health-related claims, each presented
along with the logo of the respective group and appearing in a random order for each participant.
Each claim appeared on the screen for 5 seconds before auto-advancing to the next claim. After
the claim exposure, participants were asked to complete the 12-item NFC scale.
In the last phase, participants were given one of two tasks depending on their assigned
time-of-judgment condition. Participants in the immediate judgment condition completed the test
phase of the experiment, identical to Experiment 2. They answered the dependent variable for 36
claims: 18 repeated and 18 new. Participants in the delay condition were instead asked to answer
three general questions about the health-related claims they saw earlier. After this phase, we
thanked participants for their participation in Part 1.
Three days after the completion of Part 1, we sent participants a link to complete Part 2 of
the experiment. Participants were told they had 48 hours to complete this survey, making the
total delay time between 3 and 5 days for each participant. However, we kept the survey open
after the 5-day deadline and explored results with and without including participants who
completed the survey after this cutoff. In Part 2 of the survey, all participants were asked to
complete the test phase and rate scientific consensus for 36 claims (18 repeated, 18 new).
Participants in the immediate judgment condition had already rated these claims in Part 1 of the
study, but we asked them to rate the claims again to keep the demands of the experiment similar
across time of judgment conditions. Participants in the delayed judgment condition rated the claims for
the first time. After the test phase, participants answered demographic questions about their age,
gender, and ethnicity before being shown a debriefing statement.
Results and Discussion
To test our key predictions, we conducted a 2 (repetition: repeated vs. new) x 2 (source
relevance: relevant vs. non-relevant) x 2 (time of judgment: immediate vs. delayed) mixed model
ANOVA using repetition as a within-subjects variable and source relevance and time of
judgment as between-subjects variables. As predicted, we found a significant main effect of
repetition, such that participants inferred greater scientific consensus for repeated claims (M =
3.93, SD = 1.10) as compared to new claims (M = 3.45, SD = 0.78), F (1, 565) = 129.68, p <
.001, partial eta2 = 0.197 (Table 4.1; Figure 4.3). We also found a significant main effect of
source, such that participants in the relevant source condition (M = 3.85, SD = 1.05) inferred
greater scientific consensus across all claims compared to participants in the non-relevant source
condition (M = 3.53, SD = 1.04), F (1, 565) = 25.66, p < .001, partial eta2 = 0.043. These main
effects were again qualified by a significant interaction between repetition and source, F (1, 565)
= 41.93, p < .001, partial eta2 = 0.069. We followed up the interaction with simple effects
analyses using a Bonferroni correction for multiple comparisons. Repeated claims were judged
to have greater scientific consensus than new claims when they came from a relevant source,
mean difference = 0.75, 95% CI [0.63, 0.86], p < .001, as well as from a non-relevant source,
mean difference = 0.21, 95% CI [0.09, 0.32], p < .001. Note that the effect of repetition was
statistically significant for claims from a non-relevant source, rather than marginal as in
Experiment 2.
We found no main effect of time of judgment and no interactions of time of judgment with
repetition or source, all Fs ≤ 0.43, all ps ≥ .51, all partial eta2 ≤ 0.001. The three-way
interaction was also not statistically significant, p = .106.
Table 4.1.
Results from a 2 (repetition) x 2 (source relevance) x 2 (time of judgment) mixed model ANOVA
Source df F p partial eta2
Between subjects effects
Time of judgment 1 0.43 .513 0.001
Source relevance 1 25.66 <.001*** 0.043
Time of judgment * Source relevance 1 0.31 .58 0.001
Error 565
Within subjects effects
Repetition 1 129.68 <.001*** 0.187
Repetition * Time of judgment 1 0.04 .835 0.000
Repetition * Source relevance 1 41.93 < .001*** 0.069
Repetition * Time of judgment * Source relevance 1 2.62 .106 0.005
Error (repetition) 565
Figure 4.3.
Perceived scientific consensus in repeated and new claims by source relevance and time of
judgment conditions
Note. Scientific consensus was assessed with the question “There is clear consensus in the
scientific community that [claim]”. Participants were asked to rate their agreement on a scale
from 1 (strongly disagree) to 7 (strongly agree). Error bars represent standard errors.
Experiment 3 replicated the main effect of repetition on scientific consensus observed in
the prior experiments. We also found a source moderation effect, as in Experiment 2. There was
a larger effect of repetition when claims came from a relevant source as compared to a
non-relevant source; however, the effect of repetition was significant in both conditions. Additionally,
we found no main effect of, or moderation by, time of judgment: the effects of repetition and source
were consistent even with a 3-to-5-day delay between exposure and judgment.
Meta-analysis
We conducted a single-paper meta-analysis following recommendations by McShane and
Böckenholt (2017) to resolve the differing simple effects of repetition on claims from relevant
and non-relevant sources in Experiments 2 and 3. We used only the immediate judgment data
from Experiment 3, as its procedure was identical to that of Experiment 2. Across the two
experiments, the overall effect size estimate for the simple effect of repeated vs.
new claims from a relevant source was 0.87, SE = 0.09, 95% CI [0.70, 1.06]. The overall effect
size estimate for the simple effect of repeated vs. new claims from a non-relevant source was 0.16,
SE = 0.05, 95% CI [0.08, 0.25]. When meta-analyzed across both studies, the simple effect for
claims coming from a non-relevant source was significant, i.e., the confidence interval does not
cross zero; however, it is much smaller than the effect for claims from a relevant source (Figure
4.4). Based on the computed effect size for claims from a non-relevant source, a sample size
of N = 497 would be required to achieve 80% power to detect the effect. Neither of our
experiments alone was adequately powered to detect this small effect.
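The logic of pooling effect estimates across studies can be illustrated with a simple fixed-effects inverse-variance sketch (the McShane and Böckenholt approach is more general); the effect estimates and standard errors below are hypothetical placeholders, not the exact values entering our analysis:

```python
import math

def pool_fixed(effects, ses):
    """Fixed-effects inverse-variance pooling of effect estimates."""
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

# Hypothetical simple-effect estimates (mean difference, SE) from two studies
est, ci = pool_fixed([0.22, 0.14], [0.11, 0.06])
print(f"pooled = {est:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```

More precise studies (smaller SEs) receive larger weights, which is why a pooled estimate sits closer to the better-powered study's value.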
Figure 4.4.
Estimates of unstandardized effect sizes for the simple effects of claim repetition within source
conditions for Experiments 2 and 3 using a single-paper meta-analysis.
Note. Effect estimates are given by the squares for single-study estimates and the vertical bars for
SPM estimates; 50% and 95% intervals are given by the thick and thin lines, respectively.
Estimates are for the simple effect of claim repetition for claims coming from a relevant source
(left) and a non-relevant source (right).
General Discussion
Across three experiments, we consistently found that repeated exposure to health-related
claims increased individuals’ perceptions that those claims have greater scientific
consensus. This illusory consensus effect was obtained when scientific consensus was estimated
after a delay of a few minutes (Experiments 1-3) and it remained robust even after a delay of 3 to
5 days between exposure to the claim and the consensus judgment (Experiment 3). As one may
expect, reading claims that came from a group of health experts exerted a stronger influence on
judgments of scientific consensus in the health domain than reading claims that came from a
group of concerned parents, non-experts whose judgments do not bear on scientific consensus
(Experiments 2 and 3). Nevertheless, even encountering a claim from non-experts was sufficient to
increase the claim’s perceived scientific consensus (Experiments 2 and 3; meta-analysis):
merely reading a claim posted by a parent on Facebook suffices to endow the claim with the
credibility that comes from higher perceived scientific consensus.
Overall, our findings highlight a critical factor in the construction of estimates of
scientific consensus: prior exposure. A single exposure to a claim was sufficient to increase
perceptions of scientific consensus, even when the exposure occurred several days prior and
when it came from a source that lacked relevant expertise. These results are consistent with two
previous lines of research that addressed the influence of repetition on perceived consensus and
perceived truth, respectively, and extend their scope.
Previous work examining repetition and consensus found that individuals tend to infer
that a familiar, repeated opinion is more prevalent and more credible than an opinion heard only
once (Foster et al., 2012; Weaver et al., 2007). Weaver et al. (2007) found that hearing one
person express an opinion repeatedly led perceivers to estimate that the opinion was more widely
held. Foster et al. (2012) found that repeated statements from one eyewitness in a courtroom trial
were considered as credible as corroborating statements from multiple eyewitnesses, and more
credible than a statement heard only once. Both sets of experiments suggest that individuals use
the familiarity of a given piece of information as an informative input into consensus estimates
and may not pay much attention to the source of this familiarity, consistent with prior research
(Schwarz, 2012; Schwarz & Clore, 2007). However, these studies focused on the source of a
claim, comparing repeated statements from one source with statements from multiple sources,
rather than on repetition per se. Additionally, the repeated statements in both sets of experiments
came from sources relevant to the judgment at hand. Our findings extend this work by examining
the influence of mere repetition alone and by showing that repetition influences perceived
consensus even when the source of the claims is not representative of the target of judgment.
Next, research on repetition and truth shows that merely repeating a claim makes it feel
truer, a phenomenon known as the illusory truth effect (for reviews, see Dechêne et al., 2010;
Udry & Barber, 2023). Repetition has been shown to increase perceived truth for both true and
false statements, and for statements from both credible and non-credible sources (Begg et al.,
1992). Our results parallel findings on the illusory truth effect and extend it to a new area of
judgment: scientific consensus. Merely seeing a claim repeated once can increase not just
individuals’ own belief in the information, but also their perceptions of how many scientists
support it.
Central to repetition effects on judgments of truth and consensus is the metacognitive
experience of processing information, namely, the subjective ease or difficulty with which
information can be recalled from memory or otherwise processed. When information is repeated,
it feels more familiar at the time of judgment and is processed more easily. People draw on this
subjective experience of ease as an informative input in making a variety of judgments, including
truth, aesthetics, and risk (for a review, see Schwarz et al., 2021). Of note, the effect of ease
occurs at the time of judgment, not at exposure: the claim is more fluent when one reads it to
judge truth or consensus, thereby influencing truth and consensus estimates. Our results mirror
these findings and highlight that the metacognitive experience of processing fluency can
influence judgments of scientific consensus as well.
These findings have important implications for the fields of public health and science
communication. Efforts to curb the spread of health misinformation should take into account that
exposure to misleading health claims can influence not only belief in a claim, but also
perceptions of the scientific consensus behind it. Because even a single exposure can have these
effects, preventing exposure to misleading information in the first place may be the most
effective approach.
Conclusion
In this dissertation, I investigated whether exposure to a behavior or opinion in the news
and social media influences people’s perceptions of social consensus and whether this has
downstream consequences for epistemic and behavioral decisions. Across three chapters, I found
that exposure to a behavior or information does indeed increase perceptions of descriptive norms,
i.e., how many others engage in the behavior, and perceptions of scientific consensus, i.e., how
many scientists agree with the information.
In Chapters 2 and 3, I studied the impact of exposure to news reporting about undesirable
behaviors on participants’ perceptions of the descriptive norm of the behavior as well as their
likelihood of participating in it. Addressing core issues of the vaccination discussion during the
COVID-19 pandemic, the studies reported in Chapter 2 showed that exposure to news that
focuses on the problem-behavior (here, opposing vaccination) increases the perception that the
problem-behavior is common (here, that there are many people who are not vaccinated). This
shift in perceived descriptive norms supports the problematic behavior and impairs behavior
change, as reflected in participants’ reduced intention to get a vaccine themselves. Chapter 3
identified a similar pattern for norm perception. Exposure to news about risky social media
trends increased the perception that participation in these trends is common. However, this shift
in perceived norms did not correspond to a shift in behavioral intentions or subsequent
perceptions of the social media trend. Despite thinking that more people were participating in the
trend, participants still viewed the trend as risky and ineffective and were not more likely to want
to try it or to recommend it to a friend.
The observed discrepancy between norms and behavioral intentions in Chapter 3 may
reflect several factors. First, the context of measurement: an online survey questionnaire may
not be enticing enough for participants to want to try a potentially harmful behavior.
Whether perceptions of a social norm affect behavior in a given situation is context-dependent
(Prentice, 2018). In other contexts, such as social events, participating in a social
media challenge may be more enticing when one remembers that it is common. Next, the context
surrounding the issue: vaccination was a controversial issue, one that individuals had recently
been exposed to and had likely already considered participating in. The unfamiliar social media
trends in this chapter were likely not something participants had considered doing in their own
lives, as supported by the low reported intentions. Lastly, the context of the descriptive norm
may matter. Norms are more influential when they describe the behavior of those similar to
oneself, or of members of one’s social groups. In Chapter 3, although we observed an increase in
participants’ perceptions of the broad descriptive norm for the behavior, their perceptions of the
norm among their peers did not change; that is, participants did not think the behavior was more
common among their peers. Since peer norms are more predictive of behavior, this may explain
why participants’ behavioral intentions did not change.
In Chapter 4, I turned to a different aspect of perceived consensus: perceptions of
scientific consensus. Mere repetition of health-related claims increased the perception that there
is consensus among scientists on those claims, even after a multi-day delay between exposure to
the claims and judgments of consensus. More importantly, this effect did not emerge only when
participants were exposed to claims on a discussion board of healthcare professionals; it also
emerged when the source was a parenting Facebook group with no particular expertise in the
issue. Finally, repetition increased perceived scientific consensus for true as well as false claims.
In short, reading a claim multiple times is sufficient to believe that most scientists would agree
with it, independent of whether the claim is true or false and independent of whether one
encountered it from a credible science-related source or in a discussion of laypeople.
In combination, the nine experiments in this dissertation shed light on a critical but
previously overlooked aspect of social consensus: how it is estimated. The experiments
highlight that media exposure is an important contributor to the construction of norm and
consensus estimates. Merely seeing a given piece of information can increase perceptions that it
is common and widely believed, indicating that the construction of these estimates is
context-sensitive. Factors including the framing or focus of news reports and repetition influence
our perceptions of social consensus. Understanding these contributors and how they interact with
contextual variables is critical for providing effective recommendations for the fields of public
health and science communication, as well as for efforts to decrease the spread of
misinformation. Given the vast amounts of information posted online and the growing
sensationalization of mass media, it is important to note that reporting on a false or undesirable
issue, even with good intentions, may produce unwanted effects and imply that the issue is
widespread. Communicators should carefully consider whether drawing attention to a piece of
misinformation or undesirable public behavior is necessary to achieve their goals.
References
Abad-Santos, A. (2018, January 4). Why people are (mostly) joking about eating Tide Pods. Vox.
https://www.vox.com/2018/1/4/16841674/tide-pods-eating-meme-tide-pod-challenge
Adams, A. (2022, September 22). Please don't cook chicken in NyQuil, the FDA asks TikTok
users. NPR. https://www.npr.org/2022/09/22/1124252556/nyquil-chicken-challenge-fda-warning
Aklin, M., & Urpelainen, J. (2014). Perceptions of scientific dissent undermine public support
for environmental policy. Environmental Science & Policy, 38, 173–177.
https://doi.org/10.1016/j.envsci.2013.10.006
American Association of Poison Control Centers. (2018). Intentional exposures among teens to
single-load laundry packets. Retrieved October 4, 2018, from
http://www.aapcc.org/alerts/intentional-exposures-among-teens-single-load-laun/. This
page was deleted but has been archived:
https://web.archive.org/web/20180430235815/http://www.aapcc.org/alerts/intentional-exposures-among-teens-single-load-laun/
Andrew, B. C. (2007). Media-generated Shortcuts: Do Newspaper Headlines Present Another
Roadblock for Low-information Rationality? Harvard International Journal of
Press/Politics, 12(2), 24–43. https://doi.org/10.1177/1081180X07299795
Arya, P., Jalbert, M., & Schwarz, N. (2024). Do 70% Support Or 30% Oppose? Focusing On
Vaccine Opposition Undermines Vaccination Efforts. Under Review at PLOS One.
Begg, I. M., Anas, A., & Farinacci, S. (1992). Dissociation of processes in belief: Source
recollection, statement familiarity, and the illusion of truth. Journal of Experimental
Psychology: General, 121(4), 446.
Belluck, P. (2021, July 25). ‘It’s not going to be good.’ Fauci Sounds Alarm Over Low
Vaccination Rates Fueling Covid Surge. The New York Times. Retrieved December 11,
2022, from https://www.nytimes.com/2021/07/25/health/fauci-covid-surge-vaccinations.html
Bertoldo, R., Mays, C., Böhm, G., Poortinga, W., Poumadère, M., Tvinnereim, E., Arnold, A.,
Steentjes, K., & Pidgeon, N. (2019). Scientific truth or debate: On the link between
perceived scientific consensus and belief in anthropogenic climate change. Public
Understanding of Science, 28(7), 778–796. https://doi.org/10.1177/0963662519865448
Bever, L. (2018, January 14). Teens are daring each other to eat Tide Pods. We don’t need to tell
you that’s a bad idea. The Washington Post. https://www.washingtonpost.com/news/to-your-health/wp/2018/01/13/teens-are-daring-each-other-to-eat-tide-pods-we-dont-need-to-tell-you-thats-a-bad-idea/?utm_term=.7dae3d8c8955
Brewer, N. T., Chapman, G. B., Rothman, A. J., Leask, J., & Kempe, A. (2017). Increasing
Vaccination: Putting Psychological Science Into Action. Psychological Science in the
Public Interest, 18(3), 149–207. https://doi.org/10.1177/1529100618760521
Brossard, D. (2013). New media landscapes and the science information consumer. Proceedings
of the National Academy of Sciences, 110(supplement_3), 14096–14101.
https://doi.org/10.1073/pnas.1212744110
Brown, D., Harlow, S., García-Perdomo, V., & Salaverría, R. (2016). A new sensation? An
international exploration of sensationalism and social media recommendations in online
news publications. Journalism, 0. https://doi.org/10.1177/1464884916683549
Bruine de Bruin, W., Parker, A. M., Galesic, M., & Vardavas, R. (2019). Reports of social
circles’ and own vaccination behavior: A national longitudinal survey. Health
Psychology, 38(11), 975–983. https://doi.org/10.1037/hea0000771
Champely, S., Ekstrom, C., Dalgaard, P., Gill, J., Weibelzahl, S., Anandkumar, A., Ford, C.,
Volcic, R., & De Rosario, H. (2017). pwr: Basic functions for power analysis [Computer
software]. https://cran.r-project.org/web/packages/pwr/
Chia, S. C., & Lee, W. (2008). Pluralistic Ignorance About Sex: The Direct and the Indirect
Effects of Media Consumption on College Students’ Misperception of Sex-Related Peer
Norms. International Journal of Public Opinion Research, 20(1), 52–73.
https://doi.org/10.1093/ijpor/edn005
Chokshi, N. (2018, January 20). Yes, people are really eating Tide Pods. No, it’s not safe. New
York Times. https://www-nytimes-com.libproxy2.usc.edu/2018/01/20/us/tide-pod-challenge.html?smid=url-share
Chou, W.-Y. S., Oh, A., & Klein, W. M. P. (2018). Addressing Health-Related Misinformation
on Social Media. JAMA, 320(23), 2417. https://doi.org/10.1001/jama.2018.16865
Cialdini, R. B. (2003). Crafting Normative Messages to Protect the Environment. Current
Directions in Psychological Science, 12(4), 105–109. https://doi.org/10.1111/1467-8721.01242
Cialdini, R. B., Demaine, L. J., Sagarin, B. J., Barrett, D. W., Rhoads, K., & Winter, P. L.
(2006). Managing social norms for persuasive impact. Social influence, 1(1), 3-15.
Cialdini, R. B., Kallgren, C. A., & Reno, R. R. (1991). A focus theory of normative conduct: A
theoretical refinement and reevaluation of the role of norms in human behavior. In
Advances in experimental social psychology (Vol. 24, pp. 201-234). Academic Press.
Dechêne, A., Stahl, C., Hansen, J., & Wänke, M. (2010). The truth about the truth: A
meta-analytic review of the truth effect. Personality and Social Psychology Review,
14(2), 238-257.
Dunwoody, S. (1999). Scientists, Journalists, and the Meaning of Uncertainty. In
Communicating Uncertainty. Routledge.
Eysenbach, G., Powell, J., Kuss, O., & Sa, E.-R. (2002). Empirical Studies Assessing the Quality
of Health Information for Consumers on the World Wide Web: A Systematic Review.
JAMA, 287(20), 2691–2700. https://doi.org/10.1001/jama.287.20.2691
Festinger, L. (1954). A Theory of Social Comparison Processes. Human Relations, 7(2), 117–
140. https://doi.org/10.1177/001872675400700202
Fletcher, R., & Nielsen, R. K. (2018). Are people incidentally exposed to news on social media?
A comparative analysis. New Media & Society, 20(7), 2450–2468.
https://doi.org/10.1177/1461444817724170
Foster, J. L., Huthwaite, T., Yesberg, J. A., Garry, M., & Loftus, E. F. (2012). Repetition, not
number of sources, increases both susceptibility to misinformation and confidence in the
accuracy of eyewitnesses. Acta Psychologica, 139(2), 320–326.
https://doi.org/10.1016/j.actpsy.2011.12.004
Fragale, A. R., & Heath, C. (2004). Evolving Informational Credentials: The (Mis)Attribution of
Believable Facts to Credible Sources. Personality and Social Psychology Bulletin, 30(2),
225–236. https://doi.org/10.1177/0146167203259933
Funk, C., & Tyson, A. (2021, March 5). Growing Share of Americans Say They Plan To Get a
COVID-19 Vaccine – or Already Have. Pew Research Center.
https://www.pewresearch.org/science/2021/03/05/growing-share-of-americans-say-they-plan-to-get-a-covid-19-vaccine-or-already-have/
Goldstein, N. J., Cialdini, R. B., & Griskevicius, V. (2008). A Room with a Viewpoint: Using
Social Norms to Motivate Environmental Conservation in Hotels. Journal of Consumer
Research, 35(3), 472–482. https://doi.org/10.1086/586910
Gunther, A. C., Bolt, D., Borzekowski, D. L. G., Liebhart, J. L., & Dillard, J. P. (2006).
Presumed Influence on Peer Norms: How Mass Media Indirectly Affect Adolescent
Smoking. Journal of Communication, 56(1), 52–68. https://doi.org/10.1111/j.1460-2466.2006.00002.x
Iyengar, S. (1990). The Accessibility Bias In Politics: Television News And Public Opinion.
International Journal of Public Opinion Research, 2(1), 1–15.
https://doi.org/10.1093/ijpor/2.1.1
Jaffe, A. E., Graupensperger, S., Blayney, J. A., Duckworth, J. C., & Stappenbeck, C. A. (2022).
The role of perceived social norms in college student vaccine hesitancy: Implications for
COVID-19 prevention strategies. Vaccine, 40(12), 1888–1895.
https://doi.org/10.1016/j.vaccine.2022.01.038
Jucks, R., & Thon, F. M. (2017). Better to have many opinions than one from an expert? Social
validation by one trustworthy source versus the masses in online health forums.
Computers in Human Behavior, 70, 375–381. https://doi.org/10.1016/j.chb.2017.01.019
Kahan, D. M., Jenkins‐Smith, H., & Braman, D. (2011). Cultural cognition of scientific
consensus. Journal of Risk Research, 14(2), 147–174.
https://doi.org/10.1080/13669877.2010.511246
Kallgren, C. A., Reno, R. R., & Cialdini, R. B. (2000). A Focus Theory of Normative Conduct:
When Norms Do and Do not Affect Behavior. Personality and Social Psychology
Bulletin, 26(8), 1002–1012. https://doi.org/10.1177/01461672002610009
Kerr, J. R., & Wilson, M. S. (2018). Changes in perceived scientific consensus shift beliefs about
climate change and GM food safety. PLOS ONE, 13(7), e0200295.
https://doi.org/10.1371/journal.pone.0200295
Koehler, D. J. (2016). Can journalistic “false balance” distort public perception of consensus in
expert opinion? Journal of Experimental Psychology: Applied, 22(1), 24–38.
https://doi.org/10.1037/xap0000073
Kriegel, E. R., Lazarevic, B., Athanasian, C. E., & Milanaik, R. L. (2021). TikTok, Tide Pods
and Tiger King: Health implications of trends taking over pediatric populations. Current
Opinion in Pediatrics, 33(1), 170. https://doi.org/10.1097/MOP.0000000000000989
Kwan, L. Y.-Y., Yap, S., & Chiu, C. (2015). Mere exposure affects perceived descriptive norms:
Implications for personal preferences and trust. Organizational Behavior and Human
Decision Processes, 129, 48–58. https://doi.org/10.1016/j.obhdp.2014.12.002
Leiserowitz, A., Maibach, E., Roser-Renouf, C., Feinberg, G., & Rosenthal, S. (2014). Climate
Change in the American Mind: American’s Global Warming Beliefs and Attitudes in
April 2013. Yale Project on Climate Change Communication. Available at SSRN:
https://ssrn.com/abstract=2298705 or http://dx.doi.org/10.2139/ssrn.2298705
Lengauer, G., Esser, F., & Berganza, R. (2012). Negativity in Political News: A Review of
Concepts, Operationalizations and Key Findings. Journalism, 13(2), 179–202.
Lewandowsky, S., Gignac, G. E., & Vaughan, S. (2013). The pivotal role of perceived scientific
consensus in acceptance of science. Nature Climate Change, 3(4), Article 4.
https://doi.org/10.1038/nclimate1720
Lopez, G. (2021, June 2). The 6 reasons Americans aren’t getting vaccinated. Vox. Retrieved
December 11, 2022, from https://www.vox.com/2021/6/2/22463223/covid-19-vaccine-hesitancy-reasons-why
Maibach, E. W., & Van Der Linden, S. L. (2016). The importance of assessing and
communicating scientific consensus. Environmental Research Letters, 11(9), 091003.
https://doi.org/10.1088/1748-9326/11/9/091003
Mathur, S., Shanti, N., Brkaric, M., Sood, V., Kubeck, J., Paulino, C., & Merola, A. A. (2005).
Surfing for Scoliosis: The Quality of Information Available on the Internet. Spine,
30(23), 2695–2700. https://doi.org/10.1097/01.brs.0000188266.22041.c2
McCombs, M. (1997). Building Consensus: The News Media’s Agenda-Setting Roles. Political
Communication, 14(4), 433–443. https://doi.org/10.1080/105846097199236
McCombs, M. E., & Shaw, D. L. (1993). The Evolution of Agenda-Setting Research: Twenty-Five
Years in the Marketplace of Ideas. Journal of Communication, 43(2), 58–67.
https://doi.org/10.1111/j.1460-2466.1993.tb01262.x
McCombs, M., & Shaw, D. (1972). The Agenda-Setting Function of Mass Media. Public
Opinion Quarterly, 36, 176–187. https://doi.org/10.1086/267990
Miller, D. T., & Prentice, D. A. (1996). The construction of social norms and standards. In Social
psychology: Handbook of basic principles (pp. 799–829). The Guilford Press.
Miller, D. T., & Prentice, D. A. (2016). Changing Norms to Change Behavior. Annual Review of
Psychology, 67(1), 339–361. https://doi.org/10.1146/annurev-psych-010814-015013
Moehring, A., Collis, A., Garimella, K., Rahimian, M. A., Aral, S., & Eckles, D. (2023).
Providing normative information increases intentions to accept a COVID-19 vaccine.
Nature Communications, 14(1), 126. https://doi.org/10.1038/s41467-022-35052-4
Molek-Kozakowska, K. (2013). Towards a pragma-linguistic framework for the study of
sensationalism in news headlines. Discourse & Communication, 7(2), 173–197.
https://doi.org/10.1177/1750481312471668
Mullen, B., Atkins, J. L., Champion, D. S., Edwards, C., Hardy, D., Story, J. E., & Vanderklok,
M. (1985). The false consensus effect: A meta-analysis of 115 hypothesis tests. Journal
of Experimental Social Psychology, 21(3), 262–283. https://doi.org/10.1016/0022-1031(85)90020-4
Nelson, J. L. (2021). The next media regime: The pursuit of ‘audience engagement’ in
journalism. Journalism, 22(9), 2350–2367. https://doi.org/10.1177/1464884919862375
Nolan, J. M., Schultz, P. W., Cialdini, R. B., Goldstein, N. J., & Griskevicius, V. (2008).
Normative Social Influence is Underdetected. Personality and Social Psychology
Bulletin, 34(7), 913–923. https://doi.org/10.1177/0146167208316691
Nyhan, B., Reifler, J., & Richey, S. (2012). The Role of Social Networks in Influenza Vaccine
Attitudes and Intentions Among College Students in the Southeastern United States.
Journal of Adolescent Health, 51(3), 302–304.
https://doi.org/10.1016/j.jadohealth.2012.02.014
Paluck, E. (2009). What’s in a Norm? Sources and Processes of Norm Change. Journal of
Personality and Social Psychology, 96, 594–600. https://doi.org/10.1037/a0014688
Paluck, E. L. (2009). Reducing intergroup prejudice and conflict using the media: A field
experiment in Rwanda. Journal of Personality and Social Psychology, 96(3), 574–587.
https://doi.org/10.1037/a0011989
Paluck, E. L., & Shepherd, H. (2012). The salience of social referents: A field experiment on
collective norms and harassment behavior in a school social network. Journal of
Personality and Social Psychology, 103(6), 899–915. https://doi.org/10.1037/a0030015
Perkins, H. W., Linkenbach, J. W., Lewis, M. A., & Neighbors, C. (2010). Effectiveness of
social norms media marketing in reducing drinking and driving: A statewide campaign.
Addictive Behaviors, 35(10), 866–874. https://doi.org/10.1016/j.addbeh.2010.05.004
Prentice, D. A. (2018). Intervening to Change Social Norms: When Does It Work? Social
Research, 85(1), 115–139.
Prentice, D. A., & Miller, D. T. (1996). Pluralistic Ignorance and the Perpetuation of Social
Norms by Unwitting Actors. In M. P. Zanna (Ed.), Advances in Experimental Social
Psychology (Vol. 28, pp. 161–209). Academic Press. https://doi.org/10.1016/S0065-2601(08)60238-5
Reno, R., Cialdini, R., & Kallgren, C. (1993). The Transsituational Influence of Social Norms.
Journal of Personality and Social Psychology, 64, 104–112.
https://doi.org/10.1037/0022-3514.64.1.104
Robertson, C. E., Pröllochs, N., Schwarzenegger, K., Pärnamets, P., Van Bavel, J. J., &
Feuerriegel, S. (2023). Negativity drives online news consumption. Nature Human
Behaviour, 1–11. https://doi.org/10.1038/s41562-023-01538-4
Ross, L., Greene, D., & House, P. (1977). The “false consensus effect”: An egocentric bias in
social perception and attribution processes. Journal of Experimental Social Psychology,
13(3), 279–301. https://doi.org/10.1016/0022-1031(77)90049-X
Rutten, L. J. F., Squiers, L., & Hesse, B. (2006). Cancer-Related Information Seeking: Hints
from the 2003 Health Information National Trends Survey (HINTS). Journal of Health
Communication, 11(sup001), 147–156. https://doi.org/10.1080/10810730600637574
Schultz, P. W., Nolan, J. M., Cialdini, R. B., Goldstein, N. J., & Griskevicius, V. (2007). The
Constructive, Destructive, and Reconstructive Power of Social Norms. Psychological
Science, 18(5), 429–434. https://doi.org/10.1111/j.1467-9280.2007.01917.x
Schulz, B. (2022, September 29). Were people actually eating NyQuil chicken? Viral challenge
was the latest internet lore. USA Today.
https://www.usatoday.com/story/tech/2022/09/29/cooking-nyquil-chicken-challenge-hoax-fda-warnings/10444293002/
Schwarz, N. (2007). Attitude Construction: Evaluation in Context. Social Cognition, 25(5), 638–
656. https://doi.org/10.1521/soco.2007.25.5.638
Schwarz, N. (2012). Feelings-as-information theory. Handbook of theories of social psychology,
1, 289-308.
Schwarz, N., & Clore, G. L. (2007). Feelings and phenomenal experiences. Social psychology:
Handbook of basic principles, 2, 385-407.
Schwarz, N., Jalbert, M., Noah, T., & Zhang, L. (2021). Metacognitive experiences as
information: Processing fluency in consumer judgment and decision making. Consumer
Psychology Review, 4(1), 4-25.
Scullard, P., Peacock, C., & Davies, P. (2010). Googling children’s health: Reliability of medical
advice on the internet. Archives of Disease in Childhood, 95(8), 580–582.
https://doi.org/10.1136/adc.2009.168856
Simmonds, B. P., Stephens, R., Searston, R. A., Asad, N., & Ransom, K. J. (2023). The
Influence of Cues to Consensus Quantity and Quality on Belief in Health Claims.
Proceedings of the Annual Meeting of the Cognitive Science Society, 45(45).
https://escholarship.org/uc/item/73t0j4tc
Soroka, S., & McAdams, S. (2015). News, Politics, and Negativity. Political Communication,
32(1), 1–22.
Strough, J., Stone, E. R., Parker, A. M., & de Bruin, W. B. (2022). Perceived Social Norms
Guide Healthcare Decisions for Oneself and Others: A Cross-Sectional Experiment in a
US Online Panel. Medical Decision Making: An International Journal of the Society for
Medical Decision Making, 42(3), 326–340. https://doi.org/10.1177/0272989X211067223
Sturgis, P., Brunton-Smith, I., & Jackson, J. (2021). Trust in science, social consensus and
vaccine confidence. Nature Human Behaviour, 5(11), 1528–1534.
https://doi.org/10.1038/s41562-021-01115-7
Su, L. Y.-F., Akin, H., Brossard, D., Scheufele, D. A., & Xenos, M. A. (2015). Science News
Consumption Patterns and Their Implications for Public Understanding of Science.
Journalism & Mass Communication Quarterly, 92(3), 597–616.
https://doi.org/10.1177/1077699015586415
Suarez-Lledo, V., & Alvarez-Galvez, J. (2021). Prevalence of Health Misinformation on Social
Media: Systematic Review. Journal of Medical Internet Research, 23(1), e17187.
https://doi.org/10.2196/17187
Swire-Thompson, B., & Lazer, D. (2020). Public Health and Online Misinformation: Challenges
and Recommendations. Annual Review of Public Health, 41(1), 433–451.
https://doi.org/10.1146/annurev-publhealth-040119-094127
Tankard, M. E., & Paluck, E. L. (2016). Norm Perception as a Vehicle for Social Change:
Vehicle for Social Change. Social Issues and Policy Review, 10(1), 181–211.
https://doi.org/10.1111/sipr.12022
Thon, F. M., & Jucks, R. (2017). Believing in Expertise: How Authors’ Credentials and
Language Use Influence the Credibility of Online Health Information. Health
Communication, 32(7), 828–836. https://doi.org/10.1080/10410236.2016.1172296
Tsfati, Y., Boomgaarden, H. G., Strömbäck, J., Vliegenthart, R., Damstra, A., & Lindgren, E.
(2020). Causes and consequences of mainstream media dissemination of fake news:
Literature review and synthesis. Annals of the International Communication Association,
44(2), 157–173. https://doi.org/10.1080/23808985.2020.1759443
Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and
probability. Cognitive Psychology, 5(2), 207–232. https://doi.org/10.1016/0010-0285(73)90033-9
Udry, J., & Barber, S. J. (2023). The illusory truth effect: A review of how repetition increases
belief in misinformation. Current Opinion in Psychology, 101736.
van der Linden, S. L., Clarke, C. E., & Maibach, E. W. (2015). Highlighting consensus among
medical scientists increases public support for vaccines: Evidence from a randomized
experiment. BMC Public Health, 15(1), 1207. https://doi.org/10.1186/s12889-015-2541-4
van der Linden, S., Leiserowitz, A. A., Feinberg, G. D., & Maibach, E. W. (2015). The Scientific
Consensus on Climate Change as a Gateway Belief: Experimental Evidence. PLOS ONE,
10(2), e0118489. https://doi.org/10.1371/journal.pone.0118489
van der Linden, S., Leiserowitz, A., & Maibach, E. (2019). The gateway belief model: A
large-scale replication. Journal of Environmental Psychology, 62, 49–58.
https://doi.org/10.1016/j.jenvp.2019.01.009
van der Meer, T. G., & Hameleers, M. (2022). I Knew It, the World is Falling Apart! Combatting
a Confirmatory Negativity Bias in Audiences’ News Selection Through News Media
Literacy Interventions. Digital Journalism, 10(3), 473-492.
https://doi.org/10.1080/21670811.2021.2019074
van Stekelenburg, A., Schaap, G., Veling, H., van ’t Riet, J., & Buijzen, M. (2022). Scientific-Consensus Communication About Contested Science: A Preregistered Meta-Analysis.
Psychological Science, 33(12), 1989–2008. https://doi.org/10.1177/09567976221083219
Vellani, V., Zheng, S., Ercelik, D., & Sharot, T. (2023). The illusory truth effect leads to the
spread of misinformation. Cognition, 236, 105421.
https://doi.org/10.1016/j.cognition.2023.105421
Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science,
359(6380), 1146–1151. https://doi.org/10.1126/science.aap9559
Vraga, E. K., & Bode, L. (2017). Using Expert Sources to Correct Health Misinformation in
Social Media. Science Communication, 39(5), 621–645.
https://doi.org/10.1177/1075547017731776
Weaver, K., Garcia, S. M., Schwarz, N., & Miller, D. T. (2007). Inferring the popularity of an
opinion from its familiarity: A repetitive voice can sound like a chorus. Journal of
Personality and Social Psychology, 92(5), 821–833. https://doi.org/10.1037/0022-3514.92.5.821
Yousif, S. R., Aboody, R., & Keil, F. C. (2019). The Illusion of Consensus: A Failure to
Distinguish Between True and False Consensus. Psychological Science, 30(8), 1195–
1204. https://doi.org/10.1177/0956797619856844
Zhang, Y., Sun, Y., & Xie, B. (2015). Quality of health information for consumers on the web: A
systematic review of indicators, criteria, tools, and evaluation results. Journal of the
Association for Information Science and Technology, 66(10), 2071–2084.
https://doi.org/10.1002/asi.23311
Appendices
Appendix A
Experimental Stimuli: Chapter II
Experiment 1
E1 Headlines: Unvaccinated Focus
E1 Headlines: Vaccinated Focus
Experiment 2
E2 Headlines: Unvaccinated Focus
E2 Headlines: Vaccinated Focus
Experiment 3
E3 Headlines: Unvaccinated Focus
E3 Headlines: Vaccinated Focus
Appendix B
Table B1.
Claims used in Experiments 1, 2, and 3 with source and pretest ratings of believability

Counterbalance 1

Claim | Claim veracity | M | SD | N | Source
Pessimism may be inherited due to a genetic mutation | True | 2.47 | 1.22 | 108 | Ecker et al. (2020)
Honeybee stings are used in the treatment of arthritis | True | 2.13 | 1.04 | 108 | Ecker et al. (2020)
Leprosy can be cured with antibiotic treatment | True | 2.73 | 1.15 | 108 | CDC
Epilepsy can be caused by anything that hurts the brain | True | 2.58 | 1.13 | 108 | CDC
Women who have breastfed their infants have a decreased risk of developing Rheumatoid Arthritis | True | 2.43 | 1.10 | 108 | CDC
There is no cure for lead poisoning | True | 2.65 | 1.11 | 108 | CDC
Males are five times more likely than females to be struck by lightning | True | 2.13 | 1.19 | 106 | CDC
Your blood pressure is higher right after using marijuana | True | 2.89 | 1.14 | 108 | CDC
Taking folic acid may help protect against the risks related to arsenic poisoning | True | 2.32 | 1.05 | 108 | CDC
Frequently wearing silk garments in direct contact with the skin can cause spontaneous lactation | False | 1.82 | 0.94 | 108 | Ecker et al. (2020)
Fibers found in cow skin are now being added to Botox injections | False | 2.65 | 1.15 | 108 | Ecker et al. (2020)
The placenta is more likely to be on the right side of the uterus if the baby is a boy and on the left side if it is a girl | False | 2.04 | 0.96 | 108 | Swire et al. (2022)
70% of Americans are afflicted with a ‘vampire fungus’ that causes a number of chronic illnesses | False | 1.66 | 0.94 | 108 | Snopes
Gasoline is a recommended treatment for getting rid of lice | False | 1.60 | 1.00 | 108 | Snopes
Drinking cold water after meals can cause cancer | False | 1.36 | 0.84 | 108 | Snopes
Raw milk is healthier and more nutritious than pasteurized milk | False | 2.67 | 1.17 | 108 | CDC
Scabies are often spread in public swimming pools | False | 2.73 | 1.17 | 108 | CDC
The vitamin K shot can lead to cancer | False | 1.69 | 0.84 | 108 | CDC

Counterbalance 2

Claim | Claim veracity | M | SD | N | Source
Dandelion root extract is being tested as a cancer treatment | True | 2.74 | 1.07 | 107 | Ecker et al. (2020)
Losing fingerprints can be a side effect of chemotherapy | True | 2.49 | 1.14 | 107 | Snopes
Adults aged 65 and older are at a higher risk of getting food poisoning | True | 2.71 | 1.10 | 107 | CDC
There is no evidence that Lyme disease is transmitted from person-to-person | True | 2.64 | 1.20 | 107 | CDC
A lack of facial hygiene can lead to blindness | True | 1.80 | 0.99 | 107 | CDC
Breastfeeding during a child’s vaccination can reduce the pain of injection | True | 2.37 | 1.00 | 107 | CDC
Polio mainly affects children under 5 years of age | True | 3.07 | 1.24 | 108 | CDC
Lazy eye is the most common cause of vision loss in children | True | 2.51 | 0.99 | 107 | CDC
Tuberculosis bacteria can live in your body for a lifetime and never cause symptoms | True | 3.07 | 1.17 | 107 | CDC
The outer skin of a pineapple emits a dangerous toxin into the environment when it breaks down | False | 2.24 | 0.97 | 107 | Ecker et al. (2020)
The “redhead gene” is becoming extinct | False | 2.32 | 1.13 | 107 | Ecker et al. (2020)
Testosterone treatment helps older men retain their memory | False | 2.50 | 0.89 | 107 | Swire et al. (2022)
Disposable chopsticks contain carcinogens | False | 2.50 | 1.07 | 107 | Snopes
Shaking duvets has been found to increase heart attack risk | False | 1.64 | 0.90 | 107 | Snopes
Leprosy causes the fingers and toes to fall off | False | 2.48 | 1.12 | 106 | CDC
Only kids can have epilepsy | False | 1.40 | 0.76 | 107 | CDC
Women with hepatitis C should not breastfeed their babies | False | 3.26 | 1.25 | 107 | CDC
Boiling water removes radioactive material | False | 2.06 | 1.06 | 107 | CDC

Note. Believability was assessed with the question “How believable is this claim?” on a scale from 1 (not at all) to 5 (extremely).
Abstract
In this dissertation, I explore aspects of media influence, namely how exposure to a behavior or opinion in mainstream media or social media influences people’s perception of social consensus. Perceived social consensus figures prominently in decision-making and influences what people consider true and normal (Lewandowsky et al., 2013; Miller & Prentice, 2016). To date, research into the impact of perceived consensus has typically provided people with explicit consensus information, i.e., direct information about how many others engage in a given behavior (descriptive norms) or agree with a given opinion. This research strategy bypasses a crucial issue: Does mere exposure to behaviors and opinions in mainstream or social media influence perceived consensus, with downstream consequences for epistemic and behavioral decisions? To fill this gap, my experimental studies address how media exposure influences perceived consensus and explore its downstream consequences. I present three sets of studies (nine experiments in total) that investigate various contextual influences on perceptions of descriptive norms and scientific consensus. In Chapter I, I examine how the focus of news headlines can influence perceived descriptive norms of vaccination, and whether this has downstream effects on individuals’ vaccination intent (Arya, Jalbert, & Schwarz, 2024). In Chapter II, I investigate how exposure to news reporting on risky social media trends may influence perceptions of how widespread those trends are. In Chapter III, I turn to perceptions of scientific consensus and investigate how repeated exposure to information influences judgments of the scientific consensus behind that information.
Asset Metadata
Creator: Arya, Pragya (author)
Core Title: Perceived social consensus: a metacognitive perspective
School: College of Letters, Arts and Sciences
Degree: Doctor of Philosophy
Degree Program: Psychology
Degree Conferral Date: 2024-08
Publication Date: 06/27/2024
Defense Date: 05/01/2024
Publisher: University of Southern California, Los Angeles, California
Tags: descriptive norms, misinformation, scientific consensus, social consensus, social norms
Language: English
Advisor: Schwarz, Norbert (committee chair)
Permanent Link (DOI): https://doi.org/10.25549/usctheses-oUC1139971UE
Document Type: Dissertation
Rights: Arya, Pragya
Repository Name: University of Southern California Digital Library
Repository Location: USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA