HOW MISINFORMATION EXPLOITS MORAL VALUES AND FRAMING: INSIGHTS FROM
SOCIAL MEDIA PLATFORMS AND BEHAVIORAL EXPERIMENTS
by
Suhaib Abdurahman
A Thesis Presented to the
FACULTY OF THE USC DORNSIFE COLLEGE OF LETTERS, ARTS AND SCIENCES
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
MASTER OF ARTS
(PSYCHOLOGY)
August 2023
Copyright 2023 Suhaib Abdurahman
Table of Contents

List of Tables
List of Figures
Abstract
Chapter 1: Introduction
Chapter 2: Analyzing Moral Framing, User Stance, and Engagement of COVID-Related Tweets on Twitter
    Method
        Data Collection
        Procedure
        Measures
        Analysis Strategy
    Results
Chapter 3: Investigating the Relationship between Matching Moral Framing and User Moral Values and Responses to Shared Social Media Content
    Developing the Paradigm
        Method
            Participants
            Stimuli
            Procedure
            Measures
        Results
    Applying the Paradigm
        Method
            Participants
            Procedure
            Measures
            Analysis Strategy
        Results
Chapter 4: Examining the Influence of Deliberation and Analytical Thinking on Moral Framing Effects in Sharing Intentions
    Method
        Participants
        Procedure
        Measures
        Analysis Strategy
    Results
Chapter 5: Discussion
Chapter 6: Conclusions
Bibliography
Appendices
    Appendix A: Stimuli
    Appendix B: Questionnaire Items (Study 2a, 2b, 3)
    Appendix C: Foundation-Level Analysis of Social Media Posts
    Appendix D: Additional Analyses of Analytical Thinking
    Appendix E: Additional Analyses of Order Effects
    Appendix F: Additional Models
List of Tables

2.1 Comparison of models estimating engagement (retweet count) as a function of various predictor variables
2.2 Comparison of models estimating engagement (favorite count) as a function of various predictor variables
3.1 Comparison of preregistered models estimating sharing intentions as a function of various predictor variables
4.1 Comparison of preregistered models estimating sharing intentions as a function of various predictor variables
A1 Exemplary tweets showcasing moral framing and/or stance on COVID vaccinations and mandates
A2 List of headlines selected for Study 2b and Study 3
B1 In your opinion, the post the user has written about the headline is ...
B2 In your opinion, the post the user has written about the headline is ...
B3 How much do you agree or disagree with the post the user has written about the headline?
B4 How much does the post the user has written about the headline align with your values?
B5 If you came across this post, how likely would you be to ...
B6 Why did you decide to share (or not to share) the previous post? (Only Study 3)
F1 Comparison of models estimating engagement (favorite count) in Study 1 as a function of various predictor variables
F2 Comparison of models estimating engagement (retweet count) in Study 1 as a function of various predictor variables
F3 Effect sizes for Model 4 when controlling for headline veracity
F4 Effect sizes for Model 4 when controlling for headline veracity and familiarity
List of Figures

3.1 Distribution of political orientation across samples
3.2 Illustration of study flow
3.3 Exemplary stimulus presentation (shared news headline)
3.4 Results from the preregistered mediation analysis
4.1 Distribution of political orientation across samples
4.2 Results from the preregistered mediation analysis of the effect of aligning moral framing and moral values via deliberation
B1 Example stimulus presentation for the introduction
B2 Example stimulus presentation for headline-level items
B3 Example stimulus presentation for post-level items
B4 Example stimulus presentation for whole-stimulus items
D1 Replication of the mediation in Study 2b: Effect of matching moral values and framing via perceived agreement and alignment with the post
Abstract
Does targeting audiences’ core values facilitate the spread of misinformation? We investigate this question by analyzing real-world Twitter data (N = 20,235 users; 809,414 tweets) on misinformation regarding COVID vaccinations, in conjunction with a set of behavioral experiments. First, we use natural language processing to determine messages’ moral framing and stance on COVID vaccinations and mandates and find that an alignment of moral framing and stance (e.g., Binding framing and “anti-vax”) facilitates the spread of COVID vaccination misinformation. We then replicate our findings in three behavioral experiments, two of which were preregistered (N_2a = 615; N_2b = 505; N_3 = 533). We investigate how aligning messages’ moral framing with participants’ moral values impacts their intentions to share true and false news headlines and whether this effect is driven by a lack of analytical thinking. Our results show that framing a post such that it aligns with audiences’ moral values leads to increased sharing intentions, independent of headline veracity, headline familiarity, and participants’ political ideology. However, we find no effect of analytical thinking on misinformation sharing and plausibility concerns. Our findings suggest that (a) targeting audiences’ core values can be used to influence the dissemination of (mis)information on social media platforms, (b) partisan divides in misinformation sharing can be, at least partially, explained through alignment between audiences’ underlying moral values and the moral framing that often accompanies content shared online, and (c) this effect is driven by motivational factors rather than a lack of deliberation.
Chapter 1
Introduction
The prevalence of misinformation poses an imminent threat to our society. Increasingly, cyberattacks
leverage social media networks to malevolently influence audiences, undermining civil discourse by insti-
gating division and polarization [3, 64, 48, 108, 76, 47]. Most notably, malicious actors have manipulated
narratives, amplified inflammatory messages, and distorted public opinion, as highlighted by the US Senate Investigation Committee on Russian Interference into the 2016 US Election and the January 6th committee [114, 56, 115, 12, 74]. Similar adversarial operations have been documented in other democratic
countries all over the world, such as during the Brexit campaign in the UK or elections in Brazil and In-
dia [6]. The scope and severity of these attacks make it important to identify the specific psychological
strategies used by malicious actors to spread targeted misinformation in order to mitigate vulnerabilities
to such attacks.
Past research on misinformation has identified cognitive, affective, and social factors that drive the
belief in, and spread of, misinformation. Cognitive heuristics and peripheral cues such as familiarity, pro-
cessing fluency and cohesion have been found to increase acceptance of misinformation [32, 86], indepen-
dent of ability and prior knowledge [27, 35, 34]. Affective factors, such as mood and emotions, have been
linked to susceptibility to misinformation through increased reliance on processing fluency and decreased
skepticism [73, 40, 62]. Social factors, such as perceived source credibility, have been found to affect belief
in misinformation and people are generally more likely to trust sources that are aligned with their values
and worldview [32, 15, 70, 71].
A large body of literature further points to the role of prior beliefs in sharing and believing misinfor-
mation through motivated reasoning [63, 97]. Misinformation that aligns with one’s moral and political
attitudes is perceived as more accurate and reliable [110, 104, 32] and readers tend to share or leave positive
comments on content that resonates with their political beliefs [24, 80]. Furthermore, past research has
shown that using moral-emotional language generally increases the virality and spread of messages on
social media platforms, due to increased attention [14, 102, 13] and resonance with audiences [55, 1]. This
indicates that misinformation campaigns could utilize moral language not only to persuade users but also
to achieve extensive spread in these networks and thus reach a vast number of users. However, focusing
on the mere presence of moral language is too simplistic an approach to explain differences in behavior relating to misinformation. Some studies observe interaction effects between specific kinds of moral
language and person-level variables, such as ideology [33, 68, 61] and other demographics [61].
Related to the study of (moral) language used in messages shared online, framing effects have been
discussed in past research on judgments and behaviors regarding moral and political issues [94, 53]. Moral
framing can lead to persuasion even in highly partisan settings, such that political arguments that are
framed in line with audiences’ moral concerns are more successful in persuading audiences [26, 38, 107].
For instance, framing pro-environmental behavior in moral language relating to values typically endorsed
by conservatives (i.e., loyalty, purity, patriotism or duty), promoted pro-environmental attitudes among
conservatives even though these behaviors are typically associated with liberals [39, 37, 111]. Importantly,
the specific language and framing used influenced the acceptance of information beyond political beliefs
conveyed in the very same message (i.e., the message being pro-Democrat or pro-Republican). This sug-
gests that misinformation campaigns can gain efficacy in part by “personalizing” claims to resonate with
the core moral concerns of their intended audiences and gain legitimacy and strength from the value-laden
and moral claims they make. The use of moral framing can also lead to the moralization or sacralization
of issues [72] which in turn influences group behavior and attitudes, such as increasing polarization and
inciting outrage and violence against outgroups [28, 44]. Specifically, the moralization of issues can ac-
tivate moral convictions which are linked to rigid, absolutist mindsets [88] and thus an overt focus on
achieving morally mandated goals [90] by potentially engaging in and justifying extreme actions [90, 88,
89]. Therefore, it is critical to understand how misinformation campaigns use moral language in order to
mitigate these severe consequences.
Our work, thus, seeks to elucidate how misinformation campaigns can use moral framing to effec-
tively persuade their audiences and how this strategy plays out in the real world. Specifically, this work
investigates the effect of matching message framing and individuals’ values on the spread of targeted
(mis)information. Our work relies on the Moral Foundations Theory (MFT; [42]), an intuition-driven plu-
ralistic model of morality, to operationalize individuals’ moral values. In this model, moral values are
composed of two superordinate, bipolar categories [7, 42, 50, 43, 49, 51]: Individualizing (i.e., focused on
individuals’ rights and well-being) and Binding values (i.e., focused on group preservation)
∗
. This more spe-
cific and granular perspective on both the message content and individuals’ values provides additional nu-
ances to the psychological drivers of misinformation and the role of morality in people’s decision-making
in regard to information sharing. Adopting the MFT framework, our work adds to past literature that only
investigated the general presence of moral language in shared content [14, 102, 13] or the impact of align-
ing the content of misinformation and audience worldview on acceptance and spread of misinformation
[110, 104, 32, 24, 80].
We hypothesize that messages that align with audiences’ core moral values, independent of being true
or false, will be more effective than those which are misaligned or which do not target core moral val-
ues. We expect that, in the U.S., misinformation campaigns that rely on moral framing centered around
* Note that recent research suggests that these superordinate categories might be specific to Western cultures [7].
Binding values are more effective in specifically persuading political conservatives, and conversely, that
misinformation campaigns that rely on Individualizing framing are more effective in specifically persuad-
ing liberals to believe and share misinformation. Our hypotheses are based on the observation that, across
countries and cultures, liberals tend to prioritize Individualizing values over Binding values, while
conservatives value Individualizing and Binding values more equally [46]. A recent meta-analysis of 89
samples and 226,674 participants found that Individualizing values correlate negatively and Binding values
correlate positively with political conservatism [61].
Further, we test the hypothesis that the proposed effects of moral values and framing might be driven by
a lack of deliberation. Previous work has argued that “analytical thinking”, and more generally trait-level
deliberation tendency, reduces belief in and sharing of misinformation [80, 81] and that moral language
increases the spread of messages via increased attention capture [13]. In line with the classical reasoning
approach, which suggests that people share misinformation because they do not notice it is misinformation
(“lack of deliberate thinking”), it could be that aligned moral framing distracts participants from deliberat-
ing over sharing a post and thus from the shared information being false or implausible. If true, then the
effect of aligning moral values and message framing should be mediated by deliberating over sharing a
post. Alternatively, participants could be motivated by their intuitions of right and wrong that accompany
moralized posts (see work on motivated reasoning and specifically how moral values motivate behavior:
Kahan et al. [58] and Dehghani et al. [29]), with these intuitions superseding accuracy concerns. In that
case, there should not be an effect of deliberation, both trait-level and measured for each post, on sharing
of (mis)information.
To investigate the relationship between moral framing and responses to shared content, we conduct
two sets of studies. First, we analyze real-world social media (Twitter) conversations about COVID-19 vac-
cinations and mandates regarding the relationship between a message’s moral framing and the sender’s
stance on this issue. Second, we develop a paradigm that allows us to directly test how the specific match of
moral framing and audiences’ moral values affects responses to shared social media content in a controlled experimental setting. We then use this paradigm in two preregistered studies to confirm the proposed
effects and to shed light on the underlying psychological mechanisms. Together, our work provides addi-
tional insight into how the specific alignment of moral values and message framing may contribute to the
spread of (mis)information.
Chapter 2
Analyzing Moral Framing, User Stance, and Engagement of COVID-Related Tweets on Twitter
In Study 1, we analyze COVID-related content on Twitter regarding the relationship between tweets’ moral
framing, users’ stance on the topic, and liking or sharing of the tweets. We predict that moral framing that
matches values associated with a stance (e.g., liberal and Individualizing values) will lead to increased
sharing and liking of tweets. Previous research has documented that stance on COVID-19 vaccinations is
strongly related to political ideology [92, 22, 57, 60] with more conservatives endorsing anti-vaccination
(“anti-vax”) attitudes and more liberals endorsing pro-vaccination (“pro-vax”) attitudes. Since Individual-
izing values correlate negatively and Binding values correlate positively with political conservatism [see
61], we expect that content by “anti-vax” users would be shared more frequently, compared to content by
“pro-vax” users, when framed with Binding values. Conversely, we expect that content by “pro-vax” users
would be shared more frequently, compared to content by “anti-vax” users, when framed with Individu-
alizing values. We also expect to replicate previous findings of liberals prioritizing Individualizing over
Binding values and conservatives endorsing both equally [46]. This means that we predict Individualizing
framing to be more effective than Binding framing for content by “pro-vax” users, but both framings to be equally effective for content by “anti-vax” users. Note that this is a within-group comparison (i.e., within
“pro-vax” and within “anti-vax”), whereas the previous hypotheses were between-group comparisons (i.e.,
between “pro-vax” and “anti-vax”).
Method
We collected social media messages about COVID vaccinations and mandates from Twitter and used state-
of-the-art natural language processing methods to extract the messages’ moral framing and users’ stance
on this issue. Finally, we fit a model predicting liking and sharing of these messages as a function of
messages’ moral framing, users’ stance on COVID vaccinations, and their interaction.
DataCollection
We utilized an existing corpus of tweets, specifically rumors and misinformation, on COVID-19 vaccina-
tions and mandates compiled by Muric, Wu, Ferrara, et al. [75]. We collected a random sample of 809,414
tweets spanning from June 2021 to November 2021 (most current tweets at the time of data collection)
using the Twitter API. Other than the tweet text, we collected meta-data, including the user-id, dates,
number of retweets, and favorite count (i.e., “likes”).
Procedure
We used a Bidirectional Encoder Representations from Transformers (BERT)-based [30] classifier to deter-
mine the moral language in each tweet with the tweet text as input. Specifically, we used the pre-trained
BERT model “small BERT” [101] with L = 12 hidden layers (i.e., Transformer blocks), a hidden size of H = 256, and A = 4 attention heads. We added a downstream classification layer to the language model
to predict whether a tweet contained moral vs. non-moral language, and for the moral messages whether
these were framed using Individualizing or Binding foundations. We simultaneously trained the classifi-
cation layer and fine-tuned the embedding layers on the Moral Foundations Twitter Corpus [54], which is
an annotated corpus containing 35,108 tweets along with each tweet’s moral framing based on the Moral
Foundations framework [45]. The classifier achieved a cross-validated F1 score of 0.84 for moral/non-moral
message classification and 0.76 when predicting Binding vs. Individualizing framing.
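For concreteness, a minimal sketch of such a fine-tuning setup in the HuggingFace ecosystem is given below; the checkpoint name (a public small-BERT release with L = 12, H = 256, A = 4), column names, and hyperparameters are illustrative assumptions, not the exact pipeline used in the thesis.

```python
# Sketch: fine-tune a small BERT with a downstream classification layer.
# Assumes a DataFrame `mftc` with columns "text" and "label" (e.g., 0 = non-moral,
# 1 = moral) derived from the Moral Foundations Twitter Corpus.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

ckpt = "google/bert_uncased_L-12_H-256_A-4"  # small BERT: 12 layers, H=256, 4 heads
tok = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSequenceClassification.from_pretrained(ckpt, num_labels=2)

ds = Dataset.from_pandas(mftc).map(
    lambda batch: tok(batch["text"], truncation=True, padding="max_length",
                      max_length=64),
    batched=True,
)

args = TrainingArguments(output_dir="moral_clf", num_train_epochs=3,
                         per_device_train_batch_size=32)
# Training updates the classification head and the pre-trained layers jointly,
# i.e., the embeddings are fine-tuned along with the classifier.
Trainer(model=model, args=args, train_dataset=ds).train()
```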
We further inferred each user’s position on COVID vaccination and mandates. More specifically, we
employed an unsupervised stance detection method [25] which uses dimensionality reduction to project
users onto a low-dimensional space, followed by clustering, which allows identifying representative core
users. To classify the stance of each user in the corpus as either pro-vaccination (“pro-vax”) or anti-
vaccination (“anti-vax”), we compute the cosine similarity between each pair of users based on (1) (re-
)tweeting identical tweets; (2) the hashtags that users use; and (3) the accounts they retweet. We then
manually checked a random sample of 1000 users by evaluating the tweets and keywords they posted or
retweeted, and based on manual verification, the stance of 85% of users had been classified correctly.
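A simplified, self-contained sketch of this kind of pipeline is shown below, with toy data standing in for the real user-feature matrix; the study itself used the unsupervised stance detection method of [25], so this is only an approximation of the idea.

```python
# Sketch: group users into two stance clusters from behavioral features
# (shared tweets, hashtags, retweeted accounts).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)
X = rng.poisson(0.2, size=(200, 500)).astype(float)  # toy user-by-feature counts

sim = cosine_similarity(X)                                  # pairwise user similarity
low_dim = TruncatedSVD(n_components=2).fit_transform(sim)   # low-dimensional projection
stance = KMeans(n_clusters=2, n_init=10).fit_predict(low_dim)  # two stance clusters
# The cluster labels (0/1) are then mapped to "pro-vax"/"anti-vax" by manually
# inspecting representative core users in each cluster.
```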
Measures
In our final data set, each tweet, in addition to the number of retweets and “likes”, had the following
additional information associated with it:*
• Moral Framing: Whether it contained moral or non-moral language.
• Binding & Individualizing framing: Whether the moral messages were framed using Binding and/or
Individualizing or non-moral language.
• Stance: Whether the tweet comes from a user who is “pro-vax” or “anti-vax”.
In total, 64% of tweets were posted by “anti-vax” users (vs. 36% by “pro-vax” users) and 25% of tweets contained moral framing (vs. 75% non-moral framing), with 6% of tweets containing Binding framing and 18% Individualizing framing.
* See Table A1 for example messages covering the different framings and stances.
AnalysisStrategy
Focusing on our hypothesis, we analyzed our data to determine whether people engage more (measured
via the number of retweets and favorites) with a social media post if the framing of the post aligned with
the values associated with the posts’ stance (e.g., Binding values with “anti-vax” posts). Specifically, we ran
a series of negative binomial models that predicted the number of retweets or likes (separate outcome vari-
ables) as a function of various predictor variables. Model 0 estimated the number of likes as a function of
the user’s stance (“pro-vax” vs. “anti-vax”) and included a fixed intercept and a varying (random) intercept
accounting for variance across users, with the average number of tweets per user being 40. Model 1 ex-
tended Model 0 by estimating the number of likes as a function of a tweet’s moral framing (Individualizing
and Binding) and including a random effect accounting for variance in framing effects over users. Model
2 extended Model 1 by estimating the number of likes as a function of the interaction between a tweet’s
moral framing and the user’s stance. Model 2 functioned as our main model and tested the hypotheses of
moral framing and stance interactions being predictive of message engagement while showing the specific
underlying dynamic (e.g., the effect of Individualizing vs. Binding framing on likes for “pro-vax” tweets). We
further ran the same series of models with the number of retweets as an alternative outcome variable for
user engagement.
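To make the model structure concrete, here is a rough Python analogue of Model 2 using the bambi library (the thesis itself fits these models with the brms R package, as described next; column names are assumptions):

```python
# Sketch of Model 2: negative binomial regression of retweet count on the
# framing-by-stance interaction, with by-user random intercepts and framing slopes.
# Assumes a DataFrame `tweets` with columns: retweet_count, framing
# ("nonmoral"/"individualizing"/"binding"), stance ("pro-vax"/"anti-vax"), user_id.
import bambi as bmb

m2 = bmb.Model(
    "retweet_count ~ framing * stance + (framing | user_id)",
    tweets,
    family="negativebinomial",
)
idata = m2.fit(draws=2000, chains=4)  # MCMC sampling, analogous to brms/Stan
```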
To estimate these models, we used the ‘brms’ R package (Version 2.16.1) [17, 18] as an interface to fit
Bayesian generalized linear multilevel models in Stan [91]. Bayesian inference involves choosing a like-
lihood function and prior distributions. The likelihood function links the observed data to one or more
model parameters (e.g., regression coefficients) by expressing how likely the observed data would have
been for different values of said model parameters. Prior distributions state how plausible different values
of said model parameters are before considering the observed data. Our models used weakly informative
prior distributions, Student-t(3, 0, 2.5), for all model parameters. Bayesian inference applies Bayes’ theo-
rem to update prior distributions in light of the observed data to produce posterior distributions. Posterior
distributions state how plausible different values of the model parameters are given the observed data. We
report point estimates, based on the median of posterior samples, and 95% uncertainty intervals, based on
the quantiles of posterior samples, for relevant model parameters.
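These summaries follow directly from the posterior draws; for example, with simulated draws standing in for a coefficient's posterior:

```python
import numpy as np

draws = np.random.default_rng(1).normal(0.20, 0.06, size=4000)  # mock posterior draws
point = np.median(draws)                      # point estimate: posterior median
lo, hi = np.quantile(draws, [0.025, 0.975])   # 95% uncertainty interval
print(f"beta = {point:.2f}, [{lo:.2f}, {hi:.2f}]")
```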
We used 10-fold cross-validation to compare how well each model predicted engagement outside the sample used to estimate it. As a measure of out-of-sample prediction accuracy, we calculated each
model’s expected log predictive density (ELPD), that is, the logarithm of the joint posterior predictive prob-
ability of all observations. To compare models, we calculated the difference in out-of-sample prediction
accuracy for each pair of models (Δ_ELPD), with positive values indicating that a model made more accurate predictions than a comparison model [106]. We divided this difference by its standard error (z = Δ_ELPD/SE)
to account for the uncertainty of cross-validation as an estimate of out-of-sample prediction accuracy. We
selected a more complex over a simpler model when the difference in prediction accuracy was at least 1.96 times larger than its standard error.†
† For a conventional interpretation, consider the critical values to be |Δ_ELPD/SE| > 1.96 for p < .05; 2.58 for p < .01; and 3.29 for p < .001 in a two-sided null-hypothesis significance test of the difference in out-of-sample prediction accuracy.
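Applied to the first comparison reported below (M2 vs. M0 in Table 2.1), the selection rule reduces to simple arithmetic:

```python
delta_elpd, se = 59.8, 9.8   # difference in out-of-sample ELPD and its standard error
z = delta_elpd / se          # 6.10 > 1.96, so the more complex model (M2) is kept
print(round(z, 2))
```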
Results
Table 2.1 compares each model’s out-of-sample prediction accuracy of engagement, captured by retweet
count, to that of the null model without predictors (M0) and that of the other models with predictors (M1–
M2). We found that Model 2 —which included tweet’s moral framing (Binding and Individualizing) and
their interactions with the user’s stance (“pro-vax” and “anti-vax”)— predicted engagement more accu-
rately than Model 0 (Δ_ELPD = 59.8, SE = 9.8, z = 6.10) and Model 1 (Δ_ELPD = 25.7, SE = 6.1, z = 4.23),
indicating the relevance of matching moral framing and individuals’ values for the spread of social me-
dia messages. The between-group analyses show, as hypothesized, that tweets’ Individualizing fram-
ing predicted more (1.6 times) engagement when posted by “pro-vax” users compared to “anti-vax” users
(β = 0.20, [0.09, 0.31]). Conversely, Binding framing predicted more (1.7 times) engagement when posted by “anti-vax” users compared to “pro-vax” users (β = −0.23, [−0.42, −0.03]). The within-group analyses show, as hypothesized, that Individualizing framing predicted significantly more engagement (2.6 times) than Binding framing for “pro-vax” users (β = 0.41, [0.21, 0.60]), as well as no difference between the two framings for “anti-vax” users (β = −0.02, [−0.15, 0.11]).
Table 2.1: Comparison of models estimating engagement (retweet count) as a function of various predictor variables

Model  Description                          R²     z vs. M0  z vs. M1  z vs. M2
M0     Stance                               0.10   -         -3.63     -6.10
M1     Moral framing                        0.14   3.63      -         -4.23
M2     Moral framing & stance interaction   0.14   6.10      4.23      -

Note. R² is a Bayesian analogue to the proportion of within-sample variance explained by a model (not considering varying effects). z is the difference in out-of-sample prediction accuracy between two models divided by its standard error (z = Δ_ELPD/SE).
Analogously to Table 2.1, Table 2.2 compares each model’s out-of-sample prediction accuracy of en-
gagement, captured by favorite count, to that of the null model without predictors (M0) and that of the
other models with predictors (M1–M2). Supporting Hypothesis 1, Model 2 —that included tweets’ moral
framing (Binding and Individualizing) and their interactions with the user’s stance (“pro-vax” and “anti-
vax”)— predicted engagement more accurately than Model 0 (Δ_ELPD = 65.9, SE = 9.2, z = 7.16) and Model 1 (Δ_ELPD = 20.6, SE = 6.7, z = 3.07). The between-group analyses show, again as hypothesized, that tweets’ Individualizing framing predicted more (12 times) engagement when posted by “pro-vax” users compared to “anti-vax” users (β‡ = 1.08, [0.78, 1.38]). However, against our hypothesis, the effect of Binding framing did not differ for “pro-vax” and “anti-vax” users (β = −0.10, [−0.60, 0.41]). The within-group analyses show that, as hypothesized, Individualizing framing predicted more engagement (6.3 times) than Binding framing for “pro-vax” users (β = 0.80, [0.20, 1.36]). However, against our hypothesis, Binding framing predicted more engagement (2.5 times) compared to Individualizing framing for “anti-vax” users (β = −0.39, [−0.70, −0.06]).
‡ Note that for negative binomial regression, the regression coefficient expresses the difference in the log of the expected outcome count for a one-unit change of the predictor variable.
Table 2.2: Comparison of models estimating engagement (favorite count) as a function of various predictor variables

Model  Description                          R²      z vs. M0  z vs. M1  z vs. M2
M0     Stance                               0.047   -         -4.67     -7.16
M1     Moral framing                        0.075   4.67      -         -3.07
M2     Moral framing & stance interaction   0.076   7.16      3.07      -

Note. R² is a Bayesian analogue to the proportion of within-sample variance explained by a model (not considering varying effects). z is the difference in out-of-sample prediction accuracy between two models divided by its standard error (z = Δ_ELPD/SE).
Overall, these findings show that an alignment of moral framing and stance increases engagement
with social media messages. Specifically, we found that Individualizing framing facilitated engagement
for “pro-vax” (compared to “anti-vax”) tweets while Binding framing facilitated engagement for “anti-vax”
(compared to “pro-vax”) tweets. Furthermore, these results were found across both engagement metrics
(i.e., liking and retweeting) with only one exception: Binding framing showed no difference between the two groups for likes (though it did for retweets). This could be caused by user behavior differing for liking and
sharing tweets. For example, users could be less hesitant to like content that they would not share because
it is less public. Furthermore, we are not able to directly measure the political and moral values of the users
and instead use their stance (pro-vax vs. anti-vax) as a proxy. While attitudes towards COVID vaccinations
are indeed strongly polarized (most pro-vaxers being liberal and most anti-vaxers being conservative;
Cheng et al. [21]), there is a significant number of conservatives who are not “anti-vax”. These “pro-
vax” conservatives could engage with “pro-vax” messages that have a Binding framing. We therefore
address these limitations in Study 2 and Study 3, which directly investigate the relationship between moral
framing of messages, individuals’ moral values, and responses to shared social media content to explore
the underlying mechanisms of information sharing in a controlled experimental paradigm.
Chapter 3
Investigating the Relationship between Matching Moral Framing and User Moral Values and Responses to Shared Social Media Content
We conduct two behavioral experiments to (1) develop a paradigm for studying how moral framing affects
responses to shared social media content (Study 2a) and (2) use this paradigm to test our hypotheses about
the relationships between moral framing, moral values, and responses to shared social media content
(Study 2b). Study 2 was designed, first, to confirm that matching moral framing and moral values increases liking and sharing of online content in a controlled experimental paradigm and, second, to shed light on the underlying mechanisms that drive engagement with information shared online. Note that
whereas Study 1 focused on the apparent moral values of message sources, Study 2 focuses on the moral
values of message recipients. However, given that people tend to expose themselves to social media content
that agrees with their worldview [9, 2] and moral values [29, 87], it is very likely that audience engagement
(favorite and retweet count) measured in Study 1 were captured from users whose moral values matched
those of the relevant message source. Nevertheless, Study 2 addresses the aforementioned limitation of
Study 1 by directly investigating the relationship between messages’ moral framing and audiences’ moral
values.
Developing the Paradigm
In Study 2a, we developed a set of stimuli consisting of social media posts about either true or false news
headlines. These posts were framed to align with either Individualizing or Binding values or were framed
in nonmoral terms.
Method
Participants
We recruited 804 U.S. American Twitter users from the Prolific subject pool who, according to their re-
sponses to the Prolific prescreening questionnaire, were U.S. residents, used Twitter at least once a month,
and had posted on Twitter at least 1–3 times in the last 12 months. Our sample was stratified by gender (1/2 female, 1/2 male) and political orientation (1/3 liberal, 1/3 moderate, 1/3 conservative). We excluded participants
who failed at least one of three attention checks or whose responses conflicted with their responses to
the Prolific prescreening questionnaire. This left a final sample of 615 participants (Mdn = 32 years, age
range: 18–79 years; 304 women, 305 men, 6 other) of whom 205 identified as conservative, 205 identified
as moderate, and 205 identified as liberal. As Figure 3.1 shows, our sample spanned the whole spectrum
of political orientation.
Stimuli
To create the stimuli set, we selected 51 news headlines (23 true, 28 false) from the fact-checking website
snopes.com and created three social media posts for each news headline. Social media posts were designed
to look like Twitter posts, with information unrelated to the study (e.g., the date, the poster’s identity, and
profile picture) blurred. Specifically, we used moral reframing [38] to create, for each headline, three posts
that commented on the headline: one post that appealed to Binding values (Loyalty, Authority, Purity), one
post that appealed to Individualizing values (Care, Equality), and one post that avoided moral sentiment.
For each headline, we created posts that all either expressed negative sentiment (27) or positive sentiment
(24). This resulted in 51 (news headlines) × 3 (moral framings) = 153 social media posts.
For example, we created three social media posts for the true news headline: “Portland Named a New
Bridge After ‘The Simpsons’ Ned Flanders” [69]. Two posts commented on the headline in a way that
appealed either to Binding values (e.g., “I read this article and I can’t believe it! This bridge should be
named after a great American patriot, not a cartoon character!”) or to Individualizing values (“I read this
article and can’t believe it. We have so many civil rights leaders who go nameless and we give it to another
white man!”). Another post commented on the headline in nonmoral terms (e.g., “I read this article and am
surprised—a bridge named after a Simpsons character?! Ridiculous! People have too much time on their
hands!”). For this headline, all posts expressed negative sentiment.
Procedure
After agreeing to participate, participants responded to three questions that mirrored the questions in
the Prolific prescreening questionnaire. Provided that participants’ answers matched their prescreening
questionnaire, they were informed that the following pages would showcase social media posts, each con-
taining a news headline and a user’s written commentary. We informed them that some details about the
posts, such as who posted it and when, were omitted. Participants were instructed to answer each question
as if they had come across the post while using social media (e.g., Twitter or Facebook).
Participants then responded to randomly sampled social media posts, none of which were about the
same news headline. For each post, participants answered several questions about the shared headline,
and the post about the shared headline. They also rated how likely they would be to share the post if they
came across it. We used a planned missingness design so that each participant responded to 6 of 153 posts
and each post was rated by 15–35 participants. After responding to six posts, participants completed the
MFQ-2 and the demographic measures. On the final page, participants read that they had seen both real
and fake news headlines and were provided with a table of all headlines, showing which ones were true
and false.
Measures
For each social media post, participants rated how much the post aligned with their values on a 5-point
Likert scale (1 = strongly opposed to my values, 5 = strongly aligned with my values). Then, participants
completed the 36-item moral foundations questionnaire [MFQ-2, 8] which assesses to what extent partici-
pants endorse moral concerns about Care (e.g., “We should all care for people who are in emotional pain.”),
Equality (e.g., “The world would be a better place if everyone made the same amount of money.”), Propor-
tionality (e.g., “I think people who are more hard-working should end up with more money.”), Loyalty (e.g.,
“It upsets me when people have no loyalty to their country.”), Authority (e.g., “I believe that one of the most
important values to teach children is to have respect for authority.”), and Purity (e.g., “I believe chastity is
an important virtue.”; 1 = does not describe me at all, 5 = describes me extremely well). Items were presented
in random order with three additional attention checks embedded within the questionnaire (e.g., “To show
that you are paying attention and giving your best effort, please select ‘moderately describes me’.”). In
addition to the aforementioned two measures that were central to the purpose of Study 2a, participants in
Study 2a also completed a subset of additional measures used in Study 2b to facilitate exploratory analysis
and piloting (see an overview of our measures in section 6 of the supplemental materials).
Results
To select stimuli for Study 2b, we correlated participants’ responses to the question, “How much does the
post the user has written about the headline align with your values?”, with their endorsement of Bind-
ing and Individualizing values. To that end, we calculated an index of Binding values by averaging a participant’s endorsement of the Loyalty, Authority, and Purity foundations and an index of Individualizing values by averaging a participant’s endorsement of the Care and Equality foundations. We selected
those news headlines for which (1) for posts framed with Binding values, the correlations of participants’
ratings of the extent that the post aligned with their moral values were maximally more positive for Bind-
ing compared to Individualizing values, (2) for posts framed with Individualizing values, the correlations
of participants’ moral alignment ratings were maximally more positive for Individualizing compared to
Binding values, and (3) for posts with nonmoral framing, the correlations of participants’ moral align-
ment ratings with participants’ Binding and Individualizing values were smallest. Using these criteria, we selected the 5 × 2 (positive/negative sentiment) × 2 (true/false headline) = 20 best sets of 3 stimuli
(Binding/Individualizing/nonmoral framing) for use in future studies (see Table A2 in the supplemental
materials). In this way, Study 2a resulted in a paradigm that facilitates the investigation of how moral
framing affects responses to shared social media content.
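A compressed sketch of this selection criterion is given below, assuming a long-format DataFrame `ratings` with one row per participant-post response and columns headline, framing, alignment (the rating above), binding, and individualizing (the participant's value indices); all column names are illustrative.

```python
import pandas as pd

def specificity(g: pd.DataFrame, own: str, other: str) -> float:
    # How much more strongly alignment ratings track the "matching" value index
    return g["alignment"].corr(g[own]) - g["alignment"].corr(g[other])

binding_score = (ratings[ratings["framing"] == "binding"]
                 .groupby("headline")
                 .apply(specificity, own="binding", other="individualizing"))
indiv_score = (ratings[ratings["framing"] == "individualizing"]
               .groupby("headline")
               .apply(specificity, own="individualizing", other="binding"))
# Headlines are then ranked on these scores (plus the "smallest correlations"
# criterion for nonmoral posts) and the best 5 per sentiment x veracity cell kept.
```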
Applying the Paradigm
In Study 2b, we used the newly developed paradigm to test hypotheses about the relationships between
moral framing, moral values, and responses to shared social media content. We tested two preregistered
hypotheses, predicting that respondents would be more likely to share a social media post about a news
headline if the framing of the post aligned with their moral values (Hypothesis 1) and that they would do
so because they agreed with the post and because it aligned with their moral values (Hypothesis 2).
Method
We preregistered the sample size as well as all hypotheses, inclusion/exclusion criteria, statistical models,
measures, and manipulations (https://osf.io/f7r8d/?view_only=7e4b1b5e3c574be6848664235fbd41ca). We
made all materials, data, and analysis scripts available online (https://osf.io/z25tc/?view_only=0141845d12024a2cbdbd0f71f77f23a8).
Figure 3.1: Distribution of political orientation across samples
[Figure: histogram of the proportion of participants (0–15%) at each conservatism level (1–7), shown separately for Study 2a and Study 2b.]
Note. “How would you describe your political beliefs?” (1 = liberal, 7 = conservative). Dashed line shows
proportions expected under a uniform distribution.
Participants
We recruited 641 U.S. American Twitter users from the Prolific subject pool who, according to their re-
sponses to the Prolific prescreening questionnaire, were U.S. residents, used Twitter at least once a month,
who had posted on Twitter at least 1–3 times in the last 12 months, and who had not participated in Study
2a. We excluded 136 participants who failed at least one of three attention checks or whose responses
in our survey conflicted with their responses to the Prolific prescreening questionnaire. We had preregis-
tered that we would recruit a sample of 540 eligible participants, stratified by gender (1/2 female, 1/2 male) and self-identified political orientation (1/3 liberal, 1/3 moderate, 1/3 conservative). We found, however, that, after
recruiting 145 conservative participants, we exhausted the pool of eligible conservative participants in the
Prolific subject pool and concluded data collection. This left a final sample of 505 participants (Mdn = 32
years, age range: 18–79 years; 231 women, 269 men, 5 other) of whom 145 identified as conservative, 180
identified as moderate, and 180 identified as liberal. As Figure 3.1 shows, our sample spanned the whole
spectrum of political orientation.
Figure 3.2: Illustration of study flow
Note. Participants are presented with social media post containing a news headline and text. Participants
then give ratings on the headline-level & post-level and indicate their sharing intentions and deliberation over sharing (Study 3). Lastly, participants complete the MFQ-2, CRT-2 (Study 3), and demographic
questions.
Procedure
We used a planned missingness design that allowed both within-subject and between-subjects compar-
isons. In total, we included 2 (headline: true, false) × 2 (post: positive, negative sentiment) × 5 = 20 news headlines selected in Study 2a (Table A2) and thus 3 (Binding, Individualizing, nonmoral framing) × 20 (news headlines) = 60 social media posts. Each participant responded to six randomly sampled
social media posts, none of which were based on the same news headline. That is, the same participant
responded to posts using Binding, Individualizing, or nonmoral framings (within-subject comparison) but
different participants responded to posts using different framings of the same headline (between-subject
comparison). Each post was rated by 33–66 participants. See Figure 3.2 for an illustration of the general
study procedures and see Figure 3.3 for an example of how the stimuli from Table A2 were presented to the
participants. A summary of the stimuli presentation as well as the survey items for the post and headline
ratings can be found in the supplemental materials.
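A minimal sketch of this assignment scheme (headline IDs and framing labels are placeholders):

```python
import random

HEADLINES = range(20)                                  # the 20 selected headlines
FRAMINGS = ["binding", "individualizing", "nonmoral"]

def sample_posts(rng: random.Random) -> list[tuple[int, str]]:
    # Six posts per participant, no two about the same headline,
    # with framing varying within participant.
    headlines = rng.sample(list(HEADLINES), 6)
    return [(h, rng.choice(FRAMINGS)) for h in headlines]

print(sample_posts(random.Random(42)))
```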
Figure 3.3: Exemplary stimulus presentation (shared news headline).
Note. After stimulus presentation, participants are asked to give ratings on the headline-level (e.g., believ-
ability), post-level (e.g., surprising), and indicate their sharing intentions.
Study 2b followed the same procedure as Study 2a, except that, after responding to six posts, partici-
pants were again shown each of the posts and asked to indicate which of the headlines they thought were
true or false.
Measures
For each social media post, we used bipolar adjective ratings to measure how unbelievable–believable,
uncontroversial–controversial, unsurprising–surprising, uninteresting–interesting, and negative–positive
a participant rated the news headline as well as the post about the news headline (1–7). We also measured
how much a participant agreed or disagreed with the post that was written about the headline (1 = strongly disagree, 5 = strongly agree) and how much the post that was written about the headline aligned with the participant’s values (1 = strongly opposed to my values, 5 = strongly aligned with my values).
For each social media post, we also recorded how likely participants would be to share the post publicly
on their social media feed; ‘like’ the post; share the post in a private message, text message, or email; and
talk about the post or headline in an offline conversation (1 = very unlikely, 5 = very likely). We calculated an index of sharing intentions by averaging each participant’s responses to the four items for each post they responded to (α = 0.86). Participants were also asked to indicate whether they believed each headline
to be true or false (1 = true, 0 = false).
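For reference, the index and its internal consistency can be computed as follows, assuming an `items` DataFrame with one column per sharing item and one row per post response (a hypothetical layout):

```python
import pandas as pd

def cronbach_alpha(df: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the sum score)
    k = df.shape[1]
    return k / (k - 1) * (1 - df.var(ddof=1).sum() / df.sum(axis=1).var(ddof=1))

sharing_index = items.mean(axis=1)        # per-response mean of the four items
print(round(cronbach_alpha(items), 2))    # reported as 0.86 in the text
```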
In addition, participants completed the 36-item MFQ-2 [8] to measure how much they endorsed Binding
(Loyalty, Authority, Purity) and Individualizing (Care, Equality) values. Participants also responded to
demographic questions about their gender, education, and political beliefs.
AnalysisStrategy
We ran a series of multilevel linear regression models that estimated participants’ z-standardized sharing
intentions as a function of various predictor variables. Model 0 did not include any predictor variables and
estimated sharing intentions as a function of a fixed intercept and three varying (random) intercepts that
accounted for variance across posts, headlines, and participants. Model 1 extended Model 0 by estimating
sharing intentions as a function of ratings of how believable, controversial, surprising, interesting, and
positive a headline was perceived to be. We modeled headline-level predictor variables with the fixed effect
of the z-standardized average ratings of each headline and with the fixed and varying (across headlines)
effect of each participant’s z-standardized deviation from the average rating for each headline. Model 2
extended Model 0 by estimating sharing intentions as a function of ratings of how controversial, surprising,
interesting, and positive a post about a headline was perceived to be. We modeled post-level predictor
variables with the fixed effect of the z-standardized average ratings of each post and with the fixed and
varying (across posts) effect of each participant’s z-standardized deviation from the average rating for
each post. Model 3 mirrored Model 2 but included only post-level ratings of how much participants agreed
with the post, how much the post aligned with their values, and the interaction between the two. Model 4
extended Model 0 by estimating sharing intentions as a function of participants’ endorsement of Binding,
Individualizing, and proportionality values. We modeled participant-level predictor variables with the
fixed effect and varying (across headlines) effect of the participants’ z-standardized moral concerns, the
dummy-coded framing of each post, and the interaction between the two. Model 5 mirrored Model 4 but
included participants’ z-standardized conservatism instead of their endorsement of moral concerns.
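The within/between split used for the rating predictors in Models 1 and 2 amounts to group-mean centering; a sketch, assuming a long-format pandas DataFrame `d` with one row per participant-post response:

```python
# Decompose a z-standardized headline rating into the headline's average rating
# (between-headline part) and each participant's deviation from that average
# (within-headline part), entered as separate predictors.
d["interesting_avg"] = d.groupby("headline")["interesting_z"].transform("mean")
d["interesting_dev"] = d["interesting_z"] - d["interesting_avg"]
# Illustrative formula: share_z ~ interesting_avg + interesting_dev + ...
#                       + (interesting_dev | headline) + (1 | participant) + (1 | post)
```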
Table 3.1: Comparison of preregistered models estimating sharing intentions as a function of various predictor variables

Model  Description              R²    z vs. M0  z vs. M1  z vs. M2  z vs. M3  z vs. M4  z vs. M5
M0     No Predictors            .00   -         -13.11    -15.82    -11.08    -3.53     -1.16
M1     Headline-Level Ratings   .15   13.11     -         -4.16     -0.20     8.83      11.73
M2     Post-Level Ratings       .21   15.82     4.16      -         3.47      12.41     15.36
M3     Agreement/Alignment      .18   11.08     0.20      -3.47     -         8.30      11.00
M4     Moral Concerns           .08   3.53      -8.83     -12.41    -8.30     -         2.89
M5     Political Orientation    .02   1.16      -11.73    -15.36    -11.00    -2.89     -

Note. R² is a Bayesian analogue to the proportion of within-sample variance explained by a model (not considering varying effects). z is the difference in out-of-sample prediction accuracy between two models divided by its standard error (z = Δ_ELPD/SE).
Lastly, we ran a multivariate multilevel linear regression model (mediation) to estimate indirect effects of moral
concerns on sharing intentions via ratings of how much participants agreed with each post and how much
it aligned with their moral values. We estimated and evaluated these models analogously to Study 1, using
the ‘brms’ R package and 10-fold cross-validated ELPD scores.
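In such a model, indirect effects are computed per posterior draw as products of the path coefficients, so the reported uncertainty intervals reflect both paths jointly; with simulated draws standing in for the two path posteriors:

```python
import numpy as np

rng = np.random.default_rng(2)
a = rng.normal(0.45, 0.05, 4000)   # mock draws: values -> agreement/alignment path
b = rng.normal(0.70, 0.06, 4000)   # mock draws: agreement/alignment -> sharing path
indirect = a * b                   # indirect effect computed draw by draw
print(np.median(indirect), np.quantile(indirect, [0.025, 0.975]))
```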
Results
Preregistered Analyses Table 3.1 compares the models’ out-of-sample prediction accuracies to each
other. Supporting Hypothesis 1, Model 4—that included participants’ endorsement of Binding and Indi-
vidualizing values and their interactions with the moral framing of each social media post as predictor
variables—predicted sharing intentions more accurately than Model 0 (Δ_ELPD = 59.11, SE = 16.73, z = 3.53). As hypothesized, participants’ endorsement of Binding values predicted greater sharing intentions in the Binding framing condition (β = 0.26, [0.16, 0.36]) than in the Individualizing framing condition (β = 0.14, [0.03, 0.24]; Δβ = 0.12, [0.03, 0.21]) and, to a lesser extent, in the nonmoral framing condition (β = 0.20, [0.10, 0.30]; Δβ = 0.06, [−0.03, 0.15]). In other words, participants with Binding values had
greater sharing intentions for posts framed with Binding values (aligned) than posts with Individualizing
values (misaligned).
Likewise, participants’ endorsement of Individualizing values predicted greater sharing intentions
in the Individualizing framing condition (β = 0.23, [0.16, 0.31]) than in the Binding framing condition (β = 0.07, [−0.01, 0.14]; Δβ = 0.16, [0.09, 0.24]) and, to a lesser extent, in the nonmoral framing condition (β = 0.14, [0.06, 0.21]; Δβ = 0.10, [0.01, 0.18]). In other words, participants with Individualizing values had greater sharing intentions for posts framed with Individualizing values (aligned) than nonmoral posts (neutral) and posts with Binding values (misaligned). Participants’ endorsement of Proportionality concerns was unrelated to sharing intentions in all three framing conditions (β = 0.00, [−0.09, 0.09]; β = −0.05, [−0.14, 0.04]; β = −0.03, [−0.12, 0.06]).*
Model 4 predicted sharing intentions more accurately than Model 5 (Δ_ELPD = 50.90, SE = 16.90, z = 3.01), which predicted sharing intentions as a function of political orientation instead of moral concerns.
* Importantly, all effects held when controlling for headline veracity, which did not have a significant effect (β = −0.03, [−0.13, 0.07]) on sharing intentions. See Table F3 in the SI for effect sizes when controlling Model 4 for headline veracity.
Taken together, these findings emphasize the facilitatory effect of targeting people’s moral values on shar-
ing (mis)information. Models that estimated sharing intentions as a function of headline-level ratings (M1;
z = 8.96), of post-level ratings (M2; z = 12.52), or of post-level alignment and agreement ratings (M3;
z = 8.39) made more accurate out-of-sample predictions than the model that estimated sharing intentions
as a function of moral concerns and their interaction with moral framing (M4). Across the three models
(M1–M3), the most important predictors were to what extent a participant rated the headline to be inter-
esting (M1: β = 0.27, [0.22, 0.32]) and believable (M1: β = 0.11, [0.06, 0.15]); rated the post to be interesting (M2: β = 0.34, [0.31, 0.38]) and positive (M2: β = 0.13, [0.09, 0.16]); and agreed with the post (M3: β = 0.16, [0.11, 0.21]), considered the post to align with their moral values (M3: β = 0.22, [0.17, 0.28]), or both (M3: β = 0.14, [0.11, 0.16]). These findings were, perhaps, not surprising as the predictor variables
included in those models, especially Model 2, were more proximal to our outcome variable and related
to core motives of using social media (i.e., eliciting social connection/interactions: Al-Saggaf and Nielsen
[85], Wu and Atkin [112], and Sung et al. [93]). Nevertheless, our findings show that both perceived align-
ment of shared content and participant values, as well as the specific match between moral framing and
moral values, have a significant facilitating effect on sharing intentions.
To test Hypothesis 2, we estimated a Bayesian multilevel mediation model and compared, across the
three moral framing conditions, the total indirect effects of participants’ endorsement of Binding and In-
dividualizing values on sharing intentions via their ratings of how much they agreed with the post, how
much the post aligned with their moral values, and the interaction of the two ratings, while controlling for
headline veracity. Figure 3.4 provides an overview of the observed relationships. Supporting Hypothesis
2, we found that, first, participants’ endorsement of Binding values had a positive indirect effect on shar-
ing intentions in the Binding framing condition (β = .31, [.22, .41]) but a negative indirect effect in the Individualizing framing condition (β = −.12, [−.20, −.04]); second, participants’ endorsement of Individualizing values had a positive indirect effect on sharing intentions in the Individualizing framing condition (β = .30, [.21, .38]) but no indirect effect in the Binding framing condition (β = −.03, [−.10, .04]); and third, participants’ endorsement of Binding (β = .05, [−.03, .13]) and Individualizing (β = .03, [−.06, .11])
values had no indirect effect in the nonmoral framing condition.
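To make the structure of this analysis concrete, the following is a minimal sketch of how such a multivariate (mediation) model can be specified in brms, assuming hypothetical column names (alignment, sharing, binding_values, framing, veracity, participant, headline) and showing only one mediator for brevity; the preregistered models and scripts are available in the OSF repository.

```r
library(brms)

# Mediator model (a-path): moral values x framing predict perceived alignment.
f_mediator <- bf(alignment ~ binding_values * framing + veracity +
                   (1 | participant) + (1 | headline))

# Outcome model (b- and c'-paths): alignment and moral values x framing
# predict sharing intentions, controlling for headline veracity.
f_outcome <- bf(sharing ~ alignment + binding_values * framing + veracity +
                  (1 | participant) + (1 | headline))

fit <- brm(f_mediator + f_outcome + set_rescor(FALSE), data = d, cores = 4)

# The indirect effect is the product of the a- and b-paths, computed per
# posterior draw so the credible interval reflects the joint posterior.
# Parameter names follow brms's response-prefixed convention and depend on
# the actual model (shown here for the reference framing condition).
draws <- as_draws_df(fit)
indirect <- draws$b_alignment_binding_values * draws$b_sharing_alignment
quantile(indirect, probs = c(0.025, 0.5, 0.975))
```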
Therefore, supporting the original hypothesis of Study 1, the results of Study 2b showed that an alignment of moral framing and moral values (Binding values and Binding framing, or Individualizing values and Individualizing framing) indeed increases sharing of social media posts, even when controlling for veracity. In other words, aligning a post’s framing with a user’s core values will increase sharing intentions regardless of whether the post contains true or false content. Importantly, we also found that a match of framing and values predicts sharing intentions more accurately than other related variables, such as political ideology. Additionally, our findings that moral framing and political ideology interact to predict sharing (Study 1) but that moral values predict sharing more accurately than political ideology (Study 2) suggest that political ideology and moral values are linked but distinct concepts.
Figure 3.4: Results from the preregistered mediation analysis
Note. Results show a positive mediation (blue) for a match of moral framing and moral values, and no effect (grey) or a negative effect (red) for a mismatch.
Chapter 4
Examining the Influence of Deliberation and Analytical Thinking on Moral Framing Effects in Sharing Intentions
The results of Studies 1 and 2 support the hypothesis that aligning a social media post’s moral framing with
a user’s core values increases sharing intentions but leave open the underlying mechanism. For instance,
matching moral values and message framing could elicit a moral-emotional response that facilitates message sharing by distracting participants from deliberating and thus from carefully judging whether the
post is false or implausible. If so, then the effect of aligning moral values and message framing should be
mediated by deliberation. Alternatively, participants could be motivated by their intuitions of right and
wrong that accompany moralized posts and supersede accuracy concerns. In this case, there should not
be an effect of deliberation on sharing intentions. To test these hypotheses, Study 3 replicates Study 2b in
a pre-registered experiment and includes measures of deliberation.
We first replicate Study 2b and its original hypotheses, predicting that respondents would be more
likely to share a social media post about a news headline if the framing of the post aligns with their moral
values (Hypothesis 1). We then investigate whether these findings can be explained by deliberation: whether the effect of aligning posts’ moral framing with respondents’ moral values is mediated by how much respondents deliberate about sharing the post (Hypothesis 2) and whether susceptibility to this effect is moderated by trait-level analytical thinking (Hypothesis 3). As in previous work, we use the Cognitive Reflection Test (CRT-2; Thomson and Oppenheimer [99]) as a trait-level measure of analytical thinking. We also directly measure deliberation over sharing a post via ratings of how much a participant’s decision is guided by deliberation or intuition.
We preregistered the sample size as well as all hypotheses, inclusion/exclusion criteria, statistical models, measures, and manipulations∗ and made all materials, data, and analysis scripts available online†.
Method
Participants
We recruited 676 U.S. American Twitter users from the Prolific subject pool who, according to their re-
sponses to the Prolific prescreening questionnaire, were U.S. residents, used Twitter at least once a month,
posted on Twitter at least 1–3 times in the last 12 months, and who had not participated in Study 2a or 2b.
We excluded participants who failed at least one of three attention checks or whose responses conflicted
with their responses to the Prolific prescreening questionnaire. We had preregistered that we would recruit a sample of 540 eligible participants, stratified by gender (1/2 female, 1/2 male) and self-identified political orientation (1/3 liberal, 1/3 moderate, 1/3 conservative). After excluding participants with failed attention checks or missing data, we were left with a final sample of 533 participants (Mdn = 32 years, age range: 18–75 years; 265 women, 256 men, 12 other), of whom 178 identified as conservative, 177 as moderate, and 178 as liberal. As Figure 4.1 shows, our sample spanned the spectrum of political orientation.
∗ Preregistration Link
† OSF Repository Link
Figure 4.1: Distribution of political orientation across samples
[Figure: histogram of self-reported conservatism (x-axis: 1–7) with the percentage of participants (0–20%) on the y-axis.]
Note. “How would you describe your political beliefs?” (1 = liberal, 7 = conservative). Dashed line shows proportions expected under a uniform distribution.
Procedure
We used the same planned missingness design as in Study 2b, which allowed both within-subject and between-subjects comparisons. We included the same 2 (headline: true, false) × 2 (post: positive, negative sentiment) × 5 = 20 news headlines selected in Study 2a (Table A2) and used in Study 2b. In total, we included 3 (Binding, Individualizing, nonmoral framing) × 20 (news headlines) = 60 social media posts. Each participant responded to six randomly sampled social media posts, none of which were based on the same news headline. That is, the same participant responded to posts using Binding, Individualizing, or nonmoral framings (within-subject comparison), but different participants responded to posts using different framings of the same headline (between-subjects comparison). Each post was rated by 33–66 participants.
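As an illustration of this sampling scheme, the following is a small sketch in R under hypothetical variable names (the actual assignment code is part of the study materials in the OSF repository): each participant receives six posts drawn from the 3 × 20 stimulus pool such that no two posts share a headline.

```r
set.seed(1)
# Full stimulus pool: 20 headlines x 3 framings = 60 posts.
posts <- expand.grid(headline = 1:20,
                     framing  = c("Binding", "Individualizing", "nonmoral"))

sample_posts <- function(pool, n = 6) {
  # Draw n distinct headlines, then one framing per headline, so that no
  # participant sees two posts based on the same news headline.
  chosen <- sample(unique(pool$headline), n)
  do.call(rbind, lapply(chosen, function(h) {
    candidates <- pool[pool$headline == h, ]
    candidates[sample(nrow(candidates), 1), ]
  }))
}

sample_posts(posts)  # six posts, six different headlines
```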
Study 3 followed the same procedure as Study 2b, except that we removed the items asking participants whether the presented headlines were true or false, because those measures were used only for exploratory analyses outside the scope of this work.
Measures
We collected the same post and headline-level ratings as in Study 2b, such as how believable, controversial,
surprising, interesting, and negative participants rated the social media posts and news headlines, how
much they agreed with a post, how much it aligned with their values, and their sharing intentions. To
increase the robustness of our estimates, we added a measure of headline familiarity (unfamiliar–familiar; 1–7) as an additional control variable because familiarity is linked to perceived accuracy of news due to fluency effects [96, 82, 86]. Participants also indicated to what extent they deliberated or used intuition when deciding whether or not to share a post (bipolar items; intuition–deliberation; α = 0.65).
Participants again completed the 36-item MFQ-2 and responded to the same demographic questions about their gender, education, and political beliefs. Lastly, participants completed the Cognitive Reflection Test (CRT-2) [99], which measures to what extent participants generally think analytically or intuitively.
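For concreteness, the deliberation measure described above can be scored by averaging the bipolar items; a minimal sketch, assuming hypothetical item names delib_1 through delib_4 (1 = intuition, 5 = deliberation) in a long-format data frame d:

```r
library(psych)

items <- d[, c("delib_1", "delib_2", "delib_3", "delib_4")]
psych::alpha(items)   # internal consistency (alpha = .65 reported above)

# Per-response composite: mean across the four bipolar items.
d$deliberation <- rowMeans(items, na.rm = TRUE)
```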
Analysis Strategy
First, we replicated the series of multilevel linear regression models (M0–M5) from Study 2b that estimated participants’ z-standardized sharing intentions as a function of various predictor variables (M0: no predictors; M1: headline-level ratings; M2: post-level ratings; M3: agreement/alignment ratings; M4: moral concerns and their interaction with moral framing; M5: political orientation and its interaction with moral framing).
Second, we ran a multivariate multilevel linear regression model (mediation) to estimate the indirect effects of moral concerns on sharing intentions via ratings of how much participants deliberated about sharing each post. We also included analytical thinking (CRT-2) in this model as a potential moderator (higher trait-level analytical thinking should reduce susceptibility to the effect of moral framing).
Analogous to Study 1 and Study 2b, we used the ‘brms’ R package to estimate the generalized linear
multilevel models and used 10-fold cross-validated ELPD scores for model comparison.
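A minimal sketch of this workflow, assuming hypothetical variable names (sharing_z, framing, the moral-value scores, etc.); the actual preregistered scripts are available in the OSF repository:

```r
library(brms)

# M4: moral concerns and their interaction with the post's moral framing,
# with crossed random effects for participants and headlines.
m4 <- brm(sharing_z ~ (binding + individualizing + proportionality) * framing +
            (1 | participant) + (1 | headline),
          data = d, cores = 4)

# M5: political orientation instead of moral concerns.
m5 <- brm(sharing_z ~ conservatism * framing +
            (1 | participant) + (1 | headline),
          data = d, cores = 4)

# 10-fold cross-validated ELPD; the z statistic reported in Table 4.1 is
# the difference in ELPD divided by its standard error.
kf4 <- kfold(m4, K = 10)
kf5 <- kfold(m5, K = 10)
loo_compare(kf4, kf5)
```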
Table 4.1: Comparison of preregistered models estimating sharing intentions as a function of various predictor variables

                                           z
Model  Description             R²      M0      M1      M2      M3      M4      M5
M0     No Predictors          .00       -  -15.28  -14.93  -13.41   -3.68    0.06
M1     Headline-Level Ratings .16   15.28       -   -0.70   -1.48   10.94   14.40
M2     Post-Level Ratings     .18   14.93    0.70       -   -0.92   11.60   14.57
M3     Agreement/Alignment    .22   13.41    1.48    0.92       -   11.10   13.67
M4     Moral Concerns         .07    3.68  -10.94  -11.60  -11.10       -    3.66
M5     Political Orientation  .02   -0.06  -14.40  -14.57  -13.67   -3.66       -

Note. R² is a Bayesian analogue to the proportion of within-sample variance explained by a model (not considering varying effects). z is the difference in out-of-sample prediction accuracy between two models divided by its standard error (z = ∆ELPD/SE).
Results
Preregistered Analyses Replicating the analysis in Study 2b (Hypothesis 1), Table 4.1 compares each model’s out-of-sample prediction accuracy to that of the null model without predictors (M0) and to that of the other models with predictors (M1–M5). Supporting Hypothesis 1, Model 4, which included participants’ endorsement of Binding and Individualizing values and their interactions with the moral framing of each social media post as predictor variables, predicted sharing intentions more accurately than Model 0 (∆ELPD = 59.66, SE = 16.91, z = 3.68). As hypothesized, participants’ endorsement of Binding values predicted greater sharing intentions in the Binding framing condition (β = 0.26, [0.17, 0.34]) than in the Individualizing framing condition (β = 0.11, [0.02, 0.20]; ∆β = 0.14, [0.05, 0.23]) and, to a lesser extent, in the nonmoral framing condition (β = 0.15, [0.06, 0.24]; ∆β = 0.11, [0.02, 0.15]). In other words, participants with Binding values had greater sharing intentions for posts framed with Binding values (aligned) than for posts with Individualizing values (misaligned) or without moral framing (neutral).
Likewise, participants’ endorsement of Individualizing values predicted greater sharing intentions in the Individualizing framing condition (β = 0.26, [0.18, 0.34]) than in the Binding framing condition (β = 0.11, [0.03, 0.19]; ∆β = 0.15, [0.08, 0.22]) and, to a lesser extent, in the nonmoral framing condition (β = 0.13, [0.06, 0.21]; ∆β = 0.13, [0.05, 0.20]). In other words, participants with Individualizing values had greater sharing intentions for posts framed with Individualizing values (aligned) than for nonmoral posts (neutral) and posts with Binding values (misaligned). Participants’ endorsement of proportionality concerns was unrelated to sharing intentions in all three framing conditions (β = 0.01, [−0.08, 0.10]; β = −0.01, [−0.10, 0.08]; β = 0.02, [−0.07, 0.11]). Importantly, all effects held when controlling for headline veracity, which did not have a significant effect (β = −0.03, [−0.13, 0.07]) on sharing intentions, and headline familiarity, which had a positive effect on sharing intentions (β = 0.18, [0.14, 0.21]). See Table F4 for a comparison of effect sizes for Model 4 with and without controls for headline veracity and familiarity.
Consistent with Study 2b, Model 4 predicted sharing intentions more accurately than Model 5 (∆ELPD = 60.22, SE = 16.46, z = 3.66), which predicted sharing intentions as a function of political orientation instead of moral concerns. Overall, Study 3 successfully replicated the facilitatory effect of targeting people’s moral values on (mis)information sharing. Consistent with Study 2b, models that estimated sharing intentions as a function of headline-level ratings (M1; z = 10.94), of post-level ratings (M2; z = 11.60), or of post-level alignment and agreement ratings (M3; z = 11.10) made more accurate out-of-sample predictions than the model that estimated sharing intentions as a function of moral concerns and their interaction with moral framing (M4). Across the three models (M1–M3), the most important predictors were, consistent across both studies, to what extent a participant rated the headline to be interesting (M1: β = 0.26, [0.21, 0.32]), believable (M1: β = 0.13, [0.09, 0.17]), and familiar (added in Study 3; M1: β = 0.10, [0.07, 0.13]); rated the post to be interesting (M2: β = 0.35, [0.30, 0.39]) and positive (M2: β = 0.10, [0.07, 0.13]); and agreed with the post (M3: β = 0.17, [0.12, 0.22]), considered the post to align with their moral values (M3: β = 0.28, [0.23, 0.33]), or both (M3: β = 0.12, [0.09, 0.14]).
To test Hypothesis 2, which reflects an alternative explanation of our findings in Study 2b (that participants’ sharing behavior is not motivated by the desire to align with their own values but that participants are instead distracted from estimating message veracity or plausibility), we investigated whether deliberation about sharing a post mediated the effect of aligning a post’s moral framing and participants’ moral values. We estimated a Bayesian multilevel mediation model in which ratings of how much participants deliberated about sharing a post could mediate the effect of moral framing and moral values on sharing intentions, while controlling for headline veracity and familiarity. Figure 4.2 provides an overview of the mediation results. We found no evidence for a mediation. Alignment of moral values and moral framing did not predict less deliberation (β = 0.02, [−0.02, 0.07]; β = −0.00, [−0.05, 0.04]) and, importantly, deliberation did not predict lower sharing intentions for false news compared to true news (β = 0.02, [−0.03, 0.08]). Furthermore, analytical thinking did not moderate the effect of aligning moral values and moral framing (β = −0.01, [−0.06, 0.04]; β = −0.03, [−0.08, 0.03]), meaning that analytical thinking did not reduce susceptibility to moral framing (Hypothesis 3). We also ran an identical mediation model with response time for sharing a post as an alternative deliberation measure, with shorter response times indicating faster, more intuitive thinking. We again found no mediation: moral framing and values had no effect on response time (β = −0.04, [−0.13, 0.05]; β = 0.02, [−0.03, 0.07]), and longer response times (indicating deliberation) did not predict lower sharing intentions for false news compared to true news (β = 0.01, [−0.03, 0.05]). For a more detailed analysis of analytical thinking, see section 6 in the supplemental materials.
Summary Supporting the original hypotheses of Study 1 and Study 2b, Study 3 confirmed that an alignment of a post’s moral framing with users’ moral values indeed increased sharing of social media posts, even when controlling for veracity and familiarity. In other words, aligning a post’s framing with a user’s moral values will increase sharing intentions regardless of whether the post contains true or false content and of how familiar that content is. We also consistently found that a match between a user’s moral values and a post’s moral framing predicted sharing intentions more accurately than other related variables, such as political ideology. Furthermore, our results showed that matching post framing and user values increases sharing intentions independent of deliberative thinking. Specifically, alignment of posts’ framing and users’ moral values did not reduce deliberation over sharing a post, and deliberation did not affect the intention to share misinformation. Finally, trait-level analytical thinking did not moderate the effect of moral alignment, did not affect sharing of misinformation, and did not increase plausibility concerns, except for nonmoral posts (see additional analyses in the supplemental materials).

Figure 4.2: Results from the preregistered mediation analysis of the effect of aligning moral framing and moral values via deliberation
Note. Figure shows that deliberation does not mediate the effect of moral alignment on sharing intentions. Importantly, there is no effect of deliberation on sharing intentions.
Chapter 5
Discussion
We investigated the role of moral framing, specifically the match or mismatch between an individual’s moral values and a message’s moral framing, in belief in and sharing of social media posts. Across three studies,
one large-scale analysis of real-world conversations on Twitter and two behavioral experiments, we found
that a match of framing and values leads to increased sharing of (mis)information. Crucially, these effects
were found while controlling for message veracity and message familiarity.
These results are relevant for the current debate on psychological drivers of misinformation spread.
Our findings indicate that it is not just moral content but rather matched moral content that matters.
Importantly, our experimental manipulation (i.e., framing a message by using language centered around
Individualizing or Binding values) was independent of the message’s core contents, such as its main ar-
guments or partisanship. For example, a headline about the State Department charging Americans for evacuation flights could be framed using Individualizing language (e.g., “It is unfair that only the rich get saved”) or Binding language (e.g., “The government is betraying its poor citizens”) without changing the main argument that the government shouldn’t charge for evacuations, the negative sentiment, or the left-leaning partisanship. We created our stimuli by carefully crafting matched messages, staying away from obviously partisan content, and counterbalancing moral content across headline veracity. Because we avoided confounding message content and moral framing with political ideology, the absence of an effect of political ideology should not be misread as evidence that partisanship in messages or individual differences in conservatism play no role in (mis)information sharing. Rather, our results suggest that the effects of such variables are driven by underlying moral values. In the real world, where partisan messages are frequently accompanied by moralized language and arguments, and where partisan groups differ in their endorsement of moral values [46, 61], this can indeed lead to partisan differences in misinformation sharing, as observed in prior research [59, 104, 110].
Past work on misinformation has shown that more deliberative, analytical reasoning often leads to
less sharing of misinformation, indicating that more analytical individuals might be able to override initial sharing intentions [16, 78, 80, 79]. However, in Study 3, analytical and “lazy” thinkers did not differ in their sharing of misinformation or in how much they relied on plausibility cues. Furthermore, deliberation over
sharing a post did not predict lower sharing intentions of misinformation and did not mediate the effect
of aligning moral framing and moral values on misinformation sharing intentions, meaning that moral
framing did not simply distract participants from accuracy cues. It is possible that the effect of analytical
thinking is restricted to contexts that do not strongly evoke values, group identities, and threats thereof
(e.g., see the following work for failures to replicate the effect of analytical thinking or failures to exclude
motivational drivers for highly politically polarized, moralized, and identity-relevant issues: Osmundsen
et al. [77], Pretus et al. [83], Lee et al. [65], and Tandoc et al. [98]). It might be that in these contexts,
analytical thinking cannot override participants’ strong intuitions of right and wrong. Supporting this
line of reasoning, our additional analyses in section 6 of the supplemental materials found an effect of
analytical thinking on misinformation sharing but only for nonmoral stimuli.
While previous work on analytical thinking can, in certain contexts, explain why individuals eventually decide to share or not to share misinformation (and can thus help develop countermeasures, e.g., accuracy nudges), it still leaves open what makes individuals want to share misinformation in the first place. Our research helps fill this gap: people are motivated to share value-aligned, identity-affirming content. Our studies found that agreement and perceived moral alignment with a post increased sharing of misinformation, indicating motivational drivers, potentially further amplified by moral-emotional responses to the moral framing of posts that are aligned with one’s moral values. Some evidence for this idea comes from past research that found a link between emotional responses, analytical thinking, and believing and sharing of misinformation [67, 109]: emotional responses are linked to increased sharing of misinformation, while analytical thinking reduces misinformation sharing. Deliberation might be recruited to “rethink” this impulse, and thus to withhold an inaccurate message, only when accuracy concerns are strong enough or, conversely, when value- and identity-based motives are weak. Notable work that integrates both cognitive and motivational drivers of misinformation is the integrative approach by Van Bavel et al. [103]. This model acknowledges the influence of multiple motivational drivers (e.g., accuracy- or identity-based) on believing and sharing misinformation. Our findings can contribute to this work by clarifying the limitations and constraints of different drivers of misinformation and their potential interplay.
Our findings also complement current literature on affective and motivational drivers of responses
to (mis)information, which found that emotional responses, such as psychological discomfort [95], fear
[36], or anger [100], influence processing, believing, and sharing of misinformation [105]. Our work also
confirms past findings that moral-emotional content elicits more engagement on social media platforms
compared to non-moral-emotional content [14], and importantly, showcases that matching moral values
and moral message content increases user engagement. Future work should investigate how far the effect
of moral values and framing extends. For instance, past work has found that negative emotions, such as
fear, anger, or anxiety, have a lasting effect on the perception of misinformation even after (successful) cor-
rections [23, 100] and might moderate partisanship effects. It would, therefore, be fruitful to investigate
whether moral emotions (e.g., emotional responses from perceived moral transgressions) similarly im-
pact perception of misinformation. This is especially relevant considering that misinformation frequently
features moral-emotional appeals [113, 66, 41].
Our work is also in line with past research that used values-based messages appealing to core morality to influence individuals’ attitudes and behaviors on a range of topics, such as vaccinations [4], mask-wearing [59], or climate change [39, 38]. Specifically, the present work demonstrates that moral framing in line with recipients’ moral values can be used to make specific misinformation more believable and to increase sharing intentions. This work thus further extends the current literature on the effects of (moral) framing and re-framing on persuasion into the field of misinformation.
Our work comes with some limitations. Although our stimuli are arguably naturalistic – that is, we
analyzed real-world Twitter data in Study 1 and used realistic posts (including real news headlines) in
Study 2 and Study 3 – their presentation does not fully reproduce participants’ experience on social media
platforms. Specifically, due to logistical study limitations (e.g., survey length), we showed participants
the stimuli in Study 2 and Study 3 with no other (filler) content in-between, such as friends’ messages
or ads. Similarly, the stimuli shown may not reflect the type of content to which the participants are
usually exposed (e.g., due to user-specific social media algorithms). This is relevant as “echo chambers” are
frequently encountered on social media and most Americans see mostly ideologically concordant content
online [9]. Furthermore, this work focused on self-reported sharing intentions of social media posts on a
specific social media platform (i.e., Twitter). Future work should expand the scope of the current study to
investigate whether the effect of moral values and framing on belief and sharing of misinformation also
translates into real-world behaviors, such as patterns of sharing information online or offline and, especially, changes in behaviors relevant to the content users have seen (e.g., voting patterns or health behavior).
Lastly, this work did not account for habits in social media sharing behavior. Social media platforms are heavily invested in building a habitual user base, as users’ automatic behavior is monetized and critical to the platforms’ financial models [31, 10, 5]. Furthermore, social rewards (e.g., up-votes, likes, badges, notifications), which are powerful cues in habitual learning, are integral parts of these platforms’ designs [10, 11]. Users then build habits of sharing content that elicits social rewards (especially attention) but is not necessarily accurate, even content that runs against their own beliefs. As a result, a significant proportion of misinformation online is shared by highly habitual users [20]. In this context, future work should also investigate the role that moral values and message content play in building sharing habits. Moral values and message content may shape sharing habits because content that aligns with recipients’ values elicits more engagement (see this work, or Brady et al. [14] and Candia et al. [19]). As such, habitual sharing might lead to sharing moral-emotional content that elicits engagement instead of accurate content, thus facilitating the sharing of misinformation. Pennycook and Rand [81] further found that users’ sharing intentions for false headlines were significantly higher than their accuracy ratings, potentially indicating habitual sharing of headlines independent of veracity and accuracy judgments (see Herrero-Diz, Conde-Jiménez, and Reyes de Cózar [52] for a discussion of habitual sharing of news independent of veracity). In this way, cognitive factors, socio-affective factors, and habits might tie into an integrated system of sharing and believing misinformation online.
Chapter 6
Conclusions
Building on past work on socio-affective drivers of misinformation, moral psychology, and re-framing, we demonstrated how targeting audiences’ core moral values can increase their engagement with (mis)information online, thus facilitating its spread. Importantly, we find that it is not moral content per se that drives misinformation sharing but the match between a message’s moral content and an individual’s moral values. Framing content in line with target audiences’ core values (e.g., Individualizing or Binding values) will increase sharing of misinformation, even when the underlying arguments, partisanship, and worldview are kept constant. This indicates that partisan divides in misinformation sharing might be explained by the underlying moral values and beliefs of partisan groups. Importantly, our findings are independent of cognitive drivers, such as analytical thinking and familiarity with the content, further highlighting the role of motivational drivers behind (mis)information. As such, this work advances our understanding of the psychological mechanisms by which moral values and message framing interact, thereby enabling more sophisticated models that integrate characterizations of messages’ moral content and receivers’ core moral values to predict the success of social cyber-attacks. Ultimately, this research may offer a novel and important perspective on our post-truth world: simple, targeted re-framing of the same message contents can lead to higher acceptance and spread of misinformation.
Bibliography
[1] W Neil Adger, Catherine Butler, and Kate Walker-Springett. “Moral reasoning in adaptation to
climate change”. In: Environmental Politics 26.3 (2017), pp. 371–390.
[2] Luca Maria Aiello, Alain Barrat, Rossano Schifanella, Ciro Cattuto, Benjamin Markines, and
Filippo Menczer. “Friendship prediction and homophily in social media”. In: ACM Transactions on
the Web (TWEB) 6.2 (2012), pp. 1–33.
[3] Hunt Allcott and Matthew Gentzkow. “Social media and fake news in the 2016 election”. In:
Journal of economic perspectives 31.2 (2017), pp. 211–36.
[4] Avnika B Amin, Robert A Bednarczyk, Cara E Ray, Kala J Melchiori, Jesse Graham,
Jeffrey R Huntsinger, and Saad B Omer. “Association of moral values with vaccine hesitancy”. In:
Nature Human Behaviour 1.12 (2017), pp. 873–880.
[5] Ian A Anderson and Wendy Wood. “Habits and the electronic herd: The psychology behind social
media’s successes and failures”. In: Consumer Psychology Review 4.1 (2021), pp. 83–99.
[6] Sinan Aral and Dean Eckles. “Protecting elections from social media manipulation”. In: Science
365.6456 (2019), pp. 858–861.
[7] Mohammad Atari, Jesse Graham, and Morteza Dehghani. “Foundations of morality in Iran”. In:
Evolution and Human Behavior 41.5 (2020), pp. 367–384.
[8] Mohammad Atari, Jonathan Haidt, Jesse Graham, Sena Koleva, Sean T. Stevens, and
Morteza Dehghani. Morality beyond the weird: How the nomological network of morality varies
across cultures. Preprint. PsyArXiv, Mar. 2022. doi: 10.31234/osf.io/q6c9r.
[9] Eytan Bakshy, Solomon Messing, and Lada A Adamic. “Exposure to ideologically diverse news
and opinion on Facebook”. In: Science 348.6239 (2015), pp. 1130–1132.
[10] Joseph B Bayer, Ian Axel Anderson, and Robert Tokunaga. “Building and breaking social media
habits”. In: Current Opinion in Psychology (2022), p. 101303.
[11] Joseph B Bayer and Robert LaRose. “Technology habits: Progress, problems, and prospects”. In:
The psychology of habit (2018), pp. 111–130.
[12] Michael Bossetta. “The weaponization of social media: Spear phishing and cyberattacks on
democracy”. In: Journal of international affairs 71.1.5 (2018), pp. 97–106.
[13] William J Brady, Ana P Gantman, and Jay J Van Bavel. “Attentional capture helps explain why
moral and emotional content go viral.” In: Journal of Experimental Psychology: General 149.4
(2020), p. 746.
[14] William J Brady, Julian A Wills, John T Jost, Joshua A Tucker, and Jay J Van Bavel. “Emotion
shapes the diffusion of moralized content in social networks”. In: Proceedings of the National
Academy of Sciences 114.28 (2017), pp. 7313–7318.
[15] Pablo Brinol and Richard E Petty. “Source factors in persuasion: A self-validation approach”. In:
European review of social psychology 20.1 (2009), pp. 49–96.
[16] Michael V Bronstein, Gordon Pennycook, Adam Bear, David G Rand, and Tyrone D Cannon.
“Belief in fake news is associated with delusionality, dogmatism, religious fundamentalism, and
reduced analytic thinking”. In: Journal of applied research in memory and cognition 8.1 (2019),
pp. 108–117.
[17] Paul-Christian Bürkner. “Advanced Bayesian multilevel modeling with the R package brms”. In:
The R Journal 10.1 (May 2018), pp. 395–411. doi: 10.32614/RJ-2018-017.
[18] Paul-Christian Bürkner. “brms: An R package for Bayesian multilevel models using Stan”. In:
Journal of Statistical Software 80.1 (Aug. 2017). doi: 10.18637/jss.v080.i01.
[19] Cristian Candia, Mohammad Atari, Nour Kteily, and Brian Uzzi. “Overuse of moral language
dampens content engagement on social media”. In: (2022).
[20] Gizem Ceylan, Ian A. Anderson, and Wendy Wood. “Sharing of misinformation is habitual, not
just lazy or biased”. In: Proceedings of the National Academy of Sciences 120.4 (2023), e2216614120.
doi: 10.1073/pnas.2216614120. eprint: https://www.pnas.org/doi/pdf/10.1073/pnas.2216614120.
[21] Cindy Cheng, Joan Barceló, Allison Spencer Hartnett, Robert Kubinec, and Luca Messerschmidt.
“COVID-19 government response event dataset (CoronaNet v. 1.0)”. In: Nature human behaviour
4.7 (2020), pp. 756–768.
[22] Evan Clarkson and John D Jasper. “Individual differences in moral judgment predict attitudes
towards mandatory vaccinations”. In: Personality and Individual Differences 186 (2022), p. 111391.
[23] Michael D Cobb, Brendan Nyhan, and Jason Reifler. “Beliefs don’t always persevere: How
political figures are punished when positive information about them is discredited”. In: Political
Psychology 34.3 (2013), pp. 307–326.
[24] Jonas Colliander. ““This is fake news”: Investigating the role of conformity to other users’ views
when commenting on and spreading disinformation in social media”. In: Computers in Human
Behavior 97 (2019), pp. 202–215.
[25] Kareem Darwish, Peter Stefanov, Michaël Aupetit, and Preslav Nakov. “Unsupervised user stance
detection on twitter”. In: Proceedings of the International AAAI Conference on Web and Social
Media. Vol. 14. 2020, pp. 141–152.
[26] Martin V Day, Susan T Fiske, Emily L Downing, and Thomas E Trail. “Shifting liberal and
conservative attitudes using moral foundations theory”. In: Personality and Social Psychology
Bulletin 40.12 (2014), pp. 1559–1573.
[27] Jonas De Keersmaecker, David Dunning, Gordon Pennycook, David G Rand, Carmen Sanchez,
Christian Unkelbach, and Arne Roets. “Investigating the robustness of the illusory truth effect
across individual differences in cognitive ability, need for cognitive closure, and cognitive style”.
In: Personality and Social Psychology Bulletin 46.2 (2020), pp. 204–215.
[28] Morteza Dehghani, Scott Atran, Rumen Iliev, Sonya Sachdeva, Douglas Medin, and
Jeremy Ginges. “Sacred values and conflict over Iran’s nuclear program”. In: Judgment and
Decision making 5.7 (2010), p. 540.
[29] Morteza Dehghani, Kate Johnson, Joe Hoover, Eyal Sagi, Justin Garten, Niki Jitendra Parmar,
Stephen Vaisey, Rumen Iliev, and Jesse Graham. “Purity homophily in social networks.” In:
Journal of Experimental Psychology: General 145.3 (2016), p. 366.
[30] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. “Bert: Pre-training of deep
bidirectional transformers for language understanding”. In: arXiv preprint arXiv:1810.04805 (2018).
[31] Niall Docherty. “Facebook’s ideal user: Healthy habits, social capital, and the politics of
well-being online”. In: Social Media+ Society 6.2 (2020), p. 2056305120915606.
[32] Ullrich KH Ecker, Stephan Lewandowsky, John Cook, Philipp Schmid, Lisa K Fazio,
Nadia Brashier, Panayiota Kendeou, Emily K Vraga, and Michelle A Amazeen. “The psychological
drivers of misinformation belief and its resistance to correction”. In: Nature Reviews Psychology
1.1 (2022), pp. 13–29.
[33] Nikola Erceg, Zvonimir Galić, and Andreja Bubić. “The Psychology of Economic Attitudes–Moral
Foundations Predict Economic Attitudes beyond Socio-Demographic Variables”. In: Croatian
Economic Survey 20.1 (2018), pp. 37–70.
[34] Lisa K Fazio. “Repetition increases perceived truth even for known falsehoods”. In: Collabra:
Psychology 6.1 (2020).
[35] Lisa K Fazio, Nadia M Brashier, B Keith Payne, and Elizabeth J Marsh. “Knowledge does not
protect against illusory truth.” In: Journal of Experimental Psychology: General 144.5 (2015), p. 993.
[36] Jieyu Ding Featherstone and Jingwen Zhang. “Feeling angry: the effects of vaccine
misinformation and refutational messages on negative emotions and vaccination attitude”. In:
Journal of Health Communication 25.9 (2020), pp. 692–702.
[37] Matthew Feinberg and Robb Willer. “From gulf to bridge: When do moral arguments facilitate
political influence?” In: Personality and Social Psychology Bulletin 41.12 (2015), pp. 1665–1681.
[38] Matthew Feinberg and Robb Willer. “Moral reframing: A technique for effective and persuasive
communication across political divides”. en. In: Social and Personality Psychology Compass 13.12
(Dec. 2019). issn: 1751-9004. doi: 10.1111/spc3.12501. (Visited on 06/22/2022).
[39] Matthew Feinberg and Robb Willer. “The moral roots of environmental attitudes”. In:
Psychological science 24.1 (2013), pp. 56–62.
[40] Joseph P Forgas and Rebekah East. “On being happy and gullible: Mood effects on skepticism and
the detection of deception”. In: Journal of Experimental Social Psychology 44.5 (2008),
pp. 1362–1367.
[41] Bilal Ghanem, Simone Paolo Ponzetto, Paolo Rosso, and Francisco M. Rangel Pardo. “FakeFlow:
Fake News Detection by Modeling the Flow of Affective Information”. In: CoRR abs/2101.09810
(2021). arXiv: 2101.09810. url: https://arxiv.org/abs/2101.09810.
[42] Jesse Graham. “Mapping the moral maps: From alternate taxonomies to competing predictions”.
In: Personality and Social Psychology Review 17.3 (2013), pp. 237–241.
[43] Jesse Graham and Jonathan Haidt. “Beyond beliefs: Religions bind individuals into moral
communities”. In: Personality and social psychology review 14.1 (2010), pp. 140–150.
[44] Jesse Graham and Jonathan Haidt. “Sacred values and evil adversaries: A moral foundations
approach.” In: (2012).
[45] Jesse Graham, Jonathan Haidt, Sena Koleva, Matt Motyl, Ravi Iyer, Sean P Wojcik, and
Peter H Ditto. “Moral foundations theory: The pragmatic validity of moral pluralism”. In:
Advances in experimental social psychology. Vol. 47. Elsevier, 2013, pp. 55–130.
[46] Jesse Graham, Jonathan Haidt, and Brian A Nosek. “Liberals and conservatives rely on different
sets of moral foundations.” In: Journal of personality and social psychology 96.5 (2009), p. 1029.
[47] Nir Grinberg, Kenneth Joseph, Lisa Friedland, Briony Swire-Thompson, and David Lazer. “Fake
news on Twitter during the 2016 US presidential election”. In: Science 363.6425 (2019),
pp. 374–378.
[48] Andrew Guess, Brendan Nyhan, and Jason Reifler. “Selective exposure to misinformation:
Evidence from the consumption of fake news during the 2016 US presidential campaign”. In:
European Research Council 9.3 (2018), p. 4.
[49] Jonathan Haidt and Jesse Graham. “When morality opposes justice: Conservatives have moral
intuitions that liberals may not recognize”. In: Social Justice Research 20.1 (2007), pp. 98–116.
[50] Jonathan Haidt, Jesse Graham, and Craig Joseph. “Above and below left–right: Ideological
narratives and moral foundations”. In: Psychological Inquiry 20.2-3 (2009), pp. 110–119.
[51] Jonathan Haidt and Craig Joseph. “Intuitive ethics: How innately prepared intuitions generate
culturally variable virtues”. In: Daedalus 133.4 (2004), pp. 55–66.
[52] Paula Herrero-Diz, Jesús Conde-Jiménez, and Salvador Reyes de Cózar. “Teens’ motivations to
spread fake news on WhatsApp”. In: Social Media+ Society 6.3 (2020), p. 2056305120942879.
[53] Joe Hoover, Kate Johnson, Reihane Boghrati, Jesse Graham, and Morteza Dehghani. “Moral
framing and charitable donation: Integrating exploratory social media analyses and confirmatory
experimentation”. In: Collabra: Psychology 4.1 (2018).
[54] Joe Hoover, Gwenyth Portillo-Wightman, Leigh Yeh, Shreya Havaldar,
Aida Mostafazadeh Davani, Ying Lin, Brendan Kennedy, Mohammad Atari, Zahra Kamel,
Madelyn Mendlen, et al. “Moral foundations twitter corpus: A collection of 35k tweets annotated
for moral sentiment”. In: Social Psychological and Personality Science 11.8 (2020), pp. 1057–1071.
[55] Kristin Hurst and Marc J Stern. “Messaging for environmental action: The role of moral framing
and message source”. In: Journal of Environmental Psychology 68 (2020), p. 101394.
[56] Michael Jensen. “Russian trolls and fake news: Information or identity logics?” In: Journal of
International Affairs 71.1.5 (2018), pp. 115–124.
[57] Xiaoya Jiang, Min-Hsin Su, Juwon Hwang, Ruixue Lian, Markus Brauer, Sunghak Kim, and
Dhavan Shah. “Polarization over Vaccination: Ideological differences in twitter expression about
COVID-19 vaccine favorability and specific hesitancy concerns”. In: Social Media+ Society 7.3
(2021), p. 20563051211048413.
[58] Dan M Kahan, Ellen Peters, Erica Cantrell Dawson, and Paul Slovic. “Motivated numeracy and
enlightened self-government”. In: Behavioural public policy 1.1 (2017), pp. 54–86.
[59] Jonas Kaplan, Anthony Vaccaro, Max Henning, and Leonardo Christov-Moore. “Moral reframing
of messages about mask-wearing during the COVID-19 pandemic”. In: (2021).
[60] John Kerr, Costas Panagopoulos, and Sander van der Linden. “Political polarization on COVID-19
pandemic response in the United States”. In: Personality and individual differences 179 (2021),
p. 110892.
[61] J Matias Kivikangas, Belén Fernández-Castilla, Simo Järvelä, Niklas Ravaja, and
Jan-Erik Lönnqvist. “Moral foundations and political orientation: Systematic review and
meta-analysis.” In: Psychological Bulletin 147.1 (2021), p. 55.
[62] Alex S Koch and Joseph P Forgas. “Feeling good and feeling truth: The interactive effects of mood
and processing fluency on truth judgments”. In: Journal of Experimental Social Psychology 48.2
(2012), pp. 481–485.
[63] Ziva Kunda. “The case for motivated reasoning.” In: Psychological bulletin 108.3 (1990), p. 480.
[64] David Lazer, Matthew Baum, Nir Grinberg, Lisa Friedland, Kenneth Joseph, Will Hobbs, and
Carolina Mattsson. “Combating fake news: An agenda for research and action”. In: (2017).
[65] Sian Lee, Joshua P Forrest, Jessica Strait, Haeseung Seo, Dongwon Lee, and Aiping Xiong.
“Beyond cognitive ability: susceptibility to fake news is also explained by associative inference”.
In: Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems. 2020,
pp. 1–8.
[66] Stephan Lewandowsky, Ullrich KH Ecker, Colleen M Seifert, Norbert Schwarz, and John Cook.
“Misinformation and its correction: Continued influence and successful debiasing”. In:
Psychological science in the public interest 13.3 (2012), pp. 106–131.
[67] Ming-Hui Li, Zhiqin Chen, and Li-Lin Rao. “Emotion, analytic thinking and susceptibility to
misinformation during the COVID-19 outbreak”. In: Computers in Human Behavior 133 (2022),
p. 107295.
[68] Michelle Low, Ma Wui, and Glenda Lopez. “Moral foundations and attitudes towards the poor”.
In: Current Psychology 35.4 (2016), pp. 650–656.
Dan MacGuill. Yes, Portland really did name a new bridge after Ned Flanders. Sept. 2021. url:
https://www.snopes.com/fact-check/portland-ned-flanders-bridge/ (visited on 06/22/2022).
[70] Diane M Mackie, Leila T Worth, and Arlene G Asuncion. “Processing of persuasive in-group
messages.” In: Journal of personality and social psychology 58.5 (1990), p. 812.
[71] Ali Mahmoodi, Dan Bang, Karsten Olsen, Yuanyuan Aimee Zhao, Zhenhao Shi, Kristina Broberg,
Shervin Safavi, Shihui Han, Majid Nili Ahmadabadi, Chris D Frith, et al. “Equality bias impairs
collective decision-making across cultures”. In: Proceedings of the National Academy of Sciences
112.12 (2015), pp. 3835–3840.
[72] Morgan Marietta. “From my cold, dead hands: Democratic consequences of sacred rhetoric”. In:
The Journal of Politics 70.3 (2008), pp. 767–779.
[73] Cameron Martel, Gordon Pennycook, and David G Rand. “Reliance on emotion promotes belief in
fake news”. In: Cognitive research: principles and implications 5.1 (2020), pp. 1–20.
[74] Robert S Mueller III. “Report On The Investigation Into Russian Interference In The 2016
Presidential Election. Volumes I & II.(Redacted version of 4/18/2019)”. In: (2019).
[75] Goran Muric, Yusong Wu, Emilio Ferrara, et al. “COVID-19 vaccine hesitancy on social media:
building a public twitter data set of antivaccine content, vaccine misinformation, and
conspiracies”. In: JMIR public health and surveillance 7.11 (2021), e30642.
[76] Greg Nyilasy. “Fake news: When the dark side of persuasion takes over”. In: International Journal
of Advertising 38.2 (2019), pp. 336–342.
[77] Mathias Osmundsen, Alexander Bor, Peter Bjerregaard Vahlstrup, Anja Bechmann, and
Michael Bang Petersen. “Partisan polarization is the primary psychological motivation behind
political fake news sharing on Twitter”. In: American Political Science Review 115.3 (2021),
pp. 999–1015.
[78] Gordon Pennycook and David G Rand. “Cognitive reflection and the 2016 US Presidential
election”. In: Personality and Social Psychology Bulletin 45.2 (2019), pp. 224–239.
[79] Gordon Pennycook and David G Rand. “Fighting misinformation on social media using
crowdsourced judgments of news source quality”. In: Proceedings of the National Academy of
Sciences 116.7 (2019), pp. 2521–2526.
[80] Gordon Pennycook and David G Rand. “Lazy, not biased: Susceptibility to partisan fake news is
better explained by lack of reasoning than by motivated reasoning”. In: Cognition 188 (2019),
pp. 39–50.
[81] Gordon Pennycook and David G Rand. “The psychology of fake news”. In: Trends in cognitive
sciences 25.5 (2021), pp. 388–402.
[82] Gordon Pennycook and David G Rand. “Who falls for fake news? The roles of bullshit receptivity,
overclaiming, familiarity, and analytic thinking”. In: Journal of personality 88.2 (2020),
pp. 185–200.
[83] Clara Pretus, Camila Servin-Barthet, Elizabeth Harris, William Brady, Oscar Vilarroya, and
Jay Van Bavel. “The role of political devotion in sharing partisan misinformation”. In: (2022).
[84] Nils Karl Reimer, Mohammad Atari, Farzan Karimi-Malekabadi, Jackson Trager,
Brendan Kennedy, Jesse Graham, and Morteza Dehghani. “Moral values predict county-level
COVID-19 vaccination rates in the United States.” In: American Psychologist 77.6 (2022), p. 743.
[85] Yeslam Al-Saggaf and Sharon Nielsen. “Self-disclosure on Facebook among female users and its
relationship to feelings of loneliness”. In: Computers in Human Behavior 36 (2014), pp. 460–468.
[86] Norbert Schwarz, Eryn Newman, and William Leach. “Making the truth stick & the myths fade:
Lessons from cognitive psychology”. In: Behavioral Science & Policy 2.1 (2016), pp. 85–95.
[87] Maneet Singh, Rishemjit Kaur, Akiko Matsuo, SRS Iyengar, and Kazutoshi Sasahara.
“Morality-Based Assertion and Homophily on Social Media: A Cultural Comparison Between
English and Japanese Languages”. In: Frontiers in psychology (2021), p. 5081.
[88] Linda J Skitka, Christopher W Bauman, and Edward G Sargis. “Moral conviction: Another
contributor to attitude strength or something more?” In: Journal of personality and social
psychology 88.6 (2005), p. 895.
[89] Linda J Skitka and G Scott Morgan. “The social and political implications of moral conviction”. In:
Political psychology 35 (2014), pp. 95–110.
[90] Linda J Skitka and Elizabeth Mullen. “The dark side of moral conviction”. In: Analyses of Social
Issues and Public Policy 2.1 (2002), pp. 35–41.
Stan Development Team. RStan: The R interface to Stan. Version 2.26.3. 2021. url:
http://mc-stan.org/ (visited on 09/09/2021).
[92] Samuel Stroope, Rhiannon A Kroeger, Courtney E Williams, and Joseph O Baker.
“Sociodemographic correlates of vaccine hesitancy in the United States and the mediating role of
beliefs about governmental conspiracies”. In: Social Science Quarterly 102.6 (2021), pp. 2472–2481.
[93] Yongjun Sung, Jung-Ah Lee, Eunice Kim, and Sejung Marina Choi. “Why we post selfies:
Understanding motivations for posting pictures of oneself”. In: Personality and Individual
Differences 97 (2016), pp. 260–265.
[94] Cass R Sunstein. “Moral heuristics and moral framing”. In: Minn. L. Rev. 88 (2003), p. 1556.
[95] Mark W Susmann and Duane T Wegener. “The role of discomfort in the continued influence
effect of misinformation”. In: Memory & Cognition 50.2 (2022), pp. 435–448.
[96] Briony Swire, Ullrich KH Ecker, and Stephan Lewandowsky. “The role of familiarity in correcting
inaccurate information.” In: Journal of experimental psychology: learning, memory, and cognition
43.12 (2017), p. 1948.
[97] Charles S Taber and Milton Lodge. “Motivated skepticism in the evaluation of political beliefs”.
In: American journal of political science 50.3 (2006), pp. 755–769.
[98] Edson C Tandoc, James Lee, Matthew Chew, Fan Xi Tan, and Zhang Hao Goh. “Falling for fake
news: the role of political bias and cognitive ability”. In: Asian Journal of Communication 31.4
(2021), pp. 237–253.
[99] Keela S Thomson and Daniel M Oppenheimer. “Investigating an alternate form of the cognitive
reflection test”. In: Judgment and Decision making 11.1 (2016), pp. 99–113.
[100] Emily Thorson. “Belief echoes: The persistent effects of corrected misinformation”. In: Political
Communication 33.3 (2016), pp. 460–480.
[101] Iulia Turc, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. “Well-read students learn
better: On the importance of pre-training compact models”. In: arXiv preprint arXiv:1908.08962
(2019).
[102] Sebastián Valenzuela, Martina Piña, and Josefina Ramírez. “Behavioral effects of framing on social
media users: How conflict, economic, human interest, and morality frames drive news sharing”.
In: Journal of communication 67.5 (2017), pp. 803–826.
[103] Jay J Van Bavel, Elizabeth A Harris, Philip Pärnamets, Steve Rathje, Kimberly C Doell, and
Joshua A Tucker. “Political psychology in the digital (mis) information age: A model of news
belief and sharing”. In: Social Issues and Policy Review 15.1 (2021), pp. 84–113.
[104] Jay J Van Bavel and Andrea Pereira. “The partisan brain: An identity-based model of political
belief”. In: Trends in cognitive sciences 22.3 (2018), pp. 213–224.
[105] Ilse Van Damme and Karolien Smets. “The power of emotion versus the power of suggestion:
Memory for emotional events in the misinformation paradigm.” In: Emotion 14.2 (2014), p. 310.
[106] Aki Vehtari, Andrew Gelman, and Jonah Gabry. “Practical Bayesian model evaluation using
leave-one-out cross-validation and WAIC”. In: Statistics and Computing 27.5 (Aug. 2017),
pp. 1413–1432.doi: 10.1007/s11222-016-9696-4.
[107] Jan G Voelkel, Mashail Malik, Chrystal Redekopp, and Robb Willer. “Changing Americans’
attitudes about immigration: Using moral framing to bolster factual arguments”. In: The ANNALS
of the American Academy of Political and Social Science 700.1 (2022), pp. 73–85.
[108] Soroush Vosoughi, Deb Roy, and Sinan Aral. “The spread of true and false news online”. In:
science 359.6380 (2018), pp. 1146–1151.
[109] Rui Wang, Yuan He, Jing Xu, and Hongzhong Zhang. “Fake news or bad news? Toward an
emotion-driven cognitive dissonance model of misinformation diffusion”. In: Asian Journal of
Communication 30.5 (2020), pp. 317–342.
[110] Piotr Winkielman, David E Huber, Liam Kavanagh, and Norbert Schwarz. “Fluency of
consistency: When thoughts fit nicely and flow smoothly”. In: Cognitive consistency: A
fundamental principle in social cognition (2012), pp. 89–111.
[111] Christopher Wolsko, Hector Ariceaga, and Jesse Seiden. “Red, white, and blue enough to be
green: Effects of moral framing on climate change attitudes and conservation behaviors”. In:
Journal of Experimental Social Psychology 65 (2016), pp. 7–19.
[112] Tai-Yee Wu and David Atkin. “Online news discussions: Exploring the role of user personality
and motivations for posting comments on news”. In: Journalism & Mass Communication
Quarterly 94.1 (2017), pp. 61–80.
[113] Sara K Yeo and Meaghan McKasy. “Emotion and humor as misinformation antidotes”. In:
Proceedings of the National Academy of Sciences 118.15 (2021), e2002484118.
[114] Leon Yin, Franziska Roscher, Richard Bonneau, Jonathan Nagler, and Joshua A Tucker. “Your
friendly neighborhood troll: The Internet Research Agency’s use of local and fake news in the
2016 US presidential campaign”. In: SMaPP Data Report, Social Media and Political Participation
Lab, New York University (2018).
[115] Charles E Ziegler. “International dimensions of electoral processes: Russia, the USA, and the 2016
elections”. In: International Politics 55.5 (2018), pp. 557–574.
Appendices
Appendix A: Stimuli
Study 1
Table A1: Exemplary tweets showcasing moral framing and/or stance on COVID vaccinations and mandates

Topic: Nonmoral
Description: Does not contain moral language
Example: “They really think mandating vaccines on airplanes is gonna sway the unvaccinated, lol. I guess I’m gonna just drive...”

Topic: Individualizing
Description: Focused on individual rights and well-being
Example: “Under 12s are unvaccinated! We need to ensure all primary schools have safe air to prevent mass infections.”

Topic: Binding
Description: Focused on group preservation
Example: “Common law, natural law, God’s law. I will never consent and am both disgusted and horrified by people’s acquiescence... My body belongs to GOD!”

Topic: Pro-vax
Description: Endorsing COVID vaccines and mandates
Example: “Let’s get vaxxed!”

Topic: Anti-vax
Description: Opposing COVID vaccines and mandates
Example: “No. Do Not get Vaccinated!”
Study 2 & Study 3
Table A2: List of headlines selected for Study 2b and Study 3

#   Headline                                                                               True/False  Sentiment
1   Man Shoots Off His Left Ear Taking Selfies With Gun                                    False       Negative
2   Refugees Have 100 Times Greater Rate Of Tuberculosis Than National Average             False       Negative
3   Starbucks Is Giving Out Free Lifetime Passes On Its 44th Anniversary                   False       Negative
4   Man Infects 586 People With HIV On Purpose, Plans On Infecting 2,000 More Before 2024  False       Negative
5   Trillionaires Now Exist!                                                               False       Negative
6   America Is Now Reducing CO2 Emissions Much Faster Than Other Developed Countries       False       Positive
7   Black And White Wealth Gap Is Closing Fast                                             False       Positive
8   John Travolta Takes A New Wife After The Death Of Kelly Preston                        False       Positive
9   Polar Bears Are Strong and Healthy Across Alaska                                       False       Positive
10  Taco Bell Reportedly Going Out of Business                                             False       Positive
11  Twitter Is Making A ‘Dislike’ Button                                                   True        Negative
12  Crocs is Giving Away Free Footwear to Healthcare Workers                               True        Negative
13  Portland Named a New Bridge After ‘The Simpsons’ Ned Flanders                          True        Negative
14  State Department Charging Americans $2k For Flights Out Of Afghanistan                 True        Negative
15  Whitest-Ever Paint Could Help Cool Heating Earth                                       True        Negative
16  Hole In The Ozone Layer Will Totally Heal Within 50 Years                              True        Positive
17  An ‘Invisible Sculpture’ Sold for $18K                                                 True        Positive
18  A California Man Sued A Psychic For Allegedly Failing To Remove A Curse                True        Positive
19  ‘Bluetooth’ Technology Was Named After A Viking King                                   True        Positive
20  Scientists Detect Cocaine In Freshwater Shrimp                                         True        Positive

Note. Headlines within each combination of veracity and sentiment are ordered by the criterion described in the Results section of Study 2a.
Appendix B: Questionnaire Items (Study 2a, 2b, 3)
Introduction
On the next few pages, you will see examples of social media posts that look somewhat like this:
Each example contains a news headline a user has shared (here: [HEADLINE]) and a post the user has
written about the news headline (here: [POST]). For this study, we are leaving out some information about
the post (for example, who posted it and when). Please answer each of the questions as if you had come across the post while using social media (e.g., Twitter or Facebook). In total, you will answer questions about 5 posts.
Figure B1: Example stimulus presentation for the introduction
Headline-Level
On this page, please focus on the headline the user has shared:
Figure B2: Example stimulus presentation for headline-level items
Table B1: In your opinion, the headline the user has shared is . . .
Unbelievable 1 2 3 4 5 6 7 Believable
Uncontroversial 1 2 3 4 5 6 7 Controversial
Unsurprising 1 2 3 4 5 6 7 Surprising
Uninteresting 1 2 3 4 5 6 7 Interesting
Negative 1 2 3 4 5 6 7 Positive
Post-Level
On this page, please focus on the post the user has written about the headline:
Figure B3: Example stimulus presentation for post-level items
Table B2: In your opinion, the post the user has written about the headline is . . .
Uncontroversial 1 2 3 4 5 6 7 Controversial
Unsurprising 1 2 3 4 5 6 7 Surprising
Uninteresting 1 2 3 4 5 6 7 Interesting
Negative 1 2 3 4 5 6 7 Positive
Table B3: How much do you agree or disagree with the post the user has written about the headline?
Strongly disagree 1 2 3 4 5 Strongly agree
Table B4: How much does the post the user has written about the headline align with your values?
Strongly opposed to my values 1 2 3 4 5 6 7 Strongly aligned with my values
On this page, please focus on the entire social media post:
Figure B4: Example stimulus presentation for whole-stimulus items
Table B5: If you came across this post, how likely would you be to ...
... share the post publicly on your social media feed (e.g., retweet or share on your Facebook)?  1 2 3 4 5
... ‘like’ the post (e.g., Twitter or Facebook)?  1 2 3 4 5
... share the post privately in a private message, text message, or email?  1 2 3 4 5
... talk about the post or headline in an offline conversation?  1 2 3 4 5
Table B6: Why did you decide to share (or not to share) the previous post? (Only Study 3)
It just felt right/wrong without much thought  1 2 3 4 5  I carefully considered the information and implications
It matched my gut-feeling and intuitions  1 2 3 4 5  It matched my knowledge and experiences
It made me feel strongly  1 2 3 4 5  It made me very thoughtful
My social circle posts similar/different posts  1 2 3 4 5  I reflected on my own thoughts and opinions
Appendix C: Foundation-Level Analysis of Social Media Posts
For a more detailed analysis of the interaction between user stance and moral framing on user engagement (retweet count) in Study 1, we further investigated the individual moral foundations (Care, Fairness, Loyalty, Authority, and Purity).∗
∗ We trained another BERT-based classifier to detect the individual moral foundations, analogous to the methods in Study 1. The classifier achieved a cross-validated F1 score of 0.71.
Regarding Individualizing foundations (Care & Fairness), we found, in line with our hypothesis, that Care and Fairness framing predicted more (1.4 times; 3.9 times) engagement for “pro-vax” users compared to
“anti-vax” users (β = 0.15, [0.02, 0.28]; β = 0.59, [0.41, 0.78]). Regarding the Binding foundations (Loyalty, Authority, and Purity), we found, in line with our hypothesis, that Authority and Purity framing predicted significantly more (3.31 times; 11.7 times) engagement for “anti-vax” users compared to “pro-vax” users (β = −0.52, [−0.90, −0.10]; β = −1.07, [−1.68, −0.42]). However, opposite to our hypothesis, Loyalty framing predicted more (3.7 times) engagement for “pro-vax” compared to “anti-vax” users (β = 0.57, [0.07, 1.08]). Our retweet-count findings are therefore consistent with the expected relationship between moral values and stances (ideology-aligned moral framing increases engagement), with the single exception of Loyalty framing. Past work has observed that Loyalty values and framing can be linked to increased vaccination rates [84], which might explain this specific outlier. Importantly, however, this outlier was not large enough to influence the general finding that Binding framing facilitates “anti-vax” messages, as shown in the main analyses.
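Because the engagement models use a log link, coefficient contrasts translate into multiplicative engagement ratios. Below is a minimal sketch of that conversion; the helper name and the unit change `delta_x` are illustrative assumptions, since the scaling of the framing scores behind the reported "times" multipliers is not restated in this appendix.

```python
import numpy as np

# In a count model with a log link, a stance difference of beta in the slope of
# a framing score x multiplies expected engagement by exp(beta * delta_x) when
# x changes by delta_x. delta_x = 1.0 is an assumed unit change; the reported
# multipliers also depend on how the framing predictor was scaled.
def engagement_ratio(beta: float, delta_x: float = 1.0) -> float:
    return float(np.exp(beta * delta_x))

print(engagement_ratio(0.59))   # Fairness contrast: ~1.80x per unit of framing
print(engagement_ratio(-1.07))  # Purity contrast: <1, i.e., favors "anti-vax" users
```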
We further investigated the effects of each individual foundation on favorite count. Regarding the Individualizing foundations (Care and Fairness), we found, in line with our hypothesis, that Care and Fairness framing predicted more (4.2 times; 4.7 times) engagement for “pro-vax” compared to “anti-vax” users (β = 0.62, [0.27, 0.98]; β = 0.67, [0.17, 1.17]). Regarding the Binding foundations (Loyalty, Authority, and Purity), we found, opposite to our hypothesis, that Authority framing predicted significantly more (46.8 times) engagement for “pro-vax” users compared to “anti-vax” users (β = 1.67, [0.74, 2.62]). Furthermore, we found no difference between “pro-vax” and “anti-vax” users for Loyalty (β = −0.97, [−2.29, 0.46]) or Purity framing (β = −0.71, [−2.33, 1.03]). Thus, our findings are in line with our expectations for Individualizing framing but not for Binding framing, driven by a reversed effect of Authority.
Overall, the extended analysis shed light on which specific values drive the outlier observed in our main analyses in Study 1: Authority framing increasing liking of “pro-vax” tweets. These findings relate to some of the limitations discussed in the main paper. First, there could be conservatives (who endorse Binding values) that are not “anti-vax” and therefore endorse “pro-vax” arguments that include Authority framing. Similarly, there could be “pro-vax” arguments that are confounded with Authority language (e.g., “Respect experts/the law to keep everyone safe” relates to both Authority and Care/harm avoidance). Depending on the distribution of conservatives and liberals who are shown the respective tweet, this might then lead to the observed effect, even if liberals engage less with tweets that have a mismatched moral framing. However, it should be noted that we observe this outlier only for likes but not for retweets, where the effect was in the predicted direction. This could be caused by user behavior differing between liking and sharing tweets; for example, users could be less hesitant to like content that they would not share, because liking is less public.
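As a companion to the footnote above, the following is a minimal sketch of how such a multi-label moral-foundation classifier could be set up with the HuggingFace transformers library. The base checkpoint, label handling, and threshold are illustrative assumptions rather than the thesis's published training code, and the model would require fine-tuning on annotated tweets before its outputs are meaningful.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

FOUNDATIONS = ["care", "fairness", "loyalty", "authority", "purity"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=len(FOUNDATIONS),
    problem_type="multi_label_classification",  # a tweet can invoke several foundations
)

def predict_foundations(text: str, threshold: float = 0.5) -> dict:
    """Return per-foundation probabilities for a single tweet.
    Note: without fine-tuning on labeled data, these outputs are random."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)
    with torch.no_grad():
        logits = model(**inputs).logits.squeeze(0)
    probs = torch.sigmoid(logits)  # independent sigmoids for multi-label output
    return {f: (p.item(), p.item() >= threshold) for f, p in zip(FOUNDATIONS, probs)}

print(predict_foundations("Respect experts and the law to keep everyone safe."))
```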
Appendix D: Additional Analyses of Analytical Thinking
Study 3 replicated the results of Study 2b and tested whether lack of deliberation could be an alternative
explanation. We tested whether an alignment of moral values and framing distracted participants from
post accuracy and plausibility (via reducing deliberation), leading to more sharing of misinformation. We
did not find that deliberation of posts mediated the effect of aligning a participant’s moral values and
a post’s framing. To further strengthen these findings, we directly tested whether deliberation reduced
sharing of false (vs true) news by fitting four additional linear regression models.
First, we fitted model M7, which predicted sharing intentions as a function of analytical thinking, headline veracity, and their interaction. This model tested whether analytical thinking (CRT-2) reduced sharing of false (vs. true) news. We found no effect for this relationship (β = 0.03, [−0.04, 0.08]). Second, we fitted model M8, which predicted sharing intentions as a function of deliberating over sharing a post, headline veracity, and their interaction. This model tested whether deliberation over each sharing decision reduced sharing of fake (vs. true) news. We again found no evidence for this relationship (β = 0.01, [−0.09, 0.11]). Furthermore, these models did not predict sharing intentions more accurately than the null model (ΔELPD = −2.61, SE = 8.52, z = −0.31; ΔELPD = 7.70, SE = 9.04, z = 0.85), which predicted sharing intentions as a function of random intercepts for participants, headlines, and posts. Third, we fitted model M9, which predicted sharing intentions as a function of headline veracity, analytical thinking (CRT-2), headline believability, and their interaction. This model tested whether analytical thinking increased accuracy and plausibility considerations, as argued by past work [80]. If analytical thinking results in participants estimating and considering plausibility, then there should be a positive interaction of CRT and headline believability, meaning that analytical thinkers should have higher plausibility concerns than “lazy” thinkers [80]. We found no effect for this relationship (β = −0.03, [−0.08, 0.02]). Fourth, focusing only on nonmoral stimuli, we fitted model M10, which predicted sharing intentions as a function of analytical thinking, headline veracity, and their interaction. This model investigated whether the failure to replicate past findings of analytical thinking reducing misinformation sharing was caused by most of our stimuli being moralized (two thirds). It could be that, for moral-emotional stimuli, accuracy concerns were superseded by participants’ intuitions of right and wrong. Supporting this idea, we find that, for nonmoral stimuli, analytical thinking reduces sharing of misinformation (β = −0.10, [−0.17, −0.02]) but not of true information (β = −0.04, [−0.14, 0.05]).
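The ΔELPD comparisons above contrast models by leave-one-out out-of-sample prediction accuracy. A minimal sketch with ArviZ follows; the toy InferenceData objects stand in for fitted Bayesian models, since the appendix does not specify its software stack here.

```python
import numpy as np
import arviz as az

rng = np.random.default_rng(0)

def toy_idata(shift: float):
    # Toy InferenceData with a log_likelihood group of shape (chain, draw, obs),
    # standing in for the fitted M7/M8/null models from a PPL such as PyMC/Stan.
    ll = rng.normal(loc=-1.0 + shift, scale=0.1, size=(4, 500, 200))
    return az.from_dict(log_likelihood={"obs": ll})

comp = az.compare({"null": toy_idata(0.0), "M7": toy_idata(-0.01)}, ic="loo")
# z as used in the appendix: difference in out-of-sample ELPD divided by its
# standard error (z = delta_ELPD / SE); elpd_diff and dse are 0 for the top model.
print(comp[["elpd_loo", "elpd_diff", "dse"]])
```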
We further replicated the mediation analysis from Study 2b, which tested whether the effect of matching a participant’s moral values and a post’s moral framing on sharing intentions was mediated by agreement and moral alignment with the post. As in Study 2b, we compared, across the three moral framing conditions, the total indirect effects of participants’ endorsement of Binding and Individualizing values on sharing intentions via their ratings of how much they agreed with the post, how much the post aligned with their moral values, and the interaction of the two ratings, while controlling for headline veracity and, additionally, headline familiarity. Figure D1 provides an overview of the observed relationships. Supporting the original Hypothesis 2 in Study 2b, we found that participants’ endorsement of Binding values had a positive indirect effect on sharing intentions in the Binding framing condition (β = .21, [.15, .27]) but a negative indirect effect in the Individualizing framing condition (β = −.10, [−.15, −.05]). Furthermore, participants’ endorsement of Individualizing values had a positive indirect effect on sharing intentions in the Individualizing framing condition (β = .21, [.15, .28]) but a negative indirect effect in the Binding framing condition (β = −.08, [−.14, −.03]). Lastly, participants’ endorsement of Binding (β = .00, [−.05, .06]) and Individualizing (β = .03, [−.02, .09]) values had no indirect effect in the nonmoral framing condition.
These findings, again, support the idea of motivational drivers of message sharing.
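For readers who want to reproduce the logic of such an indirect effect, a simplified sketch follows. It swaps the Bayesian multilevel mediation model used in the thesis for plain OLS with a nonparametric bootstrap, and all column names (alignment, binding_values, veracity, sharing) are assumptions for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def indirect_effect(df: pd.DataFrame) -> float:
    # a-path: value endorsement -> perceived moral alignment with the post
    a = smf.ols("alignment ~ binding_values + veracity", data=df).fit().params["binding_values"]
    # b-path: alignment -> sharing intention, controlling for the focal predictor
    b = smf.ols("sharing ~ alignment + binding_values + veracity", data=df).fit().params["alignment"]
    return a * b  # product-of-coefficients indirect effect

def bootstrap_ci(df: pd.DataFrame, n_boot: int = 2000, seed: int = 0):
    rng = np.random.default_rng(seed)
    draws = [
        indirect_effect(df.sample(len(df), replace=True, random_state=int(rng.integers(2**31))))
        for _ in range(n_boot)
    ]
    return np.percentile(draws, [2.5, 97.5])

# Toy data so the sketch runs end-to-end.
rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "binding_values": rng.normal(size=n),
    "veracity": rng.integers(0, 2, size=n),
})
df["alignment"] = 0.5 * df["binding_values"] + rng.normal(size=n)
df["sharing"] = 0.6 * df["alignment"] + rng.normal(size=n)
print(indirect_effect(df), bootstrap_ci(df, n_boot=200))
```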
Figure D1: Replication of the mediation in Study 2b: Effect of matching moral values and framing via
perceived agreement and alignment with the post
Note. The figure shows a clear separation between matched moral framing, which increases sharing intentions (blue), mismatched moral framing, which decreases sharing intentions (red), and framing that does not address moral values, which has no effect on sharing intentions (grey), across all conditions.
Appendix E: Additional Analyses of Order Effects
In Studies 2 and 3, we repeatedly presented participants with stimuli and with items for rating those stimuli (e.g., believability, agreement), after which participants were asked for their sharing intentions. Participants might therefore have learned after the first trial that they would have to indicate their sharing intentions for the presented social media posts on every trial. On subsequent trials, participants might have decided about their sharing intentions during post presentation (before completing the ratings), and this could have influenced their subsequent stimulus ratings (i.e., rating to justify their sharing intentions; e.g., rating a headline as more believable if they want to share the post). Therefore, as a robustness check, we conducted an additional analysis to test for potential order effects, that is, whether the effect of our ratings on sharing intentions increased over trial iterations. We ran the respective models (M1, M3, M6) again while controlling for stimulus order but found no significant interactions of stimulus order and the main predictors (M1: β_familiar:order = −0.00, [−0.02, 0.01]; M3: β_agreement:order = 0.00, [−0.03, 0.02]; M6: β_order:framing1:individualizing = 0.00, [−0.02, 0.02], β_order:framing2:binding = 0.01, [−0.02, 0.03]).
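In spirit, this robustness check adds an order interaction to each model and inspects that coefficient. A minimal frequentist approximation with a statsmodels mixed model is sketched below; the thesis's actual models are Bayesian with crossed random effects for participants, headlines, and posts, and all column names here are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy trial-level data standing in for the Study 2/3 ratings.
rng = np.random.default_rng(2)
n_participants, n_trials = 50, 5
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n_participants), n_trials),
    "order": np.tile(np.arange(1, n_trials + 1), n_participants),
    "agreement": rng.normal(size=n_participants * n_trials),
})
df["sharing"] = 0.4 * df["agreement"] + rng.normal(size=len(df))

# The agreement:order interaction is the order-effect test from this appendix;
# random intercepts per participant absorb between-person differences.
fit = smf.mixedlm("sharing ~ agreement * order", data=df, groups=df["participant"]).fit()
print(fit.params.filter(like="agreement:order"))
```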
Appendix F: Additional Models

Study 1
Table F1: Comparison of models estimating engagement (favorite count) in Study 1 as a function of various predictor variables

Model | Description | R² | z vs. M0 | z vs. M1 | z vs. M2
M0 | Stance | 0.05 | – | −2.56 | 1.55
M1 | Moral framing (all foundations) | 0.04 | 2.56 | – | 4.75
M2 | Moral framing (all foundations) & stance interaction | 0.05 | −1.55 | −4.75 | –

Note. R² is a Bayesian analogue to the proportion of within-sample variance explained by a model (not considering varying effects). z is the difference in out-of-sample prediction accuracy between two models, divided by its standard error (z = ΔELPD / SE).
Table F2: Comparison of models estimating engagement (retweet count) in Study 1 as a function of various predictor variables

Model | Description | R² | z vs. M0 | z vs. M1 | z vs. M2
M0 | Stance | 0.10 | – | 1.84 | −3.28
M1 | Moral framing (all foundations) | 0.10 | −1.84 | – | −4.93
M2 | Moral framing (all foundations) & stance interaction | 0.10 | 3.28 | 4.93 | –

Note. R² is a Bayesian analogue to the proportion of within-sample variance explained by a model (not considering varying effects). z is the difference in out-of-sample prediction accuracy between two models, divided by its standard error (z = ΔELPD / SE).
Study 2b
Table F3: Effect sizes for Model 4 when controlling for headline veracity

Values – Framing Condition | β [CI] | Δβ vs. no control
Binding – Binding | .26 [.16, .36] | 0.00
Binding – Individualizing | .14 [.04, .24] | 0.00
Binding – Nonmoral | .20 [.10, .30] | 0.00
Individualizing – Binding | .07 [−.01, .14] | 0.00
Individualizing – Individualizing | .23 [.16, .26] | 0.00
Individualizing – Nonmoral | .14 [.06, .21] | 0.00
Proportionality – Binding | .00 [−.09, .09] | 0.00
Proportionality – Individualizing | −.05 [−.14, .04] | 0.00
Proportionality – Nonmoral | −.03 [−.12, .07] | 0.00
Δβ Binding: Binding − Individualizing | .12 [.03, .21] | 0.00
Δβ Binding: Binding − Nonmoral | .06 [−.04, .15] | 0.00
Δβ Individualizing: Individualizing − Binding | .17 [.09, .24] | 0.01
Δβ Individualizing: Individualizing − Nonmoral | .10 [.01, .18] | 0.00

Note. The table shows the effect sizes for a given moral value in a given framing condition, as well as the difference in effect sizes for a moral value across framing conditions (last four rows). The table shows that the reported effect sizes in Study 2b hold when controlling for headline veracity.
Study 3
Table F4: Effect sizes for Model 4 when controlling for headline veracity and familiarity

Values – Framing Condition | β [CI] | Δβ vs. no control
Binding – Binding | .25 [.17, .33] | −0.01
Binding – Individualizing | .09 [.01, .18] | −0.02
Binding – Nonmoral | .14 [.05, .23] | 0.01
Individualizing – Binding | .09 [.02, .16] | −0.02
Individualizing – Individualizing | .24 [.16, .32] | −0.02
Individualizing – Nonmoral | .12 [.05, .20] | −0.01
Proportionality – Binding | .01 [−.08, .10] | 0.00
Proportionality – Individualizing | .00 [−.09, .09] | 0.01
Proportionality – Nonmoral | .03 [−.06, .11] | 0.01
Δβ Binding: Binding − Individualizing | .16 [.07, .24] | 0.02
Δβ Binding: Binding − Nonmoral | .11 [.02, .19] | 0.00
Δβ Individualizing: Individualizing − Binding | .15 [.08, .22] | 0.00
Δβ Individualizing: Individualizing − Nonmoral | .12 [.04, .19] | −0.01

Note. The table shows the effect sizes for a given moral value in a given framing condition, as well as the difference in effect sizes for a moral value across framing conditions (last four rows). The table shows that the reported effect sizes in Study 3 hold when controlling for headline veracity and familiarity.
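The Δβ rows in Tables F3 and F4 are contrasts of one value's effect across two framing conditions. Computed on posterior draws, such a contrast looks roughly like the sketch below; the draw arrays are assumptions standing in for the fitted model's posterior.

```python
import numpy as np

def delta_beta(draws_a: np.ndarray, draws_b: np.ndarray):
    """Posterior contrast between the same value's slope in two framing
    conditions (e.g., Binding values under Binding vs. Individualizing
    framing); draws_* are 1-D arrays of posterior draws."""
    diff = draws_a - draws_b
    return diff.mean(), np.percentile(diff, [2.5, 97.5])

rng = np.random.default_rng(4)
est, ci = delta_beta(rng.normal(0.25, 0.05, 4000), rng.normal(0.09, 0.05, 4000))
print(round(est, 2), np.round(ci, 2))  # toy draws mimic Table F4's first contrast
```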