THE SPREAD OF MORAL CONTENT IN ONLINE SOCIAL NETWORKS
by
Linle Jiang
A Thesis Presented to the
FACULTY OF THE USC DORNSIFE COLLEGE OF
LETTERS, ARTS AND SCIENCES
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
MASTER OF ARTS
(PSYCHOLOGY)
August 2020
Copyright 2020 Linle Jiang
TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
ABSTRACT
INTRODUCTION
METHOD
    Dataset
    Procedures and Analysis
        “Moral Contagion” Effect of Moral-Emotional Language
            Model Set 1: The Effect of Moral-Emotional Expression (Binary) on Retweet Count in the Combined Dataset
            Model Set 2: The Effect of Moral-Emotional Expression (Binary) on Retweet Count in Each Individual Dataset
            Model Set 3: The Effect of Moral-Emotional Word Count on Retweet Count in the Combined Dataset
            Model Set 4: The Effect of Moral-Emotional Word Count on Retweet Count in Each Individual Dataset
        “Moral Contagion” Effect of Human Annotated Moral Sentiments
            Model Set 5: The “Moral Contagion” Effect of Moral against Non-Moral Content in Each Individual Dataset
            Model Set 6: The “Moral Contagion” Effects of Specific Moral Concerns in Each Individual Dataset
RESULTS
    “Moral Contagion” Effect of Moral-Emotional Language
        Model Set 1
        Model Set 2
        Model Set 3
        Model Set 4
    “Moral Contagion” Effect of Human Annotated Moral Sentiments
        Model Set 5
        Model Set 6
DISCUSSION
REFERENCES

LIST OF TABLES

1. Moral Foundations Twitter Corpus Overview
2. Descriptive Statistics for All Datasets
3. Frequency of Tweets per Moral Concern Calculated by Annotators’ Majority Vote per Corpus
4. Retweet Count as a Function of Moral Concern

LIST OF FIGURES

1. Predicted Retweet Count by Moral and Emotional Language per Dataset
ABSTRACT
Brady et al. (2017) posit that moral-emotional language, in general, has a unique facilitative effect on the transmission of moral content, controlling for distinctly emotional and distinctly moral words. However, another study (Burton et al., 2019) found mixed results regarding this "moral contagion" effect on the diffusion of polarizing political or morality-laden messages. In light of the broad implications of morality transmitted online, the current research first examines the robustness of the "moral contagion" effect by investigating the role of the expression of moral emotions in the spread of moral conversations drawn from the Moral Foundations Twitter Corpus (Hoover et al., 2020). The results indicate that the "moral contagion" effect is contingent on specific online topics. The current study then looks more deeply into the robustness of the "moral contagion" effect, using communications categorized according to Moral Foundations Theory, across different moral and political topics on Twitter. The findings again highlight that "moral contagion" effects are topic-specific.
INTRODUCTION
Bauman and Skitka (2009) defined moral conviction as one's subjective attitude about a particular issue that reflects one's core moral beliefs, which matter fundamentally to one's sense of right and wrong. Because moralization reflects one's core beliefs about the world, an issue becomes consequential once it is moralized. Luttrell et al. (2016) suggested that merely labeling an attitude as moral significantly strengthens corresponding behavioral intentions and resistance to persuasion, and that these effects cannot be accounted for by other attitude strength measures, such as attitude extremity, attitude importance, and attitude certainty. Other studies have indicated that moral conviction is inherently motivating (Garrett, 2019). It is also associated with less tolerance of dissimilar attitudes (Cole Wright et al., 2008; Skitka et al., 2005), with stronger resistance to authority influence (Skitka et al., 2009) and group norms (Hornsey et al., 2007), with more violent protests (Mooijman, Hoover, et al., 2018), and with increased attitude polarization (Clifford, 2019).
In light of the strong downstream effects associated with moral conviction, it is important to understand how an issue becomes moralized. Some researchers propose emotional experience as the antecedent of moralization, and the most representative theoretical work is Moral Foundations Theory (Haidt, 2001). According to this theory, five foundations form one's moral system, giving people feelings and intuitions that guide their daily judgment and decision making. Specifically, the care and harm concerns describe the human tendency to protect other people from harm; the fairness and cheating concerns describe the human sense of social equality and social justice; the loyalty and betrayal concerns describe the human need for belonging to a particular group, such as a community or a nation; the authority and subversion concerns describe the human demand for order and a hierarchical structure of society; and the purity and degradation concerns are dominated by the psychology of disgust, which describes the human motive to avoid pathogenic things (Graham et al., 2013). Moral Foundations Theory is rooted in Haidt's (2001) social intuitionist model of moral judgment, which suggests that moral judgment is based mostly on "fast gut feelings" rather than deliberative processes. That is, affect-laden intuitions inform an individual whether something is morally right or wrong. Specifically, compassion is linked with violations of the care domain; anger is linked with violations of the fairness domain; resentment is linked with violations of the authority domain; rage is linked with violations of the loyalty domain; and disgust is linked with violations of the purity domain (Horberg et al., 2011; Rozin & Singh, 1999).
Based on Moral Foundations Theory, an issue can be framed to yield moral relevance by inducing affect-laden intuitions. This type of moral framing, such as messages framed in terms of moral concerns, may have real-world impacts on decision making, such as charity donations (Hoover et al., 2018). In another case, Mooijman, Meindl, et al. (2018) found that when individuals were primed with binding moral concerns, which include loyalty, authority, and purity, they were more likely to moralize self-control and showed corresponding behavioral change. In the political realm, Voelkel and Feinberg (2018) discovered that conservatives reduced their support for conservative candidates when they were exposed to opposing arguments framed in terms of conservative-relevant concerns (e.g., loyalty), and vice versa for liberals. Feinberg and Willer (2013) discovered that liberals were more likely than conservatives to moralize environmental issues. However, this difference disappeared when these issues were framed using purity rhetoric, which is highly valued by conservatives.
Compared to the rich literature on the implications of morality, little work has investigated the role of moral emotion in the transmission of morality in the context of social networks. Brady et al. (2017) therefore extended emotional contagion theory to the moral domain and coined the concept of "moral contagion". Emotional contagion embodies the idea that people tend to synchronize their emotions and behaviors with others, both consciously and unconsciously (Schoenewolf, 1990). Paralleling the role of emotional contagion in an online social network context, Brady, Crockett, et al. (2019) offered this definition:

Moral contagion is the spread of moralized content as a result of people incorporating others’ moral-emotional expressions as informational input into their own appraisal of a situation which can guide their decisions to share the content on social media and inform their own emotional state. (p. 15)

According to this definition, moralized content is content “construed in terms of the interests or good of a unit” over and above the individual. In this sense, moral and political discourse in online social networks, such as the #MeToo and gun control topics, can be considered moralized content. Brady et al. (2017) analyzed a large (n = 563,312) corpus of tweets from Twitter across three moral and political topics (i.e., gun control, same-sex marriage, and climate change). They found that the expression of moral emotions increased the transmission (i.e., retweet count) of moral content on Twitter, even after controlling for the distinctly emotional and distinctly moral words contained in each message. Perhaps the strongest piece of evidence comes from Brady, Wills, et al. (2019), who found that the use of moral-emotional language contributed to the spread of conservative and liberal elites' communications on Twitter during the 2016 U.S. presidential election, replicating the "moral contagion" effect. In line with these findings, Valenzuela et al. (2017) performed a content analysis of 3,409 news articles from six national news networks and found that news adopting a morality frame was more likely to be shared on Facebook and Twitter, after controlling for topic. These studies suggest that content with moral and emotional expressions tends to spread more virally than neutral language.
However, the "moral contagion" effect may not occur as ubiquitously as suggested in all topic domains. Burton et al. (2019) adopted the same methodology as Brady et al. (2017) to examine the "moral contagion" effect with another five moral or political corpora collected on Twitter, yet found mixed results. The effect that moral-emotional words increased information diffusion was replicated in only one dataset (i.e., #MuellerReport tweets), whereas the effect was not significant in the #WomensMarch, Post-Brexit, and Viral 2016 US Election datasets. Moreover, the effect even occurred in the opposite direction in the #MeToo dataset. These inconsistent findings challenge the robustness of the “moral contagion” effect, suggesting that it may be context-sensitive.
Previous studies (Brady et al., 2017; Brady, Wills, et al., 2019) have investigated the “moral contagion” effect in terms of the expression of moral-emotional language, measured by dictionary-based methods, and suggested that moral-emotional expressions facilitate the spread of moral and political ideas (Brady et al., 2019). This finding is not consistent with the results of a recent replication study (Burton et al., 2019). Additionally, Brady, Crockett, et al. (2019) indicated that individuals take into account the expressions of moral emotions as informational input into their appraisal of the situation (Fischer et al., 2003). Building on this, some studies (Brady et al., 2017; Brady, Wills, et al., 2019) further show that the expression of moral emotion, even when measured simply by counting moral-emotional words, is an influential factor in the spread of moralized content. If so, in the context of the online transmission of moralized content, messages from the Moral Foundations Twitter Corpus (Hoover et al., 2020) annotated as signaling a moral concern (categorized by Moral Foundations Theory) may be more “contagious” than non-moral messages. This might be expected because these annotations were based on the affect-laden intuitions the messages induced, and thus should be a more valid measure of moral-emotional expression than dictionary-based measures (i.e., word count; Brady et al., 2017). In light of the considerable academic impact of the "moral contagion" phenomenon, the present study aims to examine the robustness of the “moral contagion” effect using new datasets from the Moral Foundations Twitter Corpus (Hoover et al., 2020). No research has yet examined how specific human-annotated moral sentiments, based on Moral Foundations Theory, affect the spread of messages in online social networks. Therefore, this study also uses the annotated data provided by the Moral Foundations Twitter Corpus (Hoover et al., 2020) to explore the transmission pattern of moral content in the Twitterverse.
METHOD
Dataset
The present study used the Moral Foundations Twitter Corpus collected by
Hoover et al. (2020). There are seven datasets in this corpus (see Table 1 for the corpus overview). They cover popular topics (e.g., ideological polarization) that are considered morally relevant in psychological research. These corpus domains were also selected across the ideological spectrum to ensure that a wide variety of moral virtues and vices were included.

Table 1
Moral Foundations Twitter Corpus Overview

| Corpus | Corpus Description | Collection Method | Selection Criteria | N |
| --- | --- | --- | --- | --- |
| All Lives Matter | Tweets related to the All Lives Matter movement | Purchased from Spinn3r.com | #AllLivesMatter, #BlueLivesMatter | 4,424 |
| Black Lives Matter (BLM) | Tweets related to the Black Lives Matter movement | Purchased from Spinn3r.com | #BLM, #BlackLivesMatter | 5,257 |
| Baltimore Protests | Tweets posted during the Baltimore protests against the death of Freddie Gray | Purchased from Gnip.com | All tweets from cities with Freddie Gray protests | 5,593 |
| 2016 U.S. Presidential Election | Tweets posted during the 2016 U.S. Presidential Election | Scraped via Twitter Application Programming Interface (API) | Followers of @HillaryClinton, @realDonaldTrump, @NYTimes, @washingtonpost, and @WSJ | 5,358 |
| Hurricane Sandy | Tweets related to Hurricane Sandy, a hurricane that caused record damage in the United States | Purchased from Gnip.com | #HurricaneSandy, #Sandy | 4,591 |
| #MeToo | Tweets related to the #MeToo movement | Purchased from Gnip.com | Random subset from 12 million tweets mentioning user IDs associated with high-profile allegations of sexual misconduct | 4,891 |
| Davidson Hate Speech | Tweets collected by Davidson et al. (2017) for hate speech and offensive language research | Obtained from Davidson et al. (2017) | Random sample from 85.4 million tweets that contained words in Davidson et al. (2017) | 4,873 |

Note. Moral Foundations Twitter Corpus (MFTC) discourse domains. From "Moral Foundations Twitter Corpus: A collection of 35k tweets annotated for moral sentiment," by Hoover et al., 2020, Social Psychological and Personality Science, p. 4. Copyright 2020 by SAGE Publications. Adapted with permission.
Each dataset includes the tweet ID, the tweet message, and tweet-level annotations of moral concerns. In particular, each tweet was coded by at least three annotators according to its moral sentiment(s) (i.e., moral virtues and vices) or as non-moral. All tweets were written in English. Importantly, these tweets were originally sampled to maximize moral relevance and the variety of moral virtues and vices for annotation purposes. The metadata for each tweet (i.e., retweet count, whether a tweet contains media other than text, and number of followers) were collected via the Twitter REST API with the “tweepy” package in Python. To be included in this study, a tweet had to have complete metadata. Notably, the #MeToo dataset in the original corpus was excluded from the current study, as no metadata were available.
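To make this collection step concrete, the following is a minimal sketch of how such metadata can be gathered with tweepy, assuming Tweepy 3.x, placeholder credentials, and a pre-collected list of tweet IDs; it is an illustration rather than the study's actual script.

```python
import tweepy

# Placeholder credentials; Tweepy 3.x is assumed throughout this sketch.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

def fetch_metadata(tweet_ids):
    """Look up tweets in batches of 100 (the REST API limit) and keep the
    three metadata fields used in this study. Deleted or protected tweets
    are silently omitted by the endpoint, which is one way missing
    metadata arises."""
    rows = []
    for i in range(0, len(tweet_ids), 100):
        for status in api.statuses_lookup(tweet_ids[i:i + 100]):
            rows.append({
                "tweet_id": status.id,
                "retweet_count": status.retweet_count,
                "followers": status.user.followers_count,
                "has_media": "media" in status.entities,
            })
    return rows
```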
Procedures and Analysis
The data were first wrangled using the "pandas" package in Python. The diffusion measure for each tweet was the retweet count metadata collected through the Twitter API. Next, each tweet message was preprocessed into isolated words using the “dplyr” and “tm” packages in R, and all punctuation was removed. Following this “tokenization” step, a dictionary-based method was used to measure moral-emotional expression in the remaining content of each tweet. The dictionaries previously validated and used by Brady et al. (2017) were employed in the present study. The emotion dictionary (n = 819; e.g., worry) contains words and stems that are distinctly emotional; the moral dictionary (n = 316; e.g., free) includes words and stems related to morality; and the moral-emotional dictionary (n = 72; e.g., hate) is the subset of the two dictionaries above, comprising words and stems that are both morally and emotionally relevant. By matching the “tokenized” tweets against these three dictionaries, the word count for each moral and emotional category was computed and served as the indicator of moral-emotional expression.
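As an illustration of this dictionary-matching logic (the preprocessing itself was done in R, as described above), the following Python sketch counts matches against a toy dictionary; the entries shown are illustrative, not the actual Brady et al. (2017) dictionaries.

```python
import re

# Illustrative entries only; the actual dictionaries contain 819 emotion,
# 316 moral, and 72 moral-emotional words and stems ('*' marks a stem).
moral_emotional = {"hate", "shame*"}

def count_matches(tweet, dictionary):
    # Lowercase, strip punctuation, and split into tokens.
    tokens = re.findall(r"[a-z']+", tweet.lower())
    count = 0
    for entry in dictionary:
        if entry.endswith("*"):            # stem: match any token prefix
            count += sum(tok.startswith(entry[:-1]) for tok in tokens)
        else:                              # whole-word match
            count += tokens.count(entry)
    return count

count_matches("I hate this shameful decision!", moral_emotional)  # -> 2
```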
“Moral Contagion” Effect of Moral-Emotional Language
To examine the robustness of the "moral contagion" effects elicited by the use of moral-emotional language, the original research paradigm was applied to new datasets. Specifically, a negative binomial regression model with maximum likelihood estimation was used to account for overdispersion when modeling count data (Hilbe, 2011). All the analyses were conducted with Proc GENMOD in SAS 9.4, with the retweet count variable serving as the diffusion metric in these models; media (i.e., whether a tweet has an image or video) and the number of followers were entered as covariates. These covariates have been found to be associated with the spread of tweets (Brady et al., 2017; Stieglitz & Dang-Xuan, 2013), independent of the moral-emotional language variables. Incidence rate ratios (IRRs) were reported for consistency with previous studies (Brady et al., 2017; Burton et al., 2019).
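The analyses themselves were run in SAS; as a rough Python equivalent, the sketch below fits a negative binomial regression and exponentiates the coefficients into IRRs, using toy data with assumed column names (not the study's actual variable names).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy data standing in for the wrangled tweet-level data frame; the column
# names are assumptions, not the thesis's actual variable names.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "retweet_count": rng.poisson(3, 500),
    "moral_emotional": rng.integers(0, 4, 500),
    "emotion": rng.integers(0, 5, 500),
    "moral": rng.integers(0, 4, 500),
    "has_media": rng.integers(0, 2, 500),
    "followers": rng.integers(0, 10_000, 500),
    "dataset": rng.choice(["ALM", "BLM", "Sandy"], 500),
})

# Negative binomial regression with maximum likelihood estimation (the
# thesis used Proc GENMOD in SAS 9.4; this mirrors the word-count
# specification of Model Set 4 within a single dataset).
fit = smf.negativebinomial(
    "retweet_count ~ moral_emotional + emotion + moral + has_media + followers",
    data=df,
).fit(disp=False)

print(np.exp(fit.params))  # incidence rate ratios, as reported in the text
```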
Model Set 1: The Effect of Moral-Emotional Expression (Binary) on Retweet Count in the Combined Dataset. To examine whether tweets with at least one moral-emotional word are significantly more "contagious" than those without ("all-or-none”), a dummy variable was created to indicate whether a tweet contains any moral-emotional words. This (binary) moral-emotional expression variable, the dataset identification variable, and their interaction term were entered as the predictors of the model. The interaction term was included to test whether the effect was independent of the datasets. Note that all six datasets were combined for this analysis, resulting in one model.
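Continuing the toy data frame from the sketch above, Model Set 1's specification would look roughly like this: a binary moral-emotional indicator, a dataset factor, and their interaction (variable names remain assumptions).

```python
# Dummy-code the "all-or-none" moral-emotional indicator, then interact it
# with the dataset factor; has_media and followers stay in as covariates.
df["has_me_word"] = (df["moral_emotional"] > 0).astype(int)

fit1 = smf.negativebinomial(
    "retweet_count ~ has_me_word * C(dataset) + has_media + followers",
    data=df,
).fit(disp=False)
```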
Model Set 2: The Effect of Moral-Emotional Expression (Binary) on Retweet Count in Each Individual Dataset. In contrast to Model Set 1, this set of models examined whether the “moral contagion” effect was “all-or-none” within each individual dataset, resulting in six models in total.
Model Set 3: The Effect of Moral-Emotional Word Count on Retweet Count in the Combined Dataset. As in Model Set 1, the six datasets were combined for this analysis. In contrast to Model Set 1, however, the three types of moral and/or emotional word count and a dataset identification variable were included as predictors in the model. Additionally, to investigate whether the effect of moral-emotional words on retweet count was independent of the datasets, an interaction term between moral-emotional words and the dataset was also entered into the model.
Model Set 4: The Effect of Moral-Emotional Word Count on Retweet Count in Each Individual Dataset. To replicate the primary analyses in Brady et al.’s (2017) study, retweet count was modeled as a function of the three types of moral and emotional word count predictors within each dataset. Six models were estimated in this analysis.
“Moral Contagion” Effect of Human Annotated Moral Sentiments
The purpose of this study is, first, to examine whether the “moral contagion” effects are robust when human-annotated moral sentiments, rather than moral-emotional word counts, serve as the predictor. In other words, whether moral content is more “viral” than non-moral content, as categorized using Moral Foundations Theory, is of particular interest. Specifically, each tweet was first evaluated by human annotators. Each annotator judged the moral relevance of the tweet; if it was deemed morally relevant, the tweet was labeled with the specific moral concern(s), and otherwise it was labeled “non-moral”. The final label for the tweet was based on majority vote, where any label that received at least 50% of annotators’ votes was considered a “majority” label. Note that the annotators were trained with the Moral Values Coding Guide (Hoover et al., 2017) and that a tweet can have two labels if there are two majority-vote outcomes. Tweets with no majority-vote outcome were excluded from the analyses. The same negative binomial models were applied again in this analysis, with the covariates included, except that the moral and emotional word variables were replaced with the annotation variable.
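A minimal sketch of this majority-vote rule, assuming each annotator contributes a set of labels per tweet (the actual corpus annotation pipeline is described in Hoover et al., 2020):

```python
from collections import Counter

def majority_labels(annotations):
    """Return the label(s) endorsed by at least 50% of annotators; an empty
    list means no majority, in which case the tweet is excluded."""
    n = len(annotations)
    counts = Counter(label for labels in annotations for label in labels)
    return [label for label, c in counts.items() if c >= n / 2]

# Three annotators: "care" and "harm" each receive 2 of 3 votes.
majority_labels([{"care", "harm"}, {"care"}, {"harm", "fairness"}])
# -> ["care", "harm"] (order may vary)
```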
Model Set 5: The “Moral Contagion” Effect of Moral against Non-Moral Content in Each Individual Dataset. The “moral contagion” effect of human annotations was tested to examine whether this effect was “all-or-none” in each dataset, resulting in six models in total. Specifically, a binary variable (indicating whether the tweet was labeled “non-moral” by the annotators) was entered as the predictor in each model. Note that in Model Set 2, tweets with no moral-emotional words were coded as not containing any moral-emotional expressions, whereas in this set of models, “non-moral” tweets were determined by human annotators. Additionally, because each tweet can have more than one label, a tweet was considered “non-moral” if “non-moral” was among its final labels.
Model Set 6: The “Moral Contagion” Effects of Specific Moral Concerns in Each Individual Dataset. The present study also examined whether specific moral sentiments significantly changed diffusion relative to non-moral tweets in each dataset. The moral concern annotation variable, which indicated the specific moral virtue or vice of a tweet, was used as the predictor in each of these six models. Note that in this analysis, tweets whose annotations included “non-moral”, including tweets with more than one annotation, were deemed “non-moral”. On the other hand, if a tweet was labeled more than once and its labels did not include “non-moral”, the tweet was duplicated once per label, with one unique label for each duplicate.
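A small pandas sketch of this duplication rule, using hypothetical rows (a tweet whose labels include "non-moral" collapses to a single non-moral row, while other multi-label tweets get one row per label):

```python
import pandas as pd

# Hypothetical annotated tweets with their final majority-vote labels.
df = pd.DataFrame({
    "tweet_id": [1, 2, 3],
    "labels": [["care"], ["harm", "loyalty"], ["non-moral", "care"]],
})

# Any label set containing "non-moral" is treated as non-moral outright.
df["labels"] = df["labels"].apply(
    lambda ls: ["non-moral"] if "non-moral" in ls else ls
)

# One row per (tweet, label) pair; tweet 2 is duplicated for its two labels.
long_df = df.explode("labels")
```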
RESULTS
Overall, a total of 22,130 tweets across the six datasets were included in the analysis (see Table 2 for descriptive statistics across all datasets).
Table 2
Descriptive Statistics for All Datasets

| Variable | ALM | Baltimore | BLM | Election | Davidson | Sandy | Total |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Number of Tweets | 3,117 | 3,954 | 3,978 | 4,159 | 3,243 | 3,679 | 22,130 |
| Emotion Words | 1.06 (1.202) | .66 (.932) | .88 (1.125) | 1.22 (1.272) | 1.33 (1.192) | 1.55 (1.336) | 1.11 (1.218) |
| Moral Words | 1.08 (1.079) | .47 (.807) | 1.06 (1.105) | .79 (.972) | .15 (.415) | .68 (.866) | .71 (.962) |
| Moral-Emotion Words | .70 (.889) | .23 (.512) | .51 (.790) | .53 (.781) | .17 (.429) | .56 (.761) | .45 (.736) |
| Media | 13.9% | 17.9% | 20.5% | 9.4% | 0% | 4.6% | 11.4% |
| Followers | 7,552 (116,275) | 14,869 (101,364) | 5,334 (27,198) | 20,322 (59,039) | 651,469 (1,312,192) | 25,669 (514,773) | 108,234 (592,918) |

Note. Values reflect means; values in parentheses reflect standard deviations. Media was a binary variable.
“Moral Contagion” Effect of Moral-Emotional Language
Model Set 1. There was a significant interaction between moral-emotional expression (binary) and dataset (χ²(5) = 23.38, p < .001), suggesting that whether the “moral contagion” effect of tweets with moral-emotional words, compared with those without, was “all-or-none” depends on the specific topic domain (i.e., dataset); Model Set 2 examines this effect for each dataset.
Model Set 2. The original “moral contagion” effect, estimated with moral-emotional expression as a dichotomous variable, was replicated in three of the six datasets: ALM, Baltimore, and Sandy. Specifically, for the ALM dataset, there was a significant increase (by 51%) in retweet count when tweets contained moral-emotional words (IRR = 1.51, p < .001, 95% CI = 1.20, 1.89), compared with tweets without moral-emotional words. In the Baltimore dataset, the presence of moral-emotional words had a significant effect on retweet count (IRR = 1.42, p = .006, 95% CI = 1.11, 1.82). In the Sandy dataset, a significant increase in diffusion was observed when moral-emotional words were included (IRR = 1.46, p = .001, 95% CI = 1.16, 1.84).

However, the “moral contagion” effect was not reproduced in the remaining three datasets. For the BLM dataset, whether tweets included moral-emotional words did not significantly change retweet count (IRR = 0.87, p = .084, 95% CI = 0.73, 1.02). For the Election dataset, there was no significant difference in retweet count between tweets with and without moral-emotional words (IRR = 1.05, p = .659, 95% CI = 0.83, 1.33). For the Davidson dataset, the presence of a moral-emotional word likewise did not change retweet count significantly (IRR = 0.93, p = .477, 95% CI = 0.75, 1.14).

Overall, these results suggest that, in terms of whether the “moral contagion” effect is “all-or-none”, the effect was not observed as generally as in Brady et al.’s (2017) study.
Model Set 3. There was a significant interaction between moral-emotional words and dataset (χ²(5) = 30.79, p < .001), suggesting that the “moral contagion” effect, indexed by moral-emotional word count, varied by dataset. Model Set 4 examined this effect per dataset.
Model Set 4. Consistent with the previous replication study (Burton et al., 2019), the “moral contagion” effect appears not to be as ubiquitous as proposed in Brady et al.’s (2017) study. Among the six datasets, the “moral contagion” effect was observed in only one (the Baltimore dataset). There, distinctly moral words had a significant negative main effect on retweet count (IRR = 0.87, p = .038, 95% CI = 0.76, 0.99), and no significant main effect was observed for basic emotion words (IRR = 1.05, p = .508, 95% CI = 0.91, 1.20). However, moral-emotional words had a significant positive effect on retweet count (IRR = 1.32, p = .016, 95% CI = 1.05, 1.65), over and above the other two variables: each additional moral-emotional word in a tweet increased its predicted retweet rate by 32%.

One finding that directly contradicts the “moral contagion” effect was observed in the Election dataset, where the inclusion of moral-emotional language actually hindered the diffusion of messages. Specifically, a significant main effect of basic emotion words on diffusion was observed (IRR = 1.10, p = .040, 95% CI = 1.00, 1.21), showing a small emotional contagion effect. There was also a significant main effect of distinctly moral words on diffusion (IRR = 1.21, p < .001, 95% CI = 1.08, 1.35). However, after controlling for basic emotion words and distinctly moral words, moral-emotional words yielded a significantly negative effect on diffusion (IRR = 0.76, p < .001, 95% CI = 0.65, 0.89): each additional moral-emotional word corresponded to a 24% decrease in the predicted retweet rate.

In addition, the “moral contagion” effect was not significant in the other four datasets. For the ALM dataset, the regression model showed no significant main effects of distinctly moral words (IRR = 1.01, p = .826, 95% CI = 0.94, 1.08) or moral-emotional words (IRR = 1.03, p = .711, 95% CI = 0.88, 1.20) on retweet count; however, there was a significant emotional contagion effect (IRR = 1.31, p < .001, 95% CI = 1.15, 1.50). For the BLM dataset, only the main effect of distinctly moral words on diffusion was significant (IRR = 1.08, p = .028, 95% CI = 1.01, 1.15), while neither basic emotion words (IRR = 1.02, p = .650, 95% CI = 0.95, 1.09) nor moral-emotional words (IRR = 0.97, p = .591, 95% CI = 0.88, 1.07) showed a significant effect. For the Davidson dataset, both distinctly moral words (IRR = 0.93, p = .027, 95% CI = 0.87, 0.99) and basic emotion words (IRR = 0.60, p < .001, 95% CI = 0.51, 0.71) were negatively associated with retweet count, but there was no significant main effect of moral-emotional words (IRR = 0.93, p = .422, 95% CI = 0.77, 1.12). For the Sandy dataset, the results paralleled those for the BLM dataset: there were no significant main effects of moral-emotional words (IRR = 1.14, p = .140, 95% CI = 0.96, 1.36) or basic emotion words (IRR = 1.05, p = .455, 95% CI = 0.92, 1.20) on diffusion, and only distinctly moral words had a significantly positive effect (IRR = 1.13, p = .016, 95% CI = 1.02, 1.25).

Overall, the results of the current study suggest that the “moral contagion” effect may be specific to topic domains. Based on the observed effects, the predicted diffusion trend as a function of the number of moral-emotional words was plotted for each dataset (Figure 1).
Figure 1. Predicted Retweet Count by Moral and Emotional Language per Dataset. A = ALM; B = BLM; C =
Davidson; D = Election; E = Sandy; F = Baltimore. Shaded areas represent 95% CIs.
“Moral Contagion” Effect of Human Annotated Moral Sentiments
Model Set 5. Mixed results were found with regard to whether tweets annotated as morally relevant (i.e., annotated with a moral concern) spread more widely than tweets annotated as “non-moral” (see Table 3 for the frequency of each moral concern across the datasets). First, most of the tweets were “non-moral” across all datasets. Further, the results suggest that the patterns observed in Model Set 5 did not entirely parallel the findings in Model Set 2, which estimated a model with a dichotomous variable specifying whether a moral-emotional word was present in a tweet for each dataset. In particular, the effect that tweets with moral relevance, based on human annotations, were more “contagious” was found in only two datasets in Model Set 5, and the ALM dataset was the only one in which the “moral contagion” effect was observed in both Model Set 2 and Model Set 5. These inconsistent findings add more complexity to the “moral contagion” effect, highlighting that this effect might be sensitive to the specific topics and to the way moral-emotional expression was identified among the tweets.

Table 3
Frequency of Tweets per Moral Concern Calculated by Annotators’ Majority Vote per Corpus

| Moral Concern | ALM | Baltimore | BLM | Election | Davidson | Sandy |
| --- | --- | --- | --- | --- | --- | --- |
| Non-moral | 1,513 | 3,401 | 1,701 | 2,909 | 3,171 | 1,538 |
| Care | 240 | 89 | 207 | 175 | 3 | 616 |
| Harm | 385 | 24 | 387 | 212 | 26 | 520 |
| Fairness | 314 | 34 | 385 | 303 | 0 | 111 |
| Cheating | 227 | 90 | 395 | 246 | 3 | 250 |
| Loyalty | 132 | 183 | 381 | 48 | 8 | 221 |
| Betrayal | 10 | 95 | 54 | 29 | 11 | 52 |
| Authority | 147 | 4 | 100 | 44 | 3 | 242 |
| Subversion | 46 | 35 | 177 | 43 | 2 | 222 |
| Purity | 52 | 11 | 65 | 147 | 0 | 16 |
| Degradation | 70 | 3 | 136 | 34 | 16 | 19 |
| Total | 3,136 | 3,969 | 3,988 | 4,190 | 3,243 | 3,807 |
Specifically, the result from the ALM dataset paralleled the outcome in Model Set 2 (IRR = 1.80, p < .001, 95% CI = 1.43, 2.26); the effect here was even stronger, with retweet count increasing by 80% when tweets were labeled with a moral sentiment rather than “non-moral”. In the Election dataset, a strong contagion effect also emerged (IRR = 3.24, p < .001, 95% CI = 2.56, 4.11), in contrast to the null result in Model Set 2.

In contrast, the “moral contagion” effect was not found in the remaining four datasets. For the BLM dataset, retweet count did not differ by whether tweets were labeled with a moral concern (IRR = 1.08, p = .374, 95% CI = 0.92, 1.26). For the Davidson dataset, there was a significant negative effect on diffusion when a tweet was labeled with a moral concern (IRR = 0.52, p = .014, 95% CI = 0.31, 0.88); however, given the small number of positive cases (tweets annotated as morally relevant), this result should be interpreted cautiously. For the Sandy dataset, no significant difference in retweet count was found (IRR = 1.24, p = .081, 95% CI = 0.97, 1.57). For the Baltimore dataset, the “moral contagion” effect also failed to replicate, in contrast to the result from Model Set 2 and to Brady et al.’s (2017) findings (IRR = 0.88, p = .369, 95% CI = 0.67, 1.16).
Model Set 6. Overall, the results indicated that the diffusion effects of specific moral concerns, relative to non-moral messages, differed across datasets. However, as noted for Model Set 5, non-moral tweets were over-represented in the current study. This issue was especially apparent for the Davidson dataset, which had very few annotated moral concern cases, so results from this dataset should be interpreted carefully. Nonetheless, some interesting patterns were observed (see Table 4).
Table 4
Retweet Count as a Function of Moral Concern

| Moral Concern | ALM | Baltimore | BLM | Election | Davidson | Sandy |
| --- | --- | --- | --- | --- | --- | --- |
| Care | 0.56* (0.22) | 0.35* (0.33) | 1.28 (0.18) | 1.40 (0.32) | 0.03* (1.28) | 1.70* (0.16) |
| Harm | 4.36* (0.17) | 4.00* (0.63) | 1.21 (0.14) | 2.88* (0.25) | 0.44* (0.43) | 0.74† (0.18) |
| Fairness | 0.85 (0.20) | 0.28* (0.53) | 1.42* (0.14) | 2.59* (0.21) | 1.00 (0) | 0.41* (0.36) |
| Cheating | 2.21* (0.22) | 0.83 (0.33) | 0.90 (1.14) | 7.83* (0.24) | 1.50 (1.26) | 0.91 (0.24) |
| Loyalty | 1.49 (0.28) | 0.44* (0.24) | 1.21 (0.14) | 5.02* (0.51) | 0.78 (0.77) | 0.41* (0.26) |
| Betrayal | 0.37 (1.08) | 0.55 (0.32) | 0.80 (0.35) | 0.38 (0.66) | 0.95 (0.66) | 1.10 (0.48) |
| Authority | 1.04 (0.27) | 0.12 (1.55) | 0.95 (0.26) | 3.72* (0.54) | 0.20 (1.27) | 0.45* (0.25) |
| Subversion | 0.35* (0.52) | 4.13* (0.52) | 0.90 (0.20) | 1.14 (0.54) | 0.10 (1.55) | 2.68* (0.24) |
| Purity | 0.52 (0.46) | 0.01* (0.95) | 1.06 (0.32) | 0.03* (0.31) | 1.00 (0) | 10.28* (0.83) |
| Degradation | 0.32* (0.41) | 0.12 (1.78) | 0.12* (0.28) | 0.39 (0.61) | 0.27* (0.55) | 0.24 (0.88) |

Note. †p < .10; *p < .05. Coefficients are incidence rate ratios for each moral concern relative to non-moral; values in parentheses are standard errors.

First, tweets annotated as “degradation” exhibited an overall suppressing effect on message transmission across the datasets, compared to morally irrelevant messages. Although this effect was significant only in the ALM (IRR = 0.32, p = .005, 95% CI = 0.14, 0.71) and BLM (IRR = 0.12, p < .001, 95% CI = 0.07, 0.21) datasets, the same pattern, albeit not significant, was also found in the remaining datasets.

Second, the other annotated moral concerns yielded inconclusive results across the datasets. For instance, tweets annotated as “harm” reflected a strong “moral contagion” effect in the ALM dataset (IRR = 4.36, p < .001, 95% CI = 3.10, 6.13), the Baltimore dataset (IRR = 4.00, p = .028, 95% CI = 1.16, 13.80), and the Election dataset (IRR = 2.88, p < .001, 95% CI = 1.76, 4.71), whereas such tweets received marginally less attention in the Sandy dataset (IRR = 0.71, p = .087, 95% CI = 0.52, 1.05) and showed no significant difference in the BLM dataset, compared with non-moral messages. In another case, tweets labeled “loyalty” were more “contagious” than non-moral tweets in the Election dataset (IRR = 5.02, p = .002, 95% CI = 1.84, 13.73); however, such tweets spread less widely in the Baltimore dataset (IRR = 0.44, p < .001, 95% CI = 0.28, 0.70) and the Sandy dataset (IRR = 0.41, p < .001, 95% CI = 0.25, 0.68). No significant effect was found in the other datasets for tweets annotated with “loyalty”. Therefore, conclusions about whether specific moral concerns promote or suppress social transmission have to be contextualized to the specific topics.
Third, when moral virtues and vices were considered as pairs to examine their simultaneous effects on message transmission, mixed patterns were again found. Both a moral virtue and its corresponding vice could promote diffusion: tweets annotated with either “fairness” (IRR = 2.59, p < .001, 95% CI = 1.70, 3.93) or “cheating” (IRR = 7.83, p < .001, 95% CI = 4.93, 12.43) were more “contagious” than non-moral tweets in the Election dataset. Some pairs of moral virtues and vices were also associated with opposing transmission effects within a dataset. Specifically, messages labeled “care” reduced diffusion in the ALM dataset (IRR = 0.56, p = .010, 95% CI = 0.36, 0.87), compared with non-moral messages, whereas messages labeled “harm” increased transmission (IRR = 4.36, p < .001, 95% CI = 3.10, 6.13). In addition, in the Sandy dataset, tweets annotated “authority” spread less widely than non-moral tweets (IRR = 0.45, p < .001, 95% CI = 0.28, 0.73), whereas the opposite effect was found for “subversion” (IRR = 2.68, p = .005, 95% CI = 1.67, 4.30).

Findings from this model set suggest that the “moral contagion” effect is neither uniform across annotated moral concerns nor context-independent.
DISCUSSION
In contrast to Brady et al. (2017), the present study shows that moralized content with moral-emotional expressions is not always more likely to be shared than content without such expressions. In fact, this effect replicated in only half of the six datasets (i.e., ALM, Baltimore, and Sandy). Additionally, in the attempt to replicate the “moral contagion” effect proper (i.e., the retweet count of moralized content increases as a function of moral-emotional expression, controlling for distinctly moral and distinctly emotional words), a similar finding was observed only in the Baltimore dataset. In other words, when the dictionary-based method was applied, only the results from the Baltimore dataset consistently replicated Brady et al.’s (2017) finding. In line with Burton et al. (2019), these findings suggest that the “moral contagion” effect may not occur universally and may instead be specific to the topic domain.
One methodological issue that may underlie the inconsistency with the prior study (Brady et al., 2017) is the approach used to compute the diffusion metric (i.e., retweet count). In the original study, Brady et al. (2017) adopted a "collapse-and-count" aggregation approach to generate the retweet count variable for a given message. Specifically, this approach assigned the retweet count to the originating source by aggregating the retweet count of the original message and the retweet counts of the retweets of that message within the dataset. This approach was not applied in the present study for three technical reasons. First, the collapsing approach neglects any comments users add to retweets; if those comments contained moral-emotional expressions, their effect could not be estimated. Second, for exploring the diffusion pattern of the annotated moral sentiments, the collapsing method makes it difficult to aggregate the annotation results. Third, unlike the original study, the datasets in the current study consist only of a random subset of all the tweets on the respective topics; even if the "collapse-and-count" aggregation approach were employed, it might still not reflect true “virality”, and adopting it would substantially decrease the sample size. In fact, obtaining the retweet count for each individual tweet, without adding the retweet counts of its retweets, can be considered the more conservative approach. Importantly, though, Burton et al.’s (2019) study used the same diffusion metric computation as the original design, yet their findings are consistent with the present study. It therefore seems unlikely that the discrepant findings can be explained by the different diffusion metrics.
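For concreteness, the "collapse-and-count" aggregation described above can be illustrated with a toy pandas sketch (hypothetical columns; this mirrors the original study's approach, not the one adopted here):

```python
import pandas as pd

# Hypothetical rows: tweets 1 and 2 trace back to originating tweet 1.
tweets = pd.DataFrame({
    "tweet_id":      [1, 2, 3],
    "original_id":   [1, 1, 3],
    "retweet_count": [10, 4, 7],
})

# Collapse retweets onto their originating source and sum the counts, so
# the source tweet is credited with its retweets' retweet counts as well.
collapsed = tweets.groupby("original_id", as_index=False)["retweet_count"].sum()
# original_id 1 -> 14; original_id 3 -> 7
```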
When it comes to human-annotated moral sentiments, the effect that morally relevant tweets were more “contagious” than non-moral tweets was found in the ALM and Election datasets. The inconsistent “moral contagion” patterns between the moral-emotional word count and the human-annotated data may stem from the specific moral and emotional expression measurements used in the current study. From a methodological perspective, using a dictionary-based method to quantify moral and emotional expressions may be problematic because it relies on morphological similarity (Garten et al., 2018). As pointed out by Hoover et al. (2017), estimating moral sentiment is a subjective task that requires a semantic understanding of the content in order to infer the latent psychological constructs, which is why, at this stage, human annotation remains the gold standard for such tasks. A dictionary-based methodology is thus likely a less accurate, indirect measure of moral-emotional expression compared with human-annotated moral sentiments. Nonetheless, human annotation is tedious work and hard to scale up. Future research should re-evaluate the validity of dictionary-based methods for quantifying psychological constructs in order to better inform research decisions.
Another possibility for this discrepancy pertains to sample size. For instance, the frequency of annotated moral concerns is small relative to non-moral labels in the Davidson dataset, and there were excessive zeroes in the moral-emotional word counts. However, after performing the same analyses with the Davidson dataset excluded from the combined dataset, no differential effects of either moral-emotional words or moral concerns on retweet count were observed. Also, because of missing metadata, a substantial number of tweets were excluded from this study, and there is no way to know whether these data were missing at random. It is possible that tweets deleted by Twitter were removed for inappropriate content or were produced by fake accounts; such content is often emotionally charged, and its absence could have affected the results (Vosoughi et al., 2018).
Brady et al.’s (2017) work used an observational approach and found that moral-emotional expression was associated with the spread of moralized content in online social networks. However, their study provides limited information about how specific moral emotions function in the transmission of information. Therefore, the current study took a step further and probed the moral content itself, investigating how specific moral sentiments, as categorized by Moral Foundations Theory, affect the spread of moral or political conversations online. First, tweets annotated as “degradation” generally suppressed diffusion relative to non-moral tweets across the datasets. This label signals that the content violates the purity concern, which is associated with the human motive to avoid pathogenic things (Graham et al., 2013); it follows that tweets inducing an intuition of disgust are less likely to be shared. Second, when the effect of each moral concern on diffusion was examined, mixed results were observed. For instance, in the Baltimore dataset, tweets annotated as “care” significantly suppressed message transmission, whereas such tweets showed a strong "moral contagion" effect in the Sandy dataset. This again suggests that the “moral contagion” effect is sensitive to the topic domain. Moreover, the decision about whether or not to share a message is affected by multiple factors (Brady, Crockett, et al., 2019). Prior studies suggest that such decisions may take into account whether the message conveys a moral concern that matches the individual’s moral values (Feinberg & Willer, 2015) and whether the source of the message belongs to the ingroup (Brady et al., 2019; Wolsko et al., 2016). These context-sensitive factors are consistent with the idea that the “moral contagion” effect depends on the context.
A rich literature (Feinberg & Willer, 2013; Hoover et al., 2018; Mooijman et al., 2018; Voelkel & Feinberg, 2018) has explored the downstream effects of moral framing on attitudes and behaviors, yet little work has examined the diffusion of moral framing online. Future studies should attend to the moderating factors that promote or suppress the spread of information framed with different moral concerns. For instance, Brady, Wills, et al. (2019) discovered an asymmetrical diffusion pattern split by political ideology: the “moral contagion” effect was stronger for conservative elites than for liberal elites. In another study, Graham et al. (2009) found that the individualizing foundations, including the care and fairness concerns, are the most highly valued moral systems among liberals, whereas conservatives weight the individualizing and binding foundation systems (including the loyalty, authority, and purity concerns) more or less equally. Therefore, future research on the effects of specific moral emotions or moral framing on social transmission may shed light on the differential diffusion phenomenon found by Brady, Wills, et al. (2019). Relatedly, prior work shows that moral rhetoric can predict violent protests (Mooijman, Hoover, et al., 2018). Research investigating how particular moral concerns drive information transmission may help prevent a protest from turning violent by progressively monitoring and predicting the trend of events in online discourse.
REFERENCES
Brady, W. J., Crockett, M., & Van Bavel, J. J. (2019). The MAD Model of Moral
Contagion: The role of motivation, attention and design in the spread of
moralized content online [Preprint]. PsyArXiv.
https://doi.org/10.31234/osf.io/pz9g6
Brady, W. J., Wills, J. A., Burkart, D., Jost, J. T., & Van Bavel, J. J. (2019). An
ideological asymmetry in the diffusion of moralized content on social media
among political leaders. Journal of Experimental Psychology: General, 148(10),
1802–1813. https://doi.org/10.1037/xge0000532
Brady, W. J., Wills, J. A., Jost, J. T., Tucker, J. A., & Van Bavel, J. J. (2017). Emotion
shapes the diffusion of moralized content in social networks. Proceedings of the
National Academy of Sciences, 114(28), 7313–7318.
https://doi.org/10.1073/pnas.1618923114
Burton, J. W., Cruz, N., & Hahn, U. (2019). How real is moral contagion in online social networks?
Clifford, S. (2019). How Emotional Frames Moralize and Polarize Political Attitudes.
Political Psychology, 40(1), 75–91. https://doi.org/10.1111/pops.12507
Cole Wright, J., Cullum, J., & Schwab, N. (2008). The cognitive and affective
dimensions of moral conviction: Implications for attitudinal and behavioral
measures of interpersonal tolerance. Personality & Social Psychology Bulletin,
34(11), 1461–1476. https://doi.org/10.1177/0146167208322557
Feinberg, M., & Willer, R. (2013). The Moral Roots of Environmental Attitudes.
Psychological Science, 24(1), 56–62. https://doi.org/10.1177/0956797612449177
Feinberg, M., & Willer, R. (2015). From Gulf to Bridge: When Do Moral Arguments
Facilitate Political Influence? Personality and Social Psychology Bulletin, 41(12),
1665–1681. https://doi.org/10.1177/0146167215607842
Fischer, A. H., Manstead, A. S. R., & Zaalberg, R. (2003). Social influences on the
emotion process. European Review of Social Psychology, 14(1), 171–201.
https://doi.org/10.1080/10463280340000054
Garrett, K. N. (2019). Fired Up by Morality: The Unique Physiological Response Tied to
Moral Conviction in Politics. Political Psychology, 40(3), 543–563.
https://doi.org/10.1111/pops.12527
Garten, J., Hoover, J., Johnson, K. M., Boghrati, R., Iskiwitch, C., & Dehghani, M.
(2018). Dictionaries and distributions: Combining expert knowledge and large
scale textual data content analysis. Behavior Research Methods, 50(1), 344–361.
https://doi.org/10.3758/s13428-017-0875-9
Graham, J., Haidt, J., & Nosek, B. A. (2009). Liberals and conservatives rely on different
sets of moral foundations. Journal of Personality and Social Psychology, 96(5),
1029–1046. https://doi.org/10.1037/a0015141
Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to
moral judgment. Psychological Review, 108(4), 814–834.
https://doi.org/10.1037/0033-295X.108.4.814
Hilbe, J. M. (2011). Negative Binomial Regression. Cambridge University Press.
Hoover, J., Johnson, K., Boghrati, R., Graham, J., & Dehghani, M. (2018). Moral
Framing and Charitable Donation: Integrating Exploratory Social Media Analyses
and Confirmatory Experimentation. Collabra: Psychology, 4(1), 9.
https://doi.org/10.1525/collabra.129
Hoover, J., Portillo-Wightman, G., Yeh, L., Havaldar, S., Davani, A. M., Lin, Y.,
Kennedy, B., Atari, M., Kamel, Z., Mendlen, M., Moreno, G., Park, C., Chang, T.
E., Chin, J., Leong, C., Leung, J. Y., Mirinjian, A., & Dehghani, M. (2020). Moral
Foundations Twitter Corpus: A Collection of 35k Tweets Annotated for Moral
Sentiment. Social Psychological and Personality Science, 1948550619876629.
https://doi.org/10.1177/1948550619876629
Hoover, J., Johnson-Grey, K., Dehghani, M., & Graham, J. (2017). Moral Values
Coding Guide. https://doi.org/10.31234/osf.io/5dmgj
Horberg, E. J., Oveis, C., & Keltner, D. (2011). Emotions as Moral Amplifiers: An
Appraisal Tendency Approach to the Influences of Distinct Emotions upon Moral
Judgment. Emotion Review, 3(3), 237–244.
https://doi.org/10.1177/1754073911402384
Hornsey, M. J., Smith, J. R., & Begg, D. (2007). Effects of norms among those with
moral conviction: Counter‐conformity emerges on intentions but not behaviors.
Social Influence, 2(4), 244–268. https://doi.org/10.1080/15534510701476500
Luttrell, A., Petty, R. E., Briñol, P., & Wagner, B. C. (2016). Making it moral: Merely
labeling an attitude as moral increases its strength. Journal of Experimental Social
Psychology, 65, 82–93. https://doi.org/10.1016/j.jesp.2016.04.003
Mooijman, M., Hoover, J., Lin, Y., Ji, H., & Dehghani, M. (2018). Moralization in social
networks and the emergence of violence during protests. Nature Human
Behaviour, 2(6), 389–396. https://doi.org/10.1038/s41562-018-0353-0
Mooijman, M., Meindl, P. J., Oyserman, D., Monterosso, J., Dehghani, M., Doris, J. M.,
& Graham, J. (2018). Resisting Temptation for the Good of the Group: Binding
Moral Values and the Moralization of Self-Control. Journal of Personality and
Social Psychology. https://doi.org/10.1037/pspp0000149
Rozin, P., & Singh, L. (1999). The Moralization of Cigarette Smoking in the United
States. Journal of Consumer Psychology, 8(3), 321–337.
https://doi.org/10.1207/s15327663jcp0803_07
Schoenewolf, G. (1990). Emotional contagion: Behavioral induction in individuals and
groups. Modern Psychoanalysis, 15(1), 49–61.
Skitka, L. J., Bauman, C. W., & Lytle, B. L. (2009). Limits on legitimacy: Moral and
religious convictions as constraints on deference to authority. Journal of
Personality and Social Psychology, 97(4), 567–578.
https://doi.org/10.1037/a0015998
Skitka, L. J., Bauman, C. W., & Sargis, E. G. (2005). Moral conviction: Another
contributor to attitude strength or something more? Journal of Personality and
Social Psychology, 88(6), 895–917. https://doi.org/10.1037/0022-3514.88.6.895
Stieglitz, S., & Dang-Xuan, L. (2013). Emotions and Information Diffusion in Social
Media—Sentiment of Microblogs and Sharing Behavior. Journal of Management
Information Systems, 29(4), 217–248. https://doi.org/10.2753/MIS0742-1222290408
Valenzuela, S., Piña, M., & Ramírez, J. (2017). Behavioral Effects of Framing on Social
Media Users: How Conflict, Economic, Human Interest, and Morality Frames
Drive News Sharing: Framing Effects on News Sharing. Journal of
Communication, 67(5), 803–826. https://doi.org/10.1111/jcom.12325
Voelkel, J. G., & Feinberg, M. (2018). Morally Reframed Arguments Can Affect Support
for Political Candidates. Social Psychological and Personality Science, 9(8), 917–
924. https://doi.org/10.1177/1948550617729408
Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online.
Science, 359(6380), 1146–1151. https://doi.org/10.1126/science.aap9559
Wolsko, C., Ariceaga, H., & Seiden, J. (2016). Red, white, and blue enough to be green:
Effects of moral framing on climate change attitudes and conservation behaviors.
Journal of Experimental Social Psychology, 65, 7–19.
https://doi.org/10.1016/j.jesp.2016.02.005