Expressing Values and Group Identity through Behavior and Language
Kate Marie Johnson-Grey
A dissertation presented to
the Graduate School of the University of Southern California
in partial fulfillment of the requirements for the degree
Doctor of Philosophy (PSYCHOLOGY)
May 2018
Dedication
This dissertation is dedicated to my grandmother, Marie Dorothy Johnson. You are my
inspiration. You always believed in me, and I could have never gotten here without you.
TABLE OF CONTENTS

Expressing Values and Group Identity through Behavior and Language
Dedication
Dissertation Introduction
    References
Chapter 1: Political Identity Polarization: Measurement, Consequences, and Implications for Intervention Success
    Abstract
    Introduction
        Political Extremism, Left and Right
        Measuring Political Division
        The Current Research
    Study 1
        Method
        Results
        Discussion
    Study 2: Political Identity Polarization, Liking, and Discrimination
        Method
        Results
        Discussion
    Study 3: Political Identity Polarization and Hostile Political Activism
        Method
        Results
        Discussion
    Study 4: Political Identity Polarization, Online Anti-Prejudice Interventions, and Implicit Attitudes
        Method
        Results
        Discussion
    Study 5: Political Identity Polarization and In-Person Anti-Prejudice Interventions
        Method
        Results
        Discussion
    General Discussion
        Future directions
        Conclusion
    References
Chapter 2: Measuring Abstract Mindsets through Syntax: Improvements in Automating the Linguistic Category Model
    Abstract
    Introduction
        Construal Level Theory
        The LCM: Measuring Abstraction in Language
        Automated Methods for Coding Abstract Mindsets
        Syntax-LCM: Automating the LCM Using Syntax
    Syntax-LCM Method Development
        Establishing Ground Truth: LCM Manual, Hand-Coded Abstraction
        Syntax-LCM Method
    Study 1
        Method
        Results and Discussion
    Study 2
        Method
        Results
        Discussion
    Study 3
        Method
        Results
    General Discussion
        Conclusion
    References
Chapter 3: Do Moral Judgments Align with Moral Behaviors? A Meta-Analytic Review
    Abstract
    Introduction
        Defining Moral Judgment and Moral Behavior
        Competing Hypotheses 1 & 2: Morality as a Powerful Motivator
        Competing Hypotheses 3 & 4: Morality as Objective Fact (Universally Applied Across Judgment Targets)
        Competing Hypotheses 5 & 6: Morality is Universal Across Time and Space
        Competing Hypotheses 7 & 8: Self Reports of Moral Behavior
        The Current Meta-Analysis
    Method
        Search Strategy
        Inclusion and Exclusion Criteria
        Coding Procedures
        Analysis Overview
    Results
        Overall Analyses
        Moderator Analyses
    General Discussion
        Morality should be different
        Measurement Matters
        Theoretical Implications
        Practical Implications for Future Research
        Conclusion
    References
Dissertation Introduction
Cultural and individual differences in moral concerns underlie many of the important
interpersonal and inter-societal conflicts of our time (Graham et al., 2011; Lovett, Jordan, &
Wiltermuth, 2012; Skitka, 2010). Moral values are uniquely motivating due to their deep
connection to one’s identity and overarching sense of self (Aquino & Reed, 2002; Feather,
1995), and our morals are strongly emotional, driven by deeply-felt gut responses such as disgust
and outrage (Skitka, 2010). The tenacity of these beliefs can lead people to endorse and uphold
their moral convictions across situations and time, even in the face of strong arguments and facts
against their position.
Since 2001, there has been an explosion of interest across the social sciences in moral
values and ethical conduct (Greene et al., 2001; see also Haidt, 2007 for discussion). Studies
illuminating the mechanisms underlying moral judgment and cognition have continued to
develop through experimental designs, and generalized measurement scales grounded in Moral
Foundations Theory (Graham et al., 2011) have played a central role in the measurement of
moral beliefs. As the field continues to grow, targeted measurement scales that capture these
moral mechanisms provide the opportunity for deeper theoretical understanding, and a need
arises for guidance in measurement best practices.
The present work provides two novel techniques and meta-analytic guidance for the
measurement of psychological constructs related to the moral domain. In Chapter 1, we present
the Political Identity Polarization Scale (PIPS), which we developed to capture differences in
group-based prejudice between two morally and ideologically opposing groups: conservatives
and liberals. In five studies, we find that the PIPS outperforms existing methods, including Right
Wing Authoritarianism, Social Dominance Orientation, affective polarization, and ideological
extremity, when predicting negative intergroup outcomes. In Studies 1 and 2, we show that
Political Identity Polarization predicts partisan social distancing preferences and ideology-based
discriminatory behavior for participants across the political spectrum, regardless of whether they
self-identify as conservative, liberal, or moderate. In Study 3, we transition to assessing political
outcomes and find that the PIPS is uniquely predictive of likelihood to participate in hostile
activism. Finally, in Studies 4 and 5, we use the PIPS as an outcome measure to test the
effectiveness of online and in-person political polarization-reduction intervention attempts. We
find that our multi-faceted measure of polarization provides unique insight into the groups of
people who are positively affected by these interventions, as well as those whom the interventions
fail to reach.
In Chapter 2, we transition from explicit, self-report measures to measurement of
linguistic artifacts. Researchers of Construal Level Theory (CLT; Trope & Liberman, 2010) have
found that the extent to which a person holds a relatively abstract (i.e., big picture, central
features) or concrete (i.e., focused on the present contextual details, secondary features)
representation of an object or event may have significant implications for a host of morally-
relevant outcomes including the harshness of moral judgments (Kahn & Björklund, 2017) and
extent to which moral judgments guide moral behaviors (Torelli & Kaikati, 2009). Current
hand-coded (Linguistic Category Model; Coenen, Hedebouw, & Semin, 2006) and computer
automated methods (Brysbaert Concreteness Ratings; Brysbaert, Warriner & Kuperman, 2014;
LIWC LCM; Seih, Beier, & Pennebaker, 2016) of measuring construal level in text exist.
However, the hand-coding technique is particularly resource-intensive, and automated methods
do not incorporate key context-based coding rules (e.g., copulas as adjectives). To fill the need
for a more nuanced automated measure, we developed the Syntax-LCM, a set of R functions that
use abstract and concrete syntactic features to approximate hand-coded LCM scores in text. In
three studies, we provide evidence for the unique predictive validity of our new method
compared to the Brysbaert and LIWC LCM methods when predicting hand-coded abstraction
scores. Additionally, we find that our measure generalizes to a variety of content domains, to
hand-coded scores produced by researchers both in and outside of our lab, and to Twitter datasets with unique
syntactic and linguistic patterns.
Finally, in Chapter 3, we meta-analytically assess the existing moral judgment-moral
behavior research (252 independent effect sizes from ~140,000 participants) to test theoretical
assumptions in the field of moral psychology and identify methodological best practices. While
we often highlight the unique consistency of moral judgments and behaviors compared to other
types of non-moral attitudes (Skitka, 2010), we also lament the frequency with which we fail to
live up to our moral values (Gino, Ayal, & Ariely, 2013). We find that our moral judgments do
not align more strongly with our behaviors than other types of attitudes, and that significant
variability exists in the size of the relationship between these two constructs. We caveat that it is
possible that moral behaviors are more personally costly, and that moral imperatives may be
necessary to drive even moderate relationships between judgments and these types of behaviors.
We also find that while self-reported past behavior may be a good proxy for objectively
measured behaviors, intentions are susceptible to unique measurement moderators (e.g.,
correspondence between time and context measurement features) and are much more strongly
correlated with moral judgments than other types of behavioral measures.
References
Aquino, K., & Reed II, A. (2002). The self-importance of moral identity. Journal of Personality
and Social Psychology, 83(6), 1423-1440.
Brysbaert, M., Warriner, A. B., & Kuperman, V. (2014). Concreteness ratings for 40 thousand
generally known English word lemmas. Behavior Research Methods, 46(3), 904–911.
Coenen, L.H.M., Hedebouw, L., & Semin, G.R. (2006). Measuring language abstraction: The
Linguistic Category Model (LCM). Retrieved January 20, 2017, from
http://www.cratylus.org/Text/1111548454250-3815/pC/1111473983125
6408/uploadedFiles/1151434261594-8567.pdf
Feather, N. T. (1995). National identification and ingroup bias in majority and minority groups:
A field study. Australian Journal of Psychology, 47(3), 129-136.
Gino, F., Ayal, S., & Ariely, D. (2013). Self-serving altruism? The lure of unethical actions that
benefit others. Journal of Economic Behavior & Organization, 93, 285-292.
Graham, J., Nosek, B. A., Haidt, J., Iyer, R., Koleva, S., & Ditto, P. H. (2011). Mapping the
moral domain. Journal of Personality and Social Psychology, 101, 366-385.
Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI
investigation of emotional engagement in moral judgment. Science, 293(5537), 2105-
2108.
Haidt, J. (2007). The new synthesis in moral psychology. Science, 316, 998-1002.
Kahn, D. T., & Björklund, F. (2017). Judging those closest from afar: The effect of
psychological distance and abstraction on value–judgment correspondence in responses
to ingroup moral transgressions. Peace and Conflict: Journal of Peace Psychology, 23(2),
153.
Lovett, B. J., Jordan, A. H., & Wiltermuth, S. S. W. (2012). Individual differences in the
moralization of everyday life. Ethics and Behavior, 22(4), 248-257.
Seih, Y., Beier, S., & Pennebaker, J.W. (2016). Development and examination of the linguistic
category model in a computerized text analysis method. Journal of Language and Social
Psychology, 1-13.
Skitka, L. J. (2010). The psychology of moral conviction. Social and Personality Psychology
Compass, 4(4), 267-281.
Torelli, C. J., & Kaikati, A. M. (2009). Values as a predictor of judgments and behaviors: The
role of abstract and concrete mindsets. Journal of Personality and Social Psychology,
96(1), 231-247.
Trope, Y., & Liberman, N. (2010). Construal-level theory of psychological distance.
Psychological Review, 117(2), 440–463.
Chapter 1: Political Identity Polarization: Measurement, Consequences, and Implications
for Intervention Success
Abstract
Catastrophic outcomes can result from moral crusades of both the political left and right, whether
they be mass executions of the aristocracy in the name of equality and liberty during the French
Revolution or genocide in the name of racial group purity during the Nazi regime. The present
studies develop a method of measuring “us versus them” ideological prejudice called the
Political Identity Polarization Scale (PIPS), and use this measure to explore the implications of
political identity polarization for hostile political intergroup interactions and civility intervention
success. Across five studies, we find that the PIPS successfully predicts negative intergroup
outcomes such as social and physical closeness preferences (preregistered Study 1), discrimination
against those perceived to belong to the opposing party (preregistered Study 2), and hostile
activism (Study 3) for individuals across the political spectrum. Finally, we find that this new scale has pragmatic
utility in helping to understand when intergroup prejudice interventions will successfully reduce
ideological polarization. Both online (Study 4) and in-person (Study 5) classic intergroup
prejudice interventions were effective at reducing prejudice for people who were only slightly
polarized. However, we find that few extremely polarized people volunteer to attend in-person
civility events, and that online intervention attempts may not be effective for people high in
political identity polarization.
Keywords: prejudice, political ideology, extremism, intergroup conflict
Introduction
Ideological division between liberals and conservatives is a major societal problem in the
United States, and it has been getting worse (Pew Research Center, 2014). Polarization between
Democrats and Republicans has steadily increased in both houses of the United States Congress
since the 1970s, and current polarization levels have reached record highs both within the
government and in the general public (Bonica, McCarty, Poole, & Rosenthal, 2015; Pew
Research Center, 2010, 2016). Given the significant, negative ramifications of extreme
polarization for communal and governmental functioning (Hetherington & Rudolph, 2015;
Valdesolo & Graham, 2016), it has become increasingly important for researchers to understand
the sources of political prejudice, measure its prevalence in our communities, and identify
effective interventions to decrease polarization and partisan hostility.
Political Extremism, Left and Right
During the 2016 presidential race, Republican nominee Donald Trump stated that Hillary
Clinton, the Democratic Party front-runner, was “a major national security risk” and “perhaps the
most dishonest person to ever have run for president” (New York Times, 2016). Democratic
nominee Hillary Clinton referred to Donald Trump in a speech saying “Donald Trump's ideas
aren't just different -- they are dangerously incoherent. They're not even really ideas, just a series
of bizarre rants, personal feuds, and outright lies” (Reilly, 2016). Compromise with the opposing
party has become taboo, and even candidates within the same political party target their
competitors for being too moderate, pressuring political leaders to present themselves as more
ideologically extreme.
As political leaders have become more polarized and hostile toward opposing parties, the
divide between liberals and conservatives in the public has also widened (Iyengar, Sood, &
Lelkes, 2012; Valdesolo & Graham, 2016). While publicly expressed negative attitudes towards
others based on characteristics such as race and gender have become less socially and legally
acceptable over time (Pew Research Center, 2010, 2016), discrimination and negative group
perceptions based on political and ideological affiliation have not been similarly constrained.
Affective polarization—the tendency to feel favorably toward ingroup partisans and negatively
toward outgroup partisans (Green, Palmquist, & Schickler, 2004)—has continued to increase
dramatically since the 1970s (Iyengar et al., 2012; Pew Research Center, 2014). In residential
neighborhood choice (Bishop, 2008; Motyl, 2014; Motyl, 2016; Motyl, Iyer, Oishi, Trawalter, &
Nosek, 2014), as well as social tie preferences (Chopik & Motyl, 2016; Rosenfeld, Reuben, &
Falcon, 2011), partisans are purposefully distancing themselves from each other. Evidence even
suggests that partisans are likely to discriminate against those from opposing political parties
more strongly than they discriminate against others based on race (Iyengar & Westwood, 2015).
Notably, conservatives are not the only ideological group exhibiting increased intergroup
hostility. While classic measures of intergroup intolerance have primarily predicted hostility as a
problem for political conservatives (Right-Wing Authoritarianism, RWA; Altemeyer, 1981;
1998; Zakrisson, 2005; Social Dominance Orientation, SDO; Pratto et al., 1994), recent research
has emphasized that the survey items within these scales may have led to biased outcomes due
to their reliance on conservative beliefs. Researchers have noted that both liberals and
conservatives show equal cross-ideological hostility when the target is changed (Crawford,
Brandt, Inbar, Chambers, & Motyl, 2017; Crawford, Mallinas, & Furman, 2015; Crawford,
Modri, & Motyl, 2013). For example, prejudice on the right against African Americans may
reflect, in part, the fact that this group tends to vote for Democrats. When groups that vote for
Republicans (such as Christians and members of the military) are the targets, prejudice is
stronger on the left.
Measuring Political Division
In order to measure political division and test the effectiveness of polarization-reduction
interventions, researchers have relied on various
attitudinal measures to capture political prejudice on both the right and the left. The most
frequently used single measure of cross-political party prejudice is the partisan temperature
rating scale of affective polarization, which has been used widely by polls and political scientists
for decades (e.g., ANES, 2010; Iyengar, Sood, & Lelkes 2012). This scale benefits from its
simplicity; respondents rate how warm or cold they feel toward different political groups on a
scale from 0 (cold) to 100 (warm), and these ratings can then be compared with the respondents’
self-identified political party to calculate negative perceptions of outgroups. People’s responses
to these simple and straightforward questions have been shown to correlate with their implicit
biases against political groups (Iyengar & Westwood, 2015) and predictions of partisan
candidate success (Iyengar, Sood, & Lelkes, 2012).
Ideological extremity has also often been used as an indicator of intergroup hostility
(Toner, Leary, Asher, & Jongman-Sereno, 2013; van Prooijen, Krouwel, Boiten, & Eendebak,
2015). However, stronger party identification also predicts outcomes that communities want to
promote, such as voting (Bartels, 2000) and participation in political campaign activities
(Groenendyk & Banks, 2014). These effects make it difficult to disentangle the potential
negative intergroup ramifications we hope to minimize from the correlated positive engagement
outcomes we want to promote.
The Current Research
Prejudice is a multi-faceted psychological phenomenon stemming from both affective
and more reasoned or cognitive processes, and research indicates that many of the components of
prejudice that apply to other social categories are also relevant to political prejudice and
polarization. Negative emotions (Cottrell & Neuberg, 2005; Iyengar & Westwood, 2015),
stereotyping (Graham, Nosek, & Haidt, 2012), motivational attributions (Waytz et al., 2014), and
social distancing (Iyengar, Sood, & Lelkes, 2012) all contribute to reinforcing negative political
intergroup perceptions and hostility.
Research has also shown important differences in the motivational outcomes of ingroup
identification and outgroup hate. While some research has shown that ingroup love and outgroup
hate are strongly correlated in the political realm, other researchers have argued that many
negative intergroup outcomes between morally charged groups are a function more of outgroup
hate than ingroup love (Parker & Janoff-Bulman, 2013; Brewer, 1999).
Given that most current measures rely on a single aspect of prejudice or identity, we
hypothesized that researchers would benefit from taking a more comprehensive view of political
prejudice when attempting to implement and test the effectiveness of political prejudice
reduction interventions. To this end, we developed the Political Identity Polarization Scale
(PIPS), a measure which synthesizes current theoretical components of prejudice and captures
and compares negative conservative/liberal group perceptions. We developed the items in the
Political Identity Polarization Scale (PIPS) to capture differences in people’s negative
perceptions toward both liberals and conservatives: high Political Identity Polarization scores
indicate that the person holds strongly negative attitudes toward one partisan group but not the
other.
Across five studies, we test four general PIPS hypotheses. First, we hypothesized that a
multi-faceted measure would account for unique variance when predicting hostile intergroup
outcomes compared with existing measures (i.e., ideological extremity and affective
polarization) (H1). If our measure is distinct from these scales, then we would expect the PIPS,
ideological extremity, and affective polarization to either account for unique variance or lead to
different intergroup outcomes. However, if PIPS is highly correlated with existing measures and
does not provide additional predictive accuracy for hostility beyond them, we would
conclude that our measure is not necessary when quantifying political prejudice.
Second, we hypothesized that the difference between perceptions of liberals and
conservatives would contribute unique variance when compared with measurement of perceptions
toward either group alone (H2). If perceptions of either group alone are sufficient to account for PIPS
explanatory power, then this would indicate that hostility is not a function of group contrasts.
However, if we find that the difference between group perceptions accounts for unique variance
beyond perceptions of either group, then this result would indicate the importance of capturing
group disparities when predicting hostile outcomes.
Additionally, an increasing number of Americans identify as either moderate or
independent, making it difficult for researchers to study extremism and prejudice in those who
refuse openly to take a side (Jones, 2015). Despite not explicitly affiliating, however, many people still
lean toward one side or the other (Hawkins & Nosek, 2012). Our measure benefits from
removing personal group identification from the equation. Instead, we measure perceptions of both
liberals and conservatives, allowing for direct comparison between the two groups for all
people across the ideological spectrum (as well as outside of it). Therefore, we also test the
hypothesis that PIPS will predict hostility for both self-identified partisans and non-partisans
(H3; Studies 1 and 2). If we find that our scale is predictive for partisans and not predictive for
non-partisans, then we would expect that the applicability of our measure will be limited to only
those individuals who belong to the two groups being assessed. However, if PIPS is predictive
across all self-identified ideologies, we would provide a measure that could be used to determine
whether moderates or independents are truly agnostic or if they too may sometimes be politically
polarized and act out of prejudice.
Finally, in Studies 4 and 5, we sought to test our final main hypothesis (H4): whether the PIPS
could provide a more nuanced understanding of political prejudice interventions’ effectiveness
for reducing hostility and negative group perceptions. In Study 4, we conducted an exploratory
analysis to identify whether political hostility interventions are differentially effective for people
across the PIPS spectrum, and in Study 5 we sought to confirm that intervention effectiveness is
predicted by PIPS scores in field settings. For all studies, we report how we determined our
sample size, all data exclusions (if any), all manipulations, and all measures in the study. The
hypotheses and analysis plans for Studies 1 and 2 were preregistered on the Open Science
Framework, with details provided in the study methods sections. All materials, datasets, and R
analysis syntax are available at osf.io/7v8zw/?view_only=4139ebba7eac4b5fab46812386cd2c14.
Study 1
In Study 1, we develop the Political Identity Polarization Scale (PIPS) and test its validity
for predicting differences in intergroup social preferences. We hypothesized that PIPS scores
would account for unique variance in disparities between liberal and conservative social
closeness preferences beyond measures of ideology (Hypotheses 1 and 2). Specifically, we
hypothesized that liberals and conservatives high in political identity polarization would show
greater disparities in the extent to which they would like to be socially and physically close to
liberals and conservatives (Hypothesis 1), and PIPS scores would account for variance beyond
that of ideological extremity (Hypothesis 2). We also hypothesize that PIPS scores should predict
differences in social closeness preferences even when the polarized participants do not explicitly
identify as either a liberal or a conservative (Hypothesis 3).¹
Next, we compared our measure to two other existing measures of political prejudice:
Right Wing Authoritarianism (RWA) and Social Dominance Orientation (SDO). We
hypothesized that Political Identity Polarization scores would be distinct from RWA and SDO,
such that the PIPS would predict physical and social closeness preferences for both liberals and
conservatives, whereas RWA and SDO would only predict closeness preferences for
conservatives (Hypothesis 4).
Finally, we test whether political identity polarization and ideological extremity have
unique predictive validity for ingroup social closeness preferences and outgroup social closeness
preferences (Hypothesis 5, pre-registered hypothesis 3). Specifically, we hypothesize that
ideological extremity will be a significant predictor of preferences to be closer to one’s ingroup
members, but we expect that PIPS scores will account for unique variance beyond ideological
extremity for predicting outgroup closeness preferences.
All hypotheses and analysis decisions in Study 1 were preregistered prior to data
collection on the Open Science Framework and are available at
https://osf.io/zg2vf/?view_only=e6f74322ecff464e8fcec98d8c90bc82.
¹ See supplemental materials for full details about the pilot study development of the political
identity polarization scale (Supplemental Study: Pilot) and a supplemental study replicating the
results of Study 1 (Supplemental Study A). Note that hypotheses are listed in the order in which
they were presented in the preregistration, not in the order of presentation in the paper.
Method
Participants and procedure. We aimed to recruit 500 participants (M age = 34.67, 45.9%
female) through Amazon Mechanical Turk (AMT; Buhrmester, Kwang, & Gosling, 2011) to
allow for sufficient conservative representation in our sample given the ideological breakdown
and effect sizes identified in a pilot study. Five hundred and five participants completed the
survey. We deleted six cases due to duplicate IP addresses, leaving a final sample size of 499;
261 participants identified as liberal, 82 moderate, 111 conservative, 9 libertarian, and the
remainder responded “don’t know,” “not political,” or “other.”
First, all participants completed an eight-item measure of physical and social distance
preferences. They then completed the 10-item PIPS, the Right-Wing Authoritarianism Scale, and
the Social Dominance Orientation scale in randomized order. Finally, participants provided
demographic information including gender, age, ideology, and religious attendance.
Social closeness measure. Participants answered the following eight questions in
randomized order (four item pairs, Cronbach’s α = 0.70) reflecting physical and social closeness
preferences toward liberals and conservatives: “If you were sitting on a bench with a
[conservative/liberal], how close to them would you be willing to sit?”, “How similar do you
think you are to [liberals/conservatives]?”, “How much would you want to have a conversation
with a [liberal/conservative]?”, and “According to my first feelings (reactions), I would willingly
admit a [conservative/liberal] into the following classifications:” (Dehghani et al., 2016). All
questions were on a 5-point scale from not at all to very much, except for the last pair of items
which were on a 7-point scale from as close relatives by marriage to would exclude from my
country. Four items were then reverse coded so that higher numbers indicate higher physical and
social closeness preferences.
Next, we created five scores using these items reflecting liberal closeness preferences,
conservative closeness preferences, ingroup closeness preferences, outgroup closeness
preferences, and differences in closeness preferences. We standardized all items and combined
the four standardized liberal items to create the liberal closeness preferences score (M = 0.00, SD
= 0.76, Cronbach’s α = 0.76), and the four standardized conservative items to create the
conservative closeness preference score (M = 0.00, SD = 0.71, Cronbach’s α = 0.67). To create a
partisan closeness preferences score, we computed the absolute value of the difference between
each of the four closeness preference item pairs, standardized these four scores, and then
averaged across them (M = 0.00, SD = 0.84, Cronbach’s α = 0.86).
Finally, for participants who identified as liberal or conservative, we also differentiated
between ingroup and outgroup social closeness preferences using their self-identified ingroup’s
subscale as the ingroup closeness preference score (e.g., all “liberals” items for liberal
participants) and the opposing group as the outgroup closeness preference score (e.g., all
“conservatives” items for liberal participants). These scores were not calculated for those
identifying as moderate, libertarian, “don’t know,” or “other.”
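To make the construction of these scores concrete, the following R sketch illustrates the steps described above on a small simulated data frame. The column names (lib_bench, con_admit, etc.) and the simulated responses are hypothetical placeholders and are not taken from the posted OSF analysis syntax.

    # Illustrative sketch only; variable names and data are hypothetical placeholders.
    # (Assumes the four reverse-keyed items have already been recoded, as described above.)
    set.seed(1)
    n <- 8
    dat <- data.frame(
      ideology     = sample(c("liberal", "conservative", "moderate"), n, replace = TRUE),
      lib_bench    = sample(1:5, n, replace = TRUE), lib_similar = sample(1:5, n, replace = TRUE),
      lib_converse = sample(1:5, n, replace = TRUE), lib_admit   = sample(1:7, n, replace = TRUE),
      con_bench    = sample(1:5, n, replace = TRUE), con_similar = sample(1:5, n, replace = TRUE),
      con_converse = sample(1:5, n, replace = TRUE), con_admit   = sample(1:7, n, replace = TRUE)
    )
    lib_items <- c("lib_bench", "lib_similar", "lib_converse", "lib_admit")
    con_items <- c("con_bench", "con_similar", "con_converse", "con_admit")

    # Standardize all items, then average within each target group
    dat[, c(lib_items, con_items)] <- scale(dat[, c(lib_items, con_items)])
    dat$lib_closeness <- rowMeans(dat[, lib_items])
    dat$con_closeness <- rowMeans(dat[, con_items])

    # Partisan closeness preferences: absolute difference for each item pair,
    # standardized, then averaged across the four pairs
    pair_diffs <- abs(dat[, lib_items] - dat[, con_items])
    dat$partisan_closeness <- rowMeans(scale(pair_diffs))

    # Ingroup/outgroup closeness only for self-identified liberals and conservatives
    dat$ingroup_closeness  <- ifelse(dat$ideology == "liberal", dat$lib_closeness,
                              ifelse(dat$ideology == "conservative", dat$con_closeness, NA))
    dat$outgroup_closeness <- ifelse(dat$ideology == "liberal", dat$con_closeness,
                              ifelse(dat$ideology == "conservative", dat$lib_closeness, NA))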
The Political Identity Polarization Scale (PIPS). The Political Identity Polarization
Scale (PIPS) consists of 10 items (see Table 1).² This scale is made up of two subscales, with five
items measuring prejudiced perceptions of liberals (anti-liberal prejudice subscale; Cronbach’s α
= 0.93, mean inter-item correlation = 0.67; scale M = 2.3, SD = 1.4) and five items measuring
prejudiced perceptions of conservatives (anti-conservative prejudice subscale; Cronbach’s α =
0.90, mean inter-item correlation = 0.64; scale M = 2.9, SD = 1.3). Participants answer all items
in randomized order on a 7-point Likert-type scale from 1 (strongly disagree) to 7 (strongly
agree). The anti-liberal prejudice and anti-conservative prejudice subscales were created by
taking the average of the five items for each target group, and the political identity polarization
score was computed by taking the average difference between each item pair (anti-liberal item
minus anti-conservative item for each statement).

² The original PIPS included 20 items, and all 20 items were measured in Studies 1 and 2. However, the
final version of the scale used for the interventions was reduced to 10 items, which retained sufficient
Cronbach’s α while increasing the scale’s practical applicability in real-world settings. All results are
consistent when the 20-item measure is used in place of the 10-item measure.
Table 1
The Political Identity Polarization Scale
Anti-Conservative Prejudice Items
1. Most conservatives are motivated in part by their hatred of poor people.
2. Conservatives are generally good people. (r) *
3. Most conservatives are unsophisticated rednecks.*
4. I generally dislike conservatives.*
5. Conservatives have gained so much power in American society that they are on the
verge of destroying the country.*
Anti-Liberal Prejudice Items
1. Most liberals are motivated in part by their hatred for America.
2. Liberals are generally good people. (r)*
3. Most liberals are lazy people who want government handouts.*
4. I generally dislike liberals.*
5. Liberals have gained so much power in American society that they are on the verge of
destroying the country.*
Note: (r) items reverse coded. Items marked with * used for the Political Identity Polarization
short-form measure in Study 3.
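As a concrete illustration of the scoring just described, the R sketch below operates on simulated responses. The item names (anti_con_1 through anti_lib_5) are hypothetical placeholders, and because the text describes a signed pairwise difference (anti-liberal minus anti-conservative), an absolute-value variant is shown only as a commented alternative rather than as the scale's definitive scoring rule.

    # Illustrative sketch only; item names and data are hypothetical placeholders.
    set.seed(2)
    n <- 8
    pips <- as.data.frame(matrix(sample(1:7, n * 10, replace = TRUE), nrow = n))
    names(pips) <- c(paste0("anti_con_", 1:5), paste0("anti_lib_", 1:5))

    # Reverse-code the "generally good people" item in each subscale (item 2)
    # on the 7-point scale: recoded value = 8 - original value
    pips$anti_con_2 <- 8 - pips$anti_con_2
    pips$anti_lib_2 <- 8 - pips$anti_lib_2

    anti_con <- paste0("anti_con_", 1:5)
    anti_lib <- paste0("anti_lib_", 1:5)

    # Subscale scores: mean of the five items for each target group
    pips$anti_con_prejudice <- rowMeans(pips[, anti_con])
    pips$anti_lib_prejudice <- rowMeans(pips[, anti_lib])

    # Political identity polarization: average pairwise difference
    # (anti-liberal item minus anti-conservative item), as described in the text
    pips$polarization <- rowMeans(pips[, anti_lib] - pips[, anti_con])
    # pips$polarization <- rowMeans(abs(pips[, anti_lib] - pips[, anti_con]))  # absolute-value variant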
Right Wing Authoritarianism. Participants completed the 15-item short form Right
Wing Authoritarianism scale with items in randomized order (RWA; α = 0.93; Zakrisson, 2005).
For each statement, participants rated their agreement using a 6-point Likert scale ranging from 1
(Strongly Disagree) to 6 (Strongly Agree). Seven items were reverse scored, and participants’ RWA
scores were then calculated by averaging their responses across all items.
Social Dominance Orientation. Participants completed the 15-item Social Dominance
Orientation scale with items in randomized order (SDO; α = 0.95; Sidanius & Pratto, 2001). For
each statement, they rated their agreement using a 7-point Likert scale ranging from 1 (Strongly
Disagree) to 7 (Strongly Agree). Seven items were reverse scored, and participants’ SDO
scores were then calculated by averaging their responses across all items.
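The same reverse-and-average logic applies to both the RWA and SDO scales; a minimal R sketch for RWA is below. Which seven items are reverse-keyed is not listed here, so rev_items is a hypothetical placeholder rather than the actual Zakrisson (2005) scoring key.

    # Illustrative sketch only; the reverse-keyed item set is a hypothetical placeholder.
    set.seed(3)
    rwa <- as.data.frame(matrix(sample(1:6, 8 * 15, replace = TRUE), nrow = 8))
    names(rwa) <- paste0("rwa_", 1:15)

    rev_items <- paste0("rwa_", c(2, 4, 6, 8, 10, 12, 14))  # hypothetical reverse-keyed items
    rwa[rev_items] <- 7 - rwa[rev_items]                     # reverse on the 6-point scale

    rwa_score <- rowMeans(rwa)                               # average across all 15 items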
Ideological extremity. Participants answered a single item measuring their ideological
identification, “When it comes to politics, do you usually think of yourself as liberal, moderate,
conservative, or something else?” on a 7-point scale ranging from 1 (very liberal) to 7 (very
conservative) with three additional options available: “not political/don’t know,” “libertarian,” or
“other.” We created an ideological extremity measure ranging from 0 (neutral) to 3 (very) by
folding the scale so that moderates were coded as 0, and liberals and conservatives with similar
extremity levels were coded as increasingly higher numbers.³
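A minimal sketch of this folding step, assuming the 7-point item is stored as a numeric variable (here called ideo_7) with the non-numeric response options set to missing:

    # Fold the 1 (very liberal) to 7 (very conservative) scale around the midpoint (4 = moderate)
    ideo_7 <- c(1, 3, 4, 5, 7, NA)           # hypothetical responses; NA = not political/libertarian/other
    ideological_extremity <- abs(ideo_7 - 4)  # 0 = moderate, 3 = very liberal or very conservative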
Results
Correlations between all outcomes of interest are summarized in Table 2.
³ Results are consistent for all analyses when all participants who did not identify as liberal or
conservative were included at the neutral point.
Table 2
Study 1 Correlations between PIPS, ideological extremity, RWA, and SDO

                                  Partisan closeness   Ideological
                                  preferences          extremity    RWA       SDO
Political identity polarization   0.64***              0.55***      -0.05     -0.13
Partisan closeness preferences    1                    0.39***      0.24***   0.06
Ideological extremity                                  1            -0.10     -0.18*
RWA                                                                 1         0.48**
SDO                                                                           1

Note: * p < .05, ** p < .01, *** p < .001
Hypotheses 1 and 2. PIPS scores account for unique variance in disparities between
liberal and conservative social closeness preferences beyond measures of ideology. First, we ran
a Pearson’s correlation analysis to quantify the size of the relationships among PIPS, partisan
social closeness preferences, and ideological extremity. Supporting Hypothesis 1, we found that
higher PIPS scores were positively correlated with a greater preference to be close to
ideologically similar others and a preference to be further from ideologically dissimilar others
(see Table 2).
However, PIPS scores and ideological extremity were also significantly and positively
correlated. To test Hypothesis 2 and ensure that PIPS scores account for unique variance beyond
ideological extremity, we ran a hierarchical linear regression with ideological extremity entered
at the first step and PIPS scores at the second step predicting partisan closeness preferences (see
Table 3, models a).
Results of this analysis supported Hypothesis 2: in the single predictor model, ideological
extremity was a significant predictor of partisan social closeness preferences. However, in the
two-predictor model, PIPS scores were predictive of partisan closeness preferences, and
ideological extremity was no longer significant. The inclusion of PIPS in the model explained an
additional 26% of variation in partisan social closeness preferences, and this change in R² was
significant.
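The hierarchical step just described can be implemented in R as two nested linear models compared with an F test; the sketch below uses simulated data and hypothetical variable names, so its output will not reproduce the values in Table 3.

    # Illustrative sketch only; simulated data, hypothetical names.
    set.seed(4)
    n <- 200
    d <- data.frame(extremity = rnorm(n), pips = rnorm(n))
    d$partisan_closeness <- 0.2 * d$extremity + 0.5 * d$pips + rnorm(n)

    m1 <- lm(partisan_closeness ~ extremity, data = d)         # Step 1
    m2 <- lm(partisan_closeness ~ extremity + pips, data = d)  # Step 2
    anova(m1, m2)                                              # F test for the change in R^2
    summary(m2)$r.squared - summary(m1)$r.squared              # delta R^2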
Table 3
Summary of Hierarchical Regression Analysis for Variables Predicting Partisan Closeness Preferences (Study 1: H1)

Variable                  β      SE    t         Partial η²    R²     ∆R²    Model comparison F
Step 1                                                         .22    .22
  Ideological Extremity   .27    .02   11.16**   .22
Step 2a                                                        .44    .22    F(1,447) = 179.52**
  Ideological Extremity   .09    .02   3.77**    .03
  PIPS                    .21    .02   13.40**   .29
Step 2b                                                        .23    .01    F(1,447) = 6.99*
  Ideological Extremity   .28    .02   11.52**   .23
  SDO                     .05    .02   2.65*     .02
Step 2c                                                        .10    .08    F(1,447) = 63.52**
  Ideological Extremity   .29    .02   12.65**   .26
  RWA                     .11    .02   7.97**    .12
Step 3b                                                        .46    .23    F(1,447) = 192.66**
  Ideological Extremity   .10    .02   4.26**    .04
  SDO                     .07    .02   4.10**    .04
  PIPS                    .22    .02   13.88**   .30
Step 3c                                                        .54    .26    F(1,446) = 214.85**
  Ideological Extremity   .11    .02   4.96*     .05
  RWA                     .11    .02   9.62**    .17
  PIPS                    .21    .02   14.66**   .33

Note: * p < .05, ** p < .001
Hypothesis 3: Political identity polarization predicts partisan closeness preferences for
liberals, conservatives, and people unaffiliated with either group. We hypothesized that higher
PIPS scores would be correlated with larger differences in liberal and conservative closeness
preferences for people across the ideological spectrum. To test these hypotheses, we computed
Pearson’s r correlations between PIPS scores and partisan closeness differences for all
participants, liberals, conservatives, and participants who did not self-identify with either
ideology. Supporting Hypothesis 3, higher PIPS scores predicted more partisan social closeness
preferences for liberals, r (259) = 0.60, 95% CI [0.51,0.67], p < .001, for conservatives, r (109) =
0.71, 95% CI [0.61,0.79], p < .001, and for those who did not report belonging to either
ideological group, r (109) = 0.62, 95% CI [0.49,0.73], p < .001.
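These subgroup correlations and their confidence intervals can be computed by applying cor.test within each ideological group; the R sketch below uses simulated data and hypothetical variable names, so the values will not match those reported above.

    # Illustrative sketch only; simulated data, hypothetical names.
    set.seed(5)
    d <- data.frame(ideology = sample(c("liberal", "conservative", "neither"), 300, replace = TRUE),
                    pips = rnorm(300))
    d$partisan_closeness <- 0.6 * d$pips + rnorm(300)

    for (grp in unique(d$ideology)) {
      sub <- d[d$ideology == grp, ]
      ct  <- cor.test(sub$pips, sub$partisan_closeness)  # Pearson's r with a 95% CI
      cat(grp, ": r =", round(ct$estimate, 2),
          ", 95% CI [", round(ct$conf.int[1], 2), ",", round(ct$conf.int[2], 2), "]\n")
    }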
Hypothesis 4: Political identity polarization predicts closeness preferences across the
ideological spectrum, while Right Wing Authoritarianism and Social Dominance Orientation
only predict closeness preferences for conservatives. Next, to test Hypothesis 4, we ran two sets
of hierarchical linear regressions predicting partisan social closeness preferences: a single-
predictor model with political extremity, two two-predictor models with political extremity and
either RWA or SDO, and two three-predictor models with political extremity, RWA or SDO, and
PIPS (see Table 3, models b and c). Finally, to test whether RWA/SDO are predictive for
conservatives but not liberals, we ran the two three-predictor hierarchical tests again separately for
the liberal and conservative subsets of our sample (see Table 3).
In both two-predictor models, higher scores on classic prejudice measures (i.e., RWA and
SDO) and greater ideological extremity accounted for unique variance in predicting more
partisan social closeness preferences. In both three-predictor models, we again found that classic
prejudice measures predicted unique variance in the model, but also found that PIPS scores
added a large portion of additional explained variance. In both models, the variance accounted
for by ideological extremity dropped to either small (RWA model 3c) or non-significant (SDO
model 3b). These results support Hypothesis 1 and indicate that PIPS scores are unique from
classic measures and explain partisan social closeness preferences beyond ideological extremity.
Next, we tested the second half of Hypothesis 4; we hypothesized that RWA and SDO
would only be predictive of closeness preferences for conservatives (the first half of this
prediction was tested in the section for Hypothesis 3). To test for this relationship, we ran two
separate Pearson’s r correlation analyses comparing RWA, SDO, and partisan social closeness
preferences for liberal participants and conservative participants. Results of these
analyses supported our fourth hypothesis for RWA: while RWA was a significant predictor of
partisan social closeness preferences for conservatives, r (109) = .43, 95% CI [.27,.57], p < .001,
it did not predict partisan social closeness preferences for liberals, r (259) = -.07, 95% CI [-
.19,.05], p = .253. However, results for SDO did not match our prediction: we found that SDO was not
a significant predictor of partisan social closeness preferences for conservatives, r (109) = .07,
95% CI [-.11,.26], p = .435, and it was negatively correlated with partisan social closeness preferences for
liberals, r (259) = -.14, 95% CI [-.26,-.02], p = .021. These results indicate that RWA is
predictive for conservatives and not liberals, but that SDO does not positively correlate with
social closeness preferences for either group.
Hypothesis 5: ideological extremity will be a stronger predictor of ingroup social
closeness preferences, but PIPS scores will account for unique variance beyond ideological
extremity for predicting outgroup closeness preferences. Finally, we sought to test for
differences in strength of both ideological extremity and PIPS for predicting different types of
social closeness preferences. For this analysis, we ran two linear regression models with
ideological extremity and PIPS scores predicting social closeness to one’s ingroup and social
closeness to one’s outgroup.
Results of these analyses supported Hypothesis 5. Ideological extremity was a stronger
predictor of ingroup closeness preferences, β = 0.25, SE = .04, partial η² = .12, p < .001,
though PIPS scores also contributed to the model, β = 0.07, SE = .02, partial η² = .04, p = .003,
R² = .024, F(2, 256) = 40.28, p < .001. However, PIPS scores were the only significant
predictor of outgroup social closeness preferences, β = -0.28, SE = .04, partial η² = .37, p < .001,
and ideological extremity did not account for additional variance, β = -0.03, SE = .08,
partial η² = .00, p = .744. These results indicate that ideological extremity and political
identity polarization provide novel contributions when predicting social closeness preferences
with different ideological groups.
Discussion
In Study 1, we found that political identity polarization scores are predictive of intergroup
preferences above and beyond existing measures such as ideological extremity, RWA and SDO.
Our results supported our hypotheses that people with more polarized scores on the PIPS were
more likely to want to be closer to ingroup members and to distance themselves from outgroup
members. These effects were consistent for liberals, conservatives, and those who identify with a
third option or with no ideological group (e.g., libertarians, not political).
Additionally, PIPS scores and ideological extremity provided unique predictive validity for
social closeness preferences relating to ingroups and outgroups. While both measures predicted
ingroup closeness, ideological extremity was the stronger predictor of how close people wanted
to be to their ingroup members. By comparison, political identity polarization--and not
ideological extremity--was the strongest predictor of how close people wanted to be to outgroup
members. This result indicates that while ideological extremists prefer greater ingroup closeness,
political identity polarization is better able to capture both ingroup closeness and outgroup
closeness preferences. This provides support for the importance of capturing perceptions toward
both groups when attempting to predict partisan intergroup outcomes.
Study 2: Political Identity Polarization, Liking, and Discrimination
In Study 2, we sought to compare the predictive validity of the PIPS to that of the most
commonly used cross-partisan political prejudice indicator: affective polarization. Typical
affective polarization measures assess the difference between people’s warmth ratings for
Democrats and Republicans (Iyengar & Westwood, 2015), though some work has also used
differences in liking (Chambers, Schlenker, & Collison, 2013). Because these measures hold
great explanatory power for discriminatory behavior toward outgroup members in favor of
ingroup members (e.g., Iyengar & Westwood, 2015), the PIPS would need to add predictive
power beyond assessments of liking or warmth for it to be of benefit for predicting hostile
intergroup outcomes and intervention success.
To compare affective polarization and political identity polarization, we obtained the
original study design materials in an attempt to replicate the discrimination results found by
Iyengar and Sherwood (2015, Study 3) for liberal and conservative target groups. Using
economic games with real monetary risk and outcomes, they found that that Democratic and
Republican participants gave more money to game partners they perceive to be copartisans than
those perceived to be political outgroup members.
In this study, we sought to test three hypotheses. First, we sought to replicate the results
found by Iyengar and Westwood (2015) showing biased game behavior against ideological
outgroup members for liberals and conservatives (Hypothesis 1). Additionally, we hypothesized
that this effect would be stronger for highly polarized participants as measured by the PIPS
(Hypothesis 2). Finally, we hypothesized that the PIPS would be a stronger predictor of
intergroup discrimination in these games than either ideological extremity or differences in
partisan liking (Study Hypothesis 3, General Hypothesis 1). All hypotheses and analysis
decisions in Study 2 were pre-registered prior to data collection on AsPredicted.org (#1769) and
are available in the Supplemental Materials. All materials, scripts and data are available on OSF
at https://osf.io/7v8zw/?view_only=4139ebba7eac4b5fab46812386cd2c14.
Method
Participants and procedure. We recruited 300 American adults through Amazon’s
Mechanical Turk to participate in an online study titled “Decisions in Games with Bonus
Money” (M age = 35.13, 56.3% female). Participants indicated their ideological identification on
a 5-point scale to maintain consistency with the original Iyengar & Westwood (2015) study,
ranging from 1 (conservative) to 5 (liberal). Ninety-one participants identified as conservative, 89 as
moderate, and 120 as liberal. Participants were not informed in advance of the relevance of
political ideology to the study. Each person was randomly assigned to complete either four
rounds of a dictator game or four rounds of a trust game, and they were asked to answer a few
demographic questions to be shared in a profile blurb with each of their four game partners. All
participants were told that they would be given $0.10 at the beginning of each round, which they
could choose to split between themselves and their partner as they wished. They were informed
that their actual bonus outcomes tied to their monetary allocation choices in the game rounds,
and they completed two practice rounds of the game to ensure they understood the payout
structure.
In the dictator game, participants received their $0.10 and were told that they could
allocate the money each round between themselves and a partner. They were reminded that if
they chose not to give any money to the other participant, that participant would receive no
bonus money for that round. Participants in the trust game were told after receiving their $0.10
that they could choose to allocate their money as they saw fit, but any amount of money given to
the other participant would be tripled. The second participant would then get to choose how
much of that money they wanted to give back to the first participant, although they were not
required to give any back. Participants were informed that they would not be given feedback
after each round, but they would receive information about the total amount they would get for
their bonus at the end of the study with their completion code.
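To make the payout structure concrete, the arithmetic for a single hypothetical trust-game round (amounts in cents) works as follows; the specific amounts chosen here are illustrative only.

    # One hypothetical trust-game round (amounts in cents)
    endowment <- 10                                # $0.10 given at the start of the round
    sent      <- 4                                 # amount the first player chooses to send
    received  <- sent * 3                          # the sent amount is tripled for the partner
    returned  <- 6                                 # amount the partner chooses to return (may be 0)
    player1_payoff <- endowment - sent + returned  # 10 - 4 + 6 = 12
    player2_payoff <- received - returned          # 12 - 6 = 6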
At the beginning of each round, participants viewed a list of demographic information
about their game partner, including age, gender, political affiliation, and income. As in the
Iyengar and Westwood (2015) study, we varied the game partner’s age between 32 and 38
years, income between $39,000 and $42,000, and fixed gender as male. All participants
received two game partners who were liberal and two who were conservative in random order.
After the four game rounds, participants then completed the 2-item affective polarization
measure and the PIPS. The order in which they answered these scales was randomized.
Biased game behavior. To create a measure of biased game behavior, we calculated the average
money given in all liberal trials across both games and the average money given in all
conservative trials across both games. Then, we subtracted the amount of money given in liberal
trials from that given in conservative trials and took the absolute value of this difference.
Differences in partisan liking. Following the methods outlined by Iyengar and
Westwood (2015), participants were asked to rate how warm or cold they perceived liberals and
conservatives to be on scales from 0 (very cold) to 100 (very warm). We then computed affective
polarization by subtracting liberal warmth from conservative warmth, and took the absolute
value of this score.
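For illustration, a minimal Python sketch of these two difference scores is given below; the data frame and its column names (give_liberal_1, give_liberal_2, give_conservative_1, give_conservative_2, warmth_liberal, warmth_conservative) are hypothetical placeholders rather than the variable names in our actual data files, which are available on OSF.

```python
# Illustrative sketch only; column names are hypothetical placeholders.
import pandas as pd

def biased_game_behavior(df: pd.DataFrame) -> pd.Series:
    """Absolute difference between the average amount given in liberal trials
    and the average amount given in conservative trials (both games pooled)."""
    liberal_mean = df[["give_liberal_1", "give_liberal_2"]].mean(axis=1)
    conservative_mean = df[["give_conservative_1", "give_conservative_2"]].mean(axis=1)
    return (conservative_mean - liberal_mean).abs()

def affective_polarization(df: pd.DataFrame) -> pd.Series:
    """Absolute difference between warmth toward conservatives and toward liberals (0-100)."""
    return (df["warmth_conservative"] - df["warmth_liberal"]).abs()
```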
Results
300 participants completed all parts of the study, with 211 participants identifying
themselves as liberal or conservative.
Hypothesis 1: Participants should exhibit biased game behavior against ideological
outgroup members. First, to replicate Iyengar and Westwood (2015) and to test Hypothesis 1, we
conducted a paired samples t-test to compare the total amount of money given to game partners
who matched participants’ political affiliation (political ingroup members) and game partners
who were from the participants’ political outgroup (political outgroup members). Results of this
analysis indicated that participants gave significantly more money on average to partners who
were copartisans (M = 5.28, SD = 2.99) than to partners who belonged to the political outgroup
(M = 4.73, SD = 2.92), t(210) = 4.84, p < .001, Cohen’s d = 0.31.
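The following sketch illustrates this paired comparison in Python, assuming two hypothetical arrays containing the total amount each participant gave to ingroup and to outgroup partners; the effect size shown is the paired-samples variant of Cohen's d (d_z), which may differ slightly from the value reported above.

```python
# Illustrative sketch; `ingroup` and `outgroup` are hypothetical arrays of amounts given.
import numpy as np
from scipy import stats

def paired_bias_test(ingroup, outgroup):
    """Paired t-test for ingroup vs. outgroup giving, plus d_z = mean diff / SD of diffs."""
    t_stat, p_value = stats.ttest_rel(ingroup, outgroup)
    diff = np.asarray(ingroup, dtype=float) - np.asarray(outgroup, dtype=float)
    d_z = diff.mean() / diff.std(ddof=1)
    return t_stat, p_value, d_z
```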
Hypothesis 2: PIPS scores predict greater giving to political ingroup game partners
compared to outgroup game partners. To test Hypothesis 2, we computed a Pearson correlation to
assess the relationship between PIPS scores and biased game behavior. The results of this
analysis supported Hypothesis 2; higher PIPS scores were correlated with more money given to
co-partisan game partners than outgroup game partners, r(209) = 0.26, 95% CI [.15,.40], p <
.001, Cohen’s d = 0.42.
Hypothesis 3: PIPS scores will be a stronger predictor of intergroup discrimination than
either ideological extremity or differences in partisan liking. Finally, we ran a hierarchical linear
regression predicting discrimination, with ideological extremity in model 1, ideological
extremity and affective polarization in model 2, and ideological extremity, affective polarization,
and PIPS scores in model 3 (see Table 4). Supporting Hypothesis 3, we found that political
identity polarization was predictive of biased game behavior above and beyond affective
polarization and ideological extremity.
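A minimal sketch of this three-step hierarchical regression using the statsmodels formula interface is shown below; the data frame and its column names (bias, extremity, affect_pol, pips) are illustrative stand-ins rather than our actual analysis scripts, which are available on OSF.

```python
# Illustrative sketch; column names are hypothetical stand-ins.
import statsmodels.formula.api as smf

def hierarchical_regression(df):
    """Three nested OLS models predicting biased game behavior, with F-tests
    for the improvement of each step over the previous (restricted) model."""
    m1 = smf.ols("bias ~ extremity", data=df).fit()
    m2 = smf.ols("bias ~ extremity + affect_pol", data=df).fit()
    m3 = smf.ols("bias ~ extremity + affect_pol + pips", data=df).fit()
    f_step2, p_step2, _ = m2.compare_f_test(m1)  # does affective polarization add variance?
    f_step3, p_step3, _ = m3.compare_f_test(m2)  # does the PIPS add variance beyond both?
    return m3, (f_step2, p_step2), (f_step3, p_step3)
```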
Table 4
Summary of Hierarchical Regression Analysis for Variables Predicting Biased Game Behavior (Study 2).

Step 1 (R² = .03, ΔR² = .03)
  Ideological Extremity: β = .62, SE = .24, t = 2.56*, partial η² = .03
Step 2a (R² = .07, ΔR² = .04; model comparison F(1, 208) = 10.08**)
  Ideological Extremity: β = .42, SE = .24, t = 1.73, partial η² = .01
  Affective Polarization: β = .01, SE = .00, t = 2.13**, partial η² = .04
Step 3c (R² = .08, ΔR² = .02; model comparison F(1, 207) = 4.16*)
  Ideological Extremity: β = .38, SE = .24, t = 1.55, partial η² = .01
  Affective Polarization: β = .01, SE = .01, t = 1.10, partial η² = .00
  PIPS: β = .25, SE = .12, t = 2.04*, partial η² = .02
Note: * p < .05, ** p < .001
Discussion
In Study 2, we replicated the results of Iyengar and Westwood (2015): participants in
economic games gave more money to copartisans than to people whom they perceived to be from
the opposing ideological party. Additionally, we found that scores on the PIPS accounted for
variance beyond affective polarization when predicting biased behavior in favor of one's
ideological ingroup in economic games.
Study 3: Political Identity Polarization and Hostile Political Activism
Our main goals for Study 3 were twofold. First, we wanted to test the predictive efficacy
of the Political Identity Polarization Scale on a third measure of negative partisan outcomes:
likelihood to participate in hostile political activism. Following previous studies, we
hypothesized that the PIPS would be a stronger predictor than ideological extremity when
predicting cross-partisan hostile activism such as yelling at disagreeing others (Activism Orientation
Scale; Corning & Myers, 2002) (Hypothesis 1). Additionally, we found that ideological
extremity was a stronger predictor of ingroup closeness and PIPS was a stronger predictor of
outgroup distancing in Study 1. Therefore, we expected ideological extremity to be the strongest
predictor of conventional, group-promoting activism such as voting and canvassing (Hypothesis
2). If we find support for our first two hypotheses, these results would indicate that PIPS and
ideological extremity are useful for predicting unique subsets of political outcomes.
Method
Participants and procedure. 1,119 adult participants from the United States who had
previously completed registration at YourMorals.org volunteered to take the PIPS as one of
many possible survey opportunities available on the website (M age = 43, 33% female).
Upon registration on the YourMorals.org website, all participants were asked to indicate
their political ideology on a 7-point Likert scale from 1 (extremely liberal) to 7 (extremely
conservative). At time of registration, 1,946 participants identified as liberal, 426 identified as
moderate, 924 identified as conservative, 789 identified as libertarian, 359 responded “Other” or
“Not political/don’t know”, and 61 did not report political ideology. We calculated ideological
extremity by folding the scale at the moderate point with final scores ranging from 0 (neutral) to
3 (extreme liberal/conservative).
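A one-line sketch of this folding procedure is given below; the function name is hypothetical.

```python
def ideological_extremity(ideology: int) -> int:
    """Fold a 7-point ideology rating (1 = extremely liberal, 7 = extremely conservative)
    at the midpoint (4 = moderate) into a 0-3 extremity score."""
    return abs(ideology - 4)
```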
Finally, a subset of the participants also chose to complete one or more additional
psychological measures on the website. For these participants, we assessed relationships between
ideological extremity, the PIPS, and the Activism Orientation Scale (AOS; n = 230).
Activism Orientation Scale. The Activism Orientation Scale (AOS; Corning & Myers,
2002) was developed to capture an individual's propensity to engage in general social actions
across various activism behaviors. Items are collapsed into two subscales: conventional activism
(α = 0.77), which includes group-promotion behaviors such as likelihood of voting in the next
election and displaying a bumper sticker with a political message, and high-risk activism (α =
0.73), which includes hostile activism behaviors such as engaging in violent protest and yelling
at a disagreeing other. Participants completed the short-form AOS (Klar & Kasser, 2009), in
which they rate 13 items reflecting these two facets of activism (four items reflecting high-risk
activism and nine items reflecting conventional activism) on a 4-point Likert scale from 0
(Definitely will not) to 3 (Definitely will). In this paper, high-risk activism is referred to as hostile
activism for clarity.
Results
See Table 5 for full zero-order pairwise correlations between variables.
Hypothesis 1: PIPS will be a stronger predictor of cross-partisan propensity to commit
hostile activism than ideological extremity. To test our first hypothesis, we conducted a
hierarchical regression analysis predicting propensity to commit hostile activism, with
ideological extremity entered as a single predictor in model 1 and ideological extremity and PIPS
entered into model 2 (see Table 5 for zero-order correlations and Table 6 for model statistics).
Supporting Hypothesis 1, PIPS scores remained a significant predictor of hostile activism after
controlling for ideological extremity, whereas ideological extremity was no longer a significant
predictor once the PIPS was included in the model (Table 6, Step 2a).
Hypothesis 2: Conventional activism will be better predicted by ideological extremity
than political identity polarization. Supporting Hypothesis 2, the regression assessing
conventional activism showed the opposite pattern. We found that ideological extremity was the
strongest predictor of conventional activism, β = .17, p < .001, and after controlling for ideology,
political identity polarization was no longer significant, β = .09, p = .051, F(1, 286) = 12.23, p <
.001. These results indicate that supporting ingroup leaders through voting and canvassing may
be a result of ingroup identification and not group perception differences.

Table 5
Study 3 Correlations between PIPS, ideological extremity, and activism orientation

Political identity polarization with ideological extremity: r = 0.47*; with conventional activism: r = 0.31*; with hostile activism: r = 0.33*
Ideological extremity with conventional activism: r = 0.35*; with hostile activism: r = 0.33*
Conventional activism with hostile activism: r = 0.43*
Note: * p < .001
Table 6
Summary of Hierarchical Regression Analysis for Variables Predicting Hostile Activism Likelihood (Study 3).

Step 1 (R² = .09, ΔR² = .09)
  Ideological Extremity: β = .17, SE = .04, t = 4.03**, partial η² = .09
Step 2a (R² = .15, ΔR² = .06; model comparison F(1, 157) = 11.19**)
  Ideological Extremity: β = .08, SE = .05, t = 1.60, partial η² = .02
  PIPS: β = .18, SE = .06, t = 3.35**, partial η² = .07
Step 2b (R² = .15, ΔR² = .06; model comparison F(1, 157) = 8.59**)
  Ideological Extremity: β = .05, SE = .05, t = 0.98, partial η² = .01
  Affective Polarization: β = .20, SE = .06, t = 3.29**, partial η² = .06
Step 3b (R² = .10, ΔR² = .02; model comparison F(1, 207) = 5.33*)
  Ideological Extremity: β = .38, SE = .24, t = 1.58, partial η² = .01
  Affective Polarization: β = .01, SE = .01, t = 1.04, partial η² = .01
  PIPS: β = .28, SE = .12, t = 2.31*, partial η² = .03
Note: * p < .05, ** p < .001
Discussion
In Study 3, we found that while ideological extremity was the strongest predictor of
propensity to participate in conventional, group-promoting activism such as voting or canvassing,
PIPS scores, and not ideological extremity, predicted propensity to participate in
hostile activism. This result suggests that political identity polarization plays a unique role in
promoting hostility between partisan groups. Finally, using a unique sample, we again found
evidence that the PIPS captures additional variability beyond that of affective polarization in
predicting hostile intergroup outcomes.
Study 4: Political Identity Polarization, Online Anti-Prejudice Interventions, and Implicit
Attitudes
In the first three studies, we found evidence for the unique predictive validity of the PIPS
for negative intergroup interactions and willingness to commit hostile activism. In Study 4, we
examined an intervention study that had previously been conducted by researchers at Project
Implicit (Ebersole, Motyl, & Nosek, unpublished data, see OSF for study details,
https://osf.io/qe9zp/) to investigate whether the PIPS could help to predict receptiveness to
political civility interventions and their effectiveness. Additionally, we investigated whether
political identity polarization uniquely predicts implicit intergroup attitudes, similar to how it
uniquely predicted explicit attitudes in previous studies.
Social psychology has produced many techniques for reducing intergroup conflict (see
Hewstone, Rubin, & Willis, 2002; Paluck & Green, 2009, for reviews). However, most of these
techniques have not been tested in the context of ideological conflict. To better inform efforts to
promote civility across political lines, researchers at Project Implicit compiled a wide range of
interventions designed to improve intergroup relations, adapted them to be relevant to liberal-
conservative conflict, and tested their effectiveness against one another. They expected that, if
ideological group conflict functions similarly to other types of intergroup conflict (e.g., conflict
along lines of race, nationality), some of these past techniques might demonstrate effectiveness
at reducing biased attitudes of political opponents. However, they did not find support for this
effect. The various interventions tested on Project Implicit failed to produce attitude change
across the political spectrum (see Results, below, for a summary), suggesting that these previous
interventions may not translate to the political realm.
Alternatively, it could be the case that these interventions are effective for some people
but not others. Ideally, these interventions would lead politically
polarized people to become less biased against political outgroups. However, there are reasons to
suspect that strong partisans might be particularly resistant to attempts to promote civility. As
suggested in the results of previous research, political disagreements are often framed in moral
terms, whereby one side’s position is viewed as upholding moral good in the face of a
threatening evil opponent (Graham & Haidt, 2012). Moral sacredness has been shown to lead to
boomerang effects, whereby profane offers of money or reduced sanctions can produce
reactance, moving attitudes in the direction opposite to that intended (Dehghani et al., 2010).
To examine this second possibility, we conducted an exploratory follow-up of the Project
Implicit data to investigate whether intervention success varied across levels of political identity
polarization. We hypothesized that PIPS scores would interact with intervention
success such that participants who have moderate PIPS scores would show more intervention
success (operationalized as more similar perceptions of both groups) than those with high or low
PIPS scores (Hypothesis 1). Support for this hypothesis would indicate that the success of
political bias interventions relies on participants coming into the intervention with some, but
not too much, bias. This result would also indicate that different intervention techniques may be
necessary for reducing political prejudice for people who are more or less biased to begin with.
Additionally, it is possible that these interventions may have succeeded at decreasing bias
for some aspects of negative political group perceptions while not significantly affecting other
aspects. For example, if these interventions were unsuccessful at increasing warmth perceptions,
but did successfully decrease negative stereotypes, then capturing the effects of the interventions
on affective polarization alone may lead researchers to determine that the interventions were
ineffective. However, because the PIPS captures five aspects of political prejudice, it may be a
more sensitive tool for determining intervention effectiveness. We hypothesize that we will find
that the interventions were successful for moderately polarized participants when using the PIPS
as the key dependent measure but that, like the original researchers, we will not find intervention
success when using affective polarization as the key dependent measure (Hypothesis 2).
Finally, this study provided the opportunity to investigate the relation between political
identity polarization, affective polarization, and implicit political attitudes. Past research has
indicated that political attitudes show some of the strongest implicit-explicit correlations among
many measured domains (Nosek, 2005; Nosek & Hansen, 2008). Many features of political
attitudes, such as their lack of self-presentation demands and high elaboration, lend themselves
to producing high implicit-explicit correspondence (Nosek, 2005). We hypothesized that the
PIPS would be the strongest predictor of implicit political attitudes, above and beyond affective
polarization and ideological extremity (Hypothesis 3).
Method
Participants and procedure. 844 residents of the United States participated in the study
on Project Implicit (implicit.harvard.edu), an online survey site. Participants visiting Project
Implicit are randomly assigned to one of the ongoing studies hosted on the site. In this study,
participants were randomly assigned to one of five intervention conditions or a control condition.
Afterward, participants completed the PIPS, two measures of ideology (see below), a measure of
implicit bias (Republican/Democrat, Good/Bad Implicit Association Test; Greenwald, McGhee,
& Schwartz, 1998), and affective polarization (self-reported warmth perceptions toward
Republicans and Democrats, calculated as in previous studies).
Ideological extremity. The original study included both a measure of ideology on social
issues and a measure of ideology on economic issues. Given that ideological animosity stems
largely from differences in perceptions of social issues, we used social issue ideology and created
an ideological extremity measure using the same method as in previous studies.
Implicit bias extremity. The dataset provided by the original researchers at Project Implicit
included a pre-calculated IAT implicit bias score. As detailed on OSF, this score was calculated
using the D algorithm recommended by Greenwald, Nosek, & Banaji (2003). They excluded
participants that met any of the following criteria: more than 10% of critical trials were faster
than 300 ms, critical block error rate was greater than 40%, or overall error rate across all
combined response blocks was over 30% (Nosek, et al., 2007). They then removed response
latencies under 400 ms or over 10000 ms, replaced categorization errors with the block mean of
correct latencies plus 600 ms, and included all other trials in the d score calculations. They
calculated the means of correct responses for the critical blocks (blocks 3, 4, 6, and 7) and
inclusive standard deviations for blocks 3 and 6, and 4 and 7. Finally, they took the difference of
blocks 6 and 3, and 7 and 4, and divided each difference by its respective inclusive standard
deviation.
The IAT implicit bias D score was then calculated by averaging these two quotients. A
positive D score indicates on average faster response between Republican/good word pairs and
Democratic/bad word pairs compared to the reverse. Positive scores are interpreted as an implicit
preference for Republicans compared to Democrats. Finally, we calculated implicit bias
extremity by taking the absolute value of the D score, with larger numbers indicating more bias.
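For illustration, the sketch below implements the core D-score steps for a single participant, assuming a hypothetical trial-level data frame with block, latency, and error columns; the participant-level exclusion rules described above are omitted, and the actual scores in the dataset were pre-computed by Project Implicit.

```python
# Illustrative sketch of the D-score steps; the trial-level data frame is hypothetical.
import numpy as np
import pandas as pd

def iat_d_score(trials: pd.DataFrame) -> float:
    """Approximate IAT D score from trials with columns: block (3, 4, 6, 7),
    latency (ms), and error (0 = correct, 1 = error)."""
    # Drop latencies outside the 400-10000 ms window
    t = trials[(trials.latency >= 400) & (trials.latency <= 10000)].copy()
    # Replace error-trial latencies with the block mean of correct latencies + 600 ms
    correct_means = t[t.error == 0].groupby("block").latency.mean()
    t.loc[t.error == 1, "latency"] = t.loc[t.error == 1, "block"].map(correct_means) + 600
    quotients = []
    for practice, test in [(3, 6), (4, 7)]:
        inclusive_sd = t[t.block.isin([practice, test])].latency.std(ddof=1)
        mean_diff = t[t.block == test].latency.mean() - t[t.block == practice].latency.mean()
        quotients.append(mean_diff / inclusive_sd)
    # Average the two quotients; taking abs() of the result gives implicit bias extremity
    return float(np.mean(quotients))
```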
Political bias interventions. The five interventions were based on intergroup conflict
interventions that have been successful in previous social psychological research. They are each
briefly described here (see OSF for full intervention text). With the exception of self-affirmation,
all interventions were conceptual instantiations of past demonstrations. Humanization had
participants learn about several positive qualities of an individual before later revealing that they
were a member of the opposing political party (cf. Motyl, Hart, Pyszczynski, Weise, Maxfield, &
Siedel, 2011). In superordinate threat, participants read an article explaining a national threat (in
this case, cyber-attacks against the United States) that could only be addressed through bipartisan
efforts (similar to Pyszczynski, et al., 2012). This should create a superordinate goal between the
two groups, which has been shown to reduce intergroup conflict (Gaertner et al., 1999; Sherif et
al., 1961). In observing civility, participants watched a brief video describing the relationship
between Ronald Reagan (R) and Tip O’Neill (D). The video explains that, although the two often
disagreed, they maintained a civil and amicable relationship, providing an opportunity for social
learning on the part of the participant (e.g., Bandura, 1965; 1971). The zero-sum intervention
exposed participants to an article that suggests that zero-sum politics are to blame for current
gridlock in government. By explaining this issue to participants, we hoped to reduce reliance on
zero-sum mentalities, which can undermine compromise and increase hostility. Finally, in the self-
affirmation intervention, participants selected a personal trait that they valued and spent a few
minutes writing about how they embody that trait. Self-affirmation has been shown to reduce
defensive responding (Critcher, Dunning, & Armor, 2010; Sherman & Cohen, 2006) and
increase processing of opposing views (Cohen, Aronson, & Steele, 2000). Participants in the
control condition simply proceeded to the dependent measures.
Results
We excluded 23 participants because they did not complete the partisan liking difference
score dependent variable, and an additional ten participants were excluded because they did not
complete the PIPS. This left a final sample size of 811.
Replication of previous results: online anti-prejudice interventions do not differentially
affect outcome measures. First, recreating the primary test of the original Project Implicit
intervention study analyses, we sought to determine whether there were significant differences
between the five intervention conditions in either affective polarization or PIPS scores. Results
of an ANOVA with intervention condition predicting affective polarization did not indicate
significant differences between intervention conditions, F (5, 610) = 0.49, p = .788, and an
ANOVA with intervention condition predicting PIPS was also not significant, F (5, 610) = 0.76,
p = .582. Therefore, we collapsed participants from all intervention conditions into one overall
intervention condition, which we compared to the control condition for further analysis, and a
t-test comparing the combined intervention condition to control also did not show a significant
effect of condition on outgroup liking, t(730) = 1.57, p = .118, or PIPS scores, t(730) = 1.31,
p = .190. (A table of means is included in the Supplemental Materials; there were also no
significant differences in ingroup liking or outgroup liking by condition.)
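A sketch of these replication checks is given below, assuming a hypothetical data frame with condition and pips columns (with a 'control' level); the same code could be rerun with affective polarization as the outcome.

```python
# Illustrative sketch; the data frame and its column names are hypothetical.
from scipy import stats

def condition_checks(df):
    """One-way ANOVA across all conditions, then a two-sample t-test comparing
    the collapsed intervention conditions against the control condition."""
    groups = [g["pips"].values for _, g in df.groupby("condition")]
    f_stat, p_anova = stats.f_oneway(*groups)
    is_control = df["condition"] == "control"
    t_stat, p_t = stats.ttest_ind(df.loc[~is_control, "pips"], df.loc[is_control, "pips"])
    return (f_stat, p_anova), (t_stat, p_t)
```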
Hypothesis 1: The effectiveness of online anti-prejudice interventions varies across levels
of political identity polarization. Next, to test Hypothesis 1, we conducted a follow-up
exploratory analysis to determine whether the effectiveness of the anti-prejudice interventions
varied across PIPS scores. The Project Implicit data are cross-sectional, so previous analyses
relied on comparing group means when assessing intervention success. In our case, however, we
were interested in exploring differences in intervention effectiveness at low, moderate, and high
levels of bias, because we expected that intervention effects would not be homogeneous among
participants across the polarization spectrum. Following recommendations by Wilcox (2016), we
conducted a quantile shift analysis comparing independently bootstrapped estimates of PIPS
scores for the control and intervention conditions at the 50th and 75th quantiles, using the qcomhd
function in the WRS2 R package (Mair & Wilcox, 2017; Wilcox & Erceg-Hurn, 2012). The
quantile shift technique uses the Harrell and Davis (1982) quantile estimator and computes a
bootstrapped confidence interval of the differences across quantiles while controlling for multiple
comparisons. This analysis provides a robust estimate of the extent to which two groups differ in
their distributions at each specified quantile and describes how the data would need to be shifted
for the two distributions to match (Rousselet, Pernet, & Wilcox, 2017).
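Because qcomhd is an R routine, the sketch below gives a rough Python analogue of a single quantile comparison, pairing the Harrell and Davis estimator with a simple percentile bootstrap; it is illustrative only and does not reproduce qcomhd's adjustment for multiple comparisons.

```python
# Rough illustrative analogue of a single quantile comparison (not the original R code).
import numpy as np
from scipy.stats.mstats import hdquantiles

def hd_quantile_diff(control, intervention, q=0.5, n_boot=2000, seed=0):
    """Bootstrap the difference in Harrell-Davis estimates of the q-th quantile
    between two independent groups; returns the mean difference and a 95% CI."""
    control = np.asarray(control, dtype=float)
    intervention = np.asarray(intervention, dtype=float)
    rng = np.random.default_rng(seed)
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        c = rng.choice(control, size=control.size, replace=True)
        i = rng.choice(intervention, size=intervention.size, replace=True)
        diffs[b] = hdquantiles(c, prob=[q])[0] - hdquantiles(i, prob=[q])[0]
    low, high = np.percentile(diffs, [2.5, 97.5])
    return diffs.mean(), (low, high)
```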
Results of this analysis indicated that there was a significant difference between the control
and intervention conditions at the 50th quantile, with a lower median PIPS score for participants
in the intervention conditions (bootstrapped median estimate = 0.76) than for participants in the
control condition (bootstrapped median estimate = 1.05), 95% CI [.02, .56], p = .012. However,
we did not find a significant difference between the control and intervention conditions in
estimated PIPS scores at the 75th quantile, 95% CI [-.09, .58], p = .160. This result supports our
first hypothesis that the effectiveness of the interventions varies across levels of political identity
polarization.
Hypothesis 2: Use of affective polarization alone would lead to the conclusion that
interventions were unsuccessful. Next, we ran the same shift analysis replacing PIPS scores with
affective polarization to test whether reliance on warmth ratings alone would lead us to conclude
that the interventions were ineffective. Results of this analysis indicated that the difference
between control (bootstrapped median estimate = 1.57) and intervention conditions
(bootstrapped median estimate = 1.91) in median estimated affective polarization scores at the
50th quantile was not significant, 95% CI [-.96, .47], p = .390. This result supports our
second hypothesis that reliance on warmth ratings alone would lead researchers to assume that
the intervention was not effective.
Hypothesis 3: Political identity polarization uniquely predicts implicit attitudes. Finally,
we sought to test whether the PIPS is a unique predictor of individuals’ implicit partisan group
bias above and beyond affective polarization and ideological extremity. As in previous studies,
we conducted a hierarchical linear regression predicting bias on the Democrat/Republican
Implicit Association Test (IAT), with ideological extremity entered into a single-predictor model,
affective polarization added to create a two-predictor model, and finally PIPS scores included to
create a three-predictor model (see Table 7). Results of this analysis indicated that PIPS scores
contributed significant unique variance to the model beyond both of the other predictors.
Table 7
Summary of Hierarchical Regression Analysis for Variables Predicting IAT extremity (Study 4).

Step 1 (R² = .02, ΔR² = .02)
  Ideological Extremity: β = .05, SE = .01, t = 3.75**, partial η² = .02
Step 2a (R² = .03, ΔR² = .02; model comparison F(1, 721) = 11.79**)
  Ideological Extremity: β = .03, SE = .01, t = 2.30*, partial η² = .01
  PIPS: β = .03, SE = .01, t = 3.43**, partial η² = .02
Step 3b (R² = .04, ΔR² = .01; model comparison F(1, 720) = 5.94*)
  Ideological Extremity: β = .03, SE = .01, t = 1.99*, partial η² = .01
  Affective Polarization: β = .01, SE = .01, t = 1.43, partial η² = .00
  PIPS: β = .04, SE = .02, t = 2.44*, partial η² = .01
Note: * p < .05, ** p < .001
Discussion
Results of Study 4 provide further evidence for the unique contributions of the Political
Identity Polarization Scale (PIPS). We found that the PIPS was a stronger predictor of implicit
attitudinal bias on the Republican/Democrat IAT than affective polarization or ideological
extremity. We also found that the PIPS was a useful tool for gauging the effectiveness of
intervention attempts. Specifically, people with moderate bias on the PIPS were most responsive
to interventions designed to decrease bias against politically dissimilar others. However, people
who have strong bias on the PIPS did not benefit from participating in the interventions.
Study 5: Political Identity Polarization and In-Person Anti-Prejudice Interventions
Study 4 provided preliminary evidence that PIPS scores are a useful measure for testing
political civility intervention effectiveness. These results suggest that those with moderate levels
of political identity polarization have the most to gain from civility interventions, while those
high in such polarization may be less receptive to these attempts. However, the previous study
has some important limitations. First, the interventions were rather brief and implemented online,
both of which could reduce their strength. Second, the intervention moderation by PIPS was not
the study’s original intent and was discovered during a re-analysis of previously collected data.
Finally, Study 4’s outcomes focused on attitudinal measures rather than behavioral intentions,
which may be more important to target as outcomes of civility efforts.
To address these shortcomings, we conducted Study 5 in conjunction with an event
specifically tailored toward decreasing inter-partisan hostility hosted by a non-profit
organization, The Village Square (http://tlh.villagesquare.us). At these events, attendees share a
meal with people from various sides of the political debate in what organizers describe as
deliberate efforts to emphasize superordinate goals and common humanity between diverse
attendees, and previous events have been shown to measurably change attendees' opinions of the
people with whom they disagree (http://tlh.villagesquare.us/blog/science-survival-guide/).
This event provided us with the opportunity to validate and extend Study 4’s findings in
the field, using a more extensive, in-person intervention and to replicate our previous results.
Specifically, we sought to test three hypotheses. First, replicating our previous results, we
hypothesized that PIPS scores should account for unique variance beyond ideological extremity
when predicting likelihood of partisan hostility at baseline (i.e., likelihood of yelling at another
person who disagrees with you on a political issue) (Hypothesis 1). Second, we
hypothesized that participation in the intervention would decrease both PIPS scores and
likelihood to yell at a disagreeing other (Hypothesis 2). Finally, following results from Study 4,
we hypothesized that participants with moderate PIPS scores would show a greater decrease in
bias post-intervention than those with high levels of bias (Hypothesis 3).
Method
Participants and procedure. 160 participants were recruited to participate in our
questionnaires as part of a dinner and lecture event entitled “A Citizen’s Election Season
Survival Guide” hosted on September 20, 2016 in Tallahassee, Florida by The Village
Square (full event description is available at http://tlh.villagesquare.us/event/survival-guide).
Participants were invited to take a pre-intervention online survey, attended the event, and were
then asked to participate in a post-intervention online survey. All survey participation
was voluntary, and previous questionnaires using similar methodologies at Village Square events
typically yield 20-40% response rates.
Pre-intervention survey. Prior to attending the event, participants were given a link to a
premeasure online survey where they completed a 10-item questionnaire with items presented in
randomized order. The PIPS was measured using an 8-item short-form version of the original
scale (see asterisked items in Table 1). Likelihood of hostile activism was measured using a
single item from the Activism Orientation Scale reflecting the likelihood of yelling at someone
who disagrees with them on a political issue in the next four years, rated from very unlikely to
very likely. Finally, ideology was measured using a single item which asked participants to
indicate where they consider themselves to be on a 10-point scale ranging from 1 (very liberal)
to 10 (very conservative). Both PIPS scores and ideological extremity were calculated in the
same manner as in previous studies.
The Village Square event. At the event, participants were seated at tables together for a
catered dinner. While eating, they listened to two featured speakers—one liberal and one
conservative—who talked about the election through the lens of their pre-existing personal
friendship in an effort to learn lessons, seek wisdom and find humor when possible.
Post-intervention survey. After the event, participants were asked to fill out the same
measures that were provided in the pre-intervention survey. Participants were first asked at the
venue at the end of the event. In addition to the in-person
request, they received a follow-up email reminder that evening. A final reminder email was sent
a few days after the event to participants who had not yet responded.
Results
Sixty-four participants completed our pre-intervention survey (demographics not
available due to the nature of the online survey; 40% response rate). Forty of these participants also
completed the post-intervention survey (63% response rate of pre-intervention sample, 25%
response rate of total attendees).
Hypothesis 1: Pre-intervention political identity polarization predicts likelihood of
partisan hostility beyond ideological extremity. To test our first hypothesis, we followed the
same hierarchical linear regression analysis plan as in previous studies, with ideological
extremity included in a single-predictor model and PIPS added into the second model.
Replicating previous results, we found that PIPS scores accounted for significant variance
beyond ideological extremity, β = 0.30, SE = 0.13, p = .028, partial η² = .08.
Hypothesis 2: PIPS scores and partisan hostility likelihood will be lower post-
intervention compared to pre-intervention. To test the effectiveness of the intervention, we
conducted a paired sample t-test to compare PIPS scores pre-intervention and post-intervention.
Results of this analysis support the first half of hypothesis 2; we found a significant difference in
PIPS scores between surveys such that participants’ post-intervention PIPS scores decreased
from their pre-intervention PIPS scores, mean difference = 0.29, 95% CI [.10,.49], t (39) = 3.10,
p = .004, Cohen’s d = 0.49. Next, we conducted a second paired sample t-test to compare
partisan hostility likelihood pre- and post-intervention. Results of this analysis did not support
the second half of Hypothesis 2; we found no significant difference in likelihood to yell at a disagreeing other
between the pre-intervention survey and the post-intervention survey, mean difference = 0.08,
95% CI [-.08,.23], t (39) = 1.00, p = .324, Cohen’s d = 0.16.
Hypothesis 3: The intervention will be more effective at decreasing bias for participants
with moderate levels of bias compared to participants with high levels of bias. Notably, the
average Political Identity Polarization score for people who volunteered to take our survey and
participate in the anti-prejudice intervention was much lower than we have found in our previous
online samples (M = 1.04, SD = 0.91). Therefore, we were unable to assess differences in the
effectiveness of this intervention between moderate and highly polarized populations.
Discussion
In Study 5, we found promising evidence that in-person prejudice reduction interventions
do help to decrease intergroup prejudice. Replicating the conclusions of Study 4, interventions
aimed at reducing political hostility can be effective at decreasing political bias for individuals
with low to moderate political identity polarization. However, we did not find significant
decreases in reported likelihood to yell at someone who disagrees on a political issue after the
intervention.
It is still unclear whether these more extensive, in-person interventions could have
effects for those who are more strongly polarized given the nature of people who attend these
events—such individuals would need to attend these events to gauge their effectiveness. In this
study, we found that those who choose to attend such events tend to be lower in polarization than
the average population. Nonetheless, such civility events provide a useful blueprint for
mitigating the negative perceptions captured by the Political Identity Polarization Scale, at least
for some of the population and in a relatively small sample.
General Discussion
Across five studies both online and in the field, we tested the effectiveness of the Political
Identity Polarization Scale (PIPS), a measure of political identity polarization, for predicting a
range of negative political intergroup outcomes. In all studies, we find evidence that the PIPS
accounts for unique variance above and beyond existing self-reported measures of ideological
extremity and classic measures such as affective polarization, Right Wing Authoritarianism and
Social Dominance Orientation. For liberals, conservatives, and even self-professed moderates,
political identity polarization predicts a wide array of negative intergroup outcomes including
outgroup distancing, willingness to engage in hostile activism, discrimination against game
partners, and reactance to civility interventions.
While PIPS scores, affective polarization, and ideological extremity were strongly
correlated across our studies, we found that these three constructs tap into unique components of
political and communal engagement. We find that ideological extremity captures one’s
endorsement of a single ideological group and is a stronger predictor of positive intragroup
outcomes such as engagement with ingroup members and conventional activism. By comparison,
the Political Identity Polarization Scale was better suited to capturing negative intergroup
outcomes by assessing differences in people’s perceptions of both liberals and conservatives
across multiple facets of prejudice. While affective polarization was related to our outcomes in a
similar manner to PIPS scores, we find that the Political Identity Polarization Scale added
predictive power for negative intergroup political outcomes beyond this measure. These results
indicate that negative intergroup outcomes may not rely solely on identification with an ingroup
or a single component of prejudice (e.g. affect); instead, it seems that it is differences in negative
perceptions about opposing groups that motivate intergroup conflict.
These findings confirm previous research that shows that hostility is not just a problem of
the conservative right (Morgan et al., 2010; Skitka & Washburn, 2015). While previous research
on Right Wing Authoritarianism and Social Dominance Orientation have linked conservatism
with outgroup hostility and violence (Chambers, Schlenker, & Collisson, 2013; Cornelis & Van
Hiel, 2006; Sibley & Duckitt, 2008), we find that undemocratic and hostile tendencies can be
found in people across the ideological spectrum. Our results indicate that RWA and SDO are
both correlated with conservative—but not liberal—biases. As perceived intolerance on the part
of liberal-leaning institutions has become an increasingly prominent issue at places like Yale
(Friedersdorf, 2015), Rutgers (NPR, 2015), Claremont McKenna (Altman, 2015), and Boston
College (Fitzgibbons, 2014), the growing evidence for political prejudice on both sides of the
ideological spectrum emphasizes the importance of identifying ways to decrease hostilities for
both liberals and conservatives.
Finally, we find that the Political Identity Polarization Scale may help us to better
understand the effectiveness of social-psychological intervention attempts for curbing hyper-
partisanship and intergroup partisan prejudice. We find that PIPS scores more strongly reflected
people's implicit attitudes on the Democrat/Republican Implicit Association Test than ideology
alone. When attempting to intervene using in-person events with voluntary participation, we find
that people who choose to engage with civility interventions tend to have lower PIPS scores than
the general population, but that these interventions are effective for decreasing willingness to act
hostilely to disagreeing others and prejudice for those who do choose to attend.
However, when we attempted to use classic anti-prejudice intervention techniques for an
online sample, we find a curvilinear relationship between political identity polarization and
intervention success. People who have below-average bias on the PIPS benefit from these anti-
prejudice interventions, and subsequently have more favorable perceptions of the group they are
biased against. However, the interventions were not effective for those who score above average
on the PIPS, with those in the intervention conditions having even less favorable perceptions of
outgroup members after receiving an intervention compared to the control. These results indicate
that the same interventions which may help people who are slightly biased toward one group
over the other may exacerbate the problem for those who are more biased. In this way the PIPS
can be a useful addition to prejudice-reduction interventions both in the lab and in the field; it
may in fact be the case that PIPS could help explain why these interventions often don’t work
when scaled up to public policy levels (Wilson, 2015; Mitchell, 2012). For instance, findings
from the lab based on college sophomores might not hold for society at large because those
students are more or less polarized than people in general (Henrich, Heine, & Norenzayan, 2010;
Sears, 1986).
Future directions
Our studies provide support for the importance of capturing political identity polarization
for predicting hostile activism. We find that by looking at the difference between a person’s
cognitive, affective, and social perceptions of two opposing ideological groups, we can better
capture hostility than if we only know their position about one of those groups or about a single
facet of prejudice. Another feature of this scale is that it allows us to capture people who feel
ambivalent about the two parties. Ambivalent individuals may express highly negative attitudes
about both groups, or relatively neutral or positive attitudes about both. Because the United
States is a two-party system, often people feel like their endorsement of a candidate or ideology
is one of picking “the lesser of two evils.” Future research could use the PIPS to investigate the
consequences of this ideological ambivalence.
An additional feature of the PIPS is that it can be easily adapted to measure intergroup
hostilities between opposing groups other than liberals and conservatives. Living Room
Conversations, an organization that leads “living room” dialogues about contentious issues, has
held several events using polarization measures adapted to the issue of criminal justice reform,
which is an issue that does not necessarily cut along traditional liberal-conservative lines (Hirsh,
2015), and found pragmatic utility in using these measures to understand the effects of these
events. Future researchers may want to apply this same methodology to numerous other divisions
and existing group conflicts, from the everyday (e.g., Yankees fans vs. Red Sox fans; Cikara,
Botvinick, & Fiske, 2011) to the seemingly intractable (e.g., Israel vs. Palestine; Ginges & Atran,
2009), or even to novel group paradigms.
Conclusion
The Political Identity Polarization Scale builds on previous research on bipartisan bias by
providing a measure specifically targeting differences in ideological perceptions. While
ideological measures best predict community-promoting activism such as voting, political
identity polarization best predicts hostile activism such as violent protest. Moreover, intervention
attempts to reduce intergroup hostility may vary in effectiveness between those who are only
slightly biased and those who are the most polarized. As political opponents become more
hostile and polarized in both government and everyday life, understanding the psychology of
politically polarized citizens will be increasingly important for efforts to promote intergroup
cooperation and civility.
References
Altemeyer, B. (1981) Right-wing authoritarianism. Winnipeg, Manitoba: University of Manitoba
Press.
Altemeyer, B. (1998). The other "authoritarian personality". In M. P. Zanna (Ed.), Advances in
Experimental Social Psychology (Vol. 30, pp. 47-92). San Diego, CA: Academic Press.
Altman, J. (2015). Claremont McKenna dean of students resigns following student protests.
USA Today. Retrieved January 8, 2016 from
college.usatoday.com/2015/11/13/claremont-mckenna-dean-resigns-following-student-
protests/.
Bandura, A. (1965). Influence of models' reinforcement contingencies on the acquisition of
imitative responses. Journal of Personality and Social Psychology, 1(6), 589-595.
Bandura, A. (1971). Social learning theory. New York, NY: General Learning Press.
Bartels, L. M. (2000). Partisanship and voting behavior, 1952-1996. American Journal of
Political Science, 44(1), 35-50.
Bishop, B. (2008). The big sort: Why the clustering of like-minded America is tearing us apart.
Boston, MA: Houghton Mifflin Harcourt.
Bonica, A., McCarty, N., Poole, K. T., & Rosenthal, H. (2015). Congressional polarization and
its connection to income inequality. In J. A. Thurber & A. Yoshinaka (Eds), American
Gridlock: The Sources, Character, and Impact of Congressional Polarization (pp. 357-
377). New York, NY: Cambridge University Press.
Brewer, M. B. (1999). The psychology of prejudice: Ingroup love or outgroup hate? Journal of
Social Issues, 55(3), 429-444.
Buhrmester, M., Kwang, T., & Gosling, S. D. (2011). Amazon's Mechanical Turk: A new source
of inexpensive, yet high-quality, data? Perspectives on Psychological Science, 6(1), 3-5.
Chambers, J. R., Schlenker, B. R., & Collisson, B. (2013). Ideology and prejudice: The role of
value conflicts. Psychological Science, 24, 140-149.
Chopik, W., & Motyl, M. (2016). Ideological fit enhances interpersonal orientations. Social
Psychological and Personality Science, 7, 759-768.
Cikara, M., Botvinick, M. M., & Fiske, S. T. (2011). Us versus them social identity shapes
neural responses to intergroup competition and harm. Psychological Science, 22(3), 306-
313.
Cohen, G. L., Aronson, J., & Steele, C. M. (2000). When beliefs yield to evidence: Reducing
biased evaluation by affirming the self. Personality and Social Psychology Bulletin,
26(9), 1151-1164.
Cornelis, I., & Van Hiel, A. (2006). The impact of cognitive styles on authoritarianism based
conservatism and racism. Basic and Applied Social Psychology, 28, 37–50.
Corning, A. F., & Myers, D. J. (2002). Individual orientation toward engagement in social
action. Political Psychology, 23(4), 703-729.
Cottrell, C. A., & Neuberg, S. L. (2005). Different emotional reactions to different groups: A
sociofunctional threat-based approach to “prejudice”. Journal of Personality and Social
Psychology. 88(5), 770-789.
Crawford, J. T., Brandt, M. J., Inbar, Y., Chambers, J. R., & Motyl, M. (2017). Social and
economic ideologies differentially predict prejudice across the political spectrum, but
social issues are most divisive. Journal of Personality and Social Psychology, 112, 383-
412.
Crawford, J. T., Mallinas, S. R., & Furman, B. J. (2015). The balanced ideological antipathy
model: Explaining the effects of ideological attitudes on inter-group antipathy across the
political spectrum. Personality and Social Psychology Bulletin, 41(12), 1607-1622.
Crawford, J. T., Modri, S. A., & Motyl, M. (2013). Bleeding-heart liberals and hard-hearted
conservatives: Subtle political dehumanization through differential attributions of human
nature and human uniqueness traits. Journal of Social and Political Psychology, 1(1), 86-
104.
Critcher, C. R., Dunning, D., & Armor, D. A. (2010). When self-affirmations reduce
defensiveness: Timing is key. Personality and Social Psychology Bulletin, 36(7), 947-
959.
Dehghani, M., Atran, S., Iliev, R., Sachdeva, S., Medin, D. & Ginges, J. (2010). Sacred values
and conflict over Iran’s nuclear program. Judgment and Decision Making, 5(7), 540-546.
Dehghani, M., Johnson, K. M., Hoover, J., Sagi, E., Garten, J., Parmar, N. J., Vaisey, S., Iliev, R.
& Graham, J. (2016). Purity homophily in social networks. Journal of Experimental
Psychology: General, 145(3), 366-375.
Ebersole, C. R., Nosek, B. A., & Motyl, M. (2014). Contest study to promote political civility
[Data file and materials]. Retrieved from https://osf.io/qe9zp/
Fitzsimmons, E. (2014). Condoleezza Rice backs out of Rutgers speech after student protests.
New York Times. Retrieved January 8, 2016 from
www.nytimes.com/2014/05/04/nyregion/rice-backs-out-of-rutgers-speech-after-student-
protests.html
Friedersdorf, C. (2015). The new intolerance of student activism. The Atlantic. Retrieved January
8, 2016 from http://www.theatlantic.com/politics/archive/2015/11/the-new-intolerance-
of-student-activism-at-yale/414810/
Gaertner, S. L., Dovidio, J. F., Rust, M. C., Nier, J. A., Banker, B. S., Ward, C. M., Mottola, M.,
& Houlette, M. (1999). Reducing intergroup bias: Elements of intergroup cooperation.
Journal of Personality and Social Psychology, 76(3), 388-402.
Ginges, J., & Atran, S. (2009). What motivates participation in violent political action. Annals of
the New York Academy of Sciences, 1167(1), 115-123.
Graham, J., & Haidt, J. (2012). Sacred values and evil adversaries: A moral foundations
approach. In P. Shaver & M. Mikulincer (Eds.), The Social Psychology of Morality:
Exploring the Causes of Good and Evil (pp. 11-31). New York: APA Books
Graham, J., Nosek, B. A., & Haidt, J. (2012). The moral stereotypes of liberals and
conservatives: Exaggeration of differences across the political spectrum. PLoS ONE, 7,
e42366.
Green, D., Palmquist, B., & Schickler, E. (2004). Partisan hearts and minds: Political parties
and the social identities of voters. New Haven, CT: Yale University Press.
Greenwald, A. G., McGhee, D. E., & Schwartz, J. L. (1998). Measuring individual differences in
implicit cognition: The implicit association test. Journal of Personality and Social
Psychology, 74(6), 1464-1480.
Greenwald, A. G., Nosek, B. A., & Banaji, M. R. (2003). Understanding and using the implicit
association test: I. An improved scoring algorithm. Journal of Personality and Social
Psychology, 85(2), 197-216.
Groenendyk, E. W., & Banks, A. J. (2014). Emotional rescue: How affect helps partisans
overcome collective action problems. Political Psychology, 35(3), 359-378.
Harrell, F. E., & Davis, C. E. (1982). A new distribution-free quantile
estimator. Biometrika, 69(3), 635-640.
Hawkins, C. B., & Nosek, B. A. (2012). Motivated independence? Implicit party identity
predicts political judgments among self-proclaimed independents. Personality and Social
Psychology Bulletin, 38(11), 1437-1452.
Hetherington, M. J., & Rudolph, J. T. (2015). Why Washington won’t work: Polarization,
political trust, and the governing crisis . Chicago: University of Chicago Press.
Hewstone, M., Rubin, M., & Willis, H. (2002). Intergroup bias. Annual Review of Psychology,
53(1), 575-604.
Hirsh, M. (2015). Charles Koch, liberal crusader? Politico. Retrieved April 24, 2016 from
www.politico.com/magazine/story/2015/03/charles-koch-overcriminalization-115512
Iyengar, S., Sood, G., & Lelkes, Y. (2012). Affect, not ideology a social identity perspective on
polarization. Public Opinion Quarterly, 76(3), 405-431.
Iyengar, S., & Westwood, S. J. (2015). Fear and loathing across party lines: New evidence on
group polarization. American Journal of Political Science, 59(3), 690-707.
Jones, J. (2015). In U.S., new record 43% are political independents. Gallup.com. Retrieved from
http://www.gallup.com/poll/180440/new-record-political-
independents.aspx?version=print.
Klar, M., & Kasser, T. (2009). Some benefits of being an activist: Measuring activism and its
role in psychological well- being. Political Psychology, 30(5), 755-777.
Mair, P., Schoenbrodt, F., & Wilcox, R. (2017). WRS2: a collection of robust statistical
methods [R package version 0.9-2]. Available online at: http://CRAN.R-project.org/package=WRS2.
Mitchell, G. (2012). Revisiting truth or triviality: The external validity of research in the
psychological laboratory. Perspectives on Psychological Science, 7(2), 109-117.
Morgan, G. S., Mullen, E., & Skitka, L. J. (2010). When values and attributions collide: Liberals’
and conservatives’ values motivate attributions for alleged misdeeds. Personality and
Social Psychology Bulletin, 36(9), 1241-1254.
Motyl, M. (2014). “If he wins, I'm moving to Canada”: Ideological migration threats following
the 2012 US presidential election. Analyses of Social Issues and Public Policy, 14(1),
123-136.
Motyl, M. (2016). Liberals and conservatives are geographically dividing. In P. Valdesolo & J.
Graham (Eds.), Social Psychology of Political Polarization (pp. 7-37). New York, NY:
Routledge.
Motyl, M., Hart, J., Pyszczynski, T., Weise, D., Maxfield, M., & Siedel, A. (2011). Subtle
priming of shared human experiences eliminates threat-induced negativity toward Arabs,
immigrants, and peace-making. Journal of Experimental Social Psychology, 47(6), 1179-
1184.
Motyl, M., Iyer, R., Oishi, S., Trawalter, S., & Nosek, B. A. (2014). How ideological migration
geographically segregates groups. Journal of Experimental Social Psychology, 51, 1-14.
New York Times (March 6, 2016). Transcript of the Democratic presidential debate in Flint,
Mich. New York Times. Retrieved April 2, 2016 from
http://www.nytimes.com/2016/03/07/us/politics/transcript-democratic-presidential-
debate.html?_r=0
Nosek, B. A. (2005). Moderators of the relationship between implicit and explicit
evaluation. Journal of Experimental Psychology: General, 134(4), 565.
Nosek, B. A., & Hansen, J. J. (2008). The associations in our heads belong to us: Searching for
attitudes and knowledge in implicit evaluation. Cognition & Emotion, 22(4), 553-594.
Nosek, B. A., Smyth, F. L., Hansen, J. J., Devos, T., Lindner, N. M., Ranganath, K. A., ... &
Banaji, M. R. (2007). Pervasiveness and correlates of implicit attitudes and
stereotypes. European Review of Social Psychology, 18(1), 36-88.
NPR (2015). Video and transcript: NPR’s interview with President Obama. NPR. Retrieved
January 8, 2016 from http://www.npr.org/2015/12/21/460030344/video-and-transcript-nprs-interview-with-president-obama/.
Paluck, E. L., & Green, D. P. (2009). Prejudice reduction: What works? A review and
assessment of research and practice. Annual Review of Psychology, 60, 339-367.
Parker, M. T., & Janoff-Bulman, R. (2013). Lessons from morality-based social identity: The
power of outgroup “hate,” not just ingroup “love”. Social Justice Research, 26(1), 81-96.
Pew Research Center (July 1, 2010). Gender equality universally embraced, but inequalities
acknowledged. Pew Research Center. Retrieved January 8, 2016 from
http://www.pewglobal.org/2010/07/01/gender-equality/
Pew Research Center (June 12, 2014) Political identity polarization in the American public. Pew
Research Center. Retrieved January 8, 2016 from http://www.people-
press.org/2014/06/12/political-polarization-in-the-american-public/#
Pew Research Center (January 18, 2016). 5 facts about race in America. Pew Research Center.
Retrieved January 19, 2016 from http://www.pewresearch.org/fact-tank/2016/01/18/5-
facts-about-race-in-america/
Pratto, F., Sidanius, J., Stallworth, L. M., & Malle, B. F. (1994). Social dominance orientation: A
personality variable predicting social and political attitudes. Journal of Personality and
Social Psychology, 67, 741–763.
Pyszczynski, T., Motyl, M., Vail III, K. E., Hirschberger, G., Arndt, J., & Kesebir, P. (2012).
Drawing attention to global climate change decreases support for war. Peace and
Conflict: Journal of Peace Psychology, 18(4), 354.
Reilly, K. (2016, June 2). Read Hillary Clinton’s speech on Donald Trump and national security.
Time. Available at http://time.com/4355797/hillary-clinton-donald-trump-foreign-policy-
speech-transcript/
Rosenfeld, M. J., Reuben, T. J., Falcon, M. (2011). How couples meet and stay together, waves
1, 2, and 3: Public Version 3.04. Stanford, CA: Stanford University Libraries.
Rousselet, G. A., Pernet, C. R., & Wilcox, R. R. (2017). Beyond differences in means: Robust
graphical methods to compare two groups in neuroscience. European Journal of
Neuroscience, 46(2), 1738-1748.
Sears, D. O. (1986). College sophomores in the laboratory: Influences of a narrow data base on
social psychology's view of human nature. Journal of Personality and Social
Psychology, 51(3), 515-530.
Sherif, M., Harvey, O. J., White, B. J., Hood, W. R., & Sherif, C. W. (1961). Intergroup conflict
and cooperation: The Robbers Cave experiment (Vol. 10). Norman, OK: University
Book Exchange.
Sherman, D. K., & Cohen, G. L. (2006). The psychology of self- defense: Self- affirmation
theory. Advances in Experimental Social Psychology, 38, 183-242.
Sibley, C. G., & Duckitt, J. (2008). Personality and prejudice: A meta-analysis and theoretical
review. Personality and Social Psychology Review, 12, 248–279.
Sidanius, J., & Pratto, F. (2001). Social dominance: An intergroup theory of social hierarchy and
oppression. Cambridge, UK: Cambridge University Press.
Skitka, L.J. & Washburn, A (2015). Are conservatives from Mars and liberals from Venus?
Maybe not so much. In P. Valdesolo & J. Graham (Eds.), Bridging Ideological Divides:
The Claremont Symposium for Applied Social Psychology, New York, NY: Wiley-
Blackwell.
Toner, K., Leary, M. R., Asher, M. W., & Jongman-Sereno, K. P. (2013). Feeling superior is a
bipartisan issue: Extremity (not direction) of political views predicts perceived belief
superiority. Psychological Science, 24(12), 2454-2462.
Valdesolo, P. & Graham, J. (2016a). Ideological divides in society and in social psychology. In
P. Valdesolo & J. Graham (Eds.), Bridging Ideological Divides: The Claremont
Symposium for Applied Social Psychology. New York, NY: Wiley-Blackwell.
Valdesolo, P. & Graham, J. (2016b). Social Psychology of Political Polarization. New York,
NY: Routledge.
van Prooijen, J. W., Krouwel, A. P., Boiten, M., & Eendebak, L. (2015). Fear among the
extremes: How political ideology predicts negative emotions and outgroup derogation.
Personality and Social Psychology Bulletin, 41(4), 485-497.
VALUES AND GROUP IDENTITY 62
Waytz, A., Young, L. L., & Ginges, J. (2014). Motive attribution asymmetry for love vs. hate
drives intractable conflict. Proceedings of the National Academy of Sciences, 111(44),
15687 -15692.
Wilcox, R. R. (2016). Understanding and applying basic statistical methods using R. John Wiley
& Sons.
Wilcox, R. R., & Erceg-Hurn, D. M. (2012). Comparing two dependent groups via
quantiles. Journal of Applied Statistics, 39(12), 2655-2664.
Wilson, T. (2011). Redirect: The surprising new science of psychological change. Penguin UK.
Zakrisson, I. (2005). Construction of a short version of the Right-Wing Authoritarianism (RWA)
scale. Personality and Individual Differences, 39, 863–872.
VALUES AND GROUP IDENTITY 63
Chapter 2: Measuring Abstract Mindsets through Syntax: Improvements in Automating
the Linguistic Category Model
Abstract
Construal Level Theory (Trope & Liberman, 2010), a theoretical framework linking distance and
abstraction, has emerged as an influential and generative social-psychological perspective.
Increasingly, researchers are interested in exploring construal-level patterns outside of the
laboratory context. In this paper, we introduce the Syntax-LCM, a new method for quantifying
abstract mindsets in text based on differences in abstract and concrete syntactic features.
We test the accuracy of the Syntax-LCM for approximating hand-coded scores using an
established manual content coding scheme, the Linguistic Category Model (LCM; Coenen,
Hedebouw, & Semin, 2006). We also compare the performance of this method to two previously
used word-choice automated methods, across lab-generated and Twitter text, finding that the
Syntax-LCM most consistently approximates LCM hand-coding. We discuss practical and
theoretical implications of these findings.
Keywords: Construal Level Theory; text analysis; syntax; Linguistic Category Model; language
Introduction
Construal level theory (CLT; Trope & Liberman, 2010) has emerged as an important
theoretical framework driving research on differences in the cognitive processing of objects and
events. Although many experimental tasks can be used to measure individuals’ construal level,
or the degree to which they are processing an item concretely or abstractly, linguistic coding
offers an opportunity to examine abstraction levels using more naturalistic, unconstrained tasks
and real-world archival data (e.g., Twitter, online reviews). In the current paper, we present a
syntax-based method for automating the Linguistic Category Model (LCM; Semin & Fiedler,
1988), a well-validated approach for coding linguistic abstraction in text. We first describe both
CLT and the LCM in more detail, and then introduce the Syntax-LCM, our syntax-based
approach. In three studies, we validate this method and compare it to existing automated
methods.
Construal Level Theory
Construal level theory distinguishes between two types of cognitive representations:
high-level and low-level construals. High-level construals are abstract, structured representations
of objects and events that focus on invariant characteristics; such representations tend to
emphasize primary over secondary object features. In contrast, low-level construals are concrete,
contextualized representations that distinguish less strongly between what is central and what is
peripheral, instead focusing on rich and nuanced features of the object/event.
CLT posits that people can think about the same object or action along a continuum from
more abstract (high-level) to more concrete (low-level). For example, a relatively low-level
construal of “recycling” might be “placing a bottle in a recycling bin.” This representation details
the concrete action involved in the activity. A higher-level construal of the same action might be
“caring for the environment.” This abstract representation omits information about the objects
and concrete actions involved, but communicates the core meaning of the behavior.
One of CLT’s main tenets is that construal-level is influenced by object distance; we are
more likely to think about recycling abstractly (e.g., as “caring for the environment”) when that
action is done in a distant location, at a distant time, by a socially distant person, or if it is less
likely to happen. Construal-level can also be activated as a general mindset or processing
orientation through engaging in procedures that activate a more abstract or concrete cognitive
style (Trope & Liberman, 2010).
As research on CLT has advanced, an important challenge has been identifying a
consistent and easy-to-implement way of measuring the level of construal at which a person is
representing information. Primarily, researchers have used constrained tasks for this purpose. For
example, one popular measure is the Behavior Identification Form (Vallacher & Wegner, 1989),
which asks participants to choose between a concrete, means-related identification and an abstract,
ends-related identification for a set of behaviors (cf. Liberman & Trope, 1998). Researchers have also used
various categorization measures, as well as performance on measures that require abstract or
concrete processing (for a review, see Burgoon, Henderson, & Markman, 2013).
A natural candidate for measuring construal using a less constrained task is to code
linguistic descriptions for their abstraction level. Researchers have approached such linguistic
coding in a variety of ways. For example, Liberman & Trope (1998) coded open-ended activity
descriptions as low-level if they fit the structure “[activity] by [description]” and as high-level if
they fit the structure “[description] by [activity].” Fujita and colleagues (2006) built on this approach
by using an established method for coding linguistic abstraction: Semin and Fiedler’s Linguistic
Category Model (LCM; 1988).
The LCM: Measuring Abstraction in Language
To systematically code text for abstraction, researchers have increasingly relied on the
Linguistic Category Model (Coenen et al., 2006), a framework that considers the social-cognitive
functions of different linguistic categories. In this model, adjectives (along with adverbs and
noun-modifiers) form the most abstract linguistic category, as they emphasize decontextualized,
invariant features of an object or event. By comparison, verbs are more concrete than adjectives
because they provide specific contextual information which changes over time.
Within verb classes, the LCM distinguishes between three types of verbs based on the
extent to which the action is grounded in physical experience, requires interpretation, or is an
ongoing mental or emotional state. Descriptive action verbs (DAVs), which describe an
observable action with a clear beginning and end that is grounded in a physical body part (e.g.,
eating, walking), are most concrete. Interpretive action verbs (IAVs) are verbs that capture an
action with a clear beginning and end that require some amount of interpretation (e.g., helping,
exercising). Due to the necessity of interpretation, IAVs are coded as more abstract than DAVs.
Finally, state verbs (SVs) describe enduring mental or emotional states (e.g., love, admire). SVs
are coded as more abstract than IAVs and DAVs, but less abstract than adjectives.
The LCM was created to better understand the psychological functions served by the
different linguistic categories used to describe people and behavior (Semin & Fiedler, 1988), and
has a strong history of use for investigating intergroup biases in language (e.g., Maass, Salvi,
Arcuri, & Semin, 1989). More recently, researchers have used the LCM to code longer passages
of text for evidence of abstract or concrete construal mindsets (e.g., Fujita et al., 2006; Joshi &
Wakslak, 2014).
Although promising, the LCM is complicated and costly to implement with human coders
because of the manual's length and complexity (Coenen et al., 2006). Coders must be
extensively trained, and disagreement among coders is often high. This coding difficulty also
forces coders to limit themselves to comparatively small amounts of text or participant
responses. Already an issue for small datasets, this limitation becomes especially problematic as
researchers attempt to explore CLT using larger datasets (longer passages or higher volumes of
short messages; Bhatia & Walasek, 2016; Joshi, Wakslak, & Huang, 2017; Snefjella &
Kuperman, 2015).
Automated Methods for Coding Abstract Mindsets
To facilitate coding, CLT researchers have turned to automated methods (Louwerse, Lin,
Drescher, & Semin, 2010; Seih et al., 2016; Snefjella et al., 2015). One approach relies on
research by Brysbaert and colleagues (2014), who had groups of 30 participants rate each of
forty thousand English word lemmas for their level of abstractness in an effort to estimate
sentence comprehension difficulty and memorability. One concern with utilizing this dataset for
construal-level research, however, is whether participants’ lay understanding of abstraction
mirrors the CLT conceptualization (cf., Burgoon et al., 2013); a second concern is whether
coding individual word-level abstraction is the best way to capture general construal-level
mindsets. Even so, any loss of precision or signal from this method may be counteracted by the
ability to analyze much larger datasets (Snefjella & Kuperman, 2015).
Another approach has been to automate LCM coding by incorporating both verb
abstraction weights and part-of-speech tags. Seih and colleagues (2016) took 1000 commonly
used verbs which had been pre-sorted into the three LCM verb categories in the General Inquirer
dictionary (Stone et al., 1966). They had three coders classify an additional 800 new verbs from
their corpora as either DAV, IAV, or SV. Finally, they used a combination of part-of-speech
tagging and the Linguistic Inquiry and Word Count Program (LIWC; Pennebaker et al., 2015) to
calculate a LIWC-LCM score. Using this method, they found that participants who wrote about
distal events received higher abstract language scores than those who wrote about proximal
events.
While Brysbaert ratings and the LIWC-LCM method offer the benefit of decreased
coding costs, neither method has been rigorously compared to LCM scores to establish their
accuracy at approximating human-generated codes. Both Brysbaert ratings and LIWC-LCM
scores rely primarily on word dictionaries, which do not consider the larger sentence context
integral to many of the LCM coding decision rules. Brysbaert ratings do not consider part of
speech at all, as the original intent was explicitly to assess words in isolation. The LIWC-LCM is
an improvement in that it incorporates adjectives, nouns, and verbs, which are key to the LCM.
However, LCM coding rules for adjectives and nouns are not straightforward: codes often rely
on both a word's part of speech and its role within the sentence. For example, a noun is coded as
an adjective when it modifies another noun or fills certain syntactic roles (e.g., in copular
constructions or as clausal subjects). Thus, coding all nouns would inaccurately capture
abstraction, as would leaving out important noun and verb modifiers such as adverbs.
Syntax-LCM: Automating the LCM Using Syntax
Since the LCM coding scheme relies heavily on how words are used in a sentence, there
is potential for a syntax-based method to provide additional predictive accuracy beyond word
choice when approximating human-coded LCM scores. Generally, syntactic approaches to
annotation emphasize that psychological processes can be captured not only in the words people
use, but also in how they put words together. Recent techniques have begun to show the promise
of using syntactic differences to predict psychological phenomena (Boghrati et al., 2017; Iliev &
Axelrod, 2016), and these techniques can be particularly powerful for capturing mental processes
where the overall topic of discussion is constant.
In this research, we develop the Syntax-LCM, a method for quantifying abstract and
concrete syntactic features. Since many of the LCM manual coding rules rely heavily on
syntactical organization (e.g., copulas and clausal nouns), we hypothesized that quantifying the
presence of part-of-speech tags and dependency tree features may more accurately approximate
LCM scores. In this method, we combine the verb lists created by Seih and colleagues (2016)
with the syntactic features generated by the Syntax-LCM to create an abstraction score that
captures both the context and specific verb word choice reflected in the LCM manual.
In what follows, we describe the development of the Syntax-LCM method and present
three studies validating its effectiveness. In Study 1, we test the generalizability and predictive
accuracy of Syntax-LCM abstraction scores using a corpus collected by our lab. In Study 2, we
test the predictive accuracy of the Syntax-LCM on a dataset collected and hand-coded by
researchers at an unaffiliated lab to ensure cross-coder generalizability. Finally, in Study 3, we
examined whether the Syntax-LCM would accurately predict hand-coded scores for Twitter data,
a major source of textual data for social scientists. All materials, data, and R scripts are available
at https://osf.io/hsnmq/?view_only=8e33ec6a2c6644f58a0437bc95d4d2e5. For all studies, we
report how we determined sample size, all data exclusions (if any), and all measures used for
comparison.
Syntax-LCM Method Development
We began by selecting an existing, open-ended response dataset (henceforth referred to
as development corpus) to develop the Syntax-LCM method syntax dictionaries. This corpus was
collected by the first author's research lab and consists of 256 undergraduate psychology
participants' responses to two writing prompts. In the first prompt, participants wrote about the
importance of being loyal or fair to other students, and in the second prompt they wrote about
their perceptions of another student's work. Participants generated a total of 973 sentences for the
first prompt and 466 sentences for the second prompt. Because data were collected using the
same participants, we combined the responses to both prompts.
Establishing Ground Truth: LCM Manual, Hand-Coded Abstraction.
We hand-coded each sentence using the Linguistic Category Model manual to establish
ground truth. During the course of training coders, we identified several LCM coding rules
which leave room for interpretation and coder disagreement. We corresponded extensively with
Gün Semin, an original LCM manual author, to develop an LCM coding addendum clarifying
these rules (available on OSF). Next, two independent coders used this addendum in conjunction
with existing LCM manuals to hand-code the corpus sentences for DAV, IAV, SV, and ADJ
categories. Coding disagreements were resolved through discussion (average inter-coder
reliability k = .84).
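As a point of reference for this reliability step, agreement between two coders' category assignments can be computed with standard tools. The sketch below is illustrative only and assumes a hypothetical data frame named codes with one row per coded item and columns coder1 and coder2; it is not our analysis script (which is available on OSF).

# Illustrative R sketch: Cohen's kappa for two coders' LCM category codes.
# Assumes a hypothetical data frame `codes` with columns coder1 and coder2,
# each holding "DAV", "IAV", "SV", or "ADJ" for every coded item.
library(irr)
kappa2(codes[, c("coder1", "coder2")])   # unweighted Cohen's kappa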
Next, we computed a hand-coded LCM abstraction score (hLCM) for each sentence in the
corpus using the LCM manual equation:
hLCM = ((DAV × 1) + (IAV × 2) + (SV × 3) + (ADJ × 4)) / (DAV + IAV + SV + ADJ)     (1)
In this equation, each of the four categories is assigned a weight based on its theorized
abstraction level, with concrete verbs (DAVs) receiving the lowest weight and adjectives (ADJ)
receiving the highest weight. The total weighted sum of the sentence's codes is divided by the
number of coded items to generate abstraction scores ranging from 1 (concrete) to 4 (abstract).
We use hand-coded LCM scores as the ground truth for comparison with automated methods
throughout the rest of our analyses, as this manual is one of the strongest theoretical
approximations of abstract/concrete thinking.
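As a worked illustration of Equation 1 (not part of the released scripts), a sentence hand-coded as containing one DAV and one ADJ would receive hLCM = (1 × 1 + 4 × 1) / 2 = 2.5. The minimal R sketch below computes the same quantity from category counts; the function name is ours and purely illustrative.

# Minimal R sketch of Equation 1: weighted mean of hand-coded LCM category counts.
hlcm_score <- function(dav, iav, sv, adj) {
  (dav * 1 + iav * 2 + sv * 3 + adj * 4) / (dav + iav + sv + adj)
}
hlcm_score(dav = 1, iav = 0, sv = 0, adj = 1)   # returns 2.5 (one DAV, one ADJ)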
Syntax-LCM Method
After establishing ground truth, we developed the Syntax-LCM method using three steps.
Step 1: Syntax feature generation. First, we parsed each sentence to extract its syntactic
part-of-speech (e.g., noun, adjective) and dependency parse tree features (e.g., copula, clausal
subject) using the coreNLP R version 3.4.2 package (Arnold & Tilton, 2016). This step results in
a syntactic representation of the sentence that can be analyzed in place of the sentence itself (see
Supplemental Materials for an in-depth explanation of these features).
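A minimal sketch of this parsing step is given below, assuming the interface documented for the coreNLP R package (initCoreNLP, annotateString, getToken, getDependency) and that the Stanford CoreNLP backend has already been downloaded; it is a simplified stand-in for our ParsedCorpus function, which is available on OSF.

# Sketch: extract part-of-speech tags and dependency labels for one sentence.
library(coreNLP)
initCoreNLP()                                    # start the Stanford CoreNLP backend
ann  <- annotateString("Recycling is caring for the environment.")
pos  <- getToken(ann)$POS                        # part-of-speech tags (e.g., NN, VBZ, JJ)
deps <- getDependency(ann)$type                  # dependency labels (e.g., cop, nsubj, amod)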
Step 2: Syntax-LCM dictionary creation. After feature generation, we created a
“concrete” syntax dictionary and an “abstract” syntax dictionary. Whereas typical dictionaries
consist of lists of words related to a theme, these dictionaries instead consist of syntactic and
dependency tree features related to either abstract or concrete language.
To identify which syntactic features distinguish reliably between abstract and concrete
sentences, we created two groupings of text, one containing the top third most concrete sentences
in the corpus and one containing the top third most abstract sentences in the corpus. Then, we
conducted a binary logistic regression with all possible syntactic features predicting abstract or
concrete group membership, using 10-fold cross validation to test accuracy robustness: the
dataset is randomly divided into 10 equal-sized subsets; 9 of these are combined to create the
training dataset used in the binary logistic regression to generate feature coefficients and model
accuracy, and these coefficients are then used to predict the dependent variable in the remaining
subset, so the resulting accuracy score reflects predictive accuracy on new data. These steps are
repeated an additional 9 times, with a different subset of the data left out on each round. The
classification algorithm achieved 83% accuracy (83% precision, 82% recall, and an F1 score of
0.83), demonstrating the effectiveness of syntactic and dependency features for distinguishing
between abstract and concrete sentences.
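The cross-validation logic can be sketched as follows, assuming a hypothetical data frame feats with one row per sentence, a binary column abstract (1 = abstract group, 0 = concrete group), and the syntactic feature counts as the remaining columns; this is a simplified base-R stand-in for, not a copy of, the scripts posted on OSF.

# Sketch: 10-fold cross-validated logistic regression predicting group membership.
set.seed(1)
folds <- sample(rep(1:10, length.out = nrow(feats)))   # assign each sentence to a fold
acc <- numeric(10)
for (k in 1:10) {
  train <- feats[folds != k, ]
  test  <- feats[folds == k, ]
  fit   <- glm(abstract ~ ., data = train, family = binomial)
  pred  <- predict(fit, newdata = test, type = "response") > .5
  acc[k] <- mean(pred == (test$abstract == 1))          # held-out accuracy for this fold
}
mean(acc)   # average classification accuracy across the 10 folds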
Then, we compared the logistic regression coefficients generated for each feature across
the 10-fold cross validation and identified features that consistently and significantly predicted
each group across all folds. Eleven features positively predicted abstract sentences consistently
and were compiled into an abstract mindset syntax dictionary (6 adjective-related features and 5
verb-related features), and eleven features negatively predicted abstract sentences consistently
and were compiled into a concrete feature dictionary (i.e., predicted concrete sentences; see
Table 1).
Table 1.
Syntax-LCM Features List

Abstract Features
  LCM-Specified Features
    amod: adjectival modifier
    auxpass: passive auxiliary
    cop: copula
    compound: noun compound
    mark: subordinate clause marker
    nmod:npmod: noun as adverb modifier
    xcomp: clausal complement
  Theory-Supported Features
    expl: expletive
    vpn: past participle verb
    vbz: 3rd person present tense verb

Concrete Features
  Theory-Supported Features
    aposs: appositional modifier
    advcl: adverbial clause modifier
    case: case marking
    conj: conjunct
    csubj: clausal subject
    discourse: discourse element
    mwe: multi-word expression
    nnps: proper plural noun
    nsubj: nominal subject
    nummod: numeric modifier
    vbg: present participle verb
Notably, these features included both features mirroring the LCM manual's coding rules (e.g.,
copulas, adjectives) and novel syntactic features that are consistent with CLT but not currently in
the LCM manual. Specifically, we found that verbs indicating a third-person or past perspective
signified abstract sentences, whereas first-person and present-tense verbs indicated concrete
sentences. These features directly parallel CLT research finding that objects and events that are
further away temporally or physically are construed more abstractly than those that are near.
These results provide new evidence that many of the findings core to Construal Level Theory are
identifiable in language.
Step 3: Computing Syntax-LCM abstraction scores. Finally, we created two R
functions for calculating Syntax-LCM scores. The ParsedCorpus function parses the syntax and
dependency tree features from each sentence and provides a representation containing these
features. The Syntax-LCM function then takes a column of these representations, imports the
LIWC-LCM verb dictionaries (Seih et al., 2016) and the new syntax dictionaries to count the
total number of features present in each sentence. Then, it applies the weights from the LCM
manual to the appropriate categories and calculates an abstraction score using the following
equation, where SADJs and SVERBs stand for syntax adjectives and syntax verbs, respectively:
Syntax-LCM = ((abstract SADJs × 4) + (SVs × 3) + ((IAVs + abstract SVERBs) × 2) + ((DAVs + concrete SVERBs) × 1)) / (abstract SADJs + SVs + IAVs + abstract SVERBs + DAVs + concrete SVERBs)     (2)
This calculation leaves a Syntax-LCM score ranging from 1 (concrete) to 4 (abstract). Having
described the Syntax-LCM method and its creation, we now move on to explore its predictive
accuracy across several contexts.
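Before turning to the validation studies, the arithmetic in Equation 2 can be made concrete with the short sketch below; it assumes the per-sentence feature counts have already been obtained and uses hypothetical variable names, so it stands in for, rather than reproduces, the released Syntax-LCM function.

# Sketch of Equation 2: weighted mean over verb-dictionary and syntax-feature counts.
syntax_lcm_score <- function(dav, iav, sv, abstract_sadj, abstract_sverb, concrete_sverb) {
  num <- abstract_sadj * 4 + sv * 3 + (iav + abstract_sverb) * 2 + (dav + concrete_sverb) * 1
  den <- abstract_sadj + sv + iav + abstract_sverb + dav + concrete_sverb
  num / den
}
syntax_lcm_score(dav = 1, iav = 0, sv = 0,
                 abstract_sadj = 1, abstract_sverb = 0, concrete_sverb = 1)   # (4 + 1 + 1) / 3 = 2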
Study 1
In Study 1, we apply the Syntax-LCM method to a novel corpus of text and compare its
effectiveness for approximating hand-coded LCM scores to two existing automated methods:
Brysbaert ratings and LIWC-LCM. When selecting the corpus for this study, we ensured that the
set of text was collected by a different research team using unique populations and response
prompts from those that were used to develop the method; this variety ensures that the resulting
method’s effectiveness does not depend on idiosyncratic qualities of a particular discussion topic
or population.
Method
Dataset. The corpus was collected by an affiliated research lab and consists of 71
business school participants' responses to a question asking them about a day in the life of a
business school student. Participants generated 500 sentences in response to the writing prompt.
Procedure. Three independent coders used the LCM to hand-code the corpus to establish
ground truth abstraction scores (average inter-coder reliability k = .89). Then, we calculated
Syntax-LCM abstraction scores and abstraction scores for two alternative methods (the Brysbaert
ratings method and the LIWC-LCM method; details described below). Finally, we compared the
variance accounted for by each method in predicting the hand-coded scores.
Brysbaert scores. Brysbaert scores were calculated by a weighted word count algorithm
using the 40-thousand-word dataset (Brysbaert, Warriner & Kuperman, 2014). This method
counts each of the words from the ratings dataset and weights them using their associated
concreteness scores. The weights are then summed and divided by the total number of counted
words to get an average Brysbaert score. For cross-method consistency, we reverse-scored these
ratings so that higher scores reflect more abstract sentences.
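A minimal sketch of this weighting scheme follows, assuming a hypothetical lookup table bry with columns word and rating (already reverse-scored so that higher values indicate more abstract words); it approximates, rather than reproduces, the scoring script on OSF.

# Sketch: average (reverse-scored) Brysbaert rating over the rated words in a sentence.
brysbaert_score <- function(sentence, bry) {
  words <- tolower(unlist(strsplit(sentence, "[^A-Za-z']+")))   # crude tokenizer
  hits  <- bry$rating[match(words, bry$word)]                   # look up each word's rating
  mean(hits, na.rm = TRUE)                                      # unrated words are dropped
}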
LIWC-LCM. We calculated LIWC-LCM scores following procedures detailed by Seih
and colleagues (2016). First, we used the coreNLP part-of-speech tagger to identify nouns,
adjectives, and verbs in each sentence. Then, we applied the LCM Verb dictionary to distinguish
between DAVs, IAVs, and SVs, assigned the appropriate weight for each category, summed the
features, and divided the sum by the total number of features to create LIWC-LCM scores (for
full explanation, see Seih et al., 2016).
Results and Discussion
We began by conducting Pearson correlation analyses of the relationship between
Syntax-LCM, LIWC-LCM, Brysbaert, and hand-coded LCM scores (hLCM; see Table 2, below
diagonal). Results of this analysis indicated that Syntax-LCM scores were most strongly
correlated with hLCM scores.
Table 2.
Study 1 and Study 2 Correlations Between Abstraction Scores

              hLCM          Syntax-LCM    Brysbaert     LIWC-LCM
hLCM              -         0.42 (.000)   0.19 (.000)   0.28 (.000)
Syntax-LCM    0.56 (.000)       -         0.16 (.000)   0.48 (.000)
Brysbaert     0.26 (.000)   0.31 (.000)       -         0.10 (.028)
LIWC-LCM      0.38 (.000)   0.26 (.000)   0.00 (.841)       -

Note: p values in parentheses. Study 1 correlations below diagonal; Study 2 correlations above diagonal.
Next, we tested our hypothesis that the Syntax-LCM method would provide unique
predictive accuracy for approximating hLCM scores beyond existing methods. We ran a
hierarchical regression analysis with Brysbaert scores entered at step 1, LIWC-LCM scores at
step 2, and Syntax-LCM scores at step 3 predicting hLCM scores (see Table 3).
Table 3.
Summary of Hierarchical Regression Analysis for Automated Methods Predicting Hand-Coded LCM Scores (Study 1).

Variable       β      SE    t      p      95% CI        partial η²   R²    ΔR²   Model comparison
Step 1                                                                .07   .07
  Brysbaert    0.19   .03   5.95   .000   [.13, .25]    .07
Step 2                                                                .21   .14   F(1, 489) = 89.41, p = .000
  Brysbaert    0.19   .03   6.57   .000   [.13, .25]    .08
  LIWC-LCM     0.28   .03   9.37   .000   [.22, .33]    .15
Step 3                                                                .33   .12   F(1, 487) = 88.02, p = .000
  Brysbaert    0.10   .03   3.34   .001   [.04, .15]    .02
  LIWC-LCM     0.10   .03   2.98   .003   [.03, .16]    .01
  Syntax-LCM   0.32   .03   9.38   .000   [.25, .39]    .15
Results indicated that Syntax-LCM scores accounted for significant, unique variance in hLCM
scores beyond both Brysbaert and LIWC-LCM scores. The Syntax-LCM method accounted for
the most variance, though all three measures contributed to the model.
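For readers reproducing this type of analysis, the hierarchical structure can be expressed as a sequence of nested linear models compared with F tests, as in the illustrative base-R sketch below (the data frame d and its column names are hypothetical; the exact scripts are on OSF).

# Sketch: hierarchical regression predicting hand-coded LCM scores.
m1 <- lm(hLCM ~ brysbaert, data = d)                            # Step 1
m2 <- lm(hLCM ~ brysbaert + liwc_lcm, data = d)                 # Step 2
m3 <- lm(hLCM ~ brysbaert + liwc_lcm + syntax_lcm, data = d)    # Step 3
anova(m1, m2, m3)          # F tests for the change in R-squared at each step
summary(m3)$r.squared      # total variance accounted for at Step 3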
Study 2
In Study 2, we more thoroughly test the Syntax-LCM method’s predictive accuracy by
applying it to text captured and hand-coded by a research lab unaffiliated with our own. Our goal
in including this corpus was to conduct a stricter test of the Syntax-LCM method; we wanted to
ensure that the Syntax-LCM effectively approximates hand-coded scores from researchers using
the LCM who have not been trained within our lab. This dataset also allowed us to include a
sample of students not at our institution who were responding to a different set of stimuli, thus
increasing the diversity of content within our analyses.
Method
Dataset. We acquired the Study 2 corpus from researchers unaffiliated with our lab or
institution (Yip-Bannicq, Kalkstein, & Trope, 2017). In this dataset, 102 participants completed a
lab study where they were asked to watch five video clips of shapes interacting. After each clip,
participants wrote a sentence describing what they saw in the video. A research assistant trained
by the lab that collected the data coded each sentence using the LCM manual, and after removing
sentences with no hand-coded score, there were 504 sentences.
Results
Using Study 1’s empirical approach, Pearson correlation analyses showed that Syntax-
LCM scores most strongly correlated with hLCM scores (see Table 2, above diagonal). The
hierarchical regression analysis results also indicated that Syntax-LCM scores accounted for the
most variance in hLCM scores after controlling for the other two methods (see Table 4).
Table 4.
Summary of Hierarchical Regression Analysis for Automated Methods Predicting Hand-Coded LCM Scores (Study 2).

Variable       β      SE    t      p      95% CI        partial η²   R²    ΔR²   Model comparison
Step 1                                                                .05   .05
  Brysbaert    0.17   .03   4.89   .000   [.10, .24]    .05
Step 2                                                                .13   .08   F(1, 482) = 43.71, p = .000
  Brysbaert    0.15   .03   4.59   .000   [.09, .22]    .04
  LIWC-LCM     0.23   .03   6.61   .000   [.16, .29]    .08
Step 3                                                                .21   .08   F(1, 481) = 51.24, p = .000
  Brysbaert    0.12   .03   3.67   .000   [.05, .18]    .03
  LIWC-LCM     0.11   .04   2.89   .004   [.03, .18]    .02
  Syntax-LCM   0.26   .04   7.16   .000   [.19, .33]    .10
Discussion
As with data from our own lab (Study 1), we found the Syntax-LCM was the best
automated approximation of hand-coded LCM scores from an external lab source. This result
provides evidence that the Syntax-LCM method's effectiveness is not constrained to our own
research lab or experimental contexts. In addition, each of the three datasets used across the
method creation and Studies 1-2 asked participants to provide written responses in different topic
domains, further validating the generalizability of the method (i.e., values in the method creation
corpus, day-in-the-life descriptions in Study 1, and descriptions of videos in Study 2).
Study 3
In Studies 1 and 2, we validated the Syntax-LCM for approximating hand-coded LCM
scores for responses to explicit writing prompts. In Study 3, we sought to test whether the
Syntax-LCM could also approximate hand-coded LCM scores for Twitter data. We selected this
source of data for two primary reasons. First, Twitter text is a readily available source of social
media data, and therefore has become a popular research tool for social scientists. For example,
two recent papers that explored CLT ideas in natural language usage each made use of Twitter
data, both using the Brysbaert ratings as their automated coding method (e.g., Bhatia & Walasek,
2016; Snefjella & Kuperman, 2015). We would need to ensure that the Syntax-LCM can
effectively approximate hand-coded LCM scores for this source of data if we hope to provide a
useful tool for many current, large scale corpora.
Second, Tweet syntax is unique due to Tweet character limits (140 characters at the time of data
collection). Limitations on Tweet length may lead users to generate text with different syntactic
patterns from regular speech or written prompts, and these sentence structures may not be
comparable to everyday English syntax. Thus, it is feasible that our Syntax-LCM method could
be less effective for predicting hand-coded LCM scores in this context.
Method
Dataset. We used a previously purchased dataset of Tweets from Gnip.com that
contained Hurricane Sandy-related hashtags (e.g., “#sandy”, “#HurricaneSandy”). We selected a
subset of this dataset that included Tweets directly containing the word “hurricane” to ensure that
the tweets were related to the same topic. After removing Tweets that were duplicates, non-
English, retweets, or indecipherable (e.g., hyperlinks without additional text), we were left with a
final sample size of 52,183 tweets. We used the same methods as in prior studies to calculate
Syntax-LCM scores, Brysbaert ratings, and LIWC-LCM scores for the entire corpus. To test the
effectiveness of each of these measures for approximating hand-coded scores, we then selected a
random subset of 1500 tweets to be hand-coded using the LCM manual. Two research assistants
and the main author hand-coded each Tweet and resolved disagreements through discussion
(average inter-coder reliability k = .81).
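A simplified sketch of the filtering described above is shown below, assuming a hypothetical data frame tweets with columns text and lang; the actual cleaning script (including the full duplicate, retweet, and language handling) is available on OSF.

# Sketch: keep English, non-retweet, non-duplicate Tweets that mention "hurricane".
keep <- tweets$lang == "en" &
        grepl("hurricane", tweets$text, ignore.case = TRUE) &
        !grepl("^RT @", tweets$text) &
        !duplicated(tweets$text)
sandy <- tweets[keep, ]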
Results
287 of the 1500 tweets were not hand-codable as they lacked features reflecting the LCM
coding scheme, leaving a final sample size of 1287 Tweets for model validation. As in previous
studies, we first conducted Pearson correlation analyses of the relationship between Syntax-
LCM, LIWC-LCM, Brysbaert, and hand-coded LCM abstraction scores (see Table 5). In this
corpus, we found that both the LIWC-LCM and the Syntax-LCM were positively correlated with
hand-coded LCM scores, while Brysbaert ratings were negatively correlated with hand-coded
scores. This final correlation is notable given the surprising direction of association, and raises
questions about whether the Brysbaert methodology is appropriate for coding abstraction on
Twitter (see e.g., Bhatia & Walasek, 2016; Snefjella & Kuperman, 2015, who use this
methodology to code Tweets for construal-level).
Table 5.
Study 3 Correlations Between Abstraction Scores

              hLCM           Syntax-LCM     Brysbaert
hLCM              -
Syntax-LCM     0.39 (.000)       -
Brysbaert     -0.07 (.020)   -0.17 (.000)       -
LIWC-LCM       0.37 (.000)    0.64 (.000)  -0.23 (.000)

Note: p values in parentheses.
Next, we used the same hierarchical regression analysis method as in previous studies to
assess the predictive accuracy of each method (see Table 6). Replicating previous studies, results
of this analysis indicated that Syntax-LCM scores were the strongest predictor of hand-coded
LCM scores, with LIWC-LCM scores also contributing significantly to the model. However,
unlike previous studies, we did not find that Brysbaert scores provided unique predictive
accuracy beyond these two methods.
Table 6.
Summary of Hierarchical Regression Analysis for Automated Methods Predicting Hand-Coded LCM Scores (Study 3).

Variable       β      SE    t       p      95% CI         partial η²   R²    ΔR²   Model comparison
Step 1                                                                 .00   .00
  Brysbaert   -0.05   .02   -2.32   .020   [-.10, -.01]   .01
Step 2                                                                 .14   .13   F(1, 1133) = 184.19, p = .000
  Brysbaert    0.00   .02    0.16   .872   [-.04, .05]    .00
  LIWC-LCM     0.30   .02   13.57   .000   [.26, .34]     .14
Step 3                                                                 .19   .04   F(1, 1132) = 58.55, p = .000
  Brysbaert    0.00   .02    0.03   .976   [-.04, .04]    .00
  LIWC-LCM     0.16   .03    5.80   .000   [.11, .22]     .03
  Syntax-LCM   0.22   .03    7.65   .000   [.16, .27]     .05
General Discussion
Across three studies and four datasets, we compared the effectiveness of previous methods
for automating abstraction with Syntax-LCM, a novel method that incorporates syntactic features
when measuring abstraction in text. We found that the Syntax-LCM is more accurate at
approximating hand-coded LCM scores than either Brysbaert ratings or the LIWC-LCM method
across topic prompts and labs. We also found that the Syntax-LCM scores are the most effective
for approximating hand-coded scores for Twitter data, a unique context where syntax usage is
often idiosyncratic.
The syntactic features identified during the creation of the Syntax-LCM also indicated
theoretical validity, paralleling the coding rules within the LCM manual by indicating the
importance of adjective-based syntax for abstract sentences. Notably, the Syntax-LCM also
contributed novel linguistic evidence for consistency between typical CLT results and CLT in
language: third-person and past tense verbs indicated abstract sentences, whereas first-person and
present tense verbs indicated concrete sentences.
The effectiveness of the Syntax-LCM makes it a practical tool for researchers who have
struggled to find an automated method that fits with the theoretical conceptualization of abstract
mindsets while simultaneously facilitating the ability to code data efficiently, reliably, and with
interpretable output. As larger corpora of naturally-occurring social interactions on social media
and in digital spaces have become available, accurately automating psychological coding
schemes has become increasingly important. By leveraging these datasets, social scientists can
both test the generalizability of their findings in real-world contexts and identify novel
theoretical relationships that can be further explored in the lab (Iliev et al., 2015).
Furthermore, with complex coding schemes such as the LCM, automated methods may
help to minimize noise in the human coding process. The original LCM manual has been
supplemented twice by the authors with addendum manuals to clarify coding ambiguities.
During the course of our work, we collaborated with LCM authors to develop an additional
addendum clarifying remaining ambiguities to achieve greater than 70% inter-rater reliability.
However, our final set of coding rules is likely different from those that other labs have
developed to address these same complexities. Methods such as the Syntax-LCM are a potential
step forward as they are able to help mitigate the costs associated with abstraction coding while
systematically ensuring that coding between and within labs is consistent and replicable over
time.
The Brysbaert scores dataset was developed to automate coding of abstractness of
individual words. This method was never intended to approximate the LCM or measure abstract
cognitive processing. Therefore, it is somewhat unsurprising that it underperformed compared to
the LIWC-LCM and the Syntax-LCM methods, both of which were explicitly developed to
capture this construct. More surprising was the result from Study 3, where Brysbaert scores were
negatively correlated with the other three measures. Taken together, the results suggest the need
for further investigation into the differences between word abstraction and abstract
cognitive processes: when might abstract thinking be grounded in abstract words, and when
might these two constructs diverge?
Conclusion
Complex structures such as abstractness often require us to go beyond word-level
analysis to look at the relationships between words. By quantifying syntactic differences, the
Syntax-LCM method successfully approximates LCM hand coding by incorporating these
relationships. This technique allows researchers interested in exploring evidence of abstract and
concrete mindsets to leverage the increasingly large sets of naturally-occurring, digital data
available to them while simultaneously ensuring construct validity and avoiding semantic
restrictions. The Syntax-LCM thus represents a practical tool to help researchers test their ideas
with larger and more varied data sources, providing a useful bridge from the lab to the field.
References
Arnold, T., & Tilton, L. (2016). coreNLP: Wrappers around Stanford CoreNLP tools [Computer
software manual] (R package version 0.4-2). Retrieved from
https://CRAN.R-project.org/package=coreNLP
Bhatia, S., & Walasek, L. (2016). Event construal and temporal distance in natural language.
Cognition, 152, 1-8.
Brysbaert, M., Warriner, A. B., & Kuperman, V. (2014). Concreteness ratings for 40 thousand
generally known English word lemmas. Behavioral Research Methods, 46(3), 904–911.
Burgoon, E. M., Henderson, M. D., & Markman, A. B. (2013). There are many ways to see the
forest for the trees: A tour guide for abstraction. Perspectives on Psychological Science,
8(5), 501-520.
Coenen, L.H.M., Hedebouw, L., & Semin, G.R. (2006). Measuring language abstraction: The
Linguistic Category Model (LCM). Retrieved January 20, 2017, from
http://www.cratylus.org/Text/1111548454250-3815/pC/1111473983125
6408/uploadedFiles/1151434261594-8567.pdf
Fujita, K., Henderson, M. D., Eng, J., Trope, Y., & Liberman, N. (2006). Spatial distance and
mental construal of social events. Psychological Science, 17(4), 278–282.
Fujita, K., Trope, Y., Liberman, N., & Levin-Sagi, M. (2006). Construal levels and self control.
Journal of Personality and Social Psychology, 90(3), 351-367.
Iliev, R., & Axelrod, R. (2016). The paradox of abstraction: Precision versus concreteness.
Journal of Psycholinguistic Research, 1–15.
Iliev, R., Dehghani, M., & Sagi, E. (2015). Automated text analysis in psychology: methods,
applications, and future developments. Language and Cognition, 7(02), 265–290.
Liberman, N., & Trope, Y. (1998). The role of feasibility and desirability considerations in near
and distant future decisions: A test of temporal construal theory. Journal of Personality
and Social Psychology, 75(1), 5.
Louwerse, M., Lin, K.-I., Drescher, A., & Semin, G. (2010). Linguistic cues predict fraudulent
events in a corporate social network. In Proceedings of the 32nd annual conference of the
cognitive science society (pp. 961–966).
Maass, A., Salvi, D., Arcuri, L., & Semin, G. R. (1989). Language use in intergroup contexts: the
linguistic intergroup bias. Journal of Personality and Social Psychology, 57(6), 981.
Pennebaker, J., Booth, R., Boyd, R., & Francis, M. (2015). Linguistic Inquiry and Word Count:
LIWC2015 operator's manual.
Sagristano, M. D., Trope, Y., & Liberman, N. (2002). Time-dependent gambling: odds now,
money later. Journal of Experimental Psychology: General, 131(3), 364.
Seih, Y., Beier, S., & Pennebaker, J.W. (2016). Development and examination of the linguistic
category model in a computerized text analysis method. Journal of Language and Social
Psychology, 1-13.
Semin, G. R., & Fiedler, K. (1991). The linguistic category model, its bases, applications and
range. European Review of Social Psychology, 2(1), 130.
Snefjella, B., & Kuperman, V. (2015). Concreteness and psychological distance in natural
language use. Psychological Science, 26(9), 1449–1460.
Stone, P. J., Dunphy, D. C., & Smith, M. S. (1966). The general inquirer: A computer approach
to content analysis.
Takano, K., & Utsumi, A. (2016). Grounded distributional semantics for abstract words. In
Proceedings of the 38th annual meeting of the cognitive science society (pp. 2171-2176).
Austin, TX: Cognitive Science Society.
Trope, Y., & Liberman, N. (2010). Construal-level theory of psychological distance.
Psychological Review, 117(2), 440–463.
Vallacher, R. R., & Wegner, D. M. (1989). Levels of personal agency: Individual variation in
action identification. Journal of Personality and Social Psychology, 57(4), 660.
Yip-Bannicq, M., Kalkstein, D. A., & Trope, Y. (2017). Abstraction in shared reality.
Manuscript in preparation.
Chapter 3: Do Moral Judgments Align with Moral Behaviors? A Meta-Analytic Review
Abstract
The past two decades have seen an explosion of psychological research on moral judgment and
separately on moral behavior. Surprisingly little is known about how these two constructs relate.
Morality might be strongly tied to behavior because of the central role of morals to the self.
Alternatively, the social desirability of moral judgments might not carry through to action.
Compiling 401 correlations from 139,000 participants, the current meta-analysis provides a
systematic, quantitative review of the moral judgment-behavior relationship. An overall positive
correlation (r = .40, k = 252, 95% CI [.38,.43]) suggested that people often act in line with their
moral principles. However, moral judgments were most strongly related to behavioral intentions
(r = .52, k = 127, 95% CI [.36,.68]), followed by self-reports of past behaviors (r = .36, k = 170,
95% CI [.20,.34]). Morality was less strongly related to directly observed behaviors (r = .27, k
= 40, 95% CI [.20,.52]). The large variation in effect sizes suggests that people are driven by
social desirability concerns in reports on the moral domain. More objective behavioral
observations are needed to better understand when moral judgments motivate moral behavior.
These results integrate the existing body of research on morality, identify best practices for
research procedures, and pinpoint context variations that help or hinder living up to one’s moral
values.
Keywords: morality; moral judgment; moral behavior; meta-analysis
Introduction
Serving as Senator from Idaho for 18 years, Larry Craig was outspoken in his moral
opposition to homosexuality, gay marriage, and gays in the military: “It is unacceptable to risk
the lives of American soldiers and sailors merely to accommodate the sexual lifestyles of certain
individuals” (Saletan, 2007). When Craig was later arrested for soliciting gay sex in a
Minneapolis airport bathroom, he was widely decried as a hypocrite for engaging in the very
behavior about which he had so publicly and harshly expressed a moral judgment.
Hypocrisy like Craig’s garners outrage and attention. People expect that a person’s moral
judgments of right and wrong should be consistent with that person’s own behavior. However,
despite thousands of empirical investigations into moral judgment and moral behavior,
surprisingly little is known about how moral judgments drive behavior. The explosion of
research on morality, starting around 2001, has created two largely divergent subfields: one on
moral thought (e.g., morally relevant judgments, values, concerns, or attitudes) and a separate
field of inquiry on moral behavior (e.g., prosocial behaviors like helping, and antisocial
behaviors like lying or cheating). Behavior, as opposed to thought, reflects observable responses
that are socially meaningful in that they can have personal consequences and impact others (see
Baumeister, Vohs, & Funder, 2007).
As our theoretical understanding of moral judgments and moral behaviors in isolation has
each become more sophisticated, the gap between the two literatures has become more striking.
Researchers of moral judgments are not blind to the importance of behavior but have typically
assumed that, of course, most judgments translate into action. Given their inherent perceived
objectivity (Skitka, 2014), moral beliefs are assumed by laypeople and researchers alike to
consistently guide our actions (Atran & Axelrod, 2008; Feather, 1995; Tetlock, 2003). The many
elaborate theoretical models of moral judgment (e.g., Cushman, 2013; Graham et al., 2013; Gray,
Young, & Waytz, 2012; Haidt, 2001; Janoff-Bulman & Carnes, 2013; Narvaez, 2008) thus
remain detached from explicit prediction of behavior.
Moral behavior researchers are similarly aware of the importance of people’s values, but
rarely include measures of values or judgments when quantifying behaviors. The inventive
approaches to assessing and manipulating moral behavior (e.g., manipulating levels of light in
the room; Zhong, Bohns, & Gino, 2010) are thus separated from theories of judgment. As a
result, the science of morality remains divided in ways that impede progress (Graham &
Valdesolo, 2017).
The present research attempts to bridge moral judgment and behavior, thereby addressing
this divide. Using meta-analytic methods, we comprehensively investigated the relationship
between moral judgments and moral behaviors by incorporating a broad range of moral
judgment operationalizations, behavioral domains, and measurement approaches. To identify
potential moderators of this relationship, we relied on perspectives in moral judgment and
behavior and in addition adapted insights developed to explain variations in the judgment-
behavior relationship.
The rest of the paper is organized as follows: first, we operationalize moral judgment and
moral behavior. Second, we incorporate previous literature to generate eight hypotheses about the
relationship between these two constructs and moderators that may affect this relationship. Third,
we assess these hypotheses through a comprehensive meta-analysis, and finally, we discuss the
implication of our results for the field.
Defining Moral Judgment and Moral Behavior
A challenge for morality research is the lack of agreed-upon definitions of morality and
its sub-constructs. Many definitions of morality in philosophy and psychology stake out a
normative position of what is right and what is wrong (e.g., morality as maximizing welfare for
utilitarians, or morality as “prescriptive judgments of justice, rights, and welfare pertaining to
how people ought to relate to each other” in Turiel, 1983, p. 3). However, definitions in terms of
specific values such as these are disputed across research labs and subfields.
Conceptual and empirical work in moral psychology has provided several inclusive,
descriptive (not normative) definitions of moral judgment and behavior, which we rely on here.
Defining moral judgment. Blasi’s work (1980) provides a clear definition of moral
judgment that encompasses three types of cognitions:
“(a) moral information (i.e., the verbal recognition of moral norms, at least as defined by
a specific culture); (b) moral attitudes or values, expressing either a personal belief, an
affective inclination, or a tendency to behave in a certain moral manner; and (c) moral
judgment or moral reasoning (these two terms are used here interchangeably),
characterized by the justification of a moral conclusion and by the general or specific
criteria by which moral decisions are supported.” (p. 128)
Following from this definition, we operationalize moral judgment as any measure
assessing whether or to what extent one’s values guide one’s judgments of a given behavior,
person, or event (Manstead, 2000). Specifically, we include measures of moral judgment that
explicitly reference morality (e.g., questions about moral right and wrong, but not right and
wrong in general, which could be construed in non-moral ways such as factual accuracy).
Defining moral behavior. Following from his definition of moral judgment, Blasi
(1983) defined moral behavior as:
“behavior which is preceded by moral judgment, whether or not it corresponds to the
judgment; morally positive behavior is that behavior that corresponds to the agent’s
moral judgment and is performed because the agent understands it to be morally good.”
(p. 185)
Therefore, moral behavior is operationalized within this paper as any behavior relevant to
a moral judgment.
Competing Hypotheses 1 & 2: Morality as a Powerful Motivator
To what extent do moral judgments about self and others reflect a strong personal
standard that guides and organizes moral behaviors? One answer is that moral judgments are
uniquely powerful predictors of action. Most moral judgment frameworks assume that moral
behaviors naturally and consistently align with moral judgments (e.g., Atran & Axelrod, 2008;
Feather, 1995; Skitka, 2002, 2004; Tetlock, 2003). Behaviors stemming from moral values are
not subject to the same cost/benefit analyses as non-moral attitudes. Moral principles persist
despite individual personal costs (Atran & Axelrod, 2008; Dehghani, Iliev, Sachdeva, Atran,
Ginges, & Medin, 2009). The personal importance of these values also increases the cross-
situational consistency of moral actions (Frimer & Walker, 2009). A moral judgment implicates
the self-concept, and violating or supporting it is tied to important emotions of shame, guilt, and
pride (Rozin, Lowery, Imada, & Haidt, 1999).
Given this interpretation of morality as a powerful life principle, we would expect a
strong relationship between moral judgments and moral behaviors. Supporting this assumption,
prior meta-analyses of the Theory of Planned Behavior have shown that measures of “felt moral
obligation” account for an additional 1-10% of the variance in intentions and behavioral
outcomes even after controlling for attitudinal and motivational variables in these models
(Harland, Staats, & Wilke, 1999; Rivis, Sheeran, & Armitage, 2009; c.f. Conner & Armitage,
1998 for a narrative review).
Yet there is another possible answer to this question. Research on moral hypocrisy
indicates that people do not always live up to their values. They downplay the significance of their
own transgressions. Despite feeling a sense of moral responsibility to be fair, participants who
are told to assign rewards based on a coin toss consistently assign the reward to themselves at a
rate much higher than chance (Batson, 2008). People frequently act in immoral ways, at least
when the chance that they will get caught is low (Batson, 2011). Individuals often explain away
their self-serving immorality by attributing it to upholding other values, such as fairness or
altruism (Gino, Ayal, & Ariely, 2013). Sometimes, real tradeoffs necessitate acting against one
important value in the service of another: Do you stay loyal to your friend and lie to protect them
when they break a rule, or do you do what’s fair and tell the truth but be disloyal?
Competing hypotheses: Hypothesis 1: If people are guided by powerful moral standards,
then we should see a large, consistent relationship between moral judgments and behaviors that
is larger than the established attitude-behavior correlation. Hypothesis 2: However, if moral
values are not always followed, as indicated by moral hypocrisy research, then the relationship
between moral judgments and moral behaviors should be weaker and more variable.
Competing Hypotheses 3 & 4: Morality as Objective Fact (Universally Applied Across
Judgment Targets)
Are moral judgments applied universally to self as well as to others? One answer, based
on the idea that moral judgments are felt as objective truths applicable to everyone (Skitka,
2002), is that judgments about oneself are similar to judgments about others. For example, if
someone judges murder to be morally wrong for others to do, then that person should not
themselves commit murder. This perceived universal standard has led most morality researchers
to equate moral judgment measures targeted at the self, at others, and even at broad value
domains when discussing their role in moral cognition and behavior.
Another answer is that people have self-serving attribution biases and moral double
standards, judging others’ behavior by stricter standards than they judge themselves. Given
morality’s importance to the self, people may protect their self-esteem by interpreting their past
moral failings as situational and not diagnostic of their own moral standing (Campbell &
Sedikides, 1999; Malle, 2006). When judging the moral or immoral behaviors of others,
however, individuals often interpret value violations much more strictly and as diagnostic of
personality (Valdesolo & DeSteno, 2008).
Competing hypotheses: Hypothesis 3: If moral judgments truly are objective and
universally applied, then we would expect that moral judgments about the self and moral
judgments about others should have similar correlations with moral behavior. Hypothesis 4:
However, if people’s moral judgments are more subjective, then we would expect moral
judgments about the self to differ systematically from moral judgments about others or in the
abstract, and hence show different correlations with moral behaviors.
Competing Hypotheses 5 & 6: Morality is Universal Across Time and Space
Next, we questioned whether the classic analysis of correspondence between judgment
and behavior measurement features (Ajzen & Fishbein, 1977) applies to moral judgments. On
the one hand, the moral domain is unique because we assume that moral judgments are not
situationally or temporally constrained. Moral imperative judgments are consistent across
situations (Skitka, 2014), and inconsistency across situations in moral judgments and behaviors
is uniquely damaging to both social relationships and one’s self concept (Meindl, Johnson, &
Graham, 2016). If someone says it is morally wrong to cheat on a spouse, we don’t expect or
need them to qualify when or where. We expect that they would not cheat on their spouse ever –
not tomorrow, not at work, not three years from now in Bemidji.
On the other hand, attitude researchers have determined that best measurement practices
dictate that the time frame and context given for the judgment measure should align with those of
the behavior measure in order to maximize the size of the relationship between them (Ajzen &
Fishbein, 1977). This measurement concern acknowledges that people’s attitudes and behaviors
are situationally and temporally constrained, and that attitudes will shift depending on the
context that is provided by the researcher. Asking Sue if she likes the chocolate ice cream they
will be serving at work tomorrow will be much more predictive of her eating said chocolate ice
cream than if you ask if she likes chocolate ice cream overall. If moral judgments are similar to
non-moral judgments, then ensuring that the time and context framing of the judgment and
behavior measures match will increase the size of the relationship between them.
Competing hypotheses: Hypothesis 5: If moral value judgments are unique and not
situationally or temporally constrained, then correspondence between judgment and behavioral
measurement features would not be a significant moderator of the moral judgment-behavior
relationship. Hypothesis 6: However, if temporal and contextual constraints do affect moral
behavior, as the literature on correspondence in attitudes would suggest, we would anticipate
stronger judgment-behavior relationships for studies in which the moral judgment measure specified the same
action, context, and time elements as the moral behavior measure (Ajzen & Fishbein, 1977;
Eagly & Chaiken, 1993).
Competing Hypotheses 7 & 8: Self Reports of Moral Behavior
Do moral judgments predict actual behavior as well as self-reported actions? One answer,
assumed by much of the moral judgment literature, is that people accurately report their moral
actions (Graham et al., 2011; Skitka, 2010). Following the norm in much of psychology, research
in this review interchangeably assessed intentions, self-reported past behaviors, and objectively
measured behavior. By far the most common measures were “behavioroid” ones that involved
participants’ reports of what they had done (e.g., “Have you cheated on an exam in the last
month?”) or planned to do (e.g., “Will you cheat on the next exam?”). Comparatively few
researchers – ourselves included – directly quantified behavior in experimental or real-world
situations (e.g., providing a participant the opportunity to cheat on a test in the lab). If people do
accurately report their moral judgments and behaviors, then correlations with self-reported
behaviors (retrospective and intentions) should approximate those with objectively measured
behaviors.
Another answer, however, is that self-reports are susceptible to biases when social
processes are in play. As we explain, the validity of self-reports in this area is challenged by
concerns about social desirability.
Social desirability concerns. Inaccurate self-reports could arise when people are
concerned that their behaviors or judgments have negative social implications. In fact, social
desirability concerns may be particularly problematic in the moral domain. Even a single
immoral action can harm one’s reputation. More than for other kinds of traits, people quickly form
negative character judgments about others after hearing about a single morally-incongruent behavior
(Meindl et al., 2016). Some studies measuring moral judgments have found that participants’
judgments were more susceptible to social desirability biases when the situation described was
more unethical (Chung & Monroe, 2003), and studies measuring moral behaviors show that
reports of ethical conduct are strongly influenced by the perceived desirability of the behavior
being measured (Randall & Fernandes, 1991). Given such social desirability concerns, we
expect that all self-reports of behavior (both retrospective and prospective) will be more strongly
correlated with moral judgments than will objectively measured behaviors.
Competing hypotheses: Hypothesis 7: If self-reports are generally accurate and not
uniquely susceptible to bias, we expect correlations between moral judgments and all behavior
measurement types to be equivalent and to show similar patterns of moderator effects. Hypothesis
8: If self-reports of moral behavior are subject to self-serving attribution bias, then the lack of
self-insight should be especially evident for behavioral intentions. We expect behavioral intentions
to align more strongly with moral judgments, because past immoral actions are treated as
non-diagnostic of the self; retrospective reports, by contrast, should behave much more like
objectively measured behavior, because any incongruence with moral beliefs has already been
explained away.
The Current Meta-Analysis
Studies in our review varied greatly. The typical study proceeded as follows: participants first
rated some person, action, or event on a scale that referenced morality (e.g., felt moral
obligation, is it morally right, is it morally relevant). Within the same testing session, participants
completed a task involving moral behavior or provided behavioral self-reports (i.e., past
behavior, future intentions).
Method
All analysis scripts, coded datasets, excluded studies, and supplemental materials are
available at https://osf.io/8f6pm/?view_only=35561473072a4fd68087187ac179ccf9.
Search Strategy
We conducted the primary literature search (concluded February 1, 2015)
using electronic databases (PsycINFO, PsycARTICLES, Google Scholar, Dissertation
Abstracts Online, Web of Science, and the university online library database) and the reference
lists from relevant articles. We used the following keywords in our searches: (moral* OR ethic*
OR honest* OR cheat* OR altruism* OR help*) AND behavior AND judg* AND (subject* OR
experiment OR participant*) NOT child* NOT develop* NOT criminal NOT adolescent* NOT
delinquen*. Email requests were sent to the Society for Personality and Social Psychology,
Society for Judgment and Decision-Making, and Moral Psychology Research Group listservs to
request unpublished data. This request resulted in five additional eligible studies, both published
and unpublished (Aguilar, np; Canova & Manganelli, 2015; Mackay, 2014; Sims, 2014; Vaisey,
2012).
When vital statistical information for calculating effect sizes was missing, we contacted
the authors directly to request that information (n = 107). In cases where authors did not respond
to our first request, we sent two follow-up reminder requests. This resulted in the inclusion of
data from 41 articles, and exclusion of 66 potentially eligible articles due to lack of necessary
data. A full list of the studies that did not meet our inclusion criteria, along with the reason for
each exclusion, is available on OSF at the link listed above.
In total, our search of the literature resulted in 238 studies from 216 eligible reports
(7.5% unpublished) with 401 correlations, 252 independent sample effect estimates, and just
over 139,000 participants (see Figure 1 for MARS study selection flow chart). For a complete
summary of descriptive information for the 238 studies, see Online Supplemental Materials.
Inclusion and Exclusion Criteria
We selected studies using three inclusion criteria (described in detail below): presence of
non-manipulated moral judgment measures, presence of non-manipulated moral behavior
measures, and a healthy, adult sample.
Presence of moral judgment measure. The study must have included a quantifiable
measure of self-reported moral judgment, defined as a participant rating something or someone as
moral or immoral (e.g., adapted Moral Disengagement, Bandura et al., 1996; Personal
Ecological Norms, Torjusen et al., 2001; Ethical Value Assessment, Peppas & Peppas, 2000).
Studies measuring the reasons why people judge actions to be moral or immoral (e.g.,
Kohlberg’s judgment stages, Erkut, Jaquette, & Staub, 1981; Normlessness Scale, Dean, 1961)
or the importance of specific value concepts to self-identity (e.g., Schwartz Values Survey,
Schwartz 1992; Social Value Orientation, Murphy, Ackermann, & Handgraaf, 2011) were
excluded as these measures do not directly ask individuals to rate the morality of a person or
action. Finally, we excluded studies that used guilt as a proxy for moral judgment, as guilt is
related to many non-moral constructs (e.g., social ramifications).
Presence of moral behavior measure. The study must have included a behavioral
measure related to the moral judgment made by the participant. This behavior could be a self-
reported past behavior (e.g., “I gave blood last week”), a future behavioral intention commitment
within a specified time frame (e.g., “I will give blood next week”), or a behavior objectively
measured by the researcher (e.g., researchers tracked whether the participant gave blood at a
blood drive). Studies that only assessed behavioral intention without specifying a time frame
were excluded, as we wanted to maximize the likelihood that participants would provide a
concrete intention to act and not just a judgment of whether the action is moral. Studies that used
willingness-to-pay or vignette behavioral measures were excluded as they assess hypothetical
action likelihood and do not directly reflect a behavioral commitment (cf. Kraus, 1995). Studies
that attempted to change judgments or behaviors during the study were not included, with the
exception of any non-manipulated control condition, because manipulations may affect the size of
the judgment/behavior relationship.
Healthy, adult sample population. To be included in our review, studies must have
recruited healthy, adult participants. Criminal populations were excluded to avoid introducing
unnecessary noise due to deviant perceptions of social norms. Adolescent populations were
excluded as their still-developing moral systems may affect their ability to act and judge
coherently.
Coding Procedures
Study coding was completed independently by the first author and two research assistants
after a series of extensive coding-training sessions.⁷ After coding was completed, all
discrepancies were resolved by discussion (average ICC = 0.83).

⁷ We also coded a number of other variables that proved to have no impact on the relationship
between moral judgments and behaviors: year of publication, academic field, number of
judgments, number of behaviors, measurement feature abstraction, judgment-behavior
presentation order, and both continuous and categorical time lag between measurements. A full
list of codes is available on OSF, and each of these is discussed in more depth in the
supplemental materials.
Moral judgment target. We coded judgment measures into four categories based on the
target of the moral judgment: self-focused, other-focused, actor-independent, or broad value
scale. Self-focused moral judgment (k = 281) was defined as a participant rating his or her own
actions as moral or immoral (e.g., “I have a moral obligation to donate blood”). Other-focused
moral judgment (k = 24) was defined as a participant rating the morality of a researcher-specified
third party’s action (e.g., “Parents have a moral obligation to donate blood”). Actor-independent
moral judgment (k = 43) was defined as a participant rating the morality of an action without a
target actor specified (e.g., “Donating blood is a moral obligation”). Finally, broad value scale
moral judgment (k = 52) was defined as a participant rating the morality of a list of statements
related to a target domain which includes more than one context, behavior, or target actor (e.g.,
Ethical Value Assessment, Peppas & Peppas, 2000).
Judgment-behavior measure correspondence. We evaluated the correspondence
between the measure of moral judgment and moral behavior for each of three components: time,
action, and context. Congruent measures (congruent time k = 188; congruent action k = 245;
congruent context k = 274) were defined as the two measures directly matching on that specific
measurement feature (e.g., both measures described the time frame as “next week”; both
measures described the action as “donate blood”; both measures described the context as “at the
school blood drive”). Incongruent measures (incongruent time k = 207; incongruent action k =
150; incongruent context k = 121) were defined as the two measures mismatching on that
specific measurement feature (e.g., the moral judgment measure’s time frame was not specified
and the behavior time frame was “next week”; the moral judgment measure’s action was
“medical donation” and the moral behavior was “donate blood”; the moral judgment context was
not specified and the behavior measure was “at the school blood drive”).
Moral behavior measurement. We coded the behavior measures into one of three
categories: behavioral intention, retrospective behavior, or objective behavior. Retrospective
behavior (k = 214) was defined as self-reports of past behaviors in the absence of objective
measures. Objective behavior (k = 42) was defined as a behavior objectively assessed by the
researchers without participant input. Behavioral intention (k = 144) was defined as self-reports
of intent to perform the behavior in the future.
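To make this coding scheme concrete, the sketch below lays out one plausible structure for the coded dataset. The column names and example rows are hypothetical and only illustrate the categories described above; the actual codebook and coded datasets are those posted on OSF.

# Hypothetical layout of the coded dataset (column names and values are illustrative only).
# Each row is one moral judgment/behavior correlation extracted from a study.
coded <- data.frame(
  study             = c("Study A", "Study A", "Study B"),
  ri                = c(0.42, 0.31, 0.18),           # reported correlation
  ni                = c(120, 120, 310),               # sample size
  judgment_target   = c("self", "self", "broad_value_scale"),
  behavior_type     = c("intention", "objective", "retrospective"),
  time_congruent    = c(1, 0, 0),                     # 1 = measures match on this feature
  action_congruent  = c(1, 1, 0),
  context_congruent = c(0, 0, 0)
)
# Continuous correspondence score (0-3), as described in the coding section
coded$correspondence <- with(coded, time_congruent + action_congruent + context_congruent)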
Analysis Overview
Analysis strategy for overall effect size. We conducted all analyses using the metafor
R package (Viechtbauer, 2010). We used the correlation r for our effect size calculations,
transformed each into Fisher’s Z statistics for computations, and then converted back to
correlations for reporting effect size estimates. In rare cases in which the correlations reported
were broken out into conditions, we used the control condition’s correlation coefficient and
excluded other coefficients from analysis. A positive effect size indicates that stronger moral
judgments corresponded to greater moral action; a negative effect size indicates that stronger moral
judgments corresponded to less moral action. We used a random-effects model to calculate the
overall effect size between moral judgments and moral behaviors in order to account for the
large variance in study features in our sample (Borenstein, Hedges, Higgins, and Rothstein,
2009; Cooper, 2010).
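As a rough illustration of this pipeline, the following sketch uses the metafor package on a data frame like the hypothetical one above. The variable names are assumptions, and the code shows the general approach rather than the exact analysis scripts posted on OSF.

library(metafor)

# Convert each correlation (ri, ni) to Fisher's z (yi) with sampling variance vi = 1/(n - 3)
dat <- escalc(measure = "ZCOR", ri = ri, ni = ni, data = coded)

# Random-effects model on the Fisher's z values
res <- rma(yi, vi, data = dat, method = "REML")
summary(res)                          # pooled z, T^2, I^2, and Q are reported here

# Back-transform the pooled estimate and CI to the correlation metric for reporting
predict(res, transf = transf.ztor)

# Forest plot of all effect sizes (cf. Figure 2)
forest(res)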
Many studies reported correlations for multiple moral judgment measures or multiple
moral behavior measures. As these effect sizes are not independent (Borenstein et al., 2009), we
used Cooper’s (1998) shifting-unit-of-analysis approach as a compromise between utilizing all
available data and maintaining independence assumptions. We used individual effects when
testing moderators. We used the omnibus effect for a study in the analysis of the overall effect
size. This approach ensures that each included sample would only contribute one effect size to
the calculation, while preserving the effect sizes for judgment/behavior relationships reflecting
different variables of interest (e.g., studies that measured both behavioral intention and objective
behavior). This approach preserves the independence assumption of the overall effect
calculation, maximizes available data, and captures important differences in our key variables of
interest at the moderator level (Cooper, 2010). For each effect size, we report the 95%
confidence interval, and for the overall effect we also provide two measures of variance
heterogeneity: the T² measure of study-to-study variances (Borenstein et al., 2009), and the I²
index of cross-study inconsistency (Higgins, Thompson, Deeks, & Altman, 2003).
We then employed mixed-effects models for the univariate analyses of categorical
moderators (e.g., moral judgment and moral behavior measurement). For the meta-regression
analyses of continuous moderators (e.g., publication year), we used the Hunter and
Schmidt (2014) method-of-moments approach (see Table 2). We also calculated multiple
regressions using this method and report within each section when main effects drop to non-
significance in the presence of other moderators (see Table 3).
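A hedged sketch of how such moderator models could be specified in metafor follows. The factor levels and variable names carry over from the hypothetical data frame above, and the estimator choice (a Hunter-Schmidt-type between-study variance estimator) is an assumption rather than the dissertation's exact specification.

# Categorical moderator (mixed-effects model): the omnibus QM statistic tests
# whether effect sizes differ across judgment-target categories
mod_target <- rma(yi, vi, mods = ~ factor(judgment_target), data = dat)

# Continuous moderator (meta-regression) on the 0-3 correspondence score;
# method = "HS" requests a Hunter-Schmidt-type estimator of between-study variance
mod_corr <- rma(yi, vi, mods = ~ correspondence, data = dat, method = "HS")

# Multiple meta-regression with several moderators entered simultaneously (cf. Table 3)
mod_full <- rma(yi, vi,
                mods = ~ factor(behavior_type) + factor(judgment_target) + correspondence,
                data = dat)
summary(mod_full)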
Results
Overall Analyses
First, we examined the overall meta-analytic effect size of the relationship between moral
judgments and all behavior measures (see visual forest plot in Figure 2). The results revealed a
positive, moderate association between moral judgments and moral behaviors, r = .40, k = 252,
95% CI [.38, .43]. Significant variation emerged in the distribution of effect sizes (T² = .05,
SE = .00), along with considerable heterogeneity among the included studies, I² = 95.67;
Q(250) = 6,389.27, p < .001. This effect lends support to Hypothesis 2, that the moral judgment/behavior relationship is
variable and not as strong as we might assume.
Figure 2. Forest plot of all effect sizes. Note: This forest plot includes point estimates and
confidence intervals for all studies; the diamond and its width represent the combined effect size
(r = .40) and its 95% confidence interval.
To test whether this analysis may have been affected by publication bias, we examined
the distribution of effect sizes in a funnel plot (see Figure 3; Egger et al., 1997) and employed
Duval and Tweedie’s (2000) Trim-and-Fill procedure on the random-effects model. Bias would
be suggested by asymmetry in the funnel plot, reflecting a relationship between effect
size and study precision. In the current meta-analysis, selective publication or reporting is likely
to be evident in fewer small-N and less precise studies in the non-predicted direction, thus
truncating the lower left of the funnel plot (Egger et al., 1997). Results of the Trim-and-Fill
analysis indicated that there were no missing studies from the scatterplot. As a follow-up, we
conducted Egger’s test of symmetry to test for bias due to underestimation (Sterne & Egger,
2005). Results of this test did not indicate significant asymmetry, z = 0.01, p = .992, and this test
in combination with the previous tests indicates that publication bias is unlikely.
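These publication-bias checks map onto standard metafor utilities. A minimal sketch, assuming the fitted random-effects model `res` from the earlier sketch:

# Funnel plot of observed effects against their standard errors (cf. Figure 3)
funnel(res)

# Duval & Tweedie trim-and-fill: estimates how many studies appear to be
# "missing" from one side of the funnel and imputes them
trimfill(res)

# Egger-type regression test for funnel-plot asymmetry
regtest(res)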
Figure 3. Funnel plot of effect sizes.
Moderator Analyses
Next, we conducted univariate analyses to test the remaining hypothesized moderator
effects (see Table 2).
Table 2
Univariate Moderator Effect Sizes for Moral Judgment/Behavior Correlations

Variable and Class               QM (df)      p       QE (df)          p       T      k      r      95% CI LL/UL
Judgment Measure                 16.32 (3)    .000    6207.26 (258)    .000    0.22
  Self-focused                                                                        175    .44    .41/.47
  Other-focused                                                                       18     .27    .13/.41
  Actor-independent                                                                   34     .32    .21/.43
  Broad value scale                                                                   35     .36    .24/.47
Time Correspondence              4.02 (1)     .045    7257.56 (267)    .000    .23
  Congruent                                                                           134    .43    .39/.47
  Incongruent                                                                         135    .38    .28/.47
Action Correspondence            12.10 (1)    .000    5871.02 (254)    .000    .21
  Congruent                                                                           153    .45    .41/.48
  Incongruent                                                                         103    .35    .26/.44
Context Correspondence           4.89 (1)     .027    6585.64 (257)    .000    .22
  Congruent                                                                           177    .43    .39/.46
  Incongruent                                                                         82     .36    .27/.45
Correspondence (continuous)      21.59 (1)    .000    6890.16 (288)    .000    .21    290    .35    .27/.43
Behavior Measure Type (Overall)  14.00 (2)    .000    6427.71 (260)    .000    .22
  Self-reported behavior                                                              222    .42    .39/.45
  Objective behavior                                                                  40     .27    .20/.34
Behavior Measure Type (Split)    59.25 (2)    .000    7088.17 (334)    .000    .21
  Retrospective behavior                                                              170    .36    .20/.52
  Objective behavior                                                                  40     .27    .20/.34
  Behavioral intention                                                                127    .52    .36/.68
Moral judgment target. First, we tested whether the moral judgment/behavior
relationship is not subject to judgment target effects (Hypothesis 3) or if the target of the
judgment systematically moderates this relationship (Hypothesis 4). We found that studies that
employed self-focused moral judgment measures had significantly higher correlations between
moral judgments and behaviors, r = .44, k = 175, 95% CI [.41, .47], compared with studies that
used moral judgment measures that were other-focused, r = .27, k = 18, 95% CI [.13, .41], actor-
independent, r = .32, k = 34, 95% CI [.21, .43], or broad value scale, r = .35, k = 35, 95% CI [.24,
.47]. Consistent with Hypothesis 4, then, moral standards are not applied universally across all
targets. Much as with non-moral attitudes, judgments focused on one’s own actions
correlated most strongly with one’s own moral behaviors.
Moral judgment/behavior correspondence. Next, we sought to test whether moral
judgments are universally applied across time and space (Hypothesis 5) or if the
judgment/behavior relationship is temporally and contextually constrained in the moral domain
as it is in non-moral domains (Hypothesis 6). Across all three potential constraints (action,
context, and time), the correlation between moral judgments and behaviors was significantly
higher when there was correspondence between each of the judgment and behavior measurement
features. After controlling for all other variables in a meta-regression, both time and action
correspondence remained significant, though context dropped to non-significance (see Table 2).
The overall effect of correspondence also remained significant when correspondence was
operationalized as a continuous variable ranging from 0 (no corresponding features) to 3 (all
features correspond), estimate = .04, z = 2.83, p = .005, 95% CI [.01, .06].
However, a mixed-effects model with type of behavior measure and measurement
correspondence predicting effect sizes provided an important caveat to this effect. We found a
significant interaction between behavior type and measurement feature correspondence, Q (2) =
6.25, p = .044. Specifically, correspondence significantly increased the size of the
judgment/behavior relationship for behavioral intentions, Q (1) = 16.80, p < .001, but did not
affect the size of the relationship for objective behaviors, Q (1) = 0.69, p = .406, or retrospective
behaviors, Q (1) = 1.36, p = .243. This result indicates that behavioral intentions are subject to
correspondence effects and thus are situationally and temporally constrained, as would support
Hypothesis 6. However, past recall and objective indicators may not depend on time, action, and
context measurement framing to the same extent, which provides partial support for Hypothesis
5.
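One plausible way to specify the interaction model described here in metafor, with a simple subgroup follow-up, is sketched below. The variable names and coding are assumptions carried over from the earlier sketches, not the dissertation's exact model syntax.

# Does the correspondence effect depend on how behavior was measured?
mod_int <- rma(yi, vi,
               mods = ~ factor(behavior_type) * correspondence,
               data = dat)
summary(mod_int)   # the interaction coefficients test whether the correspondence
                   # slope differs across behavior measure types

# Follow-up: estimate the correspondence slope separately within each behavior type
for (bt in unique(dat$behavior_type)) {
  cat("\nBehavior type:", bt, "\n")
  print(rma(yi, vi, mods = ~ correspondence, data = dat, subset = behavior_type == bt))
}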
Moral behavior measurement. To test for social desirability bias (Hypotheses 7-8), we
next explored the effects of the type of behavior measured: retrospective self-report, objective
indicator, or prospective behavioral intentions. First, we tested whether self-report measures
overall differed significantly from objectively measured behaviors. We found a significantly
smaller relationship between moral judgments and objective behavior, r = .27, k = 40, 95% CI
[.20, .34], compared to self-reports of behavior, r = .42, k = 262, 95% CI [.39, .45]. This result
provides support for the hypothesis that self-reports in the moral domain are subject to social
desirability bias (Hypothesis 8).
Next, we compared intentions and retrospective reports separately to objectively
measured behavior. We found that correlations for behavioral intentions, r = .52, k = 127, 95%
CI [.36, .68] were particularly inflated beyond objective behavior/judgment correlations, though
retrospective reports of behavior were also significantly higher, r = .36, k = 170, 95% CI [.20,
.52]. However, we found that the moral judgments/retrospective behavior correlation, r = .38,
95% CI [.23, .54] was not significantly different from moral judgment/objective behavior
correlations, r = .31, 95% CI [.23, .38] after controlling for all other moderators in a meta-
regression, z = 1.95, p = .053 (see Table 3). Importantly, the significant, larger relationship
between moral judgments and behavioral intentions persisted, r = .52, 95% CI [.36, .69], z =
4.22, p < .001. This result indicates that social desirability concerns may be particularly
problematic for accurate self-reports of behavioral intentions but not for retrospective reports.
General Discussion
In the current meta-analysis, we find that moral judgments and moral behaviors are
moderately correlated, and the size of this overall effect for morality proved to be surprisingly
similar to past meta-analytic reviews linking attitudes and behavior (e.g., Fishbein & Ajzen,
1988; Glasman & Albarracín, 2006; Kraus, 1995).
Table 3
Mixed Effects Multiple Regression Model Predicting the Correlation between Moral Judgments
and Moral Behaviors

Variables                      b       SE      p       95% CI LL    95% CI UL
Intercept                      0.27    0.04    .000     0.18         0.36
Retrospective behavior         0.04    0.04    .291    -0.04         0.12
Behavioral intention           0.19    0.04    .000     0.11         0.28
Other-focused judgment        -0.07    0.05    .191    -0.17         0.03
Actor-independent judgment    -0.11    0.04    .004    -0.18        -0.04
Broad value scale             -0.01    0.04    .822    -0.09         0.07
Correspondence                 0.04    0.01    .005     0.01         0.06

Model statistics: T² = .04, SE = .01, R² = .29, QM(7) = 91.61, QE(359) = 7,146

Note. Retrospective behavior and Behavioral intention were coded as 1, with objective
behavior coded as 0 as the comparison group. Other-focused, Actor-independent, and
Broad value scale were each coded as 1, with self-focused judgment coded as 0 as the
comparison group. Correspondence ranged from 0 (no corresponding features) to 3 (all
features corresponding).

Morality Should Be Different
Psychological theories of morality typically conceptualize and measure moral judgments
as stable individual differences or personality traits (e.g., moral foundations, Graham et al., 2013;
moral convictions, Skitka, 2010). Moral values are assumed to be cross-situationally consistent,
and perceived as universal truths that should be applicable to all people across all situations
(Skitka, 2010). Additionally, we assume that this value and judgment consistency will translate
into behavioral consistency. Unlike other non-moral domains, researchers and laypeople alike
expect this moral consistency. We judge ourselves on our own moral consistency (Graham et al.,
2015), and we make inferences about others’ character based on single instances of morally-
inconsistent behavior (Meindl et al., 2016).
The current findings call into question this folk-conceptualization of moral consistency
across judgments and behaviors. Although moral judgments may be more consistent than non-
moral judgments, we did not find that moral behaviors necessarily manifest from these
judgments. Thus, it is possible that the extent of moral hypocrisy that we see in daily life is a
reflection of a more generalized judgment-behavior incongruence: There are many interpersonal
and situational constraints to behavior, and moral behaviors are not unique in this manner.
Measurement Matters
As with the broader psychological literature, 90% of moral psychology studies relied on
“behavioroid” measures involving self-reported past behaviors or behavioral intentions.
However, these self-report measures functioned very differently in our analysis. We find that the
judgment-behavior relationship effect was significantly larger for behavioral intentions when
compared to objective behaviors, but that the correlations between judgments and behaviors were
similar for retrospective and objective behaviors.
In line with the fundamental attribution error, one possibility for this finding is that we do
not interpret our past moral failings as reflective of our character (Graham, 2014; see also
Hofmann et al., 2014). While we can admit to ourselves and to others that our past behaviors
were inconsistent, we take this as non-diagnostic and continue to maintain positive illusions
about our future behaviors, as gauged by our behavioral intentions. To the extent that we have
already explained away our past morally-incongruent behavior, we would feel comfortable
admitting to having not acted in alignment with our values. Alternatively, we may believe that
we have become better people than we were before, and that we would never repeat these
behaviors in the future. By failing to acknowledge the meaning of those actions for our self-
concept, it would also be unlikely that we would account for our past incongruent behavior when
assessing our likelihood of acting in alignment with our moral values in the future.
Methodologically, these results, combined with the much closer correlation sizes, indicate that
researchers using self-reported past behaviors as a proxy for objective behavior may obtain more
accurate results than those using behavioral intentions.
Along with the larger-sized effects, behavioral intentions were also especially sensitive to
measurement correspondence. Studies with greater correspondence in the time frame and context
given for the judgment and behavior measures obtained larger correlations between judgments
and intentions, but this same correspondence effect was not observed for past or objective
behaviors. These results indicate unique patterns with regard to the universality of moral
judgments: our moral intentions may be partly fabrications based on the judgments themselves,
thus leading to higher correlations when we are given specific time and context details. However,
these details do not actually increase our moral actions when they are objectively measured or
when we are asked about the past.
These results may also have important implications for the attitudes literature. Given how
frequently we use intentions as a proxy for behavior, these results alone would lead us to
conclude that measurement correspondence is key for increasing predictive power for both
attitude-behavior and moral judgment-behavior relationships. However, since these higher
correlations were not reflected in objective behavior, the increase in predictive power observed
due to correspondence is likely artificial.
Theoretical Implications
The results of this meta-analysis provide several insights for moral psychology.
Mind the moral gap. First and foremost, moral judgments do not necessarily translate
into moral behavior. Moral hypocrisy – operationalized here as an intrapersonal gap between
moral judgments and moral behavior (Graham et al., 2015) – is real and pervasive. What’s more,
as the behavioral intention studies show, our intended behaviors may relate more to our
judgments and self-concepts than our actual subsequent behaviors – that is, we all intend to act in
accordance with our moral values, but we often don’t follow through on this behaviorally. For
instance, when the moral judgment and behavior are on the same topic (e.g., recycling) there is a
higher correlation, in part likely due to people not wanting to be seen as moral hypocrites. This
pattern is not seen for attitudes in general.
Judgment target matters. The literature on moral hypocrisy has tended to operationalize
hypocrisy in interpersonal ways, as moral duplicity (claiming moral motives to others falsely) or
moral double standards (judging others more harshly than oneself). But the gap displayed in this
meta-analysis reflects a primarily intrapersonal form of moral hypocrisy, moral weakness
(behaviors not living up to moral values and judgments; Graham et al., 2015). It is also important
to note that while a lower judgment-behavior correlation intuitively may result from individuals
saying that something is morally right and failing to do it, it is also possible that a low correlation
could reflect individuals doing an ostensibly morally good action regardless of their personal
moral judgments. Future meta-analyses should assess behavioral means to address this question.
Including behavior could help resolve theoretical disputes about moral judgment.
While moral behavior studies tend to be simple demonstrations of effects (e.g., shifting focus
from money to time can reduce cheating behavior; Gino & Mogilner, 2014), studies of moral
judgment have resulted in detailed and elaborated theories, such as Moral Foundations Theory
(Graham et al., 2013), the Model of Moral Motives (Janoff-Bulman & Carnes, 2013), Dyadic
Morality Theory (Schein & Gray, 2017), and Relationship Regulation Theory (Rai & Fiske,
2011).
As with most moral judgment research, work testing these theories has seldom included
moral behavior along with moral judgment. Including moral behaviors could help provide a
common phenomenon and domain in which to test and integrate these various theories of moral
judgment. For instance, Moral Foundations Theory (MFT) is a decade-old, widely-used approach
with dozens of articles and thousands of citations; however, we could find only one study using
MFT measures in conjunction with moral behavior measures. And even when moral judgment
theories do include moral behavior, it is typically measured with behavioroid measures like
behavioral intentions or hypothetical willingness to pay (e.g., Jansson, Marell, & Nordlund,
2010; Skitka, Morgan, & Wisneski, 2015).
Recent methodological advancements in moral psychology, such as the use of ecological
momentary assessment to capture everyday moral judgments (Hofmann et al., 2014) and the
electronically-activated recorder to capture everyday moral behaviors (Bollich, Doris, Vazire,
Raison, Jackson, & Mehl, 2016), could be particularly useful for testing and integrating theories
if they gauged both moral judgments and behaviors in the same sample. Given the consistency of
moral judgments over time, moral behavior is likely to be a primary dependent variable for
testing and reconciling different theories of moral judgment, and for specifying where exactly
they make conflicting predictions.
Practical Implications for Future Research
Measure moral behavior directly; when direct observation is not available, use
retrospective reports. Our results echo calls from narrative summaries of the field for more
studies that assess investigator-observed behavior (e.g., Baumeister et al., 2007). The shockingly
small number of studies including investigator-observed behaviors (k = 40, less than 10% of our
total sample) in the moral domain compared to other types of behavioral measures (k = 361)
highlights the need for future research assessing observed behavior in order to draw more
accurate conclusions. Hearteningly, though, we found that while behavioral intentions act quite
differently than actual behaviors, retrospective reports of past behaviors show similar patterns.
This makes for a practical take-away for researchers in this area: if direct observation of behavior
is not possible or feasible, use retrospective assessments of past behaviors.
Order could matter, if we ever reported it. We also found that over half of the studies
in our analysis did not report the order of their measurement. This is problematic, as in the moral
domain order effects seem particularly likely, given the self-presentation concerns associated
with not wanting to be seen as a moral hypocrite (e.g., being more likely to avoid an action if you
just said it’s immoral, or being less likely to harshly judge an action one has just done). All
studies including moral judgment and moral behavior should report the order in which they were
assessed.
Use self-targeted judgments, or possibly value scales. Moral judgments focused on
one’s own morality were more highly correlated with behavior than judgments about other
people or judgments about actions (with no actor specified). Therefore, we recommend that,
whenever possible, researchers ask moral judgment questions in reference to the self rather than
to others when they are interested in predicting personal behavior (e.g., what is the morally right
thing for you to do, not what is the morally right thing for Matthew to do).
Compared to other targets (judgments about self, judgments about other people,
judgments about actions), value scales – moral judgments in the abstract (e.g., Moral
Foundations Questionnaire, Identification with All Humanity Scale) – were highly variable in
their relations with moral behavior. The source of this variability remains unidentified so far, and
future work identifying this source would be of great value to understanding our moral nature.
Why do some abstract moral judgments predict concrete behaviors, and some do not? Do some
types of question formats (e.g., agreement with abstract value statements, rank-ordering values),
or some types of moral content (e.g., empathy, loyalty) in value scales have higher predictive
validity, and if so, why?
Conclusion
Despite flourishing research areas exploring moral judgments and moral behaviors, cross-
pollination between these two subfields is relatively infrequent. In fact, of the almost 2,000
studies we assessed in detail, only 13% of them contained measures of both moral judgment and
moral behavior in the same study. Summarizing this overlap area, we find meta-analytic
evidence that moral judgments are moderately correlated with moral behaviors. Notably,
although many studies used intentions as a behavioral proxy, we found that correlations between
moral judgments and intentions were affected by unique moderators (e.g., correspondence
effects) compared with self-reported past behaviors or objectively-observed behaviors. Overall,
results of the meta-analysis indicate that our moral judgments and values have a strong bearing
on how we intend to behave, but are less predictive of how we actually behave when directly
observed.
References
* = included in meta-analysis
*Abrahamse, W., & Steg, L. (2009). How do socio-demographic and psychological factors relate
to households’ direct and indirect energy use and savings? Journal of Economic
Psychology, 30(5), 711-720.
*Abrahamse, W., Steg, L., Gifford, R., & Vlek, C. (2009). Factors influencing car use for
commuting and the intention to reduce it: A question of self-interest or morality?
Transportation Research Part F: Traffic Psychology and Behaviour, 12(4), 317-324.
*Aguilar-Luzón, M. D. C., García-Martínez, J. M. Á., Calvo-Salguero, A., & Salinas, J. M.
(2012). Comparative study between the theory of planned behavior and the value–belief–
norm model regarding the environment, on Spanish housewives' recycling
behavior. Journal of Applied Social Psychology, 42(11), 2797-2833.
Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision
Processes, 50(2), 179-211.
*Akeley Spear, J., & Miller, A. N. (2012). The effects of instructor fear appeals and moral
appeals on cheating-related attitudes and behavior of university students. Ethics &
Behavior, 22(3), 196-207.
*Alberici, A. I., & Milesi, P. (2012). The influence of the Internet on the psychosocial predictors
of collective action. Journal of Community & Applied Social Psychology, 23(5), 373-388.
*Alberici, A. I., & Milesi, P. (2016). Online discussion, politicized identity, and collective
action. Group Processes & Intergroup Relations, 19(1), 43-59.
*Alhidari, I. (2014). Investigating individuals’ monetary donation behaviour in Saudi
Arabia. (Unpublished doctoral dissertation). Cardiff University, United Kingdom.
Anderl, C., Hahn, T., Notebaert, K., Klotz, C., Rutter, B., Windmann, S. (2015). Cooperative
preferences fluctuate across the menstrual cycle. Judgment and Decision Making, 10(5),
400-406.
*Andersson, M., & von Borgstede, C. (2010). Differentiation of determinants of low-cost and
high-cost recycling. Journal of Environmental Psychology, 30(4), 402-408.
*Aquino, K., McFerran, B., & Laven, M. (2011). Moral identity and the experience of moral
elevation in response to acts of uncommon goodness. Journal of Personality and Social
Psychology, 100(4), 703.
Aquino, K., & Reed II, A., (2002). The self-importance of moral identity. Journal of Personality
and Social Psychology, 83(6), 1423-1440.
Armitage, C. J., & Conner, M. (2001). Efficacy of the theory of planned behavior: A meta-
analytic review. British Journal of Social Psychology, 40, 471-499.
*Arpan, L. M., Opel, A. R., & Lu, J. (2013). Motivating the skeptical and unconcerned:
Considering values, worldviews, and norms when planning messages encouraging energy
conservation and efficiency behaviors. Applied Environmental Education &
Communication, 12(3), 207-219.
*Arvola, A., Vassallo, M., Dean, M., Lampila, P., Saba, A., Lähteenmäki, L., & Shepherd, R.
(2008). Predicting intentions to purchase organic food: The role of affective and moral
attitudes in the theory of planned behaviour. Appetite, 50(2), 443-454.
*Atanasov, P., & Dana, J. (2011). Leveling the playing field: Dishonesty in the face of
threat. Journal of Economic Psychology, 32(5), 809-817.
Atran, S., & Axelrod, R. (2008). Reframing sacred values. Negotiation Journal, 24(3), 221-246.
*Axelrod, L. J., & Lehman, D. R. (1993). Responding to environmental concerns: What
factors guide individual action? Journal of Environmental Psychology, 13(2), 149-159.
*Bamberg, S., Hunecke, M., & Blöbaum, A. (2007). Social context, personal norms and the use
of public transportation: Two field studies. Journal of Environmental Psychology, 27(3),
190-203.
Bandura, A., Barbaranelli, C., Caprara, G. V., & Pastorelli, C. (1996). Mechanisms of moral
disengagement in the exercise of moral agency. Journal of Personality and Social
Psychology, 71, 364–374.
Bandura, A. (1999). Moral disengagement in the perpetuation of inhumanities. Personality and
Social Psychology Review, 3(3), 193-209.
Barkan, R., Ayal, S., Gino, F., & Ariely, D. (2012). The pot calling the kettle black: Distancing
response to ethical dissonance. Journal of Experimental Psychology, 141, 757-773.
Bartels, D. M., Bauman, C. W., Cushman, F. A., Pizarro, D. A., & McGraw, A. P. (2016), Moral
Judgment and Decision Making. In G. Keren & G. Wu (Eds.) The Wiley Blackwell
Handbook of Judgment and Decision Making (pp. 478-515). Chichester, UK: Wiley
*Barnett, T., & Vaicys, C. (2000). The moderating effect of individuals' perceptions of ethical
work climate on ethical judgments and behavioral intentions. Journal of Business
Ethics, 27(4), 351-362.
*Barr, S., Gilg, A. W., & Ford, N. (2005). The household energy gap: examining the divide
between habitual-and purchase-related conservation behaviours. Energy Policy, 33(11),
1425-1444.
*Bass, K., Barnett, T., & Brown, G. (1999). Individual difference variables, ethical judgments,
and ethical behavioral intentions. Business Ethics Quarterly, 9(02), 183-205.
Batson, C. D. (2008). Moral masquerades: Experimental exploration of the nature of moral
motivation. Phenomenology and the Cognitive Sciences, 7(1), 51-66
Batson, C. D. (2011). What’s wrong with morality? Emotion Review, 3, 230–236.
*Batson, C. D., Kobrynowicz, D., Dinnerstein, J. L., Kampf, H. C., & Wilson, A. D. (1997). In a
very different voice: unmasking moral hypocrisy. Journal of Personality and Social
Psychology, 72(6), 1335.
*Bauman, C. W., & Skitka, L. J. (2009). Moral disagreement and procedural justice: Moral
mandates as constraints to voice effects. Australian Journal of Psychology, 61(1), 40-49.
Baumeister, R. F., Vohs, K. D., & Funder, D. C. (2007). Psychology as the science of self-
reports and finger movements: Whatever happened to actual behavior? Perspectives on
Psychological Science, 2(4), 396-403.
*Beck, L., & Ajzen, I. (1991). Predicting dishonest actions using the theory of planned behavior.
Journal of Research in Personality, 25(3), 285-301.
*Bélanger, D., Godin, G., Alary, M., Noél, L., Côté, N., & Claessens, C. (2002). Prediction of
needle sharing among injection drug users. Journal of Applied Social Psychology, 32(7),
1361-1378.
*Beldad, A., Gosselt, J., Hegner, S., & Leushuis, R. (2014). Generous but not morally obliged?
Determinants of Dutch and American donors’ repeat donation intention
(REPDON). Voluntas: International Journal of Voluntary and Nonprofit
Organizations, 26(2), 442-465.
*Black, J. S., Stern, P. C., & Elworth, J. T. (1985). Personal and contextual influences on
household energy adaptations. Journal of Applied Psychology, 70(1), 3.
*Blanthorne, C., & Kaplan, S. (2008). An egocentric model of the relations among the
opportunity to underreport, social norms, ethical beliefs, and underreporting
behavior. Accounting, Organizations and Society, 33(7), 684-703.
Blasi, A. (1980). Bridging moral cognition and moral action: A critical review of the
literature. Psychological Bulletin, 88(1), 1-45.
Blasi, A. (1983). Moral cognition and moral action: A theoretical perspective. Developmental
Review, 3(2), 178-210.
Blasi, A. (1999). Emotions and moral motivation. Journal for the Theory of Social
Behaviour, 29(1), 1-19.
*Blok, V., Wesselink, R., Studynka, O., & Kemp, R. (2014). Encouraging sustainability in the
workplace: A survey on the pro-environmental behaviour of university
employees. Journal of Cleaner Production, 105, 55-67.
*Bolin, A. U. (2004). Self-control, perceived opportunity, and attitudes as predictors of academic
dishonesty. The Journal of Psychology, 138(2), 101-114.
Borenstein, M., Hedges, L. V., Higgins, J. P. T., & Rothstein, H. R. (2009). Introduction to meta-
analysis. Chichester, UK: Wiley
Borenstein, M., Hedges, L., Higgins, J., & Rothstein, H. (2014). Comprehensive meta-analysis
version 3 [computer program]. Englewood, NJ: Biostat.
*Botetzagias, I., Malesios, C., & Poulou, D. (2014). Electricity curtailment behaviors in Greek
households: Different behaviors, different predictors. Energy Policy, 69, 415-424.
*Boudreau, F., & Godin, G. (2014). Participation in regular leisure-time physical activity among
individuals with type 2 diabetes not meeting Canadian guidelines: The influence of
intention, perceived behavioral control, and moral norm. International Journal of
Behavioral Medicine, 21(6), 918-926.
*Bozionelos, G., & Bennett, P. (1999). The theory of planned behaviour as predictor of exercise
the moderating influence of beliefs and personality variables. Journal of Health
Psychology, 4(4), 517-529.
*Brosch, T., Patel, M. K., & Sander, D. (2014). Affective influences on energy-related decisions
and behaviors. Frontiers in Energy Research, 2, 1-12.
*Brummel, B. J., & Parker, K. N. (2015). Obligation and entitlement in society and the
workplace. Applied Psychology, 64(1), 127-160.
Campbell, W. K., & Sedikides, C. (1999). Self-threat magnifies the self-serving bias: A meta-
analytic integration. Review of General Psychology, 3(1), 23–43.
*Canete Benitez, S. N. (2014). University researchers and public communication:
What influences their intention to engage with non-experts? (Unpublished doctoral
dissertation). North Carolina State University, Raleigh, NC.
*Canova, L., Bobbio, A., & Manganelli, A. M. (2008). Analysis of psychosocial models of
energy saving intentions formation and the importance of normative influence. Paper
presented at International Association for Research in Economic Psychology Conference
at LUISS, Rome.
Canova, L., & Manganelli Rattazzi, A. M. (2015). The role of normative influence in the Theory
of Planned Behaviour: The case of organic food. Unpublished manuscript.
*Casper, J. M., & Pfahl, M. E. (2012). Environmental behavior frameworks of sport and
recreation undergraduate students. Sport Management Education Journal, 6, 8-20.
Chan, R. Y. K., Wong, Y. H., & Leung, T. K. P. (2008). Applying ethical concepts to the study
of “green” consumer behavior: An analysis of Chinese consumers’ intentions to bring
their own shopping bags. Journal of Business Ethics, 79, 469-481.
*Chan, L., & Bishop, B. (2013). A moral basis for recycling: Extending the theory of planned
behaviour. Journal of Environmental Psychology, 36, 96-102.
*Chan, R. Y., Wong, Y. H., & Leung, T. K. (2008). Applying ethical concepts to the study of
“green” consumer behavior: An analysis of Chinese consumers’ intentions to bring their
own shopping bags. Journal of Business Ethics, 79(4), 469-481.
*Chen, M. F. (2015). An examination of the value-belief-norm theory model in predicting pro-
environmental behaviour in Taiwan. Asian Journal of Social Psychology, 18(2), 145-151.
*Cheung, C. K., & Chan, C. M. (2000). Social-cognitive factors of donating money to charity,
with special attention to an international relief organization. Evaluation and Program
Planning, 23(2), 241-253.
*Chorlton, K., Conner, M., & Jamson, S. (2012). Identifying the psychological determinants of
risky riding: An application of an extended theory of planned behaviour. Accident
Analysis & Prevention, 49, 142-153.
*Christian, J. S., & Ellis, A. P. (2014). The crucial role of turnover intentions in transforming
moral disengagement into deviant behavior at work. Journal of Business Ethics, 119(2),
193-208.
*Chuang, T. M. (2013). Know it morally vs. do it morally: The ethical gap of college
students in informational norms. (Conference Paper, ICPE 2013).
*Chugh, D., Kern, M. C., Zhu, Z., & Lee, S. (2014). Withstanding moral disengagement:
Attachment security as an ethical intervention. Journal of Experimental Social
Psychology, 51, 88-93.
Cialdini, R. B., & Trost, M. R. (1998). Social influence: Social norms, conformity and
compliance. In D. T. Gilbert, S. T. Fiske, & G. Lindzey (Eds.), The Handbook of Social
Psychology, Volume 2, 4th Edition (pp. 151-192). New York, NY: McGraw-Hill.
*Cohen, T. R., Panter, A. T., Turan, N., Morse, L., & Kim, Y. (2014). Moral character in the
workplace. Journal of Personality and Social Psychology, 107(5), 943-963.
Conner, M., & Armitage, C. (1998). Extending the theory of planned behavior: A review and
avenues for further research. Journal of Applied Social Psychology, 28, 1429–1464.
*Conner, M., & Flesch, D. (2001). Having casual sex: Additive and interactive effects of
alcohol and condom availability on the determinants of intentions. Journal of Applied
Social Psychology, 31(1), 89-112.
*Conner, M., & McMillan, B. (1999). Interaction effects in the theory of planned behaviour:
Studying cannabis use. British Journal of Social Psychology, 38, 195-222.
*Conner, M., Graham, S., & Moore, B. (1999). Alcohol and intentions to use condoms:
Applying the theory of planned behaviour. Psychology and Health, 14(5), 795-812.
*Conner, M., Lawton, R., Parker, D., Chorlton, K., Manstead, A. S., & Stradling, S. (2007).
Application of the theory of planned behaviour to the prediction of objectively assessed
breaking of posted speed limits. British Journal of Psychology, 98(3), 429-453.
*Conner, M., Smith, N., & McMillan, B. (2003). Examining normative pressure in the theory of
planned behaviour: Impact of gender and passengers on intentions to break the speed
limit. Current Psychology, 22(3), 252-263.
Cooper, H. (2010). Research synthesis and meta-analysis: A step-by-step approach (Vol. 2).
Thousand Oaks, CA: Sage publications.
Cooper, H., Robinson, J. C., & Patall, E. A. (2006). Does homework improve academic
achievement? A synthesis of research, 1987–2003. Review of Educational Research, 76(1), 1-62.
Cooper, H. M. (1998). Synthesizing research: A guide for literature reviews (3rd edition).
Thousand Oaks, CA: Sage.
*Corey, S. M. (1937). Professed attitudes and actual behavior. Journal of Educational
Psychology, 28(4), 271.
*Coyle, J. R., Gould, S. J., Gupta, P., & Gupta, R. (2009). “To buy or to pirate”: The matrix
of music consumers' acquisition-mode decision-making. Journal of Business
Research, 62(10), 1031-1037.
*Cronan, T. P., & Al-Rafee, S. (2008). Factors that influence the intention to pirate software and
media. Journal of Business Ethics, 78(4), 527-545.
*Culiberg, B., & Bajde, D. (2013). Consumer recycling: An ethical decision- making
process. Journal of Consumer Behaviour, 12(6), 449-459.
*Culiberg, B. & Bajde, D. (2014). Do you need a receipt? Exploring consumer participation in
consumption tax evasion as an ethical dilemma. Journal of Business Ethics. 124(2), 271-
282.
*d’Astous, A., Colbert, F., & Montpetit, D. (2005). Music piracy on the web–how effective are
anti-piracy arguments? Evidence from the theory of planned behaviour. Journal of
Consumer Policy, 28(3), 289-310.
*Davies, J., Foxall, G. R., & Pallister, J. (2002). Beyond the intention–behaviour mythology an
integrated model of recycling. Marketing Theory, 2(1), 29-113.
*De Groot, J. I., & Steg, L. (2009). Morality and prosocial behavior: The role of awareness,
responsibility, and norms in the norm activation model. The Journal of Social
Psychology, 149(4), 425-449.
*de Leeuw, A., Valois, P., Morin, A. J., & Schmidt, P. (2014). Gender Differences in
Psychosocial Determinants of University Students’ Intentions to Buy Fair Trade
Products. Journal of Consumer Policy, 37(4), 485-505.
*De Pelsmacker, P., & Janssens, W. (2007). The effect of norms, attitudes and habits on
speeding behavior: Scale development and model building and estimation. Accident
Analysis & Prevention, 39(1), 6-15.
*Dean, M., Raats, M. M., & Shepherd, R. (2008). Moral concerns and consumer choice
of fresh and processed organic Foods1. Journal of Applied Social Psychology, 38(8),
2088-2107.
*Dean, M., Raats, M. M., & Shepherd, R. (2012). The role of self- identity, past behavior, and
their interaction in predicting intention to purchase fresh and processed organic
food. Journal of Applied Social Psychology, 42(3), 669-688.
Deci, E. L., & Ryan, R. M. (2000). The “what” and “why” of goal pursuits: Human needs and
self-determination of behavior. Psychological Inquiry, 11, 227–268.
Dehghani, M., Iliev, R., Sachdeva, S., Atran, S., Ginges, J., & Medin, D. (2009). Emerging
sacred values: The Iranian nuclear program. Judgment and Decision Making, 4, 990-993.
*Dobbins, E. (2013). Organ donation decision making among non-catholic Christians: An
expansion of the theory of planned behavior. (Unpublished doctoral dissertation),
Appalachian State University, North Carolina.
*Doherty, K. L. (2014). From alarm to action: closing the gap between belief and behavior in
response to climate change. (Doctoral dissertation). Antioch New England Graduate
School.
*Donald, I. J., Cooper, S. R., & Conchie, S. M. (2014). An extended theory of planned behaviour
model of the psychological factors affecting commuters' transport mode use. Journal of
Environmental Psychology, 40, 39-48.
Donaldson, S. I., & Grant-Vallone, E. J. (2002). Understanding self-report bias in organizational
behavior research. Journal of Business and Psychology, 17(2), 245-260.
*Dowd, K., & Burke, K. J. (2013). The influence of ethical values and food choice motivations
on intentions to purchase sustainably sourced foods. Appetite, 69, 137-144.
*Dunlap, K. (2014). One and done: predicting paper towel use using a reasoned action
approach. (Unpublished doctoral dissertation), Ball State University, Muncie, Indiana.
Eisenberg, N., Zhou, Q., & Koller, S. (2001). Brazilian Adolescents' Prosocial Moral Judgment
and Behavior: Relations to Sympathy, Perspective Taking, Gender- Role Orientation,
and Demographic Characteristics. Child Development, 72(2), 518-534.
*Elliott, M. A. (2012). Testing the capacity within an extended theory of planned behaviour to
reduce the commission of driving violations. Transportmetrica, 8(5), 321-343.
*Elliott, M. A., & Thomson, J. A. (2010). The social cognitive determinants of offending
drivers’ speeding behaviour. Accident Analysis & Prevention, 42(6), 1595-1605.
*Eriksson, L., Garvill, J., & Nordlund, A. M. (2008). Interrupting habitual car use: The
importance of car habit strength and moral motivation for personal car use
reduction. Transportation Research Part F: Traffic Psychology and Behaviour, 11(1),
10-23.
Erkut, S., Jaquette, D. S., & Staub, E. (1981). Moral judgment-situation interaction as a basis for
predicting prosocial behavior. Journal of Personality, 49(1), 1-14.
*Farnese, M. L., Tramontano, C., Fida, R., & Paciello, M. (2011). Cheating behaviors in
academic context: Does academic moral disengagement matter?. Procedia-Social and
Behavioral Sciences, 29, 356-365.
Feather, N. T. (1995). National identification and ingroup bias in majority and minority groups:
A field study. Australian Journal of Psychology, 47(3), 129-136.
*Fernandes, M. F., & Randall, D. M. (1992). The nature of social desirability response
effects in ethics research. Business Ethics Quarterly, 2(02), 183-205.
*Fida, R., Paciello, M., Tramontano, C., Fontaine, R. G., Barbaranelli, C., & Farnese, M. L.
(2014). An integrative approach to understanding counterproductive work behavior: The
roles of stressors, negative emotions, and moral disengagement. Journal of Business
Ethics, 130(1) 1-14.
Ajzen, I., & Fishbein, M. (1977). Attitude-behavior relations: A theoretical analysis and review
of empirical research. Psychological Bulletin, 84(5), 888-918.
Gantman, A., Adriaanse, M. A., Gollwitzer, P. M., & Oettingen, G. (in press). Why did I do that?
Explaining Actions Activated Outside of Awareness. Psychonomic Bulletin & Review.
*Gbadamosi, G. (2004). Academic ethics: What has morality, culture and administration got to
do with its measurement? Management Decision, 42(9), 1145-1161.
Gino, F., Ayal, S., & Ariely, D. (2013). Self-serving altruism? The lure of unethical actions that
benefit others. Journal of Economic Behavior & Organization, 93, 285-292
Glasman, L. R., & Albarracín, D. (2006). Forming attitudes that predict future behavior: A meta-
analysis of the attitude-behavior relation. Psychological Bulletin, 132(5), 778-822.
*Godin, G., Conner, M., & Sheeran, P. (2005). Bridging the intention–behaviour gap: The role
of moral norm. British Journal of Social Psychology, 44(4), 497-512.
*Godin, G., Gagnon, H., Lambert, L. D., & Conner, M. (2005). Determinants of condom use
among a random sample of single heterosexual adults. British Journal of Health
Psychology, 10(1), 85-100.
*Godin, G., Sheeran, P., Conner, M., Germain, M., Blondeau, D., Gagné, C., Beaulieu, D., &
Naccache, H. (2005). Factors explaining the intention to give blood among the general
population. Vox Sanguinis, 89(3), 140-149.
*Godin, G., Valois, P., Jobin, J., & Ross, A. (1991). Prediction of intention to exercise of
individuals who have suffered from coronary heart disease. Journal of Clinical
Psychology, 47(6), 762-772.
Gorsuch, R. L., & Ortberg, J. (1983). Moral obligation and attitudes: Their relation to behavioral
intentions. Journal of Personality and Social Psychology, 44(5), 1025-1028.
Graham, J., Meindl, P., & Beall, E. (2012). Integrating the streams of morality research: The case
of political ideology. Current Directions in Psychological Science, 21, 373-377.
Graham, J., Nosek, B. A., Haidt, J., Iyer, R., Koleva, S., & Ditto, P. H. (2011). Mapping the
moral domain. Journal of Personality and Social Psychology. 101, 366-385.
Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI
investigation of emotional engagement in moral judgment. Science, 293(5537), 2105-
2108.
*Grimes, P. W. (2004). Dishonesty in academics and business: A cross-cultural evaluation of
student attitudes. Journal of Business Ethics, 49(3), 273-290.
*Guido, G., Prete, M. I., Peluso, A. M., Maloumby-Baka, R. C., & Buffa, C. (2010). The role of
ethics and product personality in the intention to purchase organic food products: a
structural equation modeling approach. International Review of Economics, 57(1), 79-
102.
Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral
judgment. Psychological Review, 108(4), 814-834.
Haidt, J. (2007). The new synthesis in moral psychology. Science, 316, 998-1002.
Haidt, J. (2008). Morality. Perspectives on Psychological Science, 3(1), 65-72.
*Harding, T. S., Carpenter, D. D., & Finelli, C. J. (2012). An exploratory investigation of the
ethical behavior of engineering undergraduates. Journal of Engineering
Education, 101(2), 346-374.
*Harland, P., Staats, H., & Wilke, H. A. (1999). Explaining proenvironmental intention and
behavior by personal norms and the theory of planned behavior. Journal of Applied
Social Psychology, 29(12), 2505-2528.
Hart, D. (2005). Adding identity to the moral domain. Human Development, 48(4), 257-261.
*Heath, Y., & Gifford, R. (2002). Extending the theory of planned behavior: predicting the use
of public transportation. Journal of Applied Social Psychology, 32(10), 2154-2189.
*Heyman, G. D., Hsu, A. S., Fu, G., & Lee, K. (2013). Instrumental lying by parents in the US
and China. International Journal of Psychology, 48(6), 1176-1184.
Higgins, J. P. T., & Thompson S. G. (2002). Quantifying heterogeneity in a meta-analysis.
Statistics in Medicine, 21, 1539-1558.
*Hofenk, D., van Birgelen, M. J. H., Bloemer, J. M. M., & Semeijn, J. (2010). Integrating the
theory of planned behavior and the norm-activation theory to explain pro-environmental
buying behavior. (Working paper series in management MAR10-05). Nijmegen Radboud
Universiteit, Netherlands.
*Høie, M., Moan, I. S., & Rise, J. (2010). An extended version of the theory of planned
behaviour: Prediction of intentions to quit smoking using past behaviour as
moderator. Addiction Research & Theory, 18(5), 572-585.
*Hornsey, M. J., Smith, J. R., & Begg, D. (2007). Effects of norms among those with moral
conviction: Counter-conformity emerges on intentions but not behaviors. Social
Influence, 2(4), 244-268.
*Hsiao, C. H. (2015). Impact of ethical and affective variables on cheating: comparison of
undergraduate students with and without jobs. Higher Education, 69(1), 55-77.
*Hübner, G., & Kaiser, F. G. (2006). The moderating role of the attitude-subjective norms
conflict on the link between moral norms and intention. European Psychologist, 11(2),
99-109.
Huedo-Medina, T. B., Sánchez-Meca, J., Marín-Martínez, F., & Botella, J. (2006). Assessing
heterogeneity in meta-analysis: Q statistic or I² index?. Psychological Methods, 11(2),
193.
*Hunecke, M., Blöbaum, A., Matthies, E., & Höger, R. (2001). Responsibility and environment:
Ecological norm orientation and external factors in the domain of travel mode choice
behavior. Environment and Behavior, 33(6), 830-852.
*Hyde, M. K., Knowles, S. R., & White, K. M. (2013). Donating blood and organs: using an
extended theory of planned behavior perspective to identify similarities and differences in
individual motivations to donate. Health Education Research, 28(6), 1092-1104.
*Hyde, M. K., & White, K. M. (2009). To be a donor or not to be? Applying an extended theory
of planned behavior to predict posthumous organ donation intentions. Journal of Applied
Social Psychology, 39(4), 880-900.
*Hyde, M. K., & White, K. M. (2013). Testing an extended theory of planned behavior to predict
young people's intentions to join a bone marrow donor registry. Journal of Applied Social
Psychology, 43(12), 2462-2467.
*Hystad, S. W., Mearns, K. J., & Eid, J. (2014). Moral disengagement as a mechanism between
perceptions of organisational injustice and deviant work behaviours. Safety Science, 68,
138-145.
*Ibtissem, M. H. (2010). Application of value beliefs norms theory to the energy conservation
behaviour. Journal of Sustainable Development, 3(2), 129.
*Izadi, N., & Hayati, D. (2014). Appraising some Iranian maize growers' ecological behavior:
Application of path analysis. Journal of Agricultural Science and Technology, 16(5),
993-1003.
*Jackson, C., Smith, A., & Conner, M. (2003). Applying an extended version of the theory of
planned behaviour to physical activity. Journal of Sports Sciences, 21(2), 119-133.
*Jansson, J. (2011). Consumer eco-innovation adoption: assessing attitudinal factors and
perceived product characteristics. Business Strategy and the Environment, 20(3), 192-
210.
*Jansson, J., Marell, A., & Nordlund, A. (2011). Exploring consumer adoption of a high
involvement eco-innovation using value-belief-norm theory. Journal of Consumer
Behaviour, 10(1), 51-60.
*Kaiser, F. G. (2006). A moral extension of the theory of planned behavior: Norms and
anticipated feelings of regret in conservationism. Personality and Individual
Differences, 41(1), 71-81.
*Kaiser, F. G., & Scheuthle, H. (2003). Two challenges to a moral extension of the theory of
planned behavior: Moral norms and just world beliefs in conservationism. Personality
and Individual Differences, 35(5), 1033-1048.
*Kaiser, F. G., Hübner, G., & Bogner, F. X. (2005). Contrasting the theory of planned behavior
with the value-belief-norm model in explaining conservation behavior. Journal of
Applied Social Psychology, 35(10), 2150-2170.
*Kallgren, C. A., Reno, R. R., & Cialdini, R. B. (2000). A focus theory of normative conduct:
When norms do and do not affect behavior. Personality and Social Psychology
Bulletin, 26(8), 1002-1012.
*Kashif, M., & De Run, E. C. (2015). Money donations intentions among Muslim donors: An
extended theory of planned behavior model. International Journal of Nonprofit and
Voluntary Sector Marketing, 20(1), 84-96.
*Kim, Y. S., Kang, S. W., & Ahn, J. A. (2013). Moral sensitivity relating to the application of
the code of ethics. Nursing Ethics, 20(4), 470-478.
*Klöckner, C. A., & Matthies, E. (2009). Structural modeling of car use on the way to the
university in different settings: Interplay of norms, habits, situational restraints, and
perceived behavioral control. Journal of Applied Social Psychology, 39(8), 1807-1834.
*Klöckner, C. A., & Ohms, S. (2009). The importance of personal norms for purchasing organic
milk. British Food Journal, 111(11), 1173-1187.
*Klöckner, C. A., & Oppedal, I. O. (2011). General vs. domain specific recycling behaviour—
Applying a multilevel comprehensive action determination model to recycling in
Norwegian student homes. Resources, Conservation and Recycling, 55(4), 463-471.
*Knowles, S. R., Hyde, M. K., & White, K. M. (2012). Predictors of young People's
charitable intentions to donate money: an extended theory of planned behavior
perspective. Journal of Applied Social Psychology, 42(9), 2096-2110.
*Koklic, M. K., Kukar-Kinney, M., & Vida, I. (2014). Three-level mechanism of
consumer digital piracy: Development and cross-cultural validation. Journal of Business
Ethics, 1-13.
*Kovač, V., & Rise, J. (2011). Predicting the intention to quit smoking in a Norwegian sample:
An extended theory of planned behaviour in light of construal level theory. Nordic
Psychology, 63(3), 68-82.
Kraus, S. J. (1995). Attitudes and the prediction of behavior: A meta-analysis of the empirical
literature. Personality and Social Psychology Bulletin, 21(1), 58-75.
*Krömker, D., & Matthies, E. (2014). Differences between occasional organic and regular
organic food consumers in Germany. Food and Nutrition Sciences, 5(19), 1914.
*Krueger, L. (2014). Academic dishonesty among nursing students. Journal of
Nursing Education, 53(2), 77.
Laham, S. M. (2009). Expanding the moral circle: Inclusion and exclusion mindsets and the
circle of moral regard. Journal of Experimental Social Psychology, 45(1), 250-253.
*LaRose, R., & Kim, J. (2006). Share, steal, or buy? A social cognitive perspective of music
downloading. Cyber Psychology & Behavior, 10(2), 267-277.
*LaRose, R., Lai, Y. J., Lange, R., Love, B., & Wu, Y. (2005). Sharing or piracy? An
exploration of downloading behavior. Journal of Computer-Mediated
Communication, 11(1), 1-21.
*Lechner, L., De Vries, H., & Offermans, N. (1997). Participation in a breast cancer screening
program: Influence of past behavior and determinants on future screening
participation. Preventive medicine, 26(4), 473-482.
*Légaré, F., Godin, G., Dodin, S., Turcot, L., & Laperrière, L. (2003). Adherence to hormone
replacement therapy: A longitudinal study using the theory of planned
behaviour. Psychology and Health, 18(3), 351-371.
*Lemmens, K. P. H., Abraham, C., Hoekstra, T., Ruiter, R. A. C., De Kort, W. L. A. M., Brug,
J., & Schaalma, H. P. (2005). Why don’t young people volunteer to give blood? An
investigation of the correlates of donation intentions among young
nondonors. Transfusion, 45(6), 945-955.
*Lemmens, K. P. H., Abraham, C., Ruiter, R. A. C., Veldhuizen, I. J. T., Bos, A. E. R., &
Schaalma, H. P. (2008). Identifying blood donors willing to help with recruitment. Vox
Sanguinis, 95(3), 211-217.
*Lemmens, K. P. H., Abraham, C., Ruiter, R. A. C., Veldhuizen, I. J. T., Dehing, C. J. G., Bos,
A. E. R., & Schaalma, H. P. (2009). Modelling antecedents of blood donation
motivation among non-donors of varying age and education. British Journal of
Psychology, 100(1), 71-90.
*Liu, N. T., & Ding, C. G. (2012). General ethical judgments, perceived organizational
support, interactional justice, and workplace deviance. The International Journal of
Human Resource Management, 23(13), 2712-2735.
*Liu, X. (2014). Use Tax Compliance: The role of norms, audit probability, and
sanction severity. Academy of Accounting and Financial Studies Journal, 18(1), 65.
*Lo, S. H., Peters, G. J. Y., van Breukelen, G. J., & Kok, G. (2014). Only reasoned action?
An interorganizational study of energy-saving behaviors in office buildings. Energy
Efficiency, 7(5), 761-775.
*López, A. G., & Cuervo-Arango, M. A. (2008). Relationship among values, beliefs, norms and
ecological behaviour. Psicothema, 20(4), 623-629.
Lovett, B. J., Jordan, A. H., & Wiltermuth, S. S. W. (2012). Individual differences in the
moralization of everyday life. Ethics and Behavior, 22(4), 248-257.
Luco, A. (2014). The definition of morality: threading the needle. Social Theory and Practice,
361-387.
*Lysonski, S., & Durvasula, S. (2008). Digital piracy of MP3s: consumer and ethical
predispositions. Journal of Consumer Marketing, 25(3), 167-178.
*Mackay, S. (2014). The utility of an extended theory of planned behaviour in determining
university students’ engagement in three types of volunteering: Traditional volunteering,
online volunteering, and online micro-volunteering. Unpublished manuscript.
*Mäkiniemi, J. P., & Vainio, A. (2013). Moral intensity and climate-friendly food
choices. Appetite, 66, 54-61.
*Mann, E., & Abraham, C. (2012). Identifying beliefs and cognitions underpinning commuters'
travel mode choices. Journal of Applied Social Psychology, 42(11), 2730-2757.
*Martín, A. M., Hernández, B., Frías-Armenta, M., & Hess, S. (2014). Why ordinary people
comply with environmental laws: A structural model on normative and attitudinal
determinants of illegal anti-ecological behaviour. Legal and Criminological
Psychology, 19(1), 80-103.
*Matthies, E., Klöckner, C. A., & Preißner, C. L. (2006). Applying a modified moral decision
making model to change habitual car use: how can commitment be effective? Applied
Psychology, 55(1), 91-106.
*Mcmillan, B., & Conner, M. (2003). Applying an extended version of the theory of planned
behavior to illicit drug use among students. Journal of Applied Social Psychology, 33(8),
1662-1683.
*McMillan, B., & Conner, M. (2003). Using the theory of planned behaviour to understand
alcohol and tobacco use in students. Psychology, Health & Medicine, 8(3), 317-328.
Meindl, P., Johnson, K. M., & Graham, J. (2016). The immoral assumption effect: Moralization
drives negative trait attributions. Personality and Social Psychology Bulletin, 42(4), 540-
553.
*Meyer, J. (2014). The role of values, beliefs and norms in female consumers' clothing disposal
behavior. (Unpublished doctoral dissertation). University of Pretoria, South Africa.
*Milhausen, R. R., Reece, M., & Perera, B. (2006). A theory-based approach to understanding
sexual behavior at Mardi Gras. Journal of Sex Research, 43(2), 97-106.
*Minton, A. P., & Rose, R. L. (1997). The effects of environmental concern on environmentally
friendly consumer behavior: An exploratory study. Journal of Business Research, 40(1),
37-48.
*Moan, I. S. (2013). Whether or not to ride with an intoxicated driver: Predicting intentions
using an extended version of the theory of planned behaviour. Transportation Research
Part F: Traffic Psychology and Behaviour, 20, 193-205.
*Moan, I. S., & Rise, J. (2005). Quitting smoking: Applying an extended version of the theory of
planned behavior to predict intention and behavior. Journal of Applied Biobehavioral
Research, 10(1), 39-68.
*Moore, C., Detert, J. R., Klebe Treviño, L., Baker, V. L., & Mayer, D. M. (2012). Why
employees do bad things: Moral disengagement and unethical organizational
behavior. Personnel Psychology, 65(1), 1-48.
*Morgan, G. S., Skitka, L. J., & Wisneski, D. C. (2010). Moral and religious convictions and
intentions to vote in the 2008 presidential election. Analyses of Social Issues and Public
Policy, 10(1), 307-320.
*Nag, M. (2012). Pro-environmental behaviors in the workplace: Is concern for the environment
enough? (Unpublished doctoral dissertation). University of Maryland, College Park.
Narvaez, D. (2008). Triune ethics: The neurobiological roots of our multiple moralities. New
Ideas in Psychology, 26, 95-119.
*Newton, J. D., Newton, F. J., Ewing, M. T., Burney, S., & Hay, M. (2013). Conceptual overlap
between moral norms and anticipated regret in the prediction of intention: Implications
for theory of planned behaviour research. Psychology & Health, 28(5), 495-513.
*Nigbur, D., Lyons, E., & Uzzell, D. (2010). Attitudes, norms, identity and environmental
behaviour: Using an expanded theory of planned behaviour to predict participation in a kerbside
recycling programme. British Journal of Social Psychology, 49(2), 259-284.
*Niven, K., & Healy, C. (2015). Susceptibility to the ‘dark side’ of goal-setting: Does moral
justification influence the effect of goals on unethical behavior? Journal of Business
Ethics, 1-13.
*Nordlund, A. M., & Garvill, J. (2002). Value structures behind proenvironmental
behavior. Environment and Behavior, 34(6), 740-756.
*Nordlund, A. M., & Garvill, J. (2003). Effects of values, problem awareness, and personal norm
on willingness to reduce personal car use. Journal of Environmental Psychology, 23(4),
339-347.
*Ojala, A. (2012). What makes us environmentally friendly? Social psychological studies on
environmental concern, components of morality and emotional connectedness to nature
(Unpublished doctoral dissertation). University of Helsinki, Helsinki.
*Ong, T. F., & Musa, G. (2011). An examination of recreational divers' underwater behaviour by
attitude–behaviour theories. Current Issues in Tourism, 14(8), 779-795.
*Palmer, N. (2013). The effects of leader behavior on follower ethical behavior: Examining the
mediating roles of ethical efficacy and moral disengagement. (Unpublished doctoral
dissertation). University of Nebraska, Lincoln.
*Poliakoff, E., & Webb, T. L. (2007). What factors predict scientists' intentions to participate in
public engagement of science activities? Science Communication, 29(2), 242-263.
*Pomazal, R. J., & Jaccard, J. J. (1976). An informational approach to altruistic
behavior. Journal of Personality and Social Psychology, 33(3), 317.
*Poulin, M. J. (2013) Fairness Judgments [Data File]. Retrieved from Qualtrics survey request.
*Pratt, C. B., & McLaughlin, G. W. (1989). Ethical inclinations of public relations
majors. Journal of Mass Media Ethics, 4(1), 68-91.
*Prestholdt, P. H., Lane, I. M., & Mathews, R. C. (1987). Nurse turnover as reasoned action:
Development of a process model. Journal of Applied Psychology, 72(2), 221.
*Raats, M. M., Shepherd, R., & Sparks, P. (1995). Including moral dimensions of choice within
the structure of the theory of planned behavior. Journal of Applied Social Psychology,
25(6), 484-494.
*Randall, D. M., & Fernandes, M. F. (1991). The social desirability response bias in
ethics research. Journal of Business Ethics, 10(11), 805-817.
*Raymond, C. M., Brown, G., & Robinson, G. M. (2011). The influence of place attachment,
and moral and normative concerns on the conservation of native vegetation: A test of two
behavioural models. Journal of Environmental Psychology, 31(4), 323-335.
Reidenbach R. E., & Robin D. P. (1988). Some initial steps toward improving the measurement
of ethical evaluations of marketing activities. Journal of Business Ethics, 7, 871–879.
*Reynolds, S. J., & Ceranic, T. L. (2007). The role of moral knowledge in everyday immorality:
What does it matter if I know what is right? Journal of Applied Psychology, 92(6), 1610-
1624.
*Reynolds, S. J., Dang, C. T., Yam, K. C., & Leavitt, K. (2014). The role of moral knowledge in
everyday immorality: What does it matter if I know what is right?. Organizational
Behavior and Human Decision Processes, 123(2), 124-137.
*Rise, J., & Ommundsen, R. (2011). Predicting the intention to quit smoking: A comparative
study among Spanish and Norwegian students. Europe’s Journal of Psychology, 7(1),
143-163.
Rivis, A., Sheeran, P., & Armitage, C. J. (2009). Expanding the affective and normative
components of the theory of planned behavior: A meta-analysis of anticipated affect and
moral norms. Journal of Applied Social Psychology, 39(12), 2985-3019.
*Robinson, N. G. (2004). Young women’s sun-protective attitudes and behaviours: The role of
social influence factors. (Unpublished doctoral dissertation). Queensland University of
Technology, Australia.
*Robinson, N. G., White, K. M., Hamilton, K., & Starfelt, L. C. (2014). Predicting the sun-
protective decisions of young female Australian beachgoers. Journal of Health
Psychology, 1-10.
*Romani, S., Grappi, S., & Bagozzi, R. P. (2014). Corporate socially responsible initiatives and
their effects on consumption of green products. Journal of Business Ethics, 1-12.
Rozin, P., Lowery, L., Imada, S., & Haidt, J. (1999). The CAD triad hypothesis: a mapping
between three moral emotions (contempt, anger, disgust) and three moral codes
(community, autonomy, divinity). Journal of Personality and Social Psychology, 76(4),
574-586.
*Saidon, I. M. (2012). Moral disengagement in manufacturing: A Malaysian study of antecedents.
Saletan, W. (2007, August 30). Same sex: Larry Craig's anti-gay hypocrisy. Slate. Retrieved from
www.slate.com/articles/health_and_science/human_nature/2007/08/same_sex.html
*Samnani, A. K., Salamon, S. D., & Singh, P. (2014). Negative affect and counterproductive
workplace behavior: The moderating role of moral disengagement and gender. Journal of
Business Ethics, 119(2), 235-244.
*Schafer, M. H. (2011). Ambiguity, religion, and relational context: Competing influences on
moral attitudes? Sociological Perspectives, 54(1), 59-81.
*Scherbaum, C. A., Popovich, P. M., & Finlinson, S. (2008). Exploring individual-level factors
related to employee energy-conservation behaviors at work. Journal of Applied Social
Psychology, 38(3), 818-835.
*Schlenker, B. R. (2008). Integrity and character: Implications of principled and expedient
ethical ideologies. Journal of Social and Clinical Psychology, 27(10), 1078-1125.
*Schultz, P. W., Messina, A., Tronu, G., Limas, E. F., Gupta, R., & Estrada, M. (2014).
Personalized normative feedback and the moderating role of personal norms: A field
experiment to reduce residential water consumption. Environment and Behavior, 1-25.
Schwartz, S. H. (1992). Universals in the content and structure of values: Theoretical advances
and empirical tests in 20 countries. Advances in Experimental Social Psychology, 25(1),
1-65.
Schwartz, S. H., & Howard, J. A. (1981). A normative decision-making model of altruism. In J.
P. Rushton & R. M. Sorrentino (Eds.), Altruism and helping behavior (pp. 189–211).
Hillsdale: Erlbaum.
*Schwartz, S. H., & Tessler, R. C. (1972). A test of a model for reducing measured attitude-
behavior discrepancies. Journal of Personality and Social Psychology, 24(2), 225.
Schwarz, N. (1999). Self-reports: How the questions shape the answers. American Psychologist,
54, 93-105.
*Schwitzgebel, E., & Rust, J. (2011). The self-reported moral behavior of ethics
professors. Unpublished manuscript.
*Setiawan, B., & Tjiptono, F. (2013). Determinants of consumer intention to pirate digital
products. International Journal of Marketing Studies, 5(3), 48.
*Setiawan, R., Santosa, W., & Sjafruddin, A. (2014). Integration of theory of planned behavior
and norm activation model on student behavior model using cars for traveling to
campus. Civil Engineering Dimension, 16(2), 117-122.
*Shaw, D., & Shiu, E. (2002). An assessment of ethical obligation and self-identity in ethical
consumer decision-making: A structural equation modelling approach. International
Journal of Consumer Studies, 26(4), 286-293.
*Shu, L. L., Gino, F., & Bazerman, M. H. (2011). Dishonest deed, clear conscience: When
cheating leads to moral disengagement and motivated forgetting. Personality and Social
Psychology Bulletin, 37(3), 330-349.
Shweder, R. A., Much, N. C., Mahapatra, M., & Park, L. (1997). The "big three" of morality
(autonomy, community, and divinity), and the "big three" explanations of suffering. In A.
Brandt & P. Rozin (Eds.), Morality and health, pp. 119-169. Stanford, CA: Stanford
University Press.
*Simkin, M. G., & McLeod, A. (2010). Why do college students cheat? Journal of Business
Ethics, 94(3), 441-453.
*Sims (2014). Examining fair trade purchasing using an extended theory of planned behavior.
[Unpublished data].
Skitka, L. J. (2010). The psychology of moral conviction. Social and Personality Psychology
Compass, 4(4), 267-281.
Skitka, L. J. (2014). The psychological foundations of moral conviction. In J. Wright & H.
Sarkissian (Eds.), Advances in Moral Psychology, (pp. 148 - 166), New York, NY:
Bloomsbury Academic Press.
*Skitka, L. J., & Bauman, C. W. (2008). Moral conviction and political engagement. Political
Psychology, 29(1), 29-54.
Skitka, L. J., Bauman, C. W., & Sargis, E. G. (2005). Moral conviction: Another contributor to
attitude strength or something more? Journal of Personality and Social Psychology,
88(6), 895-917.
*Sliwinski Jr, J. R. (2011). The impact of normative beliefs, religion, and personality on college
drinking behavior. (Unpublished Doctoral dissertation). Texas State University-San
Marcos.
*Smith, J. R., & McSweeney, A. (2007). Charitable giving: The effectiveness of a revised theory
of planned behaviour model in predicting donating intentions and behaviour. Journal of
Community & Applied Social Psychology, 17(5), 363-386.
Stams, G. J., Brugman, D., Deković, M., van Rosmalen, L., van der Laan, P., & Gibbs, J. C.
(2006). The moral judgment of juvenile delinquents: A meta-analysis. Journal of
Abnormal Child Psychology, 34(5), 692-708.
*Steg, L., & de Groot, J. (2010). Explaining prosocial intentions: Testing causal relationships in the
norm activation model. British Journal of Social Psychology, 49(4), 725-743.
*Stephens, J. M., Young, M. F., & Calabrese, T. (2007). Does moral judgment go offline when
students are online? A comparative analysis of undergraduates' beliefs and behaviors
related to conventional and digital cheating. Ethics & Behavior, 17(3), 233-254.
*Stern, P. C., Dietz, T., Abel, T. D., Guagnano, G. A., & Kalof, L. (1999). A value-belief-norm
theory of support for social movements: The case of environmentalism. Human Ecology
Review, 6(2), 81.
*Taneja, A. (2006). Determinants of adverse usage of information systems assets: A study of
antecedents of IS exploit in organizations. (Unpublished doctoral dissertation) University
of Texas, Arlington.
*Tanner, C., & Kast, S. W. (2003). Promoting sustainable consumption: Determinants of green
purchases by Swiss consumers. Psychology and Marketing, 20(10), 883-902.
*Thøgersen, J. (1999). The ethical consumer. Moral norms and packaging choice. Journal of
Consumer Policy, 22(4), 439-460.
*Thøgersen, J. (2009). The motivational roots of norms for environmentally responsible
behavior. Basic and Applied Social Psychology, 31(4), 348-362.
*Thøgersen, J., & Ölander, F. (2006). The dynamic interaction of personal norms and
environment-friendly buying behavior: A panel study. Journal of Applied Social
Psychology, 36(7), 1758-1780.
*Thomson, A. L., & Siegel, J. T. (2013). A moral act, elevation, and prosocial behavior:
Moderators of morality. The Journal of Positive Psychology, 8(1), 50-64.
*Thomson, A. L., Nakamura, J., Siegel, J. T., & Csikszentmihalyi, M. (2014). Elevation and
mentoring: An experimental assessment of causal relations. The Journal of Positive
Psychology, 9(5), 402-413.
*Tillman, C. J. (2011). Character, conditions, and cognitions: The role of personality, climate,
intensity, and moral disengagement in the unethical decision-making process.
(Unpublished doctoral dissertation). The University of Alabama, Tuscaloosa.
*Tonglet, M., Phillips, P. S., & Bates, M. P. (2004). Determining the drivers for householder
pro-environmental behaviour: waste minimisation compared to recycling. Resources,
Conservation and Recycling, 42(1), 27-48.
*Udo, G., Bagchi, K., & Maity, M. (2014) Exploring factors affecting digital piracy using the
norm activation and UTAUT models: The role of national culture. Journal of Business
Ethics, 1-25.
Vaisey, S. (2012). [Measuring Morality Survey]. Unpublished raw data.
Valdesolo, P., & DeSteno, D. (2008). The duality of virtue: Deconstructing moral hypocrisy.
Journal of Experimental Social Psychology, 44(5), 1334-1338.
*van der Linden, S. (2011). Charitable intent: A moral or social construct? A revised theory of
planned behavior model. Current Psychology, 30(4), 355-374.
*Van Dijke, M., & Verboon, P. (2010). Trust in authorities as a boundary condition to
procedural fairness effects on tax compliance. Journal of Economic Psychology, 31(1),
80-91.
*van Riper, C. J., & Kyle, G. T. (2014). Understanding the internal processes of behavioral
engagement in a national park: A latent variable path analysis of the value-belief-norm
theory. Journal of Environmental Psychology, 38, 288-297.
*van Zomeren, M., Postmes, T., & Spears, R. (2012). On conviction's collective consequences:
Integrating moral conviction with the social identity model of collective action. British
Journal of Social Psychology, 51(1), 52-71.
*Vida, I., Kos Koklic, M., Kukar-Kinney, M., & Penz, E. (2012). Predicting consumer digital
piracy behavior: The role of rationalization and perceived consequences. Journal of
Research in Interactive Marketing, 6(4), 298-313.
Viechtbauer, W. (2010). Metafor: Meta-analysis package for R (version 1.4-0). [R package].
Available from http://CRAN.R-project.org/package=metafor.
*Vilas, X., & Sabucedo, J. M. (2012). Moral obligation: A forgotten dimension in the
analysis of collective action. Revista de Psicología Social, 27(3), 369-375.
*Wenzel, M. (2004). An analysis of norm processes in tax compliance. Journal of
Economic Psychology, 25(2), 213-228.
*White, K. M., Smith, J. R., Terry, D. J., Greenslade, J. H., & McKimmie, B. M. (2009). Social
influence in the theory of planned behaviour: The role of descriptive, injunctive, and in-
group norms. British Journal of Social Psychology, 48(1), 135-158.
*White, K. M., Starfelt, L. C., Young, R. M., Hawkes, A. L., Cleary, C., Leske, S., & Wihardjo,
K. (2015). A randomised controlled trial of an online theory-based intervention to
improve adult Australians' sun-protective behaviours. Preventive Medicine, 72, 19-22.
*White, K. M., Starfelt, L. C., Young, R. M., Hawkes, A. L., Leske, S., & Hamilton, K. (2015).
Predicting Australian adults' sun-safe behaviour: Examining the role of personal and
social norms. British Journal of Health Psychology, 20(2), 396-412.
*Wiltermuth, S. S. (2011). Cheating more when the spoils are split. Organizational Behavior and
Human Decision Processes, 115(2), 157-168.
*Xu, Y., Li, Y., & Zhang, F. (2013). Pedestrians’ intention to jaywalk: Automatic or planned? A
study based on a dual-process model in China. Accident Analysis & Prevention, 50, 811-
819.
*Yoon, C. (2011). Theory of planned behavior and ethics theory in digital piracy: An integrated
model. Journal of Business Ethics, 100(3), 405-417.
*Zaal, M. P., Laar, C. V., Ståhl, T., Ellemers, N., & Derks, B. (2011). By any means necessary:
The effects of regulatory focus and moral conviction on hostile and benevolent forms of
collective action. British Journal of Social Psychology, 50(4), 670-689.
*Zhang, Y., Wang, Z., & Zhou, G. (2013). Antecedents of employee electricity saving behavior
in organizations: An empirical study based on norm activation model. Energy Policy, 62,
1120-1127.
Zhong, C. B., Bohns, V. K., & Gino, F. (2010). Good lamps are the best police: Darkness
increases dishonesty and self-interested behavior. Psychological Science, 21(3), 311-314.
*Zuckerman, M., & Reis, H. T. (1978). Comparison of three models for predicting altruistic
behavior. Journal of Personality and Social Psychology, 36(5), 498.
Table 1
Studies Reporting the Correlation Between Moral Judgments and Moral Behaviors
Study N r J B AC CC TC C
Abrahamse & Steg (2009)
Study 1a 189 0.04 1 1 0 1 1 2
Study 1b 189 0.03 1 1 0 1 1 2
Abrahamse et al. (2009)
Study 1 238 0.03 1 1 1 1 1 3
Aguilar-Luzon et al. (2012)
Study 1 120 0.27 1 3 1 1 1 3
Akeley et al. (2012)
Study 1 30 0.26 4 1 0 1 1 2
Alberici & Milesi (2012)
Study 2a 147 0.11 3 1 0 0 0 0
Study 2b 147 0.15 3 3 0 1 0 1
Alberici & Milesi (2015)
Study 1 95 0.66 3 1 0 0 0 0
Study 2a 192 0.15 3 1 0 0 0 0
Study 2b 192 0.29 3 3 0 0 0 0
Alhidari (2014)
Study 1a 432 0.27 1 1 1 1 0 2
Study 1b 432 0.69 1 3 1 1 0 2
Andersson & von Borgstede (2010)
Study 1 418 0.39 1 1 0 1 1 2
Aquino et al. (2011)
Study 4 129 0.34 2 2 0 0 0 0
Arpan et al. (2013)
Study 1 409 0.13 1 1 1 1 0 2
Arvola et al. (2008)
Study 1a 200 0.58 1 3 1 1 1 3
Study 1b 270 0.47 1 3 1 1 1 3
Study 1c 202 0.69 1 3 1 1 1 3
Atanasov & Dana (2011)
Study 1 167 0.42 1 2 1 1 1 3
Axelrod & Lehman (1993)
Study 1 350 0.45 3 1 0 0 0 0
Bagot et al. (2015)
Study 1a 166 0.00 1 2 1 1 0 2
Study 1b 527 0.28 1 2 1 1 0 2
Bamberg et al. (2007a)
Study 1a 437 0.16 1 1 1 1 0 2
Study 1b 437 0.29 1 3 1 1 0 2
Bamberg et al. (2007b)
Study 1a 517 0.60 1 1 1 1 0 2
Study 1b 517 0.82 1 3 1 1 0 2
Barnett & Vaicys (2000)
Study 1 207 0.72 3 3 1 1 0 2
Barr et al. (2005)
Study 1 1223 0.15 1 1 0 1 1 2
Bass et al. (1999)
Study 1 602 0.64 3 3 1 1 0 2
Batson et al. (1997)
Study 1 20 0.25 1 2 1 1 1 3
Study 2 20 0.40 1 2 1 1 1 3
Study 3 20 0.39 1 2 1 1 1 3
Beck & Ajzen (1991)
Study 1a 226 0.49 1 1 1 1 1 3
Study 1b 226 0.73 1 3 1 1 1 3
Belanger et al. (2002)
Study 1 459 0.21 1 1 1 1 1 3
Beldad et al. (2014)
Study 1a 196 0.29 1 3 1 1 1 3
Study 1b 184 0.37 1 3 1 1 1 3
Black et al. (1985)
Study 1a 478 0.28 1 1 0 1 0 1
Study 1b 478 0.15 1 1 0 1 0 1
Blanthorne & Kaplan (2008)
Study 1 355 0.47 3 1 1 0 1 2
Blok et al. (2014)
Study 1a 411 0.32 1 1 1 1 1 3
Study 1b 411 0.23 1 3 1 1 1 3
Bolin (2004)
Study 1 661 0.51 4 1 0 0 1 1
Bosnjak & Whittmann (2005)
Study 1 400 0.60 1 3 1 1 1 3
Botetzagias et al. (2015)
Study 1 293 0.38 1 3 0 1 1 2
Botetzagias et al. (2014)
Study 1 285 0.17 1 1 0 1 1 2
Boudreau & Godin (2014)
Study 1a 200 0.47 1 1 1 1 0 2
Study 1b 200 0.70 1 3 1 1 0 2
Bozionelos & Bennett (1999)
Study 1a 114 0.34 1 1 1 1 1 3
Study 1b 114 0.47 1 3 1 1 1 3
Brosch et al. (2014)
Study 1 168 0.62 1 3 1 0 1 2
Brummel & Parker (2015)
Study 1a 8015 0.39 1 1 0 0 1 1
Study 1b 10822 0.21 1 1 0 1 1 2
Study 1c 10822 0.10 1 1 0 0 0 0
Study 2a 207 0.16 1 1 0 0 1 1
Study 2b 207 0.16 1 1 0 0 0 0
Canete Benitez (2014)
Study 1 404 0.36 3 3 1 1 0 2
Canova et al. (2008)
Study 1 255 0.71 1 3 1 1 1 3
Canova & Manganelli (2015)
Study 1a 433 0.43 1 1 1 1 0 2
Study 1b 433 0.58 1 3 1 1 0 2
Study 2a 240 0.43 1 1 1 1 0 2
Study 2b 240 0.60 1 3 1 1 0 2
Study 3a 243 0.26 1 1 1 1 0 2
Study 3b 243 0.61 1 3 1 1 0 2
Casper & Pfahl (2012)
Study 1a 330 0.50 1 1 0 1 1 2
Study 1b 333 0.51 1 1 1 1 1 3
Chan & Bishop (2013)
Study 1 271 0.35 1 1 0 1 0 1
Chan et al. (2008)
Study 1a 250 0.15 1 3 1 0 1 2
Study 1b 250 0.14 3 3 1 0 0 1
Chen (2014)
Study 1 757 0.41 1 1 0 1 1 2
Cheung & Chan (2000)
Study 1 277 0.15 1 3 1 1 0 2
Chorlton et al. (2011)
Study 1 1479 0.46 1 1 0 0 1 1
Christian & Ellis (2013)
Study 1a 52 0.84 4 1 0 0 0 0
Study 1b 44 0.44 4 1 0 0 0 0
Chuang (2013)
Study 1a 388 0.38 1 1 1 1 1 3
Study 1b 387 0.32 2 1 1 1 1 3
Study 1c 387 0.16 1 1 1 1 1 3
Chugh et al. (2014)
Study 3 315 0.13 4 2 0 0 0 0
Cohen et al. (2014)
Study 3a 460 0.13 4 1 0 0 1 1
Study 3b 418.33 0.12 4 1 0 0 1 1
Conner & Flesch (2001)
Study 1 384 0.44 1 1 1 0 1 2
Conner & McMillan (1999)
Study 1a 118 0.50 1 1 1 1 0 2
Study 1b 249 0.62 1 3 1 1 0 2
Conner et al. (2007)
Study 1a 83 0.65 1 3 1 1 1 3
Study 1b 83 0.47 1 2 1 0 0 1
Study 1c 83 0.35 1 1 1 1 0 2
Study 2a 303 0.59 1 1 1 1 1 3
Study 2b 303 0.74 1 3 1 1 1 3
Study 2c 303 0.33 1 2 1 1 0 2
Conner et al. (1999)
Study 2 200 0.26 1 1 1 0 1 2
Conner et al. (2003)
Study 1 158 0.18 1 1 1 0 1 2
Corey (1937)
Study 1 67 0.02 3 2 0
Coyle et al. (2009)
Study 1a 203 0.77 3 3 1 1 1 3
Study 1b 201 0.49 3 1 1 1 0 2
Cronan & Al-Rafee (2008)
Study 1 280 0.44 1 1 1 1 1 3
Culiberg & Bajde (2013)
Study 1a 367 0.27 3 3 1 1 0 2
Study 1b 367 0.47 1 3 1 1 0 2
Culiberg & Bajde (2014)
Study 1 367 0.47 1 3 1 1 0 2
d’Astous et al. (2005)
Study 1a 139 0.23 2 1 0 0 0 0
Study 1b 130 0.16 2 3 0 0 0 0
Davies et al. (2002)
Study 1a 317 0.12 1 2 1 1 0 2
Study 1b 317 0.37 1 1 1 1 0 2
Study 1c 317 0.38 1 3 1 1 0 2
de Groot & Steg (2009)
Study 5a 374 0.44 1 2 1 0 0 1
Study 5b 374 0.63 1 3 1 1 1 3
de Leeuw et al. (2014)
Study 1a 782 0.34 1 1 1 0 1 2
Study 1b 782 0.57 1 3 1 1 1 3
De Pelsmacker & Janssens (2007)
Study 1a 334 0.67 1 1 1 0 0 1
Study 1b 334 0.38 1 3 1 0 0 1
Dean et al. (2008)
Study 1 281 0.51 1 3 1 0 0 1
Dean et al. (2012)
Study 1a 486 0.50 1 1 1 1 1 3
Study 1b 486 0.29 1 1 1 1 0 2
Diekhoff et al. (1999)
Study 1a 272 0.39 4 1 0 1 1 2
Study 1b 390 0.29 4 1 0 1 1 2
Dobbins (2013)
Study 1a 173 0.32 1 1 0 1 1 2
Study 1b 173 0.52 1 3 0 1 0 1
Doherty (2014)
Study 1 702 0.66 1 1 1 1 0 2
Donald et al. (2014)
Study 1a 827 0.36 1 1 1 1 0 2
Study 1b 827 0.43 1 3 1 1 0 2
Dowd & Burke (2013)
Study 1 137 0.73 1 3 1 0 0 1
Dunlap (2014)
Study 1 216 0.40 1 1 1 1 1 3
Elliot & Thomson (2010)
Study 1a 1403 0.51 1 1 1 1 1 3
Study 1b 1403 0.48 1 1 1 1 1 3
Study 1c 1403 0.65 1 3 1 1 1 3
Elliott (2012)
Study 1a 198 0.47 1 1 1 1 0 2
Study 1b 198 0.50 1 3 1 1 0 2
Eriksson et al. (2008)
Study 1 44 0.19 1 1 1 0 0 1
Farnese et al. (2011)
Study 1 419 0.53 4 1 0 0 0 0
Fernandes & Randall (1992)
Study 1a 428 0.17 3 2 0 1 0 1
Study 1b 428 0.12 3 1 0 0 0 0
Fida et al. (2014)
Study 1 1143 0.38 4 1 0 1 0 1
Gbadamosi (2004)
Study 1a 450 0.13 4 1 0 1 1 2
Study 1b 450 0.19 4 1 0 1 1 2
Godin et al. (2005a)
Study 1a 1083 0.42 1 1 1 1 0 2
Study 1b 1083 0.68 1 3 1 1 0 2
Godin et al. (2005b)
Study 1a 155 0.26 1 1 1 1 0 2
Study 1b 83 0.38 1 1 1 1 0 2
Study 1c 574 0.69 1 3 1 1 0 2
Godin et al. (2005c)
Study 1 283 0.34 1 3 1 1 1 3
Study 3 94 0.63 1 3 1 1 1 3
Godin et al. (1991)
Study 1a 161 0.34 1 1 1 1 1 3
Study 1b 161 0.33 1 3 1 1 1 3
Goles et al. (2008)
Study 1a 455 0.45 1 1 1 1 0 2
Study 1b 455 0.66 1 3 1 1 0 2
Grimes (2004)
Study 1a 2492 0.05 3 1 0 0 1 1
Guido et al. (2009)
Study 1a 207 0.56 1 3 1 1 0 2
Study 1b 207 0.09 4 3 0 0 0 0
Guido et al. (2010)
Study 1a 160 0.38 1 3 1 1 0 2
Study 1b 160 0.40 4 3 0 1 0 1
Harding et al. (2012)
Study 1a 518.5 0.32 1 1 1 0 1 2
Study 1b 525 0.70 1 3 1 0 1 2
Harland et al. (1999)
Study 1a 256.75 0.52 1 1 1 1 0 2
Study 1b 258 0.60 1 3 1 1 0 2
Heath & Gifford (2002)
Study 1a 175 0.14 1 1 1 1 0 2
Study 1b 175 0.27 1 1 1 1 0 2
Study 1c 175 0.33 1 3 1 1 0 2
Study 1d 175 0.36 1 3 1 1 0 2
Heyman et al. (2013)
Study 1a 114 0.56 3 1 1 1 1 3
Study 1b 85 0.31 3 1 1 1 1 3
Hofenk et al. (2010)
Study 1 272 0.70 1 3 1 1 1 3
Hoie et al. (2009)
Study 1a 357 0.11 1 1 1 1 1 3
Study 1b 357 0.37 1 3 1 1 0 2
Hom et al. (1979)
Study 1a 228 0.26 1 2 1 0 1 2
Study 1b 373 0.34 1 3 1 1 1 3
Hornsey et al. (2007)
Study 1 147 0.30 1 2 1 0 0 1
Hsiao (2015)
Study 1a 525 0.27 3 3 0 0 0 0
Study 1b 525 0.56 4 3 0 0 0 0
Hubner & Kaiser (2006)
Study 1 639 0.54 1 3 0 0 0 0
Hunecke et al. (2001)
Study 1 77 0.40 1 1 0 0 0 0
Hyde & White (2009)
Study 1 146.5 0.54 1 3 1 1 0 2
Hyde & White (2013)
Study 1a 174 0.05 1 1 1 1 1 3
Study 1b 174 0.70 1 3 1 1 1 3
Hyde et al. (2013)
Study 1 258 0.09 1 1 1 1 1 3
Hystad et al. (2014)
Study 1 340 0.18 4 1 0 0 0 0
Ibtissem (2010)
Study 1 703 0.24 1 1 0 1 1 2
Izadi & Hayati (2014)
Study 1a 220 0.21 3 1 0
Study 1b 220 0.17 3 3 0
Jackson et al. (2003)
Study 1a 85 0.26 1 1 0 1 1 2
Study 1b 169 0.48 1 3 1 1 1 3
Jansson (2010)
Study 1 642 0.41 1 2 0 0 0 0
Jansson et al. (2011)
Study 1 474 0.41 1 1 0 1 0 1
Kaiser & Scheuthle (2003)
Study 1a 820 0.52 1 1 1 0 1 2
Study 1b 820 0.50 1 1 1 0 1 2
Study 1c 893 0.49 1 1 1 0 1 2
Study 1d 821 0.49 1 1 1 1 1 3
Study 1e 845 0.58 1 3 1 1 1 3
Kaiser (2006)
Study 1a 607 0.44 1 1 0 0 1 1
Study 1b 607 0.46 1 3 0 0 1 1
Study 1c 607 0.41 2 1 0 0 1 1
Study 1d 607 0.47 2 3 0 0 1 1
Study 1a 787 0.51 1 1 0 0 1 1
Study 1b 787 0.53 1 3 0 0 1 1
Study 1c 787 0.45 2 1 0 0 1 1
Study 1d 787 0.39 2 3 0 0 1 1
Kaiser et al. (2005)
Study 1 468 0.56 1 1 0 1 1 2
Kallgren et al. (2000)
Study 1 107 0.29 1 2 0 1 0 1
Kashif & De Run (2015)
Study 1a 223 0.89 1 1 0 1 0 1
Study 1b 223 0.39 1 3 1 1 0 2
Kim et al. (2012)
Study 1 303 0.34 1 1 0 1 1 2
Klockner & Matthies (2009)
Study 1 430 0.10 1 1 0 0 0 0
Klockner & Ohms (2009)
Study 1a 63 0.60 1 1 1 1 1 3
Study 1b 63 0.50 1 2 1 0 0 1
Klockner & Oppedal (2011)
Study 1a 693.25 0.30 1 1 0 1 0 1
Study 1b 703.75 0.47 1 3 0 1 0 1
Knowles et al. (2012)
Study 1a 210 0.50 1 3 1 1 1 3
Study 1b 210 0.25 1 1 1 1 0 2
Koklic et al. (2014)
Study 1 1485 0.57 4 3 0 1 0 1
Kovac & Rise (2011)
Study 1a 96 0.16 1 1 1 1 1 3
Study 1b 96 0.22 1 1 1 1 0 2
Study 1c 96 0.30 1 3 1 1 0 2
Kromker & Matthies (2014)
Study 1 562 0.41 1 1 1 0 1 2
Krueger (2014)
Study 1a 336 0.20 4 1 0 0 0 0
Study 1b 336 0.22 4 1 0 0 0 0
LaRose & Kim (2007)
Study 1a 134 0.15 1 3 0 0 0 0
Study 1b 134 0.32 3 3 0 0 0 0
LaRose et al. (2005)
Study 1a 265 0.14 3 1 1 1 0 2
Study 1b 265 0.15 3 3 1 1 0 2
Lechner et al. (1997)
Study 1a 395 0.18 1 1 1 0 0 1
Study 1b 395 0.12 1 2 1 0 0 1
Study 1c 395 0.36 1 3 1 0 0 1
Légaré et al. (2003)
Study 1a 209 0.52 1 3 1 1 1 3
Study 1b 209 0.48 1 3 1 1 1 3
Study 1c 172 0.61 1 3 1 1 1 3
Study 1d 172 0.53 1 3 1 1 1 3
Lemmens et al. (2005)
Study 1 284 0.50 1 3 1 1 0 2
Lemmens et al. (2008)
Study 1a 415 0.22 1 1 1 1 1 3
Study 1b 415 0.28 1 1 1 1 1 3
Lemmens et al. (2009)
Study 1 246 0.39 1 3 1 1 1 3
Study 2 678 0.46 1 3 1 1 1 3
Liu & Ding (2012)
Study 1 460 0.16 3 1 0 0 1 1
Liu (2014)
Study 1 245 0.38 1 1 1 0 1 2
Lo et al. (2014)
Study 1a 814.5 0.44 2 1 1 1 0 2
Study 1b 814.5 0.59 2 3 1 1 0 2
Lopez & Cuervo-Arango (2008)
Study 1a 403 0.39 1 1 0 1 1 2
Study 1b 403 0.33 4 1 0 1 1 2
Lysonski & Durvasula (2008)
Study 1a 364 0.20 3 1 1 1 1 3
Study 1b 364 0.21 3 3 1 1 0 2
Mackay (2014)
Study 1a 171 0.30 1 1 1 1 0 2
Study 1b 303 0.67 1 3 1 1 0 2
Mäkiniemi & Vainio (2013)
Study 1a 350 0.24 3 1 0 1 0 1
Study 1b 350 0.22 3 3 1 1 0 2
Mann & Abraham (2013)
Study 1a 229 0.45 1 1 1 1 1 3
Study 1b 229 0.44 1 1 1 1 1 3
Study 1c 229 0.49 1 3 1 1 1 3
Martin et al. (2014)
Study 1a 439 0.35 1 3 1 1 1 3
Study 1b 439 0.23 1 1 1 1 0 2
Matthies et al. (2006)
Study 1a 286 0.09 1 1 1 1 0 2
Study 1b 288.75 0.11 1 2 1 0 0 1
McMillan & Conner (2003a)
Study 1a 461 0.34 1 1 1 1 0 2
Study 1b 461 0.38 1 3 1 1 0 2
McMillan & Conner (2003b)
Study 1a 141 0.25 1 1 1 1 0 2
Study 1b 471 0.16 1 3 1 1 0 2
Meyer (2013)
Study 1 306 0.32 1 1 0 1 1 2
Milhausen et al. (2006)
Study 1 253 0.15 2 1 1 1 1 3
Minton & Rose (1997)
Study 1 144 0.52 1 1 0 1 1 2
Moan & Rise (2005)
Study 1a 698 0.11 1 1 1 1 1 3
Study 1b 698 0.19 1 1 1 1 0 2
Study 1c 698 0.40 1 3 1 1 0 2
Moan (2012)
Study 1a 1025 0.20 1 1 1 1 1 3
Study 1b 1025 0.33 1 3 1 1 1 3
Moore et al. (2012)
Study 2 242 0.31 4 1 0 0 0 0
Morgan et al. (2010)
Study 1 436 0.09 3 3 1 0 0 1
Nag (2012)
Study 1 511 0.36 1 1 0 0 1 1
Newton et al. (2013)
Study 1 352 0.59 1 3 1 1 1 3
Study 2 1815 0.52 1 3 1 1 1 3
Nigbur et al. (2010)
Study 1 527 0.63 1 3 1 0 1 2
Study 2a 264 0.57 1 3 1 0 1 2
Study 2b 264 0.33 1 1 1 0 0 1
Niven & Healy (2015)
Study 1 106 0.27 4 2 0 1 0 1
Nordlund & Garvill (2002)
Study 1 1414 0.47 1 1 0 1 1 2
Nordlund & Garvill (2003)
Study 1 1467 0.44 1 3 0 0 0 0
Ojala (2012)
Study 1 978 0.17 1 1 0 1 1 2
Ong & Musa (2011)
Study 1 413 0.21 1 1 0 1 1 2
Palmer (2013)
Study 1 114 0.63 4 2 0 0 0 0
Piff et al. (2012)
Study 6 195 0.18 4 1 0 0 0 0
Poliakoff & Webb (2007)
Study 1a 169 0.27 1 1 1 1 0 2
Study 1b 169 0.30 1 3 1 1 0 2
Pomazal & Jaccard (1976)
Study 1a 270 0.43 1 2 1 0 1 2
Study 1b 270 0.50 1 3 1 1 1 3
Poulin (2014)
Study 1 125 0.12 2 2 0
Pratt & McLaughlin (1988)
Study 1 258 0.35 3 1 1 1 1 3
Prestholdt et al. (1987)
Study 1 441 0.56 1 2 1 1 0 2
Raats et al. (1995)
Study 1 233 0.42 1 3 1 1 0 2
Randall & Fernandes (1991)
Study 1 319 0.16 4 1 1 1 1 3
Raymond et al. (2011)
Study 1a 659 0.19 1 1 0 1 0 1
Study 1b 659 0.30 1 3 0 1 0 1
Study 1c 664 0.08 1 1 0 1 0 1
Study 1d 664 0.40 1 3 0 1 0 1
Reynolds et al. (2014)
Study 1 104 0.17 4 1 0 0 1 1
Study 2 129 0.12 4 2 0 0 0 0
Study 4 118 0.53 4 1 0 0 1 1
Study 5a 169 0.14 4 2 0 0 0 0
Study 5b 45 0.06 3 2 1 0 0 1
Riper & Kyle (2014)
Study 1 357 0.34 1 1 0 1 1 2
Rise & Ommundsen (2011)
Study 1a 179 0.30 1 3 1 1 0 2
Study 1b 159 0.33 1 3 1 1 0 2
Robinson et al. (2014)
Study 1a 336 0.58 1 3 1 1 1 3
Study 1b 336 0.12 1 1 1 1 0 2
Robinson (2004)
Study 3a 123 0.11 1 1 1 1 1 3
Study 3b 123 0.57 1 3 1 1 1 3
Romani et al. (2014)
Study 1 330 0.23 2 1 0 1 1 2
Saidon (2012)
Study 1a 669 0.36 4 1 0 1 1 2
Study 1b 669 0.41 4 1 0 1 1 2
Samnani et al. (2013)
Study 1 221 0.43 4 1 0 0 1 1
Schafer (2011)
Study 1 1808 0.09 2 1 0 1 0 1
Scherbaum et al. (2008)
Study 1 154 0.38 1 1 0 0 1 1
Schlenker (2008)
Study 1 155 0.27 4 1 0 0 0 0
Study 2a 234 0.20 4 1 0 1 1 2
Study 2b 234 0.28 4 1 0 1 0 1
Schultz et al. (2014)
Study 1 301 0.23 1 2 0 0 0 0
Schwartz & Tessler (1972)
Study 1 132 0.38 1 2 0 0 0 0
Schwitzgebel & Rust (2011)
Study 1a 208 0.24 3 1 1 1 1 3
Study 1b 198 0.20 3 1 1 1 1 3
Study 1c 167 0.16 3 1 1 1 1 3
Setiawan & Tjiptono (2013)
Study 1a 218 0.31 1 1 1 1 1 3
Study 1b 218 0.36 1 3 1 1 1 3
Setiawan et al. (2014)
Study 1a 312 0.43 1 1 1 0 0 1
Study 1b 312 0.38 1 3 1 0 0 1
Shaw & Shiu (2002)
Study 1 686 0.24 1 3 1 0 0 1
Shu et al. (2011)
Pretest 61 0.52 4 1 0 0 0 0
Simkin & McLeod (2010)
Study 1 144 0.49 1 3 0
Sims (2014)
Study 1a 244 0.72 1 1 1 1 1 3
Study 1b 145 0.41 1 3 1 1 1 3
Skitka & Bauman (2008)
Study 1 1853 0.27 3 1 1 0 0 1
Study 2 514 0.11 3 3 1 0 0 1
Sliwinski (2011)
Study 1 140 0.51 2 1 0 1 1 2
Smith & McSweeney (2007)
Study 1a 227 0.34 1 1 0 1 0 1
Study 1b 67 0.12 1 1 0 1 0 1
Study 1c 227 0.44 1 3 0 1 0 1
Steg & de Groot (2010)
Study 1 74 0.54 1 3 0 0 1 1
Stephens et al (2007)
Study 1 1205 0.18 1 1 0 1 0 1
Stern et al. (1999)
Study 1 420 0.41 1 1 0 1 1 2
Taneja (2006)
Study 1a 293 0.37 1 1 1 1 1 3
Study 1b 293 0.46 1 3 1 1 1 3
Tanner & Kast (2003)
Study 1 547 0.30 1 1 1 1 1 3
Thøgersen (1999)
Study 1 633 0.33 1 1 1 1 0 2
Thøgersen (2009)
Study 1 206 0.65 1 1 1 1 1 3
Thøgersen & Grunert-Beckmann (1997)
Study 1 813.5 0.30 1 1 1 1 0 2
Thøgersen & Ölander (2006)
Study 1 1520 0.89 1 1 1 1 1 3
Thomson & Siegel (2013)
Study 2 132 0.25 2 2 0 0 0 0
Study 3 182 0.17 2 2 0 0 0 0
Study 4 90 0.27 2 2 0 0 0 0
Thomson et al. (2014)
Study 2 188.5 0.17 2 2 0 0 0 0
Tillman (2011)
Study 1a 610 0.18 4 1 0 1 1 2
Study 1b 610 0.43 4 3 0 1 1 2
Tillman et al. (2014)
Study 1 365 0.02 4 1 0 1 1 2
Tonglet, Phillips, & Bates (2004)
Study 1 191 0.09 1 1 0 0 1 1
Tonglet, Phillips, & Read (2004)
Study 1 191 0.19 1 3 1 1 0 2
Udo et al. (2014a)
Study 1a 231 0.26 1 1 1 1 0 2
Study 1b 231 0.30 1 3 1 1 0 2
Udo et al. (2014b)
Study 1a 331 0.46 1 1 1 1 0 2
Study 1b 331 0.33 1 3 1 1 0 2
van der Linden (2011)
Study 1a 143 0.61 1 1 1 1 1 3
Study 1b 143 0.67 1 3 1 1 1 3
van Dijke & Verboon (2009)
Study 2 567 0.62 1 1 0 1 1 2
van Zomoren et al. (2012)
Study 2 118 0.19 3 2 0 0 0 0
Vida et al (2012)
Study 1a 1210 0.38 4 1 0 1 1 2
Study 1b 1201 0.54 4 3 0 1 0 1
Vilas & Sabucedo (2012)
Study 1 316 0.76 3 3 1 1 1 3
Wenzel (2004)
Study 1a 1306 0.04 1 1 0 1 1 2
Study 1b 1306 0.18 1 1 1 1 0 2
Study 1c 1306 0.15 1 1 0 1 0 1
White et al. (2009)
Study 1a 129 0.53 1 1 0 1 1 2
Study 1b 164 0.49 1 3 1 1 1 3
Study 2 175 0.49 1 3 1 1 1 3
White et al. (2014)
Study 1a 577 0.48 1 1 1 1 0 2
Study 1b 577 0.36 1 1 1 1 0 2
Study 1c 577 0.62 1 3 1 1 0 2
White et al. (2015)
Study 1a 26 0.74 1 3 1 1 1 3
Study 1b 20 0.38 1 1 1 1 0 2
Study 1c 19 0.29 1 1 1 1 0 2
Wiltermuth (2011)
Study 2 262 0.14 2 2 1 1 0 2
Study 3 112 0.24 2 2 1 1 0 2
Study 4 223 0.06 2 2 1 1 0 2
Xu et al. (2013)
Study 1a 323 0.63 1 1 1 1 0 2
Study 1b 323 0.59 1 3 1 1 0 2
Yoon (2011)
Study 1a 298 0.48 1 1 1 1 1 3
Study 1b 298 0.58 1 3 1 1 1 3
Zaal & Laar (2011)
Study 2a 151 0.23 3 2 0 0 0 0
Study 2b 151 0.04 3 2 0 0 0 0
Zhang et al. (2013)
Study 1 273 0.44 1 1 0 1 1 2
Zuckerman & Reis (1978)
Study 1a 189 0.24 1 2 1 1 0 2
Study 1b 189 0.40 1 3 1 1 0 2
Note. N = number of participants included in the effect size estimate; J = judgment target; B =
behavior measurement type; AC = action correspondence; CC = context correspondence; TC =
time correspondence; C = correspondence (continuous); TL = time lag
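To make the use of this coding table concrete, the following minimal sketch (in Python, purely illustrative; the meta-analytic results reported in this dissertation were computed with the metafor package for R cited above, Viechtbauer, 2010) shows how correlations such as those in Table 1 can be pooled by inverse-variance weighting after Fisher's r-to-z transformation. The three example rows are copied directly from Table 1; the function and variable names are ad hoc, not part of the original analysis.

import math

# Illustrative rows from Table 1: (study label, N, r).
rows = [
    ("Abrahamse & Steg (2009), Study 1a", 189, 0.04),
    ("Beck & Ajzen (1991), Study 1b", 226, 0.73),
    ("Nordlund & Garvill (2002), Study 1", 1414, 0.47),
]

def fisher_z(r):
    # Fisher's r-to-z transformation; its sampling variance is 1 / (N - 3).
    return 0.5 * math.log((1 + r) / (1 - r))

# Fixed-effect pooling: weight each study's z by the inverse of its variance, i.e., (N - 3).
weighted_sum = sum((n - 3) * fisher_z(r) for _, n, r in rows)
total_weight = sum(n - 3 for _, n, _ in rows)
pooled_z = weighted_sum / total_weight

# Transform back to the correlation metric.
pooled_r = math.tanh(pooled_z)
print(round(pooled_r, 3))

A random-effects model, as implemented in metafor, additionally estimates between-study heterogeneity (for example, via the Q statistic and I² index; Higgins & Thompson, 2002; Huedo-Medina et al., 2006) before pooling, and the correspondence codes (AC, CC, TC, C) can then serve as moderators.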