Democrats Vs. Republicans: How Twitter Users Share and React to Political Fact-Checking
Messages
By
Jieun Shin
A Dissertation Presented to the Faculty of The USC Graduate School
University of Southern California
In Partial Fulfillment of the Requirements for the Degree
Doctor of Philosophy
(Communication)
August 2016
Table of Contents
Chapter 1: Introduction
Chapter 2: Literature Review
    Understanding the Fact-checking Movement
        a) Fear of misinformation spread online
        b) Types of fact-checking
        c) Fact-checking as specialized journalistic genre
        d) Previous research on the effects of fact-checking
    Social Transmission of Online Content
        a) Increased content sharing on social media
        b) Selective sharing of political information
        c) Motivational approaches
        d) The social identity approach
    User Generated Comments
        a) Online commentaries
        b) Media bias expressed in reader comments
        c) Hostility toward fact-checkers triggered by social identity
        d) Corrective action
Chapter 3: Methods
    Data Collection
    Measurement
        a) Fact-check level
            Fact-check's ruling
            Relative advantage of fact-check for one candidate over the other
            Valence of the fact-check toward Obama & toward Romney
        b) User level
            User's party preference
        c) User's Response Level
            Identification of retweets and replies
            Replies expressing media bias
Chapter 4: Results
    Preliminary Analysis
        a) Fact-checking tweets
        b) Users
        c) Retweets of the fact-checks by users
        d) Replies to the fact-checks by users
        e) Replies expressing concerns about media bias
    Hypotheses testing
        a) Retweeting as an intergroup phenomenon
        b) Ingroup status differences in retweeting
        c) Hostile comments as an intergroup phenomenon
        d) Ingroup status differences in corrective comments
Chapter 5: Discussion
    Discussion
        a) Selective retweeting of ingroup favorable messages
        b) Differences between Democrats and Republicans in selective sharing
        c) Hostile media comments
        d) Differences between Democrats and Republicans in hostile comments
    Limitations
    Conclusion
Bibliography
Appendix A
Appendix B
Abstract
This dissertation investigated social media users’ responses to political fact-checking
messages, focusing on selective sharing and hostile commenting behavior. Drawing on the social
identity theory, the current project examined whether partisans selectively shared fact-checking
messages that were advantageous to a candidate from the same political party and whether
partisans expressed concern about the fact-checker’s bias against their side in the comments.
Potential differences between two political groups – Democrats and Republicans – were also
explored.
The study utilized a large set of Twitter data gathered unobtrusively and in real-time
during the 2012 U.S. presidential election season. Specifically, the study examined Twitter users’
sharing and commenting patterns of messages posted on three major fact-checking
organizations’ accounts. This study found a polarizing response to fact-checking messages based
on users’ party preference. While Democrats mainly shared fact-checking messages that were
advantageous to the Obama campaign, Republicans mostly shared messages that benefitted the
Romney campaign. When fact-checking messages were not favorable to their own group, both
Democrats and Republicans cited media bias. The extent to which Republicans shared negative
fact-checks against the Obama campaign was greater than the extent to which Democrats shared
negative fact-checks against the Romney campaign. Some evidence indicated that Republicans
engaged in stronger hostile media behavior than Democrats did. Implications of these results for
political polarization are discussed.
Acknowledgements
I owe my gratitude to all those people who made this project possible. First and foremost,
I'd like to thank my advisor, Lian Jian, for her consistent support and guidance during my
dissertation writing process. She ignited my passion for research and helped me navigate through
tough times during my Ph.D. The day I first took her seminar, a plethora of research ideas burst
into my mind while listening to her. Ever since she took me under her wing, Lian has taught me
how to conduct research, write grants, respond to reviewers, and work with other team members.
Thanks to her, I was able to explore the field of computational social science and work on
interesting projects that ultimately inspired my thesis. It has been an honor to be her first Ph.D
advisee.
I am also indebted to my mentor and committee member, Margaret McLaughlin, for
several discussions that helped me focus my dissertation and identify the unique contribution that
I was trying to make to the field. Despite her delicate health condition, she gladly read over my
drafts and always gave me excellent feedback. Whenever I returned from meetings with her, I
felt much clearer about what to do next. In addition, her thorough copy editing of my final
manuscript was beyond what I expected. It is such a privilege to receive help from someone like
Peggy, one of the most respected scholars in the field who has served as editor of several
prestigious journals for many years.
I would also like to thank my two other wonderful committee members, François Bar and
Kjerstin Thorson, for their invaluable feedback. François’s guidance on how to narrow down
abstract ideas into a feasible study was an essential part of my dissertation. In addition, he
introduced me to the dataset collected by the Annenberg Innovation Lab (AIL) which served as
the basis of my dissertation. Kjerstin helped me apply my general interests and expertise to a
new domain (political fact-checking) I explored for my dissertation. Her suggestions that I
should “use my data in a big way” and “find patterns of fact-checking messages first”
revolutionized my analytical strategy. I can pinpoint her suggestions and the influence of her
feedback in every chapter of my dissertation. I am very lucky to have met her during her stay at
Annenberg.
I want to express my gratitude to a few friends of mine and colleagues who assisted with
my dissertation. Two former AIL colleagues, Kevin Driscoll and Alex Leavitt, set up the
database on a server and kindly responded to my requests whenever I bugged them about the
dataset. I also appreciate Chicheng Ren for his helpful comments on my python scripts and
sharing available resources with me. I would like to thank Hilary Jin for her thorough
proofreading which made my writing flow much more smoothly.
Last but not least, my deepest gratitude goes to my husband, Remy Pae, who supported
me with his love and encouragement throughout my entire Ph.D journey. Despite his own fight
with cancer, he never stopped sending me positive energy and always cheered me up with
humor. He was my biggest inspiration and motivation. Thank you.
Chapter 1: Introduction
Political fact-checking, which focuses on verifying political actors’ claims, has grown
into an explosive phenomenon over the last half decade. Unlike traditional journalism, which
emphasizes detached objectivity and adheres to the “he said, she said” style of reporting,
contemporary fact-checking directly engages in adjudicating factual disputes by publicly
deciding whose claim is correct or incorrect (Graves, 2013). One of the most prominent fact-
checking websites, Politifact, rates political statements as “True,” “Mostly True,” “Half True,”
“Mostly False,” “False,” or “Pants on Fire.” Due to its unique format and contribution to the
political sphere, the popularity of fact-checking has been on the rise since 2008. One study found
that fact-checking stories increased by 300 percent from 2008 to 2012 (Graves & Glaisyer,
2012), which made 2012 “the most fact-checked campaign in history” (Adair, 2012). With the
2016 U.S. Presidential race heating up, fact-checking continues to gain traction and is likely to
surpass the record set in 2012 (Mantzarlis, 2015).
Previous research on the fact-checking phenomenon (Amazeen et al., 2015; Garrett,
2011; Gottfried, Hardy, Winneg, & Jamieson, 2013; Nyhan & Reifler, 2015; Weeks, 2015;
Weeks and Garrett, 2014) has reported generally positive results of fact-checking. These studies
find that exposure to fact-checking messages reduces the recipients’ belief in misinformation
(Weeks & Garrett, 2014; Nyhan & Reifler, 2015). People who regularly visit fact-checking
websites are associated with advanced political knowledge (Gottfried et al., 2013). Moreover,
fact-checking organizations serve as external monitoring for legislators and lead them to make
fewer inaccurate public statements for fear of damaging their reputations (Nyhan & Reifler,
2014). In particular, Weeks and Garrett (2014) argued that corrections are ‘uniformly effective’
regardless of partisanship.
While the research above provides a positive foundation for the importance of fact-
checking, it is worth noting that most of these studies exposed the participants to a set of
corrective messages that they would otherwise not encounter. In other words, in these
experimental studies, people were required to read fact-checking messages that they may not
normally choose to consume. The current literature on political polarization on the Internet
(Bimber & Davis, 2003; Iyengar & Hahn, 2009; Slater, 2007; Stroud, 2012; Sunstein, 2001)
casts doubt on the assumption that partisans seek content that challenges their views. This so-
called selective exposure thesis argues that the Internet makes consuming one-sided information
much easier. The resulting unbalanced information diet reinforces the reader’s pre-existing
attitude and pushes him or her even further to the extreme (Iyengar & Hahn, 2009; Levendusky,
2013; Slater, 2007; Stroud, 2012; Sunstein, 2001). If this selective exposure tendency is strong,
the objective of fact-checking messages – providing the public with accurate information on both
sides of the political spectrum – is undermined.
Therefore, assessing how active media users share and curate fact-checking messages is
critical to understanding the current dynamics surrounding the fact-checking movement.
Nowadays many people obtain political news from their connections (American Press, 2014).
Those who actively share news with others are sometimes called opinion leaders and play an
important role in bridging the media with audiences (Cappella, Kim, & Albarracin, 2014; Singer,
2014). Their sharing of media content not only exposes the message to their personal network
(Cappella, Kim, & Albarracin, 2014), but also adds a positive layer of meaning to the original
message, since sharing is an indicator of personal endorsement (An et al., 2011; Singer, 2014). In
contrast, their hostile comments highlighting media bias can produce negative impressions and
lower the credibility of the original message (Lee, 2012; Stavrositu & Kim, 2014; Shi, Messaris,
& Cappella, 2014; Thorson, Varga, & Ekdale, 2010). Hence, the analysis of social media users’
reactions to fact-checking would complement the extant literature on fact-checking and help us
better understand the phenomenon.
Therefore, to shed new light on the mechanisms of fact-checking, the current research
investigates social media users’ responses to fact-checking messages by analyzing a large set of
behavioral data gathered unobtrusively and in real-time. More specifically, this study focuses on
the role of Twitter users’ partisanship in selectively sharing fact-checking messages and
criticizing fact-checkers. This project draws on the social identity approach (Tajfel & Turner,
1979; Turner et al., 1987), which proposes that people who strongly identify with a certain
group tend to categorize the world into either the ingroup or the outgroup, and subsequently
engage in competition with the other group. The social identity framework is particularly useful
for explaining social media users’ behaviors since it highlights the importance of social groups in
activating cognitive and emotional responses to stimuli. Social media platforms such as
Facebook and Twitter are replete with partisan cues that remind users of the social groups they
belong to by constantly feeding information about their friends’ interests (Barbera et al., 2015;
Conover et al., 2011; Himelboim et al., 2013; Jacobson et al., 2015).
Moreover, the social identity approach can be readily applied to the fact-checking
phenomenon during a presidential election season when the citizens are encouraged to vote for
one candidate and thus think in terms of two parties. Even if fact-checkers verify claims from
both sides of the spectrum, a particular message indicating whose claim is factually superior may
make the group contour more salient and unintentionally trigger inter-group competition at the
moment. This can motivate the recipients of fact-checking messages to achieve a common group
goal, which is positive distinctiveness for the ingroup as compared to the other group. When
people have strong directional goals (the desire to arrive at a particular conclusion), they are
likely to process information in a biased manner (Kunda, 1990; Lodge & Taber, 2005) and
engage in discriminatory behaviors to uphold a positive social identity (Hogg & Abrams, 2003;
Turner, 1999).
Additionally, the theory provides lenses to analyze group differences that may exist in
behaviors of Democrats and Republicans towards fact-checking. Some scholars (Brewer, 1999;
Mullen, Brown, & Smith, 1992) argue that relative group status (higher vs. lower) can give rise
to different dynamics of group behaviors. Groups can vary in their relative perceived power and
prestige, and their ranking can be higher or lower compared to the reference group. Partisans
generally favor their ingroup and derogate the outgroup. However, these two phenomena
(ingroup favoritism and outgroup derogation) are not always symmetrical, and one can manifest
more strongly than the other. One line of literature (Brewer, 1999; Cadinu & Reggiori, 2002;
Mullen et al., 1992; Smith, Spears, & Oyen, 1994; Vanneman & Pettigrew, 1972) suggests that
those who perceive their ingroup to occupy a lower status and thus feel threatened are more
likely to exhibit stronger outgroup negativity than those who perceive their group to have a
higher status. Such analyses may be fruitful for ascertaining differences between two partisan
groups.
To examine this phenomenon, the study analyzed a large data set collected on Twitter, a
popular social media platform. Twitter played a prominent role in the political sphere during the
2012 U.S. election season and experienced an unprecedented level of user participation in major
political events such as presidential debates (McKinney et al., 2014). Twitter is also known for
its unique position as both a news-sharing platform and social networking site (Kwak, Lee, Park,
& Moon., 2010). Therefore, it is a perfect venue to examine users’ media content sharing
behaviors as well as their commenting patterns and to discuss implications of such behaviors for
the network. In addition, messages posted on Twitter are publicly available (this excludes private
accounts and private messages exchanged between users), which makes data
collection much more comprehensive compared to other social media (Vargo, Guo, McCombs,
& Shaw, 2014). Messages posted on Facebook are typically private and thus collecting data is
difficult.
Specifically, using the streaming service provided by Gnip (a commercial data provider),
the current project identified the three major fact-checking organizations’ Twitter messages
(n=194) pertaining to one of the two major U.S. Presidential candidates in October 2012, one
month prior to the election day (November 6). These three organizations included Politifact,
Factcheck.org, and the Washington Post’s Factchecker, which have been identified as the most
influential fact-checkers by previous scholars (Amazeen, 2013; Gottfried et al., 2013; Graves,
2013). Subsequently, the study compiled users (n=55,869) who retweeted these fact-checking
messages as well as those who replied to these messages. Each user’s party preference was
measured by a composite score of their tweets posted during 2012, as well as the partisan sources
they chose to retweet. Having identified the fact-checking users’ party preference, the project
systematically examined whether retweeting and posting hostile comments towards the fact-
checkers were a function of political group identification. In addition, the project investigated
differences between Democrats and Republicans in selective retweeting and hostile commenting
behavior.
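To make this measurement logic concrete, the sketch below shows one way a composite party-preference score could be derived from the partisan accounts a user retweeted. It is a minimal illustration only; the source lists, scoring rule, and cutoff are hypothetical placeholders rather than the coding scheme actually used in this dissertation (detailed in Chapter 3).

    # Illustrative sketch only: source lists, scoring, and threshold are
    # hypothetical placeholders, not this dissertation's actual coding scheme.
    from collections import Counter

    LIBERAL_SOURCES = {"thinkprogress", "motherjones"}      # assumed examples
    CONSERVATIVE_SOURCES = {"foxnews", "dailycaller"}       # assumed examples

    def party_preference(retweeted_accounts):
        """Score a user from the partisan accounts they retweeted during 2012.
        Positive values lean Democratic, negative values lean Republican."""
        counts = Counter(account.lower() for account in retweeted_accounts)
        liberal = sum(counts[a] for a in LIBERAL_SOURCES)
        conservative = sum(counts[a] for a in CONSERVATIVE_SOURCES)
        if liberal + conservative == 0:
            return None                          # no partisan signal observed
        return (liberal - conservative) / (liberal + conservative)

    def classify_user(score, threshold=0.2):
        """Map the continuous score to a party label; the cutoff is arbitrary."""
        if score is None or abs(score) < threshold:
            return "unclassified"
        return "Democrat" if score > 0 else "Republican"

    # A user who mostly retweets liberal outlets is classified as a Democrat.
    print(classify_user(party_preference(["ThinkProgress", "MotherJones", "FoxNews"])))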
The study found results that are overall consistent with the social identity approach: (1)
partisans selectively shared fact-checking messages that “cheerlead” their own group and
demoralize the opposing group; (2) partisans accused the fact-checkers of reporting with bias and
expressed hostility in their comments; (3) Republicans tended to gravitate towards negative
outgroup information more than Democrats did. These findings indicate that fact-checking
messages clearly adjudicating political statements are shared disproportionately by a certain
group of people, those who are already in favor of the fact-check’s conclusion. At the same time,
these fact-checking message sources are accused of being partial by those who disagree with the
conclusion. This image as a biased source can hinder the fact-checker’s ability to effectively
challenge misinformation. The detailed implications of research findings are further discussed in
the following chapters.
The rest of this dissertation is organized as follows. Chapter 2 reviews the extant
literature on the fact-checking movement, content sharing, and commenting phenomena in social
media. It also introduces the conceptual frameworks that underlie this work, namely social
identity theory (SIT), self-categorization theory (SCT), and the hostile media perception (HMP),
to derive several hypotheses and research questions. Chapter 3 discusses the sources of
data, the collection and transformation processes, and the methods for data analysis. Chapter 4
presents the results of preliminary analyses of fact-checking users, the patterns of their selective
sharing and commenting, and differences between fact-checking user groups. Chapter 5
discusses the implications of the results, suggests limitations, and offers directions for future
research.
Chapter 2: Literature Review
Understanding the Fact-checking Movement
Fear of misinformation spread online. In recent years, the spread of misinformation on
the web – whether it is satire news, conspiracy theories, or unsubstantiated rumors – has become
a great concern. The World Economic Forum (WEF, 2014) ranked misinformation online as one
of the top 10 global issues, followed by topics such as increasing societal tensions in the Middle
East and widening income disparities. Indeed, sometimes seemingly preposterous claims,
ranging from unsubstantiated health advice to malicious rumors about politicians, diffuse quickly
through the Internet and can reach a large audience. Rather than just being innocuous chatter,
these rumors can have substantial effects on individuals’ critical decision-making such as
preventing parents from immunizing their children (Dube, Bettinger, Halperin, et al., 2012;
Smith, Humiston, Marcuse, et al., 2011) and influencing citizens’ actual voting decisions (Weeks
& Garrett, 2014).
Although concern about rumor spread has existed throughout human history, the
contemporary media environment may exacerbate the danger of misinformation online. First, the
online audience can be more easily confused about the credibility of sources they encounter as
opposed to traditional media readers (Metzger & Flanagin, 2013). For instance, content posted
on seemingly objective sites can make information appear credible, especially when users are
merely skimming the material. Second, web-based platforms provide various features that allow users to
duplicate content and effortlessly pass along messages to other people (boyd, 2011). The ease
with which social media users can spread information – such as Twitter’s retweet feature – can
sometimes give rise to massive rumor cascades as seen during the 2012 U.S. presidential election
season (Shin, Jian, Driscoll, & Bar, 2016). Lastly, the web enables like-minded people (or news
sources) to find each other and exchange ideas (Sunstein, 2009). Rojecki and Meraz (2014)
argued that political rumors diffuse largely because partisan sources can validate dubious claims
within internet environments.
To combat the problem of misinformation available on the web, considerable effort has
been devoted to factchecking and correcting inaccurate information circulating online in recent
years. Before the emergence of digitally networked communication environments, scholars and
theorists (Banker, 1992) contended that information should freely circulate and compete for
acceptance based on its truthfulness. This so-called “marketplace of ideas” argument assumes
that not only are people capable of discerning what is true or false, but they can also objectively
evaluate information. Hence, ideas naturally correct themselves without the help of
countervailing institutions to moderate the process. However, the modern view (Heath &
Bendor, 2003) postulates that truth does not always prevail in the marketplace of ideas because
individuals are often limited by their irrationality and lack the capability to critically process
information. Thus, the latter approach supports efforts to intervene in the marketplace of ideas,
which is reflected in the recent rise of fact-checking and rumor debunking sites.
Types of fact-checking. Fact-checking as a response to fear of misinformation spread
online can be further classified based on: 1) whether the fact-checking website focuses on
political statements vs. internet rumors; 2) whether a human or a machine conducts the
verification process; and 3) the extent to which factchecking is independent (i.e., non-partisan).
First, fact-checking websites are often run by established organizations such as news
media or academic institutions. The majority of current mainstream political factchecking
websites fall into this category. For instance, the Tampa Bay Times’ Politifact and Washington
Post’s Factchecker are news media outfits, while Factcheck.org is affiliated with an academic
institution (i.e., a project of the Annenberg Public Policy Center of the Annenberg School for
Communication at the University of Pennsylvania). These fact-checking organizations focus on
assessing the veracity of public claims found in politicians’ speeches, campaign ads, and
interviews, rather than Internet rumors. Other websites, such as Snopes.com,
TruthorFiction.com, HoaxSlayer.com, and urbanlegends.about.com, focus on viral Internet
rumors and hoaxes as opposed to political statements or speeches. Internet users who want to
investigate dubious claims circulating on social media or verify suspicious chain emails often use
these sites.
Second, fact-checking can also be categorized based on whether the verification process
involves humans (e.g., journalists) or computers. Human individuals currently conduct the
majority of political fact-checking since they can interpret political statements within situational
contexts. Therefore, this type of political fact-checking is highly labor intensive and can be
costly. Graves (2013) reported that Politifact and Factcheck.org each have five to six staffers who
conduct extensive research to fact-check 10 to 20 weekly items and meet several times before
reaching a truth rating. In addition to human endeavors, researchers are developing technology-
based interventions to streamline the fact-checking process. A number of academic research
groups have created web applications and computer algorithms (Ciampaglia, Shiralkar, Rocha et
al., 2015; Gupta, Kumaraguru, Castillo, et al., 2014; Silverman, 2014) that help detect emerging
rumors and track them as they arise in real time. For instance, a research project at the Tow
Center for Digital Journalism at Columbia University (Silverman, 2014) developed an
application (emergent.info) that identifies controversial claims circulating online and labels them
True, False, or Unverified. The service employs an algorithm to gather Internet users’ tagging
patterns for each claim to determine the “truthiness” rating. Although computational fact-
checking is still in its infancy, the constant flow of research shows promising potential. In the
future, the two fact-checking methods are likely to converge into a model where machines assist
human fact-checkers.
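Purely as an illustrative sketch of this kind of tag aggregation (the rule and thresholds below are invented and are not the actual emergent.info algorithm), a claim’s label could be derived from user tags by majority vote, falling back to Unverified when the signal is weak.

    # Toy aggregation of user tags into a claim rating; the vote and margin
    # thresholds are invented for illustration, not taken from emergent.info.
    from collections import Counter

    def truthiness(tags, min_votes=10, min_margin=0.6):
        counts = Counter(tag.lower() for tag in tags)
        total = counts["true"] + counts["false"]
        if total < min_votes:
            return "Unverified"                  # too few tags to call
        share_true = counts["true"] / total
        if share_true >= min_margin:
            return "True"
        if (1 - share_true) >= min_margin:
            return "False"
        return "Unverified"                      # tags too evenly split

    print(truthiness(["true"] * 12 + ["false"] * 2))   # -> True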
Lastly, fact-checkers can be labeled as “partisan” or “non-partisan” based on whether
they fact-check both sides of the political spectrum or just one side. Leading fact-checking
organizations such as Politifact, Factcheck.org, and Washington Post’s factchecker are generally
described as “nonpartisan.” They declare independence not only because they fact-check
political claims from both sides, but also because they arrive at an objective conclusion without
injecting their own biases. Whether these fact-checkers actually exhibit selection bias or slant toward a
specific political group in their assessment of truth is controversial. Media scholars and political
pundits (Amazeen, 2015; Hemingway, 2011; Graves, 2013; Uscinski & Butler, 2013; Uscinski,
2015) often debate the extent to which fact-checkers’ rulings are reliable. Although measuring
fact-checkers’ ideological affinities is difficult, one thing that is clear is that these “mainstream”
fact-checking outlets are among the most professionally objective sources available. Unlike these
sites, some other fact-checkers openly express their political affiliations and aim to expose the
falsehood of the opposing party or media (Rogerson, 2014). For instance, while NewsBusters
focuses on exposing only liberal media bias, Media Matters mostly fact-checks statements from
conservative groups.
Fact-checking as specialized journalistic genre. Many scholars view contemporary
factchecking, especially within the political realm, as a special genre of journalism or news
coverage and discuss fact-checking practices in the journalistic context (Amazeen, 2014;
Coddington, Molyneux, & Lawrence, 2014; Graves, 2013; Graves, Nyhan, & Reifler, 2016).
Indeed, the pursuit of accuracy has long been recognized as a central norm for professional
journalists worldwide (Cooper, 1990; Singer, 2007). Numerous versions of journalistic codes of
ethics have emphasized reporters’ commitment to verification – discerning truthful messages
from deceptive ones – as a cornerstone of professional integrity (Singer, 2007). For instance, the
first sentence in the first edition (1926) of the Society of Professional Journalists’ (SPJ) code of
ethics recognizes that “the duty of journalists is to serve the truth.” The emphasis on the truth-
seeking role of journalists is still maintained in the current SPJ’s codes of ethics and is well
aligned with the undercurrent of the fact-checking movement (there is, however, a view in
journalism that news organizations should openly embrace partisan positions and accept the
notion of “journalists as advocates” as an alternative to the ideal of objectivity; Waisbord, 2008).
However, despite sharing common principles, the differences between the traditional
truth-seeking norms in journalism and the recent factchecking phenomenon deserve some
attention. First, the former focuses on eliminating errors in reporting, while the latter is dedicated
to checking and publicizing the veracity of claims made by others (Amazeen, 2015; Graves &
Glaisyer, 2012). Journalistic institutions have long attempted to produce error-free articles
because valid information directly affects the organization’s reputation and credibility. When
journalists commit errors in reporting, they are encouraged to acknowledge the mistakes and
promptly correct them. Contemporary factcheckers identify potential errors in public claims and
investigate them, employing the same journalistic standards to determine the extent to which the
claim is accurate.
Second, both traditional journalists and dedicated factcheckers emphasize objectivity, but
practice it in different ways (Coddington et al., 2014). American reporters are trained to
distance themselves from any given subject and separate facts from values or opinions, which
results in the style of “he said, she said” reporting (Schudson, 2001). This stylistic practice not
only allows journalists to maintain neutrality, but also minimizes risks of getting involved in
political arguments and inviting negative reactions from their sources. In contrast, fact-checkers
directly engage in “adjudicating factual disputes” (Coddington et al., 2014, p.3) and “publicly
deciding” (Graves, 2013, p.18) their position rather than merely quoting both sides. Accordingly,
they have become the frequent target of criticism and have drawn allegations of bias (Amazeen,
2013).
Although contemporary factchecking diverges from traditional news reporting, this trend
was not unforeseen or unprecedented (Graves, Nyhan, & Reifler, 2016). In the 1990s, journalists
experimented with a new strategy for covering campaigns and developed segments called
“adwatch.” Through this format, journalists verified factual claims in political advertisements
and assigned labels such as False, Misleading, and True (Kaid, Tedesco, & McKinnon, 1996;
Pfau & Louden, 1994). Therefore, some view adwatch as the first incarnation of factchecking
and pay attention to fact-checkers’ watchdog function (Fridkin, Kenney, & Wintersieck, 2015).
In this sense, the literature on the norms and principles of journalism can largely guide
discussions concerning the factchecking phenomenon since journalistic institutions and
factcheckers uphold similar values in truthful reporting.
Previous research on the effects of fact-checking. In general, findings of recent studies
(Amazeen et al., 2015; Gottfried et al., 2013; Nyhan & Reifler, 2015) suggest that fact-checking
has positive effects on people’s knowledge and beliefs. For instance, one study (Gottfried et al.,
2013) found that people who regularly visited a fact-checking site during the 2012 U.S. election
season were more likely to give accurate answers to factual questions about politics (respondents
were tested on the background facts and the two major party presidential candidates’ stands on
issues) than those
who did not visit fact-checking sites. Another experimental study (Nyhan & Reifler, 2015) found
that exposure to fact-checking messages increased participants’ average performance on
questions testing their political knowledge. This tendency was even stronger among those who were
politically sophisticated (knowledgeable) compared to those who were less informed. Similarly,
Amazeen et al. (2015) showed that people who saw a correction debunking a politician’s claim
were more likely to evaluate the false statement as “false” than those who did not see a
correction. These studies found no evidence that individuals’ prior political attitude made
significant differences in the outcome.
Although the majority of recently published research converges on the positive
consequences of fact-checking, a closer look into the earlier studies reveals inconsistent patterns.
For instance, some research findings (Nyhan & Reifler, 2010) indicate a moderating role of
partisanship on the strength of the fact-checking effects. The so-called “backfire” effect shown in
the study of Nyhan and Reifler (2010) illustrates a scenario where individuals who received a
fact-check inconsistent with their prior attitude did not believe in the correction. They found that
participants exposed to a fact-checking message embraced a false claim even more strongly
when they encountered a correction challenging their previously held view. Similarly, another
study (Hart & Nisbet, 2012) found that exposure to counter-attitudinal news stories caused
people to dislike the policies suggested in the news. This component of the “backfire” argument,
however, was not supported in other studies (Weeks & Garrett, 2014; Weeks, 2015; Garrett,
2011). In particular, Weeks and Garrett (2014) argued that corrections are ‘uniformly effective’
regardless of partisanship.
Moreover, unlike the studies investigating the effects of factchecking (i.e., messages that
adjudicate factual disputes or political claims) on individuals’ political knowledge or belief in the
debunked claims, studies (Shin et al., 2016; Thorson, 2015) focusing on other consequences
(e.g., people’s attitudes toward a candidate and social diffusion) report different results. For
example, Thorson (2015) found that even when a correction successfully debunked
misinformation regarding an opposing candidate, the knowledge update did not necessarily
change people’s attitudes toward the candidate. For instance, those who were exposed to both
misinformation and correction about a candidate of the opposing party (reading a negative rumor
first and then subsequently reading a correction) had a more negative attitude toward the
candidate than those who saw neither misinformation nor the correction. She termed the
phenomenon “belief echoes” since the attitude towards the candidate remains unchanged despite
the correction.
Furthermore, corrections often fail to penetrate the target audience vulnerable to
misinformation or to affect the overall rumor dynamics (Friggeri et al., 2014; Shin et al., 2016;
Silverman, 2012). For instance, Shin et al. (2016) examined the diffusion patterns of 57 political
rumors circulating on Twitter and observed relatively little effect of countervailing information
on these rumors. They found that Twitter users showed little interest in challenging the veracity
of the rumors, even though fact-checking sites were available just a few clicks away for
verification. In addition, they found that debunking efforts by fact-checking sites often failed
to slow down the rumor spread. Friggeri et al. (2014) similarly found that, although rumor
debunking on Facebook affected the rumor volume (i.e., re-posts) temporarily, the overall effect
was minimal in the long term.
Social Transmission of Online Content
Increased content sharing on social media. Constant sharing has become one of the
main activities that Internet users engage in today. Recent surveys (IPSOS, 2013) suggest that
over 70% of global Internet users share some type of content (e.g., pictures, media content,
opinions) on social media sites. Accordingly, an increasing number of people (i.e., 46% of
Americans) get their news stories from interpersonal channels such as email forwarding, text
messaging, and social media posting (American Press, 2014). This phenomenon illustrates the
notion of empowered citizens and consumers in the digital age (Bennett & Iyengar, 2008;
Jenkins, 2006; Southwell, 2013). Media users who were traditionally regarded as a passive
audience now have unprecedented power not only in choosing what media channels to tune into,
but also in their ability to share content selectively with other people.
In particular, social media platforms stand at the forefront of the explosive content
sharing phenomenon since they afford individuals new opportunities that were not
previously available. Researchers (boyd, 2010; Ellison, Steinfield, & Lampe, 2011; Treem &
Leonardi, 2012) have identified different sets of social media affordances in both social and
organizational settings. Of these affordances, two are relevant to an increased motivation for
information sharing on social media. First, replicability (boyd, 2010) refers to users’ ability to
easily duplicate content. For instance, social media provides technological features such as share
(Facebook) and retweet (Twitter), which allow users to copy the original post and diffuse it into
their own networks at the click of a button. Low barriers to sharing content on social media
platforms often drive viral phenomena that rely on re-transmission of the original message.
Second, association (Treem & Leonardi, 2012) is an affordance of media that enables users to
show and view explicit connections among individuals or between individuals and content. Users
can express their social relationships through features like friending (Facebook) and following
(Twitter). Also, users can notify others when they endorse certain content through “liking”
(Facebook) or “retweeting.” This type of affordance depends upon connections in social media
and the audience that users want to inform, entertain, and bond with (Ellison, Steinfield, &
Lampe, 2011). These affordances of social media in turn facilitate information sharing among
ordinary users (Lee & Ma, 2012).
However, not all posts or news in social media receive attention or are redistributed. The
selection of media content depends on a number of factors, and the specific mechanisms of viral
content have received increasing research attention in recent years.
examining the predictors of content sharing has focused either on psychological or content
specific factors (Cappella, Kim, & Albarracin, 2014). While psychological factors focus on
individuals’ motivations to choose a particular message over the other, content specific factors
are concerned with message level characteristics such as the utility of information, visibility, and
emotion contained in the message. Although both factors are analytical tools, they are not
mutually exclusive and instead often influence each other (Cappella, et al., 2014). Therefore, it is
important to consider both individual traits and message features in analyzing the popularity of
media content and understanding the mechanism of media choices.
Selective sharing of political information. In recent years, selective exposure has
emerged as a hot topic in political communication. A growing concern is that people
continuously consume one-sided information that supports their own views without exposing
themselves to politically different opinions (Iyengar & Hahn, 2009; Slater, 2007; Stroud, 2012).
Some (Bimber & Davis, 2003; Stroud, 2008; Sunstein, 2001) have speculated that the current
media environments make encountering political disagreements less likely. Unlike the pre-
Internet age, people can now set up their own media system to only include news sources
ideologically aligned with their own views. At the collective level, selective exposure challenges
the democratic notion that citizens are informed of all aspects of an issue. It can also bring about
a more fragmented society by creating political cleavages along party lines (Iyengar & Hahn,
2009; Slater, 2007; Stroud, 2012; Sunstein, 2001).
Due to the important implications of exposure, the lion’s share of previous research on
media choice has focused on selective exposure as opposed to selective information sharing. The
selective exposure literature, building on cognitive dissonance theory (Festinger, 1957), proposes
that people experience psychological discomfort (dissonance) when they are exposed to political
views that conflict with their own. To eliminate such discordances, those who have strong
political preferences choose to tune into media channels that support their views and ignore
channels that challenge their views. This selectivity pattern has also been called a congeniality
bias (Eagly & Chaiken, 1993, 2005) or a confirmation bias (Jonas et al., 2001). Consistent with
the prediction, previous studies (Brannon, Tagler, & Eagly, 2007; Iyengar & Han, 2009; Jonas et
al., 2001; Knobloch-Westerwick & Meng, 2009; Garrett, 2009a; Mutz & Martin, 2001) have
shown that Democrats tend to expose themselves only to Democratic campaign messages and
Republicans to Republican campaign messages.
Recently, however, researchers have begun to explore systematic biases that Internet
users exhibit in their content sharing behavior (Barbera et al., 2015; Conover et al., 2011;
Himelboim et al., 2013; Jacobson et al., 2015). Information sharing can take different forms
across different media. For example, news website visitors can email an interesting story to their
friends or post the story on their social media. Bloggers may choose to embed a hyperlink for a
further read, while Facebook and Twitter users can simply press a “like” or “retweet” button to
distribute an existing message. Ultimately, selective sharing focuses on “retransmission” of
attitude-consistent content, whereas selective exposure is concerned with people tuning into
similar viewpoints. Therefore, sharing carries a stronger social implication than exposure,
because the effects of sharing spill over to the transmitter’s personal network while the effects of
exposure are limited to the person who consumes the message.
Research findings from both the exposure and information sharing literature are generally
comparable. However, bias toward congenial information seems to be more robust in the content
sharing domain than in that of exposure. For instance, some previous studies on selective
exposure (Garrett 2009a; Garrett 2009b; Hart et al., 2009; Knobloch-Westerwick & Kleinman,
2012; Messing & Westwood, 2012) have reported circumstances where people choose to read
content that contradicts their opinion. Perceived popularity of a news article (Messing &
Westwood, 2012) is one factor that can outweigh anticipated dissonance between the partisan
viewer and exposed content. Someone may read an article even though it clashes with their own
views simply because everyone else seems to be reading it. Information utility – perceived
usefulness of the news – is another reason that people choose to read opinion-challenging
messages (Knobloch-Westerwick & Kleinman, 2012). This indicates that selective exposure does
not always occur in the direction that will reduce cognitive dissonance.
On the contrary, previous studies show that selective sharing is more consistent with a
user’s attitude within the political context. U.S. political bloggers were more likely to share
hyperlinks aligned with their own political spectrum than the opposing side (Adamic & Glance,
2005; Jacobson et al., 2015; Hargittai, Gallo, & Kane, 2008; Meraz, 2015). Twitter users also
retweet messages from those sharing similar political attitudes – as opposed to conflicting
attitudes – more often (An, et al., 2014; Barbera et al., 2015; Boutet et al., 2012; Colleoni et al.,
2014; Conover et al., 2011). The question then arises as to why Internet users demonstrate strong
partisan content sharing behaviors. The current study speculates that factors other than
dissonance reduction may underlie a strong selective sharing phenomenon. More specifically, the
study assumes that selective sharing stems from an inherent social understanding that encourages
people to only share a certain view. The next few sections further investigate the mechanism
involved with this phenomenon.
Motivational approaches. A number of different approaches explain individuals’ media
behavior. The most commonly used include mood management theory, uses and gratifications
theory (U&G), and the information utility model (Hartmann, 2009). Despite the differing
theoretical and analytical perspectives, they all aim to answer why and how people select or use
media content (Hartmann, 2009). These theories are similar in that they attribute an individual’s
media choice to a certain motivation, which can be either extrinsic or intrinsic. For instance, the
mood management research attributes people’s media choice to intrinsic motivations such as
seeking pleasure (Knobloch & Zillmann, 2002; Zillmann, 1985; Zillmann, 1988a), while the
information utility research ascribes media choices to extrinsic motivation such as information
seeking (Atkin, 1973; Knobloch, Carpentier, & Zillmann, 2003). U&G studies seek to classify
motivations of individuals’ media usage into meaningful categories reflecting the social and
psychological needs that media serve (Katz, Blumler, & Gurevitch, 1974; Rubin, 2002).
In a political context, models involving two general classes of motivations – defense
motivation and accuracy motivation – have been frequently used to explain how political
partisanship influences citizens’ information processing and media choices (Hart et al., 2009).
Defense motivation refers to the desire to defend positions aligned with one’s own, while
accuracy motivation pertains to the desire to reach a correct or best conclusion (Chaiken,
Liberman, & Eagly, 1989). When a person is motivated to defend his or her previously held
beliefs, the person tends to process opinion-supporting information favorably – whether it is in
the form of content selection or credibility perception. In contrast, when a person is motivated to
form an accurate appraisal of the stimuli, he or she is likely to seek information that guides the
decision and process information in an objective manner. Given that these two motivations are
competing forces, the question is: under what conditions does one type outweigh the other?
In this approach, one popular belief is that an individual’s party identification or
partisanship motivates them to be defensive as opposed to accurate (Kunda, 1990). Specifically,
the partisan motivated reasoning model (Taber & Lodge, 2006) postulates partisans’ preferential
evaluation of attitude-consistent information. Prior work (Druckman & Bolsen, 2013; Hart &
Nisbet, 2012; Mutz, 2009; Nisbet, 2005) found that those with strong opinions about a political
issue tended to interpret information in a manner that reinforces their preexisting beliefs. This is
consistent with defense motivation theory since non-partisans, those who are not committed to a
specific issue or opinion, were less likely to engage in defensive information processing (Taber
& Lodge, 2006). Although it is not clear whether non-partisans are uniformly guided by accuracy
motivation, in the absence of defensive motivation, the desire to gain a realistic view of an object
or issue (i.e., accuracy motivation) may drive non-partisans to seek and use information (Lavine
et al., 2012).
The social identity approach. In keeping with the framework that emphasizes
motivation of partisans, this project specifically draws on the social identity approach (Tajfel,
1972; Tajfel & Turner, 1979; Turner & Oakes, 1986; Turner et al., 1987) as a mechanism to
explain social media users’ selective content sharing. The social identity approach focuses on the
role of social groups, which are defined as “collections of people sharing the same social identity”
(Hogg et al., 2004, p. 247) and which compete with one another over status and power. More
specifically, the social identity approach proposes that when relevant cues are provided,
individuals categorize themselves and others in terms of ingroup and outgroup based on shared
similarities (e.g., race, religion, age, gender, and political party) and exhibit distorted opinions as
well as behaviors based on group identification. For instance, political scientists have shown that
party identification is an important aspect of social identity, which influences individuals’
perceptions about the world surrounding them (Green, Palmquist & Schickler, 2002; Stephen,
2012) and leads to a feeling of “us against them” (Iyengar, Sood, Lelkes, 2012).
The social identity approach is comprised of two sub-theories: social identity theory
(SIT) and self-categorization theory (SCT). The core idea of SIT (Tajfel & Turner, 1979) is that
people are motivated to make group comparisons in favor of their ingroup relative to outgroups,
because doing so can enhance self-esteem. This tendency can manifest in a variety of different
forms including ingroup favoritism, cohesion among ingroup members, stereotyping outgroups,
derogation of outgroup members, and inter-group conflicts (Hogg & Abrams, 2003; Turner,
1999). Drawing on SIT, some scholars (Iyengar, Sood, Lelkes, 2012; Stephen, 2012) have
attributed the growing polarization in American politics to both Republicans’ and Democrats’
motivations to differentiate the ingroup from the outgroup. In other words, political polarization
can be viewed as a phenomenon driven by a pressure to positively differentiate one’s ingroup
from relevant outgroups.
SCT (Turner et al., 1987), an extension of SIT, focuses on cognitive processes in which
individuals categorize ‘self’ and others into a certain group and associate each group with
prototypes (i.e., a set of perceived attributes that represents a group). SCT argues that group
categorization strongly depends upon the context defined in relation to other groups. Therefore,
cues indicative of groups are extremely important. For instance, Oakes (1978) states that cues
should be accessible and fit to be effective. That is, cues should not only be visible (accessible),
but should also match (fit) the prototype of the group. When it comes to political groups,
political leaders who embody the prototype of the entire party – such as presidential nominees
during a presidential election campaign – are viewed as highly prototypical ingroup or outgroup
members. Once these cues become salient to individuals and
produce emotional connections, people further engage in strengthening their group’s prototypes
following the principle of meta-contrast (maximizing the ratio of perceived inter-group
differences to within-group differences) to accentuate the positive distinctiveness of their ingroup.
Therefore, SCT and SIT assume a “dynamic interaction between psychological processes and
the social context” (Turner & Oakes, 1986, p. 240).
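For readers unfamiliar with the principle, meta-contrast is often summarized with a simple ratio; the rendering below is a standard textbook formalization rather than a formula taken from this dissertation. For a perceiver holding position x_i, with I the set of ingroup positions and O the set of outgroup positions, the categorization becomes more salient as

    \mathrm{MCR}_i \;=\; \frac{\frac{1}{|O|}\sum_{o \in O} \lvert x_i - x_o \rvert}{\frac{1}{|I|-1}\sum_{j \in I,\, j \neq i} \lvert x_i - x_j \rvert}

grows above 1, that is, as the average perceived difference from outgroup positions comes to dominate the average perceived difference among ingroup positions.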
The social identity approach is particularly relevant to examining Twitter users’ selective
sharing of political fact-checking messages in two respects. First, Twitter is a platform where
users can easily categorize themselves in terms of ingroup or outgroup. A number of previous
studies have shown that the Twitter political sphere is divided into “red” or “blue” groups where
conservative users mainly interact with other conservative users and liberal users with
other liberal users (Barbera et al., 2015; Conover et al., 2011; Himelboim et al., 2013; Jacobson
et al., 2015). Users may employ such prevalent partisan cues to construct their own identity since
Twitter constantly feeds information about others’ activities and interests. Similarly, Marwick
and boyd (2011) argue that Twitter users monitor followers’ interests and observe what others
post to imagine their audience. This imagined audience subsequently influences what
information Twitter users choose to broadcast.
Second, fact-checking messages are designed to adjudicate a politician’s statements, and
their rulings such as “False,” “True” and “Pants on Fire” can easily invoke the relative standing
of the target group in comparison to the opposing group. For instance, even though fact-checking
organizations claim to not make character judgments, the underlying connotation of their
statements implies that someone is lying or telling the truth. The literature of social psychology
(Brambilla et al., 2012; Leach, Ellemers, & Barreto, 2007; Wojciszke, 2005) suggests that when
evaluating others (or groups), people are sensitive to morality-related qualities (e.g., honesty) as
opposed to competence-related qualities (e.g., intelligence, skills). Therefore, partisans may want
to share fact-checking messages that are positive toward the ingroup party leader and negative
toward the outgroup party leader since highlighting such messages elevates the status of their
own group relative to an outgroup.
Findings from studies (Abrams & Giles, 2007; Appiah, Knobloch-Westerwick, & Alter,
2013; Harwood, 1999; McKinley, Mastro, & Warber, 2014) examining selective exposure in
non-political contexts also support SIT. That is, study participants tended to select negative
rather than positive content featuring their outgroup (Appiah et al.,
2013; Knobloch-Westerwick et al., 2010) and generally avoid content that depicted their ingroup
negatively (Abrams & Giles, 2007). For instance, in the study conducted by Appiah et al. (2014),
African Americans displayed a selection bias towards news positively featuring their ingroup
rather than news featuring their outgroup either positively or negatively. In addition, McKinley et
al. (2014) observed that exposure to positive depictions of their ingroup resulted in a boost of
self-esteem.
This logic can be easily applied to the patterns in which partisans selectively retweet fact-
checking messages. Retweeting can be considered a form of endorsement and a means of
engaging in group behavior. Although motivations for retweeting may vary depending on the
context, researchers commonly found that Twitter users retweet a message because they find
such information important and relevant to their followers (Marwick and boyd, 2010; boyd et al.,
2010; Syn & Oh, 2015). Similarly, boyd et al. (2010) described the act of retweeting as
participating in “a public interplay of voices that give rise to an emotional sense of shared
conversational context” (p. 1). For this reason, of all the communication conventions prevalent on
Twitter (e.g., hashtag, mention), retweeting has shown the highest degree of homophily – a
tendency of people interacting with similar others – and segregation along party lines (Bode et
al., 2015; Bouet et al., 2011; Bruns & Highfield, 2013; Conover et al., 2011; Maireder et al.,
2012).
Based on the particular relevance of retweeting to group identity, two hypotheses are
proposed. First, fact-checking messages that contribute to a positive distinctiveness of the
Democratic Party (messages that are either positive towards Obama or negative towards
Romney) are more likely to be retweeted by Democrats than Republicans. In the same way, the
study hypothesizes that fact-checking messages, which contribute to a positive distinctiveness of the Republican Party (messages that are either positive towards Romney or negative towards
Obama), are more likely to be shared by Republicans than Democrats.
H1: Fact-checking messages that are relatively advantageous to Obama are more likely to
be retweeted by Democrats than Republicans.
H2: Fact-checking messages that are relatively advantageous to Romney are more likely
to be retweeted by Republicans than Democrats.
Ingroup favoritism (e.g., loyalty, attachment, pride) and outgroup derogation (e.g.,
discrimination, negative stereotypes, hostility) are general patterns that are associated with
intergroup biases (the SIT literature suggests that ingroup favoritism is the more prevalent form of intergroup bias than outgroup derogation; Hogg et al., 2006; Brewer, 1999). However, despite the close relationship between ingroup favoritism and outgroup negativity, previous studies (Bourhis & Gagnon, 2001; Brewer, 1999; Kosic, Mannetti, & Livi, 2014; Mummendey & Otten, 2001; Turner & Reynolds, 2001) have shown that these two concepts are not always symmetrical and that one can exist in the absence of the other. For instance, racism does not always accompany strong hatred toward minority outgroups; rather, it is often
characterized by the absence of positive sentiments (e.g., admiration, sympathy, trust) toward
such groups (Dovidio & Gaertner, 1993; Pettigrew & Meertens, 1995; Stangor, Sullivan, & Ford,
1991). In this sense, Brewer (1999) argued that ingroup favoritism and outgroup antagonism are
separable phenomena and only in some conditions are they truly reciprocal.
This raises the question of what conditions give rise to greater ingroup positivity or greater outgroup negativity. The extant literature does not clearly define the relationship between the
two forms of intergroup bias. However, one line of research (Brewer, 1999; Mullen, Brown, &
Smith, 1992; Smith, Spears, & Oyen, 1994; Vanneman & Pettigrew, 1972) focused on the
relative social status of the group (or perceived threat) as an important moderator that determines
the extent to which outgroup hate is stronger than ingroup favoritism. These studies suggest that
lower-status group members exhibit resentment toward the outgroup in an attempt to restore their
self-esteem, while higher-status group members exaggerate the positive aspects of their own
groups to maintain their status.
Although they did not specifically examine this possibility, Knobloch-Westerwick et al.
(2010) found results that corroborate the differential effects of the group status on people’s
media choice. Knobloch-Westerwick et al. (2010) examined how older individuals (50-65 years
old) and younger individuals (18-30 years old) select news content based on the valence of the
message (negative vs. positive) featuring the ingroup and outgroup. Since most societies
associate youthfulness with positive values (e.g., able-minded, better physical condition, up-to-
date), elderly people are considered to be relatively disadvantaged (i.e., ageism). The researchers
found that young readers preferred positive news about their same-aged group (ingroup) but were overall indifferent to news featuring older people, whether positive or negative. In addition, the authors found that older recipients preferred negative news about young people, yet their
preference to select news featuring their own group (old people) positively was relatively weak.
This pattern demands further investigation.
In the fact-checking arena, Republicans may have felt that their group status was
threatened compared to Democrats because the Republican Party – relative to the Democratic
Party – has not performed well by the standards of the major fact-checkers. According to an
analysis conducted by the Center for Media and Public Affairs (CMPA) at George Mason University, Politifact rated Republican claims to be false three times more often than it rated Democratic claims during the start of Obama’s second term (between January and May 2013). Similarly,
another study (Ostermeier, 2011) that examined Politifact’s ratings from January 2010 through
January 2011 found that statements made by Republicans received substantially harsher rulings
than those of Democrats.
In addition, the current study found consistent patterns during the final month of the 2012 U.S. Presidential election – the period when the dataset for this study was collected. As further discussed in the Results section, while 42.3% of the fact-checking tweets (n=82) posted by the three organizations’ Twitter accounts in October 2012 contained rulings that were advantageous to Obama, only 23.7% of them (n=46) were advantageous to Romney. For this reason, Romney’s campaign often expressed resentment toward fact-checkers. For instance,
Romney’s chief pollster, Neil Newhouse, infamously remarked that “we are not going to let our campaign be dictated by fact-checkers” (Bennet, 2012). This sentiment reflects the relatively low status of the Republican Party in the fact-checking domain.
Given the foregoing, this study assumes that Republican Twitter users, especially those
who were exposed to messages from fact-checking organizations, perceived their group’s status
to be relatively lower than that of the Democrats. Bringing to bear the SIT literature on the
Twitter dataset collected during the 2012 Presidential election, the study compares content
sharing patterns between two different groups: arguably the higher status group (Democrats) and
the lower status group (Republicans). More specifically, the study investigates whether derogating the outgroup member (Obama) is relatively more important than cheering their own group member (Romney) for Republicans. Conversely, the study examines whether
promoting the ingroup (Obama) is relatively more important than denigrating the outgroup
(Romney) for Democrats.
R1: Is negative valence towards Obama a more important predictor of retweeting for
Republicans than positive valence towards Romney?
R2: Is positive valence towards Obama a more important predictor of retweeting for
Democrats than negative valence towards Romney?
User Generated Comments
Online commentaries. Along with content sharing, one of the most popular user
activities today is to post comments in response to online content. Nowadays many media
platforms, such as Amazon, YouTube, Facebook, Twitter, and blogs employ user comment
functionality. A large number of major US news websites also allow users to post comments to
the news items (Singer, 2014). Through these features, users submit their opinions, expertise, and
personal stories, and sometimes interact with other users. A study conducted in 2010 (Purcell,
Rainie, Mitchell, Rosentiel, & Olmstead, 2010) found that 25% of adult U.S. Internet users have
commented on blog posts, online news, and product reviews. These comments have also become
useful sources of information for other users, and encourage users to browse different parts of the
website (Fardani, Bitton, Ryokai, & Goldberg, 2010; Liu, Zhou, & Zhao, 2015). In South
Korea, 84% of the Internet news users reported reading other users’ online comments at least
once a week (Lee & Jang, 2010).
Reader comments are important because they facilitate deliberative discourse and bolster
interaction between the public and journalists (Larsson, 2011). Even though the notion of
readers’ feedback to news media has existed for centuries through channels such as “letters to
the editor,” today’s comment sections are much more visible, more transparent, and carry richer
information than traditional tools (Manosevitch & Walker, 2009). In comparing online
comments to news articles and readers’ letters to newspaper editors, McCluskey and Hmielowski
(2012) found that readers’ online comments had a wider range of opinion and affective tone than
reader’s letters. They attributed such differences to the ability to post anonymously, the absence
of gatekeepers, the ease of participation, and the characteristics of a younger audience. In this
sense, communication scholars (Hermida & Thurman, 2007; Karlsson, 2011; Neuberger &
Neurnbergk, 2010) have been increasingly interested in how communication technologies
enhance the audiences’ ability to publicly air their opinions, and how journalists utilize the
comment sections to engage readers in the news making process.
In the context of empowered consumers, readers’ comments on news articles and content
sharing have many characteristics in common. Both indicate readers’ engagement with content
and contribute additional information to the original message for other users (Anderson et al.,
2014; Lee, 2014). However, what distinguishes commenting from content sharing is that
commenting can be, and often is, a result of a reader’s disagreement with the message or other
users. According to a study (Paskin, 2010) that examined a random sample of reader comments
posted on the 10 major US newspapers’ websites, a majority of the comments contained a
negative tone (68%). Paskin (2010) concluded that “readers were more motivated to take the
time to post a comment when they had something negative to say about either the story in
question or about other readers (p.78).”
To date, there are no studies that directly compare the characteristics of the most shared
news articles and the most commented articles. Yet a cursory review of the literature examining
the mechanism of the viral content or the most commented-upon news articles suggests that
disagreement or negative sentiment is a common theme that emerges uniquely in user comments and not in sharing. For instance, in the domain of content sharing, useful (Thorson, 2008) or highly
arousing news articles (Berger & Milkman, 2012) seem to be popular. However, the most
commented news articles tend to be controversial topics such as political news (Liu, Zhou, &
Zhao, 2015; Tenenboim & Cohen, 2015). In discussing this topic, Tenenboim and Cohen (2015)
argued that controversial topics are “inherently related to disagreement among groups or
individuals (p. 204)” and motivate people to voice their opinion.
Media bias expressed in reader comments. User comments can be analyzed in more
ways than content sharing, as the former contains richer qualitative dimensions than the latter.
One of the most discussed characteristics of user-generated comments has been incivility, the
extent to which a comment is offensive (Coe, Kenski, & Rains, 2014). Prior research (Coe et al.,
2014; Diaz Noci et al., 2010; Reich, 2009; Santana, 2014) has shown that user commenting
space is replete with uncivil content, particularly under anonymous conditions. According to Coe
et al. (2014), who content-coded reader comments on a local newspaper, one in five comments
contained some form of incivility such as name-calling, vulgarity, or a disparaging remark. In
particular, they found that news items that divided ingroups and outgroups, such as sports, were more likely to produce uncivil comments than items without such a component. Focusing on the effects of uncivil comments, other studies (Anderson, Brossard, & Scheufele, 2014; Lee, 2012)
have shown that such uncivil comments influence other viewers to form unfavorable attitudes
(e.g., biased, risky) towards the content or the author.
Others have developed their own dimensions to analyze reader comments. Freelon (2015)
classified user comments into three types of norms: deliberative, communitarian, and liberal
individualist. The deliberative norm indicates the user’s openness to challenging ideas including
asking questions, giving reasons, and acknowledging different positions. The communitarianism
norm refers to statements intended for inside members who agree with the commenter. Lastly,
the liberal individualism norm is similar to uncivil remarks including insults and pejorative
language. He found that the dominant type in Twitter posts containing hashtags was communitarian, whereas the most frequent types in newspaper comments were liberal individualism and deliberation. On the other hand, Manosevitch and Walker (2009) analyzed
reader comments to news websites based on six factors: whether the comment contained facts,
additional information sources, personal narratives, expressed value, position on the issue, and
reasons.
This project focuses on comments that express the commenter’s perception of “media
bias” because the concerns about fact-checkers’ bias have never ceased since the phenomenon
gained traction. Media practitioners (Holan, 2015; Roy, 2012; NPR, 2012) and scholars (Graves,
2013; Amazeen, 2014, 2015; Marietta, Barker, & Bowser, 2015; Ostermeier, 2011; Uscinski &
Butler, 2013; Uscinski, 2015) alike have debated the extent to which major fact-checkers are biased. While some (Graves, 2013; Amazeen, 2014, 2015) are more sympathetic to the aim of fact-checkers, others (Marietta et al., 2015; Uscinski & Butler, 2013; Uscinski, 2015) are more critical of the fact-checkers, pointing out their potential partiality and selection bias. Although
a few studies (CMPA, 2013; Ostermeier, 2011) have reported that fact-checkers tend to rate
Republicans’ statements to be “false” more frequently than Democrats’ statements, it is not clear whether this reflects reality or indicates fact-checkers’ bias.
Investigating such controversy is beyond the scope of this paper. Instead, the current
paper is limited to examining readers’ reactions to fact-checks in terms of the perceived bias of
the fact-checkers. Even though only a small number of people who click on a news story may
comment, a systematic analysis of these comments can give insights into the publics’ concerns
(Lee, 2012). Previous studies have shown that readers’ comments about news items have a great
impact not only on others’ perceptions of the general public’s opinion (Kim & Sun, 2006; Lee,
2012), but also on perceptions of the original news content (Lee & Jang, 2010). Although the
extent of fact-checkers’ bias has previously been discussed, there is little research on how the
public actually perceives the fact-checkers’ bias. This study investigates the extent to which Twitter users’ comments display their concerns about the bias of fact-checking messages.
Hostility toward fact-checkers triggered by social identity
Readers’ perceptions of the bias in media messages have been widely examined under the
framework of hostile media perception (HMP). HMP (Gunther, Christen, Liebhart, & Chia,
2001; Gunther & Schmitt, 2004; Vallone, Ross, & Lepper, 1985) refers to a phenomenon where
those who have strong opinions about a certain issue perceive neutral media reports on that issue
to be biased against their own view. The phenomenon is highlighted by the fact that partisans on
both sides of an issue perceive the identical message to be unfavorable to their side (i.e., a
contrast effect) rather than favorable to their side (i.e., an assimilation effect). In a seminal study,
Vallone et al. (1985) examined how pro-Arab and pro-Israeli students perceived news broadcasts
on a controversial topic involving a conflict in the Middle East. When the study participants were
asked to rate fairness and objectivity of the news programs, both groups with pro-Israel and pro-
Arab sentiments saw the news as favoring the opponents. In addition, both groups of partisans
believed that other people would be swayed towards the other side and turn against their side if
they saw the media coverage. Since then, the HMP has received extensive empirical support
across a variety of contexts such as elections (Duck, Terry, & Hogg, 1998; Huge & Glynn,
2010), abortion (Giner-Sorolla & Chaiken, 1994; Hartmann & Tanis, 2013), national security
laws (Choi, Yang, & Chang, 2009), and genetically modified food (Gunther & Leibhart, 2006;
Gunther, Miller, & Leibhart, 2009; Gunther & Schmitt, 2004).
During the past three decades, the HMP literature has also expanded to include the
“relative hostile media effect” (Gunther et al., 2001; Gunther & Christen, 2002; Gunther et al.,
2009), which relaxes the original assumption of objectively “neutral” media coverage. More
specifically, the relative HMP claims that the tendency of partisans to charge media bias is still
confirmed, when media is actually in favor of one side. When viewing genuinely slanted media
coverage, both sides may acknowledge that the coverage is indeed biased, but they would
perceive the relative intensity of bias differently. For instance, Coe et al. (2009) showed that when participants were randomly assigned to view either Comedy Central’s The Daily Show (a liberal-leaning program) or Fox News (a conservative-leaning program), conservatives rated the bias against themselves significantly higher in The Daily Show than did liberals. Similarly, liberals
perceived stronger bias against themselves in Fox News than did conservatives.
Despite the wide body of literature documenting the HMP, there has been a relative lack
of consensus on the theoretical mechanism causing the phenomenon (Reid, 2012). Previously,
several mechanisms have been tested for the HMP. For instance, the “selective recall” account
(Vallone et al., 1985) suggests that partisans recall opposing arguments more than supporting
arguments because contrary information is more salient. However, this explanation has been
largely dismissed by a number of researchers (Arpan & Raney, 2003; Giner-Sorola & Chaiken,
1994; Schmitt et al., 2004) due to the fact that partisans actually recalled supporting arguments
more than opposing arguments. Instead, they (Giner-Sorola & Chaiken, 1994; Schmitt et al.,
2004) probed the possibility of “selective categorization” in which partisans recalled identical
items (i.e., passages) from the stimulus information but categorized the majority of recalled items
as hostile to their own side. However, this idea has received only partial support in previous
studies (Schmitt et al., 2004; Gunther & Liebhart, 2006) and did not lead to further research.
Another explanation is “different standards.” That is, partisans simply perceive opinion-
challenging arguments to be inaccurate. Although this account has been generally supported
(Giner-Sorola & Chaiken, 1994; Gunther et al., 2001), this mechanism still does not sufficiently
explain the research findings in which hostile media perceptions notoriously occur in mass media
contexts such as a news article, but rarely in non-media contexts such as a student essay (Schmitt
et al., 2004). To further examine this issue, Gunther and his colleagues (Gunther & Schmitt,
2004; Gunther & Leibhart, 2006; Gunther et al., 2009; Gunther et al., 2012) offered the
“perceived reach” hypothesis in which recipients estimated the impact of a news article on other
people to be great whereas they perceived the impact of a student essay to be minimal. That is,
because partisans are afraid of the media’s harmful effect on the public, they accuse the media of
reporting with bias and express hostility toward the media.
Additionally, in recent years, the social identity approach has emerged as “one of the
most complete and coherent theoretical accounts of the HME (Hartmann & Tanis, 2013, p.536)”
which can potentially integrate the relevant literature – such as perceived reach, motivated
reasoning, and differential standards – into one model. A number of researchers (Hartmann &
Tanis, 2013; Huge & Glynn, 2010; Reid, 2012; Stroud, Muddiman, & Lee, 2014) have
conceptualized hostile media perceptions as an individual’s attempt to achieve positive group
distinction as well as a coping strategy to deal with a possible threat to the legitimacy of their
group. For instance, in examining a gubernatorial race in Ohio, Huge and Glynn (2010) found
that those at the extreme ends of the ideology spectrum were more likely to perceive the media
coverage of the race as hostile to their candidate than moderate individuals. Similarly, Hartmann
and Tanis (2013) showed that when people were primed to think of themselves as members of the “pro-choice” or “pro-life” group on the abortion issue, those who identified strongly with either group perceived a relatively neutral newspaper article as less favorable toward their own side.
The current research draws on the social identity explanation of HMP, as such an
approach has advantages as an overarching framework. First, social identity theory focuses on
group membership (a strong identification with a group), which is presumed to be a necessary
condition for the HMP. Second, it also describes various phenomena addressed in the HMP
literature such as the “perceived reach” of the media as an important moderator. For instance,
social identity theory predicts that, in a highly competitive inter-group context, people tend to
view themselves and others in terms of either ingroup or outgroup (Tajfel & Turner, 1979;
Turner et al., 1987). In this self-categorization process, when someone does not support fellow
ingroup members, then that someone is likely to be categorized as an outgroup member and
subject to discrimination.
Corrective action. The HMP has also been discussed in the context of corrective actions,
which refers to “political behaviors that are reactive, based on perceptions of media and media
effects, and seek to influence the public sphere (Rojas, 2010, p.347).” The notion of corrective
behaviors focuses on the consequences of media bias perceptions rather than on the perceptions themselves.
Rojas (2010) found that those who perceived media bias were more likely to engage in correcting public opinion through actions such as posting comments in online discussion forums and commenting on internet news. These actions can be seen not only as a way to correct the bias in media reports, but also as an expression of group-based anger or engagement in a political struggle for power between groups. The latter approach has been extensively addressed
in the literature on the social identity model of collective action (SIMCA: van Zomeren,
Postmes, & Spears, 2008; van Zomeren, Leach, & Spears, 2012).
Corrective action, as a behavioral manifestation of hostile media perception, can
potentially explain the recent media watchdog movements dedicated to exposing mass media’s
bias. For example, politically liberal organizations such as Media Matters claim that they
monitor and analyze conservative bias in the U.S. media. On the other hand, conservative
organizations such as the Media Research Center’s NewsBusters aim to expose and raise
questions about liberal media bias. There are also a number of websites committed to revealing the bias of mainstream fact-checkers such as Politifact. While Politifact claims to be non-partisan, both right- and left-wing organizations accuse it of favoring the opposing side.
Hence, the current study views corrective action (i.e., posting a comment calling out
media bias) as a result of social identity triggered by an intergroup context. That is, when
partisans read a fact-checking message that does not take sides with their ingroup, they
categorize the fact-checker as an outgroup member and express resentment. At the same time,
publicly addressing the “problem” of the fact-checker can elevate the status of their ingroup,
since such an act potentially exposes media’s bias in favor of the other group. Therefore, it
should be expected that relatively neutral fact-checking rulings (e.g., Half True, Promise Compromised, Half Flip) are perceived by both Republicans and Democrats as coming from an outgroup and are thus charged with bias. This reasoning predicts that neutral fact-checking messages will receive comments expressing media bias from both groups.
H3: Neutral fact-checking messages are more likely to receive hostile comments from both Republicans and Democrats than from non-partisan users.
On the other hand, based on the relative HMP, the study assumes that fact-checking
messages that elevate the relative status of the Democratic Party (i.e., messages that are either
positive to Obama or negative to Romney) may be perceived to be fair by Democrats, but to be
biased by Republicans. In the same way, fact-checking messages that elevate the relative status
of the Republican Party (i.e., messages that are either positive to Romney or negative to Obama)
may be perceived to be fair by Republicans, but to be biased by Democrats.
H4a: Fact-checking messages that are relatively advantageous to Obama are more likely
to receive hostile comments from Republicans than from Democrats and non-partisans.
H4b: Fact-checking messages that are relatively advantageous to Romney are more likely
to receive hostile comments from Democrats than from Republicans and non-partisans.
The current study further investigates whether Republicans engaged in stronger
corrective action due to their relatively lower group status in the fact-checking domain during the
2012 US Presidential election. The study conducted by Hartmann and Tanis (2013) similarly
documented the importance of a lower group status in activating a HMP among partisans.
Additionally, Huge and Glynn (2010) found that, during a gubernatorial election campaign in
which the Democratic Party was projected to win the election, Democrats exhibited less HMP
than Republicans. Moreover, due to the relatively poor performance of the Republican party
compared to the Democratic party (CMPA, 2013; Ostermeier, 2011), Republicans may view the
fact-checkers as an outgroup member more than Democrats do. Based on this reasoning, the
study examines the following research question.
R3: Is the extent to which partisans express a HMP through comments greater among
Republicans than Democrats?
Chapter 3: Methods
Data Collection
This project investigates how Twitter users shared and reacted to corrective messages
posted on three major fact-checking Twitter accounts – the Tampa Bay Times’ Politifact, Factcheck.org, and The Washington Post’s Factchecker – in October 2012 (the 2012 U.S. Presidential election was held on November 6, 2012). October was chosen
because three presidential debates and one vice presidential debate led to the highest fact-
checking activity that year (Graves & Glaisyer, 2012), and the time prior to an election usually
garners the greatest public attention across the nation (Knobloch-Westerwick, 2012). Each of the
three fact-checking sites owned an account on Twitter and published their fact-checks about the
presidential and vice presidential candidates during this month. Since this project draws on the
social identity approach, the scope of the study is limited to messages pertaining to each party’s
leader, who can easily provoke inter-group competitiveness. In other words, the project only
examined fact-checks that mentioned at least one of two presidential candidates: Barack Obama
or Mitt Romney.
Analysis of the data focused on a large collection of political tweets (n=298,894,327)
collected by the Innovation Lab at USC Annenberg School during the 2012 presidential election
period in the United States (October 2011 to December 2012). This dataset (hereafter larger
political dataset) provides a relatively comprehensive picture of the political Twittersphere in
2012, as it includes all the publicly available tweets containing at least one political keyword. A
team of researchers developed a master list of 427 keywords (e.g., the names of likely
candidates, political parties, issue-specific terminology, and hashtags used to promote
conventions or debates) often found in political Twitter conversations. The list of keywords used
in this study was designed to be as comprehensive as possible in order to capture tweets
associated with politics in 2012, encompassing various nicknames of political parties and
candidates. Specifically, the list was first compiled in October 2011, including 208 keywords,
and was updated manually based on emerging discursive trends during the election cycle (e.g.,
new campaign themes or candidates entering the race). By Election Day, November 6, 2012, the
keywords expanded to 427 unique keywords. The list of keywords is presented in Appendix A.
The data was collected in real-time using the Gnip PowerTrack service (Gnip; http://www.gnip.com), which is also referred to as “the firehose.” Unlike other streaming services (e.g., the Twitter streaming API), the firehose provides 100% of the publicly available tweets for keyword search and metadata such as the time stamp of the tweet, each user’s profile information, the number of followers and followees of the account, as well as any hashtags and URL links attached to the tweet. Twitter activity during the data collection period was captured by a custom Python script running on a local server that stored the incoming tweets as objects in a MongoDB database without modification; at the end of the data collection, the data stored in MongoDB were parsed and indexed into a custom relational database running PostgreSQL.
Using each fact-checker’s username (i.e., handle), the study retrieved original tweets posted by Politifact (n=1,123), Factcheck.org (n=402), and Factchecker (n=295) that mentioned Obama and/or Romney during October 2012 from the larger political dataset. This method excludes retweets and promotional messages (e.g., a tweet from Politifact: “If you hear something during the debate that you'd like us to fact-check, tweet it with #PolitiFactThis”). The total number of fact-checks (tweets containing fact-checking) posted on Politifact, Factcheck.org, and Factchecker’s accounts were 126, 48, and 20 respectively, resulting in 194 fact-checks in total. The details of these fact-checks are discussed in the Results section.
Measurement
Fact-check level
Fact-check’s Ruling. The three fact-checking websites used slightly different styles in
assessing political statements. Politifact’s scale was based on an increasing order of faultiness: “True,” “Mostly True,” “Half True,” “Mostly False,” “False,” and “Pants on Fire” (Politifact also uses “Promise Kept,” “Compromise,” “Promise Broken,” “No Flip,” “Half Flip,” and “Full Flop”). Factchecker
rated political statements on a range of one to four Pinocchios with one Pinocchio for statements
containing slight omissions or exaggerations and four Pinocchios for complete lies. Although rare, they also gave a “Geppetto” rating when a statement was truthful. On the other hand, Factcheck.org did not utilize a standardized rating scale and instead used various linguistic markers such as “False,” “Not possible,” “Nonsense,” “Accurate,” and “Yes” to indicate its rulings. In addition, all of these fact-checking websites occasionally provided nuanced contextual analyses without explicitly stating their rulings. For instance, the fact-check of “Romney called Iraq exit ‘tragic’? Here’s full context. http://t.co/07xoeKk3 #debate,” by Politifact, falls
into the contextualized fact-checking category.
To obtain comparable rulings, this study re-coded each fact-check’s rulings for three
variables: (1) which candidate gained a relative advantage from the fact-check, (2) the valence of
the fact-check toward Obama, (3) the valence of the fact-check toward Romney. The procedure
involved three independent undergraduate coders coding the same message. The unit of analysis
was each fact-checking tweet. Sometimes, multiple tweets pertaining to the same fact-check
topic were found, yet each tweet was independently coded. Coders were asked to provide two
coding values, one based on (a) the fact-checker’s headline (i.e., tweet) and the other based on
(b) the full articles provided by hyperlinks in the text (some fact-checks, such as Factcheck.org’s tweet “Romney says 6 studies prove his revenue-neutral tax plan can work as he plans. But who wrote those ‘studies’? http://t.co/dxDFoghg #debate,” cannot be determined just by reading the headline). The nature of this coding was relatively straightforward and thus reached good inter-coder reliability during the training period where coders independently practiced the protocol with a subset of the fact-checks. When the full sample was coded, Krippendorff’s alphas were all higher than .81 (based on the headlines) and .85 (based on the extended articles) for the three variables (n=194). The remaining
disagreements were discussed and resolved by consensus.
Relative advantage of fact-check for one candidate over the other. The first variable
indicated the extent to which a fact-check’s ruling was explicitly adjudicating the claim
regarding the candidates and eventually sided with one candidate. This variable had three
categories of values: “advantageous to Obama,” “advantageous to Romney,” and “Neutral.” For
instance, the “advantageous to Obama” category included fact-checking statements that
explicitly indicated that Obama’s statement was true, or Romney’s statement was false. It is
worth noting that the study did not differentiate various negative (or positive) rulings toward the
target candidate. For instance, “Pants on Fire,” “False,” “Mostly False,” “Not True,” “Not so,”
three Pinocchios, and four Pinocchios were all considered negative towards the candidate (The Washington Post’s Factchecker illustrates how its Pinocchio ratings correspond to Politifact’s scale: one Pinocchio is similar to “Mostly True,” two Pinocchios to “Half True,” three Pinocchios to “Mostly False,” and four Pinocchios to “False”). In
the same way, the study did not discern various styles of positive rulings such as “True,” “Mostly
True,” and “Yes.” The same rule was applied to the “Romney favorable” category. In addition,
even when a fact-check did not employ a specific scale, if a fact-check indicated that the
statement under investigation was clearly inaccurate or accurate, then it was coded either
“Obama favorable” or “Romney favorable.” The case in point was Factcheck.org’s tweet of
“Romney says he will pay for $5T tax cut without raising deficit or raising taxes on middle class.
Experts say that's not possible,” which was subsequently coded as “Obama favorable.”
All other fact-checks that did not clearly indicate whether the statement under
investigation was false or true were coded as “Neutral.” This included statements containing a
neutral anchor such as “Half True,” “Compromise,” “Half-Flip,” “Mixed,” “Complicated,” and
“Two Pinocchios.” In addition, the “Neutral” category also included contextual analyses such as
“Will Romney increase defense by $2T? It depends what base of comparison you use when
assessing the cost of his plan. http://t.co/LpYvU3R8” issued by Factcheck.org. The detailed
coding results are in Appendix 2.
Valence of the fact-check toward Obama & toward Romney. Each fact-checking tweet
was further coded for its valence toward each candidate. Some fact-checks were directly relevant to
both candidates such as pitting one candidate’s statement against another. However, other fact-
checks were only concerned about one candidate without referencing the other. For example, a
fact-check of “Romney said ‘Obama began his presidency with an apology tour’. Pants on Fire!
http://t.co/PaHqEtnO #debate” is negative to Romney and positive to Obama. However, a fact-
check of “Obama said insurance premiums have gone up slowest in 50 years thanks to
Obamacare. False. http://t.co/tkt2xvf8” contains negativity towards Obama, but no direct
references to Romney. On the other hand, a fact-check of “Obama said he would cut taxes for
small business, middle class. Mostly True. Details: http://t.co/BCIzQnqs #debate” is positive to
Obama without directly mentioning Romney.
Rather than measuring a relative advantage of the fact-check, this set of two variables
directly assesses a valence of the fact-check toward Obama and Romney. Each variable had three
categories: positive, negative, neutral (combining both neutral and absent) for the target
candidate. Based on this coding rule, a fact-check was coded as one of three categories for both
Obama and Romney. For example, the fact-check about Obama’s tax cut proposal for small
business was coded to be “positive” for “valence for Obama” and “neutral” for “valence for
Romney.”
User Level
User’s Party Preference. To identify whether a user who retweeted or replied to three
organizations’ fact-checking messages during October 2012 was Republican or Democrat, the
study utilized automatic sentiment (content) analysis techniques to detect the valence of a given
text for a political party. In this paper, Republican and Democrat are defined in terms of users’ party preference rather than their official affiliation through voter registration. More specifically, the
study combines three different sets of information to code each tweet’s sentiment: linguistic,
data-specific, and social media specific signatures. The linguistic approach uses a standard
lexicon associated with a certain category such as Linguistic Inquiry and Word Count (LIWC)
(Pennebaker, Francis, & Booth, 2001) and Affective Norms for English Words (ANEW)
(Bradley & Lang, 1999). On the other hand, data-specific features for a given category (e.g.,
political affiliation) are extracted directly from the data under investigation using machine-
learning techniques (discussed later). Lastly, signatures of social media include expressions of
various social relationships such as following, liking, and retweeting other users. This assumes that an unknown user’s character can be inferred from the user’s relationship with known accounts (e.g., retweeting the official Republican account).
This study closely follows the steps of Vargo et al. (2014) who used both the linguistic
and data-driven methods in classifying Twitter users’ partisanship in terms of Obama supporters
and Romney supporters. However, this study differs from that of Vargo et al. in two respects.
First, the target classification of this project is Democrats vs. Republicans and not candidate
preference. Second, in addition to the linguistic and data-specific features of Vargo et al., this
study integrates features specific to social media into the final model. That is, the current project
predicted the user’s party preference partially based on whether a given tweet was a retweet of
the known group of media accounts that Democratic and Republican Twitter users tended to
follow (Adams, 2012; Vargo et al., 2014).
A more detailed description of methods is warranted here. To prepare the dataset for
sentiment analysis, this study identified unique Twitter users (n=55,869) from those who
retweeted or replied to three organizations’ fact-checking messages during October 2012. Then
each user’s tweets (n=25,983,635) posted between January 2012 and December 2012 were
subsequently retrieved from the larger political dataset, excluding the fact-checking messages.
The users’ tweets were further divided into two groups: tweets about the Democratic Party but not about the Republican Party (for this group, the study also excluded tweets about the Republican primary candidates), and conversely tweets about the Republican Party but not about the Democratic Party (for this group, the study also excluded tweets mentioning incumbent Obama and his running mate Biden). To achieve this grouping, the study used keyword-search strategies
to identify the Republican Party related tweets mentioning “republican,” “gop,” “rnc” or
“romney,” and excluded tweets mentioning terms related to the Democratic Party such as
“democrats,” “dnc,” “dems,” and “obama.” In reverse, for the Democratic Party related tweets,
the study retained tweets mentioning “democrats,” “dnc,” “dems,” or “obama” but removed
tweets containing “republican,” “gop,” “rnc,” and other Republican primary candidates (those who appeared on at least three primary ballots; see https://en.wikipedia.org/wiki/Republican_Party_presidential_candidates,_2012), including “romney,” “gingrich,” and “santorum.” This process resulted in 6,015,550 tweets for
the Democratic Party and 9,658,672 tweets for the Republican Party.
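To make this grouping concrete, the following minimal Python sketch (not the code used in this study) illustrates the keyword-search rule described above; the keyword lists and function name are abbreviated, hypothetical stand-ins.
```python
# Minimal sketch of the keyword-search grouping described above (not the author's
# original code). A tweet falls into the Republican Party set if it mentions
# Republican-related terms but no Democratic-related terms, and vice versa; tweets
# mentioning both or neither are excluded from both sets. The keyword lists are
# abbreviated illustrations of those used in the study.
DEM_TERMS = ("democrat", "dnc", "dems", "obama", "biden")
REP_TERMS = ("republican", "gop", "rnc", "romney", "gingrich", "santorum")

def party_bucket(tweet_text):
    """Return 'dem', 'rep', or None depending on which party the tweet mentions exclusively."""
    text = tweet_text.lower()
    mentions_dem = any(term in text for term in DEM_TERMS)
    mentions_rep = any(term in text for term in REP_TERMS)
    if mentions_dem and not mentions_rep:
        return "dem"
    if mentions_rep and not mentions_dem:
        return "rep"
    return None  # mixed or neither: excluded from both sets

print(party_bucket("the gop convention was something else"))  # -> 'rep'
```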
This step was necessary because sentiments are difficult to analyze when a tweet contains
more than one target (i.e., contains both “Romney” and “Obama”). For instance, typically
lexicons assign positive values to words such as ‘happy’, ‘love’, and ‘hug’ and negative values to
words such as ‘sad’, ‘hate’, and ‘cry’. According to this approach, the following tweet “UCLA
study finds that female Democrats are hideous Trolls, but Republican Congresswomen are good-
looking: http://t.co/3bRBAX8b #tcot #gop” may be coded as “neutral” due to the presence of both positive (i.e., good-looking) and negative (i.e., hideous) words, even though it is positive toward the
Republican party.
The study chose SentiStrength, a sentiment analysis tool that uses a lexicon of 2,310
sentiment words for assigning positive or negative scores to a given text. SentiStrength is
optimized for short texts such as tweets and has proven to produce “near human accuracy on
general short social web texts (Thelwall, 2010, p.11).” However, it performs less well in political
content or controversial topics where sarcasm or jokes are prevalent (Thelwall, 2010). Therefore,
the study supplements the lexical approach by implementing machine learning, retweet features,
and the t-test method.
To extract additional features unique to the current dataset rather than generic emotional
features (i.e., words) that already exist in SentiStrength, the study developed a model in Python
(programming language) using the Naïve Bayes classifier, a supervised learning system. This
approach extracts features that predict human coded results (i.e., annotated data) based upon
human coded data. This process is often called “training,” where hand-coded texts (e.g., tweets)
are broken into words (known as “tokens”) and subjected to pointwise mutual information (PMI)
to select the words most highly correlated with the outcome (i.e., the hand-coded result such as
“Positive toward the Democratic Party,” “Negative toward the Democratic Party,” and “Neutral
toward the Democratic Party”).
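As an illustration of this training step, the following minimal Python sketch (not the dissertation’s actual code) tokenizes a small hand-labeled sample, ranks tokens by how informative they are for the labels, and fits a Naïve Bayes classifier; the example tweets and labels are invented placeholders, and scikit-learn’s mutual information score is used here as a stand-in for the pointwise mutual information selection described above.
```python
# Minimal sketch of the supervised "training" step (not the author's original code):
# hand-coded tweets are tokenized, label-informative tokens are scored (mutual
# information as a stand-in for PMI), and a Naive Bayes classifier is fit.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

annotated = [
    ("obama fights for the middle class", "positive_dem"),
    ("the dnc convention was a disaster", "negative_dem"),
    ("interesting debate tonight", "neutral_dem"),
    # ... in the full model, roughly 500 hand-coded tweets per party dataset
]
texts, labels = zip(*annotated)

model = Pipeline([
    ("tokens", CountVectorizer(lowercase=True)),            # break tweets into word tokens
    ("select", SelectKBest(mutual_info_classif, k="all")),   # score tokens; in practice only the top few hundred would be kept
    ("nb", MultinomialNB()),                                 # Naive Bayes classifier
])
model.fit(texts, labels)
print(model.predict(["obama delivers for working families"]))
```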
There is no consensus on the appropriate sample size of labeled data to generate efficient
models (Figueroa et al., 2012). Therefore, the study started with a small sample size (n=100) and
increased to 500 by an interval of 100 while comparing its performances. More specifically, a
sample of tweets (e.g., n=100) was randomly drawn from the Republican and the Democratic
Party dataset and was coded by two undergraduate coders for its sentiment towards the target
group as positive, negative, or neutral (inter-coder reliability between the two coders, measured by Krippendorff’s alpha, was above 0.8 each time, and disagreements were resolved by consensus coding). This coded sample was used to extract
representative features predicting the outcome of the valence. To assess the performance of the
model, the study used 10-fold cross validation. This method randomly partitions the sample into
10 equally sized subsamples. It leaves one of the 10 samples for validation and uses the
remaining 9 samples to train data. For both the Republican and Democratic Party dataset, the
size of 500 annotated data demonstrated reasonably strong performance, which is an average
accuracy of 76%. This is consistent with previous studies (Colleoni et al., 2014; Pennacchiotti &
Popescu, 2011) that used machine learning to classify political affiliation. The number of terms
extracted from the final model was 127 (positive Democratic Party), 156 (negative Democratic
Party), 116 (positive Republican Party), and 178 (negative Republican Party).
In addition to the data-derived features, the study added retweeting behavior (network
features inherent to Twitter) as another set of sentiment predictors to determine positive
sentiment toward the political party, as recommended by previous studies (O’Banion &
Birnbaum, 2013; Pennacchiotti & Popescu, 2011). The study compiled two different lists of
Twitter accounts that Democrats and Republicans tend to follow (Vargo et al., 2014) as well as
Twitter accounts associated with Obama and Romney’s campaign (Adams, 2012). To this end,
72 sources were included in the Democratic accounts, and 58 sources in the Republican
accounts. The final list of account names for each group is in Appendix 2. This information was
further used to identify instances where the tweet was a retweet of the Democrat-categorized
account or Republican-categorized account. For instance, the presence of “rt@GOP” in the text
predicted the positive sentiment of the tweet toward the Republican Party.
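A minimal sketch of this network feature, using tiny hypothetical account lists in place of the full lists in Appendix 2, might look as follows (this is not the study’s code):
```python
# Minimal sketch of the retweet-based network feature (not the author's original
# code): retweeting an account from a known partisan list is treated as a signal of
# positive sentiment toward that party. The account lists below are illustrative
# stand-ins for the 72 Democratic and 58 Republican accounts listed in Appendix 2.
import re

DEM_ACCOUNTS = {"barackobama", "thedemocrats", "dccc"}  # illustrative subset
REP_ACCOUNTS = {"gop", "mittromney", "nrcc"}            # illustrative subset

RT_PATTERN = re.compile(r"\brt\s*@(\w+)", re.IGNORECASE)

def retweet_party_signal(tweet_text):
    """Return 'dem' or 'rep' if the tweet retweets a known partisan account, else None."""
    match = RT_PATTERN.search(tweet_text)
    if match is None:
        return None
    handle = match.group(1).lower()
    if handle in DEM_ACCOUNTS:
        return "dem"
    if handle in REP_ACCOUNTS:
        return "rep"
    return None

print(retweet_party_signal("RT @GOP: Our plan for jobs ..."))  # -> 'rep'
```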
Lastly, these two sets of features – data-driven and network-based – were added to the SentiStrength dictionary to predict the sentiment toward the political party. Hence, the study ran the updated SentiStrength lexicon on each tweet in the Republican Party set and the Democratic Party set. The program provided a single sentiment scale ranging from -4 to +4.
For users who had multiple tweets – and therefore multiple scores – for the Democratic Party and
for the Republican Party, a one tailed t-test was conducted with a probability of .10 as the cut-off
to ensure that the sentiment difference between the two groups was significant. Following de
Winter’s (2013) minimum requirement of a two-group t-test, the study chose users who had at
least two tweets for each party. Those users who did not have sufficient data, as well as those
whose sentiment difference did not meet the significance criterion, were labeled as “non-partisan
users” since they lacked a strong party preference on Twitter. This label did not mean that they
were truly independent users; rather, it indicated at the very least that these users did not use
Twitter as a platform to publicly express their political party preference.
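A minimal sketch of this labeling rule is shown below (not the study’s code); the decision direction (higher sentiment toward the Democratic Party implying a Democrat label) is an assumption based on the description above, and the scores are illustrative placeholders.
```python
# Minimal sketch of the user-level labeling rule (not the author's original code):
# a user's per-tweet sentiment scores for Democratic-Party tweets are compared with
# the scores for Republican-Party tweets via a one-tailed t-test (p < .10), requiring
# at least two tweets per party; otherwise the user is labeled non-partisan.
from scipy import stats

def classify_user(dem_scores, rep_scores, alpha=0.10):
    """Label a user 'Democrat', 'Republican', or 'Non-partisan' from per-tweet sentiment scores."""
    if len(dem_scores) < 2 or len(rep_scores) < 2:
        return "Non-partisan"                 # not enough data for a two-group t-test
    t_stat, p_two_sided = stats.ttest_ind(dem_scores, rep_scores)
    p_one_sided = p_two_sided / 2             # convert to a one-tailed test
    if p_one_sided < alpha:
        return "Democrat" if t_stat > 0 else "Republican"
    return "Non-partisan"

print(classify_user([3, 2, 4], [-2, -1, 0]))  # -> 'Democrat'
```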
The study identified 41,225 (73.79%) unique Democrat users and 5,472 (9.79%) unique
Republican users. Non-partisan users were 9,172 (16.42%). The number of Democrats who
retweeted or replied to the three fact-checking organizations in October 2012 was 7.5 times
larger than Republicans who did the same. This means that the majority of users engaged in fact-
checking were Democrats. The accuracy of this information was assessed with a random sample
of 382 users (0.68%) drawn from the entire user population (n=55,869) using a 95% confidence
level and a 5% confidence interval (McIntire & Miller, 2007). Two undergraduate coders
independently coded each user’s party preference, in this random sample, based on each user’s
tweets posted during the data collection period. Again, consensus coding was used to reach one
single value. The human coded user preference agreed with the final model 92% of the time.
Users’ Response Level
Identification of retweets and replies. Since this project focuses on the audience’s
sharing and reaction behavior, each fact-check’s retweets and replies (i.e., “in reply to”) were
also retrieved. Twitter’s retweet feature, which allows users to repost another user’s tweet,
contains the RT @username convention. A user utilizes Twitter’s built-in reply function when
responding to specific messages from other users. The feature is equivalent to posting comments
to a news article. Both ‘retweet(s)’ and ‘in reply to(s)’ are public and appear on the user’s
timeline (i.e., public profile) for their followers. Conveniently, the metadata provided by Gnip’s
PowerTrack service included a tweet ID, a unique numerical identifier assigned to each tweet. In
addition, Gnip also indicated whether a message was a retweet (re-post) or not (original post),
and tagged the original tweet ID if the message was a retweet. Similarly, when a message was a
reply, the service indicated the original tweet ID of the message being replied to. Based on these
parameters, the project identified 91,987 retweets and 1,591 replies for 194 fact-checks.
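As an illustration, this matching step can be sketched as follows (not the study’s code); the metadata field names are hypothetical stand-ins for the fields provided in the Gnip PowerTrack output.
```python
# Minimal sketch of matching retweets and replies to the 194 fact-checks via tweet
# IDs (not the author's original code). The dictionary keys used here ("is_retweet",
# "retweeted_id", "in_reply_to_id") are hypothetical stand-ins for the corresponding
# Gnip PowerTrack metadata fields.
fact_check_ids = {"252525", "252526"}  # IDs of the fact-checking tweets (illustrative)

def categorize_response(tweet):
    """Return 'retweet', 'reply', or None if the tweet does not respond to a fact-check."""
    if tweet.get("is_retweet") and tweet.get("retweeted_id") in fact_check_ids:
        return "retweet"
    if tweet.get("in_reply_to_id") in fact_check_ids:
        return "reply"
    return None

sample = {"id": "999", "is_retweet": True, "retweeted_id": "252525", "in_reply_to_id": None}
print(categorize_response(sample))  # -> 'retweet'
```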
Replies expressing media bias. To identify replies containing the user’s concerns about
media bias, this study developed a codebook based on previous studies measuring HMP
(Gunther et al., 2010; Gunther, 2006). The coding procedure involved three undergraduate
coders independently coding the same messages (n=1,591). The coders categorized each reply as
1 if the user mentioned bias of the fact-check (content) or the fact-checker (journalist). If not, the
reply was coded as 0. The codebook specified that the reply should be coded as 1 when it
complained about the fact-checker’s analysis of political statements, even if a reply did not use a
specific term such as “bias”. For example, a reply of “@politifact yes, he DID begin with an
apology tour and Romney just proved it!!! Thanks for showing YOUR bias” to politifact was
coded as 1, as it expressed the perception of the fact-checker’s bias. Similarly, “@politifact you
are lying. Who fact checks politifact??? Romney would stop public funding for PBS, as a result
PBS would shut down. #p2 #p2b” was also coded as 1. On the other hand, expressions of
resentment or anger towards the target (e.g., Obama or Romney) were not considered “media
bias”. Inter-coder reliability, measured by Krippendorff’s alpha coefficient, was at .78. Coders
discussed disagreements with the author to reach a consensus.
Chapter 4: Results
Preliminary Analysis
Fact-checking tweets. As shown in Table 1, 42.3% of the 194 fact-checking tweets (n=82)
posted by the three accounts in October 2012 contained rulings that were advantageous to
Obama (i.e., either positive to Obama or negative to Romney), while 23.7% of them (n=46) were
advantageous to Romney (i.e., either positive to Romney or negative to Obama). The remaining
34% (n=66) were neutral, as their statements contained either a contextualized analysis or a
neutral anchor.
Table 1. Descriptive statistics for the relative advantage of factchecks
Relative Advantage Percentage
Fact-checks Advantageous to Obama 42.3% (n=82)
Fact-checks Advantageous to Romney 23.7% (n=46)
Neutral Fact-checks 34% (n=66)
Similarly, Table 2 illustrates the breakdown of fact-checks by fact-checking
organizations. 44.4 % (n=56) of Politifact’s fact-checks were advantageous to Obama, 22.2%
(n=28) were advantageous to Romney, and 33.3% (n=42) were neutral. Factcheck.org posted
41.7% (n=20) that were advantageous to Obama, 29.2 % (n=14) that were advantageous to
Romney, and 29.2% (n=14) that were neutral. Lastly, the Washington Post’s Factchecker had 30% (n=6) advantageous to Obama, 20% (n=4) advantageous to Romney, and the remaining 50% (n=10) of their fact-checks neutral.
Table 2. Descriptive statistics for the relative advantage of factcheck by each factchecking
organization
Relative Advantage Politifact Factcheck.org Factchecker
Fact-checks Advantageous to Obama 44.4 % (n=56) 41.7% (n=20) 30% (n=6)
Fact-checks Advantageous to Romney 22.2% (n=28) 29.2% (n=14) 20% (n=4)
Neutral Fact-checks 33.3% (n=42) 29.2% (n=14) 50% (n=10)
Total 100% (n=126) 100% (n=48) 100% (n=20)
In addition to the relative advantage of the fact-checks, the valence of the fact-checking
tweet towards each candidate was also analyzed. Of the 194 fact-checks, 45% (n=67) were
positive towards Obama, 61.1% (n=91) were neutral towards Obama, and 24.2% (n=36) were
negative towards Obama. On the other hand, 40.9% (n=61) of the 149 fact-checks contained
negative valence towards Romney, 59.8% (n=104) were neutral towards Romney, and 19.5%
(n=29) were positive towards Romney. See Table 3.
Table 3. Descriptive statistics for the valence of the fact-checking message towards each
candidate
Valence of the fact-check toward
Obama
Valence of the fact-check toward
Romney
Positive 45% (n=67) 19.5% (n=29)
Negative 24.2% (n=36) 40.9% (n=61)
Neutral 61.1% (n=91) 59.8% (n=104)
Users. Among the users who retweeted or replied to the three fact-checking
organizations, the study identified 73.79% (n=41,225) as Democrats, 9.79% (5,472) as
Republicans, and 16.42% (n=9,172) as non-partisan users. The median number of followers each user had was 116 (Mean=1,409, SD=92,116); a user’s follower count was captured every time the user posted a tweet during the data collection period, so for users with multiple records of follower and followee information, the study took the median across all observations. Of these users, BarackObama, Obama’s official Twitter account (Romney’s account was not identified in the dataset), had the highest number of followers (n=7,709,727), followed by pitbull, the
account of Armando Christian Pérez, a Grammy Award-winning artist. On the other hand, the
median number of accounts that the users themselves (followees) followed was 216 (Mean=434,
SD=3,030). In addition, the total number of tweets a user posted during the data collection period was used to gauge how active the user was. The median number of total tweets posted by the fact-checking users was 79 (Mean=665, SD=4,550).
To investigate further whether users who engaged in fact-checking were systematically
different from other users who tweeted about general political topics, the study compared the
number of followers and followees for each user in the current fact-checking dataset with that of
a random sample of the same size (n=55,869) retrieved from the larger political dataset. The
median number of followers found in the random sample was 72 (Mean= 436, SD= 16,944), and
the median number of accounts that they followed was 123 (Mean= 259, SD=1,676). The median
number of total tweets by these users was 3 (Mean=18, SD=220). See Table 4. In sum, fact-checking users not only had a much higher number of followers and followees, but also posted many more tweets (a median of 79 vs. 3) than the random users. This tendency was still true after
eliminating Obama’s account from the fact-checking users.
Table 4. Comparisons between users of fact-checking and random users
Users of fact-checking Users from the random sample
Median Number of followers 116 (M=1,409) 72 (M=436)
Median Number of followees 216 (M=434) 123 (M=259)
Total Number of Tweets 79 (M=665) 3 (M=18)
Note. Values indicate the median number of each user’s followers and followees over the data collection period. Values in parentheses are the corresponding means.
Retweets of the fact-checks by users. In total, 46,140 unique users retweeted at least one fact-check in the dataset (91,987 retweets). Of these users, 74.4% (n=34,339) were Democrats, 9%
(n=4,145) were Republicans, and 16.1% (n=7,451) were non-partisan users. These users’
retweeting pattern was further analyzed based on three general categories of fact-checks –
whether the ruling was advantageous to Obama, advantageous to Romney, and Neutral. More
specifically, the fact-checks advantageous to Obama were retweeted by 39,386 unique users:
83.09% (n=32,701) Democrats, 1.50% (n=591) Republicans, and 15.4% (n=6,092) Non-
partisans. Furthermore, the fact-checks advantageous to Romney were retweeted by 7,785 unique
users, which was composed of 46.9% Republicans (n=3,602), 20.6% Democrats (n=1,583), and
32.5% Non-partisans (n=2,501). Lastly, the neutral fact-checks were retweeted by 4,591 unique
users, which was further broken down into 75.6% Democrats (n=3,399), 10.1% Republicans
(n=454), 14.3% Non-partisans (n=642). See Table 5.
Table 5. Retweeters grouped by party preference and fact-checking message types

                Advantageous to Obama   Advantageous to Romney   Neutral fact-checks
Democrats       83.09% (n=32,701)       20.6% (n=1,583)          75.6% (n=3,399)
Republicans     1.50% (n=591)           46.9% (n=3,602)          10.1% (n=454)
Non-partisans   15.4% (n=6,092)         32.5% (n=2,501)          14.3% (n=642)
Total           100% (n=39,384)         100% (n=7,686)           100% (n=4,495)
Replies to the fact-checks by users. All the fact-checks combined received a relatively
small number of replies (n=1,591), only a fraction (1.73%) of the volume of retweets (n=91,987).
From these replies, the study identified 1,320 unique users (i.e., repliers). Of those unique
repliers, 51.4% (n=678) were Democrats and 29.8% (n=393) were Republicans. The rest (18.9%, n=249)
were non-partisans. Again, the repliers were compared across the same three general categories of
fact-checks. The repliers to fact-checks advantageous to Obama were 45.2% Democrats (n=383), 35.6%
Republicans (n=302), and 19.2% non-partisan users (n=163). The repliers to fact-checks
advantageous to Romney were 66.9% Democrats (n=192), 15.7% Republicans (n=45), and 17.4%
non-partisan users (n=50). The repliers to neutral fact-checks were 57.5% Democrats (n=142), 25.1%
Republicans (n=62), and 17.4% non-partisan users (n=43). Detailed results are presented in
Table 6.
Table 6. Repliers of fact-checking messages grouped by party preference

                Advantageous to Obama   Advantageous to Romney   Neutral fact-checks
Democrats       45.2% (n=383)           66.9% (n=192)            57.5% (n=142)
Republicans     35.6% (n=302)           15.7% (n=45)             25.1% (n=62)
Non-partisans   19.2% (n=163)           17.4% (n=50)             17.4% (n=43)
Total           100% (n=848)            100% (n=287)             100% (n=247)
Replies expressing concerns about media bias. Of the 1,591 replies, 32.4% (n=515)
contained concerns about the bias of the fact-check or the fact-checker. These included replies
directly using the words "bias" or "unfair," such as "@politifact Horrible, biased interpretation.
No proof Obama said he would 'create daylight.' Your interpretation is NOT a FACT check." They
also included remarks that complained about the conclusion of the fact-check and offered the
user's own evidence, such as "http://t.co/MVhnM7Q1 If I may bring this up. Thanks for reading and
educating yourself. @politifact." Among the unique repliers expressing media bias (n=472), 57.2%
were Republicans (n=270), 35.4% were Democrats (n=167), and 7.4% were non-partisan users (n=35).
More specifically, replies alleging bias in the fact-checks advantageous to Obama came mostly from
Republicans (83.8%, n=232), whereas replies alleging bias in the fact-checks advantageous to
Romney came mostly from Democrats (82.4%, n=98). The neutral fact-checks received bias comments
from both Democrats (50.0%, n=38) and Republicans (42.1%, n=32). The detailed patterns are
presented in Table 7.
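For illustration, explicit mentions of bias can be flagged with a simple keyword match; a minimal
sketch in R is shown below. The study's coding also captured implicit complaints (for example,
replies offering counter-evidence), so this filter is only a rough first pass, and the data frame
'replies' and its columns are assumed names rather than the study's actual objects.

    # First-pass flag for replies that explicitly mention bias; implicit complaints
    # (e.g., replies offering counter-evidence) would still require manual coding.
    replies$explicit_bias <- grepl("\\b(bias(ed)?|unfair)\\b",
                                   replies$text, ignore.case = TRUE)
    table(replies$explicit_bias)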
Table 7. Hostile repliers of fact-checking messages grouped by party preference

                     Advantageous to Obama   Advantageous to Romney   Neutral fact-checks
Democrats            11.2% (n=31)            82.4% (n=98)             50% (n=38)
Republicans          83.8% (n=232)           5% (n=6)                 42.1% (n=32)
Non-partisan users   5.1% (n=14)             12.6% (n=15)             7.9% (n=6)
Total                100% (n=277)            100% (n=119)             100% (n=76)
Hypothesis testing
Retweeting as an intergroup phenomenon. The study hypothesized that fact-checking
messages advantageous to Obama were more likely to be shared by Democrats than by Republicans
(H1). The study also predicted that fact-checking messages advantageous to Romney were less likely
to be shared by Democrats than by Republicans (H2). This set of hypotheses was supported via a
series of chi-square goodness-of-fit tests and post-hoc analyses using the distribution of
Democrats, Republicans, and non-partisan users for each type of fact-check (see Table 8).
A chi-square test for fact-checking messages advantageous to Obama showed that the
three political groups of retweeters were not equally distributed, indicating an effect of
partisanship on selective retweeting, χ²(2, N=39,354) = 22,412, p < .01. Post-hoc comparisons
further confirmed that the proportion of Democrats was significantly higher than that of
Republicans, χ²(1, N=33,292) = 19,248, p < .01. A chi-square test for fact-checking messages
advantageous to Romney also showed a significant effect of partisan group, χ²(2, N=7,686) =
407.43, p < .01, but the sharing pattern was exactly the opposite of that for messages
advantageous to Obama. The proportion of Republicans was significantly higher than that of
Democrats for this category of fact-checking messages, χ²(1, N=6,103) = 19.17, p < .01.
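To make this procedure concrete, a minimal sketch in R of the omnibus goodness-of-fit test and
one post-hoc pairwise comparison is shown below; the counts are those reported in Table 8, and
the object names are illustrative rather than the study's actual code.

    # Retweeter counts for fact-checks advantageous to Obama (Table 8)
    obama_fav <- c(Democrats = 32701, Republicans = 591, Nonpartisans = 6062)

    # Omnibus goodness-of-fit test against an equal (1/3, 1/3, 1/3) distribution
    chisq.test(obama_fav)

    # Post-hoc comparison restricted to Democrats vs. Republicans
    chisq.test(obama_fav[c("Democrats", "Republicans")])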
Table 8. Proportions of three political groups among retweeters of Obama-favorable and
Romney-favorable fact-checking messages

                                      Democrats   Republicans   Non-partisans   χ²
Fact-checks advantageous to Obama     32,701      591           6,062           22,412***
Fact-checks advantageous to Romney    1,583       3,602         2,501           407.43***

***p < .001
Additionally, the study conducted regression analyses as robustness checks on partisans'
selective retweeting behavior. The study ran two sets of regression models predicting the
proportions of Democrats and Republicans among each fact-check's unique retweeters. The purpose of
these analyses was to examine whether the previous results held even after controlling for
message-related variables as well as the specific fact-checker. For these analyses, the study
excluded fact-checking messages retweeted by fewer than 10 unique retweeters.
First, three models with different sets of variables were used to predict the proportion of
Democrat retweeters for each fact-checking message. Model 1 examined the role of the fact-check's
ruling – whether it was advantageous to Obama, advantageous to Romney, or neutral – in attracting
Democrats. Model 2 added message characteristics such as whether the tweet contained a hashtag (#)
or a URL, whether it mentioned another user with the "@username" convention, and whether it was a
live-tweet posted during the presidential and vice-presidential debates. Finally, Model 3
controlled for the specific fact-checking organization.
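A minimal sketch of these nested specifications in R is shown below; it assumes a data frame fc
with one row per eligible fact-check, and all variable names are illustrative rather than taken
from the study's scripts.

    # fc: one row per fact-check retweeted by at least 10 unique users (assumed data frame);
    # prop_democrat: percentage of Democrats among that fact-check's unique retweeters.
    fc$ruling  <- relevel(factor(fc$ruling), ref = "Neutral")      # Obama / Romney / Neutral
    fc$source  <- relevel(factor(fc$source), ref = "Factchecker")  # reference: WaPo's Factchecker
    fc$log_rts <- log(fc$n_retweets)                               # retweet count, log transformed

    m1 <- lm(prop_democrat ~ ruling, data = fc)                                           # Model 1
    m2 <- update(m1, . ~ . + has_url + has_hashtag + has_mention + live_tweet + log_rts)  # Model 2
    m3 <- update(m2, . ~ . + source)                                                      # Model 3
    summary(m3)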
As Table 9 presents, the three models demonstrated strong and robust effects of fact-check
rulings on attracting more or fewer Democrat retweeters (see note 17 below). Obama-favorable
fact-checking messages were more likely to be retweeted by Democrats in comparison to neutral
fact-checking messages. In contrast, Romney-favorable fact-checking messages were less likely to
be retweeted by Democrats compared to neutral fact-checking messages. Although not directly
related to the focus of this research, the study found that fact-checking messages posted during
the live presidential and vice-presidential debates (i.e., live tweets) tended to attract more
Democrat retweeters than other messages. Also, fact-checks posted by the Washington Post's
Factchecker tended to have more Democrat retweeters than those of the other two fact-checkers.
Note 17. In addition, the study ran models with standard errors clustered at the fact-checking
organization level to account for possible correlation in the residuals. These models also
supported the hypotheses.
Table 9. Regression models predicting the proportion of Democrats in each fact-check's retweeters

                                    Model 1            Model 2            Model 3
                                    B (SE)             B (SE)             B (SE)
(Constant)                          65.90 (1.44)***    61.51 (4.31)***    76.23 (8.39)***
Advantage to Obama (ref. Neutral)   21.22 (1.85)***    20.86 (1.81)***    20.15 (1.82)***
Advantage to Romney (ref. Neutral)  -31.84 (2.19)***   -31.94 (2.14)***   -31.88 (2.12)***
A url (ref. none)                                      -0.02 (4.08)       0.66 (4.26)
A hashtag (ref. none)                                  3.77 (1.84)*       3.11 (1.88)
A mention (ref. none)                                  1.97 (3.43)        -6.61 (5.25)
A live-tweet (ref. none)                               3.91 (1.76)*       4.15 (1.75)*
Number of retweets                                     1.87 (0.12)*       2.86 (0.10)**
Politifact (ref. Factchecker)                                             -14.86 (6.78)*
Factcheck.org (ref. Factchecker)                                          -14.46 (7.09)*
Observations                        N=174              N=174              N=174
R²                                  0.80               0.81               0.82

Note. Number of retweets was log transformed. The dependent variable was the proportion of
Democrats among retweeters for each fact-check. *** p < .001, ** p < .01, * p < .05.
Similarly, the same three models were used to examine the effects of fact-checking
rulings on attracting more or fewer Republican retweeters for each fact-checking message. Across
the models, there was again a strong effect of fact-check rulings on the extent to which messages
were retweeted by Republicans. Romney-favorable fact-checking messages were more likely to be
retweeted by Republicans than neutral fact-checking messages. Obama-favorable fact-checking
messages were less likely to be retweeted by Republicans than neutral fact-checking messages.
Neither message-level characteristics nor the specific fact-checking organization influenced the
extent to which Republicans retweeted the fact-checks.
Table 10. Regression models predicting the proportion of Republicans in each fact-check's
retweeters

                                    Model 1            Model 2            Model 3
                                    B (SE)             B (SE)             B (SE)
(Constant)                          11.62 (1.23)***    12.24 (3.83)***    11.96 (7.56)***
Advantage to Obama (ref. Neutral)   -9.40 (1.59)***    -9.14 (1.60)***    -9.09 (1.64)***
Advantage to Romney (ref. Neutral)  31.89 (1.89)***    31.97 (1.90)***    31.97 (1.91)***
A url (ref. none)                                      1.56 (3.63)        1.11 (3.83)
A hashtag (ref. none)                                  -2.18 (1.64)       -2.23 (1.69)
A mention (ref. none)                                  -1.20 (3.05)       -0.83 (4.73)
A live-tweet (ref. none)                               -1.42 (1.57)       -1.42 (1.58)
Number of retweets                                     -1.23 (0.99)       -1.78 (1.21)
Politifact (ref. Factchecker)                                             0.89 (6.11)
Factcheck.org (ref. Factchecker)                                          0.27 (6.39)
Observations                        N=174              N=174              N=174
R²                                  0.77               0.78               0.78

Note. Number of retweets was log transformed. The dependent variable was the proportion of
Republicans among retweeters for each fact-check. *** p < .001, ** p < .01, * p < .05.
Ingroup status differences in retweeting
In addition, the study compared the strength of outgroup negativity relative to ingroup
positivity in attracting retweets from Democrats and Republicans. More specifically, two research
questions asked whether promoting the ingroup member (Obama) was relatively more important than
denigrating the outgroup member (Romney) for Democrats (R1), and conversely whether derogating the
outgroup member was relatively more important than cheering the ingroup member for Republicans
(R2).
For this set of analyses, instead of using relative advantage (i.e., "advantageous to
Obama," "neutral," "advantageous to Romney"), the study utilized a more fine-grained measure that
coded each fact-checking message as having positive, neutral, or negative valence towards each
candidate, Obama and Romney. The study then compared the extent to which a specific valence
towards each candidate contributed to explaining the outcome in the final regression models.
Table 11. Regression models predicting proportions of Democrats and Republicans in each
fact-check's retweeters

                                            Model 1                  Model 2
                                            (prop. of Democrats)     (prop. of Republicans)
                                            B (SE)                   B (SE)
(Constant)                                  51.72 (6.68)***          14.48 (5.95)*
Negative valence for Obama (ref. Neutral)   -30.57 (2.67)***         28.89 (2.37)***
Positive valence for Obama (ref. Neutral)   14.17 (2.49)***          -6.68 (2.21)**
Negative valence for Romney (ref. Neutral)  2.87 (2.61)              -3.43 (2.32)
Positive valence for Romney (ref. Neutral)  -13.75 (2.75)***         12.96 (2.45)***
A url (ref. none)                           0.06 (4.67)              2.45 (4.15)
A hashtag (ref. none)                       -0.56 (2.13)             0.63 (1.90)
A mention (ref. none)                       -4.07 (5.73)             -3.16 (5.10)
A live-tweet (ref. none)                    1.25 (2.22)              -1.26 (1.98)
Number of retweets                          2.86 (0.10)**            -0.77 (0.89)
Factcheck.org (ref. Politifact)             4.11 (7.75)              -2.15 (1.91)
Factchecker (ref. Politifact)               33.09 (7.75)***          -7.25 (6.89)
Observations                                N=174                    N=174
R²                                          0.78                     0.73

Note. The number of retweets was log transformed.
As presented in Table 11, different patterns emerged from the two regression models. In
both models, positive valence towards the ingroup member was a statistically significant
predictor for both Democrat (B=14.17, p <0.01) and Republican retweeters (B=12.96, p < 0.01).
However, the effect of outgroup negativity was significant only for predicting Republican
retweeters (B=28.89, p < 0.01), but not Democrat retweeters (B=2.87, p > 0.5).
Furthermore, the study directly compared the relative importance of two indicator
variables – negative valence towards the outgroup and positive valence towards the ingroup –
within each model. To accomplish this, the study utilized a measure proposed by Silber,
Rosenbaum, and Ross (1995) to estimate the relative contributions of two variables in a model as
the square root of the variance ratio, denoted ω. The analysis was implemented using the
'relimp' package for R (Firth, 2011).
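A minimal sketch of this step in R is shown below; m_rep stands for a fitted model of the
proportion of Republican retweeters (as in Model 2 of Table 11), and the coefficient indices are
assumptions that would need to be matched to the model's actual term order.

    library(relimp)  # Firth (2011)

    # Compare the contributions of two coefficient sets; the printed ratio corresponds
    # to omega, the square root of the variance ratio, with a confidence interval.
    relimp(m_rep,
           set1 = 2,   # e.g., negative valence towards Obama (outgroup negativity)
           set2 = 5,   # e.g., positive valence towards Romney (ingroup positivity)
           label1 = "Outgroup negativity",
           label2 = "Ingroup positivity")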
The results confirmed the stronger effect of outgroup negativity relative to ingroup
positivity for Republicans, and the stronger effect of ingroup positivity relative to outgroup
negativity for Democrats. More precisely, the study found that negative valence towards the
outgroup (Obama) explained more variation (ω = 2.40, 95% CI [1.53, 3.77]) in predicting the
proportion of Republican retweeters than positive valence towards the ingroup (Romney). In
contrast, negative valence towards the outgroup (Romney) contributed less (ω = 0.34, 95% CI
[0.12, 0.98]) than positive valence towards the ingroup (Obama) to predicting the proportion of
Democrat retweeters.
Hostile comments as an intergroup phenomenon. The remaining set of hypotheses
(H3, H4a, and H4b) concerned the extent to which fact-checking messages received comments
expressing concern about media bias from three political groups: Republicans, Democrats, and
non-partisan users. More specifically, the study examined the distributions of hostile commenters
for each fact-checking message type: (1) neutral fact-checking messages, (2) fact-checks
advantageous to Obama, and (3) fact-checks advantageous to Romney. A series of chi-square
goodness-of-fit tests was performed to determine whether the hostile responses from the three
groups were significantly different.
H3 predicted that neutral fact-checking messages were more likely to receive comments
containing accusations of media bias from both Republicans and Democrats than from non-partisans.
To test H3, the study compiled comments containing references to media bias directed at neutral
messages (n=88). Subsequently, the study identified the unique authors (n=76) and their party
preference. Of those who replied to neutral fact-checks mentioning bias, 50% were Democrats
(n=38), 42.11% were Republicans (n=32), and 7.89% were non-partisan users (n=6). A chi-square test
showed that bias comments were not equally distributed among the three political groups,
χ²(2, N=76) = 15.23, p < .001. In addition, post-hoc comparisons revealed that the proportions of
both Democrats and Republicans were significantly higher than that of non-partisan users, yielding
χ²(1, N=50) = 12.46, p < .01 and χ²(1, N=38) = 9.67, p < .01, respectively. The proportions of
Democrats and Republicans were not significantly different, χ²(1, N=70) = 0.08, p > .05.
Table 12. Results of chi-square test and descriptive statistics for corrective comments to
neutral fact-checking messages by three political groups

Users of corrective comments to neutral fact-checks
Democrats     Non-partisans   Republicans
38 (50%)      6 (7.89%)       32 (42.11%)

Note. χ² = 15.23***, df = 2. Numbers in parentheses indicate column percentages.
***p < .001
H4a stated that fact-checking messages relatively advantageous to Obama were more
likely to receive comments with references to media bias from Republicans than from both
Democrats and non-partisan users. A total of 277 unique users posted corrective comments (n=292)
on relatively Obama-favorable fact-checking messages. Of these users, 83% were Republicans
(n=232), 11.19% were Democrats (n=31), and 5.05% were non-partisan users (n=14). A chi-square
omnibus test indicated statistically significant differences among the groups, χ²(2, N=277) =
148.33, p < .001. See Table 13 for the results. Follow-up post-hoc analyses showed that the
proportion of Republicans was significantly higher than that of Democrats and of non-partisans,
χ²(1, N=263) = 77.45, p < .01 and χ²(1, N=246) = 108.99, p < .01, respectively. The proportions of
Democrats and non-partisan users were not significantly different when using the
Bonferroni-adjusted p-value to account for multiple testing, χ²(1, N=45) = 4.46, p = .03, which
does not reach the adjusted significance level.
Table 13. Results of chi-square test and descriptive statistics for corrective comments to
fact-checking messages advantageous to Obama by three political groups

Users of corrective comments to Obama-favorable fact-checks
Democrats      Non-partisans   Republicans
31 (11.19%)    14 (5.05%)      232 (83.0%)

Note. χ² = 148.33***, df = 2. Numbers in parentheses indicate column percentages.
***p < .001
The study used the same set of tests to examine H4b: fact-checking messages relatively
favorable to Romney are more likely to receive corrective comments from Democrats than from
Republicans and non-partisan users. A total of 119 unique users posted hostile comments (n=135)
on relatively Romney-favorable fact-checking messages. Among these users, 82.35% were Democrats
(n=98), 5.04% were Republicans (n=6), and 12.61% were non-partisan users (n=15). The distribution
of the three political groups in this category was again significantly different according to a
chi-square test, χ²(2, N=119) = 60.69, p < .001. Post-hoc tests revealed that the proportion of
Democrats was significantly higher than that of Republicans and of non-partisans, χ²(1, N=104) =
44.74, p < .01 and χ²(1, N=113) = 29.13, p < .01, respectively. Additionally, the proportion of
Republicans was smaller than that of non-partisan users, but this difference did not reach
significance under the Bonferroni criterion, χ²(1, N=21) = 2.27, p = .13.
Table 14. Results of chi-square test and descriptive statistics for corrective comments to
fact-checking messages advantageous to Romney by three political groups

Users of corrective comments to Romney-favorable fact-checks
Democrats      Non-partisans   Republicans
98 (82.35%)    15 (12.61%)     6 (5.04%)

Note. χ² = 60.69***, df = 2. Numbers in parentheses indicate column percentages.
***p < .001
Ingroup status differences in corrective comments. Finally, the last research question
examined whether corrective action towards fact-checkers was stronger among Republicans than
among Democrats. The post-hoc tests for H3 already indicated that there was no significant
difference between the proportions of Democrats (50%) and Republicans (42.11%) in posting hostile
comments to neutral fact-checks, χ²(1, N=70) = 0.08, p > .05. In other words, these neutral
fact-checking messages were accused of "being partial" equally by Democrats and Republicans.
Furthermore, for a more realistic comparison, the study compared differences in hostile
commenting behavior between Democrats and Republicans given their different retweeting behaviors.
That is, the previous analysis assumed an equal distribution of the three groups (33% Democrat,
33% Republican, and 33% non-partisan) and examined how hostile commenters deviated from that
distribution. A more realistic assumption, however, is that the distribution of hostile commenters
should match that of retweeters, which was disproportionately dominated by Democrats.
As expected, the results paint a different picture when the observed distribution of
corrective commenters is examined against the distribution of retweeters of the 194 fact-checks.
Follow-up post-hoc tests revealed that the proportion of Republican corrective commenters was
significantly higher than that of Democrats and of non-partisans, χ²(1, N=70) = 16.90, p < .01 and
χ²(1, N=38) = 11.14, p < .01, respectively. This means that the extent to which Republicans
engaged in corrective action (42.11%) relative to their share of retweeting fact-checks (9.79%)
was greater than the extent to which Democrats engaged in corrective action (50%) given their
retweeting participation (73.79%).
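One straightforward way to operationalize this comparison is a goodness-of-fit test that uses the
retweeter distribution as the expected baseline; a minimal sketch in R with the counts from
Table 15 is shown below. It illustrates the logic only and may not reproduce the exact statistic
reported, since the study's precise computation is not detailed here.

    # Observed corrective commenters on neutral fact-checks and the retweeter baseline (Table 15)
    commenters <- c(Democrats = 38, Nonpartisans = 6, Republicans = 32)
    retweeters <- c(Democrats = 41225, Nonpartisans = 9172, Republicans = 5472)

    # Goodness-of-fit test with the retweeters' proportions as the expected probabilities
    chisq.test(commenters, p = retweeters / sum(retweeters))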
Table 15. Results of chi-square test and descriptive statistics for corrective commenters and
retweeters for neutral fact-checking messages by three political groups

Distribution of corrective commenters vs. retweeters for neutral fact-checks
                          Democrats         Non-partisans    Republicans
Corrective commenters     38 (50%)          6 (7.89%)        32 (42.11%)
Retweeters (baseline)     41,225 (73.79%)   9,172 (16.42%)   5,472 (9.79%)

Note. χ² = 21.04***, df = 2. Numbers in parentheses indicate row percentages.
***p < .001
Table 16. Summary of hypotheses

Retweeting as an intergroup behavior
H1: Fact-checks that are advantageous to Obama are more likely to be shared by Democrats than
    Republicans. (Supported)
H2: Fact-checks that are advantageous to Romney are more likely to be shared by Republicans than
    Democrats. (Supported)

Status differences in retweeting
R1: Is negative valence towards Obama a more important predictor of retweeting for Republicans
    than positive valence towards Romney? (Yes)
R2: Is positive valence towards Obama a more important predictor of retweeting for Democrats than
    negative valence towards Romney? (Yes)

Corrective comments as an intergroup behavior
H3: Neutral fact-checking messages are more likely to receive corrective comments from both
    Republicans and Democrats than from non-partisan users. (Supported)
H4a: Fact-checks that are advantageous to Obama are more likely to receive corrective comments
    from Republicans than from both Democrats and non-partisan users. (Supported)
H4b: Fact-checks that are advantageous to Romney are more likely to receive corrective comments
    from Democrats than from Republicans and non-partisan users. (Supported)

Status differences in corrective comments
R3: Is the extent to which partisans express a hostile media perception (HMP) through comments
    greater among Republicans than Democrats? (Partially supported)
Chapter 5: Discussion
Discussion
This study examined two types of reactions within social media users to political fact-
checking messages. Unlike previous studies, which focused on the effects of consuming fact-
checks on political knowledge (Amazeen et al., 2015; Gottfried et al., 2013; Nyhan & Reifler,
2015), the current study emphasized fact-check consumers’ voluntary sharing and commenting
behavior in a public sphere. The importance of this observation is that while only a small number
of Internet users regularly visit fact-checking sites, those who publicly share what they endorse
and oppose can influence others by increasing exposure to these messages, as well as affecting
interpretations of the original fact-checks (Cappella et al., 2014).
More specifically, the study investigated whether the fact-checking phenomenon served
as a tool for partisans to “cheerlead” their own group and demoralize the opposing group. The
study also examined whether hostility towards fact-checkers was a function of partisanship:
partisans viewed fact-checkers as an outgroup when fact-checking was not supportive of the
ingroup, and thus accused them of being biased. The differences between Democrats and
Republicans were further assessed. The social identity approach – encompassing social identity
theory (SIT) and self-categorization theory (SCT) – is the foundational framework that guides
this research. The social identity approach, which focuses on intergroup social comparisons, has
particular relevance to the network dynamics of social media, since the political landscape
in social media is often palpable enough to spark group competition and trigger
membership-oriented behaviors.
The months leading up to a presidential election are filled with more intense conflicts
between political parties than during the non-election season. Therefore, the study chose to
investigate the patterns of Twitter users’ reactions to the three major fact-checking organizations’
messages one month prior to the 2012 U.S. Presidential election. The study compiled users who
retweeted the fact-checks (n=194) posted by the three fact-checkers’ Twitter accounts –
Politifact, Factcheck.org, and The Washington Post’s Factchecker – as well as those who replied
to these messages. Each user’s party identification was measured by a composite score of the
tweets posted during 2012 as well as the partisan sources they chose to retweet. This chapter
provides a descriptive overview of the results reported earlier and discusses the implications of
these findings in the context of media and political polarization. The study’s limitations are
addressed and future research directions suggested.
Selective retweeting of ingroup favorable messages. A major finding of the study was
that the majority of the fact-check retweeters were Democrats (74.4%). Only 9% were
Republicans, and the rest (16.1%) were non-partisan users. This pattern is generally consistent
with previous studies. Using a national survey, Gottfried et al. (2013) found that Democrats were
more likely to regularly visit fact-checking websites than Republicans. Similarly, Nyhan and
Reifler (2015) reported that Democrats viewed fact-checking more favorably than Republicans.
Such a disproportionate audience for the fact-checking phenomenon raises concerns about audience
segregation based on political affinity and about the reputation of fact-checkers as
non-partisan.
The extent to which partisans selectively shared fact-checking messages that aligned with
their partisanship was also examined, drawing on the social identity approach (Tajfel, 1972;
Tajfel & Turner, 1979), which explains individuals’ behavior from the motivational and
emotional bases of social groupings. Social identity theory proposes that when people feel
strongly about a certain group, and when cues indicating such groups are available, they are
motivated to make social comparisons that foster positive ingroup distinctiveness (Tajfel &
Turner, 1979). Not surprisingly, the literature suggests that the more significant the self-meaning
people derive from the group membership, the more strongly they internalize group norms and
engage in social competition to preserve positive self-esteem (Hogg & Abrams, 2003; Turner &
Oakes, 1986; Turner et al., 1987).
Consistent with this argument, fact-checking messages that were advantageous to the
Obama campaign were shared significantly more by Democrats than by Republicans.
Conversely, fact-checking messages advantageous to the Romney campaign were shared
significantly more by Republicans than by Democrats. This indicates that partisans selectively
used fact-checks that shed a positive light on their group as ammunition to win the intergroup
competition. According to SCT (self-categorization theory), fact-checks containing strong party
cues, such as
naming a certain politician’s statement as “pants on fire” may unintentionally prime readers’
group membership. Emphasizing who is right and who is wrong may color the reader’s judgment
even before reading the entire argument, which defeats fact-checkers’ intention of highlighting
the faultiness of political claims. More research is needed to further examine the effects of
language markers in corrective messages on persuasion.
In addition, these results have implications for the extant literature on political
polarization in the contemporary media environment. Previously, selective media exposure and
political polarization have mainly focused on partisan media outlets – such as Fox or MSNBC –
as opposed to mainstream media outlets, which emphasize balance and fairness (Bennett &
Iyengar, 2008; Iyengar & Hahn, 2009; Levendusky, 2013; Mutz, 2006; Sunstein, 2001). This
study adds another dimension, in which partisans handpick media content that supports their
ingroup from relatively balanced mainstream media outlets. In this scenario, media consumers
themselves play the role of partisan sources in social media by selectively sharing information
with their followers and by serving as an echo chamber that reinforces preexisting beliefs among
like-minded people.
In particular, those who mediate between fact-checking messages and regular audiences
(both retweeters and repliers) play the important role recognized in diffusion studies as that of
opinion leaders. The current study found that those who interacted with the three fact-checking
organizations were more visible, active, and engaged than other political users. Similarly, other
studies (Gottfried et al., 2013; Nyhan & Reifler, 2015) reported that frequent consumers of fact-
checking sites tended to be more educated and knowledgeable about political issues than those
who had not visited a fact-checking site. This information is important because opinion leaders
have the potential to influence others. According to the classic two-step flow model (Coleman,
Katz, & Menzel, 1966; Katz & Lazarsfeld, 1955), information flows from the media to opinion
leaders, and influence travels from opinion leaders to their followers throughout the system. The
literature also suggests that interpersonal communication is more powerful than mass media in
changing people's attitudes or behaviors (Rogers, 2003).
Differences between Democrats and Republicans in selective sharing. Furthermore,
the study examined whether Democrats and Republicans differed in the extent to which they
shared outgroup negative fact-checks. The SIT literature suggested that perceived threats to
group status bolster outgroup hatred relatively more than ingroup love (Brewer, 1999; Mullen,
Brown, & Smith, 1992; Smith, Spears, & Oyen, 1994; Vanneman & Pettigrew, 1972). The study
posited that Republicans may have perceived status threats because fact-checkers had
consistently rated the statements of Republican candidates as false more frequently than those
of Democratic candidates (CMPA, 2013; Ostermeier, 2011). This study also found, in the current
dataset (October 2012), that there were more fact-checks advantageous to Obama than to
Romney. Consistent with this expectation, the analysis revealed that negative valence towards
the outgroup in a fact-check was significantly associated with attracting more Republican
retweeters, whereas the same factor was not significant for Democrat retweeters. Similarly, when
the study directly compared the relative importance of two factors – positive ingroup fact-checks
and negative outgroup fact-checks – the results for Democrats and Republicans were opposite. The
analysis demonstrated a relatively stronger effect of outgroup negativity than ingroup positivity
for Republicans, but the trend was reversed for
Democrats.
This pattern may be explained, in part, by an incumbency effect, whereby the sitting
President (Obama) attracts more attention from both Democrats and Republicans than the
challenger. However, the pattern of the current research findings mirrors exactly that of the
Knobloch-Westerwick et al. (2010) study, which found that higher-status group members (i.e.,
younger people) were not interested in reading news about outgroup members (i.e., older people),
regardless of whether the news was positive or negative. They were only interested in reading positive
news about their own group. In contrast, lower status group members (i.e., older people) were
more interested in reading negative news about the outgroup (i.e., younger people) than reading
positive news about them.
Alternatively, inherent differences between Democrats and Republicans could explain the
discrepancy in their retweeting behavior. Iyengar et al. (2012) observed that, although both
Democrats and Republicans expressed resentment towards each other, the degree of outgroup
derogation was stronger among Republicans than Democrats. Nyhan and Reifler (2015) also
found differential effects of fact-checks on knowledge enhancement between the two groups of
partisans. The educational effects of belief-challenging fact-checks were stronger for Democrats
than those of belief-consistent fact-checks. In contrast, the effects of belief-consistent
fact-checks were stronger for Republicans than those of belief-challenging fact-checks. A more
robust study is needed to
untangle the effect of perceived threat to group status from idiosyncratic partisan characteristics
on relative preference towards ingroup and outgroup information.
Hostile media comments. The social identity account of the hostile media perception
argues that partisans level charges of media bias as a coping mechanism to uphold a positive group identity
(Hartmann & Tanis, 2013; Huge & Glynn, 2010; Reid, 2012; Stroud, Muddiman, & Lee, 2014).
When partisans encounter fact-checking messages that are not perfectly aligned with their group
interests, they demonstrate a contrast effect in which they perceive the fact-check to be further
away from their group than it actually is (Gunther et al., 2001; Gunther, 2015). As a result, those
partisans seek to influence the public by expressing concerns about the fact-checkers’ bias
through public comments. Consistent with the social identity framework, the study found that
neutral fact-checking messages received hostile comments from both Democrats and
Republicans more than from non-partisans. The study also showed that, while fact-checking messages
that were advantageous to Obama received significantly more hostile comments from
Republicans than other political groups (Democrats and non-partisans), fact-checking messages
that were advantageous to Romney invited significantly more hostile comments from Democrats
than other user groups.
These findings indicate that bias perceptions are in the eye of the beholder. The following
examples of users’ comments further illustrate this phenomenon. For fact-checking messages
rendering a “half true,” “mixed,” or “complicated” ruling to Obama or Romney, both Democrats
and Republicans posted similar hostile comments such as “@politifact No, Your fact check as a
fact is a lie. Obama is correct, period! Rachel Maddow is right about your organization!” (from a
Democrat user) and “@politifact your choices of who and what to fact check clearly demonstrate
your left winged bias” (from a Republican user). Such a mirroring pattern was aptly recognized
by one non-partisan user who commented that, “This is why no one likes you.”
These findings have critical implications for fact-checking practitioners. Readers’
comments about media content – especially negative opinions – can greatly influence other
viewers’ evaluation of a news story (Anderson, Brossard, & Scheufele, 2014; Kim & Sun, 2006;
Lee & Jang, 2010). Communication scholars (Chaiken, Liberman, & Eagly, 1989; Petty &
Cacioppo, 1986) have long recognized the role of source characteristics in persuasion such that
perceived source credibility spills over to the message by influencing the power of the argument.
Therefore, users' comments depicting fact-checkers as biased sources can hinder fact-checking
organizations' ability to effectively challenge misinformation by damaging their
reputation. Such perceptions can also lead to avoidance of reading messages produced by that
source (Borah, Thorson, & Hwang, 2015).
It is important to note, however, that one study (Lee, 2012) found that users’ hostile
comments actually attenuated other partisan viewers’ perceptions of media bias. Hostile media
perception theory suggests that partisans’ anger and hostility towards media are due, in part, to
their projection that media exert a large influence on the public mind (Gunther, 1988; Gunther et
al., 2001). Therefore, if fellow users' negative comments provide cues that the media are not so
successful in influencing the public's mind, such comments may lower viewers' hostility towards
the media. Despite this possibility, more research is needed to understand the cumulative effects
of negative comments on perceptions of the fact-checkers' credibility. If skepticism of the
fact-checking movement continues, it can erode the reputation of the entire industry, which is
hard to restore once it is severely damaged.
Differences between Democrats and Republicans in hostile comments. The study
further examined whether Democrats and Republicans differed in the extent to which they
expressed concerns about media bias in neutral fact-checks. The results showed no significant
differences between Democrats and Republicans in terms of posting hostile comments. Yet, this
analysis assumed that the same number of Democrats and Republicans consumed fact-checking
messages. To account for the disproportionately large number of Democratic retweeters, the study
compared the distribution of hostile commenters with that of retweeters. Against this baseline,
Republicans tended to post more hostile media comments on neutral fact-checking messages than
Democrats did. Such findings are comparable to previous studies (Hartmann & Tanis, 2013; Huge &
Glynn, 2010), which observed a stronger hostile media effect for a lower-status group than for a
higher-status group. Further research is needed to
investigate this phenomenon.
Limitations
This study is subject to a number of limitations. First, this study focused on the manifest
behaviors of social media users such as retweeting and commenting, rather than their exposure to
fact-checking messages. Although retweeting is a common activity that requires minimal effort
on Twitter, retweeters may not be representative of all fact-checking consumers. This study
recommends that future research systematically examine the potential discrepancy between
selective exposure and selective sharing. In the same vein, the current study measured each
fact-checking user's party preference using the tweets posted during the data collection period.
The explicit endorsement of a political party or a candidate through retweeting or posting is
clear, but the lack of such behavior does not necessarily mean that these users are non-partisans.
Second, the current project focused on partisans’ responses to fact-checking messages as
opposed to non-partisans. Indeed, a majority of fact-checking users in the current dataset
(83.58%) were identified as either Democrats or Republicans, and only a fraction of the users
(16.42%) were non-partisans. However, since these non-partisans may be motivated more by
accuracy goals (the desire to elect a better candidate) than by directional goals (the desire to
support a particular candidate), they may be the best target audience for fact-checkers. Therefore,
examining how non-partisans consume and share fact-checking messages would provide useful
insights for practitioners.
Lastly, drawing on the social identity approach, this study assumed that social media
users who read fact-checking messages were primed to think in terms of two competing political
parties. Although previous studies have reported that partisan cues are abundant in social media
including Twitter, ensuring whether exposure to fact-checking headlines actually activates
partisanship would be a robust approach to addressing social identity theory. Further, this study
categorized a number of different rulings into the same category. For instance, texts containing
“False,” “Mostly False,” “Pants on Fire,” and “Not True” were all treated as negative rulings
toward the target. Hence, future research differentiating linguistic markers would help us better
understand the effects of message framing on partisanship activation.
Conclusion
Fact-checking is an innovative form of delivering political information to the public and
holding politicians accountable (Graves, 2015; Nyhan & Reifler, 2015). Due to its unique
contribution to the political sphere, the public as well as political actors have shown increasing
interest in the fact-checking phenomenon over the past few years. Despite the buzz, however,
little research has investigated how fact-checking messages – published by elite fact-checking
organizations – are actually spread and viewed in a social media setting where people normally
encounter such messages. To fill this gap, this study investigated Twitter users’ responses to
fact-checking messages by analyzing a large set of behavioral data gathered unobtrusively and in
real-time during the 2012 U.S. presidential election season. The results showed that partisans
selectively shared fact-checking messages that were favorable to a candidate from the same
political party and selectively criticized fact-checkers for showing a bias through public
comments. Republicans, who allegedly occupied a lower status in the fact-checking domain
relative to Democrats, showed stronger negativity towards the Democratic Party and the fact-
checkers.
These findings raise concerns over a self-reinforcing cycle that polarizes media
audiences. Since partisan users who are active consumers of fact-checking filter certain “facts”
for their followers, this second-level agenda setting can contribute to the selective exposure and
echo-chamber phenomenon. In addition, hostile comments toward a fact-checker posted by
fellow partisans (ingroup members) may carry extra weight and influence viewers’ attitudes
negatively. Examining the fact-checking movement from the active audience paradigm in which
individuals actively process information for their own purpose would be a fruitful approach for
future research.
Bibliography
Aarts, H., & Dijksterhuis, A. (2003). The silence of the library: environment, situational norm,
and social behavior. Journal of personality and social psychology, 84(1), 18.
Adair, B., & Thakore, I. (2015). Factchecking census finds continued growth around the world.
Retrieved from http://reporterslab.org/tag/factcheck-org/
Adams, R. (2012, June). US elections 2012: Top 50 Twitter accounts to follow. The Guardian.
Available at : http://www.theguardian.com/world/richard-adams-blog/2012/jun/12/us-
elections-twitter-top-50
Amazeen, M. A. (2013). Making a difference? A critical assessment of fact-checking in 2012.
New America Foundation. Available at
http://www.democracyfund.org/media/uploaded/Amazeen_-
A_Critical_Assessment_of_Factchecking.pdf
Amazeen, M. A. (2015). Revisiting the epistemology of fact-checking. Critical Review, 27(1), 1-
22.
Amazeen, M. A. (2015). Developing an Ad-reporting typology: A network analysis approach to
newspaper and fact-checker. Journalism & Mass Communication Quarterly, 1-25. doi:
10.1177/1077699015574099
Amazeen, M.A, Thorson, E., Muddiman, A., & Graves, L. (2015 February). A comparison of
correction formats: The effectiveness and effects of rating scale versus contextual corrections
on misinformation. Available at http://www.americanpressinstitute.org/wp-
content/uploads/2015/04/The-Effectiveness-of-Rating-Scales.pdf
An, J., Cha, M., Gummadi, K., & Crowcroft, J. (2011). Media landscape in Twitter: a world of
new conventions and political diversity. In Proceedings of the 5th International AAAI
conference on Weblogs and Social Media (ICWSM), Barcelona, July. Available at
http://www.mpi-sws.org/gummadi/papers/icwsm2011.pdf
An, J., Quercia, D., Cha, M., Gummadi, K., & Crowcroft, J. (2014). Sharing political news: the
balancing act of intimacy and socialization in selective exposure. EPJ Data Science, 12, 1-
21.
Andersen, P.A., & Blackburn, T. R. (2004). An experimental study of language intensity and
response rate in email survey. Communication Reports, 17, 73-84.
Anderson, A. A., Brossard, D., Scheufele, D. A., Xenos, M. A., & Ladwig, P. (2014). The “nasty
effect:” online incivility and risk perceptions of emerging technologies. Journal of
Computer-Mediated Communication, 19(3), 373-387.
Appiah, O., Knobloch-Westerwick, S., & Alter, S. (2013). Ingroup favoritism and outgroup
derogation: Effects of news valence, character race, and recipient race on selective news
reading. Journal of Communication, 63, 517-534.
Arpan, L. M., & Nabi, R. L. (2011). Exploring anger in the hostile media process: Effects on
news preferences and source evaluation. Journalism & Mass Communication Quarterly,
88(1), 5-22.
Atkin, C. K. (1973). Instrumental utilities and information seeking. In P. Clarke (Ed.), New
Models for Mass Communication Research (pp.205-242). Beverly Hills, CA: Sage.
Aune, R. K., & Kikuchi, T. (1993). Effects of language intensity similarity on perceptions of
credibility relational attributions, and persuasion. Journal of Language and Social
Psychology, 12(3), 224-238.
Bankre, S. (1992). The ethics of political marketing practices: the rhetorical perspective. Journal
of Business Ethics, 11, 843-848.
Baumeister, R. F., Bratslavsky, E., Finkenauer, C., & Vohs, K. D. (2001). Bad is stronger than
good. Review of general psychology, 5(4), 323.
Bennet, J. (2012, August). "We're not going to let our campaign be dictated by fact-checkers."
The Atlantic. Retrieved
from http://www.theatlantic.com/politics/archive/2012/08/were-not-going-to-let-our-
campaign-be-dictated-by-fact-checkers/261674/
Bennett, W. L., & Iyengar, S. (2008). A new era of minimal effects? The changing foundations
of political communication. Journal of Communication, 58(4), 707-731.
Berger, J. (2014). Word of mouth and interpersonal communication: A review and directions for
future research. Journal of Consumer Psychology, 24(4), 586-607.
Berger, J., & Milkman, K. L. (2012). What makes online content viral?. Journal of Marketing
Research, 49(2), 192-205.
Bickart, B., & Schindler, R. M. (2001). Internet forums as influential sources of consumer
information. Journal of interactive marketing, 15(3), 31-40.
Bimber, B., & Davis, R. (2003). Campaigning online: The Internet in U.S. elections. Oxford:
Oxford University Press.
Bimber, B. (2014). Digital media in the Obama campaigns of 2008 and 2012: Adaptation to the
personalized political communication environment. Journal of Information Technology &
Politics, 11(2), 130-150.
Bobkowski, P. S. (2015). Sharing the news: Effects of information utility and opinion leadership
on online news sharing. Journalism & Mass Communication Quarterly, 92(2), 320-345.
Bohner, G., Bless, H., Schwarz, N., & Strack, F. (1988). What triggers causal attributions? The
impact of valence and subjective probability. European Journal of Social Psychology, 18(4),
335-345.
Borah, P., Thorson, K., & Hwang, H. (2015). Journal of Information Technology & Politics,
12(2), 186-199. doi : 10.1080/19331681.2015.1008608
Boyd, D. (2010). Social network sites as networked publics: Affordances, dynamics, and
implications. Networked self: Identity, community, and culture on social network sites.
Bradac, J. J. (1988). Language variables: conceptual and methodological problems of
instantiation. A handbook for the study of human communication: methods and instruments
for observing, measuring, and assessing communication processes. Westport, CT: Ablex
Publishing.
Bradley, M.M., & Lang, P. J. (1999). Affective norms for English words (ANEW): Instruction
manual and affective ratings. Technical Report. Gainesville, FL: The Center for Research in
Psychophysiology, University of Florida.
Brambilla, M., Sacchi, S., Rusconi, P., Cherubini, P., & Yzerbyt, V.Y. (2012). You want to give
a good impression? Be honest! Moral traits dominate group impression formation. British
Journal of Social Psychology, 51(1), 149-166. doi:10.1111/j.2044-8309.2010.02011.x
Brannon, L.A., Tagler, M.J., & Eagly, A.H. (2007). The moderating role of attitude strength in
selective exposure to information. Journal of Experimental Social Psychology, 43, 611-617,
Bryant, J., & Oliver, M. B. (Eds.). (2009). Media effects: Advances in theory and research.
Routledge.
Burgoon, M., Jones, S. B., & Stewart, D. (1975). Toward a message centered theory of
persuasion: Three empirical investigations of language intensity. Human Communication
Research, 1(3), 240-256.
Cacioppo, J. T., Gardner, W. L., & Berntson, G. G. (1997). Beyond bipolar conceptualizations
and measures: The case of attitudes and evaluative space. Personality and Social Psychology
Review, 1(1), 3-25.
Cadinu, M., & Reggiori, C. (2002). Discrimination of a low-status outgroup: The role of ingroup
threat. European Journal of Social Psychology, 32(4), 501-515.
Campbell, M.J., Julious, S.A., & Altman, D.G. (1995). Estimating sample sizes for binary,
ordered categorical, and continuous outcomes in two group comparisons. BMJ: British
Medical Journal, 311, 1145-1148.
Cappella, J. N., Kim, H. S., & Albarracín, D. (2014). Selection and Transmission Processes for
Information in the Emerging Media Environment: Psychological Motives and Message
Characteristics. Media Psychology. Advance online publication.
Chaffee, S. H. (1986). Mass Media and Interpersonal Channels: Competitive, convergent, or
complementary. In G. Gumpert & R. Cathcart (Eds.), Inter/media : Interpersonal
Communication in a Media world. New York: Oxford University Press, pp.62-80.
Chaiken, S., Liberman, A., & Eagly, A. H. (1989). Heuristic and systematic information
processing within and beyond the persuasion context. In J.S. Uleman & J. A. Bargh (Eds.),
Unintended thought (pp.212-252). New York : Guildford Press.
Choi, J., Yang, M., & Chang, J.C. (2009). Elaboration of the hostile media phenomenon : The
role of involvement, media skepticism, congruency of perceived media influence, and
perceived opinion climate. Communication Research, 36, 54-75.
Center for Media and Public Affairs (2013). Media fact-checker says Republicans lie more.
Retrieved from http://cmpa.gmu.edu/study-media-fact-checker-says-republicans-lie-more/
Chevalier, J., & Mayzlin, D. (2006). The effect of word of mouth on sales: Online book reviews.
Journal of Marketing Research, 43(3), 345-354.
Chen, H. C., Reardon, R., Rea, C., & Moore, D. J. (1992). Forewarning of content and
involvement: Consequences for persuasion and resistance to persuasion. Journal of
Experimental Social Psychology, 28(6), 523-541.
Ciampaglia, G. L., Shiralkar, P., Rocha, L. M., Bollen, J., et al. (2015). Computational fact-
checking from knowledge networks. PLOS ONE, 10(10). DOI:
10.1371/journal.pone.0128193.
Coddington, M., Molyneux, L., & Lawrence, R.G. (2014). Factchecking the campaign: How
political reporters use Twitter to set the record straight (or not). The International Journal of
Press/Politics, 19(4), 391-409. doi: 10.1177/1940161214540942
Coe, K., Tewksbury, D., Bond, B.J., Drogos, K.L., Porter, R.W., Yahn, A., & Zhang, Y. (2008).
Hostile news: Partisan use and perceptions of cable news programming. Journal of
Communication, 58(2), 201-209.
Coe, K., Kenski, K., & Rains, S. A. (2014). Online and uncivil? Patterns and determinants of
incivility in newspaper website comments. Journal of Communication, 64(4), 658-679.
Cooper, T. (1990). Comparative international media ethics. Journal of Mass Media Ethics, 5(1),
3-14.
Cotton, J. L. (1985). Cognitive dissonance in selective exposure. In D. Zillmann and J. Bryant
(Eds.)., Selective exposure to communication. New York: Routledge.
Craig, T. Y., & Blankenship, K. L. (2011). Language and persuasion: Linguistic extremity
influences message processing and behavioral intentions. Journal of Language and Social
Psychology, 30(3), 290-310. doi:10.1177/0261927X11407167
De Fleur, M. L. (1987). The Growth and Decline of Research on the Diffusion of the News,
1945-1985. Communication Research, 14 (1), 109–130. doi:
10.1177/009365087014001006
De Winter, J.C.F. (2013). Using the student’s t-test with extremely small sample sizes. Practical
Assessment, Research, & Evaluation, 18(10), 1-11.
Duan, W., Gu, B., & Whinston, A. (2008). The dynamics of online word-of-mouth and product
sales: An empirical investigation of the movie industry. Journal of Retailing, 84(2), 233-
242.
Dubé, E., Bettinger, J. A., Halperin, B., Bradet, R., Lavoie, F., Sauvageau, C., ... & Boulianne,
N. (2012). Determinants of parents’ decision to vaccinate their children against rotavirus:
results of a longitudinal study. Health education research, 27(6), 1069-1080.
Duck, J.M., Terry, D.J., & Hogg, M.A. (1998). Perceptions of media campaign: The role of
social identity and the changing intergroup context. Personality and Social Psychology
Bulletin, 24, 3-16.
Eagly, A. H., & Chaiken, S. (1992). The psychology of attitudes. Fort Worth, TX: Harcourt
Brace Jovanovich.
Eagly, A. H., & Chaiken, S. (2005). Attitude research in the 21st century: The current state of
knowledge. In D. Albarracin, B.T. Johnson, & M.P. Zanna (Eds.), The handbook of attitudes
(pp.743-767). Mahwah, NJ: Erlbaum.
Ellison, N.B., Steinfield, C., & Lampe, C. (2011). Social capital implications of Facebook-
enabled communication practice. New Media & Society, 13(6), 873-892.
Elmer, G. (2012). Live research : Twittering an election debate. New media & Society, 15(1), 18-
30. doi: 10.1177/1461444812457328
Enli, G., & Naper, A.A. (2016). Social media Incumbent Advantage: Barack Obama and Mitt
Romney’s Tweets in the 2012 US presidential election campaign. In A. Bruns., G. Enli., E.
Skogerbo., A.O. Larsson., & C. Christensen. (Eds.), The Routledge Companion to Social
Media and Politics (pp.346-377). New York: Routledge.
Esposito, J. L. (1987). Subjective factors and rumor transmission : A field investigation of the
influence of anxiety, importance, and belief on rumormongering. Doctoral dissertation,
Temple University, dissertation abstracts international, 48, 596B.
Feldman, L., Hart, P.S., Leiserowitz, A., Maibach, E., Roser-Renouf, C. (2015). Do hostile
media perceptions lead to action? The role of hostile media perceptions, political efficacy,
and ideology in predicting climate change activism. Communication Research. Advance
online publication. doi: 10.1177/0093650214565914
Festinger, L. (1957). A theory of cognitive dissonance. Evanston, IL: Peterson.
Firth, D. (2011). relimp: Relative contribution of effects in a regression model. R package
version.
Fischer, P., Jonas, E., Frey, D., & Schulz-Hardt, S. (2005). Selective exposure to information:
The impact of information limits. European Journal of Social Psychology, 35(4), 469-492.
Fiske, S. T. (1980). Attention and weight in person perception: The impact of negative and
extreme behavior. Journal of Personality and Social Psychology, 38(6), 889.
Freelon, D. (2015). Discourse architecture, ideology, and democratic norms in online political
discussion. New Media & Society, 17(5), 772-791. doi: 10.1177/1461444813513259
Fridkin, K., Kenney, P. J., & Wintersieck, A. (2015). Liar, Liar, Pants on Fire: How Fact-
Checking Influences Citizens’ Reactions to Negative Advertising. Political Communication,
32(1), 127-151.
Fuegen, K., & Brehm, J. W. (2004). The intensity of affect and resistance to social influence.
In E. S. Knowles & J. A. Linn (Eds.), Resistance and persuasion (pp. 412-450). Mahwah,
NJ: Lawrence Erlbaum.
Garrett, R.K. (2009a). Echo chamber online? Politically motivated selective exposure among
Internet news users. Journal of Computer Mediated Communication, 14(2): 256-285.
Garrett, R.K. (2009b). Politically motivated reinforcement seeking: Reframing the selective
exposure debate. Journal of Communication, 59(4): 676-699.
Garrett, R.K. (2011). Troubling Consequences of Online Political Rumoring. Human
Communication Research, 37(2), 255–274. doi: 10.1111/j.1468-2958.2010.01401.x
Garrett, R.K. (2013). Selective exposure: New methods and new directions. Communication
Methods and Measures, 7(3-4), 247-256.
Gekoski, A., Gray, J. M., & Adler, J. R. (2012). What makes a homicide newsworthy? UK
national tabloid newspaper journalists tell all. British Journal of Criminology, 52(6), 1212-
1232.
Giner-Sorolla, R., & Chaiken, S. (1994). The causes of hostile media judgments. Journal of
Experimental Social Psychology, 30, 165-180.
Gottfried, J.A., Hardy, B.W., Winneg, K.M., & Jamieson, K.H. (2013). Did fact checking matter
in the 2012 presidential campaign? American Behavioral Scientist, 57(11), 1558-1567.
Graves, L., & Glaisyer, T. (2012, February). The fact-checking universe in Spring 2012. New
America Foundation. Retrieved from http://newamerica.net/sites/newamerica.net/files/
policydocs/The_Fact-checking_Universe_in_2012.pdf
Graves, L. (2013). “Deciding What’s True: Fact-Checking Journalism and the New Ecology of News.” Unpublished dissertation, Graduate School of Journalism, Columbia University, New York.
Graves, L. (2013). What we can learn from the factcheckers’ ratings. Columbia Journalism
Review. Available at
http://www.cjr.org/united_states_project/what_we_can_learn_from_the_factcheckers_rati
ngs.php
Graves, L., Nyhan, B., & Reifler, J. (2016). Field experiment examining motivations for fact-
checking. Journal of Communication, 66(1), 102-138.
Greenberg, B.S. (1964). Diffusion of news about the Kennedy assassination. Public Opinion Quarterly, 28, 225-232.
Hargittai, E., Gallo, J., & Kane, M. (2008). Cross-ideological discussions among conservative
and liberal bloggers. Public Choice, 134 (1-2), 67-86.
Gunther, A. C., & Chia, S. C. Y. (2001). Predicting pluralistic ignorance: The hostile media
perception and its consequences. Journalism & Mass Communication Quarterly, 78(4),
688-701.
Gunther, A. C., & Schmitt, K. (2004). Mapping boundaries of the hostile media effect. Journal of
Communication, 54(1), 55-70.
Gunther, A. C., & Liebhart, J.L. (2006). Broad reach or biased source? Decomposing the hostile media effect. Journal of Communication, 56, 449-466.
Gunther, A.C., Miller, N., & Liebhart, J.L. (2009). Assimilation and contrast in a test of the
hostile media effect. Communication Research, 36, 747-764.
Gupta, A., Kumaraguru, P., Castillo, C., & Meier, P. (2014). TweetCred: A real-time web-based system for assessing credibility of content on Twitter. In Proceedings of the 6th International Conference on Social Informatics (SocInfo). Barcelona, Spain.
Hannak, A., Margolin, D., Keegan, B., et al. (2014). Get back! You don’t know me like that: The social mediation of fact checking interventions in Twitter conversation. In Proceedings of the Eighth International AAAI Conference on Weblogs and Social Media (pp. 187-196).
Hansen, C. H., & Hansen, R. D. (1988). Finding the face in the crowd: An anger superiority effect. Journal of Personality and Social Psychology, 54(6), 917.
Hamilton, M. A., & Hunter, J. E. (1998). The effect of language intensity on receiver evaluations of message source and topic. In M. Allen & R. W. Preiss (Eds.), Persuasion: Advances through meta-analysis (pp. 99-138). Cresskill, NJ: Hampton Press.
Hamilton, M. A., & Stewart, B. L. (1993). Extending an information processing model of language intensity effects. Communication Quarterly, 41(2), 231-246.
Hansen, L. K., Arvidsson, A., Nielsen, F. Å., Colleoni, E., & Etter, M. (2011). Good friends, bad news: Affect and virality in Twitter. In Future Information Technology (pp. 34-43). Berlin: Springer.
Hart, W., Albarracin, D., Eagly, A., Brechan, I., Lindberg, M.J., & Merrill, L. (2009). Feeling validated versus being correct: A meta-analysis of selective exposure to information. Psychological Bulletin, 135(4), 555-588.
Hartmann, T. (2009). A brief introduction to media choice. In T. Hartmann (Ed.), Media choice:
A theoretical and empirical overview (pp.1-9). New York: Routledge.
Hartmann, T., & Tanis, M. (2013). Examining the hostile media effect as an intergroup
phenomenon: The role of intergroup identification and status. Journal of Communication,
63, 535-555.
Heath, C. (1996). Do people prefer to pass along good or bad news? Valence and relevance of
news as predictors of transmission propensity. Organizational behavior and human
decision processes, 68(2), 79-94.
Heath, C., & Bendor, J. (2003). When truth doesn’t win in the marketplace of ideas: Entrapping
schemas, Gore, and the Internet.
Hemingway, M. (2011). Lies, Damned Lies, and ‘Fact Checking’. Available at
http://www.weeklystandard.com/lies-damned-lies-and-fact-checking/article/611854
Hermida, A., & Thurman, N. (2007, March). Comments please: How the British news media are struggling with user-generated content. Paper presented at the 8th International Symposium on Online Journalism.
Hermida, A., Fletcher, F., Korell, D., & Logan, D. (2012). Share, like, recommend: Decoding the
social media news consumer. Journalism Studies, 13(5-6), 815-824.
Hill, R.J., & Bonjean, C.M. (1964). News diffusion: A test of the regularity hypothesis.
Journalism & Mass Communication Quarterly, 41(3), 336-342. doi:
10.1177/107769906404100302
Himelboim, I., McCreery, S., & Smith, M. (2013). Birds of a feather tweet together: Integrating network and content analyses to examine cross-ideology exposure on Twitter. Journal of Computer-Mediated Communication, 18(2), 40-60.
Hodas, N. O., & Lerman, K. (2012, September). How visibility and divided attention constrain
social contagion. In Privacy, Security, Risk and Trust (PASSAT), 2012 International
Conference on and 2012 International Conference on Social Computing, pp. 249-257.
Holan, A. D. (December, 2015). All politicians lie. Some lie more than others. The New York
Times. Available at http://www.nytimes.com/2015/12/13/opinion/campaign-stops/all-
politicians-lie-some-lie-more-than-others.html
Huge, M., & Glynn, C.J. (2010). Hostile media and the campaign trail: Perceived media bias in
the race for governor. Journal of Communication, 60, 165-181.
Iyengar, S., & Hahn, K.S. (2009). Red media, blue media: Evidence of ideological selectivity in
media use. Journal of Communication, 59, 19-39.
Iyengar, S., Sood, G., & Lelkes, Y. (2012). Affect, not ideology: A social identity perspective on
polarization. Public Opinion Quarterly, 76(3), 405-431.
Jonas, E., Schulz-Hardt, S., Frey, D., & Thelen, N. (2001). Confirmation bias in sequential
information search after preliminary decisions: An explanation of dissonance theoretical
research on selective exposure to information. Journal of Personality and Social
Psychology, 80, 557-571.
Kahneman, D., & Tversky, A. (1984). Choices, values, and frames. American psychologist,
39(4), 341.
Kaid, L. L., Tedesco, J. C., & McKinnon, L. M. (1996). Presidential ads as nightly news: A
content analysis of 1988 and 1992 televised adwatches. Journal of Broadcasting &
Electronic Media, 40(3), 297-308.
Kam, C. D. (2005). Who toes the party line? Cues, values, and individual differences. Political
Behavior, 27(2), 163-182.
Kapferer, J. (1990). Rumors: Uses, interpretations, and images. New Brunswick, NJ:
Transaction Publishers.
Karlsson, M. (2011). Flourishing but restrained: The evolution of participatory journalism in
Swedish online news, 2005-2009. Journalism Practice, 5(1), 68-84.
Katz, E., & Lazarsfeld, P. (1955). Personal influence. Glencoe, IL: The Free Press.
Katz, E., Blumler, J.G., & Gurevitch, M. (1974). Uses and gratifications research. Public
Opinion Quarterly, 37(4), 509-524.
Katz, E., Ali, C., & Kim, J. (2014). Echoes of Gabriel Tarde: What we know better or different
100 years later. Los Angeles, CA: USC Annenberg Press.
Kim, J., & Rubin, A. M. (1997). The variable influence of audience activity on media effects.
Communication Research, 24(2), 107-135.
Kim, H. S., Lee, S., Cappella, J. N., Vera, L., & Emery, S. (2013). Content characteristics
driving the diffusion of antismoking messages: implications for cancer prevention in the
emerging public communication environment. JNCI Monographs, 2013(47), 182-187.
Knobloch, S., & Zillmann, D. (2002). Mood management via the digital jukebox. Journal of
Communication, 52, 351-366.
Knobloch, S., Carpentier, F. D., & Zillmann, D. (2003). Effects of salience dimensions of information utility on selective exposure to online news. Journalism & Mass Communication Quarterly, 80(1), 91-108.
Knobloch-Westerwick, S., & Meng, J. (2009). Looking the other way selective exposure to
attitude-consistent and counterattitudinal political information. Communication Research,
36(3), 426-448.
Knobloch-Westerwick, S., & Hastall, M. R. (2010). Please yourself: social identity effects on
selective exposure to news about in and out-groups. Journal of Communication, 60, 513-
535.
Knobloch-Westerwick, S., & Kleinman, S. B. (2012). Preelection selective exposure:
Confirmation bias versus informational utility. Communication Research, 39(2), 170-193.
Knobloch-Westerwick, S. (2015). Choice and preference in media use: Advances in selective
exposure theory and research. New York: Routledge.
Kreiss, D. (2014). Seizing the moment: The presidential campaigns’ use of Twitter during the 2012 electoral cycle. New Media & Society. Advance online publication. doi: 10.1177/1461444814562445
Ksiazek, T. B., Peer, L., & Lessard, K. (2014). User engagement with online news:
Conceptualizing interactivity and exploring the relationship between online news videos
and user comments. New Media & Society. Advance online publication.
Kuklinski, J. H., & Hurley, N. L. (1994). On hearing and interpreting political messages: A
cautionary tale of citizen cue-taking. The Journal of Politics, 56(03), 729-751.
Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480-498.
Kwak, H., Lee, C., & Moon, S. (2010). What is Twitter, a social network or a news media? In Proceedings of the 19th World-Wide Web (WWW) Conference, Raleigh, North Carolina.
Jenkins, H. (2006). Convergence culture: Where old and new media collide. NYU press.
Lapinski, M. K., & Boster, F. J. (2001). Modeling the ego-defensive function of attitudes.
Communication Monographs, 68(3), 314-324. doi: 10.1080/03637750128062
Larsson, A. O. (2011). Interactive to me–interactive to you? A study of use and appreciation of
interactivity on Swedish newspaper websites. New Media & Society, 13(7), 1180-1197.
Lavine, H., Johnston, C., & Steenbergen, M. (2012). The ambivalent partisan: How critical
loyalty promotes democracy. Oxford: Oxford University Press.
Lawrence, E., Sides, J., & Farrell, H. (2010). Self-segregation or deliberation? Blog readership,
participation, and polarization in American Politics. Perspectives on Politics, 8(1), 141-
157.
Leach, C.W., Ellemers, N., & Barreto, M. (2007). Group virtue: The importance of morality (vs. competence and sociability) in the positive evaluation of in-groups. Journal of Personality and Social Psychology, 93(2), 234-249.
Lee, E. (2012). That’s not the way it is: How user-generated comments on the news affect
perceived media bias. Journal of Computer-Mediated Communication, 18, 32-45.
Lee, E. J., & Jang, Y. J. (2010). What do others’ reactions to news on internet portal sites tell us?
Effects of presentation format and readers’ need for cognition on reality perception.
Communication Research, 37(6), 825-846.
Lee, C. S., & Ma, L. (2012). News sharing in social media: The effect of gratifications and prior
experience. Computers in Human Behavior, 28(2), 331-339.
Lewandowsky, S., Stritzke, W., Freund, A. M., Oberauer, K., & Krueger, J. (2013).
Misinformation, disinformation, and violent conflict: From Iraq and the “War on Terror” to
future threats to peace. American Psychologist, 68(7), 487-501.
Lodge, M., & Taber, C. S. (2000). Three steps toward a theory of motivated political reasoning.
In A. Lupia, M.D., McCubbins, & S.L. Popkin (Eds.), Elements of Reason (pp.183-213).
Cambridge, UK: Cambridge University Press.
Liu, Q., Zhou, M., & Zhao, X. (2015). Understanding news 2.0: A framework for explaining the
number of comments from readers on online news. Information & Management. Advance
online publication.
Manosevitch, E., & Walker, D. (2009). Reader comments to online opinion journalism: A space
of public deliberation. 10th International Symposium on Online Journalism, Austin, TX.
Mantzarlis, A. (2016, February). There are 96 fact-checking projects in 37 countries, new census finds. Available at http://www.poynter.org/tag/impact-of-fact-checking
McArthur, L. Z., & Ginsberg, E. (1981). Causal attribution to salient stimuli: An investigation of visual fixation mediators. Personality and Social Psychology Bulletin, 7(4), 547-553.
McIntire, S.A., & Miller, L.A. (2007). Foundations of psychological testing: A practical approach. London: Sage.
McCluskey, M. & Hmielowski, J. (2012). Opinion expression during social conflict: Comparing
online reader comments and letters to the editor. Journalism, 13(3), 303–319.
McKinley, C.J., Mastro, D., & Warber, K.M. (2014). Social identity theory as a framework for
understanding the effects of exposure to positive media images of self and other on
intergroup outcomes. International Journal of Communication, 8, 1049-1068.
McKinney, M.S., Houston, B., & Hawthorne, J. (2014). Social watching a 2012 Republican presidential primary debate. American Behavioral Scientist, 58(4), 556-573.
Mendelsohn, H. (1964). Broadcast and personal sources of information in emergent public crisis: The presidential assassination. Journal of Broadcasting, 8(2), 147-156. doi: 10.1080/08838156409386098
Messing, S., & Westwood, S. J. (2012). Selective exposure in the age of social media:
Endorsements trump partisan source affiliation when selecting news online.
Communication Research, 41(8), 1042-1063.
Meraz, S. (2015). Quantifying partisan selective exposure through network text analysis of elite
political blog networks during the U.S. 2012 Presidential election. Journal of Information
Technology & Politics, 12(1), 37-53.
Metzger, M. J., & Flanagin, A. J. (2013). Credibility and trust of information in online
environments: The use of cognitive heuristics. Journal of Pragmatics, 59, 210-220.
Metzger, M.J., Hartsell, E.H., & Flanagin, A. J. (2015). Cognitive dissonance or credibility?: A comparison of two theoretical explanations for selective exposure to partisan news. Communication Research. Advance online publication. doi: 10.1177/0093650215613136
Meyer, P. (1988). Defining and measuring credibility of newspapers: Developing an index. Journalism Quarterly, 65, 567-588.
Mondak, J. J. (1993). Source cues and policy approval: The cognitive dynamics of public support
for the Reagan agenda. American Journal of Political Science, 186-212.
Mullen, B., Brown, R., & Smith, C. (1992). Ingroup bias as a function of salience, relevance, and
status: An integration. European Journal of Social Psychology, 22(2), 103-122.
Nasrallah, M., Carmel, D., & Lavie, N. (2009). Murder, she wrote: enhanced sensitivity to
negative word valence. Emotion, 9(5), 609.
Negroponte, N. (1996). Being digital. Vintage.
Neuberger, C., & Nuernbergk, C. (2010). Competition, complementarity or integration?
Journalism Practice, 4(3), 319-332.
Nicholson, S. P. (2012). Polarizing cues. American Journal of Political Science, 56(1), 52-66.
Nisbett, R. E., & Ross, L. (1980). Human inference: Strategies and shortcomings of social judgment. Englewood Cliffs, NJ: Prentice-Hall.
Nov, O., Naaman, M., & Ye, C. (2010). Analysis of participation in an online photo-sharing community: A multidimensional perspective. Journal of the American Society for Information Science and Technology, 61(3), 555-566.
Nyhan, B., & Reifler, J. (2010). When corrections fail: The persistence of political misperceptions. Political Behavior, 32(2), 303-330. doi: 10.1007/s11109-010-9112-2
Nyhan, B., & Reifler, J. (2015). Estimating fact-checking’s effects: Evidence from a long-term experiment during campaign 2014. Available at http://www.americanpressinstitute.org/wp-content/uploads/2015/04/Estimating-Fact-Checkings-Effect.pdf
Öhman, A., Lundqvist, D., & Esteves, F. (2001). The face in the crowd revisited: a threat
advantage with schematic stimuli. Journal of personality and social psychology, 80(3), 381.
Olmstead, K., Mitchell, A., & Rosenstiel, T. (2011). Navigating news online: Where people go, how they get there and what lures them away. Pew Research Center’s Project for Excellence in Journalism. Available at http://www.journalism.org/2011/05/09/navigating-news-online/
Ostermeier, E. (2011). Selection bias? PolitiFact rates Republican statements as false at 3 times the rate of Democrats. Retrieved from http://editions.lib.umn.edu/smartpolitics/2011/02/10/selection-bias-politifact-rate/
Pallak, M. S., Mueller, M., Dollar, K., & Pallak, J. (1972). Effect of commitment on
responsiveness to an extreme consonant communication. Journal of Personality and Social
Psychology, 23(3), 429.
Pennebaker, J. W., Francis, M.E., & Booth, R.J. (2001). Linguistic inquiry and word count: LIWC 2001. Mahwah, NJ: Lawrence Erlbaum Associates.
Petty, R. E., & Cacioppo, J. T. (1986). Communication and persuasion: Central and peripheral
routes to attitude change. New York: Springer Verlag
Pfau, M., & Louden, A. (1994). Effectiveness of adwatch formats in deflecting political attack
ads. Communication Research, 21(3), 325-341.
Pingree, R. J. (2011). Effects of unresolved factual disputes in the news on epistemic political
efficacy. Journal of Communication, 61(1), 22-47.
Pingree, R. J., Brossard, D., & McLeod, D. M. (2014). Effects of Journalistic Adjudication on
Factual Beliefs, News Evaluations, Information Seeking, and Epistemic Political Efficacy.
Mass Communication and Society, 17(5), 615-638.
Purcell, K., Rainie, L., Mitchell, A., Rosenstiel, T., & Olmstead, K. (2010). Understanding the
participatory news consumer. Pew Internet and American Life Project, 1, 19-21.
Rainie, L., Smith, A., Schlozman, K.L., Brady, H., & Verba, S. (2012). Social media and
political engagement. Pew Research Center’s Internet and American Life Project.
Retrieved from http://www.pewinternet.org/2012/10/19/additional-analysis-2/
Rosengren, K.E. (1973). News diffusion: An overview. Journalism Quarterly, 50 (1), 83-91.
doi:10.1177/107769907305000113
Rogers, E. M. (2000). Reflections on news event diffusion research. Journalism & Mass
Communication Quarterly, 77(3), 561-576. doi: 10.1177/107769900007700307
Rogers, E. M. (2003). Diffusion of Innovation (5th ed.). New York: Free Press
Rogers, E. M., & Seidel, N. (2002). Diffusion of News of the Terrorist Attacks of September 11,
2001. Prometheus, 20(3), 209–219. doi:10.1080/0810902021014326
Rojecki, A., & Meraz, S. (2014). Rumors and factitious informational blends: The role of the
web in speculative politics. New Media & Society. Advance online publication. doi:
10.1177/1461444814535724
Roy, A. (November, 2012). The ten worst fact-checks of the 2012 election. Forbes. Available at
http://www.forbes.com/sites/aroy/2012/11/05/the-ten-worst-fact-checks-of-the-2012-
election/#17136db857bb
Rosnow, R. L., Esposito, J. L., & Gibney, L. (1988). Factors influencing rumor spreading:
Replication and extension. Language & Communication, 8(1), 29-42.
Rosnow, R., Yost, J. H., & Esposito, J. L. (1986). Belief in rumor and likelihood of rumor
transmission. Language & Communication, 6(3), 189–194. doi: 10.1016/0271-
5309(86)90022-4
Rogerson, K. (2014). Factchecking the fact checkers: online verification organizations and the
search for “Truth”. In Proceedings of the European conference on social media (ECSM),
434-439. July, 2014. Brighton, UK.
Rozin, P., & Royzman, E. B. (2001). Negativity bias, negativity dominance, and contagion.
Personality and social psychology review, 5(4), 296-320.
Rubin, A. (2002). The uses and gratifications perspectives of media effects. In J. Bryant & D. Zillmann (Eds.), Media effects: Advances in theory and research (pp. 525-548). Hillsdale, NJ: Lawrence Erlbaum Associates.
Santana, A. D. (2014). Virtuous or vitriolic: The effect of anonymity on civility in online
newspaper reader comment boards. Journalism Practice, 8(1), 18-33.
Schudson, M. (2001). The objectivity norm in American journalism. Journalism, 2(2), 149-170.
Sen, S., & Lerman, D. (2007). Why are you telling me this? An examination into negative
consumer reviews on the web. Journal of interactive marketing, 21(4), 76-94.
Shah, D.V., Cappella, J.N., & Neuman, W.R. (2015). Big data, digital media and computational
social science: Possibilities and perils. The ANNALS of the American Academy of Political
and Social Science, 659 (1), 6-13.
Shi, R., Messaris, P., & Cappella, J.N. (2014). Effects of online comments on smokers’
perception of antismoking public service announcements. Journal of Computer-Mediated
Communication, 19(4), 975-990.
Shin, J., Jian, L., Driscoll, K., & Bar, F. (2016). Political rumoring on Twitter during the 2012
U.S. presidential election: Rumor diffusion and correction. New Media & Society. Advance
online publication.
Shoemaker, P. J. (1996). Hardwired for news: Using biological and cultural evolution to explain
the surveillance function. Journal of communication, 46(3), 32-47.
Silverman, C. (2012). Visualized: Incorrect information travels farther, faster on Twitter than
corrections. Retrieved from http://www.poynter.org/latest-news/regret-the-
error/165654/visualized-incorrect-information-travels-farther-faster-on-twitter-than-
corrections/
Silverman, C. (2014). Emergent Rumor Tracker. Available at http://www.emergent.info
Singhal, A., Rogers, E., & Mahajan, M. (1999). The Gods are drinking milk: Word of mouth
diffusion of a major news event in India. Asian Journal of Communication, 9(1), 86–107.
doi: 10.1080/01292989909359616
Singer, J. B. (2007). Contested autonomy: Professional and popular claims on journalistic norms.
Journalism studies, 8(1), 79-95.
Singer, J. B. (2014). User-generated visibility: Secondary gatekeeping in a shared media space.
New Media & Society, 16(1), 55-73.
Skowronski, J. J., & Carlston, D. E. (1987). Social judgment and social memory: The role of cue
diagnosticity in negativity, positivity, and extremity biases. Journal of Personality and Social
Psychology, 52(4), 689.
Smith, P. J., Humiston, S. G., Marcuse, E. K., Zhao, Z., Dorell, C. G., Howes, C., & Hibbs, B.
(2011). Parental delay or refusal of vaccine doses, childhood vaccination coverage at 24
months of age, and the Health Belief Model. Public health reports, 126, 135.
Stavrositu, C. D., & Kim, J. (2014). All blogs are not created equal: The role of narrative formats
and user-generated comments in health prevention. Health communication. Advance online
publication.
Stencel, M. (2016, February). Global fact-checking up 50% in past year. Duke Reporters’ lab.
Available at http://reporterslab.org/latest-news/
Stieglitz, S., & Dang-Xuan, L. (2012, January). Political communication and influence through
microblogging--An empirical analysis of sentiment in Twitter messages and retweet
behavior. In System Science (HICSS), 2012 45th Hawaii International Conference on (pp.
3500-3509). IEEE.
Schwartz, H.A., & Ungar, L.H. (2015). Social media: A systematic overview of automated methods. The ANNALS of the American Academy of Political and Social Science. doi: 10.1177/0002716215569197
Sundar, S., & Nass, C. (2001). Conceptualizing sources in online news. Journal of
Communication, 51(1), 52-72.
Sunstein, C. (2001). Republic.com 2.0. Princeton, NJ: Princeton University Press.
Sunstein, C. (2009). On rumors: How falsehoods spread, why we believe them, what can be done. New York: Farrar, Straus and Giroux.
Sweeney, P. D., & Gruber, K. L. (1984). Selective exposure: Voter information preferences and
the Watergate affair. Journal of Personality and Social Psychology, 46(6), 1208.
Taber, C.S., & Lodge, M. (2006). Motivated skepticism in the evaluation of political beliefs. American Journal of Political Science, 50(3), 755-769.
Tajfel, H., & Turner, J. C. (1979). An integrative theory of intergroup conflict. In W. G. Austin & S. Worchel (Eds.), The social psychology of intergroup relations (pp. 33-47). Monterey, CA: Brooks/Cole.
Tenenboim, O., & Cohen, A. A. (2015). What prompts users to click and comment: A
longitudinal study of online news. Journalism, 16(2), 198-217.
Thorson, E. (2008). Changing patterns of news consumption and participation: News
recommendation engines. Information, Communication & Society, 11(4), 473-489.
Thorson, K., Vraga, E., & Ekdale, B. (2010). Credibility in context: How uncivil online
commentary affects news credibility. Mass Communication and Society, 13(3), 289-313. doi:
10.1080/15205430903225571
Turner, J. C. (1991). Social influence. Milton Keynes, England: Open University Press.
Turner, J.C., Hogg, M.A., Oakes, P.J., Reicher, S.D., & Wetherell, M.S. (1987). Rediscovering
the social group: A self-categorization theory. Oxford, UK: Blackwell.
Turner, J. C., & Oakes, P. J. (1986). The significance of the social identity concept for social psychology with reference to individualism, interactionism, and social influence. British Journal of Social Psychology, 25, 237-252.
Uscinski, J. E., & Butler, R. W. (2013). The epistemology of fact checking. Critical Review,
25(2), 162-180.
Uscinski, J. E. (2015). The epistemology of fact checking: Rejoinder to Amazeen. Critical Review, 27(2), 243-252.
Waisbord, S. (2009). Advocacy journalism in a global context. In K. Wahl-Jorgensen, T.
Hanitzsch (Eds.), The Handbook of Journalism Studies (pp.371-385). New York:
Routledge.
Weber, P. (2014). Discussions in the comments section: Factors influencing participation and
interactivity in online newspapers’ reader comments. New Media & Society, 16(6), 941-957.
Weeks, B. E., & Holbert, R. L. (2013). Predicting dissemination of news content in social media:
A focus on reception, friending, and partisanship. Journalism & Mass communication
quarterly, 90(2), 212-232. DOI: 10.1177/1077699013482906
Weeks, B.E., & Garrett, R. K. (2014). Electoral consequences of political rumors: Motivated
reasoning, candidate rumors, and vote choice during the 2008 U.S. presidential election.
International Journal of Public Opinion Research. Advance online publication. doi:
10.1093/ijpor/edu005
Weeks, B. E. (2015). Emotions, partisanship, and misperceptions: How anger and anxiety moderate the effect of partisan bias on susceptibility to political misinformation. Journal of Communication, 65, 699-719.
Wojciszke, B. (2005). Morality and competence in person-and self perception. European Review
of Social Psychology, 16(1), 155-188. doi:10.1080/10463280500229619
World Economic Forum (2014). The rapid spread of misinformation online. Retrieved from
http://reports.weforum.org/outlook-14/top-ten-trends-category-page/10-the-rapid-spread-of-
misinformation-online/
Vallone, R. P., Ross, L., & Lepper, M. R. (1985). The hostile media phenomenon: biased
perception and perceptions of media bias in coverage of the Beirut massacre. Journal of
personality and social psychology, 49(3), 577.
van Zomeren, M., Postmes, T., & Spears, R. (2008). Toward an integrative social identity model
of collective action: A quantitative research synthesis of three socio-psychological
perspectives. Psychological Bulletin, 134(4), 504-535.
van Zomeren, M., Leach, C.W., & Spears, R. (2012). Protesters as “passionate economists”: A dynamic dual pathway model of approach coping with collective disadvantage. Personality and Social Psychology Review, 16(2), 180-199.
Zillmann, D. (1985). The experimental exploration of gratifications from media entertainment. In K. E. Rosengren, L.A. Wenner, & P. Palmgreen (Eds.), Media gratifications research: Current perspectives (pp. 225-239). Beverly Hills, CA: Sage.
Zillmann, D. (1988a). Mood management through communication choices. American Behavioral Scientist, 31, 327-340.
Appendix A
@MittRomney, @PlanetRomney, @MittNews, @believeinromney, #romney, #mitt,
#mittromney, #romneycare, #valuesvotersummit, #mormonism, #churchoflatterdaysaints,
#massachusettes, #privatesector, #businessexperience, @TeamRomney, mitt2012, romney2012,
mitt romney, obama, Barack Obama, @BarackObama, Obama, #nobama, #onetermpresident,
#Omustgo, #worstPOTUSever, #obamafail, @obama2012, obama2012, biden, Joe Biden,
#biden, #vpbiden, biden2012, paulryan, Paul Ryan, @RepPaulRyan, @PaulRyanVP, #PaulRyan,
ryan2012, @MittRomney, @PlanetRomney, @MittNews, @believeinromney, #romney, #mitt,
#mittromney, #mitt2012, #romneycare, #valuesvotersummit, #mormonism,
#churchoflatterdaysaints, #massachusettes, #privatesector, #businessexperience,
#questionsmittlikes, Rick Perry, platypus -platipus, @GovernorPerry, @TeamRickPerry,
@PerryTruthTeam, @RickPerryNews, #perry, #rickperry, #perry2012, #perryFML, #anitaperry,
Cain, @thehermancain, @CainPress, @CainStaff, @Citizens4Cain, #hermancain, #cain,
#herman, #caintrain, #idonthavethefactstobackthisup, #blackwalnut, #uppityCain, #999,
#999problems, #NeinNeinNein, #cain2012, becky stan, #cainiacs, #teamcain, #cainservatives,
#honkiesforherman, #cainwreck, Bachmann, Bachman, @MicheleBachmann,
@TeamBachmann, @MicheIBachmann, @Prez_Bachmann, #whosthemann?,
#michelebachmann, #bachmann, #bachman, #michelebachman, #michellebachman,
#michellebachmann, #Bachmann2012, #Bachman2012, Gingrich, @newtgingrich,
@Newt_Gingrich, @Newt2012HQ, #gingrich, #newt, #NewtGingrich, #newt2012,
#gingrich2012, #freddiemac, Ron Paul, @RonPaul, @RepRonPaul, @RonPaul_2012,
@dailypaul, @drRonPaul, @RonPaulNews, @RonPaulrev, @RPR3VOLution, @RonPaulcom,
#ronpaul, #paul2012, #ronpaul2012, #ronpaulrevolution, Santorum, @RickSantorum,
@RickSantorumUSA, @RickSantorumPR, #Santorum, #RickSantorum, #santorum2012,
#rick2012, Gary Johnson, @GovGaryJohnson, #garyjohnson, #Johnson2012, #GJ2012,
Huntsman, @JonHuntsman, #jonhuntsman, #huntsman2012, #JoinTheHunt, Pawlenty,
@timpawlenty, #pawlenty, #TimPawlenty, #teampawlenty, #tpaw, #pawlenty2012, debate,
debates, #TweettheDebate, #CBSNJdebate, #cnndebate, #gopdebate, #gopdebates, #foxdebate,
#bloombergdebate, #econdebate, #nhdebate, #nhdebatewatch, #iowadebate, #debatewatch,
#westernrepublicanpresidentialdebate, #tff11, #president, #candidates, #politics, #electioncycle,
caucus, #caucus, #Election2012, #campaign2012, #2012, republican, gop, republican, G.O.P.,
#gop, #gop2012, #gogop, #2012gopnominees, dem, dnc, democrats, #republican, #republicans,
#conservative, #rightwing, #rnc, rnc, #libertarian, Tea Party", #teaparty, #teapublican, #teabate,
#cnnteaparty, #teapartiers, Obama, #nobama, #onetermpresident, #Omustgo, #worstPOTUSever,
#obamafail, #tcot, #twcot, #tlot, #hhrs, #sgp, #tiot, #ucot, #ocra, #twisters, #crossroads, #Florida,
#Iowa, #Newhampshire, #southcarolina, #t4o, #iamthe53, #wearethe53, @DebateLatino,
@GingrichEspanol, Western Republican Presidential Debate, @YALiberty, @sfliberty,
@Koch_Industries, @LPNational, #sfl, #yal, #LP, #liberty, #libertarian, #ausrianeconomics,
#aynrand, #cnbcdebate, iacaucuses, iacaucus, Iowa caucuses, Iowa caucus, @dmrcaucus,
#OccupyCaucus, Occupy the Caucus", #scprimary, #gop2012, #rnc2012, #rnc, #gop, #dnc2012,
#dnc, #16trillionfail, #arithmetic, #forward2012, #factcheck, factcheck, Akin, ToddAkin, #Akin,
#dmndebate, #mittlies, #debatewatch, lehrer, middle class, middleclass, small business, small
businesses, smallbusiness, job creator, job creation, jobcreator, class war, classwar, budget cut,
deficit spending, bush tax cuts, bushtaxcuts, bernanke, the fed, central bank, dream act,
dreamers, #dreamact, path to citizenship, secure borders, illegal immigrant, affordable care act,
bipartisan, congress cooperation, congress, gridlock, congress obstruction, congress compromise,
across the aisle, washington gridlock, washington compromise, Washington cooperation,
#dmndebate, #PMTdebate, #debates, #denverdebate, #XboxPoll, #huffpostlive, #factcheck,
#indyvote, #NCADebates12, #LetsDebate, #HEAD2HEAD, #occupythevote, #centrevpdebate,
#bingo, #bigbird,big bird, #essencedebate,garage bank, #nbcpolitics,dodd frank, #talkpoverty,
#kpccdebate, #expandthedebate,"jill stein", #decision2012, #BoycottTheVote2012
,#NoOneForPresident, #BiggestLiberalAsshole2012, @truthteam2012, @RomneyResponse,
@factcheckdotorg,@thecaucus,twitter bomb,tweet bomb, ronmy, romny, ronmey, obma,
obamma,Ronmy, Romny, Ronmey, Obma, Obamma, tax deductions, Simpson-Bowles, sarah
palin, palin, #benghazi, joementum, jomentum, dem, gaffe lang:en, #joebidenmovies, auto bail
out, bank bail out, auto bailout, bank bailout, obamacare, moderate, Mitt, Massachusetts Mitt,
severely conservative, failed record, ryan, gym, ryan work out, catholic, dumbbell, barbell,
weightlifting, loopholes, ryan wisconsin, ryan blue collar, biden blue collar, ryan middle class,
biden middle class, ryan abortion, ryan roe wade, biden roe wade, biden Delaware, ryan Libya,
ryan deficit, biden deficit, ryan budget, biden budget, ryan medicare, biden medicare, biden
buried, ryan scranton, ryan detroit, ryan bankrupt, ryan bankrupcy, biden scranton, biden detroit,
biden bankrupt, biden bankrupcy, #obamadidit, #nobama, ryan terror, biden terror, ryan
terrorism, biden terrorism, ryan terrorist, biden terrorist, ryan ambassador, biden ambassador,
#vpdebate, mularkey, #mularkey, #malarkey, mularky, malarky, malarkey, biden teeth, #literally,
#centrifuges,#factsmatter,#malarkey,#veep,@laughingjoebiden,#debatewatch,#onepointplan,#25
birds,#RoadToGreece,#MockTheVote,#sketchydeal,#angryobama,binders women, candy
crowley, candy crowly, #NortheasternRepublican ,#politifactthis, romnesia, "horses and
bayonets",#horsesandbayonets,"laid out quite a program","currency manipulator",
#CantAfford4More, #MittsStormTips, #firsttimevoter, #youngpeoplevote, #barackthevote
,#vote, #vote2012, #election2012, #electionday, #videothevote, #datawin, obamaphone,
#drunknatesilver
Appendix B
Democratic Accounts
AC360, alexwagner, AlexWitt, andersoncooper, Atrios, BorowitzReport, BarackObama,
BashirLive, BenlaBolt , BillKarins, billmaher, BlGBlRD, CackalackyJD, CaseySez, cbellantoni,
cesinnyc, Coburn, chrislhayes, chrisrock, chucktodd, craigmelvin, dailykos, dailyrundown,
davidaxelrod, DenzelWisdom, digimuller, edshow, ezraklein, giff18, hardball , hardball_chris,
Hardballvideo, JansingCo, joebiden , JoeNBC, JoshuaChaffee, joytamika, KeithOlbermann,
Lawrence, leanforward, LOLGOP, maddow, MaddowApp, MaddowAux, MaddowBlog,
MaddowGuestList, maracamp, Melissa_Ryerson, MHarrisPerry, MHPshow, mikescotto,
MichelleObama, mitchellreports, Morning_Joe, morningmika, MorningMusiQ, MoveOn,
MotherJones, MMFlint, mmfa, msnbc, msnbcvideo, negannyc, nick_ramsey, NowWithAlex,
Obama2012, Obama2012, piersmorgan, PoliticsNation, ProducerGuy1, RichardLui, Salon,
secupp, stefcutter, StephenAtHome, stevebenen, SteveKornacki, tamronhall, thecyclemsnbc,
TheDailyEdge, TheDailyShow, thenation, TheLastWord, TheRevAl, thinkprogress,
ThomasARoberts , Toure, TruthTeam2012, upwithchris, VeronicaDLCruz, WeGotEd,
WillieGeist1
Republican Accounts
AceofSpadesHQ, adamhousley, AlanColmes , andreamsaul, andylevy, AndyWendt, AllenWest,
AnnCoulter, bdomenech, benshapiro, bethanyshondark, BillHemmer, blackrepublican
BreitbartNews, BretBaier, CaptYonah, caseystegall, ChrisLaibleFN, ClaytonMorris, dangainor
DailyCaller, DaveRamsey, DavidLimbaugh, DennisDMZ, dickmorristweet, Dloesch,
drmannyonFOX, DRUDGE_REPORT, edhenryTV, emzanotti, EricFehrn, ericbolling,
EWErickson, FaithManganFN, foxandfriends, FoxNews, foxnewspolitics, fredthompson,
FreedomWorks, GeraldoRivera, glennbeck, GOP, greggutfeld, gretawire, hannityshow ,
HARRISFAULKNER, HeatherChilders, Heritage, hotairblog, IMAO_, IngrahamAngle,
iowahawkblog , JamesOKeefeIII, janicedeanfox, JedediahBila, Jennafnc, JennQPublic,
JimPethokoukis, johnboehner , johnhawkinsrwn, JonahNRO, jonathanserrie, JonScottFNC,
JoyLinFN, jpodhoretz, JudgeJeanine, KarlRove, kilmeade, kimguilfoyle, kirstenpowers10,
KurtSchlichter , lauraingle, Liberty_Chick, libertyladyusa, Limbaugh, loyaltoliberty,
marklevinshow, marthamaccallum, mattklewis, megynkelly , MelissaTweets, michaeljohns
Michellemalkin, MikeEmanuelFox, MittRomney, mkhammer, MonicaCrowley, newsbusters,
MollyLineNews, NickKalmanFN, NicoleBuschFN, NolteNC, Obama_Clock, OliverNorthFNC
Oreillyfactor, PaulRyanVP, RedState , RepublicanGOP, RickFolbaum, RickLeventhal,
RightWingNews, RNC, RomneyCentral, RomneyResponse , rushlimbaugh , RyanGOP
sarahk47, sdoocy, secupp, ShannonBream, ShiraBushFNC, SpecialReport, TeamCavuto,
TeamRomney, TPPatriots, theblaze