Copyright 2021 Katie Elizabeth Sippel Byrd
MEASURING TRUTH DETECTION ABILITY IN SOCIAL MEDIA FOLLOWING
EXTREME EVENTS
by
Katie Elizabeth Sippel Byrd
A Thesis Presented to the
FACULTY OF THE USC DORNSIFE COLLEGE OF LETTERS, ARTS AND
SCIENCES UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
MASTER OF ARTS
(PSYCHOLOGY)
May 2021
TABLE OF CONTENTS
List of Tables
List of Figures
Abstract
1. Introduction
    1.1 Current Study
    1.2 Previous False News Research
    1.3 Previous Deception Detection Training Research
2. Hypotheses
3. Study 1
    3.1. Method
        3.1.1. Social Media Post Selection
        3.1.2. Respondents
        3.1.3. Procedure
    3.2. Results
        3.2.1. Item Characteristic Curves (ICC)
        3.2.2. Differential Item Functioning (DIF)
4. Study 2
    4.1. Method
        4.1.1. Social Media Post Selection
        4.1.2. Respondents
        4.1.3. Measures
            4.1.3.1. Cognitive Reflection Test (CRT)
            4.1.3.2. Skepticism
            4.1.3.3. Conscientiousness
            4.1.3.4. Sensitivity and Specificity
        4.1.4. Procedure
    4.2. Results
        4.2.1. Signal Detection Theory (SDT) Analysis
        4.2.2. Feedback Training Bayesian Regression Models
        4.2.3. Individual Differences Bayesian Regression Models
5. Discussion
6. Limitations
7. Future Research
8. Conclusion
References
List of Tables
Table 1. Snopes.com and Twitter’s advanced search criteria as well as the number of fatalities and social media posts selected for each soft-target terror attack and natural disaster.
Table 2. Study 1 summary of self-reported sex, age, education, and political orientation by event type.
Table 3. AUC values for the full set of 40 items and the reduced set of 32 items for both extreme event contexts.
Table 4. Number of DIF items in the four constructed (True/False, Natural Disasters/Soft-Target Terror Events) 16-item scales.
Table 5. Study 2 summary demographics for sex, age, education, political orientation, and race for the soft-target terror attack sample and the natural disaster sample.
Table 6. Mean psychometric scores for the soft-target terror attack sample and the natural disaster sample.
Table 7. Area Under the Curve (AUC) values for both the control condition and feedback condition for the practice judgments and the trial judgments. Values are shown for both the soft-target terror attack and natural disaster contexts.
Table 8. Regression results for feedback on sensitivity and specificity for both natural disasters and terror events.
Table 9. Individual differences regression results for sensitivity and specificity in both extreme event contexts.
List of Figures
Figure 1. Item Characteristic Curves (ICC) for all 80 true and false social media posts for the soft-target terror attack and natural disaster conditions.
Figure 2. Item Characteristic Curves (ICC) for the 64 social media posts in the four constructed psychometric scales.
Figure 3. Test Information for the four constructed psychometric scales.
Figure 4. ROC curves for the full set of 40 items and the reduced set of 32 items for both extreme event contexts.
Figure 5. Item Characteristic Curves (ICC) for the political orientation Differential Item Functioning (DIF) social media posts in the Natural Disasters false items scale.
Figure 6. Item Characteristic Curves (ICC) for the age Differential Item Functioning (DIF) social media posts in the Natural Disasters true items scale.
Figure 7. Item Characteristic Curves (ICC) for the education Differential Item Functioning (DIF) social media posts in the Soft-Target Terror Attack false items scale.
Figure 8. Receiver Operating Characteristic Curves (ROC) for the practice/trial and feedback/control conditions for soft-target terror attacks.
Figure 9. Receiver Operating Characteristic Curves (ROC) for the practice/trial and feedback/control conditions for natural disasters.
Abstract
With the increased reliance on social media to spread important information regarding
extreme events, and the fact that most individuals do not perform well at correctly distinguishing
true and false information in social media, it is necessary to find ways to measure and increase
this ability. This study (N=800) consists of two parts: (1) the creation of 4 reliable scales to assess
detection ability for soft-target terror attacks and natural disasters, and (2) the testing of feedback
training to improve correct identification performance. In Study 1, 80 actual social media posts,
half true and half false, half related to soft-target terror attacks and half to natural disasters, verified
through Snopes.com, were presented to a US-based adult sample (N=402). Each individual was
presented 40 actual social media posts, half true and half false, pertaining to either natural
disasters or soft-target terror attacks that took place in the US between 2016-2019, and asked to
make a binary judgement as to whether the post was true or false. Using the responses and item
response theory (IRT) four scales of 16 items each were established based on the discriminability
and difficulty of the items. The four scales measure ability to correctly identify true and false
posts related to soft-target terror attacks and natural disasters. In Study 2 (N=398), using the 4
scales established in Study 1, participants made 32 binary judgements on social media posts
related to either natural disasters or soft-target terror attacks. Each participant completed 16
practice judgements and 16 trial judgements. Participants randomly assigned to the feedback
training condition received feedback (told they were either correct or incorrect) after each of the
16 practice judgments. Participants randomly assigned to the control condition did not receive
any feedback following the practice judgements. Performance on the trial judgements was
incentivized. Feedback training was not found to increase ability to correctly distinguish between
true and false information in social media in the context of extreme events. Political
conservatism was found to be negatively related to the ability to correctly identify false
information, while cognitive reflection (CRT) was found to be positively related to the ability to
correctly identify false information across extreme event contexts. The four psychometric scales
established provide a reliable measure of ability to identify true and false information in social
media following extreme events and could be used for further research.
1. Introduction
As of January 2020, there were over 3.8 billion social media users worldwide, with 321
million of those users joining since January 2019 (Kemp, 2020). On average each of those social
media users spends 142 minutes on social media daily (Salim, 2019), with 55% of U.S. adults in
2019 reporting that they receive news from social media either “often” or “sometimes”, an 8%
increase from the previous year (Suciu, 2019). While the use of social media tends to be focused
on younger generations, social media use is not restricted to one particular age group. As of
February 2019, 90% of people in the US between the ages of 18-29, 82% of people between the
ages of 30-49, 69% of people between the ages of 50-64, and 40% of people age 65 and over
used at least one form of social media (Clement, 2019). Thus, social media plays a pivotal role in
information distribution. While it is important to have accurate information in all settings, in
certain contexts, such as extreme events, having accurate information can be even more vital.
Social media has a unique advantage in that it allows information to be shared instantly from
essentially anyone, anywhere, providing an avenue for quick communication to a
potentially large number of people. In the case of extreme events this can be particularly useful
for government agencies or first response entities who are either trying to share information and
resources with the population, or gather information from individuals in need (Houston et al., 2015;
Keim & Noji, 2011; Jaeger et al., 2007).
Unfortunately, for the same reasons that social media can be beneficial in extreme events,
it can also be detrimental. Due to the capability of instant communication that can be easily
shared, false information can be spread rapidly, and without the chance or ability to be checked
for accuracy. While the spread of false information in certain contexts might be relatively
harmless, other false information could be detrimental to individuals in need or the agencies
there to help. For example, false information about a bomb location or a second gunman could be
used to lure individuals from one location to another or cause unnecessary panic that could lead
to injuries or death. Thus, in extreme event situations it is important to be able to distinguish
between true and false information in social media. Considering this widespread use of social
media, and the finding that individuals currently perform poorly at distinguishing between true
and false information in social media (Byrd & John, 2021), it is relevant to search for ways to
improve this ability.
1.1 Current Study
Previous research has identified that participants do not perform much better than chance
(rarely exceeding 60%) at correctly identifying true and false information in social media
following extreme events based on the content of the posts (Byrd & John, 2021), a finding that is
consistent with other deception detection research. In addition, research has found certain
individual characteristics that are related to deception detection performance both in the political
context (Pennycook & Rand, 2019; Pennycook et. al, 2019) and in the extreme event context
Byrd & John, 2021). Knowing this, it is important to investigate whether there are ways to
improve this ability. While there is research related to improving deception detection in social
media, this research is mostly in relation to computer algorithms. Existing research uses large
datasets to train computer programs on how to identify and filter false information mostly by
identifying IP addresses, source information, sharing patterns, and more. These programs
typically ignore the content of the social media posts themselves and rely heavily on other
external factors. While these computer programs are becoming quite efficient at identifying false
information, not all of the false information is identified, or identified immediately, which in the
case of extreme events could be detrimental due to the need for instant communication. Thus,
having a way to improve individuals’ abilities to identify false information themselves could be
beneficial in extreme event situations, yet little research has been done.
This study seeks to establish scales that measure ability to correctly identify information
following extreme events, and search for ways to increase individuals’ ability to correctly
identify information in social media following extreme events, through the use of feedback
training. This study will determine not only if overall ability to identify information can be
improved, but also whether the ability to identify true information or the ability to identify false
information is improved separately. In active extreme event situations individuals might not have
the time to wait for a computer program to identify false information or the time to research the
source. Therefore, it is vital that individuals be able to judge the information themselves based on
the content of the social media post. Additionally, this study aims to further explore whether
certain cognitive and individual characteristics are predictive of identification ability by
determining if previously supported theories of potentially vulnerable subgroups in the
population are found in this study as well using a different sample of the population and a
reliable scale of ability. By determining whether the same characteristics found in previous
studies are predictive in my study as well, I can confirm that these subgroups are consistently
vulnerable across social media domains. This would then open the door for other research
avenues into why this might be the case, and how to best help these vulnerable subgroups in the
future.
This paper is a combination of two studies. Study 1 aims to identify four sets of social
media posts, varying in difficulty and discriminability, that measure ability to correctly identify
true and false information in social media for both natural disasters and soft-target terror attacks.
Study 2 aims to determine whether feedback training improves individuals’ ability to detect true
and false information in social media. The remainder of this paper is structured in the following
way. First, a background into previous false news literature and deception detection training
literature is given, followed by the hypotheses for both Study 1 and Study 2. Next, the methods
and results for Study 1 are presented followed by the methods and results for Study 2.
Subsequently, there is a combined discussion section, future research section, and conclusion
section for the results from Study 1 and Study 2.
1.2 Previous False News Research
In recent years there has been an increase in research related to false news across various
domains. One domain of false news research is centered on developing computer algorithms that
can detect and filter false information in social media (Rubin, 2017; Hamidian & Diab, 2015;
Lloyd, 2017; Snider, 2017; Vanian, 2018), with one having shown an accuracy of 88% within 24
hours (Wu et al., 2015). Other research is focused on how to classify false information (Zubiaga
et al., 2018), how false information spreads through social media (Vosoughi et al., 2018; Huang
et al., 2015), factors that affect social media post credibility (Morris et al., 2012), and sharing
patterns or how users interact with false information (Wang & Bairong, 2018; Zubiaga et al.,
2016; Starbird et al., 2014).
An additional domain of false news research examines whether certain individual
characteristics are associated with false news detection ability. For example, several studies have
found political orientation to be a predictor of false information detection ability (Allcott &
Gentzkow, 2017; Pennycook & Rand, 2019; Byrd & John, 2021). Particularly, in false news
related to politics, self-identified conservatives have been shown to perform worse at correctly
identifying false information in social media when compared to self-identified liberals
(Pennycook & Rand, 2019). In the context of extreme events, namely natural disasters and soft-
target terror attacks, self-identified political orientation was found to have a moderating effect
within extreme event contexts (Byrd & John, 2021). Specifically, as self-identified political
conservatism increased, the ability to correctly identify false social media posts related to soft-
target terror events increased, while the ability to correctly identify false social media posts
related to natural disasters decreased.
Additionally, cognitive measures have been found to be related to the ability to correctly
identify false information in social media. In particular, delusion-prone individuals were found to
be more likely to believe in false news, likely due to analytic cognitive style (Bronstein et al.,
2019), while cognitive reflection (CRT) was found to be positively related to false information
identification (Pennycook & Rand, 2019). Previous research also found that the perceived
accuracy of the information in a post can be increased through just a single prior exposure to the
post (Pennycook et al., 2018). This was found to be especially true in individuals with relatively
low cognitive ability (Roets, 2017). Previous research has not found any consistent individual
differences to be related to the ability to correctly identify true information in social media. This
study seeks to confirm whether cognitive styles, political ideology, or other individual
characteristics are related to the ability to correctly identify information as true or false in the
context of social media posts related to extreme events.
1.3 Previous Deception Detection Training Research
Previous research related to deception detection has shown that individuals’ ability to
detect deception during in-person communication is only slightly better than chance, with
performance rarely exceeding 60% (Kraut, 1980; DePaulo, 1994; DePaulo et al., 1985;
Zuckerman & Driver, 1985; Bond et al., 2006), even for professionals with deception detection
experience (Porter et al., 2000; Köhnken, 1987). Recent research has extended this finding to
include social media deception detection ability in the context of extreme events (Byrd & John,
2021). Additionally, previous research has found confidence and accuracy to be mostly unrelated
in deception detection (DePaulo et al., 1997; Vrij & Baxter, 1999). With the ability to detect
deception being low, previous research has explored different training techniques in an attempt
to improve ability.
Overall, deception detection training research has shown mixed results, with some studies
finding training to be effective (Hartwig et al., 2006; Frank & Feeley, 2003; Crews et al., 2007;
Driskell, 2012) and others finding a minimal to zero effect (Akehurst et al., 2004). Various researchers
have given suggestions on how to test training accuracy, as well as the best training practices to
use. Levine et al. (2005) suggested that researchers include an additional training group that does
not receive accurate deception training to test for the placebo effect of training. While some
researchers suggest that deception training should last at least one hour to be effective (Frank &
Feeley, 2003), other researchers worry that too much time and practice can lead to lower
accuracy due to boredom (deTurck & Miller, 1990). In addition, different studies take different
approaches to deception training. Some studies aggregate common deception cues and provide
them to individuals before they make their judgments (Levine et al., 2005; Santarcangelo et al.,
2004; Vrij & Graham, 1997; Kassin & Fong, 1999), some provide feedback on the correct
response (Zuckerman et al., 1984; Zuckerman et al., 1985), and some combine both training types
(Hartwig et al., 2006; deTurck et al., 1997; Fiedler & Walka, 1993). A meta-analysis by Driskell
(2012) found an overall effect of type of training (effect size = 0.5, CI = [0.42, 0.57]). Studies
that used both feedback training and information training were found to be the most effective,
followed by feedback only training, then information only training. The combined training
approach and feedback only training approach were found to have similar effects, and both were
found to be significantly better than the information only training approach. Since the results of studies
using a feedback only training approach are similar to those of studies using a combined
training approach, the current study utilizes a feedback only approach in order to shorten the
survey length and training time in an attempt to minimize the effect of fatigue. This study aims to
expand previous deception detection training research to the realm of online social media
deception detection in the context of extreme events.
2. Hypotheses
In both Study 1 and Study 2, participants were asked to make binary judgments on
whether a social media post is true or false and then report their confidence rating on that
judgement. Before beginning, participants are explicitly informed that the base rate of true social
media posts and false social media posts that they are judging is approximately equal. Likewise,
they are informed that the error penalties of making a false positive (i.e. judging a post to be
true when it is actually false) and making a false negative (i.e. judging a post to be false when
it is actually true) are equal. By specifying the base rate (proportion of true/proportion of false
= 1) and error penalty (false-positive error/false-negative error = 1), an optimal threshold, in terms
of maximizing expected value, was specified, allowing me to utilize a Signal Detection Theory
(SDT) framework. By using SDT, the classification task can be assessed independent of the base
rates and error penalties either embedded in the task or assumed by the participant when the base
rates and error penalties are not specified. Through SDT the corresponding Area Under the
Curve (AUC) values can be calculated to assess performance. AUC values provide a more
accurate measure of performance than typical performance measures. A variety of previous
deception detection research has utilized SDT (Wright et al., 2012; Meissner & Kassin, 2002;
Bond, 2008), including research detecting phishing attacks (Martin et al., 2018; Canfield et al.,
2016), and deception in social media (Byrd & John, 2021).
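To make the SDT framing concrete, the AUC used throughout can be computed nonparametrically as the probability that a randomly chosen true post receives a higher "true" rating than a randomly chosen false post, counting ties as one half. The sketch below uses hypothetical ratings, not the study's data, with the binary judgment and 5-point confidence scale folded into a single ordinal score:

```python
# Nonparametric AUC: P(rating for a true post > rating for a false post),
# with ties counted as 1/2. Ratings fold the binary judgment and the 1-5
# confidence scale into one ordinal "true-ness" score (-5..+5).
def auc(true_post_ratings, false_post_ratings):
    wins = 0.0
    for t in true_post_ratings:
        for f in false_post_ratings:
            if t > f:
                wins += 1.0
            elif t == f:
                wins += 0.5
    return wins / (len(true_post_ratings) * len(false_post_ratings))

# Hypothetical ratings (sign = judgment, magnitude = confidence)
ratings_true = [4, 2, 5, -1, 3]     # posts that were actually true
ratings_false = [-3, 1, -4, -2, 2]  # posts that were actually false
print(round(auc(ratings_true, ratings_false), 2))  # → 0.9
```

An AUC of 0.5 corresponds to chance-level discrimination and 1.0 to perfect discrimination, which is why AUC is independent of the response threshold a participant happens to adopt.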
This study seeks to address a number of new research questions and hypotheses that have
not yet been studied in the context of social media and extreme events, in addition to confirming
and extending results found in recent studies. With the exception of Byrd & John (2021), other
similar false news research, mainly in the realm of political false news, has neglected the effects
of base rates and error penalties, by either not informing participants of the base rate and
error penalties, not assessing perceived base rates and error penalties, or not accounting for base
rates and error penalties in the evaluation of performance. The current study addresses the
following research questions.
1. Using 40 social media posts per context (soft-target terror attacks/natural disasters), can the
ability to distinguish between true information and false information be measured? My objective
is to collect social media posts having varying levels of difficulty and discriminability. Using
Item Response Theory (IRT) analysis, I plan to identify the social media posts that do and do not
discriminate well between ability levels. Using the difficulty and discriminability values and Item
Characteristic Curves (ICCs), I plan to identify four sets of social media posts that perform well
at evaluating individuals’ ability to distinguish between true and false information following
extreme events.
2. Does outcome feedback remove noise from the truth signal? In other words, does providing
feedback training improve performance? This will be evaluated in this study by addressing the
following questions.
A. Does feedback increase AUC values? Previous studies have reported mixed results on
how effective feedback training is at improving deception detection; previous research
suggests that participants do not perform very well at correctly identifying information in
social media following extreme events in terms of AUC. This is an exploratory study;
thus, I make no prediction regarding the effect feedback will have on AUC values.
B. Does feedback increase the ability to correctly identify true posts? In previous studies,
sensitivity (i.e. the probability of saying true when the post is true) has been found to be
relatively stable across individuals; thus, unlike specificity (i.e. the probability of saying
false when the post is false), there do not appear to be many factors that predict
sensitivity. In addition, people tend to be better at identifying true information compared
to false information. Thus, I do not expect sensitivity will increase due to feedback.
C. Does feedback increase the ability to correctly identify false posts? In the past,
specificity has been shown to vary across individuals, with certain demographic and
cognitive measures being predictive of performance. Additionally, performance on
correctly identifying false information is relatively low. Thus, I expect that if feedback
training were to be effective, it would most likely be seen through improved specificity
performance.
3. Are there any cognitive variables related to specificity or sensitivity? Considering results from
previous false news research I hypothesize that the cognitive measures (CRT, skepticism, and
conscientiousness) will be positively related to specificity.
4. Are there any individual characteristics related to specificity or sensitivity? Considering the
results from previous false news research which found political ideology to be related to
specificity, I hypothesize that a relationship between political ideology and specificity will be
present.
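The sensitivity and specificity measures referenced in these questions can be scored directly from a participant's binary judgments. A minimal sketch with hypothetical responses, assuming the equal base rates described above:

```python
# Sensitivity = P(judge "true" | post is true)
# Specificity = P(judge "false" | post is false)
def sensitivity_specificity(judgments, truths):
    # judgments, truths: parallel lists of booleans (True = "post is true")
    hits = sum(1 for j, t in zip(judgments, truths) if t and j)
    correct_rejections = sum(1 for j, t in zip(judgments, truths) if not t and not j)
    n_true = sum(truths)
    n_false = len(truths) - n_true
    return hits / n_true, correct_rejections / n_false

# Hypothetical judgments on 8 posts: 4 actually true, 4 actually false
truths =    [True, True, True, True, False, False, False, False]
judgments = [True, True, True, False, False, True, True, False]
sens, spec = sensitivity_specificity(judgments, truths)
print(sens, spec)  # → 0.75 0.5
```

A pattern like this one, in which sensitivity exceeds specificity, matches the prior finding that people are better at recognizing true information than at catching false information.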
3. Study 1
For my research questions, the content of the social media posts is important. Since I am
using actual social media posts, the difficulty and discriminability of the posts are not
homogeneous across the dataset. The goal of Study 1 is to construct four sets of social media posts
that form reliable measures of the four underlying abilities. The four abilities are the ability to
correctly identify true information and the ability to correctly identify false information, for both
soft-target terror attacks and natural disasters. Each set should include items that vary in
difficulty and discriminability levels and exclude items that provide no information about the
individual’s ability.
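The selection logic can be illustrated with a two-parameter logistic (2PL) IRT model, a common parameterization for this kind of analysis; the parameter values below are illustrative, not those estimated in Study 1. The ICC gives the probability of a correct judgment as a function of ability θ, with discrimination a and difficulty b:

```python
import math

def icc_2pl(theta, a, b):
    """2PL item characteristic curve: probability of a correct judgment
    given ability theta, item discrimination a, and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Illustrative items: a discriminating item versus a nearly flat one
for theta in (-2.0, 0.0, 2.0):
    p_sharp = icc_2pl(theta, a=2.0, b=-1.0)  # rises steeply with ability
    p_flat = icc_2pl(theta, a=0.2, b=0.0)    # barely changes with ability
    print(f"theta={theta:+.1f}  sharp={p_sharp:.2f}  flat={p_flat:.2f}")
```

An item whose ICC stays near 0.5 across the ability range (like the low-discrimination item here) provides almost no information about the respondent and would be excluded from the final 16-item scales.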
3.1 Method
A US based group of participants (N = 402), recruited through Amazon Mechanical Turk,
made 40 binary judgements on the accuracy of actual social media posts related to either soft-
target terror attacks or natural disasters that took place in the US between 2016-2019. After each
judgement participants reported their confidence in their judgement. After completing all 40
judgements participants self-reported their age, sex, race, highest education level, and political
orientation. Participants’ responses were recorded using Qualtrics.com
(http://www.qualtrics.com).
3.1.1 Social Media Post Selection
Using Snopes.com (http://www.snopes.com) and Twitter’s advanced search interface
(http://www.twitter.com), a collection of 80 posts, 20 true and 20 false for both US natural
disasters and soft-target terror attacks, was constructed. To identify social media posts related to
recent extreme events, only natural disasters and soft-target terror attacks that happened in the
US between 2016-2019 were considered. When searching Snopes.com and Twitter’s advanced
search interface, keywords related to each event were used. A list of these keywords and the
event’s corresponding fatality rate are shown in Table 1, alongside the number of true and false
social media posts selected for each event. Using Twitter’s advanced search interface, posts
containing non-verifiable facts or non-pertinent information (e.g. opinions, prayers, personal
remarks, sympathies), as well as reposts and retweets, were excluded. True social media posts
selected through Twitter’s advanced database were taken from credible media sources (e.g.
Government agencies, local police, NBC, CNN). Social media posts selected from Snopes.com
were limited to only posts verified as either “True” or “False”. Posts that Snopes.com identified
as being something other than “True” or “False” (e.g. “Mostly True”, “Mostly False”, “Mixture”,
or “Unproven”) were not considered for this study since the participants were asked to make
binary judgments. A full list of Snopes.com ratings and definitions can be found at
www.snopes.com/fact-check-ratings/.
Table 1. Snopes.com and Twitter’s advanced search criteria as well as the number of fatalities and social
media posts selected for each soft-target terror attack and natural disaster.

Event | Search Criteria | Fatalities | Posts Selected (True/False)

Soft-Target Terror Attacks
2016 Orlando Nightclub Shooting | “Orlando” and “shoot” and “Pulse” or “nightclub” | 50 | 4/5
2017 Vegas Concert Shooting | “Vegas” or “Mandalay Bay” or “music festival” and "shoot" | 59 | 3/3
2017 Texas Church Shooting | “Texas” or “Sutherland Springs” or “church” and "shoot" | 27 | 3/2
2017 Manhattan Truck Attack | “Manhattan” or “New York” and “truck” or “attack” | 8 | 1/2
2018 Thousand Oaks Bar Shooting | "Thousand Oak" or "California" and "bar shoot" | 12 | 1/1
2018 Pittsburgh Synagogue Shooting | "Pittsburg" or "Tree of Life" or "synagogue" and "shoot" | 11 | 2/2
2018 Florida High School Shooting | "Florida" or "Stoneman Douglas" or "high school" and "shoot" | 17 | 2/2
2019 Virginia Beach Shooting | “Virginia Beach” and “shoot” | 12 | 2/1
2019 Dayton, Ohio Shooting | “Dayton” or “Ohio” and “shoot” | 9 | 1/1
2019 El Paso Shooting | “El Paso” or “Walmart” and “shoot” | 22 | 1/1

Natural Disasters
2017 Hurricane Harvey | "Hurricane" and "Harvey" | 68 | 4/7
2017 Hurricane Irma | "Hurricane" and "Irma" | 134 | 2/3
2017 Tulsa, Oklahoma Tornado | "Tulsa" and "tornado" or "Oklahoma" and "tornado" | 0 | 0/2
2018 Hurricane Michael | "Hurricane" and "Michael" | 35 | 4/1
2018 Woolsey Fire | "Woolsey" and "fire" or "California" and "fire" | 3 | 4/4
2018 Hurricane Lane | "Hurricane" and "Lane" or "Hawaii" and "Hurricane" | 1 | 2/1
2018 Hawaii Kilauea Volcano | "Hawaii" and "volcano" or "Kilauea" and "volcano" | 0 | 1/1
2018 Hurricane Florence | "Hurricane" and "Florence" | 53 | 1/1
2018 Alaska Earthquake | "Alaska" and "earthquake" or "Anchorage" and "earthquake" | 0 | 1/0
2018 Taylorville, Illinois Tornado | “Taylorville” or “Illinois” and “tornado” | 0 | 1/0
3.1.2 Respondents
Two separate samples, N=205 for natural disasters and N=197 for soft-target terror
attacks, were recruited through Amazon Mechanical Turk (MTurk) for a total sample of N=402.
The sample was restricted to adult (18 years or older), English-speaking, US-based MTurk
workers. Summary demographics for age, sex, education, and political ideology are presented
separately by event type in Table 2.
Table 2. Study 1 summary of self-reported sex, age, education, and political orientation by event
type.

Characteristic | Natural Disasters | Soft-Target Terror Attacks
Sample Size (N) | 205 | 197
Sex (% Male) | 58% | 51%
Mean Age (years) | 34 | 35
Highest Education | |
  Less than a Bachelor’s degree | 42% | 44%
  Bachelor’s degree or higher | 58% | 56%
Political Orientation | |
  1 Extremely Liberal | 14% | 12%
  2 Moderately Liberal | 24% | 20%
  3 Slightly Liberal | 16% | 18%
  4 Moderate | 17% | 20%
  5 Slightly Conservative | 15% | 12%
  6 Moderately Conservative | 6% | 10%
  7 Extremely Conservative | 6% | 7%
3.1.3. Procedure
Participants were presented sequentially with 40 social media posts (20 true and 20 false),
corresponding to the extreme event context (soft-target terror attacks or natural disasters) they
completed. Participants were informed that the social media posts were approximately half true
and half false. Participants were then asked to make a binary judgement of whether each social
media post was true or false. For each judgment, participants reported their confidence rating of
that judgement on a 5-point scale from 1=Not at all confident, to 5=Extremely confident. A
previous study by Byrd & John (2021) found confidence ratings (on an ordinal scale) and
probability of being correct (on a continuous scale) to be comparable for this task.
All participants received a minimum of $0.50 for participation but had the chance to earn
up to an additional $10.00 based on performance. To incentivize performance, participants were
rewarded $0.25 for each correct classification and penalized $0.25 for each incorrect
classification. To minimize cheating and thoughtless responses, each social media post presented
was displayed for a minimum of 10 seconds and a maximum of 30 seconds. After completing all
40 judgements, participants reported their age, sex, education, and political orientation.
3.2. RESULTS
The results for Study 1 are presented in two sections: (1) utilizing Item Characteristic
Curves (ICC) to determine difficulty and discriminability values, and (2) utilizing Differential
Item Functioning (DIF) to determine how items perform differently between groups. All analyses were
completed separately for soft-target terror events and natural disasters. The AUC value for soft-
target terror attacks was .72 (SD=.006), and .61 (SD= .006) for natural disasters. Thus, overall
performance was better for soft-target terror attacks than natural disasters, with both contexts
having performance better than chance.
3.2.1 Item Characteristic Curves (ICC)
Item response theory (IRT) was utilized to estimate ICC for a two-parameter logistic
model for each of the four sets of 20 social media posts. ICC curves were created in R using the
“ltm” package using the following formula
P(x_im = 1 | z_m) = g{α_i(z_m − β_i)}

where x_im is the dichotomous manifest variable for person m on item i, z_m denotes the
individual’s level on the latent scale centered at 0, α_i is the discrimination parameter, β_i is the
difficulty parameter (Rizopoulos, 2006), and g is the logistic function. The 2-parameter ICC
curves estimate each social media post’s difficulty and discriminability parameters. Following
previous research that suggested that ability to correctly identify true information is separate
from the ability to correctly identify false information (Byrd & John, 2021; Pennycook & Rand,
2019), separate models were run for the true posts and false posts in each of the two extreme
event conditions. The ICC curves for all 80 items are displayed in Figure 1.
ICC curves have two parameters associated with them, difficulty and discriminability.
The difficulty parameter specifies the difficulty of correctly identifying an item (An & Yung,
2014), or in this study a social media post. The difficulty parameter is defined by the estimated
ability for which there is a 50% chance of obtaining the correct classification. The difficulty
value is represented on the ICC curves in Figure 1 where the curve crosses .50 probability
(designated by the dashed line). As the point where the curve crosses the dashed line shifts to the
left, the item becomes less difficult; as that point shifts to the right, the item becomes more
difficult. In contrast, the discriminability value measures how useful the item is at identifying individual
differences in ability. Discriminability is shown in Figure 1 by the steepness of the curve. The
steeper the slope of the curve, the higher the discriminability value, and the more discriminating
the item is (An & Yung, 2014). Therefore, the flatter the curve, the less discriminating the item.
When the likelihood of correctly identifying the item as true or false is independent of ability, the
ICC is flat, and the item is inadequate for assessing individual differences in identifying true (or
false) social media posts.
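The shape described above follows directly from the 2PL formula. As a minimal illustration, the curve can be sketched in a few lines of Python (a hypothetical sketch for exposition; the actual estimation used the "ltm" package in R). It shows that the probability of a correct classification is exactly .50 when ability z equals the difficulty parameter, and that a larger discrimination parameter produces a steeper curve:

```python
import math

def icc_2pl(z, a, b):
    """Two-parameter logistic ICC: P(correct | ability z) = g(a * (z - b)),
    where a is the discrimination parameter and b is the difficulty parameter."""
    return 1.0 / (1.0 + math.exp(-a * (z - b)))

# The curve crosses .50 exactly where ability equals difficulty (z = b),
# regardless of how discriminating the item is.
print(icc_2pl(0.8, a=1.5, b=0.8))  # 0.5

# A steeper (more discriminating) item separates abilities more sharply around b.
print(icc_2pl(1.0, a=2.5, b=0.0) > icc_2pl(1.0, a=0.5, b=0.0))  # True
```

A flat curve corresponds to a near zero in this function, which is why such items carry no information about individual differences.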
The ICC curves in Figure 1 confirm that there is great variability in each of the four
groups of social media posts’ difficulty and discriminability parameters. With the exception of a
few outliers in each of the four groups, overall the items have desirable variations in difficulty
and discriminability. In both the soft-target terror attack and natural disaster conditions, false
posts seem to be more discriminating (i.e. have steeper slopes) compared to the true posts.
Overall, the natural disaster social media posts appear to be slightly more difficult than the soft-
target terror attack social media posts for both true and false posts.
To construct a more valid measure of ability to identify true and false posts, the social
media posts with the highest discriminability values and a range of difficulty values were
selected.

Figure 1. Item Characteristic Curves (ICC) for all 80 true and false social media posts for the soft-target
terror attack and natural disaster conditions.

To determine which social media posts to select, all 80 difficulty and discriminability
scores were separated into their respective groups (e.g., true/false, natural disaster/soft-target
terror). All 20 items in each group were then sorted by their difficulty and discriminability
values. Upon inspection it was established that there were around 16 items in each group whose
discriminability scores were informative in evaluating either truth or deception detection ability.
Therefore, the 16 social media posts with the highest discriminability scores (i.e., the items with
the steepest curves) in each of the four groups were selected. That is, to limit the amount of noise
in the data, the items that did not perform as well at distinguishing truth or deception detection
abilities were removed. Ultimately, each of the four scales (true/false, soft-target terror/natural
disasters) had 16 social media posts, with varying difficulty levels, for a total of 64 social media
posts. The ICC curves for the 64 items in the four constructed scales are displayed in Figure 2.
The test information scores are displayed in Figure 3. In both contexts the peak information for
the false set of social media posts is higher than the peak information for the true set of social
media posts.
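The selection rule described above amounts to ranking each group's items by discrimination and keeping the top 16. A hypothetical Python sketch (the item dictionaries and field names here are illustrative; the actual estimates came from the "ltm" models):

```python
def select_most_discriminating(items, k=16):
    """Keep the k items with the steepest ICC slopes (highest discrimination);
    the retained items keep whatever spread of difficulty values they happen to have."""
    return sorted(items, key=lambda item: item["discrimination"], reverse=True)[:k]

# Toy item pool with (difficulty, discrimination) estimates.
items = [
    {"id": 1, "difficulty": -0.5, "discrimination": 2.1},
    {"id": 2, "difficulty": 1.2, "discrimination": 0.3},
    {"id": 3, "difficulty": 0.4, "discrimination": 1.7},
]
print([item["id"] for item in select_most_discriminating(items, k=2)])  # [1, 3]
```

Applying this rule within each of the four groups (true/false × terror/natural disaster) yields the four 16-item scales.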
Figure 2. Item Characteristic Curves (ICC) for the 64 social media posts in the four constructed
psychometric scales.
The AUC values and corresponding ROC curves for the full set of 40 items as well as the
reduced set of 32 items are displayed in Table 3 and Figure 4 respectively for both extreme event
contexts. AUC values increased for both soft target terror attacks and natural disasters when the
least discriminating items were removed.
Table 3. AUC values for the full set of 40 items and the reduced set of 32 items for both extreme
event contexts.

Context | Full set of items (40 social media posts) | Final set of items (32 social media posts)
Natural Disasters | 0.61*** (SD = .01) | 0.647*** (SD = .01)
Soft-Target Terror Attacks | 0.72*** (SD = .01) | 0.73*** (SD = .01)
Note: *** p < .001
Figure 3. Test Information for the four constructed psychometric scales.
Figure 4. ROC curves for the full set of 40 items and the reduced set of 32 items for both
extreme event contexts.
3.2.2. Differential Item Functioning (DIF)
An important aspect to consider when constructing a scale is whether the items
perform differently between groups. Differential Item Functioning (DIF) analyses can be used to
determine this. An item is said to function differently (i.e., to be a DIF item) when individuals
with the same ability level, but from different groups, have different probabilities of answering
the item correctly (Magis et al., 2010). There are two methodological approaches for
determining DIF: parametric methods that use IRT models and nonparametric methods that do
not. Because an IRT model was used to construct the four scales, an IRT-based DIF method was
used for this analysis.
Differential Item Functioning analyses were run for the final 16 items selected for each of
the four scales using the ‘dichoDif’ function in the ‘difR’ package (Magis et al., 2010). As in
the ICC analyses above, 2-parameter models were used to estimate the parameters. The Raju
(1990) method with 10 iterations of item purification was used to determine which
items exhibit DIF. The following groups were evaluated for DIF items: political orientation, sex, age,
and education. Political orientation compares individuals who self-identified as liberal (1–3 on
the political orientation scale) to those who self-identified as conservative (5–7 on the scale);
those who identified as moderate (4 on the scale) were excluded. For the demographic DIF
analyses, sex compares males to females, age
compares individuals under 40 years of age to individuals over 40 years of age, and education
compares individuals with less than a 4-year college degree to those with a 4-year college
degree or higher. Results from the DIF analyses are displayed in Table 4.
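The group codings above can be made concrete with a short sketch (hypothetical Python; function and label names are illustrative, not taken from the study's scripts):

```python
def political_group(orientation):
    """Map the 7-point self-identification scale onto the DIF comparison groups:
    1-3 = liberal, 5-7 = conservative; moderates (4) are excluded (None)."""
    if orientation <= 3:
        return "liberal"
    if orientation >= 5:
        return "conservative"
    return None  # moderates are dropped from the political-orientation DIF analysis

def age_group(age):
    """Age comparison groups; placing exactly 40 in the older group is an assumption,
    since the text only distinguishes 'under 40' from 'over 40'."""
    return "under 40" if age < 40 else "40 or older"

print(political_group(2), political_group(4), political_group(6))
print(age_group(35), age_group(52))
```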
Table 4. Number of DIF items in the four constructed (True/False, Natural Disasters/Soft-Target
Terror Events) 16-item scales.

Scale | Political Orientation | Sex | Age | Education
Natural Disasters, True | 0 | 1 | 4 | 0
Natural Disasters, False | 6 | 9 | 10 | 5
Terror, True | 0 | 0 | 1 | 0
Terror, False | 2 | 0 | 4 | 1
Figures 5-7 demonstrate how ICC curves for DIF items vary between different groups.
For example, in Figure 5 the ICC curves for liberals (left image) are less discriminating than the
ICC curves for conservatives (right image). Figure 6 shows similar results for those who are 40
years of age or younger (left image) compared to those who are above 40 years of age (right
image) when judging true social media posts related to natural disasters. Figure 7 demonstrates a
DIF item that varies both in difficulty and discriminability between groups: the DIF item displayed
is more difficult and less discriminating for the group with less than a 4-year college degree (left
image) compared to the group with a 4-year college degree or higher (right image).
Figure 5. Item Characteristic Curves (ICC) for the political orientation Differential Item Functioning (DIF)
social media posts in the Natural Disasters false items scale.
Figure 6. Item Characteristic Curves (ICC) for the age Differential Item Functioning (DIF) social media posts
in the Natural Disasters true items scale.
In Study 1, four psychometric scales were constructed to measure the ability to correctly
identify true information and the ability to correctly identify false information for both soft-target
terror attacks and natural disasters. Each scale includes items with varying difficulty and
discriminability.
4. Study 2
The 64 social media posts in the four psychometric scales constructed in Study 1 were
used to assess the extent to which feedback learning improves performance in correctly
identifying true and false information in social media following extreme events. The effect of
feedback training was assessed using AUC values and Bayesian regression models. Additionally,
Bayesian regressions were run to identify whether various cognitive variables are predictive of
performance ability.
Figure 7. Item Characteristic Curves (ICC) for the education Differential Item Functioning (DIF)
social media posts in the Soft-Target Terror Attack false items scale.

4.1. METHODS
Similar to Study 1, two US-based samples (N=204 for soft-target terror attacks, N=194
for natural disasters) were recruited through Amazon Mechanical Turk for a total of 398
participants. Participants made 32 binary judgements on the accuracy of actual social media
posts and reported their confidence in their judgement. The social media posts were taken from
the four scales constructed in Study 1, related to either soft-target terror attacks or natural
disasters that took place in the US between 2016 and 2019. Participants were randomly assigned
to either the feedback condition or the control condition. Participants in both conditions made 16
practice judgements and 16 trial judgements. For the 16 practice judgements, participants
assigned to the feedback condition were given feedback after each judgement as to whether their
judgement was correct or incorrect. Participants in the control condition did not receive any
feedback during the 16 practice judgments.
In each set of 16 judgements there were 8 true posts and 8 false posts. To control for
order effects and potential differences between sets of social media posts, both conditions
(feedback/control) were randomly divided into two groups where the sets of social media posts
were then counterbalanced.
After completing all 32 judgements participants completed three psychometric measures
and demographic questions. Participants’ responses were recorded using Qualtrics.com
(http://www.qualtrics.com).
4.1.1 Social Media Post Selection
The four scales constructed in Study 1 were used to measure truth and deception ability
in Study 2. Each of the four scales (true/false, soft-target terror/natural disasters) has 16 social
media posts for a total of 64 social media posts. Each of the four groups of 16 was then separated
into two sets of 8 each, one set of 8 for the practice judgements and one set of 8 for the trial
judgements, referred to later as set A and set B. These sets were determined using comparable
difficulty and discriminability values, and then counterbalanced equally within each condition
(feedback/control).
4.1.2. Respondents
Similar to Study 1, 398 adult participants based in the US were recruited through MTurk
to participate in Study 2. In addition to the task, participants self-reported their age, race, sex,
education, political orientation, and completed three psychometric surveys. One psychometric
survey measured conscientiousness, a second measured skepticism, and a third measured
cognitive reflection. An overview of the sample demographics, separated by event type and
condition, is shown in Table 5.
Table 5. Study 2 summary demographics for sex, age, education, political orientation, and race
for the soft-target terror attack sample (N=205) and the natural disaster sample (N=194).

Characteristic | Terror: Control | Terror: Feedback | Natural Disaster: Control | Natural Disaster: Feedback
Sample Size (N) | 101 | 104 | 111 | 83
Sex (% Male) | 56% | 53% | 49% | 55%
Mean Age (years) | 37 | 37 | 36 | 34
Highest Education | | | |
  Less than a Bachelor’s degree | 46% | 41% | 49% | 47%
  Bachelor’s degree or higher | 54% | 59% | 51% | 53%
Political Orientation | | | |
  1 Extremely Liberal | 13% | 12% | 14% | 14%
  2 Moderately Liberal | 15% | 19% | 21% | 19%
  3 Slightly Liberal | 15% | 17% | 18% | 10%
  4 Moderate | 21% | 18% | 19% | 17%
  5 Slightly Conservative | 17% | 13% | 6% | 18%
  6 Moderately Conservative | 8% | 12% | 13% | 16%
  7 Extremely Conservative | 11% | 9% | 9% | 6%
Race | | | |
  Minority | 33% | 23% | 26% | 34%
  Majority | 67% | 77% | 74% | 66%
4.1.3. Measures
For this study three cognitive measures are used. In addition, two measures of
performance, sensitivity and specificity, are calculated.
4.1.3.1 Cognitive Reflection (CRT)
In this study Cognitive Reflection (CRT) scores were calculated by summing the total
number of correct item responses on a 7-item CRT scale. This 7-item scale combines
the three items from Frederick (2005) and the four items from Thomson & Oppenheimer (2016).
This same measure of Cognitive Reflection has been used in similar false news research and has
been found to be a predictor of false information identification (Pennycook & Rand, 2019;
Pennycook et al., 2019). The following is an example item: “In a lake, there is a patch of lily
pads. Every day, the patch doubles in size. If it takes 48 days for the patch to cover the entire
lake, how long would it take for the patch to cover half of the lake?” While the fast incorrect
“intuitive” response is 24 days, the correct answer is 47 days (alpha = .82 Natural Disasters,
alpha = .82 Terror).
4.1.3.2 Skepticism
In this study skepticism was measured using the 30-item Professional Skepticism Scale
(Hurtt, 2010). Each participant rated the extent to which they agreed with each of the 30 item
statements using a 6-point Likert-scale. Skepticism scores were then calculated by averaging the
responses across all 30 items. In this study higher scores indicate higher levels of skepticism. In
this 30-item scale there are items related to search for knowledge, interpersonal understanding,
suspension of judgement, a questioning mind, and self-confidence (alpha = .87 Natural Disasters,
alpha = .90 Terror).
4.1.3.3 Conscientiousness
In this study conscientiousness was measured using a 9-item conscientiousness subscale
of the Big Five Inventory (John & Srivastava, 1999). Using a 5-point Likert scale participants
indicated agreement with each of nine statements. Conscientiousness scores were then calculated
by averaging the responses across all 9 items. In this study higher scores indicate higher levels of
conscientiousness (alpha = .87 Natural Disasters, alpha = .86 Terror). Mean scores for each of the
cognitive variables are shown in Table 6.
Table 6. Mean psychometric scores for the soft-target terror attack sample and the natural
disaster sample.

Psychometric Scale | Terror: Control (N=101) | Terror: Feedback (N=105) | Natural Disaster: Control (N=111) | Natural Disaster: Feedback (N=83)
Cognitive Reflection (CRT) | 3.50 (SD=2.3) | 3.80 (SD=2.2) | 3.87 (SD=2.3) | 3.35 (SD=2.3)
Skepticism | 4.45 (SD=0.7) | 4.40 (SD=0.7) | 4.51 (SD=0.6) | 4.36 (SD=0.6)
Conscientiousness | 3.93 (SD=0.8) | 3.92 (SD=0.7) | 3.93 (SD=0.8) | 4.01 (SD=0.7)
4.1.3.4 Sensitivity & Specificity
Sensitivity and specificity values were also calculated for each participant.
For this study sensitivity refers to the participant’s ability to correctly identify true information as
true. This measure was calculated as the proportion of true social media posts correctly identified
as true by the participant. Specificity in this case represents the participant’s ability to correctly
identify false information as false. Specificity scores were calculated as the proportion of false
social media posts that the individual correctly identified as false. For each participant three
sensitivity and specificity values were calculated: a practice sensitivity and specificity, a trial
sensitivity and specificity, and a total sensitivity and specificity.
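Concretely, the two proportions can be computed as follows (a hypothetical Python sketch; the variable names are illustrative, not from the study's analysis code):

```python
def sensitivity_specificity(judgments, truths):
    """judgments and truths are parallel lists of booleans
    (True = the post is, or was judged, 'true').
    Sensitivity: proportion of true posts correctly judged true.
    Specificity: proportion of false posts correctly judged false."""
    true_posts = [j for j, t in zip(judgments, truths) if t]
    false_posts = [j for j, t in zip(judgments, truths) if not t]
    sensitivity = sum(true_posts) / len(true_posts)
    specificity = sum(not j for j in false_posts) / len(false_posts)
    return sensitivity, specificity

# 4 true posts (3 judged true) and 4 false posts (2 judged false).
judgments = [True, True, True, False, True, False, True, False]
truths    = [True, True, True, True,  False, False, False, False]
print(sensitivity_specificity(judgments, truths))  # (0.75, 0.5)
```

Running this separately on the practice items, the trial items, and all items together yields the three pairs of values described above.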
4.1.4 Procedure
Participants were randomly assigned to either the feedback condition or the control
condition. Participants in both conditions were shown 16 social media posts as practice
judgments followed by 16 social media posts as trial judgments, and were asked to make a binary
judgment of true or false for each post, followed by a confidence rating. Confidence ratings
were reported on a 5-point scale from 1 Not at all confident to 5 Extremely confident. Both the
feedback group and the control group were randomly divided into two groups. One group
received set A for the practice judgements and set B for the trial judgements, while the other
group received set B for the practice judgements and set A for the trial judgments. Participants
were informed that the social media posts were approximately half true and half false.
Participants received $0.50 for their participation regardless of their performance. Additionally,
they were incentivized with an additional bonus of up to $8.00 based on their performance on the
16 trial judgments. Participants received an additional $0.50 for each correct response and were
penalized $0.50 for each incorrect response. The total bonus awarded was determined by
summing the amount from the 16 trial judgements only.
As in Study 1, to minimize cheating and thoughtless responses, each social media post
presented was displayed for a minimum of 10 seconds and a maximum of 30 seconds. After
completing all 32 judgements participants completed the CRT, skepticism, and conscientiousness
items and were asked to self-report their age, sex, education, and political orientation.
4.2 Results
The results for Study 2 are presented in three sections: (1) evaluating the effect of
feedback training on performance using Signal Detection Theory (SDT) metrics; (2) evaluating
the effect of feedback training on sensitivity and specificity using
Bayesian regression models; and (3) examining relationships between cognitive/demographic
variables and sensitivity/specificity using Bayesian regression models. All analyses were
completed separately for soft-target terror events and natural disasters.
4.2.1 Signal Detection Theory (SDT) Analysis
To estimate an ROC curve a stimulus strength is needed. To create a continuous range of
information strength on the horizontal axis, each of the confidence ratings for the 32 judgements
was unfolded. While confidence ratings were reported on an ordinal scale, Byrd & John (2021)
found that an ordinal confidence rating and a continuous probability of correct judgment
converged for this task. Each of the confidence ratings was coded 1 (Extremely confident) to 5
(Not at all confident) for the social media posts that they judged to be false, and 6 (Not at all
confident) to 10 (Extremely confident) for the social media posts that they judged to be true.
Therefore, the more confident the participant is in the judgment, the more extreme the truth
signal value.
If participant said "True" → Truth signal = Confidence rating +5
If participant said "False" → Truth signal = 5 - Confidence rating
Using the information strength values, Receiver Operating Characteristic (ROC) curves
were constructed separately for soft-target terror events and natural disasters using the ROC
procedure in SPSS version 25. The ROC curves, sensitivity values, and specificity values were
similar within each condition across the two counterbalanced social media post orders; thus, for the
remainder of the analyses the counterbalanced groups (those who saw set A for the practice
judgments and set B for the trial judgments, and those who saw set B for the practice judgments
and set A for the trial judgements) are aggregated. Using the ROC curves, AUC values were
calculated. Table 7 displays the AUC estimates for the practice judgements and the trial
judgments separated by the control condition and the feedback condition for both the soft-target
terror attacks and natural disasters.
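With the unfolded truth signals in hand, the AUC has a simple rank-based interpretation: the probability that a randomly chosen true post receives a higher truth signal than a randomly chosen false post, with ties counting half. A hypothetical Python sketch of that computation (the reported values came from the SPSS ROC procedure):

```python
def auc(signals_true, signals_false):
    """Mann-Whitney-style AUC: P(signal for a true post > signal for a false post),
    counting ties as 0.5. Equivalent to the area under the empirical ROC curve."""
    wins = sum(
        1.0 if s_t > s_f else 0.5 if s_t == s_f else 0.0
        for s_t in signals_true
        for s_f in signals_false
    )
    return wins / (len(signals_true) * len(signals_false))

print(auc([9, 10, 7], [2, 3, 7]))  # 8.5 wins out of 9 pairs, about 0.944
print(auc([5, 5], [5, 5]))         # 0.5, i.e., chance performance
```

An AUC of .5 corresponds to guessing, and 1.0 to perfect separation of true from false posts.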
Overall, feedback training did not consistently improve AUC values. In the natural
disaster context, the AUC value for the feedback condition increased between the practice
judgements (AUC = .600, SD =.015) and the trial judgements (AUC = .632, SD =.015), while
the AUC value for the control condition decreased between the practice judgments (AUC = .637,
SD =.013) and the trial judgments (AUC = .629, SD =.013). However, the opposite pattern was
observed for the soft-target terror attack context where AUC values for the feedback condition
decreased between the practice judgements (AUC = .713, SD = .013) and the trial judgements
(AUC = .706, SD = .013), while increasing in the control condition (Practice AUC = .670, SD =
.013 ; Trial AUC = .697 , SD = .013). In general, the AUC values for the soft-target terror attack
condition were significantly higher than those of the natural disaster condition, which is to be
expected considering the difficulty values of the social media posts in Study 1. Considering the
AUC value for the practice judgments for the natural disaster feedback condition was .600 (SD =
.015) compared to .713 (SD = .013) for soft-target terror attacks, one possibility is that feedback
did improve performance for natural disasters but not soft-target terror attacks. One reason for
this would be that performance for the feedback soft-target terror condition was already high to
begin with. An AUC of .600 allows more room for improvement than a substantially higher
AUC of .713. Receiver operating characteristic curves (ROC) corresponding to the values in
Table 7 are presented in Figures 8 and 9.
Table 7. Area Under the Curve (AUC) values for both the control condition and feedback
condition for the practice judgments and the trial judgments. Values are shown for both the soft-
target terror attack and natural disaster contexts.

Condition | Information Stimulus | Soft-Target Terror Attack: AUC | Natural Disaster: AUC
Control | Practice | 0.670*** (SD=.013) | 0.637*** (SD=.013)
Control | Trial | 0.697*** (SD=.013) | 0.629*** (SD=.013)
Feedback | Practice | 0.713*** (SD=.013) | 0.600*** (SD=.015)
Feedback | Trial | 0.706*** (SD=.013) | 0.632*** (SD=.015)
Average | | .697 | .622
Note: *** p < .001
Figure 8. Receiver Operating Characteristic Curves (ROC) for the practice/trial and
feedback/control conditions for soft target terror attacks.
Figure 9. Receiver Operating Characteristic Curves (ROC) for the practice/trial and
feedback/control conditions for natural disasters.
4.2.2 Feedback Training Bayesian Regression Models
While feedback appears not to have improved overall performance in terms of AUC
values, previous research has found that the ability to correctly identify true information
(sensitivity) and the ability to correctly identify false information (specificity) are not equivalent. By
utilizing Bayesian regression models, I can determine whether feedback training improves either
sensitivity or specificity separately. Two regression models were used for both soft-target terror
attacks and natural disasters to predict the effect of feedback during the practice judgements on
sensitivity and specificity estimates in the trial judgments. In the models, the practice sensitivity
or specificity value was used as a covariate in the model to account for previous individual and
group differences. Although there is some literature related to this research, conservative priors
were used. For each of the models, the following priors were used.
Intercept ~ normal(0, .9)
βs ~ normal(0, .6)
Sigma ~ student_t(4, 0, 1)
For the regression models in this analysis the recommended default of 4 chains
was used, and the iterations per chain was set to 2000, with 1000 of those being warmup
iterations. The target acceptance rate, adapt_delta in the Stan language (Carpenter et al., 2017),
was set to .9. To test for convergence, multiple criteria were assessed. First, it was checked
that there were no divergent transitions post-warmup, as divergences would indicate that the target
acceptance rate should be increased. It was then ensured that the Gelman-Rubin statistics for
all parameters were under 1.1. Next, the trace plots were examined to ensure good
mixing of the chains, that there was no multimodality in the posterior density plots, and
that random draws from the posterior predictive distributions approximated the sample
distribution. All the trace plots showed good mixing. The results of the sensitivity and specificity
models for both natural disasters and soft-target terror events are shown in Table 8. The models
show that outcome feedback did not have an effect on trial judgement sensitivity and specificity
values when covarying out sensitivity and specificity values from the 16 practice judgments.
Thus, receiving feedback did not improve ability to better distinguish between either true
information or false information in social media posts following either extreme event context.
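The Gelman-Rubin check referenced above compares between-chain and within-chain variance; values near 1 indicate the chains have converged on the same distribution. A hypothetical Python sketch of the basic (non-split) statistic for a single parameter (the actual diagnostics came from Stan's output):

```python
def gelman_rubin(chains):
    """Basic Gelman-Rubin R-hat: chains is a list of equal-length lists of
    post-warmup draws for one parameter. Values near 1 (e.g., < 1.1) suggest
    the chains are sampling the same distribution."""
    m, n = len(chains), len(chains[0])
    means = [sum(c) / n for c in chains]
    grand_mean = sum(means) / m
    # Between-chain variance B and mean within-chain variance W.
    b = n / (m - 1) * sum((mu - grand_mean) ** 2 for mu in means)
    w = sum(sum((x - mu) ** 2 for x in c) / (n - 1) for c, mu in zip(chains, means)) / m
    var_hat = (n - 1) / n * w + b / n  # pooled estimate of the posterior variance
    return (var_hat / w) ** 0.5

# Two chains stuck in different regions produce a large R-hat...
print(gelman_rubin([[0.1, 0.2, 0.1, 0.2], [5.1, 5.2, 5.1, 5.2]]))
# ...while well-mixed chains give a value near 1.
print(gelman_rubin([[0.1, 5.2, 0.2, 5.1], [5.1, 0.2, 5.2, 0.1]]))
```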
Table 8. Regression results for feedback on sensitivity and specificity for both natural disasters
and terror events.

Predictor | ND Specificity | ND Sensitivity | T Specificity | T Sensitivity
Intercept | 0.22* [0.15; 0.30] | 0.34* [0.24; 0.44] | 0.14* [0.07; 0.20] | 0.33* [0.22; 0.44]
Specificity Practice | 0.63* [0.52; 0.75] | | 0.82* [0.73; 0.90] |
Sensitivity Practice | | 0.48* [0.35; 0.61] | | 0.55* [0.40; 0.68]
Feedback | 0.00 [-0.05; 0.06] | 0.03 [-0.02; 0.08] | 0.01 [-0.05; 0.05] | -0.01 [-0.06; 0.04]
R^2 | 0.37 | 0.21 | 0.64 | 0.23
Note: * indicates 0 outside the credible interval.
4.2.3 Individual Differences Bayesian Regression Models
Again, sensitivity and specificity are evaluated separately in the individual differences
models for both extreme event contexts. The same process and priors used in section 4.2.2 were
used for these models as well, with all the trace plots showing good mixing. The four regressions
predict overall sensitivity and specificity and include feedback training, six demographic
variables, and three cognitive functioning measures. For this analysis feedback is coded as 0 for
those who did not receive feedback during the practice judgements and 1 for those who did
receive feedback during the practice judgements. Demographic variables were included in the
models as covariates. For age, each unit increase represents one year increase in age. For sex,
males are coded as 0 and females as 1. Education is designated such that 0 represents a highest
education level of less than a 4-year bachelor’s degree, and 1 represents a highest education level
of a 4-year bachelor’s degree or more. Race is coded 0 for majority (Caucasian) and 1 for
minority (not Caucasian). Income is reported on a scale from 1 (less than $10,000) to 8
($200,000 or more). Political orientation and the three cognitive variables (CRT,
conscientiousness, and skepticism) were kept on their original scales. The results of the models
for both natural disasters and soft-target terror events are shown in Table 9.
Self-identified political conservatism was found to be negatively related to specificity for
both the terror and natural disaster context. In this study political orientation is measured on a
self-identified 7-point scale from 1 (extremely liberal) to 7 (extremely conservative); therefore, as
participants increase on the political ideology scale toward political conservatism, specificity
decreases. While this finding differs from the moderating effect that extreme event context
(natural disasters vs. soft-target terror attacks) had on political orientation’s relationship with
skepticism in Byrd & John (2021), this result is similar to the effect found in previous
false news research in the context of politics (Pennycook & Rand, 2019; Pennycook et al., 2019).
Among the cognitive variables, CRT was found to be positively related to specificity for both
terror and natural disasters, with the effect of CRT for the terror condition being double the
effect of CRT for the natural disaster condition. This result is consistent with previous false news
research (Pennycook & Rand, 2019; Pennycook et al., 2019). Conscientiousness and skepticism
were both positively related to specificity for the soft-target terror attack condition. Additionally,
skepticism was also positively related to sensitivity for soft-target terror attacks.
Table 9. Individual differences regression results for sensitivity and specificity in both extreme
event contexts.

Predictor                 ND Specificity          ND Sensitivity          T Specificity           T Sensitivity
Intercept                 0.29  [ 0.05;  0.52]    0.72  [ 0.49;  0.94]   -0.15  [-0.41;  0.10]    0.57  [ 0.39;  0.75]
Feedback                  0.00  [-0.06;  0.05]    0.02  [-0.03;  0.07]    0.03  [-0.03;  0.09]   -0.01  [-0.06;  0.03]
Sex                       0.02  [-0.04;  0.08]   -0.05  [-0.10;  0.00]    0.02  [-0.04;  0.08]    0.00  [-0.04;  0.05]
Education                -0.06* [-0.12; -0.00]    0.05* [ 0.00;  0.10]   -0.02  [-0.08;  0.05]    0.04  [-0.00;  0.09]
Race                     -0.06  [-0.12;  0.00]   -0.04  [-0.10;  0.01]   -0.02  [-0.10;  0.04]   -0.03  [-0.07;  0.03]
Income                    0.00  [-0.02;  0.02]    0.00  [-0.02;  0.01]    0.00  [-0.02;  0.02]    0.00  [-0.01;  0.02]
Age                       0.00  [-0.00;  0.01]    0.00  [-0.00;  0.00]    0.00  [-0.00;  0.01]    0.00  [-0.00;  0.01]
Political Conservatism   -0.02* [-0.04; -0.00]    0.00  [-0.02;  0.02]   -0.02* [-0.05; -0.01]    0.00  [-0.02;  0.01]
CRT                       0.02* [ 0.01;  0.03]    0.00  [-0.01;  0.02]    0.04* [ 0.03;  0.06]    0.00  [-0.01;  0.01]
Conscientiousness         0.01  [-0.03;  0.06]   -0.02  [-0.06;  0.02]    0.05* [ 0.00;  0.10]   -0.04* [-0.08; -0.00]
Skepticism                0.04  [-0.02;  0.09]    0.03  [-0.02;  0.08]    0.12* [ 0.06;  0.17]    0.06* [ 0.02;  0.10]
R^2                       0.26                    0.12                    0.47                    0.12

Note: ND = natural disaster; T = soft-target terror attack.
* indicates 0 outside the credible interval.
5. Discussion
The average AUC values in this study (AUC = .697 for soft-target terror events, AUC = .622
for natural disasters) are substantially higher than the AUC results reported in previous studies
(AUC = .617 for soft-target terror events, AUC = .521 for natural disasters; Byrd & John, 2021). This
likely reflects the difference between using a set of social media posts that forms a reliable measure of
truth detection ability, as developed in Study 1, and a set of posts that was not
selected for difficulty and discriminability. Using IRT, the ICC parameters of discriminability and
difficulty were estimated, allowing selection of a psychometrically sound measure of truth
detection ability. By identifying social media posts in Study 1 that relate to the underlying ability
trait and that vary in difficulty, the detection tasks in Study 2 were more sensitive to
possible differences between the two types of events and to the effects of feedback learning.
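The item characteristic curve underlying this selection can be sketched as follows, using the standard two-parameter logistic (2PL) form with discrimination a and difficulty b (a minimal illustration with hypothetical parameter values, not the thesis's actual estimation code, which used R):

```python
import math

def icc_2pl(theta, a, b):
    """2PL item characteristic curve: probability that a respondent with
    ability theta answers the item correctly, given discrimination a
    (slope) and difficulty b (location on the ability scale)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Hypothetical items: a highly discriminating, easy item versus a weakly
# discriminating, hard item, both evaluated at average ability (theta = 0).
easy_sharp = icc_2pl(0.0, a=2.0, b=-1.0)
hard_flat = icc_2pl(0.0, a=0.5, b=1.0)
```

Items with higher a separate respondents of different ability more sharply, and b values spread across the ability range give the scale varying difficulty, which is the basis on which the 16-item scales were assembled.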
Overall, the results of this study indicate that feedback training does not improve the ability
to correctly distinguish between true and false information in social media following extreme
events. AUC values for participants receiving feedback improved in the
natural disaster context but not in the soft-target terror attack context. While it is
possible that the feedback training improved the AUC values for the natural
disaster condition, this is unlikely, since the effect does not appear in the regression analyses
once practice scores are accounted for. Therefore, it is most probable that feedback training, as
tested in this study, is not effective. However, the training was limited to 16 judgments per
participant, and it is possible that 16 feedback responses (8 true and 8 false) are too few to train
individuals; a larger number of training judgments could be more effective.
Alternatively, it could be that sensitivity and specificity require more than eight judgments each
to estimate reliably.
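As a sketch of how sensitivity and specificity are computed from the binary judgments (hypothetical responses; eight true and eight false posts, as in the training block):

```python
def sensitivity_specificity(truths, judgments):
    """truths[i] is True if post i is actually true; judgments[i] is 1 if the
    participant judged it true, else 0. Sensitivity = proportion of true
    posts judged true; specificity = proportion of false posts judged false."""
    true_calls = [j for t, j in zip(truths, judgments) if t]
    false_calls = [j for t, j in zip(truths, judgments) if not t]
    sensitivity = sum(true_calls) / len(true_calls)
    specificity = sum(1 - j for j in false_calls) / len(false_calls)
    return sensitivity, specificity

# Hypothetical participant: 8 true posts followed by 8 false posts.
truths = [True] * 8 + [False] * 8
judgments = [1, 1, 1, 0, 1, 1, 1, 1,  0, 0, 1, 0, 0, 1, 0, 0]
sens, spec = sensitivity_specificity(truths, judgments)  # 7/8 and 6/8
```

With only eight observations per quantity, each estimate moves in steps of 0.125, which illustrates why eight judgments may be too coarse for reliable estimation.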
One common predictor found throughout social media false information research is
political ideology. In the context of political social media posts, political conservatism has been
found to be a negative predictor of false information performance (Pennycook & Rand, 2019;
Pennycook et al., 2019), while in the context of extreme events political ideology has been found
to act as a moderator, with political conservatism positively related to false
information performance for soft-target terror attacks and negatively related for
natural disasters (Byrd & John, 2021). This study, however, did not find the same
moderating effect between extreme events. Consistent with the political false information research,
this study found political conservatism to be negatively related to specificity in both extreme
event contexts: participants who identified as more conservative on the self-report scale
performed worse at correctly identifying false information for both soft-target terror attacks and
natural disasters. The ability to correctly identify true information (sensitivity) was not
related to political ideology in either extreme event context, which is consistent with previous
literature.
Consistent with previous research (Bronstein et al., 2019; Pennycook & Rand, 2019), I
found cognitive reflection (CRT) to be positively related to specificity (i.e., the ability to
correctly identify false information) across both extreme event contexts, providing evidence
that the more cognitively reflective individuals are, the better they perform at identifying
false information in social media across a variety of different contexts. The
other cognitive measures, skepticism and conscientiousness, were also positively
related to specificity, but only in the soft-target terror attack context. Additionally, in the soft-target
terror context, I found a positive relationship between skepticism and sensitivity. The lack
of these findings in the natural disaster context could imply that these relationships are context
dependent rather than consistent across every extreme event.
6. Limitations
The aim of this study was to determine whether simple feedback training increases ability
to correctly identify information in social media following extreme events based on the content
of the social media post alone. While individuals might be able to access more
information about a social media post (e.g., username, IP address, number of followers, number
of likes or shares), or have the time and means to verify or fact-check information when actually
using social media platforms, participants in this study did not have access to that information.
During an extreme event, individuals most likely do not have the luxury of fact-checking
information or verifying the source, because information is either needed quickly or cannot yet
be verified. The additional information about the
posts was excluded, and the time constraints imposed, to isolate truth detection based on content
alone. Additionally, in extreme event contexts it is more common for information to come from
unfamiliar sources (e.g., ones close to the event) than from familiar sources. Further,
previous research has found the source of social media posts to be unrelated to correct
identification performance (Pennycook & Rand, 2019). However, the inclusion of such
information could potentially affect performance, so the results of this study should be
interpreted with that in mind. In addition, this study was restricted to 16 feedback judgments.
Therefore, this study makes no claim that more extensive feedback training would not be
effective. The results speak only to the effect that 16 feedback training trials have on
improving individuals' truth detection skills based on the content of the social media
posts alone.
7. Future Research
While this study assessed whether feedback training was successful in improving
information classification in social media, now that a reliable dataset of social media posts for
both natural disasters and soft-target terror events has been developed, there are other training
programs that could also be explored. One example is cognitive training. While the cognitive
training methods used in typical deception detection settings do not apply to social media (e.g., eye
contact, body language, or other physical cues), other features present in social media (e.g.,
capitalization, image quality, punctuation) can play a similar role. Previous research found a
decrease in individuals' likelihood of sharing, believing, and liking false social media posts
when participants first interacted with a list of general social media credibility
guidelines (Lutzke et al., 2019). Given the wide use of social media, additional research could
also extend to extreme events that took place outside the US, or to
extreme event contexts other than terrorism and natural disasters, such as pandemics or wars.
While US events and a US-based sample were used for this study, future research could expand
not only to events outside the US but also to non-US samples. Differences and similarities
in social media post identification across countries, cultures, and events could then be explored
and compared.
8. Conclusion
This study expanded previous research on social media following extreme events by
constructing, through the use of Item Response Theory, psychometrically sound measures of truth
detection for social media posts related to US natural disasters and soft-target terror events that
took place between 2016 and 2019. This study also extends previous deception detection feedback
training practices to the context of social media following extreme events in the US.
In this study, feedback training was not found to increase the ability to correctly identify true and
false information in social media posts following natural disasters or soft-target terror events.
Future research should investigate other training methods to improve correct information
identification in social media.
Two individual difference predictors were consistent across both extreme
event contexts: cognitive reflection (CRT) and self-identified political ideology. Both
were related to the ability to correctly identify false information (specificity).
As an individual's cognitive reflection increased, so did their ability to correctly identify false
information, while political conservatism was associated with decreased performance in
identifying false information in social media. Overall, those most challenged to detect truth in
social media posts following extreme events are those with low cognitive reflection and those
who self-identify as more politically conservative.
References
Akehurst, L., Bull, R., Vrij, A., & Köhnken, G. (2004). The effects of training professional
groups and lay persons to use criteria‐based content analysis to detect deception. Applied
Cognitive Psychology: The Official Journal of the Society for Applied Research in
Memory and Cognition, 18(7), 877-891.
An, X., & Yung, Y. F. (2014). Item response theory: What it is and how you can use the IRT
procedure to apply it. SAS Institute Inc. SAS364-2014, 10(4).
Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 election (No.
w23089). National Bureau of Economic Research. Retrieved May 5, 2019
Bond, G. D. (2008). Deception detection expertise. Law and Human Behavior, 32(4), 339-351.
Bond Jr, C., & DePaulo, B. (2006). Accuracy of deception judgments. Personality and
Social Psychology Review, 10(3), 214-234.
https://doi.org/10.1207/s15327957pspr1003_2
Bronstein, M., Pennycook, G., Bear, A., Rand, D., & Cannon, T. (2019). Belief in fake
news is associated with delusionality, dogmatism, religious fundamentalism, and reduced
analytic thinking. Journal of Applied Research in Memory and Cognition, 8(1), 108-117.
https://doi.org/10.1016/j.jarmac.2018.09.005
Byrd, K., & John, R. (2021, in press). Lies, damned lies, and social media following extreme
events. Risk Analysis.
Canfield, C., Fischhoff, B., & Davis, A. (2016). Quantifying phishing susceptibility for
detection and behavior decisions. Human Factors, 58(8), 1158-1172.
https://doi.org/10.1177/0018720816665025
Carpenter, B., Gelman, A., Hoffman, M., Lee, D., Goodrich, B., Betancourt, M., Brubaker, M.,
Guo, J., Li, P., & Riddell, A. (2017). Stan: A probabilistic programming language.
Journal of Statistical Software, 76(1), 1 - 32. doi:http://dx.doi.org/10.18637/jss.v076.i01
Clement, J. (2019, June 18). Share of U.S. adults who use social media 2019, by age. Retrieved
April 3, 2020, from https://www.statista.com/statistics/471370/us-adults-who-use-social-
networks-age/
Crews, J., Cao, J., Lin, M., Nunamaker, J., & Burgoon, J. (2007). A comparison of instructor-led
vs. web-based training for detecting deception. Journal of STEM Education, 8(1).
DePaulo, B. (1994). Spotting lies: Can humans learn to do better? Current Directions in
Psychological Science, 3(3), 83-86. http://dx.doi.org/10.1111/1467-8721.ep10770433
DePaulo, B., Stone, J., & Lassiter, G. (1985). Telling ingratiating lies: Effects of target
sex and target attractiveness on verbal and nonverbal deceptive success. Journal of
Personality and Social Psychology, 48(5), 1191.
http://dx.doi.org/10.1037/0022-3514.48.5.1191
deTurck, M. A., Feeley, T. H., & Roman, L. A. (1997). Vocal and visual cue training in
behavioral lie detection. Communication Research Reports, 14(3), 249-259.
deTurck, M. A., & Miller, G. R. (1990). Training observers to detect deception: Effects of self
monitoring and rehearsal. Human Communication Research, 16(4), 603-620.
Driskell, J. E. (2012). Effectiveness of deception detection training: A meta
analysis. Psychology, Crime & Law, 18(8), 713-731.
Fiedler, K., & Walka, I. (1993). Training lie detectors to use nonverbal cues instead of global
heuristics. Human Communication Research, 20(2), 199-223.
Frank, M. G., & Feeley, T. H. (2003). To catch a liar: Challenges for research in lie detection
training. Journal of Applied Communication Research, 31(1), 58-75.
Frederick, S. (2005). Cognitive reflection and decision making. Journal of Economic
Perspectives, 19, 25–42. https://doi. org/10.1257/089533005775196732
Hartwig, M., Granhag, P. A., Strömwall, L. A., & Kronkvist, O. (2006). Strategic use of
evidence during police interviews: When training to detect deception works. Law and
Human Behavior, 30(5), 603-619.
Hamidian, S., & Diab, M. (2015). Rumor detection and classification for Twitter data. Presented
at The Fifth International Conference on Social Media Technologies, Communication,
and Informatics (SOTICS).
Houston, J. B., Hawthorne, J., Perreault, M. F., Park, E. H., Goldstein Hode, M., Halliwell, M.
R., ... & Griffith, S. A. (2015). Social media and disasters: a functional framework for
social media use in disaster planning, response, and research. Disasters, 39(1), 1-22.
Huang, Y. L., Starbird, K., Orand, M., Stanek, S. A., & Pedersen, H. T. (2015, February).
Connected through crisis: Emotional proximity and the spread of misinformation online.
In Proceedings of the 18th ACM conference on computer supported cooperative work &
social computing (pp. 969-980).
Hurtt, R. (2010). Development of a scale to measure professional skepticism. Auditing: A
Journal of Practice and Theory, 29(1), 149-171.
https://doi.org/10.2308/aud.2010.29.1.149
Jaeger, P., Shneiderman, B., Fleischmann, K., Preece, J., Qu, Y., & Wu, P. (2007).
Community response grids: E-government, social networks, and effective emergency
management. Telecommunications Policy, 31(10- 11), 592-604.
https://doi.org/10.1016/j.telpol.2007.07.008
John, O., & Srivastava, S. (1999). The Big Five trait taxonomy: History, measurement, and
theoretical perspectives. In L. Pervin & O. John (Eds.), Handbook of personality: Theory
and research (2nd ed., pp. 102-138). New York: Guilford Press.
Kassin, S., & Fong, C. (1999). "I'm innocent!": Effects of training on judgments of truth
and deception in the interrogation room. Law and Human Behavior, 23(5), 499-516.
https://doi.org/10.1023/A:1022330011811
Keim, M., & Noji, E. (2011). Emergent use of social media: a new age of opportunity for
disaster resilience. American Journal of Disaster Medicine, 6(1), 47- 54.
(PMID:21466029)
Kemp, S. (2020, January 30). Digital 2020: 3.8 Billion people use social media. Retrieved April
3, 2020, from https://wearesocial.com/blog/2020/01/digital-2020-3-8-billion-people-use-
social-media
Köhnken, G. (1987). Training police officers to detect deceptive eyewitness statements: Does it
work? Social Behaviour, 2(1), 1-17
Kraut, R. (1980). Humans as lie detectors. Journal of Communication, 30(4), 209-218.
https://doi.org/10.1111/j.1460-2466.1980.tb02030.x
Levine, T. R., Feeley, T. H., McCornack, S. A., Hughes, M., & Harms, C. M. (2005). Testing the
effects of nonverbal behavior training on accuracy in deception detection with the
inclusion of a bogus training control group. Western Journal of Communication, 69(3),
203-217.
Lloyd, P. (2017, April 07). Google introduces new global fact-checking tag to help filter 'fake
news'. Retrieved April 9, 2018, from
http://www.dailymail.co.uk/sciencetech/article4389436/Google-introduces-new-global-
fact-checkingtags.html
Lord, F. (1980). Applications of item response theory to practical testing problems. Hillsdale,
NJ: Lawrence Erlbaum Associates
Lutzke, L., Drummond, C., Slovic, P., & Árvai, J. (2019). Priming critical thinking: Simple
interventions limit the influence of fake news about climate change on Facebook. Global
Environmental Change, 58, 101964.
Magis, D., Béland, S., Tuerlinckx, F., & De Boeck, P. (2010). A general framework and an R
package for the detection of dichotomous differential item functioning. Behavior
research methods, 42(3), 847-862.
Martin, J., Dubé, C., & Coovert, M. (2018). Signal detection theory (SDT) is effective for
modeling user behavior toward phishing and spear-phishing attacks. Human Factors,
60(8), 1179-1191. https://doi.org/10.1177/0018720818789818
Meissner, C., & Kassin, S. (2002). “He's guilty!”: Investigator bias in judgments of truth and
deception. Law and Human Behavior, 26(5), 469-480.
https://doi.org/10.1023/A:1020278620751
Morris, M. R., Counts, S., Roseway, A., Hoff, A., & Schwarz, J. (2012, February). Tweeting is
believing? Understanding microblog credibility perceptions. In Proceedings of the ACM
2012 conference on computer supported cooperative work (pp. 441-450).
Pennycook, G., & Rand, D. G. (2019). Lazy, not biased: Susceptibility to partisan fake news is
better explained by lack of reasoning than by motivated reasoning. Cognition, 188, 39-50.
Porter, S., Juodis, M., Brinke, L., Klein, R., & Wilson, K. (2010). Evaluation of the
effectiveness of a brief deception detection training program. Journal of Forensic
Psychiatry & Psychology, 21(1), 66-76.
Raju, N. S. (1990). Determining the significance of estimated signed and unsigned areas between
two item response functions. Applied Psychological Measurement, 14, 197-207. doi:
10.1177/ 014662169001400208
Rizopoulos, D. (2006). ltm: An R package for latent variable modeling and item response theory
analyses. Journal of statistical software, 17(5), 1-25.
Roets, A. (2017). ‘Fake news’: Incorrect, but hard to correct. The role of cognitive ability on the
impact of false information on social impressions. Intelligence, 65, 107-110.
Rubin, V. L. (2017). Deception detection and rumor debunking for social media. In The SAGE
Handbook of Social Media Research Methods (p. 342). Sage.
Salim, S. (2019, January 4). How much time do you spend on social media? Research says 142
minutes per day. Retrieved June 26, 2019, from
https://www.digitalinformationworld.com/2019/01/how-much-time-do-people-spend-
social-media-infographic.html
Santarcangelo, M., Cribbie, R. A., & Hubbard, A. S. E. (2004). Improving accuracy of veracity
judgment through cue training. Perceptual and motor skills, 98(3), 1039-1048.
Snider, M. (2017, June 30). Facebook aims to filter more fake news from news feeds. Retrieved
May 26, 2019, from https://www.usatoday.com/story/tech/news/2017/06/30/facebook-
aims-filter-more-fake-news-news-feeds/440621001/
Starbird, K., Maddock, J., Orand, M., Achterman, P., & Mason, R. M. (2014). Rumors, false
flags, and digital vigilantes: Misinformation on twitter after the 2013 Boston marathon
bombing. IConference 2014 Proceedings.
Suciu, P. (2019, October 11). More Americans are getting their news from social media.
Retrieved April 3, 2020, from https://www.forbes.com/sites/petersuciu/2019/10/11/more-
americans-are-getting-their-news-from-social-media/#1cb35fb43e17
Thomson, K. S., & Oppenheimer, D. M. (2016). Investigating an alternate form of the
cognitive reflection test. Judgment and Decision Making, 11, 99–113.
Vanian, J. (2018, January 19). Facebook, Twitter take new steps to combat fake news and
manipulation. Retrieved April 9, 2018, from http://fortune.com/2018/01/19/facebook-
twitter-news-feedrussia-ads/
Vrij, A., & Graham, S. (1997). Individual differences between liars and the ability to detect
lies. Expert Evidence, 5(4), 144-148.
Vrij, A., & Baxter, M. (1999). Accuracy and confidence in detecting truths and lies in
elaborations and denials: Truth bias, lie bias and individual differences. Expert Evidence,
7(1), 25-36.
Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news
online. Science, 359(6380), 1146-1151.
Wang, B., & Zhuang, J. (2018). Rumor response, debunking response, and decision makings of
misinformed Twitter users during disasters. Natural Hazards, 93(3), 1145-1162.
Wright, G. R., Berry, C. J., & Bird, G. (2012). “You can't kid a kidder”: association between
production and detection of deception in an interactive deception task. Frontiers in
human neuroscience, 6, 87.
Wu, K., Yang, S., & Zhu, K. Q. (2015, April). False rumors detection on sina weibo by
propagation structures. In 2015 IEEE 31st international conference on data
engineering (pp. 651-662). IEEE.
Zubiaga, A., Liakata, M., Procter, R., Hoi, G. W. S., & Tolmie, P. (2016). Analysing how people
orient to and spread rumours in social media by looking at conversational threads. PloS
One, 11(3).
Zubiaga, A., Aker, A., Bontcheva, K., Liakata, M., & Procter, R. (2018). Detection and
resolution of rumours in social media: A survey. ACM Computing Surveys (CSUR),
51(2), 1-36.
Zuckerman, M., Koestner, R., & Alton, A. (1984). Learning to detect deception. Journal of
Personality and Social Psychology, 46(3), 519-528.
Zuckerman, M., Koestner, R., & Colella, M. J. (1985). Learning to detect deception from three
communication channels. Journal of Nonverbal Behavior, 9(3), 188-194.
Abstract
With the increased reliance on social media to spread important information regarding extreme events, and the fact that most individuals do not perform well at correctly distinguishing true and false information in social media, it is necessary to find ways to measure and increase this ability. This study (N=800) consists of two parts: (1) the creation of 4 reliable scales to assess detection ability for soft-target terror attacks and natural disasters, and (2) the testing of feedback training to improve correct identification performance. In Study 1, 80 actual social media posts, half true and half false, half related to soft-target terror attacks and half to natural disasters, verified through Snopes.com, were presented to a US-based adult sample (N=402). Each individual was presented 40 actual social media posts, half true and half false, pertaining to either natural disasters or soft-target terror attacks that took place in the US between 2016 and 2019, and asked to make a binary judgment as to whether the post was true or false. Using the responses and item response theory (IRT), four scales of 16 items each were established based on the discriminability and difficulty of the items. The four scales measure ability to correctly identify true and false posts related to soft-target terror attacks and natural disasters. In Study 2 (N=398), using the 4 scales established in Study 1, participants made 32 binary judgments on social media posts related to either natural disasters or soft-target terror attacks. Each participant completed 16 practice judgments and 16 trial judgments. Participants randomly assigned to the feedback training condition received feedback (told they were either correct or incorrect) after each of the 16 practice judgments. Participants randomly assigned to the control condition did not receive any feedback following the practice judgments. Performance on the trial judgments was incentivized.
Feedback training was not found to increase ability to correctly distinguish between true and false information in social media in the context of extreme events. Political conservatism was found to be negatively related to the ability to correctly identify false information, while cognitive reflection (CRT) was found to be positively related to the ability to correctly identify false information across extreme event contexts. The four psychometric scales established provide a reliable measure of ability to identify true and false information in social media following extreme events and could be used for further research.
Asset Metadata
Creator
Byrd, Katie Elizabeth Sippel
(author)
Core Title
Measuring truth detection ability in social media following extreme events
School
College of Letters, Arts and Sciences
Degree
Master of Arts
Degree Program
Psychology
Publication Date
04/19/2021
Defense Date
03/19/2021
Publisher
University of Southern California
(original),
University of Southern California. Libraries
(digital)
Tag
fake news,feedback training,item characteristic curve,item response theory,misinformation,OAI-PMH Harvest
Language
English
Contributor
Electronically uploaded by the author
(provenance)
Advisor
John, Richard Sheffield (
committee chair
), Lai, Hok Chio (
committee member
), Monterosso, John (
committee member
)
Creator Email
katie.e.sippel@gmail.com,ksippel@usc.edu
Permanent Link (DOI)
https://doi.org/10.25549/usctheses-c89-448617
Unique identifier
UC11666577
Identifier
etd-ByrdKatieE-9490.pdf (filename),usctheses-c89-448617 (legacy record id)
Legacy Identifier
etd-ByrdKatieE-9490.pdf
Dmrecord
448617
Document Type
Thesis
Rights
Byrd, Katie Elizabeth Sippel
Type
texts
Source
University of Southern California
(contributing entity),
University of Southern California Dissertations and Theses
(collection)
Access Conditions
The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the a...
Repository Name
University of Southern California Digital Library
Repository Location
USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA