Running head: TRUTH EFFECT WARNINGS
Only Half of What I’ll Tell You is True:
How Experimental Procedures Lead to an Underestimation of the Truth Effect
Madeline Jalbert
Co-advisors: Norbert Schwarz and Eryn Newman
Department of Psychology
Master of Arts
University of Southern California
August, 2018
Table of Contents
Abstract
Introduction
    Present Research
Experiment 1
    Method
        Design
        Participants
        Materials
        Procedure
    Results
Experiment 2
    Method
        Design
        Participants
        Procedure
    Results
Experiment 3
    Method
        Design
        Participants
        Procedure
    Results
Effect Size Analysis
General Discussion
References
Figures
Abstract
Ambiguous information is judged to be more true when it has been seen or heard repeatedly than
when it is new. This “truth effect” has important consequences in the real world, where we are
repeatedly exposed to information that may or may not be true as we scroll through social media,
read the news, or talk with friends or coworkers. Yet, while false information in the real world
rarely comes with a warning label, false information in truth effect experiments often does.
Commonly used experimental procedures draw attention to the truth value of claims at exposure
through instructional warnings, alerting participants to potential falsehoods and limiting the
impact of repetition. Three experiments show that the size of the truth effect increases by a factor
of 2 to 5 when such warnings are avoided. The effect of pre-exposure warnings on later
truth judgments persists even after a delay of three to five days. These findings highlight that the
most common experimental procedures systematically underestimate the likely size of the truth
effect in the real world.
Introduction
Forty years ago, Hasher, Goldstein, and Toppino (1977) reported that the mere repetition
of a statement increased its acceptance as true. This so-called truth effect proved robust and
easily replicable, with a recent meta-analysis of seventy effect sizes reporting an average effect
size of d = .50 (95% CI [.43, .57]) for the difference in truth ratings between new and repeated
items (Dechêne, Stahl, Hansen, & Wänke, 2010). Unfortunately, the standard procedures of truth
effect experiments may not be a good approximation of the conditions of message repetition in
natural contexts. The majority of the experimental procedures used in these truth effect studies
result in exposure to information under conditions that differ from the way people regularly
encounter information outside of the lab; namely, these procedures focus attention on the fact
that some of the information is false. As discussed below, this may systematically decrease the
impact of repetition compared to conditions encountered outside of the lab, meaning that the
medium effect size observed in laboratory studies may actually be an underestimate of the
influence of repetition on perceived truth in the real world. We investigate whether this truth
effect is larger under conditions that more closely mirror how people encounter information in
the real world.
Theoretically, the link between repetition and truth can be explained through a fluency
account. Repetition increases the ease of processing information, and this experience of fluency
is accompanied by feelings of familiarity (e.g., Alter & Oppenheimer, 2009; Schwarz, 2010).
Importantly, people draw on familiarity as a cue for social consensus (Weaver, Garcia, Schwarz,
& Miller, 2007), operating under the lay theory that when more people agree with something, the
more likely it is to be true (Festinger, 1954). Consistent with this fluency account, people are
also more likely to rate statements as true if they feel easier to process in ways unrelated to
repetition, such as if they are easier to perceive (Parks & Toth, 2006; Reber & Schwarz, 1999;
Garcia-Marques, Silva, & Mello, 2016; Silva, Garcia-Marques, & Mello, 2016) or spoken with
an accent that is easier to understand (Lev-Ari & Keysar, 2010).
In this sense, repetition can be thought of as a particularly strong manipulation of fluency,
with repetition increasing judgments of truth more than perceptual manipulations (e.g. Parks &
Toth, 2006, Silva et al., 2016). Additionally, strategies effective in eliminating the truth effect
resulting from perceptual fluency, such as stressing the need for participants to be accurate when
assessing truth, are less effective in correcting repetition-based truth effects (e.g., Garcia-
Marques et al., 2016; Silva et al., 2016). These results indicate that the perceived validity
resulting from repetition is particularly deleterious and difficult to correct for once established.
A typical repetition-based truth effect study has three main parts. First, during an initial
exposure phase, participants view a series of ambiguous claims. Usually, half of these claims are
true and half are false. Exposure is followed by a delay period ranging from a few minutes to
several days. Finally, during a test phase, participants view both claims they saw during the
exposure phase and new claims and are asked to rate the truth of each claim.
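As an illustrative sketch only (not the authors' software; claim texts are hypothetical placeholders), the three-phase structure described above, with the 36-claims-exposed / 72-claims-at-test layout used in the experiments reported later, might be assembled as:

```python
import random

def build_session(claims_true, claims_false, n_exposed_per_kind, seed=0):
    """Return (exposure_list, test_list): exposure is half true / half false;
    the test phase mixes the repeated claims with an equal number of new ones."""
    rng = random.Random(seed)
    exposure = claims_true[:n_exposed_per_kind] + claims_false[:n_exposed_per_kind]
    new = (claims_true[n_exposed_per_kind:2 * n_exposed_per_kind]
           + claims_false[n_exposed_per_kind:2 * n_exposed_per_kind])
    test = exposure + new          # repeated + new claims at test
    rng.shuffle(exposure)          # random order in each phase
    rng.shuffle(test)
    return exposure, test

# Placeholder claims, matching the 36-exposed / 72-at-test counts in the thesis:
true_claims = [f"true claim {i}" for i in range(36)]
false_claims = [f"false claim {i}" for i in range(36)]
exposure, test = build_session(true_claims, false_claims, 18)
```

The delay phase between the two lists (minutes to days) is purely procedural and has no analogue in the sketch.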
Studies vary in the extent to which they draw participants’ attention to the truth of the
claims at initial exposure. In the majority of studies, researchers alert participants to the presence
of potential falsehoods prior to exposure in at least one of two ways: by explicitly telling
participants that claims may or may not be true (e.g., Gigerenzer, 1984; Hasher et al., 1977;
Mutter, Lindsey, & Pliske, 1995) or that “some” or “half” of the claims are false (e.g., Begg,
Anas, & Farinacci, 1992; Brown & Nix, 1996; Garcia-Marques et al., 2016; Nadarevic &
Erdfelder, 2014; Schwartz, 1982; Silva et al., 2016), and/or by asking participants to make a truth
judgment for each claim as it is presented (e.g., Arkes, Boehm, & Xu, 1991; Arkes, Hackett, &
Boehm, 1989; Bacon, 1979; Brown & Nix, 1996; Dechêne, Stahl, Hansen, & Wänke, 2009;
Garcia-Marques et al., 2016; Gigerenzer, 1984; Hawkins, & Hoch, 1992; Hasher et al., 1977;
Mutter et al., 1995; Schwartz, 1982; Toppino, & Brochin, 1989). In a smaller number of studies,
participants are not alerted to truth at exposure and are only asked to make unrelated judgments
(e.g., Hawkins & Hoch, 1992; Hawkins, Hoch, & Meyers Levy, 2001; Law, Hawkins, & Craik,
1998) or simply view the claims and attend to them for a later memory test (e.g. Mitchell,
Sullivan, Schacter, & Budson, 2006).
At the time of testing, all truth effect studies draw participants’ attention to the truth of
the claims simply by asking participants to judge the truth of each claim. Studies may
additionally provide participants with information immediately prior to test about the validity of
the statements, such as that “some” or “half” of the claims are false (e.g., Begg & Armour, 1991;
Hawkins, Hoch, & Meyers Levy, 2001; Law et al., 1998; Nadarevic & Erdfelder, 2014;
Schwartz, 1982; Silva et al., 2016).
Both instructional warnings and asking participants to make truth judgments draw
attention to truth. How these components of truth effect experiments influence the results has
received little attention. Past studies have only investigated the impact of one variant, namely
making truth judgments at exposure. In a meta-analysis, Dechêne et al. (2010) reported a
significantly smaller effect size when participants were asked to make a truth judgment during
initial exposure (d = .45, 95% CI [.37, .54]) compared to making other judgments or no
judgments at all at exposure (d = .62, 95% CI [.49, .75]). In fact, many reports lack the
procedural details necessary to investigate the impact of instructional warnings. This suggests
that the researchers may not expect the presence of these warnings to influence their results.
Unfortunately, related research on the encoding and correction of misinformation
suggests otherwise. Pre-exposure warnings can change the way misinformation is processed,
with important implications for its encoding and acceptance. In general, people are better able to
protect themselves from misinformation if a warning makes them skeptical of the accuracy of
that information during encoding. This increase in skepticism leads people to be more careful as
they encounter information and can protect them from simply accepting new information as true
(e.g., Fein, McCloskey, & Tomlinson, 1997; Greene, Flynn, & Loftus, 1982; Lewandowsky,
Ecker, Seifert, Schwarz, & Cook, 2012; Lewandowsky, Stritzke, Oberauer, & Morales, 2005;
Schul, 1993).
These findings are relevant for truth effect studies and suggest that the presence of a
standard instruction alerting participants that they will be seeing some false claims could
increase skepticism and lead participants to be more critical of information during exposure. As a
result, this standard instruction may lead to a systematic underestimation of the effect of
repetition on judgments of truth in the real world, where information does not usually appear
with a “this-might-be-false” tag. Indeed, without alerts about truth or other kinds of warnings,
people tend to trust communicated information and assume it is truthful, clear, and relevant to
their task (Grice, 1975; Schwarz, 1994). This makes close scrutiny at exposure unlikely, which
also suggests that under such conditions the truth effect may be larger than previously thought.
Present Research
Given these concerns, our current experiments aimed to systematically test how the
presence, nature, and timing of instructional warnings influence the size of the truth effect. In
Experiment 1, we examined the impact of a pre-exposure warning: half of the participants
received a warning prior to exposure and test, and half of the participants received a warning
prior to test only. Participants were warned either that “some” of the statements would be
false or that “half” of the statements would be false. Whereas both of these conditions
included a warning prior to test, we dropped this warning in Experiment 2 to shed more light on
the influence of pre-test warnings. Finally, in Experiment 3, we examined whether the impact of
warnings held up after a three to five day delay.
Experiment 1
The purpose of Experiment 1 (preregistered at: https://aspredicted.org/ir6fa.pdf) was to
investigate how the presence of an instructional warning prior to the initial exposure to trivia
claims would influence the size of the truth effect. Based on the observation that pre-exposure
warnings reduce the impact of misleading information in other paradigms (e.g., Fein et al., 1997;
Greene et al., 1982; Lewandowsky et al., 2005; Schul, 1993), we predicted that the impact of
repetition would be smaller when participants were aware that some of the information they saw
might be false; that is, participants would show a smaller truth effect and rate repeated claims as less true
when they received a warning compared to when they did not receive a warning. We also
investigated whether the nature of the warning mattered: participants were told that either some
of the claims they were seeing were true and some were false or that half of the claims were true
and half were false. We did not have a specific prediction about how these two warnings types
would differ.
Method
Design. We used a 2 (warning timing: before test vs. before exposure and before test) x 2
(warning type: “some” or “half”) x 2 (repetition: trivia claim repeated vs. new) mixed design,
manipulating warning timing and warning type between subjects and whether claims were
repeated or new within subjects.
Participants. Based on data from a pilot study, we chose to overpower our study and
recruited 100 participants for each between-subjects condition in all experiments. In this case we
had four warning conditions (pre-exposure and pre-test warnings or pre-test warning only with
either “some” or “half” directions), so we aimed for 400 participants total. All participants were
recruited from Mechanical Turk (MTurk) to take the survey using the online survey platform
Qualtrics. We only recruited MTurk users with a HIT approval rate greater than or equal to 95%.
Participants were told the experiment would take approximately 30 - 45 minutes and were paid
$1.20 for completing the study. The sample was restricted to participants in the United States, as
the trivia claims used were normed with this population.
In total, 405 participants completed the study. The small discrepancy between the desired and
actual number of participants is due to an interaction between Qualtrics and MTurk, which sometimes
results in the recruitment of slightly more or fewer participants than expected. The actual number
of participants in each condition can also vary because some people did not complete the survey or
submit a survey code. This is also the case for the other MTurk studies in this paper. One
participant was excluded because they reported their age to be less than 18 years old. This left
us with 206 participants in the “some” warning condition (N = 106 in the pre-test warning only
condition, N = 100 in the pre-exposure and pre-test warning condition) and 198 participants in the
“half” warning condition (N = 97 pre-test warning only, N = 101 pre-exposure and pre-test warning condition).
There were 154 males and 250 females, and the mean age was 37.48 (SD = 11.55).
Materials. We selected trivia claims from a set of previously normed trivia claims about
a variety of topics: sports, geography, food, animals, and science (Jalbert, Newman, & Schwarz,
2018). During this norming, MTurk participants were asked to judge whether each trivia claim
was true or false using a binary response scale. The trivia claims were chosen to be ambiguous –
only those rated as true between 35% and 65% of the time were included.
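The ambiguity criterion above amounts to a simple filter on the norming proportions. A minimal sketch, with invented claim names and norming values:

```python
# Hedged illustration of the 35%-65% ambiguity criterion described above.
# `norming` maps each claim to the proportion of norming raters who judged
# it true on the binary response scale; the data below are invented.
def ambiguous_claims(norming, lo=0.35, hi=0.65):
    """Return the claims whose proportion-rated-true falls in [lo, hi]."""
    return [claim for claim, p in norming.items() if lo <= p <= hi]

norming = {"claim A": 0.52, "claim B": 0.90, "claim C": 0.40, "claim D": 0.10}
print(ambiguous_claims(norming))  # only claims A and C pass the filter
```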
Participants saw 36 trivia claims during the initial exposure. In the test phase, participants
saw these same 36 claims as well as 36 new claims, making for 72 claims total. All claims were
presented for five seconds in a random order during each session. Half of the trivia claims were
true and half were false both during exposure and testing phases.
The trivia claims were counterbalanced such that half of the participants saw one set of
36 claims repeated, and half of the participants saw the other set of 36 claims repeated. Based on
the norming data, the proportion of true claims rated as true for counterbalance one and
counterbalance two were the same, as were the proportion of false claims rated as true in
counterbalance one and counterbalance two (all M = .52, SD = .08). Additionally, according to
norming data, both true and false claims were rated as true approximately the same proportion of
the time (both M = .52, SD = .08).
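A minimal sketch of this counterbalancing scheme; the alternating assignment rule is an assumption for illustration, since the thesis does not specify how participants were allocated to counterbalances:

```python
# Illustrative counterbalance assignment: half of participants see set_one
# repeated (set_two appearing only as new claims at test), the other half
# the reverse. The parity-based rule is hypothetical.
def assign_counterbalance(participant_index, set_one, set_two):
    if participant_index % 2 == 0:
        return {"repeated": set_one, "new": set_two}
    return {"repeated": set_two, "new": set_one}

set_one = list(range(36))        # placeholder claim ids
set_two = list(range(36, 72))
a = assign_counterbalance(0, set_one, set_two)
b = assign_counterbalance(1, set_one, set_two)
```

Because the two sets had matched norming profiles (all M = .52), either assignment yields equivalent materials.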
Procedure
Exposure Phase. Participants were told that they would see a series of trivia claims for
approximately the next three minutes. Half of the participants were given a pre-exposure
warning. This warning was either that some of the claims were true and some of the claims were
false or that half of the claims were true and half of the claims were false. The other half of the
participants were not given a warning.
Delay Phase. A twenty-minute delay followed the initial exposure in which participants
read and answered multiple-choice questions about articles unrelated to the trivia claims.
Test Phase. In the test phase, participants were told they would see another series of trivia
claims. All participants were told that half of the statements were ones that they had seen before
and half of the statements were new. All participants were also given a pre-test warning that
some of the statements were true and some of the statements were false. For each claim,
participants answered the question “Is this statement true or false?” on a six-point scale from
“definitely true” to “definitely false”. Participants could not see the number associated with
each choice.
Demographics. Finally, participants answered a few demographic questions, including
gender and age.
Results
We performed a 2 (warning timing: before test vs. before exposure and before test) x 2
(warning type: “some” or “half”) x 2 (repetition: trivia claim repeated vs. new) mixed model
ANOVA. Truth ratings were recoded from the six-point scale (1 = definitely false; 6 = definitely
true).
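With ratings recoded this way, the truth effect for a participant or condition is simply the mean rating of repeated claims minus the mean rating of new claims. A minimal illustration with invented ratings:

```python
# Illustrative per-participant truth effect score from recoded ratings
# (1 = definitely false ... 6 = definitely true). Ratings are invented.
from statistics import mean

def truth_effect(ratings_repeated, ratings_new):
    """Mean truth rating for repeated claims minus mean for new claims."""
    return mean(ratings_repeated) - mean(ratings_new)

effect = truth_effect([5, 4, 6, 5], [3, 4, 3, 4])
print(effect)  # 1.5
```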
Overall, there was no difference between the “some” and “half” warning conditions.
Truth ratings in the “some” vs. “half” warning type conditions were not significantly different,
F(1, 400) = .007, p = .933, partial η² < .001. There was also no significant interaction of warning
type with warning timing, F(1, 400) = .007, p = .935, partial η² < .001, or of warning type with
repetition, F(1, 400) = 1.319, p = .251, partial η² = .003. The three-way interaction with warning
timing and repetition was also not significant, F(1, 400) = .260, p = .611, partial η² = .001.
Consistent with typical truth effect findings, there was a main effect of repetition, such
that participants rated repeated claims (M = 4.29, 95% CI [4.20, 4.37]) as significantly more true
than new claims (M = 3.50, 95% CI [3.45, 3.55]); mean difference = .79 (95% CI [.71, .88]),
F(1, 400) = 335.25, p < .001, partial η² = .46. There was also a main effect of warning timing,
such that participants who received a pre-test warning only gave significantly higher truth
ratings overall (M = 4.08, 95% CI [4.00, 4.16]) than participants who received pre-exposure and
pre-test warnings (M = 3.70, 95% CI [3.63, 3.78]); mean difference = .38 (95% CI [.27, .48]),
F(1, 400) = 46.88, p < .001, partial η² = .11.
As can be seen in Figures 1 and 2, the main effects of warning timing and repetition were
modified by a significant interaction between warning timing and repetition, F(1, 400) = 88.99,
p < .001, partial η² = .18. In order to explain this interaction, we looked at simple effects using
a Bonferroni correction for multiple comparisons. Repeated measures d effect sizes were
calculated using Comprehensive Meta-Analysis software (Version 3.0) from the mean difference
and the SD of the difference, taking into account the correlation between repeated and new claims
and corrected for small sample size. There was a significant truth effect in the pre-exposure and
pre-test warning conditions, F(1, 400) = 386.38, p < .001, partial η² = .49, d = .64 (95% CI
[.45, .84]), as well as in the pre-test warning only conditions, F(1, 400) = 39.24, p < .001,
partial η² = .09, d = 1.47 (95% CI [1.23, 1.70]). The effect was significantly larger in the pre-test
warning only condition. This larger truth effect was fully driven by how participants rated
repeated claims: while participants did not rate new claims differently across conditions, F(1,
400) = .42, p = .517, partial η² < .01, repeated claims were rated significantly more true when
participants were not given a pre-exposure warning (M = 4.68, 95% CI [4.56, 4.80]) than when
participants were given a pre-exposure warning (M = 3.90, 95% CI [3.78, 4.01]); mean difference
= .78 (95% CI [.62, .95]), F(1, 400) = 85.94, p < .001, partial η² = .18. Thus, warning
participants about the presence of falsehoods prior to encoding reduced the effect of repetition on
the perceived truth of the claims.
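The repeated-measures d described above can be sketched with one common formulation (Morris & DeShon, 2002): standardize the mean difference by the SD of the difference scores, rescale by the repeated/new correlation, and apply Hedges' small-sample correction. This is a hedged sketch — Comprehensive Meta-Analysis's exact algorithm may differ, and all numbers below are illustrative, not the thesis data:

```python
# Hedged sketch of a repeated-measures effect size with correlation
# adjustment and small-sample correction. Inputs are invented examples.
import math

def d_repeated_measures(mean_diff, sd_diff, r, n):
    d_z = mean_diff / sd_diff              # standardized difference score
    d_rm = d_z * math.sqrt(2 * (1 - r))    # rescale to original-metric d
    j = 1 - 3 / (4 * (n - 1) - 1)          # Hedges' small-sample correction
    return j * d_rm

# Invented illustrative numbers (not the thesis data):
d = d_repeated_measures(mean_diff=0.79, sd_diff=0.90, r=0.30, n=100)
```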
Experiment 2
In the first experiment, we manipulated warnings prior to exposure. In Experiment 2
(preregistered at http://aspredicted.org/blind.php?x=8yz2q5), we also tested the effects of
warnings prior to test. There is evidence that, in some cases, warnings before test may protect
participants from the truth effect. In particular, Nadarevic and Aßfalg (2017) reported one
experiment in which they were able to reduce the size of the truth effect with a pre-test warning.
However, this reduction occurred only when participants were given very explicit instructions
about what the truth effect was and were able to accurately answer questions demonstrating their
understanding of the phenomenon. Participants were also told explicitly that their job was to
prevent the truth effect. This suggests that a simple warning prior to test that some claims are
false may not be enough to change the way participants rate claims and may have little impact on
the size of the truth effect.
In order to investigate the effect of a simple pre-test warning, we replicated the
Experiment 1 “half” direction conditions but added a condition in which participants were not
given a warning at any point, neither prior to exposure nor prior to test. Our expectation was that
this no warning condition would look similar to the condition where participants were given a
pre-test warning only; that is, the warning prior to test would be ineffective and participants
would rate repeated claims more true in the pre-test warning only condition and no warning
condition than in the pre-exposure warning condition. We had this expectation for two reasons.
First, if participants are only warned prior to test, it will be difficult to correct for claims that
were already encoded as true during the initial exposure. Second, making a truth rating at test is
already drawing attention to truth independent of the warning, so an additional warning will
provide little new information.
Method
Design. We used a 3 (warning timing: before exposure and before test, only before test,
or no warning) x 2 (repetition: trivia claim repeated vs. new) mixed design, manipulating
warning timing between subjects and whether claims were repeated within subjects.
Participants. We recruited 100 participants for each of the three between-subjects
conditions from MTurk, and a total of 297 participants completed the study; however, due to an
error in recruitment, 15 participants had taken a past truth effect survey with the same materials,
and these participants were excluded. This left us with 282 participants: 96 participants in the
pre-exposure and pre-test warning condition, 97 participants in the pre-test warning only
condition, and 89 in the no warning condition. The mean age was 37.18 (SD = 12.13), and 127
participants were male and 155 were female.
Procedure. This study was an exact replication of the Experiment 1 “half” warning
conditions with one change: the addition of a no warning condition, in which participants were
never told, either prior to exposure or prior to test, that half of the statements were true and
half of the statements were false.
Results
We performed a 3 (warning timing: before exposure and before test, only before test, or
no warning) x 2 (repetition: trivia claim repeated vs. new) mixed model ANOVA. Consistent
with past findings, there was a main effect of repetition, such that participants rated repeated
claims (M = 4.56, 95% CI [4.46, 4.67]) as significantly more true than new claims (M = 3.48,
95% CI [3.41, 3.54]), mean difference = 1.09 (95% CI [.97, 1.20]), F(1, 279) = 327.93, p < .001,
partial η² = .54, and a main effect of warning timing, such that participants who
received warnings prior to test only (M = 4.15, 95% CI [4.05, 4.26]) or no warning (M = 4.10,
95% CI [3.99, 4.21]) rated statements to be more true than participants who received a warning
prior to exposure and prior to test (M = 3.81, 95% CI [3.70, 3.91]), F(2, 279) = 11.92, p < .001,
partial η² = .08.
Consistent with Experiment 1, we found a significant interaction between warning timing and
repetition, F(2, 279) = 29.39, p < .001, partial η² = .17. As can be seen in Figure 3, this
interaction was again driven largely by repeated claims, with simple effects analyses revealing
that repeated claims were rated as significantly more true in both the no warning condition and
the pre-test warning only condition than in the condition with a pre-exposure and pre-test
warning (both p < .001). As expected, the truth ratings for repeated claims in the no warning and
pre-test warning only conditions were not significantly different (p = 1.00). There was no
significant simple main effect among new claims, F(2, 279) = 2.44, p = .09, partial η² = .02.
We found a significant truth effect within all three conditions. This effect was smaller
when participants received a pre-exposure and pre-test warning, F(1, 279) = 18.78, p < .001,
partial η² = .06, d = .72 (95% CI [.45, .98]), than when participants were given a pre-test
warning only, F(1, 279) = 206.35, p < .001, partial η² = .43, d = 1.85 (95% CI [1.37, 2.32]), or
were given no warning at all, F(1, 279) = 159.01, p < .001, partial η² = .36, d = 1.70 (95% CI
[1.24, 2.15]). Overall, warnings were once again effective at reducing the size of the truth effect
when given prior to exposure, with this effect largely driven by ratings of repeated claims. As
anticipated, pre-test warnings had no impact on truth ratings or the size of the truth effect.
Experiment 3
After a delay, people often remember information but forget the context in which it was
learned (Henkel & Mattson, 2011; Skurnik, Yoon, Park, & Schwarz, 2005). Thus, it is possible
that pre-exposure warnings are only effective under conditions of a short delay. However, in the
real world, there may be a long delay between when people view information and when they are
re-exposed to it. The purpose of Experiment 3 (preregistered at
http://aspredicted.org/blind.php?x=rp2gc5) was to see if the effect of pre-exposure warnings
would still hold up after a delay of three to five days. We expected that the results would mirror
the last two experiments and that pre-exposure warnings would still lead to a decrease in the
rated truth of repeated claims compared to repeated claims in the no warning condition.
Method
Design. We used a 2 (warning timing: before exposure only vs. no warning) x 2
(repetition: trivia claim repeated vs. new) mixed design, manipulating warning timing between
subjects and whether claims were repeated within subjects.
Participants. We recruited participants from the University of Southern California
psychology subject pool. Participants completed the survey for course credit. We attempted to
recruit up to 200 participants – 100 per between-subjects condition. However, because we were
collecting data near the end of the semester, we ended data collection when the subject pool
closed and included all participants who had completed the experiment at that point. Because of
the importance of keeping a consistent delay time, we only included participants who completed
both parts of the experiment within the correct timeline. One additional participant was excluded
because they took over twenty-four hours to submit the first part of the experiment, which may
have affected their delay time if they completed most of the survey but waited a significant
amount of time to submit. Overall, 54 participants completed both parts of the experiment: 29 in
the no warning condition and 25 in the pre-exposure warning condition. The average age was
20.74 (SD = 2.41), and there were 10 males and 44 females.
Procedure. When participants signed up, they agreed to complete both parts of a two part
online survey. Part 1 of the experiment was the same as the initial exposure phase in the first
two experiments, where participants simply read 36 trivia claims. We added a few general
questions at the end of part 1 to provide a rationale for presenting the claims, including “How
many statements do you think you read?” and “How many minutes do you think it took you to
read the statements?”. After a three-day delay, participants received a link to part 2 of the survey
and were given 48 hours to complete it. Part 2 of the experiment was the same as the testing
phase in past experiments, where participants were shown the 36 old claims along with 36 new
claims and asked to rate the truth of each.
This experiment had two warning conditions: in the pre-exposure warning condition,
participants received the warning “half the statements are true and half the statements are false”
prior to exposure only, and in the no warning condition, participants did not receive any warning.
Results
The general pattern of results was the same as in the previous three experiments, with the
impact of warnings holding up after the delay (Figure 4). We performed a 2 (warning timing:
before exposure only vs. no warning) x 2 (repetition: trivia claim repeated vs. new) mixed model
ANOVA. There was a main effect of repetition, with repeated claims rated as significantly more true (M = 4.11, 95% CI [3.94, 4.27]) than new claims (M = 3.58, 95% CI [3.48, 3.67]), mean difference = .53, 95% CI [.38, .68], F(1, 52) = 48.37, p < .001, partial η² = .48, and a main effect of warning condition, with participants given a warning prior to exposure rating claims as less true (M = 3.64, 95% CI [3.48, 3.80]) than participants given no warning (M = 4.05, 95% CI [3.90, 4.20]), mean difference = .41, 95% CI [.19, .63], F(1, 52) = 13.68, p = .001, partial η² = .21. Finally, there was a significant interaction, F(1, 52) = 20.45, p < .001, partial η² = .28.
Following up this interaction with simple effects analyses, there was a significant truth effect in the no warning condition, F(1, 52) = 71.13, p < .001, partial η² = .58, d = 1.65, 95% CI [.94, 2.36], but not in the pre-exposure warning condition, F(1, 52) = 2.76, p = .103, partial η² = .05, d = .33, 95% CI [.04, .61]. Repeated claims were rated significantly more true in the no warning condition (M = 4.48, 95% CI [4.26, 4.70]) than in the pre-exposure warning condition (M = 3.73, 95% CI [3.49, 3.97]), mean difference = .75, 95% CI [.43, 1.08], F(1, 52) = 21.36, p < .001, partial η² = .29, whereas truth judgments for new statements did not differ across conditions, F(1, 52) = .39, p = .533, partial η² = .01.
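The partial η² values above follow directly from the reported F statistics and degrees of freedom via η²_p = F·df_effect / (F·df_effect + df_error). A minimal Python sketch (illustrative only, not part of the original analysis) recovering the reported values:

```python
def partial_eta_squared(F, df_effect, df_error):
    """Recover partial eta-squared from an F statistic and its degrees of freedom."""
    return (F * df_effect) / (F * df_effect + df_error)

# Effects from the 2 x 2 mixed ANOVA above (all tested with df = 1, 52):
print(round(partial_eta_squared(48.37, 1, 52), 2))  # repetition main effect -> 0.48
print(round(partial_eta_squared(13.68, 1, 52), 2))  # warning main effect -> 0.21
print(round(partial_eta_squared(20.45, 1, 52), 2))  # interaction -> 0.28
print(round(partial_eta_squared(71.13, 1, 52), 2))  # simple truth effect, no warning -> 0.58
```

Each printed value matches the corresponding reported effect size to two decimal places, confirming the internal consistency of the F statistics and η² values.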
Effect Size Analysis
A forest plot of all effect sizes within each warning timing and warning type condition
can be seen in Figure 5. Analysis was performed using Comprehensive Meta-Analysis Software
(Version 3.0). Due to the small number of studies, tau-squared was pooled across studies,
following recommendations by Borenstein, Hedges, Higgins, and Rothstein (2009). A random
effects model was used and effect sizes were fixed across subgroups. Effect sizes were corrected
for small sample biases (Borenstein et al., 2009). These effects were grouped into three
subgroups: conditions where no warnings were given, conditions with pre-test warnings only,
and conditions with pre-exposure warnings (both with and without pre-test warnings). Each
effect fit into exactly one of these three subgroups, so no effects were double-counted.
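The small-sample bias correction referenced above (Borenstein et al., 2009) multiplies Cohen's d by the factor J = 1 − 3/(4·df − 1), yielding Hedges' g. A minimal sketch of the correction; the example d and df values are hypothetical, chosen only for illustration:

```python
def hedges_g(d, df):
    """Apply the small-sample bias correction of Borenstein et al. (2009):
    J = 1 - 3 / (4 * df - 1), so g = J * d."""
    J = 1 - 3 / (4 * df - 1)
    return J * d

# Hypothetical example: an uncorrected d of 1.00 with df = 52
print(round(hedges_g(1.00, 52), 3))  # -> 0.986
```

The correction shrinks d slightly toward zero, with a larger adjustment at smaller sample sizes.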
The total effect size across all conditions was d = 1.00, 95% CI [.85, 1.15]. There was evidence of a significant difference among warning conditions, Q(2) = 43.55, p < .001. Follow-up analyses using a Bonferroni correction for multiple comparisons revealed that this truth effect was significantly smaller in the pre-exposure warning condition (d = .58, 95% CI [.38, .77]) than in both the pre-test warning only condition (d = 1.58, 95% CI [1.29, 1.86]) and in the no
warning condition (d = 1.68, 95% CI [1.26, 2.11]); both Q(1) > 22, p < .001. The pre-test warning only and no warning conditions were not significantly different, Q(1) = .404, p = 1.000.
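As an approximate consistency check (the analysis itself was run in Comprehensive Meta-Analysis), the overall effect and the between-subgroup Q statistic can be recovered from the reported subgroup estimates, assuming inverse-variance weights implied by the 95% confidence intervals:

```python
def se_from_ci(lo, hi, z=1.96):
    """Standard error implied by a symmetric 95% confidence interval."""
    return (hi - lo) / (2 * z)

# Subgroup summary effects (d, CI lower, CI upper) reported above:
subgroups = {
    "pre-exposure warning":  (0.58, 0.38, 0.77),
    "pre-test warning only": (1.58, 1.29, 1.86),
    "no warning":            (1.68, 1.26, 2.11),
}

weights = {name: 1 / se_from_ci(lo, hi) ** 2 for name, (d, lo, hi) in subgroups.items()}
d_overall = sum(w * subgroups[name][0] for name, w in weights.items()) / sum(weights.values())
Q_between = sum(w * (subgroups[name][0] - d_overall) ** 2 for name, w in weights.items())

print(round(d_overall, 2))  # -> 1.0 (reported: d = 1.00)
print(round(Q_between, 1))  # -> 43.6 (reported: Q(2) = 43.55)
```

The recovered values match the reported overall d and Q to rounding error, supporting the internal consistency of the subgroup estimates.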
Throughout these three experiments, a remarkably consistent pattern has emerged: pre-
exposure warnings significantly reduced the size of the truth effect compared to when there were
no pre-exposure warnings. In contrast, warnings given only prior to testing exerted no influence
and failed to reduce the truth effect compared to a condition without any warning prior to test.
General Discussion
Across three experiments, we found that an instructional warning prior to exposure
alerting participants to the fact that they would see both true and false claims during the
experiment significantly reduced the size of the truth effect compared to not receiving a warning
(Experiments 1 to 3). This reduction occurred both when people were told that “some” of the statements were false and when they were told that “half” were false (Experiment 1). Given that instructional warnings
prior to exposure are a widely used feature of truth effect studies, this robust finding implies that
the bulk of this literature underestimates the influence of repetition under natural conditions,
where falsehoods do not come with a warning label. The size of this underestimate is large, ranging from a factor of 2 to a factor of 5 in our experiments.
When comparing our results to results from previous literature, the size of the overall truth effect we found when participants were given a pre-exposure warning (d = .58, 95% CI [.38, .77]) was similar to the overall between-items truth effect size reported by Dechêne et al. (2010; d = .50, 95% CI [.43, .57]). This is consistent with the observation that most prior truth
effect studies warn participants about truth at initial exposure through instructional warnings or
asking participants to rate the truth of each claim. In fact, 51 out of 70 studies reported by
Dechêne et al. (2010) asked participants to rate truth at exposure. Of the 19 remaining studies,
at least some of these reports draw attention to truth at exposure through instructional warnings
(e.g., Begg et al., 1992; Silva et al., 2016) but information about instructions is not always
included, making these warnings difficult to code. In contrast, the average effect sizes we found
when participants were not given pre-exposure warnings (d = 1.58, 95% CI [1.29, 1.86] in the
warning pretest only condition and d = 1.68, 95% CI [1.26, 2.11] in the no warning condition)
are much larger.
Our studies were not designed to test why instructional warnings reduce the size of the
truth effect. They nevertheless allow us to evaluate the plausibility of some possible
mechanisms. One possible mechanism is that alerting people that not all of the statements are
true at exposure simply made people more careful at test. There are at least three reasons why
this explanation is unlikely for the current results. First, pre-exposure warnings generally
influenced people’s responses on repeated claims, but not new claims. If the alert about false
claims made people more careful overall, there should have been a reduction in true responses
for all claims at test: a criterion shift. Instead, the alert at encoding only made people less
convinced about statements they had seen before. Second, an identical warning did not reduce
the size of the truth effect when it was given immediately before test (Experiment 2). If the
reduction in the truth effect was driven by making participants more critical at test, alerting
people right before the test should have produced the same pattern of results. Finally, the effects
of warnings on reducing the size of the truth effect persisted after a delay of three to five days
(Experiment 3).
Instead, our findings suggest that alerting people about the presence of false statements
before they see them changes the way those statements are encoded and later remembered at test.
Past research has demonstrated that when participants are asked to rate truth at encoding rather
than engaged in other tasks like rating comprehension, they are less susceptible to the truth effect
(Dechêne et al., 2010; Hawkins & Hoch, 1992). In the context of the experiments reported here,
we might think of alerts that some claims are false as a methodological parallel to explicitly
asking participants to make truth judgments at the time of initial exposure.
How might these warnings change the way people process information? In general,
people tend to assume that information that is communicated to them is clear and accurate—
except when certain cues alert the listener that this assumption is violated (Grice, 1975; Schwarz,
1994). From this perspective, warnings or being asked to rate truth should interrupt the default
acceptance of information, putting participants in a state of skepticism or distrust. While all
participants will necessarily be distrustful at test when being asked to make judgments of truth,
only participants who receive warnings or are asked to make judgments of truth when seeing the
claims for the first time should be distrustful at exposure.
From the misinformation literature, we know that a state of distrust during the initial
encoding of information can protect people against the continued influence of misinformation
after it is retracted (Fein et al., 1997; Lewandowsky et al., 2005). One explanation for these
findings is that a skeptical mindset allows participants to more accurately distinguish between
accurate and inaccurate information (Lewandowsky et al., 2012). Of course, in the experiments
we report here, participants are viewing a large number of trivia claims whose true/false status is unclear—participants likely do not have the knowledge to discriminate between accurate
and inaccurate claims. As a consequence, it is unlikely that a skeptical mindset would help
participants to tag and remember most statements as true or false at exposure.
Rather, it is more likely that warnings could change the nature of the semantic
associations people make when they encode the claims. Literature on the impact of distrust
reveals that priming participants with distrust leads them to automatically consider more incongruent
associations. In general, participants generate more message-congruent information while in a
trustful mindset and engage in more counterfactual thinking when in a distrustful mindset
(Kleiman, Sher, Elster, & Mayo, 2015; Schul, Mayo, & Burnstein, 2004). This is similar to a
suggestion by Schul (1993) that warnings lead participants to generate more alternative accounts
for information they are given. In the context of the current experiments, a participant may read
claims wondering if they could be false and come up with reasons why they may not be true. If
these initial associations are retrieved or available when reading the statements again during the
test, these “distrust” associations might override the effect of repetition when judging truth,
making participants less likely to judge the information to be true. This explanation also fits with
a fluency account. When information is consistent with what we already know, it flows smoothly
and we nod along. However, when information feels inconsistent, we stumble and are more
careful (Schwarz et al., 2016). In this case, the “distrust” associations are inconsistent with the
statement we read, which should act to disrupt fluency.
Similar to how warnings prior to exposure can protect people against misinformation, we
demonstrate that warnings prior to exposure can also protect people against accepting something
as true simply because its repetition made it feel familiar. Importantly, this effect does not rely
on participants making an explicit judgment of truth during exposure: a mere statement alerting
them to the potential of falsehoods is enough. However, it is also important to note that warnings
may be more or less effective depending on what other tasks participants are performing. If
warnings were combined with truth judgments they could be particularly effective; participants
would be reminded throughout exposure that information could be false, making skepticism
about truth even more salient. Conversely, warnings may be less effective if participants are
asked to perform other tasks during encoding, taking their mind away from processing for truth.
For example, even when all participants are given a warning that statements may or may not be
true, those given the task of categorizing statements into different knowledge domains still show
a larger truth effect compared to those asked to rate statements for truth (Nadarevic & Erdfelder,
2014).
In sum, the context in which we encode information matters for the way we later use that
information to make judgments. This has practical importance in the real world, where we rarely
are given a warning before viewing information of questionable veracity. For example, it
suggests that the frequent repetition of claims on social media and 24-hour cable channels
enhances the acceptance of fake news as true. It also underscores the idea that we, as researchers,
need to be careful in the information we provide to participants during our studies. Truth effect
experiments often draw participants’ attention to truth, whether through explicit judgments or
through instructional warnings, inducing a state of skepticism. This skepticism changes the way
participants process the information and has the ironic effect of reducing the size of the truth
effect. Unfortunately, because the real world does not warn us of the presence of potential
falsehoods, our lab studies are likely underestimating how repetition affects our belief in the
information we encounter on a daily basis.
References
Alter, A. L., & Oppenheimer, D. M. (2009). Uniting the tribes of fluency to form a
metacognitive nation. Personality and Social Psychology Review, 13, 219-235.
Arkes, H. R., Boehm, L. E., & Xu, G. (1991). Determinants of judged validity. Journal of
Experimental Social Psychology, 27, 576-605.
Arkes, H. R., Hackett, C., & Boehm, L. (1989). The generality of the relation between familiarity
and judged validity. Journal of Behavioral Decision Making, 2, 81-94.
Bacon, F. T. (1979). Credibility of repeated statements: Memory for trivia. Journal of
Experimental Psychology: Human Learning and Memory, 5, 241-252.
Begg, I., Anas, A., & Farinacci, S. (1992). Dissociation of processes in belief: Source
recollection, statement familiarity, and the illusion of truth. Journal of Experimental
Psychology: General, 121, 446-458.
Begg, I., & Armour, V. (1991). Repetition and the ring of truth: Biasing comments. Canadian
Journal of Behavioural Science, 23, 195-213.
Borenstein, M., Hedges, L. V., Higgins, J. P. T., & Rothstein, H. R. (2009). Introduction to meta-
analysis. Chichester, UK: Wiley.
Brown, A. S., & Nix, L. A. (1996). Turning lies into truths: Referential validation of falsehoods.
Journal of Experimental Psychology: Learning, Memory, and Cognition, 22, 1088-1100.
Dechêne, A., Stahl, C., Hansen, J., & Wänke, M. (2009). Mix me a list: Context moderates the
truth effect and the mere exposure effect. Journal of Experimental Social Psychology, 45,
1117-1122.
Dechêne, A., Stahl, C., Hansen, J., & Wänke, M. (2010). The truth about the truth: A meta-
analytic review of the truth effect. Personality and Social Psychology Review, 14, 238-
257.
Fein, S., McCloskey, A. L., & Tomlinson, T. M. (1997). Can the jury disregard that information?
The use of suspicion to reduce the prejudicial effects of pretrial publicity and
inadmissible testimony. Personality and Social Psychology Bulletin, 23, 1215-1226.
Festinger, L. (1954). A theory of social comparison processes. Human Relations, 7, 117-140.
Garcia-Marques, T., Silva, R. R., & Mello, J. (2016). Judging the truth-value of a statement in
and out of a deep processing context. Social Cognition, 34, 40-54.
Gigerenzer, G. (1984). External validity of laboratory experiments: The frequency-validity
relationship. American Journal of Psychology, 97, 185-195.
Greene, E., Flynn, M. S., & Loftus, E. F. (1982). Inducing resistance to misleading
information. Journal of Verbal Learning and Verbal Behavior, 21, 207-219.
Grice, H. P. (1975). Logic and conversation. In P. Cole, & J.L. Morgan (Eds.), Syntax and
semantics, Vol.3: Speech acts (pp. 41 - 58). New York: Academic Press.
Hasher, L., Goldstein, D., & Toppino, T. (1977). Frequency and the conference of referential
validity. Journal of Verbal Learning and Verbal Behavior, 16, 107-112.
Hawkins, S. A., & Hoch, S. J. (1992). Low-involvement learning: Memory without evaluation.
Journal of Consumer Research, 19, 212-225.
Hawkins, S. A., Hoch, S. J., & Meyers-Levy, J. (2001). Low-involvement learning: Repetition and coherence in familiarity and belief. Journal of Consumer Psychology, 11, 1-11.
Henkel, L. A., & Mattson, M. E. (2011). Reading is believing: The truth effect and source
credibility. Consciousness and Cognition, 20, 1705-1721.
Jalbert, M., Newman, E., & Schwarz, N. (2018). Trivia claim norming. Unpublished manuscript.
Kleiman, T., Sher, N., Elster, A., & Mayo, R. (2015). Accessibility is a matter of trust:
Dispositional and contextual distrust blocks accessibility effects. Cognition, 142, 333-
344.
Law, S., Hawkins, S. A., & Craik, F. I. M. (1998). Repetition induced belief in the elderly:
Rehabilitating age-related memory deficits. Journal of Consumer Research, 25, 91-107.
Lev-Ari, S., & Keysar, B. (2010). Why don't we believe non-native speakers? The influence of
accent on credibility. Journal of Experimental Social Psychology, 46, 1093-1096.
Lewandowsky, S., Ecker, U. K., Seifert, C. M., Schwarz, N., & Cook, J. (2012). Misinformation and its correction: Continued influence and successful debiasing. Psychological Science in the Public Interest, 13, 106-131.
Lewandowsky, S., Stritzke, W. G., Oberauer, K., & Morales, M. (2005). Memory for fact,
fiction, and misinformation: The Iraq War 2003. Psychological Science, 16, 190-195.
Loftus, E. F. (2005). Planting misinformation in the human mind: A 30-year investigation of the
malleability of memory. Learning and Memory, 12, 361-366.
Mitchell, J. P., Sullivan, A. L., Schacter, D. L., & Budson, A. E. (2006). Misattribution errors in
Alzheimer’s disease: The illusory truth effect. Neuropsychology, 20, 185-192.
Mutter, S. A., Lindsey, S. E., & Pliske, R. M. (1995). Aging and credibility judgment. Aging and
Cognition, 2, 89-107.
Nadarevic, L., & Aßfalg, A. (2017). Unveiling the truth: Warnings reduce the repetition-based truth effect. Psychological Research, 81, 814-826.
Nadarevic, L., & Erdfelder, E. (2014). Initial judgment task and delay of the final validity-rating
task moderate the truth effect. Consciousness and Cognition, 23, 74-84.
Parks, C. M., & Toth, J. P. (2006). Fluency, familiarity, aging, and the illusion of truth. Aging,
Neuropsychology, and Cognition, 13, 225-253.
Reber, R., & Schwarz, N. (1999). Effects of perceptual fluency on judgments of
truth. Consciousness and Cognition, 8, 338-342.
Schul, Y. (1993). When warning succeeds: The effect of warning on success in ignoring invalid
information. Journal of Experimental Social Psychology, 29, 42-62.
Schul, Y., Mayo, R., & Burnstein, E. (2004). Encoding under trust and distrust: The spontaneous activation of incongruent cognitions. Journal of Personality and Social Psychology, 86, 668.
Schwarz, N. (1994). Judgment in a social context: Biases, shortcomings, and the logic of
conversation. Advances in Experimental Social Psychology, 26, 123-162.
Schwarz, N. (2010). Meaning in context: Metacognitive experiences. In B. Mesquita, L. F.
Barrett, & E. R. Smith (Eds.), The mind in context (pp. 105–125). New York: Guilford
Press.
Schwarz, N., Newman, E., & Leach, W. (2016). Making the truth stick and the myths fade:
Lessons from cognitive psychology. Behavioral Science & Policy, 2, 85-95.
Schwartz, M. (1982). Repetition and rated truth value of statements. American Journal of
Psychology, 95, 393-407.
Silva, R. R., Garcia-Marques, T., & Mello, J. (2016). The differential effects of fluency due to
repetition and fluency due to color contrast on judgments of truth. Psychological
Research, 80, 821-837.
Skurnik, I., Yoon, C., Park, D. C., & Schwarz, N. (2005). How warnings about false claims
become recommendations. Journal of Consumer Research, 31, 713-724.
Toppino, T. C., & Brochin, H. A. (1989). Learning from tests: The case of true-false
examinations. Journal of Educational Research, 83, 119-124.
Weaver, K., Garcia, S. M., Schwarz, N., & Miller, D. T. (2007). Inferring the popularity of an opinion from its familiarity: A repetitive voice can sound like a chorus. Journal of Personality and Social Psychology, 92, 821.
Figures
Figure 1. Mean truth ratings for repeated and new claims across warning timing conditions when
participants received a warning that some claims were true and some claims were false. In the
pre-exposure and pre-test warning condition, participants received information that some claims
are true and some are false prior to initial exposure to trivia claims and prior to test. In the pre-
test warning only condition, participants received information that some claims are true and
some are false prior to test only. Error bars are 95% confidence intervals.
[Bar graph, “Experiment 1: ‘Some’ Warning”: truth ratings (1 = definitely true; 6 = definitely false) for new and repeated claims in the pre-exposure and pre-test warning condition and the pre-test warning only condition.]
Figure 2. Mean truth ratings for repeated and new claims across warning timing conditions when participants received a warning that half of the claims were true and half were false. In the
pre-exposure and pre-test warning condition, participants received information that half of claims
are true and half of claims are false prior to initial exposure and prior to test. In the pre-test
warning only condition, participants received information that half of claims are true and half are
false prior to test only. Error bars are 95% confidence intervals.
[Bar graph, “Experiment 1: ‘Half’ Warning”: truth ratings (1 = definitely true; 6 = definitely false) for new and repeated claims in the pre-exposure and pre-test warning condition and the pre-test warning only condition.]
Figure 3. Mean truth ratings across warning conditions for new and repeated claims.
Participants either received a warning about truth prior to exposure and test, prior to test only, or
did not receive a warning. Error bars are 95% confidence intervals.
[Bar graph, “Experiment 2: No Warning Condition”: truth ratings (1 = definitely true; 6 = definitely false) for new and repeated claims in the pre-test and pre-exposure warning, pre-test warning, and no warning conditions.]
Figure 4. Mean truth ratings across warning conditions for new and repeated claims after a three
to five day delay. Participants either received a warning prior to exposure only or no warning.
Error bars are 95% confidence intervals.
[Bar graph, “Experiment 3: Delay Condition”: truth ratings (1 = definitely true; 6 = definitely false) for new and repeated claims in the pre-exposure warning only and no warning conditions.]
Figure 5. Effect sizes (d unbiased) for the truth effect across all experiments and warning timing
conditions.
[Forest plot, “Effect Sizes Across Studies”: Cohen’s d (unbiased) and 95% CI for each experiment within the pre-exposure warning, pre-test warning only, and no warnings subgroups, plus subgroup and overall estimates.]