Beyond Active and Passive Social Media Use:
Habit Mechanisms Are Behind Frequent Posting and Scrolling on Twitter/X
by
Ian Axel Anderson
A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(PSYCHOLOGY)
May 2024
Copyright 2024 Ian Axel Anderson
THE ANATOMY OF TWITTER HABITS
ii
Acknowledgments
I would like to first thank my advisor, Wendy Wood, for her guidance and support
throughout the creation of this dissertation. I would also like to thank Norbert Schwarz, Leor
Hackel, Nathanael Fast, and Morteza Dehghani for their comments and participation on my
dissertation proposal and qualifying exam committee, which helped this manuscript develop into
a coherent whole. I want to thank Zhao Chang for designing and assisting with the data scraping
process for Study 1. I would also like to thank my lab managers of the USC habit lab, Emmy Li
and Begum Babur, for their help in organizing the data collection process and supporting the
coding of all the video data for Study 2. I would like to thank all the research assistants involved
in coding the data in Study 2: Ege Baysak, Lina Chen, Bart Chu, Logan Forester, Destinee
Handly, Emily Henriquez, Alison Jensen, Sarah Lee, Yitong Liu, Michelle Mu, Ellen Ong,
Menghan Qu, Qadira Raifman-Hadiz, Veronika Sidorova, and Celine Wen. I would also like to
thank my family, friends, and partner for their support throughout the writing process of this
dissertation. Correspondence should be addressed to Ian Anderson, University of Southern
California, Department of Psychology, 3551 Trousdale Pkwy, Los Angeles, CA 90089. Email:
ianaxelanderson@gmail.com
Table of Contents
Acknowledgments
List of Tables
List of Figures
Abstract
Chapter 1: Literature Review, The Anatomy of Twitter Habits
  Why Do We Use Social Media so Frequently?
  Motivation and Goals Build Social Media Habits
  Why Habits are Important for Understanding Social Media
  Reward Prediction Errors, Intermittent Rewards, and Habit Formation
Chapter 2 (Study 1): Tweeting Habits
  Method
  Results
  Discussion
Chapter 3 (Study 2): Twitter Scrolling
  Method
  Results
  Discussion
Chapter 4: General Discussion
  Limitations and Future Directions
  Conclusion
References
Appendices
  Appendix A: Supplementary Materials for Study 1
  Appendix B: Supplementary Materials for Study 2
  Appendix C: Coding Instructions for Study 2
List of Tables
Table 1: Means, Standard Deviations, and Correlations: Study 1
Table 2: Multilevel Model Predicting Latency to Post Again from Prior Tweeting Frequency and RPE Reactions: Study 1
Table 3: Twitter Scroll Rewardingness Means and SD from Study 2
Table 4: Step-by-Step Twitter Scroll Construction Process from Study 2
Table 5: Means, Standard Deviations, and Correlations Between Participant-level Variables of Interest: Study 2
Table 6: Simple Linear Regression Model Predicting Average Dwell Time from Reward Variance and Habit Strength, Controlling for Scroll Indicator: Study 2
Table 7: Multilevel Model Predicting Dwell Time from Reward Variance and SRBAI, Controlling for Scroll Indicator: Study 2
Table 8: Multilevel Model Predicting Tweet Dwell Time from Habit Strength and Reward Prediction Errors, Controlling for Scroll Indicator: Study 2
Table 9: Multilevel Model Predicting Scrolling to a Subsequent Tweet from Habit Strength and Reward Prediction Errors, Controlling for Scroll Indicator: Study 2
Table 10: Generalized Linear Multilevel Model (GLMM) Predicting Dwelling on a Tweet from Prior Scrolling Frequency and Total Reactions, Controlling for Scroll Indicator: Study 2
List of Figures
Figure 1: Plot of the Interaction in Study 1 Between Prior Tweeting Frequency and Reward Prediction Errors Predicting Latency to Post Again
Figure 2: Study 2 Diagram of the Experimental Procedure
Figure 3: Plot of the Interaction in Study 2 Between Habit Strength (Prior Scrolling Frequency) and Aggregate Reward Prediction Errors, Predicting Average Dwell Time in a Multilevel Model
Abstract
This dissertation examined the motivations behind scrolling and posting behavior on
Twitter (now X). In particular, I tested whether these passive and active forms of social media
use become habitual and insensitive to the various rewards provided by Twitter/X. Habitual,
frequent social media use should build posting habits that are automatically cued by contexts,
whereas new or occasional users should be driven by rewarding outcomes, especially rewards
greater than expected (Anderson & Wood, 2021; 2023). The first study (N = 400) used a novel
scraping procedure to track users’ postings for two weeks through the Twitter API. As predicted,
highly frequent posters continued to post regardless of the social rewards they received from
others. Reward impact was assessed through reward prediction error (RPE), or the quantity of
rewards on the last post compared with typical experience. In contrast, less frequent posters
changed their posting rates given larger RPEs (i.e., were reward-responsive). A second study (N
= 320) used an experimental design to test scrolling of curated sets of content. As predicted,
habitual scrollers were relatively unaffected by the rewardingness, or interest value, of the tweets
scrolled (calculated as RPEs). In contrast, less habitual and less frequent Twitter/X scrollers were
more likely to dwell on a tweet with especially rewarding content. In sum, the repeated use of
social media makes habits a fundamental mechanism for understanding the motivations that
drive active and passive site use.
Chapter 1: Literature Review, The Anatomy of Twitter Habits
The first thing nearly 80% of smartphone users do in the morning is to check the newest
updates or notifications that have appeared on their smartphones overnight (Levitas, 2019).
These updates come from a multitude of smartphone applications, but some of the most frequent
come from large-scale social media apps, including Facebook, Instagram, TikTok, and Twitter
(now known as X). In this dissertation, I will focus on two behaviors common across all of the
aforementioned large-scale social media platforms: posting (Study 1) and scrolling (Study 2).
For the first study, I collected a large dataset from Twitter/X during 2021. At the time of
Study 1 data collection, Twitter was a micro-blogging (short form) content sharing platform with
one of the largest user bases among social media sites (25% of US adults reported having an
account). Twitter/X also had a reputation for breaking and making news, with the largest
percentage of its own users (55%) self-reporting that they used it to get news, above Facebook
(47%) and all other social media sites at the time (Pew Research Center, 2021; Techcrunch,
2021). Twitter/X also allowed close observational study because of its relatively open data
policies prior to 2023. These policies allowed my first study to use the streaming API to monitor
users’ posts. According to the Psychology of Technology Institute, which administers and
releases results from a representative survey of social media users in the US that began in 2023
(the Neely Social Media Index), Twitter/X’s user base dropped during our data collection period,
now counting around 16% of US adults as monthly active users as of September 2023 (Motyl,
2023). Despite some evidence of waning cultural influence, Twitter/X remained an active part of
the social media ecosystem throughout the data collection for the second study in this
dissertation, which occurred from September 2022 to May 2023. This second study also
monitored user behavior through live observation but used a novel experimental method
designed to manipulate the rewardingness of tweets in a scroll.
The core question of this research is whether both the active posting and the more passive
scrolling of social media are driven by habit motivational models (Anderson & Wood, 2021;
Bayer et al., 2022; Wood et al., 2021). Given that these forms of social media use are repeated
frequently, they could become habitual and triggered automatically by site cues. The implication
is that once formed, habits are repeated with minimal input from behavior outcomes such as the
rewards received from other users or from consuming content. I predicted that both active and
passive behavior on X will be more reward-responsive for non-habitual (new or occasional)
behaviors and less reward-responsive for habitual behaviors (Anderson & Wood, 2021, 2023;
Ceylan et al., 2023). Rewards drive performance, especially when they are greater or smaller than
would be expected given past experience (i.e., reward prediction error, or RPE). To
test these predictions, I present two studies that test whether habitual posting (Study 1) and
scrolling (Study 2) are less responsive to RPEs than new or occasional use. Finally, I discuss
how the present motivational model of habits applies broadly to information sharing and
consumption.
Why Do We Use Social Media so Frequently?
Habits, which reflect learned context-response associations in memory, develop as
people repeat a rewarded action in a stable context (Amodio & Ratner, 2011). Social media use
is especially likely to become habitual because it is repeated frequently in specific ways over
time. Once habits form, they are triggered automatically upon perception of associated context
cues, and people may act on the behavior in mind through ideomotor processes (James, 1890).
As a result, frequent social media users open social apps automatically and browse apps before
they even realize they saw the app’s icon or a notification on their smartphone screen
(Schnauber-Stockmann & Naab, 2019).
Habit formation is core to social media’s business model. Users who post and scroll
frequently or habitually are more likely to spend time on the platforms. They are thereby exposed
to paid advertisements and generate revenue (Anderson & Wood, 2021). Given this high
frequency of use, social media provides a rich context for studying the formation and
consequences of habits, along with potential drivers of media use (Bayer & LaRose, 2018; Bayer
et al., 2022).
Much of the existing research on social media has focused on user motivations and goals,
which are useful to describe because they highlight the reasons that people initially use apps and
thus initiate the process of habit formation and repeatedly post and scroll. In this dissertation, I
focus on specific behaviors within social media beyond social media use as a general construct
(Bayer et al., 2022). In addition, existing literature has begun to categorize posting and scrolling
into active social media use and passive social media use, respectively, in order to highlight their
divergent psychological impacts on users’ well-being (Verduyn et al., 2017; but see Valkenburg
et al., 2023). Although this dissertation is not focused on well-being, the different psychological
consequences of these two behaviors could suggest different motivating processes. In contrast, a
habit perspective would anticipate that both posting (active use) and scrolling (passive use),
when repeated sufficiently, will be guided by common habit-learning mechanisms.
Why We Post and Share Information on Social Media
Posting and sharing content with others on social media platforms is often tied to
increased well-being (Bayer et al., 2018; Verduyn et al., 2017; Valkenburg et al., 2023), greater
positive affect (Verduyn et al., 2015) and reduced loneliness (Deters & Mehl, 2012). In addition,
by posting online, people fulfill desires to share moral-emotional content (Brady et al., 2017;
Burton et al., 2021; Atari et al., 2022) and moral outrage (Brady et al., 2021), express emotions
(Buechel & Berger, 2012), and reach large audiences (Barasch & Berger, 2014). Others, while
chasing large audiences that will give them more social rewards in the form of likes, shares, or
followers, even perpetuate falsehoods (Vosoughi et al., 2018) or re-share misinformation (Ceylan
et al., 2023). Furthermore, the lack of a physically present audience reduces potential social risks
of disclosure (Berger, 2013). Although self-disclosing information about oneself, particularly to
friends, can be intrinsically rewarding and valuable on its own (Tamir & Mitchell, 2012), the
social media context provides additional rewards in the form of others’ reactions. In prior
research, receiving likes on social media was experienced as a form of social endorsement that
produced neural signatures of reward processing (Sherman et al., 2016; Sherman et al., 2018).
Others’ likes and comments have even been shown in computational models to increase the
frequency of subsequent posts (Lindstrom et al., 2021).
Why We Scroll and Consume Information on Social Media
Users self-report that they consume or scroll content online for several different reasons.
These include upward or downward social comparisons to others (Chan & Briers, 2019), feeling
like they are informed just by looking at headlines (Muller et al., 2016), and many different
emotions experienced while scrolling (e.g., anger from disagreement with a post, outrage about a
post; Anderson & Wood, 2021). Furthermore, when this content is positive, it can have positive
or neutral impacts on well-being outcomes, whereas negative content can have a negative impact
on well-being, even if it does inform users about potential future threats (Buchanan et al., 2021).
Research has also shown associations between passive social media use (including scrolling) and
specific depression symptoms, if not depression itself (Aalbers et al., 2019).
Algorithms in social media are designed to feed users content they enjoy, or at least pay
attention to. What the algorithmic feeds show to users is often based on a multitude of signals of
users’ attention or revealed preferences through likes, comments, and other engagement
(Narayanan, 2023; Morewedge et al., 2023). Many user behaviors, including scrolling, are
motivated or triggered by platform design that appeals to users’ social learning systems (Brady et
al., 2021; Anderson & Wood, 2023; Brady & Crockett, 2023). Given the social rewards
associated with “PRestigious, Ingroup, Moral, and Emotional” (PRIME) content, it is especially
likely to be amplified and proliferated by algorithms sensitive to user signals of attention (Brady
et al., 2021; Anderson & Wood, 2023; Brady & Crockett, 2023). The crux of this theory
regarding PRIME information is that users scroll in accordance with the content they find most
socially rewarding. This dissertation tested whether non-habitual Twitter/X users scroll in
accordance with what they find rewarding and whether this is also true for users who scroll more
habitually.
Rewards, Posting, and Scrolling
As outlined in the above review, social rewards for posting online come in the form of
reactions from other users. These reactions are embedded into the Twitter/X platform interface as
likes, retweets, and replies. Quote retweets and replies both involve commentary from other
users, which can be either positive or negative in valence. Thus, the only reactions that are
clearly socially rewarding are direct retweets (reposts without commentary) and likes. Although
these may not capture every possible dimension of reward (as noted above, self-disclosure can
act as a reward on its own), they are likely good representations of the rewardingness of a post.
Thus, quantities of others’ reactions to a post are used in Study 1 to represent post rewardingness
(see Lindstrom et al., 2021; Brady et al., 2021; Anderson & Wood, 2023).
Common among all these reactions to posts is that they require another user’s response in
order to constitute a social reward. In contrast, the social rewards for scrolling are not embedded
into the fabric of social media platforms in the same manner. These rewards are more subjective
and depend on each user’s perception of the content they consume. In this way, the social
rewards for scrolling are determined by users’ own experiences rather than by other users’
behavior. Thus, Study 2 used survey data about users’ experiences of each post scrolled to
determine a post’s rewardingness.
Motivation and Goals Build Social Media Habits
The differing psychological motivators and consequences of posting and scrolling
behavior might lead one to presume that they are also promoted by different psychological
processes. Contrary to this possibility, this dissertation tests whether the repeated use of social
media engages a fundamental, habit-learning process that results in reward insensitivity among
habitual posters and scrollers. This follows from prior research on Facebook and Instagram
posting behavior, which provided evidence that receiving likes and comments on one's own posts
repeatedly over time made high-frequency and more habitual posters less sensitive to these
rewards, indicating the existence of habits to post and share information (Anderson & Wood,
2023).
Further supporting evidence for outcome insensitivity in habit performance on social
media has emerged from the study of re-sharing misinformation on social platforms (Ceylan et
al., 2023). Across three studies, users were presented with true and false news headlines that they
could share (or not) in a simulated social media context. The most habitual sharers were
responsible for sharing an outsized proportion of the false news headlines. They also shared with
less regard for motivation-based interventions that reminded them in advance of the accuracy
(Study 2) and political leaning (Study 3) of the headlines. The final study (Study 4) showed that
platform design, especially the reward structure on social media, was responsible for the high
levels of misinformation shared. That is, after users were given novel rewards for sharing
accurate information, they persisted in doing so even when the rewards were no longer given. In
addition to suggesting a path forward for solving social media’s well-documented
misinformation issues, this result demonstrated that rewards within the social media platform
dictate the sorts of sharing habits that social media users develop (Brady et al., 2021; Anderson
& Wood, 2023; McLoughlin & Brady, in press).
In addition to reward and outcome insensitivity, habitual social media posters have been
shown to be sensitive to cues in the platform environment. Prior literature has also shown that
high-frequency and more habitual posters were disrupted by platform changes, which shifted the
cues that drive habit performance (Anderson & Wood, 2021, 2023). Thus, cues proved important
for activating habitual, frequent posting on Facebook and Instagram, while rewards were
important for less habitual and less frequent posters. This dissertation tested whether the
importance of rewards also dissipates in more habitual and more frequent Twitter/X posters and
scrollers.
Further supporting the idea that social media is automatically cued separately from
motivations, frequent users appear especially likely to respond with little thought online. For
example, Vishwanath (2015) found that self-reported habitual Facebook users were highly
susceptible to phishing attacks via Facebook’s messenger, responding automatically to phishing
messages even when they were highly motivated by a concern for privacy. A second study on
social media and Facebook phishing (Vishwanath, 2017) suggested that habitual users were
more likely to respond automatically when using a smartphone to access social media and when
the analysis accounted for whether participants paid attention to cues on the phisher’s social
media profile (the presence of a profile photo and the size of the “friend” count). This
research suggests that, as with other habits, frequent users repeat past actions in ways that are not
necessarily consistent with their current goals (Wood, 2017; Wood & Neal, 2007).
Finally, automatic or habitual behaviors can be interpreted by their performers as goal-directed or motivated. Evidence for social media habits was first noted in research conducted in tandem
with self-reported measures of motivation. For example, people who self-identified as habitual
users were found to have higher past use frequency and greater satisfaction with social media
(Hu et al., 2018). Furthermore, disentangling possible relationships between users’ perceived
motivations for social media use and their actual triggers has produced contradictions in prior
research. For example, one study found both that (a) users’ sense of belonging to a social media
platform’s community positively influences the strength of their self-reported usage habit and (b)
that users’ sense of belonging attenuated the positive relationship between users’ frequency of
past social media platform use and their self-reported usage habits (Liu et al., 2018). This
interplay between users’ perceived motivations for social media and users’ actual triggers for
their behavior was an inspiration for past work showing that high-frequency posters infer that
they are responding to rewards and overlook automatic processes such as habit
(Anderson & Wood, 2023).
Why Habits are Important for Understanding Social Media
The emerging research on social media habits makes it clear that understanding the
science of habit learning is central to fully understanding how frequent social media use develops
and the consequences this frequent use may have for other online behaviors (e.g., getting
phished, misinformation sharing, spreading moral outrage). In this dissertation, I extended this
prior research on social media habits into a novel test of whether two separate behaviors (posting
and scrolling) within the same social media platform each display habit reward insensitivity
among the most habitual posters and scrollers. Habit insensitivity has not yet been demonstrated
across multiple behaviors within the same social media platform, and to my knowledge,
information consumption (scrolling) on social media has not yet been observed in a paradigm
that tests participants’ reward responsiveness. Despite the different rewards and motivations that
initiate posting and scrolling behavior and the differences noted above in the consequences of
active and passive use, I propose that both behaviors can be understood in terms of the
fundamental psychological process of habit formation.
Whether we post, scroll, and otherwise use social media in a motivated or more automatic
and habitual manner is important because of the implications for the design of online social
platforms, responsibility for problems that arise with these platforms, and approaches to fix or
mitigate problems. For example, understanding when aspects of social media use are habitual or
motivated by goals and rewards directly impacts our understanding of how to change user
behavior. A purely motivational account of user behavior would posit that users who are
scrolling or posting more than they like could simply modify their goals to change their behavior
(de Houwer et al., 2018). A habit-based account would suggest that a goal-based approach to
behavior change may work for the less frequent or non-habitual users but that the most frequent,
habitual users would need structural changes to the environment (in this case, the social
platform itself) to disrupt existing habits, such as the platform changes examined in Anderson
& Wood (2021, 2023).
Another broader theoretical contribution of this dissertation is to integrate existing,
individual-focused (i-frame) approaches to understanding social media use with a focus on the
surrounding environment context and the structures set up by the apps (s-frame) within which all
users operate (Chater & Loewenstein, 2022). Recognizing the important role of habits and the
context in which behaviors are performed, computer scientists who partnered with Snapchat
determined that user behavior is better predicted by models that incorporate aspects of users’
routines than models of user behavior that do not include routine-related context (Peters et al.,
2023, arXiv preprint).
This dissertation also furthers social media research with tests for reward insensitivity
effects in two novel datasets: a streaming dataset of users’ posts on Twitter/X and a screen-recorded dataset of scrolling behavior. Both follow recent research using live observation rather
than historical data to study user responses to rewards (Anderson & Wood, 2023). Historical data
has the advantage of looking at the entirety of a given user’s behavior, but the quantities of
reactions or rewards on each post collected in this type of scrape are the quantities at the time of
the scrape, which can be months or even years after some of the posts occurred. Streaming data
allows the researcher to observe user behavior live and thus accurately pick up how many
reactions were received by each post at the precise time when a subsequent post was made. This
is important for capturing the causal effects of rewards and is more precise than historical data.
In the case of scrolling behavior, we also used live observation of users in order to determine
precise sequences of behavior within the scroll, which are more difficult to pick up without using
a screen-recording method. By recording scrolling sessions, I was able to determine when a
given scrolling behavior occurred at the level of tenths of a second. These procedures each
constitute an advancement in the precision of tests of the causal links between user behaviors and
rewards.
Reward Prediction Errors, Intermittent Rewards, and Habit Formation
To precisely estimate the impact of rewards on user behavior, both of these studies use
reward prediction errors (RPE) to understand the way rewards influence social media use. The
first studies of RPE involved dopamine neurons in rhesus monkeys, specifically how monkeys
responded to expected and unexpected rewards when they received a reward cue with and
without the reward (Schultz, 1998). Dopamine was released for the cue alone, even absent
reward, suggesting that dopamine and neural activation track the prediction or anticipation
of rewards more than the actual reward itself. RPE measures of dopamine neurons in human
beings have revealed how individuals update expectations based on different prior reward
experiences and based on cues that suggest forthcoming rewards (Cooper et al., 2014). Social
neuroscientists have likewise found that neural activity in the ventral striatum correlates with
reward prediction errors, signals that anticipate the rewards of a behavior and thereby help
reinforce that behavior as it is learned (Garrison et al., 2013; Hackel & Amodio, 2018).
Assuming that people learn behaviors based on expectations of rewards similar to
Schultz’s rhesus monkeys, RPE provides a picture of human social behavior that can be applied
to Twitter/X. This connection between Twitter user behavior and RPE was first demonstrated
computationally by Brady et al. (2021), who found that positive reward prediction errors
calculated from social feedback (likes, retweets) on tweets expressing moral outrage were
correlated with users’ subsequent expression of moral outrage. Thus, getting more rewards than
expected for posts expressing moral outrage made users more likely to express moral outrage
again in future posts. The present research further tests the relationship between RPE, users’
prior posting and scrolling habits, and users’ current posting and scrolling behavior.
RPE is typically calculated as the difference between experienced and expected rewards
over time. Expected rewards are the rewards anticipated prior to an experience, and experienced
rewards are how that experience feels when it happens (Hackel & Amodio, 2018). RPE reflects
relative deviations in the number of rewards received during an activity. This is useful for
answering our central question about whether habit insensitivity to rewards persists across
posting and scrolling activities.
The importance of using reward measures that reflect users' relative experiences rather
than absolute quantities of rewards can be illustrated by comparing users who typically receive
about 500 reactions on each post with others who receive 10 reactions or fewer. A post earning
495 reactions would feel average to the first user but would be a remarkable quantity for
someone who typically receives 10. Prior research on social media posting calculated
RPE by person-mean centering reactions (likes, retweets) – and thereby measured deviations of
reactions to a given post from each user’s average number of reactions (Lindstrom et al., 2021;
Brady et al., 2021; Anderson & Wood, 2023).
The present studies also estimated RPE on each post for each user in both sets of data. In
Study 1, this was done by taking the number of reactions on the most recent post (experienced
rewards) minus the mean number of reactions on all the user's previous posts (expected
rewards). This process effectively lagged the expected reward value as the mean rewards
received on prior tweets to accurately estimate how rewarding each like or retweet would have
felt to each participant in relative terms at the time of making a post.
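The lagged RPE calculation described above can be sketched as follows. This is an illustrative reimplementation rather than the study's actual analysis code, and the function and variable names are my own:

```python
def lagged_rpes(reaction_counts):
    """Compute a reward prediction error (RPE) for each post after the first.

    For post i (i >= 2), the expected reward is the mean number of
    reactions on all of the user's prior posts, and the experienced
    reward is the reactions on post i itself; RPE = experienced - expected.
    """
    rpes = []
    running_sum = 0
    for i, reactions in enumerate(reaction_counts):
        if i > 0:
            expected = running_sum / i          # mean of posts 1..i (lagged)
            rpes.append(reactions - expected)   # experienced minus expected
        running_sum += reactions
    return rpes

# A user whose three posts received 10, 20, and 0 reactions:
# post 2: RPE = 20 - 10 = 10.0 (better than expected)
# post 3: RPE = 0 - 15 = -15.0 (worse than expected)
print(lagged_rpes([10, 20, 0]))  # [10.0, -15.0]
```

Because the expectation uses only posts made before the focal post, the measure captures how rewarding each reaction would have felt at the time of posting.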
In Study 2, I created specially curated Twitter/X feeds in order to carefully control
rewards, provide a standardized experience, prevent self-selection into different scroll
experiences, and examine precisely how different users responded to the same reward schedules
even when the content differed on dimensions other than reward value. This careful control also
allowed me to survey participants' own perceptions of the rewardingness of the
content they saw in the scroll to construct RPEs. RPEs were calculated by taking the rated
rewardingness of the most recent post (experienced rewards) minus the rewardingness of
previous posts scrolled (expected rewards). Calculating the RPEs in this way effectively lagged
the expected reward value as the mean rewards received on prior tweets in order to accurately
estimate how rewarding seeing a post felt compared with prior posts. In this way, the measures
of RPE in posting and scrolling accounted for the psychological experience of each user,
adjusting posts’ rewardingness to what each user should find relatively more or less
psychologically rewarding.
Intermittent Reward Manipulation in Study 2
Study 2’s procedure additionally allowed for experimental manipulation of rewards
directly on the Twitter/X platform, which has not been done in the context of newsfeeds.
Newsfeeds aggregate the content posted by other users (e.g., friends, people one follows, highly
popular content) on the platform for easier consumption. These feeds are often the de facto
homepage of social media sites. Scrolls or feeds are less studied in the current social media
literature using observational methods, likely because of the difficulty controlling what users see
on the actual social media platforms. Also, mockups are time-consuming and expensive to
create. These seemingly endless feeds (called the infinite scroll) contain both strongly rewarding
and weakly rewarding information. This mix of information may be especially effective at
building strong user habits to keep scrolling because these rewards are similar to an intermittent
reward schedule (DeRusso et al., 2010).
In addition, the Twitter/X scroll, with its intermittent rewards, offers a uniquely
fruitful context in which to study reward schedules and their impact on habitual behavior
(Vishwanath, 2016; Vorderer & Kohring, 2013). This is because Twitter/X users view the many
posts generated daily by others they follow (friends, acquaintances, news organizations,
celebrities, or people who share their niche interests) in a context with no endpoint or set
stopping rules. This infinite scroll has been much maligned in popular media and
even generated an apology from its original creator because it encourages excessive (and
potentially habitual) scrolling (Knowles, 2020). In other words, both users and designers seem to
be aware that their scrolling behavior is often mindless or automatic and does not necessarily
align with their own usage goals. The custom scrolls in Study 2 allow a test of the relationship
between prior scrolling habits and user behavior with pre-rated sequences of posts that were
designed to be equally rewarding while still varying in intermittency. Low intermittency is
signified by a consistently rewarding sequence of tweets, and high intermittency is signified by a
more variable sequence.
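This operational definition can be made concrete with two hypothetical rating sequences. The numbers below are invented for illustration (not the pretest ratings from Study 2): both scrolls deliver the same average reward, but their standard deviations differ, which is the index of intermittency used here.

```python
from statistics import mean, stdev

# Hypothetical per-tweet interestingness ratings for two 10-tweet scrolls.
consistent_scroll = [4, 4, 5, 4, 4, 5, 4, 4, 5, 4]    # low intermittency
intermittent_scroll = [7, 1, 7, 2, 7, 1, 6, 2, 7, 3]  # high intermittency

# Both sequences are equally rewarding on average...
print(mean(consistent_scroll), mean(intermittent_scroll))    # 4.3 4.3

# ...but differ sharply in reward variance (standard deviation),
# the continuous measure of intermittency.
print(stdev(consistent_scroll) < stdev(intermittent_scroll))  # True
```

Low-variance sequences correspond to consistent reward schedules and high-variance sequences to intermittent ones.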
Outside the social media context, intermittent reward schedules have been shown to
create strong habits because they maximize users’ RPE (Arad et al., 2023; Maki et al., 2012;
Skinner, 1953). By decreasing users’ expected reward in real time, less rewarding posts could
make the experience of a subsequent rewarding post even more rewarding because they are
unexpected and likely generate stronger neurological signals of surprise in a distinct region of the
medial temporal lobe (Murty et al., 2016). Conversely, many rewarding stimuli encountered one
after another can reduce the overall rewardingness of each new stimulus and modulate medial
temporal lobe connectivity and sensitivity (Murty et al., 2016; Cockburn et al., 2022).
Because each social media post in the scroll constitutes a potential reward, scrolling can
be construed as analogous to a slot machine or Skinner box: each pull of the lever (scrolling
down or posting again) offers another potential reward. The application of
intermittent reward schedules to social media habits is supported by prior theory on mobile social
media use, which suggested that intermittent and unexpected rewards help to build users’ mobile
app habits (Bayer & LaRose, 2018; Schnauber-Stockmann & Naab, 2019).
In theory, RPE-maximizing intermittent schedules should also be the best for building
users’ social media scrolling habits because these create more positive reward prediction errors
(by adjusting reward expectations and giving more unexpected rewards) than consistent rewards.
In computational models of habit systems, greater RPE over time creates more rapid stimulus
responses (Perez & Dickinson, 2020). RPEs are summed over time to measure an individual’s
habit strength, and higher amounts of summed RPE indicate a greater likelihood of activating a
habitual response instead of the goal-system response (which is reward and outcome-based).
Thus, intermittent rewards can cause habit strength to increase more rapidly through summed
RPE, indicating that scrolling habits form more rapidly when posts are more intermittently
rewarding (Perez & Dickinson, 2020). In addition, RPEs have less effect on behavior
performance as habit strength increases. As a habit is formed, users’ expected and experienced
rewards begin to converge over time, as habitual users have sufficient past experience to
establish accurate expectations of rewards during scrolling. Furthermore, their behaviors,
through habit, are far more tied to cues in the environment. Thus, I anticipated that in the single
observed scrolling session, the effect of intermittency would differ based on habit strength in that
habitual scrollers would be less responsive to intermittency and RPEs.
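The summed-RPE logic can be illustrated with a minimal delta-rule simulation. This is my own sketch, not the Perez and Dickinson (2020) model itself; the learning rate, the schedules, and the choice to sum only positive prediction errors are illustrative assumptions:

```python
def summed_positive_rpe(rewards, alpha=0.1):
    """Delta-rule value learning: sum the positive RPEs across a reward schedule."""
    expected = 0.0
    total = 0.0
    for r in rewards:
        rpe = r - expected             # prediction error on this trial
        if rpe > 0:
            total += rpe               # only positive surprises accumulate
        expected += alpha * rpe        # update expectation toward experience
    return total

n = 200
consistent = [1.0] * n                 # every post equally rewarding
intermittent = [2.0, 0.0] * (n // 2)   # same total reward, delivered intermittently

# Both schedules deliver identical total reward, but expectations quickly
# converge under the consistent schedule, while the intermittent schedule
# keeps generating positive surprises, so its summed positive RPE is larger.
print(summed_positive_rpe(consistent) < summed_positive_rpe(intermittent))  # True
```

Under the consistent schedule the prediction errors decay geometrically toward zero, whereas under the intermittent schedule every rewarded trial remains surprising, matching the claim that intermittent rewards build habit strength more rapidly through summed RPE.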
The Present Research
Social media relies on the twin pillars of the infinite scroll design and users’ repeated
posting behavior, which together generate content to keep the infinite scroll full of rewarding
posts and encourage users to spend time on their feeds. I test whether the mechanisms of these
core user behaviors on social media are similar, even if the behaviors may be psychologically
different (i.e., more active/passive).
For these tests, we measured user behavior in a multitude of different ways as our
dependent variables. In Study 1, we focused solely on the amount of time between each post at
the tweet level. For Study 2, our rich observational process allowed us to measure scrolling
behavior both at the participant level in terms of (a) average tweet dwell time, (b) total tweets
scrolled, and (c) total scrolling time and at the tweet level in terms of (d) tweet dwell time, and
(e) whether a user scrolled to the next tweet. We measure many different dependent variables
associated with behavior because while users all exhibit similar reward insensitivity, users may
also habituate to slightly different patterns of behavior. In posting, users may develop more
automatic patterns of retweeting content directly and/or when posting original content. We
capture both by analyzing data that is inclusive of both types of posts. In scrolling, users may
develop patterns that involve scrolling very rapidly and efficiently, ones that involve deeply
reading tweets before moving on, or a mix of the two. Our dependent variables of total tweets
scrolled and total scrolling time would capture each of these behaviors, as one would result in
getting further down the scroll while the other would result in more time spent. Average dwell
time captures both, as it looks at time spent per tweet scrolled. In our tweet level measures, dwell
time captures a similar measure, while going to the next tweet captures scrolling length. We use
these measures to account for a wide array of potentially habitual patterns in scrolling behavior.
Chapter 2 (Study 1): Tweeting Habits
To examine users’ tweeting habits and their relationship to RPEs, I recorded the
timestamps of users’ posts and the number of likes and direct retweets received via Twitter’s
streaming API. Specifically, the present study examined models predicting latency between
posts, or posting rate, from an interaction between prior tweeting frequency (as an indicator of
habit strength, Anderson & Wood, 2021, 2023) and RPE (constructed from observed numbers of
likes and direct retweets of users’ posts). I pre-registered the Study 1 hypothesis with OSF
(https://osf.io/6xu3w).
Hypothesis
H1 (preregistered): If more frequent tweeters post out of habit, the amount of time
between their posts (latency) will be less influenced by the valence and quantity of reactions
received from others than less frequent tweeters.
The predicted effect is a significant interaction between prior tweeting frequency and
RPE that reflects (a) little influence of rewards (RPEs) on posting rates (latency) for high-volume
tweeters and (b) greater influence of rewards (RPEs) on posting rates of low-volume tweeters.
We did not pre-register a predicted direction of the reward effect, as we considered that part of
the analysis exploratory. Although prior literature suggests that greater rewards reduce latency
between posts (Lindstrom et al., 2021; Anderson & Wood, 2023), the pre-registered predictions
simply anticipated that rewards increase or decrease the posting of new or occasional users more
than frequent users. The present study produced an unexpected reward effect on latency among
less frequent tweeters, which is discussed thoroughly below. In addition, the analysis was preregistered in March 2020, before it became clear that RPEs provide a precise way to capture
reward effects (Perez & Dickinson, 2020). Thus, instead of using raw quantities of positive
reactions as originally pre-registered, I tested reward effects in terms of reward prediction errors.
Method
Twitter accounts included in the sample were chosen by a web-scraping program
designed to ping Twitter’s API every 20 minutes for new tweets and reward counts from old
tweets. This scraping program collected tweet data from 400 randomly selected Twitter handles
(@username) within a pre-set group of Twitter accounts for live monitoring during a week-long
study period.
To identify the appropriate sample size, I conducted analyses using the simr package in R
(Version 1.04; Green & MacLeod, 2016) to test whether the proposed model was sufficient to
detect a small to moderate effect (Cohen's f² = .15) of habit strength and prior tweet reactions on
between-tweet latency. Results of the power simulations revealed that a minimum sample size of
44 participants and 1,760 posts was necessary to obtain .80 power for the predicted interaction
term. A detailed description of the simulations can be found in Appendix A under Power
Simulation and is plotted in Figure A-1.
The users were selected from a subset of active (tweeting at least 2x per week) followers
of President Donald Trump (wave 2) and President Barack Obama (wave 1). Observations were
collected in 2 separate week-long waves (due to scraping constraints set by Twitter’s API that
only allowed us to collect tweets from 200 users at once) from N = 400 unique user accounts.
The scraper program gathered data on reaction numbers, reply and retweet numbers, post timing,
and changes in reactions between each tweet.
The final dataset consisted of 270 users who posted 25,355 unique tweets during the
observation period. Thirty-six users were not active during the span of the study, and an
additional 94 users were removed due to a high likelihood that the accounts were automated
(“bots”) rather than individual human users or other types of managed accounts (those with blue
“verified” check-marks). A pre-registered cutoff of 35 tweets plus retweets per day screened out
such users. This cutoff makes this study a conservative test of user posting behavior, as removing
this ultra-high-frequency posting group might also have removed some extremely habitual users.
Furthermore, analyses were conducted with and without direct retweets (reposts without
comment of other users’ tweets) included, and the results remained equivalent to those reported
in the text. The results presented in the body of the paper include direct retweets, with the
models based on original content only included in Appendix A.
Measures
Prior Tweeting Frequency. The scraping tool measured the average number of tweets
per day posted by each user over the month prior to the study observation period. I relied on this
variable as a measure of habit strength. In support, prior research on Facebook showed a positive
correlation between past posting frequency and experienced habit strength among Facebook
users (Anderson & Wood, 2023).
Reward Prediction Errors. The scraping tool gathered reaction count data on each
tweet made by each user. Likes and direct retweets from other users were tallied separately and
subsequently used to create a third variable reflecting all purely positive-valence reactions to
users’ posts. Replies were omitted from the analysis, as their valence depends on user perception
of others’ commentary (Anderson & Wood, 2023). To align analyses in this study with measures
of RPE and account for the relative numbers of rewards users receive, I examined users’ rewards
received in terms of deviations from the number of reactions they had received on prior posts.
For each tweet, RPE is equal to the experienced reward minus the expected reward. Appendix A
contains models in which these values are divided by the mean number of rewards for each user.
Expected Reward. The first tweet for each user had no expected reward value, so these
tweets were eliminated from the analyses. For each subsequent tweet (tweet 2 to tweet n),
expected reward was the mean number of rewards received on all prior tweets by that user. The
range of this measure was between 0 and 344.
Experienced Reward. Each tweet’s experienced reward value was calculated as the
number of reactions (combined likes and retweets) received on that tweet. The range of this
measure was between 0 and 839.
A second version of this measure computed expected reward as a weighted average of the
rewards on all posts prior to the focal post, applying a fixed learning rate (of .1) that
exponentially weights the more recently posted tweets more strongly, similar to the models in
Otto et al. (2016). Results of analyses with this measure are available in Appendix A, Table A-2.
These results showed essentially the same relations between habits and RPE as those reported in
Table 2.
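This recency-weighted expectation can be written as an exponentially weighted running average. The sketch below is my own illustration of that updating rule; the exact initialization and weighting used in the dissertation's models may differ:

```python
LEARNING_RATE = 0.1  # fixed learning rate, as in the second RPE measure

def weighted_rpes(rewards, alpha=LEARNING_RATE):
    """RPEs computed against an exponentially recency-weighted expected reward.

    The expectation starts at the first post's reward and then moves a
    fraction alpha toward each new reward, so recent posts carry
    exponentially more weight than older ones.
    """
    rpes = []
    expected = rewards[0]
    for r in rewards[1:]:
        rpes.append(r - expected)              # experienced minus expected
        expected += alpha * (r - expected)     # exponential update
    return rpes

# Rewards of 10, 20, 20: the expectation is 10 before the second post
# (RPE = 10), then moves to 11.0, so the third post's RPE is 9.0.
print(weighted_rpes([10, 20, 20]))  # [10, 9.0]
```

With alpha = .1, a post k steps in the past contributes with weight proportional to (1 − .1)^k, which is what "exponentially weights the more recently posted tweets more strongly" amounts to.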
Between-Tweet Latency. The primary dependent variable is between-tweet latency, or
the elapsed time in hours until the following tweet. Latency was calculated by the data scraping
tool. Given that high-frequency tweeters had shorter latencies because they tweeted more
rapidly, the data set was skewed by greater numbers of short-latency posts. Thus, latency was
log-transformed to decrease skew (as pre-registered at https://osf.io/6xu3w).
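The transformation is simply the natural logarithm of latency in hours; the values below are invented to illustrate how it compresses the long right tail:

```python
import math

# Hypothetical between-tweet latencies in hours: many short, one very long.
latencies = [0.1, 0.2, 0.5, 1.0, 2.0, 48.0]

# Log-transforming preserves the ordering of latencies while pulling in
# the extreme values, reducing the right skew of the distribution.
log_latencies = [math.log(x) for x in latencies]
print(log_latencies)
```

A latency of 1 hour maps to 0, shorter latencies to negative values, and the 48-hour outlier to only about 3.87, so extreme posts no longer dominate the scale.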
Results
Correlations, means, and standard deviations appear in Table 1. Between-tweet latency
was negatively correlated with prior tweeting frequency, reflecting that users who posted more
often also posted more rapidly on aggregate during the scraping period. The positive correlations
between prior tweeting frequency and likes, retweets, and total positive reactions indicated that
more frequent posters received more reactions on each post on average. RPE reactions were
negatively correlated with the reaction count variables, suggesting that users who received
higher reaction quantities experienced lower
or more negative RPEs on average across their tweets. The relationship between RPE and
reaction quantity simply indicated that participants who received many reactions from others on
average tended to have negative deviations from their expected reward averages. In contrast,
those who received fewer reactions from others tended to have positive deviations from their
expected reward averages. In addition, RPE was not significantly correlated with prior tweeting
frequency, suggesting that posting more or less often prior to the observation period did not
impact the average of RPEs across a user’s observed tweets. Correlations for a version of this
dataset excluding direct retweets made by each user are available in Appendix A, Table A-1.
Table 1
Means, Standard Deviations, and Correlations: Study 1

Variable                                     M       SD      1        2        3        4        5
1. Latency                                   13.04   21.46
2. Likes (prior tweet)                        0.39    1.90   -.06
3. Retweets (prior tweet)                     0.07    0.47   -.04     .90***
4. Total positive reactions (prior tweet)     0.45    2.33   -.06     .99***   .94***
5. Prior tweeting frequency (tweets/day)      7.98    7.68   -.42***  .17**    .21***   .18**
6. RPE reactions                             -0.32    2.14    .04    -.67***  -.51***  -.69***  -.08
Note. Higher numbers reflect longer latencies to post again (in hours), the average number of
positive reactions (likes and direct retweets combined), and average numbers of likes and
retweets on the prior tweet, as well as higher prior tweeting frequency (number of daily tweets in
the past month). All means and standard deviations are non-standardized values. Correlations
were computed using the dataset mean-centered and divided by the standard deviation
(standardized) values of each measure except latency. Latency was converted to log hours. *p <
.05 **p < .01 ***p <.001.
Given that the data are structured as tweets and rewards (RPE, reactions, likes, and
retweets) over time within participants, I employed a multilevel (hierarchical) model.
Accordingly, the Level 1 equation captures the association between RPEs on a prior set of tweets
and log normalized between-tweet latency (time until the subsequent tweet). Level 2 models the
interaction between prior tweeting frequency (at the participant level) and the RPE reactions
(within-participant at the post level) predicting between-tweet latency.
(1) Latency_ij = β_0j + β_1j(RPE_(i-1)j) + ε_ij
(2a) β_0j = γ_00 + γ_01(Frequency_j) + u_0j
(2b) β_1j = γ_10 + γ_11(Frequency_j) + u_1j
As shown in Table 2, a significant interaction emerged between RPE and prior tweeting
frequency, β = -.01, 95% CI [-.01, -.005], p = .03, df = 13,654.41. As predicted, users who
tweeted more frequently continued their tweeting rate even after receiving more positive
reactions (both likes and direct retweets) from other users than would be expected or typical
based on their past quantities of rewards received (more positive RPE). Also, as predicted, users
who tweeted least frequently were more responsive to RPE in that they decreased their posting
rate (i.e., increased time until their next post) as RPE increased (see Figure 1).
Table 2
Multilevel Model Predicting Latency to Post Again from Prior Tweeting Frequency and RPE
Reactions: Study 1
Independent Variable                         df          β      p       95% CI
Intercept                                    225.01     -.03    .47     -.10, .05
RPE reactions                                25,029.90   .01    .25     -.01, .03
Prior tweeting frequency                     197.79     -.49    <.001   -.55, -.42
Prior tweeting frequency × RPE reactions     24,998.73  -.01    .02     -.02, -.005
Note. Estimates are the standardized coefficients (β) of the terms in the multilevel model above.
The dependent variable is between-post latency. RPE positive reactions are unstandardized
differences between expected and experienced rewards. Past tweeting rate is a participant-level,
dataset mean-centered variable measured during the month before the study. The interaction term
is the interaction between prior tweeting frequency and RPE. Degrees of freedom are calculated
using the Satterthwaite method for multilevel models.
Figure 1
Plot of the Interaction in Study 1 Between Prior Tweeting Frequency and Reward Prediction
Errors Predicting Latency to Post Again
Note. Latency (log hours) between posts as a function of prior tweeting frequency and RPE.
Prior tweeting frequency was measured by the web scraper, counting the average tweets per day
from the month prior to the study’s observation period.
Discussion
The findings from this first study confirm the central pre-registered hypothesis that more
frequent tweeters are less influenced by others’ reactions than less frequent tweeters. Among
those who tweeted infrequently, more positive reactions on the immediately prior tweet than
expected (i.e., a positive RPE) influenced how quickly users posted again. Hence, as predicted,
less frequent tweeters were found to be more influenced by RPEs on their tweets. Conversely,
more frequent tweeters were less influenced by the rewards (RPEs) on their tweets.
In one crucial way, these findings echo prior investigations of active social media use,
specifically Facebook and Instagram posting behavior: Habitual posters persisted in posting
again quickly, almost regardless of the reactions from others on prior posts (Anderson & Wood,
2023). Highly frequent posters of moral outrage on Twitter also exhibited insensitivity to RPE
measures of social feedback, continuing to post moral outrage in future posts regardless of RPEs
(Brady et al., 2021). Furthermore, habit insensitivity to reward has not always been obtained in
experimental research on human performance (de Wit et al., 2018), perhaps because in
laboratory settings, participants are motivated to actively override unwanted habitual responses
(e.g., Hardwick et al., 2019). This research tested habit instead in the everyday context of social
media use, contributing a novel observational procedure of a streaming scrape method and
support for habit insensitivity to reward. Our result supports the idea that habits are best observed
in research designs that isolate the temporal order of responses and their outcomes, and that do
so in everyday social settings replete with the distractions, stresses, and fatigues of daily life.
Nonetheless, occasional or novel users of Twitter/X in the present research were not
motivated in the same ways by rewards as in earlier research with Facebook and Instagram
(Lindstrom et al., 2021; Anderson & Wood, 2023). Less frequent Facebook and Instagram
posters posted more rapidly in response to more positive reactions, whereas in the present study,
less frequent X posters instead posted more rapidly in response to fewer positive reactions. This
result was unexpected based on the prior literature but did not contradict our pre-registered
prediction that the effect of rewards would depend upon habit strength (a significant Reward ×
Habit Strength interaction). To determine whether the slightly different analysis strategies in the
studies, especially the current use of reward prediction errors, contributed to the different results,
we tested whether the current findings held using raw counts of reactions, as in prior research
(Anderson & Wood, 2023). However, in the present study, less frequent posters decreased
posting rates after receiving larger raw numbers of rewards (Appendix A, Table A-2). It is also
worth noting that this same pattern replicated when RPEs included a fixed learning rate
(Appendix A, Table A-3) and when the dataset was recalculated to exclude participants’ direct
retweet posts (Appendix A, Table A-4).
One possible explanation for the negative effects of rewards on posting rates is the
different nature of posting on Facebook or Instagram versus posting on X. Communicating with
multiple people (i.e., broadcasting) as opposed to one person (i.e., narrowcasting) leads people
to share content designed to create favorable impressions with the crowd and makes failing to
receive those favorable impressions aversive (Barasch & Berger, 2014).
broadcasting, given the public nature of the platform. As a result, users may seek out positive
reactions and post again when they do not receive them.1 In contrast, Facebook is designed as a
site for friends and family with a smaller audience, more similar to narrowcasting. This would
encourage users to focus more on others and be less self-presentational, and potentially make
receiving more positive reactions from these others more motivating (Anderson & Wood, 2023;
Barasch & Berger, 2014). However, this interpretation is challenged by the positive effect of
rewards on posting rates found on Instagram, which is plausibly a broadcasting site.
1 Thanks to Prof. Nathanael Fast for this suggestion.
Another possible explanation is the time investment required to post on each site.
Facebook and Instagram have always lacked the strict character limits of Twitter posts (140
characters at the time the present study was conducted), allowing users to curate longer-form
content and likely invest more time in each post. Posting on Instagram is similar to posting a
photo on Facebook but may require even more significant time investment (editing photo[s],
video[s], or a reel and writing a caption, etc.). Thus, it could be easier for Twitter/X users to
chase desired rewards by posting rapidly, whereas Instagram and Facebook users would be hard-pressed to post at the same rate. In support, average posting rates among Instagram and
Facebook users are significantly lower than Twitter/X users, who frequently post multiple times
daily. If it is more costly to create posts on Facebook and Instagram, users would likely not
rapidly post again to get more rewards when they do not receive them (as observed in our
Twitter/X sample) and instead might pause before making another post in order to invest more
time in crafting a post they believe will achieve their desired reward outcomes. This proposed
behavior aligns with the patterns of posting behavior observed among less frequent and non-habitual Facebook and Instagram posters (Anderson & Wood, 2023). Whether the observed
relationship between RPE and latency for less frequent posters observed in the present study is
driven by audience size or time investment is an additional exciting question for future research.
Finally, a potential unifying framework is provided by the influence of rewards during
foraging. Foraging describes individuals’ search for sparse rewards and continuing search when
they do not receive them (Kolling & Akam, 2017). In this framework, posting can be considered
a kind of foraging in which users seek social rewards. The behavior observed among non-habitual tweeters is similar to the behavior observed in food-seeking studies among mammals:
individuals seek out rewards (or food) by performing a behavior until they receive them and stop
or slow their behavior once they do (Anselme & Güntürkün, 2019). Conversely, not receiving
the reward encourages individuals to keep performing their reward-seeking behavior. This aligns
precisely with the posting patterns of less frequent Twitter/X posters. It may be that the design of
Twitter/X generates reward-seeking patterns similar to foraging, while the design of Facebook
and Instagram generates reward-seeking patterns more similar to classic reward-learning
paradigms. Future research could profitably replicate and contrast these patterns in long-term
observational studies.
Regardless of the specific direction of reward effects, less habitual users were clearly
reward-responsive in their posting behavior. In contrast, users with stronger Twitter/X habits
were less sensitive to rewards (as with Facebook and Instagram, see Anderson & Wood, 2023).
Finally, we also tested whether the obtained effects of reward could reflect artifacts in the
method or data. Because participants' posting over the month prior to the study ranged from 1 or
fewer posts per day up to 35 daily posts (a high upper limit for Twitter posts), this pattern could
not have been due to a ceiling effect in which social engagement was already maximized among
habitual participants. This range of posting behavior suggests that habitual users could have
increased or decreased their posting rates in response to rewards from other users.
Chapter 3 (Study 2): Twitter Scrolling
This study focuses on what has been called a more passive aspect of social media use that
could become habitual with repetition: scrolling (Verduyn et al., 2020). Posts observed within
the scroll can be differently rewarding to view depending on users’ subjective interest in the
content, which is presented in a relatively uniform format (with minor alterations for links,
photos, and videos), making it easy for users to quickly and efficiently assess whether content is
interesting or enjoyable. This format also makes it easy for users to express their interest (or
displeasure) by clicking reaction buttons (on Twitter/X, these are like, retweet, and reply) and
sometimes responding with commentary (replies and quote retweets only).
Users do not, however, provide reactions to every post. Most users on social media
platforms are lurkers: users who do not post (reply or retweet) and primarily consume content by
scrolling (Sun et al., 2014). Thus, this study directly assessed Twitter/X users’ interest in the
content they see. Our measurement of interest is aligned with recent literature that critiques the
present algorithmic delivery systems for acting solely on the revealed preferences (e.g., likes,
retweets, and replies) of users rather than stated preferences, such as those that can be obtained
through survey data as in the present study (Morewedge et al., 2023).
Users develop implicit or explicit expectations of how interesting the upcoming tweets
further down the scroll will be based on their experienced reward value of prior tweets. As in
Study 1, these relative reward experiences were defined as the differences between experienced
and expected rewards, known as reward prediction errors (RPE). This study was designed to
detect how responsive users’ scrolling behavior was to RPE. The predictions were tested with
live recordings of user behavior alongside these survey data.
Hypotheses
H1 (preregistered): Habitual scrollers will be less sensitive to how intermittent or
consistent rewards are in the posts they scroll than non-habitual scrollers. This lower sensitivity
to rewards would be reflected by a significant interaction between habit strength and reward
variance predicting scrolling behavior. Scrolling behavior is measured as (a) average dwell time
(average time spent on each tweet), (b) total tweets scrolled, and (c) total scrolling time.
The test of H1 presented in this dissertation differs slightly from the preregistration and
dissertation proposal (AsPredicted #119321). In planning the studies, I specified that I would
create scrolls of 50 tweets that varied in rewardingness intermittently (signified by a high
standard deviation across pretest ratings of tweet interestingness) versus consistently (signified
by a low standard deviation across pretest ratings of tweet interestingness). Despite my best
efforts, it was not feasible to create equivalently rewarding scrolls with significantly different
standard deviations (this process is detailed in Appendix B, under Preregistration Deviations). In
addition, pretest ratings and participants’ ratings of interest in the tweets had equivalent means
but different standard deviations, suggesting that participants’ experiences of the tweets differed
from the pretest raters (see Table 3).
Given that it was not feasible to construct distinct intermittently and consistently rewarding scrolls,
an alternate approach used participants’ ratings to create a continuous measure of reward
variance–ranging from consistent (low standard deviation) to intermittent (high standard
deviation) rewards. This interaction would indicate that while less habitual and less frequent
scrollers scrolled more (measured as (a), (b), and (c) in H1 above) when they experienced the
scroll as more intermittently rewarding, more habitual and more frequent scrollers would show
less or no increase in scrolling behavior when they experienced the scroll as more intermittently
rewarding.
Notably, these predictions contradict standard economic models in which greater
rewards are likely to yield greater motivation to continue performing a behavior. This logic is
incorporated in economic models in which habits are more likely to develop from repeating
consistently rewarding choices (Camerer & Li, 2021).
Also preregistered was an exploratory analysis at the tweet level with reward prediction
errors. These analyses included exploring the effects of habit and tweet-level RPEs on users’
scrolling behavior measured as (a) dwell time (time spent on a tweet) and (b) continuing to a
subsequent tweet (whether they advanced to the following tweet in the scroll). These additional
dependent variables are both measured at the tweet level. In contrast, the three mentioned above
in H1, average tweet dwell time, total tweets scrolled, and total scrolling time, are at the
participant level only and are thus not included in the analyses of H2 and H3.
H2 (exploratory): Habitual scrollers will display less sensitivity to tweet-level RPEs in
terms of their tweet-level scrolling behavior, measured as (a) dwell time and (b) scrolling to the
subsequent tweet, compared to non-habitual scrollers. This lower sensitivity would be indicated
by a significant interaction between RPEs and a habit strength measure, predicting (a) dwell
time and (b) scrolling to the subsequent tweet.
Finally, this study explored the quantities of others’ reactions (likes, retweets, and
replies) seen on tweets scrolled by our participants. These signals of others’ social approval
could impact users’ scrolling behavior. We based the following prediction (H3) on prior research
showing that habit performance remains relatively insensitive to social influence (Mazar et al.,
2023).
H3 (exploratory): Habitual scrollers’ scrolling behavior will be less impacted by the
quantities of reactions seen on posts than the scrolling behavior of non-habitual scrollers. This
would be reflected in an interaction between quantities of others’ reactions seen on scrolled
tweets and a habit strength measure, predicting (a) tweet-level dwell time and (b) scrolling to the
subsequent tweet.
Method
Participants
To identify the appropriate sample size, I used the simr package in R (Version 1.04;
Green & MacLeod, 2016) to test whether the proposed model was sufficient to detect a small to
moderate effect (Cohen's f² = .15) of habit strength and reward prediction error on users'
scrolling. Results revealed that a minimum sample of 170 participants scrolling through an
average of 30 posts was necessary to obtain .80 power for the predicted interaction term.
Participants were recruited from the USC subject pool (N = 230) to scroll the USC content and
from Prolific (N = 121) to scroll the entertainment content.
Once each scroll was constructed (see Table 4), potential participants were recruited to a
pre-screening survey. Participants were invited to participate in the full study if they indicated
moderate or greater interest in the relevant topic (4 or greater on a 7-point scale: “how interested
are you in seeing Twitter content related to [USC/Entertainment]?”, 1 = not at all to 7 = extremely),
possessed a Twitter account, and indicated they had an iPhone with the Twitter application
already installed. We attempted to recruit N = 240 to reach the acceptable minimum sample. The
final sample contained 275 participants (USC scroll, subject pool sample N = 186, Entertainment
scroll, Prolific sample N = 89). This number was achieved after 76 participants were removed
from the original 351 due to poor submission quality–defined as scrolling less than ten tweets as
per the study instructions, obviously stopping the scroll to focus on something other than the
tweets, scrolling the wrong page on Twitter, and failing to submit a complete video and/or set of
tweet ratings.
Experimental Design
To construct the simulated Twitter scrolls, I collected 140 tweets on two topics:
university (USC) and entertainment. The topics were selected to ensure participants were
interested in the scroll content and would disengage from the scrolls only after some time.
Entertainment is one of the tabs on Twitter’s Explore page selected to appeal to a broad group of
Prolific users, and the university topic was used to appeal to USC undergraduate students. The
process of customizing these scrolls is detailed in Table 4.
Table 3
Twitter Scroll Rewardingness Means and SD from Study 2

Scroll            Pre-Test Scroll Mean   Participant Scroll Mean   Pre-Test Scroll SD   Participant Scroll SD
USC A                    3.45                    3.79                   0.69                   0.85
USC B                    3.53                    4.02                   0.70                   0.87
Entertainment A          3.13                    2.63                   0.53                   0.74
Entertainment B          3.00                    3.39                   0.60                   1.00
Note. Pre-test scroll means were calculated from N = 50 Tweets, each rated by a minimum of 20
pre-test participants for interestingness to obtain an aggregate reward score at the participant
level. Participant scroll means are calculated only from the tweets seen by each participant. They
are aggregate ratings of all tweets seen by each participant in the assigned scroll category during
the scrolling task.
Table 4
Step-by-Step Twitter Scroll Construction Process from Study 2
Phase of Construction Description
1. All Tweet Gathering Researchers found 140 tweets that seemed subjectively
interesting and aligned with the overall topic from (a) the top
most recent tweets on Twitter under the entertainment Explore
tab, and (b) a search for “USC” as well as official university
accounts, for a total of 280 tweets.
2. Pretest Rating All 140 tweets for each topic were rated on “how interesting is
this tweet” (1 = not at all interesting, 7 = extremely interesting)
by a set of at least 20 peer coders to indicate, on average, how
rewarding each tweet was. Coders were undergraduate student
volunteers in the case of the USC scroll and Prolific participants
in the case of the entertainment topic.
3. Initial Scroll Tweet
Selection
After coding, the 280 tweets were divided by topic, including a
unique ID number and their aggregate reward rating. The 140 USC-related and 140 entertainment-related tweets were then ranked
from lowest to highest within their topic by aggregate
rewardingness. Next, within each topic, the middle 50 tweets
(ranked numbers 45-105 of 140) by aggregate reward rating
were separated from the other 90.
4. Scroll Tweet Ordering:
Type A
For type A (formerly intermittent) scrolls within each topic, the
remaining 90 tweets were used. First, tweets 1-45 from the
initial ranking were designated as a low-reward group, and 105-140 as a high-reward group. To order the tweets in an interval
reward schedule, a random number generator was used to
generate 50 unique numbers (1 = low, 2 = high), one in each of
the 50 tweet slots for the proposed scroll (1 = top of scroll, 50 =
bottom of scroll). Fifty corresponding tweets were then
randomly selected (again using a random number generator) by
a unique ID from the corresponding low or high group indicated
in the tweet’s proposed cell. This became the first full proposed
version of the type A scroll.
5. Scroll Tweet Ordering:
Type B
For the type B (formerly consistent) scrolls within each topic,
the middle 50 tweets were put into a randomized order. This
randomization was done by randomly generating new numbers
1-50, with 1 indicating the first tweet (top of the scroll) in the
proposed scroll and 50 indicating the final tweet (bottom of the
scroll). Once put in their randomized order, these 50 randomly
ordered tweets became the type B reward scroll.
6. Mean and Standard
Deviation Equalization
Process (all scrolls)
At this point, the mean and standard deviation of the aggregate
reward measure for the consistent scroll in each topic was
calculated. If the mean of the aggregate reward measure for the
proposed intermittent scroll was outside 1 SD from the
consistent scroll mean, tweets with lower or higher rewards than
the randomly selected ones were selected at the researchers’
discretion from the remaining 40 tweets in the intermittent scroll
pool until the means of the intermittent and consistent scrolls
were within 1 SD. This was also examined across topics to
ensure all scrolls were comparably rewarding. However, no
discretionary changes were necessary to bring the scrolls within
1 SD across topics after matching them within topics. The sports
scroll was never used for participants in this study and thus will
not be discussed further. However, during this process, it was
not possible to bring the mean values into equivalence without
impacting the standard deviation in ways that violated one being
significantly more intermittent than the other. The final scroll
means and standard deviations for each scroll topic and scroll
type are in Table 3 below. The scrolls are functionally
equivalent in mean rewardingness and reward standard deviation
(reward variance) based on the pre-test ratings. From this point
forward, the scrolls will be called USC[Entertainment] A[B],
indicating the topic of the scrolls’ content and their former
category, intermittent being “A” and consistent being “B” to
indicate that different tweets were present in these scrolls, which
are otherwise equivalent based on the pre-test ratings.
7. Twitter Account
Creation
At this point, I created four new Twitter accounts. These
accounts had no biographical information or other identifiable
information, merely variations of the research lab’s name as a
handle (e.g., @h****lab2) and display name. Four research
assistants were then assigned, one to each account, to tweet out
the 50 tweets in the predetermined order.
Note. Scroll names were censored to protect privacy. All tweet content is available in the full
data sheet on the OSF page.
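The randomization in steps 4 and 5 can be sketched as follows. This is an illustrative Python sketch of the procedure described above, not the actual construction script; the pool contents, function names, and random seed are assumptions for demonstration only.

```python
import random

def build_type_a_scroll(low_pool, high_pool, length=50, seed=1):
    """Type A (formerly intermittent) ordering: randomly assign each of the
    50 scroll slots to the low- or high-reward group (1 = low, 2 = high),
    then draw a unique tweet ID from the corresponding pool for each slot."""
    rng = random.Random(seed)
    low, high = list(low_pool), list(high_pool)
    scroll = []
    for _ in range(length):
        pool = low if rng.choice([1, 2]) == 1 else high
        scroll.append(pool.pop(rng.randrange(len(pool))))  # sample without replacement
    return scroll

def build_type_b_scroll(middle_pool, seed=1):
    """Type B (formerly consistent) ordering: put the middle 50 tweets
    into a fully randomized order."""
    rng = random.Random(seed)
    scroll = list(middle_pool)
    rng.shuffle(scroll)
    return scroll
```

In this sketch, sampling without replacement guarantees each tweet ID appears at most once in a scroll, mirroring the unique-ID selection described in step 4.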
Because the scrolls were posted to actual Twitter/X account pages, the interaction buttons
(like, retweet, share, bookmark) were still active for each user and thus provided a naturalistic
Twitter scrolling experience. The order of the tweets was then checked for correctness by me and
another researcher. This order was checked every week throughout the study period to ensure
tweets (all original content generated by other accounts on Twitter that those accounts could
potentially delete at any time) were not deleted from the scrolls.
Procedure
Figure 2
Study 2 Diagram of the Experimental Procedure.
Note. All five stages were completed in one sitting by all participants in Study 2.
Participants who passed the pre-screening survey on either Prolific or the SONA Subject Pool platform were invited to the full study, which consisted of five stages (see Figure 2). In
both the Prolific and subject pool samples, participants were required to take the study on a
computer with their iPhone present. While scrolling the pre-made Twitter/X scrolls, all
participants were instructed to record their phone screen with highly detailed and specific
instructions (Stage 1). Each participant was then randomly directed to either scroll A or B in the
USC (subject pool) or Entertainment (Prolific) topic and told to scroll for a minimum of 10
tweets (Stage 2). A button to proceed to the next part of the study appeared after 120 seconds on
the final instruction screen to ensure participants did not skip to the next part of the survey
without scrolling and recording. The participants in the Prolific study and the second wave of the
participants in the USC study were instructed to screenshot the final tweet they saw in the scroll
to assist them in the next part of the study. The screenshots were taken to ensure participants
could determine where to stop rating tweets in the final part of the task and submit the study.
This behavior also allowed coders to determine more easily the tweet on which participants
stopped scrolling. After this screenshot, participants stopped their recordings, completing Stage
2. They sent these recordings to the researchers via a private Dropbox link (Stage 3). Once
received, the research team ensured that videos were labeled only with their participant ID. At
this point, participants did not use their smartphone further.
In Stage 4, participants advanced to a new part of the survey on their computer. First,
they rated how interesting they found each tweet they had just seen in the scroll, with screenshots
of each corresponding tweet and links to all tweets containing video content accompanying each
question. Participants were instructed to stop rating tweets after the last tweet they saw in the
scroll, as identified by their screenshot in the Prolific study and by memory in the psychology
pool group. When participants rated more tweets than they actually scrolled in their submitted
video, these additional ratings were removed from the data. After completing these ratings,
participants were directed to the final survey page to respond to several scales, including
scrolling habit measures and demographic information (Stage 5).
Video Coding and Reliability
Behavioral measures were collected from participants’ scrolling videos by the research
team, including the dependent measures mentioned in our main hypotheses: participant level (a)
average tweet dwell time, (b) total tweets scrolled, (c) total scrolling time, as well as tweet-level
(a) dwell time, and (b) scrolling to the subsequent tweet. This process is detailed in the steps
below:
1. Each video was assigned to two separate research assistants and was independently coded for
each of the measures described below at the tweet and participant level.
2. After coding, the ratings provided by each coder were compiled into separate datasets, from
which intra-class correlations (ICC) across the first two coders were calculated.
3. Variables that met the standard of an ICC = .8 or greater or Cohen’s kappa of .7 or greater
were considered settled after the first two rounds, and an average of the two codings was
taken to represent the final measure used for analyses.
4. For variables that did not meet these standards, the 10% most different codings were
identified.
5. For the variables identified in (4), a third coder provided new ratings for the top 10% of
most different codings, and ICCs were recalculated. The final values of these top 10%
variables were the average of all three rounds of coding.
6. Finally, some ICCs were low because coders disagreed on how many tweets participants
had viewed. Participants with high discrepancies in estimates of the number of tweets
scrolled were sent for an arbitration round of additional coding. In this round, two new
coders and the lead researcher agreed on the exact codings for all tweets scrolled. The first
two rounds were replaced by arbitration round codings for these participants. The final
ICCs were calculated after this process, and the two-round ICCs are available in the
supplementary materials (see Appendix B, Table B-1). The fully anonymized version of
our dataset is available for download via OSF (link).
Measures
Means and standard deviations of these measures are available in Table 5, along with
correlations between the variables of interest.
Tweet-level Variables.
Experienced Reward. To capture the rewardingness of each tweet in the scroll,
participants rated “how interesting was this tweet?” (1 = not at all to 7 = extremely). This rating
was obtained during the post-scrolling-task survey, during which screenshots of all tweets (with
links below for video or photo gallery content) were presented sequentially above this question
in the same order as participants had seen the tweets during the scrolling task. Interest value was
assessed because pretest coder ratings were very similar for this item and the other measures of
rewardingness I tested (e.g., funniness; see Appendix B for more information). The range of this
measure was 1 to 7.
Expected Reward. This measure was calculated by taking the mean of participants’
responses to the “how interesting was this tweet” (1 = not at all to 7 = extremely) experienced
reward measure for every tweet before the focal tweet. Thus, for each tweet number n,
experienced reward is the rating of tweet n, and expected reward is the mean rating from tweets 1 to n-1. This formula applies to all tweets except the first tweet in the scroll, where the expected reward is set equal to the experienced reward. The range of this measure was 0 to
4.54.
A second version of this measure takes a weighted average of the reward ratings on all posts prior to the focal post, applying a fixed learning rate (of .1) that exponentially weights the more recently scrolled posts more strongly, similar to the models in Otto et al. (2016). Results of analyses with this measure are included in Appendix B (Tables B-11, B-12, and B-13).
Reward Prediction Error (RPE). RPE was calculated for each tweet and each participant by subtracting the expected reward from the corresponding experienced reward. As mentioned above, two measures of expected reward (with a fixed learning rate and without) were used in these calculations, so two measures of RPE were reported.
RPE(n) = Experienced Reward(n) - Expected Reward(1...n-1)
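As a concrete illustration, both versions of the expectation and the resulting RPEs can be computed as follows. This is a minimal Python sketch under the definitions above (the reported analyses themselves were conducted in R); the function names are mine.

```python
def rpes_running_mean(ratings):
    """RPE on each tweet: experienced reward minus the mean rating of all
    prior tweets. For the first tweet, the expected reward is set equal to
    the experienced reward, so its RPE is 0."""
    rpes = []
    for n, r in enumerate(ratings):
        expected = r if n == 0 else sum(ratings[:n]) / n
        rpes.append(r - expected)
    return rpes

def rpes_learning_rate(ratings, alpha=0.1):
    """Alternative RPE with an exponentially weighted expectation that
    up-weights recently scrolled tweets (fixed learning rate alpha = .1)."""
    rpes = []
    expected = ratings[0]  # initialize the expectation at the first rating
    for r in ratings:
        rpes.append(r - expected)
        expected += alpha * (r - expected)  # delta-rule update
    return rpes
```

For example, ratings of [4, 6, 2] yield running-mean RPEs of [0, 2, -3]: the third tweet is experienced as less rewarding (2) than expected from the first two ((4 + 6) / 2 = 5).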
Total Others’ Reactions. This measure was calculated as the raw count of the combined
quantity of likes, retweets, and replies from other users on each post. These quantities are
displayed on all Twitter posts and signal each tweet’s popularity with other Twitter users. The
range of this measure was 1 to 43,139.
Dwell Time. This measure was calculated as the number of seconds a user spent looking
at a tweet. Coders used a stopwatch timer to calculate time precisely. When two or more tweets
appeared on the screen at once, the time was evenly divided to the tenth of a second and added to
each of the visible tweets’ dwell times. The range of this measure was 0 to 266 seconds, and the
range of the average of this measure was 0.38 to 21.85 seconds/tweet.
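The even-split rule for simultaneously visible tweets can be expressed as a small helper. This is an illustrative Python sketch of the hand-coding rule only; the actual coding was performed manually from the videos with stopwatch timers.

```python
def split_dwell(on_screen_seconds, n_visible_tweets):
    """Evenly divide shared on-screen time among all simultaneously visible
    tweets, rounded to the nearest tenth of a second per the coding rule."""
    return round(on_screen_seconds / n_visible_tweets, 1)
```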
Regarding potential outliers, at least 32 users spent over 60 seconds of dwell time on at
least one tweet in the scroll. In the video coding process, these behaviors were determined to be
legitimate. Participants were often watching videos or browsing articles. However, these long
dwell times do indicate disruptions in users’ scrolling. Thus, additional models were computed
for all dwell time models, excluding these 32 outliers. These models produced similar results to
those in the text and are in Appendix B (see Table B-9, B-10, B-11, and B-12).
Dwelt on Tweet. This measure is a post-level binary variable based on extended
dwell time–if a user spent more than 1 second looking at a tweet, that tweet was
considered to be dwelt upon or paused at and received a 1. Tweets not observed for this
long were considered skipped and received a 0.
Scrolled to Subsequent Tweet. Following a tweet, users kept scrolling to the subsequent
tweet (1= continued scrolling) or exited the observational portion of the study (0 = stopped
scrolling).
Participant-Level Variables.
Scroll Indicator. This factor represented which scroll the participant viewed (1 = USC
Topic A, 2 = USC Topic B, 3 = Entertainment Topic A, 4 = Entertainment Topic B). Table 3
shows each scroll’s mean reward value, reflecting the pretest peer ratings and the post-task
survey ratings from study participants. Although all scrolls were equivalently rewarding based
on the pretest data, entertainment A and B were not rated as equivalent in the post-task survey
(see Table 3). Therefore, all calculated regression analyses use the Scroll Indicator variable to
control for scroll-based differences.
Prior Scrolling Frequency. This self-reported measure of habit strength was based on
objective data about users’ Twitter use. Participants reported the weekly time they spent scrolling
Twitter in hours per week during the past six months after reviewing their screen time app (all
participants could access this feature, as all were required to have an iPhone). The range of this
measure was 0 to 58 hours. One participant was excluded from all analyses with this variable as
an extreme outlier for entering the scale maximum (100 hours).
Self-Report Behavioral Automaticity Index (SRBAI, Gardner et al., 2012). On a scale
ranging from 1 (agree) to 7 (disagree), participants rated experienced automaticity of Twitter
scrolling on the 4-item Self-Report Behavioral Automaticity Index (Gardner et al., 2012). Items
completed the stem “Scrolling Twitter is something…”: “…I do automatically,” “…I do without thinking,” “…I do before I realize I am doing it,” and “…I do without having to consciously remember” (alpha = .935).
Total Scrolling Time. One dependent variable measure was the total time (in seconds)
that users spent on the scroll. Time was measured from the first swipe downward on the page
containing the assigned Twitter scroll until the participant exited the scroll and/or took a
screenshot of the final tweet. Participants were required to scroll for at least 10 seconds. Thus,
the range of this measure was 10.5 to 874.5 seconds.
Total Tweets Scrolled. A second dependent variable was the raw count of tweets viewed
by the participants in the video recording of their scrolling. It was counted from the first tweet at
the top of the scroll until the tweet at which they exited the scroll and/or took a screenshot.
Participants were required to scroll a minimum of 10 tweets. Thus, the range of this measure was
10 to 50.
Reward Variance. A participant-rated measure of intermittency of rewards was
calculated as the standard deviation of users’ own ratings of tweet rewardingness across all
tweets they viewed during scrolling. A lower standard deviation indicated that a participant
experienced the scroll as more consistently rewarding. A higher standard deviation indicated that
a participant experienced the scroll as more intermittently rewarding. The range of this measure
was 0 to 3.46.
Aggregate Reward Rating. Overall rewardingness was calculated as the mean of
participants’ own ratings of tweet interest across all tweets viewed during the scrolling task.
Higher mean interest indicated that a participant experienced the scroll as more rewarding on
average. Lower mean interest indicated that a participant experienced the scroll as less rewarding
on average. The range of this measure was 1.02 to 6.73.
Results
Participant-Level Means and Correlations
Table 5
Means, Standard Deviations, and Correlations Between Participant-level Variables of Interest: Study 2

Variables                                   Mean     SD       1       2      3       4       5      6
1. Prior scrolling frequency (hrs/week)     7.95    11.14
2. SRBAI                                    3.44     1.91    .25***
3. Total tweets scrolled                   25.77    11.53    .11     .06
4. Total scrolling time (seconds)         162.06   114.03   -.06     .06   .53***
5. Reward variance (intermittency)          1.48     0.50   -.14*    .06  -.04     .07
6. Aggregate reward rating                  3.59     0.98    .06    -.11  -.01     .06    -.12*
7. Average tweet dwell time                 5.31     3.39   -.15*   -.02  -.07     .65***  .14*   .10

Note. Means and standard deviations are based on the raw data. Correlations represent Pearson's r and are computed on mean-centered, standardized versions of the variables. Average tweet dwell time is the participant-level average of the tweet-level dwell time measure.
* p < .05. ** p < .01. *** p < .001.
The two measures of habit strength were positively correlated: self-reported prior
scrolling frequency and SRBAI score, r(259) = .25, p < .001 (15 participants in the final sample
failed to fill out the SRBAI and frequency measures, and one participant was removed for being
an extreme frequency outlier). This correlation is lower than anticipated, perhaps because the
frequency measure was self-reported from participants’ hours on Twitter/X per week based on
their iPhone logs (via screentime), including active use (e.g., composing posts).
The two participant-level dependent variables, total tweets scrolled and total scrolling
time, were substantially correlated, r(273) = .53, p < .001, which indicated that participants who
scrolled more tweets also spent more total time scrolling. In addition, the reward variance
(intermittency) was slightly correlated with the prior scrolling frequency measure of habit
strength, r(258) = -.14, p = .02, suggesting that more habitual scrollers rated the tweets they
scrolled as slightly more consistent (or less intermittent) in rewardingness. The unanticipated
small but significant correlation between reward variance and aggregate reward rating indicated
that participants who found the tweets they scrolled to be more intermittently rewarding also
found tweets less rewarding on aggregate, r(259) = -.12, p = .046. Interestingly, aggregate
reward rating was not correlated with total tweets scrolled, total scrolling time, or average dwell
time, suggesting that finding the tweets more or less rewarding on aggregate did not directly
impact participants’ scrolling behavior. In addition, aggregate reward rating was not correlated
with either measure of habit strength, suggesting that habitual and non-habitual scrollers found
the tweets to be similarly rewarding on average.
Finally, several variables correlated with the average tweet dwell time (participant-level
variable). Average dwell time was weakly correlated with prior scrolling frequency, r(262) = -.15, p = .016, suggesting that less frequent scrollers tended to stop on tweets for more time (on
average) during their scrolling sessions. Average tweet dwell time was also highly correlated
with total scrolling time, r(273) = .65, p < .001, indicating that those who scrolled longer also
dwelled on tweets for longer. Finally, average dwell time was weakly correlated with reward
variance, r(259) = .14, p = .02, suggesting that participants who experienced tweets as more
intermittently rewarding also tended to dwell longer on the tweets they scrolled. This relationship
is similar to the one predicted in the pre-registered hypothesis, as this suggests that users spent
more time on each tweet they scrolled when they experienced the overall scroll as more
intermittently rewarding.
Reward Variance Analyses
Participant-level Analysis. H1 anticipated that the reward variance of the scroll would
impact non-habitual scrollers more than habitual scrollers. In particular, non-habitual scrollers
were anticipated to scroll more if the rewards were more variable (intermittent), while reward
variance should have a weaker effect on habitual scrolling. Scrolling behavior is reflected in total
scrolling time, total tweets scrolled, and average tweet dwell time. I examined these relationships
first through simple linear regressions, in which these three measures of scrolling behavior were predicted by reward variance and habit strength (SRBAI or prior scrolling frequency), all
at the participant level while controlling for which scroll participants viewed using the scroll
indicator variable.
Table 6
Simple Linear Regression Model Predicting Average Dwell Time from Reward Variance and Habit Strength (SRBAI), Controlling for Scroll Indicator: Study 2

Independent Variables      df     β      p     95% CI
Intercept                 251    .14    .17    -.06, .34
SRBAI                     251    .03    .67    -.10, .16
Reward variance           251    .15    .01     .03, .28
Scroll indicator [2]      251   -.12    .46    -.42, .19
Scroll indicator [3]      251   -.36    .06    -.74, .02
Scroll indicator [4]      251   -.29    .12    -.66, .08
SRBAI x Reward variance   251    .01    .83    -.11, .14

Note. Estimates are the standardized coefficients (β) of the terms in the linear model. df represents residual (error) degrees of freedom. The dependent variable, average tweet dwell time for each participant, is continuous. Reward variance is assessed for each participant based on their own ratings of each tweet scrolled and is also continuous. The scroll indicator is a factor that takes values 1 to 4, indicating each scroll type: 1 = USC Topic A, 2 = USC Topic B, 3 = Entertainment Topic A, 4 = Entertainment Topic B.
Dwell Time. The analysis of H1 found a significant positive relationship between reward
variance and dwell time (see Table 6) when controlling for habit strength, measured as either
prior scrolling frequency or SRBAI, and was also significant when interactions between the SRBAI habit strength measure and reward variance were added to the model alongside the scroll
indicator. However, neither habit strength measure was related to the dependent variable, and the
interaction terms failed to reach significance. Additionally, the relationship between reward
variance and prior scrolling frequency becomes marginal when the interaction term is added to
the model (see Appendix B, Table B-2). This pattern was also replicated with the dataset that
excluded participants with outlier values for dwell time. These results are available in Appendix
B, Table B-3, B-4. Thus, our pre-registered hypothesis is only partially confirmed: Participants
who experienced their scroll as more intermittent spent longer in aggregate looking at each
tweet.
Total Tweets Scrolled. The analysis of H1 found no significant relationship between
reward variance and total tweets scrolled, including when controlling for the interaction term. No
model terms reached significance in these analyses; thus, they are not included in the appendix.
Total Scrolling Time. The analysis of H1 found a significant interaction effect in the
opposite direction of our prediction. Participants who are more frequent scrollers scrolled for
longer total scrolling time when rewards were more intermittent. In contrast, less frequent
scrollers displayed little relationship between reward variance and their total scrolling time, β =
.10, 95% CI [.01, .19], p = .03, df = 253 (see Appendix B, Table B-5). However, this relationship
did not replicate with the habit strength (SRBAI) measure, which suggests it may be an anomaly
in our data.
Tweet-level Analysis (dwell time only). A multilevel model was also used to test H1's
prediction that reward variance of the scroll would impact non-habitual scrollers more than
habitual scrollers. Multilevel modeling was done to account for the possibility that a previous
tweet's dwell time correlated to the next within participants in ways that would not be captured
by using an aggregate dwell time measure (because average measures do not account for the
multilevel structure of the dwell time variable). This analysis did not identify any new significant
effects. Again, the predicted positive, significant relationship between reward variance and dwell
time on each tweet was not impacted by habit strength as anticipated. This effect becomes
marginally significant when using prior scrolling frequency as a measure of habit strength in a
model with otherwise identical terms, β = .04, 95% CI [-.001, .08], p = .058, df = 271.96 (see
Appendix B, Table B-6). Models run with the dataset excluding dwell time outliers yielded
similar results.
Table 7
Multilevel Model Predicting Dwell Time from Reward Variance and SRBAI, Controlling for
Scroll Indicator: Study 2
Independent Variables df β p 95% CI
Intercept 269.52 .05 .12 -.01, .11
Reward variance 268.22 .05 .02 .01, .08
SRBAI 265.06 .02 .41 -.02, .06
Scroll indicator [2] 245.99 -.04 .44 -.13, .06
Scroll indicator [3] 263.83 -.11 .06 -.23, .01
Scroll indicator [4] 273.14 -.08 .17 -.20, .04
Reward variance × SRBAI 276.37 .001 .82 -.03, .04
Note. Estimates are the standardized coefficients (β) of the terms in the multilevel model. df
represents Satterthwaite degrees of freedom. The dependent variable, tweet dwell time for each
tweet seen by each participant, is continuous at the tweet level. Reward variance is assessed for
each participant based on their own ratings of each tweet scrolled and is also continuous. Higher
SRBAI scores indicate more habitual scrolling behavior. The scroll indicator is a factor that takes
values 1 to 4, indicating each scroll type: 1 = USC Topic A, 2 = USC Topic B, 3 = Entertainment
Topic A, 4 = Entertainment Topic B.
Reward Prediction Error Analyses
H2 anticipated that reward prediction errors on each tweet would predict scrolling
behavior for less habitual users. This relationship between RPE and scrolling behavior would be
attenuated for more habitual users. To test H2, self-reported habits and prior scrolling frequency
were analyzed as continuous predictors in separate multilevel models. First, to test the RPE
hypothesis, I constructed multilevel models using post-level, within-participant RPE terms,
interacted with habit strength or prior scrolling frequency to predict (a) participants’ dwell time
on each tweet and (b) whether participants scrolled to a subsequent tweet.
As the data are structured as dwell time and RPE at the tweet level within participants, I
employed a multilevel (hierarchical) model for these analyses. Accordingly, the Level 1 equation
captures the association between RPEs and how long a user dwelt on a post. Level 2 models the
interaction between prior scrolling habits (at the participant level) and the RPE (within-participant, at the post level) predicting dwell time.
(1) DwellTimeᵢⱼ = β₀ⱼ + β₁ⱼRPEᵢⱼ + εᵢⱼ
(2a) β₀ⱼ = γ₀₀ + γ₀₁(HabitStrengthⱼ × RPEⱼ) + γ₀₂HabitStrengthⱼ + γ₀₃RPEⱼ + u₀ⱼ
(2b) β₁ⱼ = γ₁₀ + γ₁₁HabitStrengthⱼ + u₁ⱼ
This set of models included covariates (not displayed in the sample equations) of scroll
type indicator.
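A minimal numeric sketch of this two-level structure may clarify how the pieces combine. The coefficient values below are illustrative (loosely patterned on the reported estimates), not fitted values, and the function names are assumptions.

```python
def level2_intercept(g00, g01, g02, g03, habit_j, rpe_bar_j, u0j=0.0):
    # Level 2 intercept equation: participant-level intercept from habit
    # strength, the participant's aggregate RPE, and their interaction.
    return g00 + g01 * habit_j * rpe_bar_j + g02 * habit_j + g03 * rpe_bar_j + u0j

def level2_slope(g10, g11, habit_j, u1j=0.0):
    # Level 2 slope equation: the participant's RPE slope, moderated by
    # habit strength (a negative g11 attenuates the RPE-dwell relationship).
    return g10 + g11 * habit_j + u1j

def predicted_dwell(b0j, b1j, rpe_ij):
    # Level 1 equation: tweet-level dwell time from the participant's
    # intercept and slope and the RPE on that tweet (residual omitted).
    return b0j + b1j * rpe_ij

# Illustrative coefficients only; compare slopes for participants 1 SD
# below vs. above mean habit strength.
for habit in (-1.0, 1.0):
    b0 = level2_intercept(0.05, 0.0, -0.05, 0.0, habit, rpe_bar_j=0.0)
    b1 = level2_slope(0.24, -0.03, habit)
    print(f"habit={habit:+.0f} SD: RPE slope={b1:.2f}, "
          f"predicted dwell at RPE=1: {predicted_dwell(b0, b1, 1.0):.2f}")
```

With these toy values, the less habitual participant shows the steeper RPE slope, which is the qualitative pattern the cross-level interaction tests for.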
Dwell Time Analysis. In Table 8 and Figure 3 below, we see the predicted interactions
from H2 at the tweet level. In particular, the predicted positive relationship emerged between
RPE and dwell time. In addition, having stronger habits (higher prior scrolling frequency)
attenuated this relationship, β = -.03, 95% CI [-.05, -.01], p = .01, df = 6142.44. Interestingly,
this interaction did not remain significant with the SRBAI measure using the full dataset.
However, this model is one of several that showed similar effects, suggesting that the effect may
nonetheless be fairly robust.
Dataset Excluding Dwell Time Outliers. First, the predicted interaction does hold with
prior scrolling frequency and SRBAI measures of habit strength in the dataset that excluded
users who spent more than 60 seconds dwelling on any tweet (N = 32 participants; see Appendix
B Table B-7 and B-8). The SRBAI and RPE interaction term was β = -.04, 95% CI [-.06, -.01], p
= .004, df = 5109.9. In addition, these predicted interaction results held when demographic
characteristics such as age and gender were added to the models (see Appendix B, Table B-9 and
B-10).
Fixed Learning Rate RPEs. The predicted negative interactions between RPE and each
habit strength measure (prior scrolling frequency and SRBAI) were significant when their
distinct models used the fixed learning rate version of RPE. These fixed learning rate RPEs
weighted more recently scrolled tweets more strongly in calculating expected rewards. Fixed
learning rate RPEs were interacted with prior scrolling frequency and SRBAI in two models
predicting dwell time, using the dataset without dwell-time outliers and controlling for age and
gender effects (see Appendix B, Table B-11, and B-12).
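A fixed learning rate RPE of this kind is commonly computed with a delta-rule update, in which a running reward expectation shifts toward each new outcome by a constant step, so recent tweets weigh more heavily. The sketch below is one plausible implementation; the learning rate, starting expectation, and ratings are illustrative assumptions, not the study's actual parameters.

```python
def fixed_lr_rpes(rewards, alpha=0.3, initial_expectation=None):
    """Reward prediction errors under a fixed learning rate (delta rule).

    alpha controls how strongly recently scrolled tweets dominate the
    expectation; both alpha and the starting expectation are assumptions.
    """
    expectation = rewards[0] if initial_expectation is None else initial_expectation
    rpes = []
    for reward in rewards:
        rpe = reward - expectation      # prediction error on this tweet
        rpes.append(rpe)
        expectation += alpha * rpe      # shift expectation toward the outcome
    return rpes

ratings = [3, 3, 7, 2, 5]
rpes = fixed_lr_rpes(ratings)           # e.g., the surprising 7 yields a large positive RPE
```

Larger alpha values make the expectation track recent tweets more closely; alpha = 1 would make each RPE simply the change from the previous rating.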
Running the prior scrolling frequency model and the SRBAI model with the full dataset
produced similar results to the non-fixed learning rate RPE models above: (a) the prior scrolling
frequency and fixed learning rate RPE interaction term was significant, β = -.03, 95% CI [-.07,
-.005], p = .04, df = 4716.37 (see Appendix B, Table B-13) and (b) the SRBAI and fixed learning
rate RPE interaction did not reach significance in a model where it replaced the prior scrolling
frequency variable.
Dwelt on Tweet Analysis. Using the RPEs to predict the dichotomous (1 if a participant
paused for greater than 1 second on a tweet, 0 if otherwise) version of the dwell time variable in
a GLMM (Generalized Linear Multilevel Model), we find that the interaction between prior
scrolling frequency and non-fixed learning rate RPEs did not achieve significance. However, the
analysis revealed a significant effect of reward prediction error (RPE) on dwelling likelihood,
with an estimated odds ratio coefficient of 1.89 (SE = 0.03, z = 15.83, p < .001). This effect
indicated a positive relationship between greater RPE and a greater likelihood of dwelling. In
addition, prior scrolling frequency had a significant effect, with an estimated odds ratio
coefficient of 0.78 (SE = 0.07, z = -2.6, p = .009). This effect suggests a negative relationship
between prior scrolling frequency and the likelihood of dwelling on tweets, such that less
frequent scrollers were more likely to dwell on tweets (see Appendix B, Table B-14). The
SRBAI interaction with RPEs did not reach significance in a model where SRBAI replaced the
prior scrolling frequency variable, and SRBAI score did not predict participants dwelling on
tweets either, while the effect of RPE on dwelling likelihood remained the same. These results
hold when using the data excluding dwell time outliers instead of the full dataset and replacing
RPE terms with fixed learning rate RPEs in each dataset across both habit strength measures.
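The odds-ratio estimates reported above can be translated into predicted dwelling probabilities with the standard logistic transformation. In the sketch below, the RPE (1.89) and prior scrolling frequency (0.78) odds ratios come from the text, but the intercept value and the function name are illustrative assumptions.

```python
from math import exp, log

def dwell_probability(rpe_z, habit_z, or_intercept, or_rpe=1.89, or_habit=0.78):
    """Predicted probability of dwelling (> 1 s) on a tweet from odds-ratio
    estimates, assuming standardized predictors. The intercept odds ratio
    is a hypothetical value, not a reported estimate."""
    log_odds = log(or_intercept) + rpe_z * log(or_rpe) + habit_z * log(or_habit)
    odds = exp(log_odds)
    return odds / (1 + odds)

# A 1-SD-higher RPE multiplies the odds of dwelling by 1.89; a 1-SD-higher
# prior scrolling frequency multiplies them by 0.78 (i.e., reduces them).
p_base = dwell_probability(0, 0, or_intercept=4.0)
p_high_rpe = dwell_probability(1, 0, or_intercept=4.0)
p_frequent = dwell_probability(0, 1, or_intercept=4.0)
assert p_high_rpe > p_base > p_frequent
```

This makes the two reported main effects concrete: higher RPEs raise, and more frequent scrolling lowers, the probability of dwelling.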
Table 8
Multilevel Model Predicting Tweet Dwell Time from the Prior Scrolling Frequency and Reward
Prediction Errors, Controlling for Scroll Indicator: Study 2.
Independent Variables df β p 95% CI
Intercept 265.60 .05 .16 -.02, .11
RPE 6093.30 .24 < .001 .22, .26
Prior scrolling frequency 252.12 -.05 .03 -.09, -.01
Scroll indicator [2] 252.27 -.07 .17 -.17, .03
Scroll indicator [3] 270.76 -.09 .12 -.21, .02
Scroll indicator [4] 279.40 -.06 .29 -.18, .05
Prior scrolling frequency × RPE 6142.44 -.03 .01 -.05, -.01
Note. Estimates are the standardized coefficients (β) of the terms in the multilevel model. df
represents Satterthwaite degrees of freedom. The dependent variable, tweet dwell time, is
continuous. Aggregate reward prediction error is assessed for each participant based on their own
ratings of each tweet scrolled and is also continuous. Habit strength is measured by prior
scrolling frequency in hours per week. The scroll indicator is a factor that takes values 1 to 4,
indicating each scroll type: 1 = USC Topic A, 2 = USC Topic B, 3 = Entertainment Topic A, 4 =
Entertainment Topic B.
Figure 3
Plot of the Interaction in Study 2 Between Habit Strength (Prior Scrolling Frequency) and
Aggregate Reward Prediction Errors, Predicting Average Dwell Time in a Multilevel Model.
Note. Average dwell time as a function of prior scrolling frequency and the aggregate RPE for
each participant during their scroll. For the plot, less frequent scrollers were specified as 1 SD
below the mean and more frequent scrollers as 1 SD above the mean. Shaded areas around the
middle red and blue lines represent 95% CIs.
Scrolled to Subsequent Tweet Analysis. We additionally preregistered another
dependent variable for this analysis, scrolling to the subsequent tweet. This dependent variable
was predicted in a binomial logistic multilevel regression model from prior scrolling frequency,
reward prediction error, and the interaction between prior scrolling frequency and reward
prediction error (see Table 9). The interaction term was only marginally significant, with an
estimated odds ratio coefficient of 0.94 (SE = 0.03, z = -1.66, p = .09). RPEs positively predicted
the likelihood of scrolling to the subsequent tweet, indicating that high RPEs on the present tweet
made it more likely that a user scrolled to the subsequent tweet. In contrast, lower RPEs made it
more likely that a user stopped scrolling (did not go to the subsequent tweet). These effects all
held when using the fixed learning rate RPE term instead of RPE.
This model was also run with the SRBAI habit strength measure instead of prior scrolling
frequency. The effects remained the same in this model, with the interaction between SRBAI and
RPEs not reaching significance and only the positive effect of RPEs on the scrolling to
subsequent tweets maintaining significance.
Table 9
Generalized Linear Multilevel Model (GLMM) Binomial Logistic Regression Predicting
Scrolling to a Subsequent Tweet from Prior Scrolling Frequency and Reward Prediction Errors,
Controlling for Scroll Indicator: Study 2.
Independent Variables Estimate (Odds Ratio) Std. Error Z-score p
Intercept 25.52 2.74 30.18 < .001
RPE 1.17 0.05 3.8 < .001
Prior scrolling frequency 1.02 0.07 0.34 .73
Scroll indicator [2] 1.08 0.18 0.48 .63
Scroll indicator [3] 0.85 0.17 -0.82 .41
Scroll indicator [4] 0.81 0.15 -1.1 .27
Prior scrolling frequency × RPE 0.94 0.03 -1.66 .09
Note. Estimates are the estimated odds ratios of the terms in the Generalized Linear Multilevel
Model, binomial logit. The dependent variable, scrolled to subsequent tweets, is binary at the
tweet level. Reward prediction error is assessed for each participant based on their own ratings of
each tweet scrolled and is also continuous. Habit strength is measured by prior scrolling
frequency in hours per week. The scroll indicator is a factor that takes values 1 to 4, indicating
each scroll type: 1 = USC Topic A, 2 = USC Topic B, 3 = Entertainment Topic A, 4 =
Entertainment Topic B.
Social Influence Analyses
This analysis tested for the predictions laid out in H3. In these predictions, we examined
whether the total number of others’ reactions to each tweet and habit strength predicted scrolling
behavior (measured as dwell time and scrolling to the subsequent tweet). Significant interactions
between these variables would indicate that the total reactions seen on each tweet mattered for
weakly habitual users but less for strongly habitual users. The total of others’ reactions to each
tweet combines the rewards from others that the original poster (not the participant) received for
posting their tweet. This variable is a potential indicator of tweet quality or interestingness–and
we theorized that it may be another possible indicator of reward that participants attend to during
the scrolling process. Accordingly, we tested another habit interaction in multilevel models,
predicting dwell time and scrolling to the subsequent tweet.
Dwell Time. Multilevel models predicting dwell time produced no significant interaction
terms and no significant terms other than the intercept.
Dwelt on Tweet. Generalized linear multilevel models predicted the binary dwelt on
tweet variable from prior scrolling frequency and total reactions. This model found that prior
scrolling frequency has a negative relationship with the probability that a participant dwells on a
tweet for an extended time (see Table 10). This negative effect indicated that participants who
scroll more frequently are less likely to dwell on tweets. However, the model indicated no
relationship between total reactions and participants’ probability of dwelling on tweets.
Replacing prior scrolling frequency with the SRBAI measure of habit strength produced a
similar result.
Table 10
Generalized Linear Multilevel Model (GLMM) predicting Dwelling on a Tweet from Prior
Scrolling Frequency and Total Reactions, Controlling for Scroll Indicator: Study 2.
Independent Variables Estimate (Odds Ratio) Std. Error z-score p
Intercept 4.01 0.68 8.15 < .001
Total reactions 1.01 0.03 0.39 .70
Prior scrolling frequency 0.78 0.07 -2.74 .006
Scroll indicator [2] 0.89 0.21 -0.5 .62
Scroll indicator [3] 0.84 0.24 -0.59 .55
Scroll indicator [4] 0.97 0.27 -0.1 .92
Prior scrolling frequency × total reactions 1.01 0.03 0.24 .81
Note. Estimates are the odds ratio estimates of the terms in the Generalized Linear Multilevel
Model, binomial logit. The dependent variable, dwelt on tweet, is binary at the tweet level. Total
reactions are assessed for each tweet based on the total sum of likes, retweets, and replies a tweet
received, and is mean-centered and standardized for this model. Habit strength is measured by
prior scrolling frequency in hours per week. The scroll indicator is a factor that takes values 1 to 4, indicating each scroll type:
1 = USC Topic A, 2 = USC Topic B, 3 = Entertainment Topic A, 4 = Entertainment Topic B.
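The mean-centering and standardization described in the note amounts to a z-score transformation of the reaction totals, which can be sketched as follows (the reaction counts are hypothetical):

```python
from statistics import mean, pstdev

def standardize(values):
    """Mean-center and scale to unit SD, as described for the total
    reactions variable (likes + retweets + replies per tweet)."""
    m, sd = mean(values), pstdev(values)
    return [(v - m) / sd for v in values]

# Hypothetical total-reaction counts for five tweets:
totals = [12, 340, 5, 88, 1005]
z = standardize(totals)  # centered at 0, unit SD; the viral tweet gets the largest z
```

Standardizing puts the highly skewed reaction counts on the same scale as the other predictors, so the odds ratios in Table 10 describe the effect of a 1-SD change.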
Scrolled to Subsequent Tweet Analysis. Our final test of H3 examined the dependent
variable for scrolling to subsequent tweets. This variable took a value of 1 when a user continued
scrolling to the following tweet and 0 when they did not. In generalized linear multilevel models,
I predicted scrolling to a subsequent tweet from total reactions seen on tweets, the habit strength
measures (prior scrolling frequency and SRBAI), and the interaction between total reactions and
habit strength measures. None of these models produced significant effects of any term
predicting scrolling behavior.
Advertisement Analysis
Suggesting that strong scrolling habits are not guiding participants to focus less on
material unaligned with their interests, the strength of participants’ scrolling habits was unrelated
to behaviors that users might find adaptive, like advertisement-skipping or shorter overall time
spent on advertisements. In these models, the only variable systematically related to
advertisement skipping behavior was participant age, in that older participants tended to spend
more time on advertisements and skip ads less frequently. To illustrate this effect, I have
included a multilevel model predicting time spent on advertisements from RPEs and habit
strength measured as prior scrolling frequency in Appendix B (see Table B-15). These results
held when replacing prior scrolling frequency with the SRBAI habit strength measure.
Discussion
Our initial correlational analysis of the data revealed very few significant relationships
between the study's two habit strength measures and other variables. Although not predicted,
participants with higher scrolling habit strength (SRBAI) found the scrolls slightly less
intermittent, and participants who were more frequent scrollers tended to dwell on tweets for a
lower average amount of time while scrolling. Neither of these relationships was consistent
across both measures of habit strength, and both were relatively small despite reaching
significance. The lack of significant correlations that are consistent across both measures of habit
strength between any scrolling behavior variables (total tweets scrolled, total scrolling time,
average dwell time) or any participant tweet preference-related variables (average rating, reward
variance) indicates that potential goals and motivators of users were roughly the same across
habit strength. This result supports my explanation, drawn from the interactions shown in the
results, that users’ scrolling behavior is driven by habits (for more habitual scrollers) and the
reward prediction errors (for less habitual scrollers) measured during the study.
H1: Reward Variance; Partially Confirmed
Contrary to H1, intermittent rewards increased the scrolling of more habitual and less
habitual users, as reflected in greater dwell time on posts. The original prediction was that a
mixture of rewarding and less rewarding posts would influence less habitual but not more
habitual users. Instead, reward variation in the scroll broadly impacted user behavior regardless
of habit strength. Although not aligned with our preregistered hypothesis, this result is still
scientifically and practically interesting. It suggests that intermittently rewarding scrolls are more
successful in generating more average time spent (dwelt) on each piece of content in the scroll.
However, no significant relationship was observed between intermittency and total tweets
scrolled or total scrolling time. No significant interactions occurred between the total tweets
scrolled or scrolling time variables and our measures of habit strength. This lack of significant
relationships suggests that the relationship between intermittency and scrolling behavior was not
incredibly robust overall, or at least is limited to dwell time. The lack of any consistent
interaction between reward variance and habit strength measures, which would have been
evidence of differences in the scrolling behavior outcome measures (total tweets scrolled, total
scrolling time, and average dwell time) of more habitual and less habitual scrollers based on
scroll intermittency, could be a product of multiple factors. These include the difficulty of
measuring habit strength via frequency in this context, but the most plausible is the challenge,
noted above, of creating an effective manipulation of reward intermittency.
A low correlation emerged between the two measures of habit strength: (a) reports of
prior scrolling frequency and (b) ratings of the automaticity of scrolling from the self-report
behavioral automaticity scale (SRBAI, Gardner et al., 2012). Prior research on social media
habits has identified the need to use specific and precise habit measures in the sense that they are
related to specific behaviors rather than the broad use of a social platform (Anderson & Wood,
2021, 2023; Bayer et al., 2022). Although participants were encouraged to check their screentime
logs to estimate their scrolling behavior accurately, this may have biased their judgments to
include all time spent on Twitter/X instead of scrolling time alone. My attempt to increase the
reliability of self-reports of social media use by grounding them in objective measures was less
helpful than anticipated in measuring scrolling behavior (Parry et al., 2021).
The failure to manipulate the reward structure was surprising but understandable given the
different contexts for the pretest ratings of post interest (reward) and the actual scroll ratings (see
Table 3). Considerable order effects influenced the ratings of interest in the tweets. The pretest
group was shown tweets in a randomized order before the scrolls were created, whereas
participants experienced the scrolls only in their final order. Thus, the differences obtained
between these ratings are likely due to order effects, including carryover between ratings from
one tweet to the next.
H2: Reward Prediction Error; Confirmed with Dwell Time.
For the second hypothesis, I anticipated a significant interaction between reward
prediction errors (RPE) and habit strength predicting scrolling behavior at the tweet level (i.e.,
dwell time as a continuous and a dichotomous variable, with less than a second or skipping
tweets = 0, dwelling = 1). This interaction was significant when predicting continuous dwell time
from prior scrolling behavior using multiple measures of RPE (fixed learning rate and non-fixed
learning rate), even after controlling for multiple covariates (age, gender, scroll indicator), and
even in a data set removing dwell time outlier participants. It was also significant in the data set
without dwell time outliers for SRBAI with and without these covariates. These tests indicated
that RPEs influenced the scrolling behavior of less habitual scrollers, whereas RPEs have less
influence on more habitual scrollers. Among the least habitual scrollers, more positive reward
prediction errors predicted greater tweet dwell time, and this relationship weakened for more
habitual scrollers. Thus, habitual scrollers were driven less by their content preferences than nonhabitual ones, as shown by these participants dwelling on tweets regardless of their interest
value.
Interestingly, these interactions between habit strength and RPEs did not hold when using
dwell time as a dichotomous variable. This divergent result may be due to the nature of our
selected threshold for dwelling. Our threshold of 1 second = dwelling was developed based on
how quickly users seemed to skip videos based on coder observations. Future research using this
dataset might profitably develop more user-specific thresholds for skipping or dwelling on
tweets.
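One way to operationalize the more user-specific thresholds suggested here is to scale the dwelt/skipped cutoff to each user's own dwell-time distribution. The sketch below contrasts the study's fixed 1-second rule with one such hypothetical per-user rule; the half-median cutoff is purely an assumption for illustration.

```python
from statistics import median

def dwelt_flags(dwell_times, threshold=1.0):
    """Dichotomize tweet dwell times (in seconds) into dwelt (1) vs.
    skipped (0). The default reproduces the study's fixed 1-second rule;
    passing a per-user threshold illustrates the user-specific
    alternative suggested for future research."""
    return [1 if t > threshold else 0 for t in dwell_times]

times = [0.4, 2.5, 0.9, 6.0, 1.1]            # one user's dwell times
fixed = dwelt_flags(times)                    # study's 1 s rule
adaptive = dwelt_flags(times, 0.5 * median(times))  # hypothetical per-user rule
```

A per-user rule like this would classify a fast scroller's brief-but-attentive pauses as dwelling even when they fall under the global 1-second cutoff.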
As pre-registered, tests of Hypothesis 2 also examined the dichotomous scrolled to
subsequent tweet variable. These tests revealed none of our predicted interactions between RPE
and habit strength, although some interactions were marginally significant in the predicted
direction. The significant positive effect of RPE on participants scrolling to subsequent tweets
indicates that participants tended to cease scrolling on lower-RPE tweets and did not cease on
higher-RPE tweets. This effect held regardless of habit strength. On the one hand, this result
could suggest that all users attend to the RPEs on tweets, contrary to our dwell time results. On
the other hand, given that the marginal interaction between RPE and habit strength predicting
scrolling to subsequent tweets might also be explained by a low number of actual scroll-stops per
subject (only one each), this result may become significant with more observations of scroll-stopping behavior. This would mean that more habitual scrollers are less responsive to RPE
when scrolling to subsequent tweets compared to less habitual scrollers. Even if it holds as
marginally significant or non-significant, this result may be due to nuanced ways that habitual
users end their scrolling sessions. For example, less rewarding tweets might contain scrollstopping cues for habitual users, while non-habitual users do attend to the actual reward value,
thus causing both to stop scrolling on low-RPE tweets while differing in reward sensitivity.
Future studies could analyze stopping behavior across multiple scrolling sessions to test these
possibilities.
The divergent results between the scrolled-to-subsequent-tweet dependent variable and
dwell time suggest (a) less reward-sensitive dwelling behavior and (b) no evidence of less
reward-sensitive scrolling to subsequent tweets among habitual scrollers. Together, these effects
present the possibility that habitual scrollers are habitual in dwelling on a tweet, but their scroll-stopping is more influenced by reward. Whether this scroll-terminating behavior is reward- or
cue-based is an exciting topic for future studies.
H3: Social Influence; Not Confirmed.
Finally, the results found little evidence for the third hypothesis that habitual and non-habitual scrollers would differ in their sensitivity to another indicator of rewards. In this study,
the reactions received from other users were displayed along with the tweet content, as we used
Twitter/X’s interface to create these scrolls. I found no significant effects of these counts of likes
and retweets on users’ scrolling behavior across the dwell time and scrolling-to-subsequent-tweet
dependent variables. The lack of any effect supports the idea that scrolling behavior is not
determined by user inferences about tweet quality based on the number of reactions a tweet has
received. It is difficult to say whether this effect aligns with prior work suggesting that habitual
behaviors are resistant to social influence, including normative information and mimicry, or is
merely an effect that occurs because either (a) users do not pay attention to these particular
metrics of social influence, or (b) that these counts are not good proxies for measuring a tweet’s
social influence (Mazar et al., 2023). Future research could profitably attempt to disentangle
these two explanations for the lack of impact of others’ reactions on scrolling behavior.
Ad Skipping (Exploratory Analysis)
Last, we explored whether participants’ scrolling habits affected advertisement skipping
or watching behavior. This exploratory test uncovered relationships between user age and time
spent on ads, with older participants in our study spending more time on advertisements. This
relationship could be a product of general social media experiences or differing user preferences
by age group. The relationship between age, experience with different social media, and
advertisement-watching behavior could be examined further in future studies to determine why
older users tended to spend more time on Twitter/X advertisements.
Different Behaviors, Different Types of Scrolling Habits?
Our dependent variables in this study reflected a wide array of user behaviors. These
allowed for the possibility that users’ scrolling habits might manifest in different ways through a
scroll. Support for this idea was found in our data, as our dependent variables measuring
scrolling behavior were correlated but not identical. First, the participant-level variables of
average dwell time per tweet, total tweets scrolled, and total scrolling time were all correlated,
but each measured slightly different aspects of scrolling behavior. Interpretations of our effects
are necessarily relative to which aspects of behavior are being predicted. For example, total
tweets scrolled indicates how far someone went down the scroll but not how fast they scrolled
through the tweets. Total scrolling time captures how long users spent scrolling overall but does
not account for the number of tweets scrolled during the study. Average dwell time accounts for
both variables by measuring time spent on each tweet based on the tweet-level dwell time
measure. Divergences in the speed of scrolling and quantity of tweets scrolled, as well as
between time spent scrolling and quantity of tweets scrolled, may have made effects more
challenging to detect with dependent variables for total tweets scrolled and total scrolling time.
Considering this issue, the fact that the average dwell time dependent variable is the only one to
show a reward variance effect is less surprising. The diverging results on these participant-level
measures of scrolling behavior align with the idea that users develop distinct scrolling habits.
Divergent results were also evident when scrolling behavior was assessed at the tweet
level. An interaction between habit strength and RPE was significant in predicting continuous
dwell time but not with the dichotomous measure of dwell time or scrolling to subsequent tweets.
One explanation for these divergent results is that a continuous measure of dwell time captures
more variation in behavior than dichotomous measures at the tweet level. Future research into
this topic could use a more flexible dichotomization of dwelling on vs. skipping over a tweet.
Examining more dependent variables, including time spent on ads in the scroll, also led to
interesting exploratory insights about older users that should be examined in future research. For
example, the patterns reviewed here suggest that future research or additional analysis may be
required to (a) understand the psychological mechanisms that drive scrollers’ responses to
intermittent rewards and (b) what drives scrollers to stop scrolling.
Are Scrolling Habits Functional?
Finally, the results of Study 2 indicated that habitual scrollers’ actual preferences are not
closely reflected in their dwell time on tweets. This result aligns with recent claims that
algorithms deliver content misaligned with users’ preferences precisely because the algorithms
are trained on behavioral data (revealed preferences) rather than users’ stated, actual interest in
the content (Morewedge et al., 2023). Although we played the role of the algorithm by
structuring the tweets in this study, users’ Twitter scrolling habits are built through repetition in
the same context as the one used for gathering data. Thus, it is likely that the behavior observed
here is generalizable to users’ everyday experiences viewing content. Rather than being a path to
greater expertise in content discernment, highly frequent and strongly habitual scrolling behavior
does not help users consume specific information in ways that align with their interests. This
conclusion is additionally supported by the lack of consistent correlations between (a) dwell time
and habit strength as well as (b) interest ratings and habit strength. Apparently, more habitual
scrollers are not simply reading more tweets or skipping over them (dwelling for less time than less habitual scrollers), and they are not significantly more or less interested in the tweets than less habitual scrollers.
Chapter 4: General Discussion
Across two studies of Twitter/X use, this research tested for reward insensitivity in habit
performance with two large datasets on active (posting) and passive (scrolling) social
behaviors. How rewards guide repeated behavior is a hotly debated topic among psychologists
and economists, and the present evidence supports the reward insensitivity of highly habitual
actions. Both studies are consistent with the idea that humans share learning and memory
systems with other animals and are not uniquely driven by goals (see Kruglanski & Szumowska,
2020; Wood et al., 2021). Finally, these studies suggest that both information sharing (Study 1)
and information consumption (Study 2) in the social media context are driven by reward-insensitive features of habit and, thus, may not always be performed in a goal-driven way. People
are as bedeviled by active and passive social media habits as lab rats in a Skinner box.
It is important to note that strongly habitual posters and scrollers were not just
responding to different goals than weakly habitual posters and scrollers. Prior research has
demonstrated that positive feedback on social media posts, such as the positive reactions
measured in Study 1, is a central motivator for both habitual and non-habitual posters (Anderson
& Wood, 2023). Furthermore, in Study 2, participants self-reported similar levels of interest in
the topic of the posts before the study began and in their ratings of the content they saw.
Together, these findings support the idea that habitual scrollers are responding out of habit rather
than to different goals than non-habitual scrollers.
While posting and scrolling behaviors share a common habit-learning mechanism, the
expression of each is quite different. Automatic or habitual posting behavior on Twitter/X stems
from repetition of the posting process to the point that the steps become automatic. Clicking the
create a tweet button, clicking the text box to type out your tweet, aspects of typing out tweets,
and clicking the tweet button to share the post all become part of a habitual routine. Deciding
what to post or write remains a more deliberative act, but our results suggest that the procedural
steps surrounding it are the components of a posting habit. This automatic process leads
habitual posters to post with less regard for reactions from other users. Automatic, habitual
scrolling behavior has similarly automatic components. Study 2 indicates that the act of
consuming information (dwelling on tweets) can be repeated to the point of automaticity.
Twitter/X users with strong habits looked at each tweet and automatically moved to the next,
sometimes in the space of less than one second, with little regard for their actual interest in each
tweet.
At present, research on social media, and in particular new research on social media in
psychology, has been primarily focused on the purported detrimental impact of social media use
on adolescents (Vuorre et al., 2021), on users’ well-being (Tromholt, 2016; Orben &
Przybylski, 2019; Valkenburg et al., 2022), and on misinformation spread (Pennycook & Rand,
2019; Vosoughi et al., 2018). Nearly all these literature streams focus on the outcomes of social
media use rather than the mechanisms that drive it (for exceptions, see Anderson & Wood, 2021;
Bayer et al., 2022; Anderson & Wood, 2023). Focusing on these mechanisms can explain the
concrete processes behind the outcomes studied in prior literature. In support, Study 1 and Study
2 found that two supposedly very different component behaviors of social media use are based
on the same reward-learning mechanism, specifically reward prediction errors, suggesting habit
formation theory is a broadly applicable tool for understanding social media behavior. These
tools have already been used to help explain the spread of misinformation online and might also
be applied in future research to explain issues like detrimental impacts on user well-being
(Ceylan et al., 2023). Finding that scrolling and posting are guided by a common mechanism is
not a reason to homogenize all user behavior into undifferentiated “usage,” nor to rigidly apply categories
like active and passive that may not fully capture the divergences or similarities in reward
responses and outcomes.
The naturalistic observational nature of both studies allowed us to better understand habit
performance in technology use and to verify reward insensitivity of real-world habit
performance. Study 1 employed a novel method to gather live observational user data of habit
performance using a streaming API, which enabled the identification of the precise quantities of
rewards on each tweet prior to the next one. Secondary analysis tests of historical data do not
allow for this level of precision. Instead, they treat reactions at the time of the scrape as if they
were the number of reactions a user saw before making their subsequent post. Study 2 is the first
observational study to focus on user scrolling behavior as a habitual, automatic process and the
first to observe user scrolling activity in a live setting involving access to an actual social media
platform. This procedure opens the door for future studies of social media scrolling behavior
using an observational experimental paradigm.
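The logic of this live pairing of rewards with subsequent posts can be illustrated in miniature (the class, function, and parameter names below are hypothetical; the actual pipeline used a streaming API, as described in Study 1):

```python
class LiveRewardTracker:
    """Pair each new post with the reactions its author's previous post
    had accumulated at that moment. This is the quantity that a later
    scrape of historical data cannot recover."""

    def __init__(self, fetch_reactions):
        self.fetch_reactions = fetch_reactions  # post_id -> current reaction count
        self._last_post_id = {}                 # user -> id of their most recent post
        self.records = []                       # (user, prior_post_id, rewards_before_next_post)

    def on_new_post(self, user, post_id):
        prior_id = self._last_post_id.get(user)
        if prior_id is not None:
            # Snapshot the prior post's rewards *now*, at the moment the next post arrives
            self.records.append((user, prior_id, self.fetch_reactions(prior_id)))
        self._last_post_id[user] = post_id
```

In contrast, a secondary analysis of archived tweets can only observe reaction counts as of the scrape, long after the subsequent post was made.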
Limitations and Future Directions
One limitation of this work is the number of minor deviations from the pre-registered
hypotheses. Study 1 was initially conceived as a study using raw numbers of rewards instead of
placing these within the context of the rewards typically received or reward prediction errors
(RPE). This decision caused some minor adjustments, including the exclusion of the first tweet
for each user because there was no accurate measure of expected rewards. Future studies using
RPE should consider using historical user data to calculate initial expected reward measures for
each user.
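To make this computation concrete, a fixed-learning-rate RPE can be sketched as a simple delta-rule update (the learning rate, initial expectation, and function name below are illustrative, not the exact estimator used in the analyses):

```python
def rpe_trace(rewards, alpha=0.1, initial_expectation=0.0):
    """Compute a reward prediction error (RPE) for each observed reward,
    updating a running expectation with fixed learning rate alpha."""
    expectation = initial_expectation
    rpes = []
    for reward in rewards:
        rpe = reward - expectation      # surprise relative to the current expectation
        rpes.append(rpe)
        expectation += alpha * rpe      # delta-rule update toward the observed reward
    return rpes
```

Because the first RPE depends entirely on an arbitrary initial expectation, the first observation per user is unreliable, which is why each user’s first tweet was excluded; seeding `initial_expectation` from historical data would address this.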
In Study 2, minor deviations were also required from the pre-registered hypothesis. As
discussed above in Study 2, it was impossible to successfully manipulate intermittent and
consistent rewards in the scrolls as initially planned. As a result, the analysis relied on a
continuous measure of intermittency based on participants’ tweet interest ratings (i.e., reward
variance) to test the pre-registered hypothesis. Because this measure was devised after data were
collected, it provides only an exploratory test. In addition, the findings failed to support the
hypothesis that new or occasional users would be most influenced by intermittent rewards.
Instead, the variation in rewards impacted all users roughly the same way: More reward variation
increased time dwelt on each tweet scrolled by a participant and had no impact on total tweets or
total scrolling time. Future research could test this effect with a clearer manipulation of reward
structure in order to determine whether intermittent rewards are, in fact, influential across
habitual and non-habitual users. This manipulation could be implemented by obtaining ratings of
the entire scroll of tweets in the same order that it was later presented. A procedure like this
could have yielded more accurate measures of intermittency and rewardingness.
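As a sketch of how such a continuous measure can be computed, the variance of a participant’s tweet interest ratings can stand in for intermittency, alongside their mean as overall rewardingness (the function and key names here are illustrative, not the exact variables used in the analyses):

```python
from statistics import mean, pvariance

def reward_profile(interest_ratings):
    """Summarize a participant's tweet interest ratings:
    the mean indexes overall rewardingness, while the variance
    serves as a continuous proxy for reward intermittency
    (a mix of rewarding and unrewarding tweets)."""
    return {
        "rewardingness": mean(interest_ratings),
        "intermittency": pvariance(interest_ratings),
    }
```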
A final limitation is the lack of controls for specific types of tweet content or additional
analyses based on specific tweet content features beyond tweet rewardingness (e.g., the text,
photo, or video content in the included tweets). In the future, I plan to test the specific content of
tweets in Study 1 and Study 2 by using text analysis estimates of emotionality, PRIME
information, and toxicity. Text analysis will allow me to explore whether there are other benefits
to strong scrolling habits, including ignoring or skipping toxic content, for example, and whether
posting habits can be detected within specific categories of information, as was shown for moral
outrage in prior research (Brady et al., 2023; Brady et al., 2021).
Finally, the connections made in this paper between social-psychological measures of
RPEs and Twitter/X users’ actual behavior may aid researchers in future neuroscientific studies.
For example, the social-psychological measures of RPE in this paper are likely more easily
matched to measures of neurological activity in the brain’s reward centers, often also represented
as RPE (Hackel & Amodio, 2018). Studying these measures in tandem with neurological activity
could further support the idea that these behaviors are reward- and goal-driven vs. habit-based.
Future studies could use this research to understand the neuroscience of social media posting,
information sharing behavior, scrolling behavior, and information consumption (e.g., Dore et al.,
2019; Sherman et al., 2018).
Conclusion
In finding support for the hypothesized relationship between habits and rewards in the
social media context across passive and active uses of Twitter/X, the present research adds
scrolling habits to prior literature on posting habits. Consistent with the results of Study 2, a prior study
also indicated that strong posting habits may result in behavior that does not align precisely with
users’ own interests (Anderson & Wood, 2023). Future research might connect these findings to
existing work on passive and active use by testing how this misalignment impacts habitual
posters and scrollers’ well-being.
The results of these studies also provide multiple directions for future research on
behavioral interventions in social media research. In particular, the introductory chapter
discusses the difference between interventions that account for the development of habits in
online social behavior and those that presume users are solely goal-directed. Through their
demonstrations of habit-based reward insensitivity, these two studies provide substantial
evidence that mitigating potentially adverse outcomes (such as scrolling in ways not aligned with
users’ preferences) of habitual scrolling and posting behavior must include habit-based
interventions. Existing research has shown that these include alterations to platform design or
reward systems (Anderson & Wood, 2021; Anderson & Wood, 2023; Ceylan et al., 2023). To impact
all users, these structural (s-frame) alterations designed to impact habitual users must be paired
with more motivation- and goal-based interventions (i-frame) in order to realign users’ behavior
with their interests (Chater & Loewenstein, 2022; Morewedge et al., 2023).
References
Aalbers, G., McNally, R. J., Heeren, A., de Wit, S., & Fried, E. I. (2019). Social media and
depression symptoms: A network perspective. Journal of Experimental Psychology:
General, 148(8), 1454–1462. https://doi.org/10.1037/xge0000528
Anderson, I. A., & Wood, W. (2021). Habits and the electronic herd: The psychology behind
social media’s successes and failures. Consumer Psychology Review, 4(1), 83-99.
https://doi.org/10.1002/arcp.1063
Anderson, I. A., & Wood, W. (2023). Social motivations’ limited influence on habitual behavior:
Tests from social media engagement. Motivation Science, 9(2), 107-119.
https://psycnet.apa.org/doi/10.1037/mot0000292
Anselme, P., & Güntürkün, O. (2018). How foraging works: Uncertainty magnifies food-seeking
motivation. Behavioral and Brain Sciences, 42(35).
https://doi.org/10.1017/S0140525X18000948
Amodio, D. M., & Ratner, K.G. (2011). A memory systems model of implicit social cognition.
Current Directions in Psychological Science, 20(3), 143–148.
https://doi.org/10.1177/0963721411408562
Arad, A., Gneezy, U., & Mograbi, E. (2023). Intermittent incentives to encourage exercising in
the long run. Journal of Economic Behavior & Organization, 205, 560-573.
https://doi.org/10.1016/j.jebo.2022.11.015
Atari, M., Davani, A. M., Kogon, D., Kennedy, B., Ani Saxena, N., Anderson, I., & Dehghani,
M. (2022). Morally homogeneous networks and radicalism. Social Psychological and
Personality Science, 13(6), 999-1009. https://doi.org/10.1177/19485506211059329
Barasch, A., & Berger, J. (2014). Broadcasting and narrowcasting: How audience size affects
what people share. Journal of Marketing Research, 51(3), 286-299.
https://doi.org/10.1509/jmr.13.0238
Bayer, J., Ellison, N., Schoenebeck, S., Brady, E., & Falk, E. B. (2018). Facebook in context(s):
Measuring emotional responses across time and space. New Media and Society, 20(3),
1047–1067. https://doi.org/10.1177/1461444816681522
Bayer, J. B., & LaRose, R. (2018). Technology habits: Progress, problems, and prospects. The
Psychology of Habit: Theory, Mechanisms, Change, and Contexts, 111–130.
https://doi.org/10.1007/978-3-319-97529-0_7
Bayer, J. B., Anderson, I. A., & Tokunaga, R. S. (2022). Building and breaking social media
habits. Current Opinion in Psychology, 45, 101-103.
https://doi.org/10.1016/j.copsyc.2022.101303
Berger, J. (2013). Beyond viral: Interpersonal communication in the internet age. Psychological
Inquiry, 24(4), 293-296. https://doi.org/10.1080/1047840X.2013.842203
Bouton, M. E. (2021). Context, attention, and the switch between habit and goal-direction in
behavior. Learning & Behavior, 49, 349–362. https://doi.org/10.3758/s13420-021-00488-z
Brady, W. J., Wills, J. A., Jost, J. T., Tucker, J. A., Van Bavel, J. J., & Fiske, S. T. (2017).
Emotion shapes the diffusion of moralized content in social networks. Proceedings of the
National Academy of Sciences of the United States of America, 114(28), 7313–7318.
https://doi.org/10.1073/pnas.1618923114
Brady, W. J., McLoughlin, K., Doan, T. N., & Crockett, M. (2021). How social learning
amplifies moral outrage expression in online social networks. Science Advances, 7(33).
https://doi.org/10.1126/sciadv.abe5641
Buchanan, K., Aknin, L. B., Lotun, S., & Sandstrom, G. M. (2021). Brief exposure to social
media during the COVID-19 pandemic: Doom-scrolling has negative emotional
consequences, but kindness-scrolling does not. PLOS One, 16(10), e0257728.
Buechel, E., & Berger, J. (2012). Facebook therapy? Why people share self-relevant content
online. Advances in Consumer Research, 40, 203. http://dx.doi.org/10.2139/ssrn.2013148
Burton, J.W., Cruz, N. & Hahn, U. (2021) Reconsidering evidence of moral contagion in online
social networks. Nature Human Behavior. https://doi.org/10.1038/s41562-021-01133-5
Camerer, C. F., & Li, X. (2021). Neural autopilot and context-sensitivity of habits. Current
Opinion in Behavioral Sciences, 41, 185-190.
https://doi.org/10.1016/j.cobeha.2021.07.002
Ceceli, A. O., & Tricomi, E. (2018). Habits and goals: a motivational perspective on action
control. Current Opinion in Behavioral Sciences, 20, 110-116.
https://doi.org/10.1016/j.cobeha.2017.12.005
Chan, E., & Briers, B. (2019). It’s the end of the competition: When social comparison is not
always motivating for goal achievement. Journal of Consumer Research, 46(2), 351–370.
https://doi.org/10.1093/jcr/ucy07
Chater, N., & Loewenstein, G. (2022). The i-frame and the s-frame: How focusing on individual-level
solutions has led behavioral public policy astray. Behavioral and Brain Sciences.
https://doi.org/10.1017&am
Cooper, A. J., Duke, É., Pickering, A. D., & Smillie, L. D. (2014). Individual differences in
reward prediction error: contrasting relations between feedback-related negativity and
trait measures of reward sensitivity, impulsivity and extraversion. Frontiers in human
neuroscience, 8, 248. https://doi.org/10.3389/fnhum.2014.00248
DeRusso, A. L., Fan, D., Gupta, J., Shelest, O., Costa, R. M., & Yin, H. H. (2010). Instrumental
uncertainty as a determinant of behavior under interval schedules of reinforcement.
Frontiers in Integrative Neuroscience, 4(17), 1–8.
https://doi.org/10.3389/fnint.2010.00017
Deters, F. große, & Mehl, M. R. (2013). Does Posting Facebook Status Updates Increase or
Decrease Loneliness? An Online Social Networking Experiment. Social Psychological
and Personality Science, 4(5), 579–586. https://doi.org/10.1177/1948550612469233
De Houwer, J., Tanaka, A., Moors, A., & Tibboel, H. (2018). Kicking the habit: Why evidence
for habits in humans might be overestimated. Motivation Science, 4(1), 50.
https://doi.org/10.1037/mot0000065
de Wit, S., Kindt, M., Knot, S. L., Verhoeven, A. A., Robbins, T. W., Gasull-Camos, J., &
Gillan, C. M. (2018). Shifting the balance between goals and habits: Five failures in
experimental habit induction. Journal of Experimental Psychology: General, 147(7),
1043-1065. http://dx.doi.org/10.1037/xge0000402
Doré, B. P., Scholz, C., Baek, E. C., Garcia, J. O., O’Donnell, M. B., Bassett, D. S., Falk, E. B.
(2019). Brain Activity Tracks Population Information Sharing by Capturing Consensus
Judgments of Value. Cerebral Cortex, 29(7), 3102–3110.
https://doi.org/10.1093/cercor/bhy176
Gardner, B., Abraham, C., Lally, P., & de Bruijn, G. J. (2012). Towards parsimony in habit
measurement: Testing the convergent and predictive validity of an automaticity subscale
of the Self-Report Habit Index. International Journal of Behavioral Nutrition and
Physical Activity, 9(1), 1-12. https://doi-org.libproxy1.usc.edu/10.1186/1479-5868-9-102
Garrison, J., Erdeniz, B., & Done, J. (2013). Prediction error in reinforcement learning: A meta-analysis of neuroimaging studies. Neuroscience & Biobehavioral Reviews, 37(7), 1297-
1310. https://doi.org/10.1016/j.neubiorev.2013.03.023
Green, P., & MacLeod, C. J. (2016). SIMR: an R package for power analysis of generalized
linear mixed models by simulation. Methods in Ecology and Evolution, 7(4), 493-498.
https://doi.org/10.1111/2041-210X.12504
Hackel, L. M., & Amodio, D. M. (2018). Computational neuroscience approaches to social
cognition. Current Opinion in Psychology, 24, 92–97.
https://doi.org/10.1016/j.copsyc.2018.09.001
Hardwick, R.M., Forrence, A.D., Krakauer, J.W. et al. (2019). Time-dependent competition
between goal-directed and habitual response preparation. Nature Human Behavior, 3,
1252–1262. https://doi.org/10.1038/s41562-019-0725-0
Hu, T., Stafford, T. F., Kettinger, W. J., Zhang, X. P., & Dai, H. (2018). Formation and effect of
social media usage habit. Journal of Computer Information Systems, 58(4), 334-343.
https://doi.org/10.1080/08874417.2016.1261378
James, W. (1890). The Principles of Psychology. New York: H. Holt.
Knowles, T. (April 27, 2019). “I’m so sorry, says the creator of endless online scrolling”. The
Times UK. https://www.thetimes.co.uk/article/i-m-so-sorry-says-inventor-of-endless-online-scrolling-9lrv59mdk. Retrieved April 28, 2021.
Kolling, N., & Akam, T. (2017). (Reinforcement?) Learning to forage optimally. Current
Opinion in Neurobiology, 46, 162–169. https://doi.org/10.1016/j.conb.2017.08.008
Kruglanski, A. W., & Szumowska, E. (2020). Habitual behavior is goal-driven. Perspectives on
Psychological Science, 15(5), 1256-1271. https://doi.org/10.1177/1745691620917676
Levitas, D. (2019). Always Connected: How Smartphones Keep Us Engaged. IDC Research
Report, sponsored by Facebook. Accessed Nov 13, 2021. https://www.nu.nl/files/IDCFacebook%20Always%20Connected%20(1).pdf
Lindström, B., Bellander, M., Schultner, D.T. et al. (2021). A computational reward learning
account of social media engagement. Nature Communication, 12, 1311
https://doi.org/10.1038/s41467-020-19607-x
Liu, Q., Shao, Z., & Fan, W. (2018). The impact of users’ sense of belonging on social media
habit formation: Empirical evidence from social networking and microblogging websites
in China. International Journal of Information Management, 43, 209-223.
https://doi.org/10.1016/j.ijinfomgt.2018.08.005
Maki, A., Burns, R. J., Ha, L., & Rothman, A. J. (2016). Paying people to protect the
environment: A meta-analysis of financial incentive interventions to promote pro-environmental behaviors. Journal of Environmental Psychology, 47, 242-255.
https://psycnet.apa.org/doi/10.1016/j.jenvp.2016.07.006
Mazar, A., Itzchakov, G., Lieberman, A., & Wood, W. (2023). The unintentional nonconformist:
Habits promote resistance to social influence. Personality and Social Psychology
Bulletin, 49(7), 1058-1070. https://doi.org/10.1177/01461672221086177
Morewedge, C. K., Mullainathan, S., Naushan, H. F., et al. (2023). Human bias in algorithm
design. Nature Human Behaviour. https://doi.org/10.1038/s41562-023-01724-4
Motyl, M. (2023). How Have Social Media Experiences Changed from March-September 2023?
Psychology of Technology Institute (Substack). https://psychoftech.substack.com/p/how-have-social-media-experiences. Retrieved November 25, 2023.
Müller, P., Schneiders, P., & Schäfer, S. (2016). Appetizer or main dish? Explaining the use of
Facebook news posts as a substitute for other news sources. Computers in Human
Behavior, 65, 431–441. https://doi.org/10.1016/J.CHB.2016.09.003
Murty, V. P., LaBar, K. S., & Adcock, R. A. (2016). Distinct medial temporal networks encode
surprise during motivation by reward versus punishment. Neurobiology of learning and
memory, 134 Pt A(Pt A), 55–64. https://doi.org/10.1016/j.nlm.2016.01.018
Narayanan, A. (April 10, 2023). “Twitter Showed Us Its Algorithm, What Does It Tell Us?”
Knight First Amendment Institute at Columbia University.
https://knightcolumbia.org/blog/twitter-showed-us-its-algorithm-what-does-it-tell-us.
Retrieved November 27, 2023.
Orben, A., & Przybylski, A. K. (2019). The association between adolescent well-being and
digital technology use. Nature Human Behaviour, 3(2), 173-182.
https://doi.org/10.1038/s41562-018-0506-1
Otto, A. R., Fleming, S. M., & Glimcher, P. W. (2016). Unexpected but incidental positive
outcomes predict real-world gambling. Psychological Science, 27(3), 299-311.
https://doi.org/10.1177/0956797615618366
Parry, D. A., Davidson, B. I., Sewall, C. J., Fisher, J. T., Mieczkowski, H., & Quintana, D. S.
(2021). A systematic review and meta-analysis of discrepancies between logged and self-reported digital media use. Nature Human Behaviour, 5(11), 1535-1547.
https://doi.org/10.1038/s41562-021-01117-5
Pennycook, G., & Rand, D. G. (2019). The Cognitive Science of Fake News. Preprint.
https://doi.org/10.7551/mitpress/9218.001.0001
Peters, H., Liu, Y., Barbieri, F., Baten, R. A., Matz, S. C., & Bos, M. W. (2023). Context-Aware
Prediction of User Engagement on Online Social Platforms. arXiv preprint.
arXiv:2310.14533.
Pew Research Center. (January 12, 2021). Percentage of adults in the United States who use
selected social networks as of September 2020 [Graph]. In Statista. Retrieved November
13, 2021, from https://www-statista-com.libproxy2.usc.edu/statistics/246230/share-of-us-internet-users-who-use-selected-social-networks/
Perez, O. D., & Dickinson, A. (2020). A theory of actions and habits: The interaction of rate
correlation and contiguity systems in free-operant behavior. Psychological Review,
127(6), 945-971. https://doi.org/10.1037/rev0000201
Schnauber-Stockmann, A. & Naab, T. K. (2019) The process of forming a mobile media habit:
Results of a longitudinal study in a real-world setting. Media Psychology, 22(5), 714-742,
https://doi.org/10.1080/15213269.2018.1513850
Schultz, W. (1998). Predictive reward signal of dopamine neurons. Journal of Neurophysiology,
80(1), 1-27. https://doi.org/10.1152/jn.1998.80.1.1
Sherman, L. E., Payton, A. A., Hernandez, L. M., Greenfield, P. M., & Dapretto, M. (2016). The
Power of the Like in Adolescence: Effects of Peer Influence on Neural and Behavioral
Responses to Social Media. Psychological Science. 27(7), 1027–1035.
https://doi.org/10.1177/0956797616645673
Sherman, L.E., Hernandez, L.M., Greenfield, P.M., Dapretto, M. (2018) What the brain ‘Likes’:
neural correlates of providing feedback on social media, Social Cognitive and Affective
Neuroscience, 13(7), 699–707, https://doi.org/10.1093/scan/nsy051
Skinner, B. F. (1953). Some contributions of an experimental analysis of behavior to psychology
as a whole. American Psychologist, 8(2), 69. https://doi.org/10.1037/h0054118
Sun, N., Rau, P. P. L., & Ma, L. (2014). Understanding lurkers in online communities: A
literature review. Computers in Human Behavior, 38, 110-117.
https://doi.org/10.1016/j.chb.2014.05.022
Tromholt, M. (2016). The Facebook Experiment: Quitting Facebook Leads to Higher Levels of
Well-Being. Cyberpsychology, Behavior, and Social Networking.
https://doi.org/10.1089/cyber.2016.0259
Tamir, D. I., & Mitchell, J. P. (2012). Disclosing information about the self is intrinsically
rewarding. PNAS Proceedings of the National Academy of Sciences of the United States
of America, 109(21), 8038-8043. https://doi.org/10.1073/pnas.1202129109
TechCrunch. (September 20, 2021). Share of social media users who regularly get news from
selected social media sites in the United States in 2020 and 2021 [Graph]. In Statista.
Retrieved November 13, 2021, from https://www-statista-com.libproxy2.usc.edu/statistics/330638/politics-governement-news-social-media-news-usa/
Valkenburg, P. M., van Driel, I. I., & Beyens, I. (2022). The associations of active and passive
social media use with well-being: A critical scoping review. New Media & Society, 24(2),
530-549. https://doi-org.libproxy2.usc.edu/10.1177/14614448211065425
Verduyn, P., Lee, D. S., Park, J., Shablack, H., Orvell, A., Bayer, J., Kross, E. (2015). Passive
Facebook usage undermines affective well-being: Experimental and longitudinal
evidence. Journal of Experimental Psychology: General, 144(2), 480–488.
https://doi.org/10.1037/xge0000057
Verduyn, P., Ybarra, O., Leuven, K., Jonides, J., & Kross, E. (2017). Do Social Network Sites
Enhance or Undermine Subjective Well-Being? Social Issues and Policy Review, 11(1),
274–302. https://doi.org/10.1111/sipr.12033
Verduyn P, Gugushvili N, Massar K, et al. (2020) Social comparison on social networking sites.
Current Opinion in Psychology 36: 32–37. https://doi.org/10.1016/j.copsyc.2020.04.002
Verplanken, B., & Orbell, S. (2003). Reflections on past behavior: a self‐report index of habit
strength. Journal of Applied Social Psychology, 33(6), 1313-1330.
Verplanken, B., & Orbell, S. (2022). Attitudes, habits and behavior change. Annual Review of
Psychology, 73. https://doi.org/10.1146/annurev-psych-020821-011744
Vishwanath, A. (2015). Habitual Facebook use and its impact on getting deceived on social
media. Journal of Computer-Mediated Communication, 20(1), 83-98.
https://doi.org/10.1111/jcc4.12100
Vishwanath, A. (2016). Mobile device affordance: Explicating how smartphones influence the
outcome of phishing attacks. Computers in Human Behavior, 63, 198–207. https://doi.
org/10.1016/j.chb.2016.05.035
Vishwanath, A. (2017). Getting phished on social media. Decision Support Systems, Volume
103, 70-81. https://doi.org/10.1016/j.dss.2017.09.004
Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science,
359(6380), 1146–1151. https://doi.org/10.1126/science.aap9559
Vorderer, P., & Kohring, M. (2013). Permanently online: A challenge for media and
communication research. International Journal of Communication, 7(1), 188–196.
http://doi.org/1932-8086/2013FEA0002.
Vuorre, M., Orben, A., & Przybylski, A. K. (2021). There is no evidence that associations
between adolescents’ digital technology engagement and mental health problems have
increased. Clinical Psychological Science, 9(5), 823-835.
https://doi.org/10.1177/2167702621994549
Wood, W., & Neal, D. T. (2007). A new look at habits and the habit-goal interface.
Psychological Review, 114(4), 843–863. https://doi.org/10.1037/0033-295X.114.4.843
Wood, W., & Rünger, D. T. (2016). Psychology of habit. Annual Review of Psychology, 67,
289–314. https://doi.org/10.1146/annurev-psych-122414-033417
Wood, W. (2017). Habit in Personality and Social Psychology. Personality and Social
Psychology Review, 21(4), 389–403. https://doi.org/10.1177/1088868317720362
Wood, W., Mazar, A., & Neal, D. T. (2021). Habits and goals in human behavior: Separate but
interacting systems. Perspectives on Psychological Science. Advance online publication.
https://doi.org/10.1177/1745691621994226
Appendices
List of Tables
Table A-1: Means, Standard Deviations, and Correlations, Dataset
Excluding Retweets: Study 1………………………………………………………………….…82
Table A-2: Multilevel Model of Positive Reactions and Prior avg. Daily Tweets
Predicting Between-Tweet Latency: Study 1..…………………………………………….…….83
Table A-3: Multilevel Model of Fixed Learning Rate RPE Reactions and
Prior avg. Daily Tweets Predicting Between-Tweet Latency: Study 1..………………….…..…83
Table A-4: Multilevel Model Predicting Latency to Post Again from Prior
Tweeting Frequency and RPE Reactions in Dataset Excluding Direct
Retweets: Study 1.…………..……………………………………………………………….…..84
Table B-1: Study 2 ICCs, Final Coding Round.……...…………………...…..…………...……87
Table B-2: Simple Linear Regression Model Predicting Average Dwell
Time from Reward Variance and Prior Scrolling Frequency, Controlling for
Scroll Indicator: Study 2.……..………………………………………………………....…….....90
Table B-3: Simple Linear Regression Model Predicting Average Dwell Time from
Reward Variance and SRBAI, Controlling for Scroll Indicator. Dataset Excluding
Dwell Time Outliers: Study 2………………………………………………………………...…90
Table B-4: Simple Linear Regression Model Predicting Average Dwell Time from
Reward Variance and Prior Scrolling Frequency, Controlling for Scroll Indicator.
Dataset Excluding Dwell Time Outliers: Study 2. .…………………………………...………...91
Table B-5: Simple Linear Regression Model Predicting Total Scrolling Time
from Reward Variance and Prior Scrolling Frequency, Controlling for Scroll Indicator...….….92
Table B-6: Multilevel Model Predicting Average Dwell Time from Reward Variance
and SRBAI, Controlling for Scroll Indicator: Study 2………………………...……………..…93
Table B-7: Multilevel Model Predicting Tweet Dwell Time from Prior Scrolling
Frequency and Reward Prediction Errors, Controlling for Scroll Indicator, Data
Excluding Dwell Time Outliers: Study 2.…………………………………….………………..…..95
Table B-8: Multilevel Model Predicting Tweet Dwell Time from Prior Scrolling
Frequency and Reward Prediction Errors, Controlling for Scroll Indicator, Gender,
and Age: Study 2…………………………………………………………………………………95
Table B-9: Multilevel Model Predicting Tweet Dwell Time from Prior Scrolling
Frequency and Reward Prediction Errors, Controlling for Scroll Indicator, Age,
and Gender, Data Excluding Dwell Time Outliers: Study 2.………………………...……..….. 96
Table B-10: Multilevel Model Predicting Tweet Dwell Time from SRBAI and
Reward Prediction Errors, Controlling for Scroll Indicator, Age, and Gender, Data
Excluding Dwell Time Outliers: Study 2. ………………...……………………………………..97
Table B-11: Multilevel Model Predicting Tweet Dwell Time from Prior Scrolling
Frequency and Fixed Learning Rate Reward Prediction Errors, Controlling for
Scroll Indicator, Age, and Gender, Data Excluding Dwell Time Outliers: Study 2. ……..…......98
Table B-12: Multilevel Model Predicting Tweet Dwell Time from SRBAI and Fixed
Learning Rate Reward Prediction Errors, Controlling for Scroll Indicator, Age, and
Gender, Data Excluding Dwell Time Outliers: Study 2. ……………………….………….…...99
Table B-13: Multilevel Model Predicting Tweet Dwell Time from Prior Scrolling
Frequency and Fixed Learning Rate Reward Prediction Errors, Controlling for Scroll
Indicator: Study 2…………………………………………………………………………….....100
Table B-14: Generalized Linear Multilevel Model (GLMM) Binomial Logistic
Regression Predicting Dwelling on a Tweet from Prior Posting Frequency and
Reward Prediction Errors, Controlling for Scroll Indicator: Study 2.………………………….101
Table B-15: Multilevel Model Predicting Time Spent on Advertisements from
Prior Scrolling Frequency and Rate Reward Prediction Errors, Controlling for Scroll
Indicator and Age: Study 2..……………………………………………………………………102
List of Figures
Figure A-1: Power Analysis Plot for Study 1……………………...……………………...........81
Figure B-1: Power Analysis Plot for Study 2……………………………………………….….87
Appendix A: Supplementary Materials for Study 1
Power Analysis:
Each step in a power simulation involves (a) generating a distribution with pre-specified
population effect sizes, (b) fitting a model to that distribution, and (c) observing whether the
model correctly detected the effect. This process is repeated for many iterations, and power is
calculated as the proportion of steps in which a significant effect was detected. If an effect was
detected as significant in 80 out of 100 simulations, this indicates .80 power.
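The generate-fit-count loop above can be sketched in miniature as follows. The actual analyses used mixed-model simulations (via the SIMR package in R), so this toy version, which substitutes a simple linear regression with an approximate two-tailed test, only illustrates the simulation logic:

```python
import random
import statistics

def simulate_power(effect=0.15, n=200, sims=500, seed=1):
    """Estimate power by simulation: generate data with a known slope,
    test the slope, and return the proportion of significant results."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(sims):
        # (a) generate a distribution with the pre-specified effect size
        x = [rng.gauss(0, 1) for _ in range(n)]
        y = [effect * xi + rng.gauss(0, 1) for xi in x]
        # (b) fit a simple regression to the simulated data
        mx, my = statistics.mean(x), statistics.mean(y)
        sxx = sum((xi - mx) ** 2 for xi in x)
        slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
        resid = [(yi - my) - slope * (xi - mx) for xi, yi in zip(x, y)]
        se = (sum(r * r for r in resid) / (n - 2) / sxx) ** 0.5
        # (c) record whether the effect was detected (alpha = .05, two-tailed)
        if abs(slope / se) > 1.96:
            hits += 1
    return hits / sims
```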
To estimate power for Study 1, I ran 10,000 power simulations using fixed effects
(betas) of .15, .15, and .15 for the main effect of prior post reactions, the main effect of habit
strength, and the interaction between habit strength and prior post reactions, respectively. I
examined participant N’s ranging from 50-500 and total post N’s from ~500-15,000, as estimated
based on pre-test data. See Fig. A-1 for a plot of power estimates.
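The generate-fit-test loop described above can be sketched in simplified form. The actual simulations modeled multilevel data with an interaction term; the sketch below is a single-predictor ordinary-least-squares illustration (written in Python for portability; the effect size, noise SD, sample sizes, and the 1.96 critical value are illustrative assumptions, not the dissertation's exact simulation code):

```python
import math
import random

def simulate_power(beta=0.15, n=500, n_sims=300, t_crit=1.96, seed=1):
    """Estimate power as the proportion of simulated datasets in which
    the true slope is detected as significant (|t| > critical value)."""
    random.seed(seed)
    detected = 0
    for _ in range(n_sims):
        # (a) generate data with a pre-specified population effect size
        x = [random.gauss(0, 1) for _ in range(n)]
        y = [beta * xi + random.gauss(0, 1) for xi in x]
        # (b) fit a model: OLS slope and its standard error
        mx, my = sum(x) / n, sum(y) / n
        sxx = sum((xi - mx) ** 2 for xi in x)
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        slope = sxy / sxx
        resid_ss = sum((yi - my - slope * (xi - mx)) ** 2
                       for xi, yi in zip(x, y))
        se = math.sqrt(resid_ss / (n - 2) / sxx)
        # (c) record whether the effect was detected as significant
        if abs(slope / se) > t_crit:
            detected += 1
    return detected / n_sims

power = simulate_power()  # proportion of significant fits
```

With a true beta of .15 and n = 500, most iterations recover a significant slope; with beta set to 0, the detection rate falls to roughly the nominal alpha level.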
Figure A-1
Power Analysis Plot for Study 1
Table A-1
Means, Standard Deviations, and Correlations, Dataset Excluding Retweets: Study 1.

Variable                                      M       SD      1        2        3        4        5
1. Latency                                   18.32   24.96
2. Likes (prior tweet)                        0.61    3.35   -.08
3. Retweets (prior tweet)                     0.12    0.93   -.06     .96***
4. Total positive reactions (prior tweet)     0.73    4.42   -.08     .99***   .97***
5. Prior tweeting frequency (tweets/day)      8.00    7.70   -.39***  .20**    .24***   .21***
6. RPE reactions                             -0.51    3.24    .08    -.86***  -.75***  -.85***  -.17**
Note. Higher numbers reflect longer latencies to post again (in hours), average number of
positive reactions (likes and direct retweets combined), and average numbers of likes and
retweets on the prior post, as well as higher prior tweeting frequency (number of daily tweets in
the past month). All means and standard deviations are non-standardized values. Correlations
were computed using standardized values of each measure (mean-centered and divided by the
dataset standard deviation), except latency, which was converted to log hours. *p < .05.
**p < .01. ***p < .001.
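The transformations described in the note can be made concrete with a short sketch (the variable names and values are hypothetical placeholders, not study data): every measure except latency is mean-centered and divided by its standard deviation, while latency is converted to log hours.

```python
import math
import statistics

def standardize(values):
    """Mean-center and divide by the standard deviation."""
    m = statistics.mean(values)
    sd = statistics.pstdev(values)
    return [(v - m) / sd for v in values]

likes = [0, 1, 5, 0, 2]           # hypothetical per-tweet like counts
latency_hours = [2.0, 18.0, 0.5]  # hypothetical between-post latencies

z_likes = standardize(likes)                        # standardized for correlations
log_latency = [math.log(h) for h in latency_hours]  # latency kept as log hours
```

After standardizing, each measure has mean 0 and standard deviation 1, so the correlations are scale-free; the log transform compresses the long right tail of the latency distribution.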
Table A-2
Multilevel Model of Positive Reactions and Prior avg. Daily Tweets Predicting Between-Tweet
Latency: Study 1.
Independent variable                             df          Beta    p      95% CI
Intercept                                           224.90    .28     .01    .07, .50
Positive reactions                               24,840.00    .01     .01    .005, .02
Prior tweeting frequency                            197.70   -.13    <.001  -.15, -.11
Positive reactions x Prior tweeting frequency    24,830.00   -.005    .007  -.007, -.003
Note. Estimates are the unstandardized coefficients (Beta) of the terms in the multilevel model
above. The dependent variable is between-post latency. Positive reactions are raw counts of likes
and retweets. Past tweeting rate is a participant-level variable measured for the month before the
study. The interaction term is the interaction between prior tweeting frequency and RPE.
Degrees of freedom are calculated using the Satterthwaite method for multilevel models.
Table A-3
Multilevel Model of Fixed Learning Rate RPE Reactions and Prior avg. Daily Tweets Predicting
Between-Tweet Latency: Study 1.
Independent variable                                          df          β      p      95% CI
Intercept                                                        225.85   -.03    .47   -.10, .05
Fixed learning rate RPE reactions                             13,611.84    .02    .10   -.005, .04
Prior tweeting frequency                                         198.09   -.49   <.001  -.55, -.42
Fixed learning rate RPE reactions x Prior tweeting frequency  16,263.66   -.02    .01   -.03, -.01
Note. Estimates are the standardized coefficients (β) of the terms in the multilevel model above.
The dependent variable is between-post latency. RPE positive reactions are unstandardized
differences between expected and experienced rewards with a fixed learning rate of .1 (Otto et
al., 2016). Past tweeting rate is a participant-level, dataset mean-centered variable measured for
the month before the study. The interaction term is the interaction between prior tweeting
frequency and RPE. Degrees of freedom are calculated using the Satterthwaite method for
multilevel models.
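The fixed-learning-rate RPE described in the note follows the standard delta rule: the running expectation moves toward each observed reward by a fraction α = .1 of the prediction error, and the RPE on each trial is the gap between the observed and expected reward. A minimal sketch (the reward values and initial expectation are hypothetical):

```python
def fixed_rate_rpes(rewards, alpha=0.1, initial_expectation=0.0):
    """Delta-rule reward prediction errors with a fixed learning rate.
    RPE_t = reward_t - expectation_t; the expectation is then nudged
    toward the observed reward by a fraction alpha of that error."""
    expectation = initial_expectation
    rpes = []
    for r in rewards:
        error = r - expectation
        rpes.append(error)
        expectation += alpha * error
    return rpes

# e.g., positive reactions on three consecutive posts (hypothetical)
rpes = fixed_rate_rpes([10, 0, 10])
```

Because α is small, expectations adjust slowly, so a streak of unexpectedly large rewards keeps producing large positive RPEs for several posts.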
Table A-4
Multilevel Model Predicting Latency to Post Again From Prior Tweeting Frequency and RPE
Reactions in Dataset Excluding Direct Retweets: Study 1.
Independent variable                        df          β     p       95% CI
Intercept                                      185.17    .14  <.001    .06, .23
RPE reactions                               13,676.57    .02   .11    -.005, .04
Prior tweeting frequency                       156.80   -.37  <.001   -.44, -.29
Prior tweeting frequency x RPE reactions    13,654.41   -.01   .03    -.03, -.005
Note. Estimates are the standardized coefficients (β) of the terms in the multilevel model above.
The dependent variable is between-post latency. RPE positive reactions are unstandardized
differences between expected and experienced rewards. Past tweeting rate is a participant-level,
dataset mean-centered variable measured for the month before the study. The interaction term is
the interaction between prior tweeting frequency and RPE. Degrees of freedom are calculated
using the Satterthwaite method for multilevel models.
Appendix B: Supplementary Materials for Study 2
Summary of Deviation from the Pre-Registration
The preregistered hypothesis (AsPredicted #119321) was that non-habitual users would be
more motivated to scroll (as measured by overall time scrolling and total quantity of posts
scrolled) when scrolls involve intermittent, interval-based (appearing after a similar average time
of scrolling) rewards, in which very high rewards are interspersed among less-rewarding posts,
than when scrolls contain consistently rewarding posts. For habitual users, I hypothesized that
the reward schedule of posts would have less effect on their scrolling behavior in terms of overall
scrolling time and number of tweets scrolled. This would be reflected in reduced or no difference
between their scrolling time in the intermittent vs. consistent reward schedule conditions.
The major deviation from the pre-registration was that it was not possible to create these
conditions, at least based on the tweets that were collected and pre-rated. The
standard deviations of each scroll converged during all iterations of the mean-equalization
process for the two topics. Thus, we chose to run functionally equivalent scrolls with different
tweets in the observational portion of the study and used participants’ ratings of the tweets as a
measure of intermittency (reward variance) to test our hypothesis about intermittent rewards.
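Under this revised approach, intermittency is operationalized as the variance of each participant's own reward (interest) ratings across the tweets they scrolled. A minimal sketch (the ratings are hypothetical 1-7 interest scores, not study data):

```python
import statistics

def reward_variance(ratings):
    """Per-participant reward intermittency: the sample variance of that
    participant's own interest ratings across the tweets they scrolled."""
    return statistics.variance(ratings)

consistent = [4, 4, 5, 4, 5, 4]    # similarly rewarding posts throughout
intermittent = [1, 7, 2, 6, 1, 7]  # high rewards interspersed with low
```

A participant whose scroll mixed very high and very low rewards shows a much larger reward variance than one whose scroll was uniformly rewarding, which is the contrast the preregistered conditions were meant to create.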
Power Analysis
To estimate power for Study 2, we ran 10,000 power simulations using fixed
effects (betas) of .15, .15, and .15 for the main effect of prior post reactions, the main effect of
habit strength, and the interaction between habit strength and prior post reactions, respectively.
We examined participant Ns ranging from 20 to 500 and total posts scrolled per participant from
10 to 40. See Figure B-1 for a plot of power estimates.
Figure B-1
Power Analysis Plot for Study 2
Note. The plot represents the power curve for a set of participants scrolling for 30 tweets on
average, which is the midpoint between the required number of tweets scrolled (10), and the
maximum number possibly scrolled (50). The X-axis represents the total N (number of
participants) required to achieve the power (indicated on the Y-axis).
ICCs
Table B-1
Study 2 ICCs, Final Coding Round.
Variable name                                        Coding rounds   Level         ICC or Kappa   Arbitration round?*   Value   95% CI (ICC only)
Total tweets scrolled                                2               Participant   ICC            No                    .93     .91, .94
Total tweets scrolled (Round 3, "mismatches" only)   3               Participant   ICC            No                    .78     .68, .85
Total scrolling time                                 2               Participant   ICC            No                    .89     .87, .91
Dwell time                                           3               Tweet         ICC            Yes                   .69     .67, .71
Ad time                                              3               Tweet         ICC            Yes                   .60     .53, .66
* "Yes" means these are ICCs of non-arbitration-round variables.
Note. Variables are listed from high reliability (High ICC/Kappa) to low reliability (Low
ICC/Kappa). Coding rounds indicate how many coders rated the data. Total tweets scrolled were
also included in a third round for part of the data. Participant video variables not included here
were calculated outside the coding round (scrolled to subsequent tweet, dwelt on tweet, and ad
skipped were calculated in R later based on the codings of other variables) or were not coded
(habit strength measures).
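The ICCs above can be derived from a two-way decomposition of the coder-by-target rating matrix. A minimal two-way random-effects, absolute-agreement, single-measure ICC(2,1) sketch is shown below (the example ratings are hypothetical; the published reliabilities were computed on the actual coding data, and the choice of ICC form here is an illustrative assumption):

```python
def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single
    rater, computed from an n-targets x k-raters matrix of scores."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(row[j] for row in ratings) / n for j in range(k)]
    # Sums of squares for targets (rows), raters (columns), and error
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two coders' dwell-time codings for four tweets (hypothetical seconds)
icc = icc_2_1([[3.0, 3.2], [5.1, 5.0], [2.0, 2.4], [7.5, 7.1]])
```

Perfect coder agreement yields an ICC of 1; a constant offset between coders (one coder systematically coding longer times) lowers the absolute-agreement ICC even when rank orders match exactly.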
Selection of Interestingness
After pre-testing the first set of tweets, we found that the correlation between interest and
funniness across all tweets selected for the USC scroll was positive and significant, r(149) = .25,
p < .001. To shorten the task and avoid task disengagement, we decided to focus on interestingness
alone. In addition, a clerical error (scale anchor of the funniness scale was set to 7 = extremely
interesting) in the pre-test survey for the Entertainment scrolls invalidated these ratings. Thus,
our research team elected to focus on interest both out of convenience and because it was a better
analog for tweet rewardingness.
Additional Reward Variance Analyses
Table B-2
Simple Linear Regression Model Predicting Average Dwell Time from Reward Variance and
Prior Scrolling Frequency, Controlling for Scroll Indicator: Study 2.
Independent variable                           df    β     p     95% CI
Intercept                                      253   .15   .14   -.05, .34
Reward variance                                253   .12   .05   -.00, .25
Prior scrolling frequency                      253  -.12   .06   -.24, .01
Scroll indicator [2]                           253  -.13   .39   -.44, .17
Scroll indicator [3]                           253  -.35   .06   -.71, .01
Scroll indicator [4]                           253  -.28   .11   -.63, .06
Reward variance x Prior scrolling frequency    253   .04   .39   -.05, .13
Note. Estimates are the standardized coefficients (β) of the terms in the linear model. df
represents residual (error) degrees of freedom. The dependent variable, average tweet dwell time
for each participant, is continuous. Reward variance is assessed for each participant based on
their own ratings of each tweet scrolled and is also continuous. The scroll indicator is a factor
that takes values 1 to 4, indicating each scroll type: 1 = USC Topic A, 2 = USC Topic B, 3 =
Entertainment Topic A, 4 = Entertainment Topic B.
Table B-3
Simple Linear Regression Model Predicting Average Dwell Time from Reward Variance and
SRBAI, Controlling for Scroll Indicator. Dataset Excluding Dwell Time Outliers: Study 2.
Independent variable         df    β     p     95% CI
Intercept                    221   .19   .09   -.03, .40
Reward variance              221   .14   .04    .01, .27
SRBAI                        221  -.09   .23   -.23, .05
Scroll indicator [2]         221  -.33   .05   -.66, -.005
Scroll indicator [3]         221  -.23   .26   -.62, .17
Scroll indicator [4]         221  -.35   .08   -.74, .04
Reward variance x SRBAI      221   .03   .69   -.10, .16
Note. Estimates are the standardized coefficients (β) of the terms in the linear model. df
represents residual (error) degrees of freedom. The dependent variable, average tweet dwell time
for each participant, is continuous. Reward variance is assessed for each participant based on
their own ratings of each tweet scrolled and is also continuous. The scroll indicator is a factor
that takes values 1 to 4, indicating each scroll type: 1 = USC Topic A, 2 = USC Topic B, 3 =
Entertainment Topic A, 4 = Entertainment Topic B.
Table B-4
Simple Linear Regression Model Predicting Average Dwell Time from Reward Variance and
Prior Scrolling Frequency, Controlling for Scroll Indicator. Dataset Excluding Dwell Time
Outliers: Study 2.
Independent variable                           df    β     p     95% CI
Intercept                                      223   .25   .02    .04, .45
Reward variance                                223   .09   .21   -.05, .22
Prior scrolling frequency                      223  -.17   .01   -.30, -.04
Scroll indicator [2]                           223  -.39   .02   -.71, -.07
Scroll indicator [3]                           223  -.33   .08   -.69, .04
Scroll indicator [4]                           223  -.47   .01   -.83, -.10
Reward variance x Prior scrolling frequency    223   .06   .22   -.04, .15
Note. Estimates are the standardized coefficients (β) of the terms in the linear model. df
represents residual (error) degrees of freedom. The dependent variable, average tweet dwell time
for each participant, is continuous. Reward variance is assessed for each participant based on
their own ratings of each tweet scrolled and is also continuous. The scroll indicator is a factor
which takes values 1 to 4 indicating each scroll type: 1 = USC Topic A, 2 = USC Topic B, 3 =
Entertainment Topic A, 4 = Entertainment Topic B.
Table B-5
Simple Linear Regression Model Predicting Total Scrolling Time from Reward Variance and
Prior Scrolling Frequency, Controlling for Scroll Indicator: Study 2.
Independent variable                           df    β     p     95% CI
Intercept                                      253   .15   .14   -.05, .34
Reward variance                                253   .05   .45   -.08, .18
Prior scrolling frequency                      253  -.02   .74   -.15, .10
Scroll indicator [2]                           253  -.09   .58   -.39, .22
Scroll indicator [3]                           253  -.35   .06   -.71, .01
Scroll indicator [4]                           253  -.31   .09   -.66, .04
Reward variance x Prior scrolling frequency    253   .10   .03    .01, .19
Note. Estimates are the standardized coefficients (β) of the terms in the linear model. df
represents residual (error) degrees of freedom. The dependent variable, total scrolling time for
each participant, is continuous. Reward variance is assessed for each participant based on their
own ratings of each tweet scrolled and is also continuous. The scroll indicator is a factor which
takes values 1 to 4 indicating each scroll type: 1 = USC Topic A, 2 = USC Topic B, 3 =
Entertainment Topic A, 4 = Entertainment Topic B.
Table B-6
Multilevel Model Predicting Average Dwell Time from Reward Variance and Prior Scrolling
Frequency, Controlling for Scroll Indicator: Study 2.
Independent variable                           df       β     p     95% CI
Intercept                                      272.49   .05   .13   -.01, .11
Reward variance                                271.96   .04   .06   -.00, .08
Prior scrolling frequency                      285.08  -.04   .08   -.08, .00
Scroll indicator [2]                           250.22  -.04   .34   -.14, .05
Scroll indicator [3]                           269.78  -.10   .08   -.22, .01
Scroll indicator [4]                           276.02  -.07   .21   -.18, .04
Reward variance x Prior scrolling frequency    265.18   .01   .44   -.02, .04
Note. Estimates are the standardized coefficients (β) of the terms in the multilevel model. df
represents Satterthwaite degrees of freedom. The dependent variable, tweet dwell time for each
tweet seen by each participant, is continuous at the tweet level. Reward variance is assessed for
each participant based on their own ratings of each tweet scrolled and is also continuous. The
scroll indicator is a factor that takes values 1 to 4, indicating each scroll type: 1 = USC Topic A,
2 = USC Topic B, 3 = Entertainment Topic A, 4 = Entertainment Topic B.
Additional Reward Prediction Error Analyses
Table B-7
Multilevel Model Predicting Tweet Dwell Time from Prior Scrolling Frequency and Reward
Prediction Errors, Controlling for Scroll Indicator, Data Excluding Dwell Time Outliers: Study
2.
Independent variable                df        β     p      95% CI
Intercept                           207.42    .12   .002    .05, .20
RPE                                5174.51    .29  <.001    .26, .31
Prior scrolling frequency           207.14   -.09   .001   -.14, -.04
Scroll indicator [2]                197.79   -.17   .005   -.30, -.05
Scroll indicator [3]                208.17   -.12   .11    -.26, .03
Scroll indicator [4]                215.83   -.16   .03    -.30, -.02
Prior scrolling frequency x RPE    5245.49   -.07  <.001   -.09, -.04
Note. Estimates are the standardized coefficients (β) of the terms in the multilevel model. df
represents Satterthwaite degrees of freedom. The dependent variable, dwell time, is continuous.
Reward prediction error is assessed for each tweet scrolled by a participant based on their own
ratings of each tweet scrolled and is also continuous. Habit strength is measured by prior
scrolling frequency in hours per week. The scroll indicator is a factor that takes values 1 to 4,
indicating each scroll type: 1 = USC Topic A, 2 = USC Topic B, 3 = Entertainment Topic A, 4 =
Entertainment Topic B.
Table B-8
Multilevel Model Predicting Tweet Dwell Time from SRBAI and Reward Prediction Errors,
Controlling for Scroll Indicator, Data Excluding Dwell Time Outliers: Study 2.
Independent variable      df        β     p      95% CI
Intercept                 203.71    .12   .007    .03, .20
RPE                      5111.98    .29  <.001    .26, .31
SRBAI                     210.79   -.03   .29    -.08, .03
Scroll indicator [2]      195.65   -.18   .007   -.30, -.05
Scroll indicator [3]      205.68   -.09   .27    -.24, .07
Scroll indicator [4]      211.42   -.13   .10    -.29, .03
SRBAI x RPE              5109.99   -.04   .004   -.06, -.01
Note. Estimates are the standardized coefficients (β) of the terms in the multilevel model. df
represents Satterthwaite degrees of freedom. The dependent variable, dwell time, is continuous.
Aggregate reward prediction error is assessed for each participant based on their own ratings of
each tweet scrolled and is also continuous. Habit strength is measured by the SRBAI scale and is
continuous. The scroll indicator is a factor that takes values 1 to 4, indicating each scroll type: 1
= USC Topic A, 2 = USC Topic B, 3 = Entertainment Topic A, 4 = Entertainment Topic B.
Table B-9
Multilevel Model Predicting Tweet Dwell Time from Prior Scrolling Frequency and Reward
Prediction Errors, Controlling for Scroll Indicator, Age, and Gender, Data Excluding Dwell
Time Outliers: Study 2.
Independent variable                df        β     p      95% CI
Intercept                           264.61    .05   .29    -.04, .13
RPE                                6090.61    .24  <.001    .22, .26
Prior scrolling frequency           248.76   -.05   .03    -.09, -.005
Age                                 273.56    .05   .04     .005, .10
Gender (Male)                       267.13    .03   .51    -.06, .11
Gender (Nonbinary)                  255.75   -.09   .52    -.35, .18
Scroll indicator [2]                251.61   -.07   .19    -.16, .03
Scroll indicator [3]                264.85   -.15   .03    -.28, -.02
Scroll indicator [4]                278.16   -.12   .08    -.25, .01
Prior scrolling frequency x RPE    6138.87   -.03   .01    -.05, -.01
Note. Estimates are the standardized coefficients (β) of the terms in the multilevel model. df
represents Satterthwaite degrees of freedom. The dependent variable, tweet dwell time, is
continuous. Aggregate reward prediction error is assessed for each participant based on their own
ratings of each tweet scrolled and is also continuous. Habit strength is measured by prior
scrolling frequency in hours per week. The scroll indicator is a factor that takes values 1 to 4,
indicating each scroll type: 1 = USC Topic A, 2 = USC Topic B, 3 = Entertainment Topic A, 4 =
Entertainment Topic B.
Table B-10
Multilevel Model Predicting Tweet Dwell Time from SRBAI and Reward Prediction Errors,
Controlling for Scroll Indicator, Age, and Gender, Data Excluding Dwell Time Outliers: Study
2.
Independent variable      df        β     p      95% CI
Intercept                 203.64    .12   .03     .02, .23
RPE                      5109.01    .29  <.001    .26, .31
SRBAI                     208.48   -.04   .22    -.09, .02
Age                       209.22   -.02   .54    -.08, .04
Gender (Male)             202.66   -.02   .68    -.13, .09
Gender (Nonbinary)        201.18   -.31   .08    -.67, .04
Scroll indicator [2]      194.28   -.18   .007   -.30, -.05
Scroll indicator [3]      203.03   -.05   .62    -.23, .14
Scroll indicator [4]      207.47   -.09   .33    -.27, .09
SRBAI x RPE              5108.70   -.04   .005   -.06, -.01
Note. Estimates are the standardized coefficients (β) of the terms in the multilevel model. df
represents Satterthwaite degrees of freedom. The dependent variable, tweet dwell time, is
continuous. Aggregate reward prediction error is assessed for each participant based on their own
ratings of each tweet scrolled and is also continuous. Habit strength is measured by the SRBAI
scale. The scroll indicator is a factor that takes values 1 to 4, indicating each scroll type: 1 = USC
Topic A, 2 = USC Topic B, 3 = Entertainment Topic A, 4 = Entertainment Topic B.
Table B-11
Multilevel Model Predicting Tweet Dwell Time from Prior Scrolling Frequency and Fixed
Learning Rate Reward Prediction Errors, Controlling for Scroll Indicator, Age, and Gender,
Data Excluding Dwell Time Outliers: Study 2.
Independent variable                df        β     p      95% CI
Intercept                           182.12    .05   .41    -.07, .18
RPE                                3745.63    .31  <.001    .28, .34
Prior scrolling frequency           183.07   -.08   .01    -.14, -.02
Age                                 185.47   -.02   .52    -.10, .05
Gender (Male)                       186.39    .001  .97    -.13, .13
Gender (Nonbinary)                  179.61   -.33   .11    -.70, .08
Scroll indicator [2]                186.44   -.02   .76    -.17, .13
Scroll indicator [3]                182.74    .03   .81    -.18, .23
Scroll indicator [4]                191.31    .01   .94    -.19, .21
Prior scrolling frequency x RPE    4297.13   -.08  <.001   -.12, -.05
Note. Estimates are the standardized coefficients (β) of the terms in the multilevel model. df
represents Satterthwaite degrees of freedom. The dependent variable, tweet dwell time, is
continuous. Aggregate reward prediction error is assessed for each participant based on their own
ratings of each tweet scrolled and is also continuous. Habit strength is measured by prior
scrolling frequency in hours per week. The scroll indicator is a factor that takes values 1 to 4,
indicating each scroll type: 1 = USC Topic A, 2 = USC Topic B, 3 = Entertainment Topic A, 4 =
Entertainment Topic B. These relationships hold constant without covariates for age and gender
included in the model.
Table B-12
Multilevel Model Predicting Tweet Dwell Time from SRBAI and Fixed Learning Rate Reward
Prediction Errors, Controlling for Scroll Indicator, Age, and Gender, Data Excluding Dwell
Time Outliers: Study 2.
Independent variable      df        β     p      95% CI
Intercept                 181.99    .05   .47    -.08, .18
RPE                      3766.45    .32  <.001    .28, .35
SRBAI                     185.09   -.02   .53    -.09, .04
Age                       185.19   -.03   .37    -.11, .04
Gender (Male)             184.74    .001  .96    -.13, .13
Gender (Nonbinary)        178.71   -.38   .07    -.79, .03
Scroll indicator [2]      183.50   -.04   .59    -.19, .11
Scroll indicator [3]      181.66    .04   .69    -.17, .26
Scroll indicator [4]      188.40    .04   .72    -.18, .26
SRBAI x RPE              3412.32   -.05   .002   -.08, -.02
Note. Estimates are the standardized coefficients (β) of the terms in the multilevel model. df
represents Satterthwaite degrees of freedom. The dependent variable, tweet dwell time, is
continuous. Aggregate reward prediction error is assessed for each participant based on their own
ratings of each tweet scrolled and is also continuous. Habit strength is measured by the SRBAI
scale. The scroll indicator is a factor that takes values 1 to 4, indicating each scroll
type: 1 = USC Topic A, 2 = USC Topic B, 3 = Entertainment Topic A, 4 = Entertainment Topic
B. These relationships hold constant without covariates for age and gender included in the
model.
Table B-13
Multilevel Model Predicting Tweet Dwell Time from Prior Scrolling Frequency and Fixed
Learning Rate Reward Prediction Errors, Controlling for Scroll Indicator: Study 2.
Independent variable                df        β     p      95% CI
Intercept                           210.93   -.01   .84    -.09, .07
RPE                                3599.21    .28  <.001    .25, .31
Prior scrolling frequency           206.40   -.05   .05    -.10, .005
Scroll indicator [2]                219.01    .07   .29    -.06, .19
Scroll indicator [3]                217.03    .005  .95    -.14, .15
Scroll indicator [4]                222.67    .03   .69    -.12, .18
Prior scrolling frequency x RPE    4716.37   -.03   .04    -.07, -.00
Note. Estimates are the standardized coefficients (β) of the terms in the multilevel model. df
represents Satterthwaite degrees of freedom. The dependent variable, dwell time, is continuous.
Reward prediction error is assessed for each tweet scrolled by a participant based on their own
ratings of each tweet scrolled and is also continuous. Habit strength is measured by prior
scrolling frequency in hours per week. The scroll indicator is a factor that takes values 1 to 4,
indicating each scroll type: 1 = USC Topic A, 2 = USC Topic B, 3 = Entertainment Topic A, 4 =
Entertainment Topic B. These relationships also hold constant with covariates for age and gender
included in the model. In another version of this model where SRBAI score replaces prior
scrolling frequency, the SRBAI x RPE interaction does not reach significance with or without
the age and gender covariates.
Table B-14
Generalized Linear Multilevel Model (GLMM) Binomial Logistic Regression Predicting
Dwelling on a Tweet from Prior Posting Frequency and Reward Prediction Errors, Controlling
for Scroll Indicator: Study 2.
Independent variable               Estimate (odds ratio)   Std. error   z-score   p
Intercept                          3.70                    0.49          7.35    <.001
RPE                                1.89                    0.03         15.83    <.001
Prior scrolling frequency          0.78                    0.07         -2.60     .009
Scroll indicator [2]               0.82                    0.20         -0.83     .41
Scroll indicator [3]               1.03                    0.29          0.10     .92
Scroll indicator [4]               1.06                    0.30          0.20     .84
Prior scrolling frequency x RPE    1.03                    0.02          0.73     .47
Note. Aggregate reward prediction error is assessed for each participant based on their own
ratings of each tweet scrolled and is also continuous. Habit strength is measured by prior
scrolling frequency in hours per week. The scroll indicator is a factor that takes values 1 to 4,
indicating each scroll type: 1 = USC Topic A, 2 = USC Topic B, 3 = Entertainment Topic A, 4 =
Entertainment Topic B. The interaction term was not significant in models replacing prior
scrolling frequency with SRBAI scores.
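Because the GLMM coefficients are reported as odds ratios, each estimate is the exponential of the underlying log-odds coefficient: an odds ratio above 1 raises the odds of dwelling on a tweet, and one below 1 lowers them. A small sketch of the conversions, using the table's intercept and prior-scrolling-frequency values for illustration:

```python
import math

def odds_ratio(log_odds_coef):
    """Convert a logistic-regression coefficient (log odds) to an odds ratio."""
    return math.exp(log_odds_coef)

def odds_to_probability(odds):
    """Convert odds to a probability: p = odds / (1 + odds)."""
    return odds / (1 + odds)

# An intercept odds ratio of 3.7 implies baseline odds of dwelling of
# 3.7:1, i.e., a baseline dwell probability of about .79.
baseline_p = odds_to_probability(3.7)

# A prior scrolling frequency odds ratio of 0.78 means each unit
# increase multiplies the odds of dwelling by 0.78 (a 22% reduction).
```

This is why a null coefficient appears as an odds ratio of 1.0 rather than 0 in the table above.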
Advertisement Analysis
Table B-15
Multilevel Model Predicting Time Spent on Advertisements from Prior Scrolling Frequency and
Rate Reward Prediction Errors, Controlling for Scroll Indicator and Age: Study 2.
Independent variable                df       β     p      95% CI
Intercept                           184.08   .01   .94    -.14, .15
RPE                                 707.22  -.02   .65    -.08, .05
Prior scrolling frequency           250.50  -.05   .22    -.14, .03
Age                                 181.75   .20   .001    .08, .31
Scroll indicator [2]                167.16   .18   .12    -.05, .40
Scroll indicator [3]                201.39  -.11   .48    -.42, .20
Scroll indicator [4]                185.03  -.16   .30    -.46, .14
Prior scrolling frequency x RPE     701.28  -.02   .54    -.09, .05
Note. Estimates are the standardized coefficients (β) of the terms in the multilevel model. df
represents Satterthwaite degrees of freedom. The dependent variable, time spent on
advertisements, is continuous. Aggregate reward prediction error is assessed for each participant
based on their own ratings of each tweet scrolled and is also continuous. Habit strength is
measured by prior scrolling frequency in hours per week. The scroll indicator is a factor that
takes values 1 to 4, indicating each scroll type: 1 = USC Topic A, 2 = USC Topic B, 3 =
Entertainment Topic A, 4 = Entertainment Topic B.
Appendix C: Coding Instructions for Study 2
Screen-Recording Video Coding Instruction Sheet
1. First, find your assigned videos here; your name will be in the RA Assignment column
under the sheet tabs titled “Participant Video Assignment (Prolific Entertainment OR USC
Subject Pool)”: TS_Data_Main_2 (Round 2 sheet); TS_Data_Main (Round 1 sheet),
TS_Data_Main_3 and TS_Data_Main_Round2Replacements. In Round 3’s full
replacement round, your name is in a coder assignment column in the sheet below:
TS_Data_Main_Round3FullCodings.
2. Using the “Participant ID Column” in the correct TS_Data_Main sheet to find the correct
participant ID, create a new tab in the Google sheet with the Participant ID as the title.
Using the tab titled “(sample) 001” as a reference (or another tab), copy and paste the
column titles and the Tweet Order.
3. Now, find the video you are assigned to work on, as identified by the subject number.
You can find these in the video Google Drive files (see below).
○ Entertainment: Prolific Entertainment Study Data (link removed for privacy)
○ USC: Subject Pool Study Data (link removed for privacy)
4. Start viewing the video. One of the first things you will see is the Twitter account handle
they are scrolling. You can use this to understand whether they are looking at a
“consistent” or “intermittent” tweet scroll. Pause the video, and find the order of the
Tweets and the corresponding Tweet IDs in the “Participant Video Assignment” tab.
These are labeled as consistent or intermittent, including the Twitter handle (@SBlab4)
of each scroll, so you can grab the appropriate one depending on what is shown in the
video. Copy and paste these into the appropriate columns in the new tab you have created
(now labeled with the “Tweet ID”).
5. Before watching more of the video, also go to the “Participant-level data” tab. There, you
will see more columns you need to fill out. On a new line (blank row), put your assigned
participant’s ID number, the scroll topic (Entertainment, USC, or Sports), and whether
the scroll they are looking at was Intermittent or Consistent.
6. Now, you are ready to enter data based on the video. You may begin with either the
participant-level OR tweet-level data (the data in the tab you have just made). If you
prefer to start with the participant-level data, the instructions are in step 8. Tweet-level
data instructions are below in step 7.
7. Open the sheet tab you have created labeled with the participant’s ID number, and begin
the video. When the participant scrolls down to the tweets, start logging data beginning
with Tweet 1 (the top of the scroll). Explanations for what each column should hold are
perpetually available in the “(sample) 001” tab. Questions about coding can be directed to
Ian or Emmy via the lab’s Slack channel or during lab meetings.
○ In general, the tweet at the top-middle of the screen (as long as it is fully visible)
will be the ‘focal tweet’ – though some users may be looking at multiple tweets at
once. If you have trouble determining this from the video, consult Emmy and Ian.
○ For starting time, people ‘start scrolling’ the first time they move down the main
Twitter page (if they do something else beforehand, note this in the notes of the
first tweet).
○ For the ending of the scroll time, STOP the measure of scrolling time when they
scroll to the final tweet. This means cut the total scrolling time at the end of the
second-to-last tweet. This is the case for only the participant-level data; please
code the last tweet as accurately as possible in the tweet-level data.
8. Open the tab labeled “participant-level data.” In the new line you have added with your
Participants’ ID number, enter the appropriate data in each column. Ian’s comments at
the top of the sheet explain each of these measures. Questions about coding can be
directed to Ian or Emmy via the lab’s Slack channel or during lab meetings.
9. If your video is very short (less than 5-10 tweets scrolled or less than 1 minute) or seems
odd to you for ANY reason, please now put this information in the “notes” column next
to your video assignment (see the “Participant Video Assignment” Tab), and discuss this
with Emmy or Ian. Some participants may need to be removed for data quality issues.
10. These rules are subject to change as the project evolves. We often discuss adjustments to
coding rules during lab meetings, so if you miss a lab meeting, please reach out to Ian or
Emmy for the latest updates. We will also post recaps in the main Slack channel.
Second Round of Coding Instructions
11. Open TS_Data_Main_3
12. Input the data as you did with the first round into the tabs inside this Google
sheet, following the instructions above for participant-level and tweet-level data.
13. The only exception (in addition to one or two new columns, as explained in
comments at the top of each column of the sheet) is that we now use a new column in the
(sample) 001 sheet titled “RA”: insert your name in this column next to all the lines at the
tweet level. Emmy will add the original coder names to the already-completed sheets from
the first coding round.
Asset Metadata
Creator
Anderson, Ian Axel
(author)
Core Title
Beyond active and passive social media use: habit mechanisms are behind frequent posting and scrolling on Twitter/X
School
College of Letters, Arts and Sciences
Degree
Doctor of Philosophy
Degree Program
Psychology
Degree Conferral Date
2024-05
Publication Date
01/30/2024
Defense Date
01/18/2024
Publisher
Los Angeles, California
(original),
University of Southern California
(original),
University of Southern California. Libraries
(digital)
Tag
active social media use,habit,human-computer interaction,OAI-PMH Harvest,passive social media use,posting,reward learning,reward prediction errors,scrolling,social media,social networking,social neuroscience,Social Psychology,Twitter,X
Format
theses
(aat)
Language
English
Contributor
Electronically uploaded by the author
(provenance)
Advisor
Wood, Wendy (
committee chair
), Fast, Nathanael (
committee member
), Hackel, Leor (
committee member
), Schwarz, Norbert (
committee member
)
Creator Email
iaanders@usc.edu
Permanent Link (DOI)
https://doi.org/10.25549/usctheses-oUC113817103
Unique identifier
UC113817103
Identifier
etd-AndersonIa-12641.pdf (filename)
Legacy Identifier
etd-AndersonIa-12641
Document Type
Dissertation
Rights
Anderson, Ian Axel
Internet Media Type
application/pdf
Type
texts
Source
20240131-usctheses-batch-1124
(batch),
University of Southern California
(contributing entity),
University of Southern California Dissertations and Theses
(collection)
Access Conditions
The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the author, as the original true and official version of the work, but does not grant the reader permission to use the work if the desired use is covered by copyright. It is the author, as rights holder, who must provide use permission if such use is covered by copyright.
Repository Name
University of Southern California Digital Library
Repository Location
USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA
Repository Email
cisadmin@lib.usc.edu