SEQUENTIAL DECISIONS ON TIME PREFERENCE: EVIDENCE FOR NON-INDEPENDENCE

by Eustace Hsu

A Dissertation Presented to the FACULTY OF THE USC GRADUATE SCHOOL
In Partial Fulfillment of the Requirements for the Degree
DOCTOR OF PHILOSOPHY (PSYCHOLOGY)

August 2017

Acknowledgments

I would not have gotten this far without the support of my parents, Alaska and Ichia, and my sister, June. My grandmother, Taoning, inspired me and devoted so much effort to my education. I would like to thank my graduate advisor, John Monterosso, for his continuous encouragement; I could not have found a mentor more dedicated to his students. I am also grateful for the expertise and support of the rest of my committee: Giorgio Coricelli, Antoine Bechara, and Richard John. Additionally, I thank the following people who have given me opportunities and guidance: Eric Johnson, Elke Weber, Howard Andrews, James Corter, Carol Caton, Ryan Murphy, and Bernd Figner. Xochitl Cordova, Lisa Giragosian, and our undergraduate research assistants – in particular Ali Mar, Pratik Doshi, Tiffany Yang, and Max Ibrahimzade – all made tremendous contributions to the completion of my experiments. To my colleagues in the Addiction and Self Control lab – James Melrose, Shan Luo, Louise Cosand, Peggy Lin, and Xiaobei Zhang – I appreciate the unconditional support and the insight I have gained from our many conversations in and out of the lab. Tim Hayes, Damien Brevers, Tasha Poppa, Suzanne Houston, Tae-Ho Lee, and Vanessa deserve similar mention.

Finally, I would like to make a dedication to the memory of Bosco Tjan. As a member of my second-year and qualifying exam committees, Bosco was kind enough to meet with me and offer his perspectives, despite limited overlap in our research areas. His dedication to and vision for this institution were inspirational, and he will be truly missed.

Table of Contents

Acknowledgments
List of Tables
List of Figures
Abstract
Chapter I: Delay Discounting and Stochastic Behavior
  Time Preference
  Time Discounting and the limitations of "revealed preference" methodology
  Neural Correlates of Delay Discounting
  Stochasticity in delay discounting
  Two types of stochasticity
Chapter II. Laboratory Studies of Intertemporal Choice
  Methods
    STUDY 1 METHOD: Choice task with fMRI
    STUDY 2 Methods: Choice task with eyetracking
    STUDY 3 Methods: $20 now vs. trial-specific larger later alternative
    Data Analysis (General)
  Results
    Choice is a function of subjective value
    Reaction Time is longer when the rewards are 'matched'
Chapter III. Nonindependence: Streakiness in Intertemporal Choice
  Discussion
    Moment-to-moment discounting
Chapter IV. Non-Independence without choice? Can Intertemporal Matching imply streakiness?
  Methods
    Participants
    Task
    Behavioral analysis
  Results
    Overlapping participants
  Discussion
Chapter V. Neural correlates of streakiness in intertemporal choice
  fMRI analysis
  Results
  Discussion
Chapter VI. General Discussion
  Individual Differences
  Future Directions
References
Appendix A. Simulation test
Appendix B. Streakiness in Probability Discounting
  Methods
    Analysis
  Results
    No evidence of streakiness

List of Tables

Table 1: Coefficients for Probit Regression models for Studies 1-3, predicting choice based on value difference.
Table 2: Hierarchical linear regressions predicting reaction time based on choice uncertainty in each study.
Table 3: Nonindependence in discounting for an intertemporal matching task.
Table 4: Modeling autocorrelation for sequences of bid-now trials.
Table 5: Modeling autocorrelation for sequences of bid-later trials.
Table 6: Autoregression and moving average coefficients corresponding to models included in Tables 3-5.
Table 7: Modeling autocorrelation for a dataset where bid-now and bid-later trials were averaged when they occurred in sequence.
Table 8: Autoregression and moving average coefficients corresponding to the dataset where bid-now and bid-later trials were averaged when occurring in sequence.
Table 9: Whole brain analysis for the repeat>switch analysis.
Table 10: Whole brain analysis corresponding to parametric block regressor for streakiness.

List of Figures

Figure 1: A model of stochasticity in intertemporal choice.
Figure 2: Trial design for Study 2.
Figure 3: Trials for a sample participant in studies 1 and 2.
Figure 4: Trial design for Study 3.
Figure 5: Trials for a sample participant in Study 3.
Figure 6: Percentage of trials where LL option was chosen, by difference in subjective value between options.
Figure 7: Reaction time correlates with uncertainty of decision.
Figure 8: Intertemporal choice is streaky.
Figure 9: Two alternative hypotheses for autocorrelation in intertemporal choice.
Figure 10: Hyperbolic fits separately by condition.
Figure 11: Significant clusters corresponding to the Repeat>Switch contrast.
Figure 12: Significant clusters observed in correlation to the amount of streakiness in each three-trial block.
Figure A1: Two ways of determining probability of choice.
Figure B1: Percentage of trials where risky option was chosen, by difference in subjective value between options.
Figure B2: Choice between risky and certain rewards is not streaky.

Abstract

The ability to trade off between immediate and future prospects is potentially a key contributor to the fitness of individuals in the modern human environment. Success and failure in this domain are often referred to as an indication of "self-control." In this thesis I study this construct using delay discounting – the concept that prospects tend to be devalued when one must wait for them – and specifically the intertemporal choice task: binary choices between immediate and delayed rewards. Although work in intertemporal choice typically models series of responses as independent decisions, it is possible that decisions on any given trial are affected by preceding decisions. In three studies, I demonstrate non-independence in the form of "streakiness" (continuing to choose either smaller-sooner or larger-later rewards after having previously done so). Using an intertemporal matching task, I find some evidence of autocorrelation in discounting – of gradual changes in underlying time preferences over the course of a task. However, the degree of this drift did not appear sufficient to explain the observed streakiness, suggesting the contribution of other factors. Additionally, using functional MRI, I show evidence that streakiness in choice is correlated with activity in regions associated with the computation of the subjective value of rewards. In conclusion, I find evidence that streakiness in choice is the result of both 1) some causal role of decision under ambivalence, where future decisions are affected by past behavior; and 2) decisions revealing an underlying preference that fluctuates from moment to moment, where the drifting preference function is autocorrelated.

Chapter I: Delay Discounting and Stochastic Behavior

Humans are often required to make decisions balancing immediate and future prospects. What percentage of our salary should we invest in retirement accounts, and should we choose an aggressive or conservative portfolio? What are the potential consequences of smoking cigarettes regularly? Is going to college, in particular attending an expensive private university, worth the amount of debt that will be accumulated from student loans? Paradoxically, while advances in science and technology allow us to understand better than ever many of the benefits of planning for the future, they have also made it easier than ever to obtain immediate gratification at the expense of achieving long-term goals. Hours of productive work are lost to time spent looking at cat pictures on social media, or even to obsessively checking e-mail.

Scientific discussions about how we value time involve connecting, and potentially conflating, different topics: patience vs. impatience, bad and good habits, willpower, and addiction. These various topics are often collectively referred to as "self-control," and individual differences do show some correlation across different domains of this topic.
Most famously, performance in a delay-of- gratification task, now referred to as the “marshmallow test,” among nursery school children has been shown to correlate with many measures of their success in adulthood, including educational and career attainment, physical health, and substance abuse (Mischel, Ebbesen, & Zeiss, 1972; Mischel, Shoda, & Peake, 1988; Shoda, Mischel, & Peake, 1990; Mischel, 2014). 2 Time Preference One popular paradigm in assessing individual differences in self-control is delay discounting. Broadly, this is a class of tasks which attempt to assess how humans, as well as other species, trade off between time and rewards or penalties. In the most common assessment, the Intertemporal Choice Task (ITC), individuals are asked to choose between smaller-sooner (SS) and larger-later (LL) rewards (or sometimes punishments). This is a commonly used task, possibly because it allows for a range of levels of analysis, such that use of the task ranges from quantitative fields such as economics and finance to psychology and further to applied fields such as policy and health. In modeling discounting behavior, a function that has commonly been used is the hyperbolic discounting model (Mazur, 1987). V=A/(1+k*D) where V is the subjective value of the amount A that is available in D days. Here, k is a parameter that represents “steepness” of discounting — that is, a person – or group -- whose observed behavior is best fit to a larger k is one who tends to devalue future rewards more (in relation to delay) than a person who is characterized by a smaller k. Group- and individual differences in the fitted k- parameter have therefore been correlated to measures of self-control and indices of self-control failure, such as savings (Ersner-Hershfield, Garton, Ballard, Samanez-Larkin, & Knutson, 2009), overeating (Jarmolowicz et al., 2014), and substance use disorder (Mackillop et al., 2011). Meanwhile, different cognitive and affective manipulations have been shown to produce within-subject changes 3 in k (Van den Bergh, Dewitte, & Warlop, 2008; Augustine & Larsen, 2011; Luo, Ainslie, & Monterosso, 2014). The above “hyperbolic” function implies that behavior is not, in the economic sense of the word, rational. This is because it predicts reversals of preference: a hyperbolic discounter can prefer a hundred and one dollars in 366 days to one hundred in 365 days, but at the same time prefer one hundred dollars today to one hundred and one dollars tomorrow. Thus the individual’s preference between the two alternatives switches based on the passage of time. While there are other functions that have been developed to approximate delay discounting behavior, including the exponential function and a number of “quasi- hyperbolic” functions (Laibson, 1997; McClure, Ericson, Laibson, Loewenstein, & Cohen, 2007), I will proceed with the hyperbolic assumption, which is by far the most commonly used in psychology. Time Discounting and the limitations of “revealed preference” methodology In classic microeconomic theory, preference is ‘revealed’ by choice (Samuelson, 1938). Fundamental to the widespread usage of intertemporal choice is the presumption that the task reveals ‘time preference,’ which, as a construct that is strongly implicated in self-control, should therefore correlate with other measures of self-control or self-control failure often measured simultaneously. However, study of the binary choice tasks in general has placed the assumption that choice reveals preference under greater scrutiny. 
Understanding this context can inform us on how to obtain more accurate 4 estimates of preference, or begin to discuss what shifts in preference truly represent. One example is the delay-speedup asymmetry, which shows evidence that an option is more likely to be chosen when it is framed as the “default” option (Loewenstein & Prelec, 1992; E. U. Weber et al., 2007). Here, the “choice architecture” is responsible for preference being shifted in favor of the option that is initially presented. Another recent advance concerns the usage of sequential-sampling models initially developed for signal detection theory, for the purpose of, for example, modeling variation in reaction time related to the difficulty of identifying the presence of a target (Ratcliff & McKoon, 2008). Recent work in value-based decision-making, including intertemporal choice, has shown that these modeling techniques can be extended to tasks in this domain (Krajbich, Armel, & Rangel, 2011; Rodriguez, Turner, & McClure, Samuel M., 2014; Dai & Busemeyer, 2014; Rodriguez, Turner, Van Zandt, & McClure, 2015). While there are multiple frameworks for this type of model, what is consistent is the framing of a process where evidence is accumulated until it surpasses a threshold, at which point the decision is made. Neural Correlates of Delay Discounting Neuroeconomic studies using fMRI have successfully identified a ‘valuation network’ of the ventral striatum (VS), ventromedial prefrontal cortex (vmPFC), and posterior cingulate cortex (PCC) that display increasing activity scaling with the magnitude of available rewards (Bartra, McGuire, & Kable, 2013; 5 Clithero & Rangel, 2013), including temporally varying rewards present in delay discounting tasks (Kable & Glimcher, 2007; Carter, Meyer, & Huettel, 2010). Critically, delayed rewards exhibit discounted value, such that components of this network will tend to respond more to a $25 reward available immediately, as opposed to the same amount available a month in the future. Additionally, a large portion of the literature has sought to establish influences outside of the valuation system affecting ITC. In particular there is some evidence associating shallower discounting, and more patient decisions, on a within subject level, with increased activation in the dorsolateral prefrontal cortex (DLPFC), a structure implicated in higher level cognition and executive control (McClure, Laibson, Loewenstein, & Cohen, 2004; B. J. Weber & Huettel, 2008; Luo, Ainslie, Pollini, Giragosian, & Monterosso, 2012). Suppression of this region using transcranial magnetic stimulation (TMS) has been shown to causally affect ITC by increasing the likelihood of choosing immediate rewards, seemingly without influencing the valuation of either immediate, or delayed rewards (Figner et al., 2010). More recent studies have established that functional connectivity between the DLPFC and vmPFC (Hare, Hakimi, & Rangel, 2014), as well as the striatum (van den Bos, Rodriguez, Schweitzer, & McClure, 2014), are stronger in individuals who discount rewards less steeply. On the other hand, we have demonstrated evidence that increased delay discounting is associated with functional coupling between the frontoparietal network (which includes the 6 DLPFC) and the insular cortex, a cortical structure involved in the integration of bodily states (Clewett et al., 2014). Stochasticity in delay discounting The main topic of this thesis is to investigate stochasticity in delay discounting behavior. 
While a large set of the literature in discounting seeks to -- based on the result of a series of elicitations of preference -- precisely estimate an individual’s behavior in this domain, it is recognized that responses may reveal behavior that is not internally consistent, even assuming an accurate model of discounting. For example, the same individual that expresses preference for $10 today over $20 in one month, may on a later trial indicate a preference for $20 in a month over $11 today. Inasmuch as delay discounting behavior is representative of a construct that is often referred to as self-control, variability in discounting behavior is important because outcomes related to self-control failure may occur at the tail ends of one’s range of behavior (ie. one’s moments of unusual impulsivity, or unusual patience) as opposed to being representative of the mean. While the term ‘stochastic’ is generally used to refer to randomness, I will specifically be referring to variation with regard to overall behavior within temporal discounting tasks. Two types of stochasticity It is important to distinguish between two possible sources of stochasticity in choice behavior. The first is stochasticity that does not interact with valuation, 7 which is sometimes referred to as “trembling hand” noise. Imagine that on some occasions, a participant’s response has nothing to do with valuation of the options, but is instead produced by an unintended tremble of the hand that hits a response key (or some other execution error). Such an event should be as likely to occur when the alternatives varied greatly in value as when they were similar in value. A second and more interesting type of noise can be thought of as grounded in imperfections in the valuation process. For example, $10 delayed by one month might, for a particular participant, have a present-value of about $7. But that does not mean that participant will always choose a one month delayed $10 over an immediate $6, and never choose a one month delayed $10 over an immediate $8 -- the valuation process is imprecise. Stochastic models capture this reality of decision-making behavior by incorporating distributions rather than specific values into models of behavior. A simplified illustration corresponding to this example is shown in Figure 1. On the left panel is a fit capturing the probability that a particular participant chooses $10 in one month over different immediate amounts. While $7 seems to be the point of “indifference”, the slope of the curve indicates that behavior has some degree of inconsistency. Rather than the probability of choosing the delayed money being 0 when there is an alternative immediate amount that is at all greater than $7, and the probability of choosing the delayed money being 1 when the immediate alternative is less than $7, the curve captures the fact that there is some inconsistency, and that inconsistency is greater near the point of indifference ($7 present-value). The 8 stochastic model here treats the subjective present-value of $10 delayed by a month as a distribution that is centered at $7, but with tails that correspond to inconsistency in behavior. The panel on the right (the derivative of the cumulative distribution function (CDF) on the left) is the distribution corresponding to the behavior modeled on the left. 
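To make this stochastic-valuation idea concrete, the short R sketch below simulates it with hypothetical numbers (not data from the studies reported here): the present value of $10 in one month is treated as a normal random variable centered at $7 (roughly what the hyperbolic formula V = A/(1 + k*D) gives for A = 10, D = 30, and k of about 0.014), each draw is compared with an immediate alternative, and the resulting choice proportions trace out the cumulative distribution just described.

```r
# Illustrative simulation of stochastic valuation in intertemporal choice.
# Assumption: the present value of "$10 in 1 month" is a noisy quantity
# centered at $7 (e.g., V = A / (1 + k * D) with A = 10, D = 30, k ~ 0.014).
set.seed(1)

n_draws     <- 10000
now_amounts <- seq(4, 10, by = 0.5)   # immediate alternatives to $10 in 1 month
pv_mean     <- 7                      # mean present value of the delayed reward
pv_sd       <- 1                      # spread of the valuation noise

p_choose_delayed <- sapply(now_amounts, function(a_now) {
  pv <- rnorm(n_draws, mean = pv_mean, sd = pv_sd)  # momentary present values
  mean(pv > a_now)                                  # choose delayed when PV exceeds the now amount
})

# The simulated proportions recover 1 - pnorm(now_amount, pv_mean, pv_sd),
# i.e., the cumulative distribution of the valuation noise read off at each
# immediate amount -- the kind of curve sketched in the left panel of Figure 1.
round(cbind(now_amount = now_amounts,
            simulated  = p_choose_delayed,
            analytic   = 1 - pnorm(now_amounts, pv_mean, pv_sd)), 3)
```

Each simulated trial here is an independent draw; the question taken up below is whether real choice sequences depart from that independence.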
If the present value of $10 delayed by one month were a random draw from the probability density function (PDF) on the right, and choice between $10 in one month and an immediate option were deterministic, based on comparison between the immediate option and the randomly drawn present value of the delayed reward, then behavior would match the CDF on the left, in that the probability of choosing the later reward would be given by the height of the curve for any given immediate amount.

Figure 1: A model of stochasticity in intertemporal choice. Suppose that, for a hypothetical individual, the present value equivalent of waiting one month for $10 at any given time is a random draw from the distribution on the right. Then, for a choice between $10 in a month and any given immediate monetary reward, a decision is made as the direct result of comparison between the present value of the delayed reward and the amount of the immediate reward option. Given this proposed scenario, the probability of choosing the delayed reward is approximated by the curve on the left, which is also the integral of the distribution on the right. (Left panel: cumulative distribution function (CDF), P(choose delayed option) as a function of the size of the now alternative. Right panel: probability density function (PDF) over the present value of $10 in 1 month.)

In examining stochasticity, the main goal in this report is to test the premise of trial independence – the assumption that each trial is, as above, a random draw from a distribution. In other words, I consider the possibility that some of the stochasticity in delay discounting behavior could be temporal, such that state changes in behavior over the course of a task are reflected in an individual's decisions. Since many decisions with an intertemporal component are in fact made repeatedly (e.g., weighing the immediate pleasure of an available fatty food vs. its long-term cost), these sequential effects can potentially play a key role in translating findings from the lab to the real world. There is little doubt that in the domain of real-world self-control struggles, dynamic processes are important. The up-and-down weight of dieters, and the "falling-off-the-wagon" binges of individuals trying to quit drinking (known as the "abstinence violation effect"; Marlatt & Gordon, 1978), both suggest the importance of patterns in behavior over time. Gaining a better understanding of sequential and/or trial effects in delay discounting has the potential to offer the following contributions:

1) It will improve individual models of behavior by accounting for unexplained variance. Recent attempts to estimate delay discounting typically involve identifying the best-fitting among candidate models at the aggregate level, per individual (Wileyto, Audrain-McGovern, Epstein, & Lerman, 2004; Vincent, 2016; Chavez, Villalobos, Baroja, & Bouzas, 2017). Understanding sequential effects can help modelers get closer to understanding how much variability in modeling is actually due to measurement error.

2) It may allow us to better understand processes involved in ITC. While there is not much literature regarding nonindependence in delay discounting, documented effects from previous research on sequential effects in behavioral decision-making, as well as other self-control tasks, are potentially relevant.
For example, a line of study referred to as ego depletion frames self-control as being similar to a muscle, notably in that it begins to tire or lose strength with increased use in the short term (Baumeister, Bratslavsky, Muraven, & Tice, 1998; Baumeister, Vohs, & Tice, 2007). As evidence for this hypothesis, participants who complete self-control tasks, as opposed to control tasks, perform less well in a subsequent self-control task (although a recent attempt by many outside labs to replicate a set of results in this topic largely failed to reproduce the original results (Hagger et al., 2016)). Similarly, the study of moral licensing has found that participants who act morally when given the opportunity in an initial task will act less morally in a subsequent task (Effron, Cameron, & Monin, 2009). Cognitive dissonance theory, and the related idea of 'choice-induced preference,' suggests that post-hoc evaluation after making a difficult choice serves to emphasize the positive attributes of the chosen option, thus strengthening preference for the previously chosen option (Brehm, 1956; Festinger, 1957; Sharot, De Martino, & Dolan, 2009). Alternatively, in the context of gambling, Xue and colleagues used a modified cups task and determined that participants were more likely to accept a risky gamble when they did not accept a gamble in the previous trial of a task (Xue, Lu, Levin, & Bechara, 2010).

3) Implications for neuroscience – If we assume that time preference, as with any behavior, is ultimately reducible to a complex function of neurobiological inputs, then it follows that moment-to-moment variability in the strength of these inputs may causally modify an individual's time preference, resulting in temporal variability in an individual's time preference function – which is one possible cause for what I will refer to as moment-to-moment discounting. Identifying nonindependence in choice behavior and finding connections to brain activity will lead us closer to being able to identify neural correlates of variability in delay discounting behavior. Alternatively, it is also possible that choice dynamics affect the way value is computed and/or compared, thus causally modifying an individual's time preference and momentarily impacting what the elicitation reveals in any given trial. fMRI studies relating to the aforementioned idea of choice-induced change in valuation have correlated choice-related preference change with activity in areas of the brain implicated in value-tracking, including striatal regions and the PCC (Sharot et al., 2009; Nicolle, Bach, Driver, & Dolan, 2011). Other regions related to emotion processing, especially the insular cortex, have also been identified (Kitayama, Chua, Tompson, & Han, 2013). In the previously mentioned study by Xue and colleagues, participants more likely to take a risky gamble showed more activation bilaterally in the insula. Furthermore, a trait urgency score was correlated with both risky behavior and activity in the right insula. These findings were interpreted as supporting the role of the insula in processing the urge to take a risk, especially in the context of a sequence of trials where a risk was not previously taken (Xue et al., 2010). These studies lay the framework for the possibility that effects of previous behavior may serve as an input to subsequent decisions, and that identifying neural correlates associated with these effects may help us understand the behavior.
Therefore, we conducted a series of studies designed to 1) identify and quantify non-independence during intertemporal decision-making tasks, 2) explore alternative accounts to observed non-independence, and 3) carry out a preliminary neuroimaging analysis to identify neural correlates of non- independence. In Chapter II, I will present three intertemporal choice studies and connect them to previous literature and modeling methods. Using the data from these studies, I will then present consistent evidence of non-independence in ITC in Chapter III. Discussion of these findings will focus on two possible explanations for observed effects. In Chapter IV, I will present secondary analyses using a different elicitation task, intertemporal matching, that potentially provides 13 evidence contributing to one of the two alternative explanations. Next, in chapter V, I will provide evidence of neural correlates of streakiness in choice from a secondary fMRI analysis. Then, I will conclude in Chapter VI with general discussion as to the causes of nonindependence, assess whether the intertemporal matching task helped answer this question, and suggest future directions for this line of research. 14 Chapter II. Laboratory Studies of Intertemporal Choice In this chapter I will introduce three intertemporal choice studies on which data analysis was conducted, before exploring stochastic behavior in the next section. My goal is to demonstrate a consistent pattern of results that builds from existing studies in this topic. Therefore, I will conduct preliminary analyses as “sanity checks” to establish the validity of the data, which will make a case for the generalizability of subsequent findings. Methods The studies included all follow a similar set of procedures leading to the main tasks that are discussed. In each case, a preliminary adaptive task was conducted that elicited a latent hyperbolic discounting parameter k indifference best describing each individual’s time preference. This k indifference was input into the main task for each study, and used to generate ITC trials where alternatives were matched so as to have similar subjective value, as well as mismatched so that one option was designed to have larger subjective value than the other. The method of presenting stimuli for the preliminary task was identical to that of the main task. This was done so that measurement from the preliminary and main task would be as consistent as possible, and also so that the participant would be trained in how to complete the task. STUDY 1 METHOD: Choice task with fMRI Participants were recruited via Craigslist from the greater Los Angeles area. While having their brain scanned using fMRI, participants completed 2 runs 15 of a 36-trial choice task. Initial results from this study were published in (Clewett et al., 2014). Task In the scanner task, trials were administered in three conditions: Matched, Mismatched, and Control. Matched intertemporal choice (ITC) trials were generated based on the k indifference estimated from the previous task, such that the participant should be indifferent between the options. On the other hand, the k- parameter of Mismatched ITC trials were an order of magnitude larger or smaller than for Matched Choices—such that participants should have a clear and predictable preference for either the later-larger (LL) or sooner-smaller (SS) option on each such trial. 
For example, if during the adaptive task we estimated a k_indifference of 0.014 for a participant, then during the task we might generate a Matched choice trial where they would be required to choose between $16 today and $27 in 120 days, with the expectation that they would be nearly indifferent between the two options (16 = 27/(1+.014*120)), and thus equally likely to choose the immediate and delayed options. A Mismatched choice trial generated for this participant would instead be a choice pair that would be equally valued for a participant whose discounting behavior was characterized by a k-parameter of 0.14 or 0.0014 (one order of magnitude larger or smaller than the actual estimated parameter fit). Specifically, in this example, she would either be asked to choose between $3 today and $27 in 120 days (k = 0.14, in which case we can expect a strong preference for the delayed option), or $25 today and $27 in 120 days (k = 0.0014, in which case we would expect a strong preference for the immediate option). It should be pointed out that the procedure for generating Mismatched choice trials is not dependent on the degree of stochasticity in the participants' behavior. Highly stochastic participants would be expected, by definition, to be more likely than other participants to choose the option generated to be inferior in Mismatched trials. Lastly, in Control trials, both rewards were available with equal immediacy, so participants were expected to choose the reward that was larger in absolute size.

Trials were delivered in blocks of three, with each trial lasting eight seconds, and each block therefore lasting twenty-four seconds. In each trial, choices were presented side by side. The participant made decisions in the scanner by pressing a button box in their left hand to choose the option presented on the left side of the screen, and by pressing a button box in their right hand to choose the option on the right. After making their decision, the text corresponding to the chosen alternative was briefly highlighted, followed by the screen turning blank for the remainder of the eight seconds of the trial. There were four blocks of each condition in each of two runs, resulting in 72 total trials per participant.

There was a coding error that resulted in reaction time data not being included for 13 participants. Additionally, some participants were excluded for choosing the same response on >85% of matched trials, or choosing the unexpected response on >15% of mismatched trials (both of which suggest that the procedure for separating Hard and Easy trials may not have been successful). As a result, this analysis includes only data for 72 participants, and 59 participants for any analysis including reaction time data.

STUDY 2 Methods: Choice task with eyetracking

Participants were USC students, mostly recruited from the USC psychology subject pool. Students recruited from the subject pool were given course credit and additionally received a bonus of the chosen reward for one randomly selected trial made during the session. Students not participating for course credit received a $20 payment for showing up, in addition to their bonus. Eleven participants were male and 27 were female, with a mean age of 21. Based on choices in the Matched condition, data from only 20 participants were used for the final analysis. Participants were excluded using similar criteria to Study 1, with slight modification.
If, during a block of trials, the participant chose the same option on greater than 80% of Matched trials in a range of rewards (small or large), or chose the unexpected reward on >15% of Mismatched trials, the participant was excluded. As a result, a much greater percentage of participants in this study were excluded than in Study 1 (47% vs. 23%). Since there were several procedural differences across studies, it is not clear why rates of exclusion were higher in Study 2. However, the use of undergraduates completing the task for course extra credit in Study 2 may have been a factor.

Task

The task was written and presented using the Psychtoolbox extension for MATLAB. While having their optical responses recorded with an eyetracker (SMI Red 500, sampling at 120 Hz), participants made two types of intertemporal choice: Matched, where they were expected to be indifferent between the SS and LL options, and Mismatched, where trials were generated such that either the SS or LL option should be clearly preferred (similar to the method described for Study 1). Choices were presented such that the delayed option varied between $21 and $60, all at 120 days in the future. Immediate options were generated based on a choice procedure completed prior to the task that ascertained the indifference-k parameter that best fit the participant's discounting behavior. Matched and Mismatched trials were generated identically to the prior study: in Matched trials, immediate options were generated to match the subjective present value of the delayed option, whereas in Mismatched trials, immediate options were generated to be either much greater or much less in subjective value than the delayed options. Matched and Mismatched options were presented in blocks of six trials (six Matched trials in succession, followed by six Mismatched trials) in two runs of 64 trials, for a total of 128 choices. As 12 (one block of Matched trials plus one block of Mismatched trials) does not divide evenly into 64, the final blocks of each run consisted of two Matched and two Mismatched trials. If participants chose either the SS or the LL option more than 80% of the time in the first run, the k_match parameter was adjusted for the second run by a quarter log10 step in the direction that might lead them to choose the other alternative more often.

The initial intention of this study was to examine pupil dilation in response to intertemporal choice stimuli. During each trial, participants first viewed a control stimulus for two seconds, followed by one possible option of the choice in the center of the screen for two seconds (this will be referred to as the 'value' phase of the trial, since it allows for the assessment of a possible association between value and pupil dilation). Next, they viewed another control stimulus for two seconds, followed by a screen where they saw both options side by side and were required to choose between the rewards (choice phase). Upon making their choice, the chosen option was highlighted in red for two more seconds. Control stimuli were intended to set a baseline of comparison for each participant's pupil dilation, and had visual properties similar to the subsequent stimulus, so as to minimize the change in luminance (which can affect dilation) that comes with the onset of the trial stimulus. They were also of sufficient length that the pupil could adjust away from activity related to the stimulus immediately preceding the control stimulus.
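Because Studies 1 and 2 both derive their Matched and Mismatched alternatives from an individually estimated indifference parameter, the generation step can be summarized in a few lines of R. The sketch below is illustrative only: the function name, rounding, and defaults are assumptions, not the code actually used to run the tasks.

```r
# Illustrative generator for Matched / Mismatched intertemporal choice trials
# under hyperbolic discounting (present value = A / (1 + k * D)).
make_trial <- function(k_indiff, amount_later, delay_days,
                       condition = c("matched", "mismatch_LL", "mismatch_SS")) {
  condition <- match.arg(condition)
  # Matched trials use the participant's own k; Mismatched trials use a k one
  # order of magnitude larger or smaller, so that one option should clearly win.
  k_trial <- switch(condition,
                    matched     = k_indiff,
                    mismatch_LL = k_indiff * 10,   # immediate amount shrinks: LL should be preferred
                    mismatch_SS = k_indiff / 10)   # immediate amount grows: SS should be preferred
  amount_now <- round(amount_later / (1 + k_trial * delay_days))
  data.frame(amount_now, amount_later, delay_days, k_trial, condition)
}

# Example calls for a hypothetical participant (delayed reward of $27 in 120 days):
make_trial(k_indiff = 0.014, amount_later = 27, delay_days = 120, condition = "matched")
make_trial(k_indiff = 0.014, amount_later = 27, delay_days = 120, condition = "mismatch_LL")
make_trial(k_indiff = 0.014, amount_later = 27, delay_days = 120, condition = "mismatch_SS")
```

Under this scheme, multiplying the participant's k by ten shrinks the computed immediate amount (so the delayed option should dominate), while dividing it by ten inflates the immediate amount (so the immediate option should dominate).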
Shapes were used to represent the delay associated with the reward, such that a dollar amount inside one shape indicated a reward available immediately, and a monetary amount inside a different shape indicated the reward alternative available in 120 days. Two versions of this task were administered: in the first version, rewards were shown inside of a circle or a square, while in the modified version, rewards were inside of either a square or a diamond (Figure 2). In both cases, approximately 50% of participants were told that the reward inside of the square was available immediately, with the alternative available in 120 days, and approximately 50% of participants were told that the reward inside of the square was available in 120 days, with the alternative available immediately. All delays were set at 0 or 120 days, and associated with shapes, so as to minimize variance in reaction time and processing not related to evaluating the decision. To ensure that participants distinguished between the rewards, they first made choices between the shapes during the adaptive procedure. Additionally, they were quizzed after the initial instructions were read, before completing the adaptive procedure, and again prior to the main task.

Figure 2: Trial design for Study 2. Participants viewed a control stimulus for 2 seconds, followed by a "value stimulus," which consisted of a reward amount inside of a diamond or square. The shape that the reward is inside of indicates the delay associated with the reward (either 0 or 120 days). This shape is shown for 2 seconds, followed by another control stimulus that is shown for 2 seconds. Next, a choice window is displayed, which includes the reward associated with the "value stimulus" as one of its options. After one option is chosen, the reward is highlighted for 2 seconds, before proceeding to the next trial.

Figure 3: Trials for a sample participant in studies 1 and 2. Decisions for every participant were generated to satisfy the equation A_1 = A_2/(1 + k_i*D), where A_1 and A_2 are the amounts of the immediate and delayed rewards, D is the delay in days, and k_i is a predefined discounting parameter. In both figures, different colors (blue and black) represent trials of different magnitudes; the middle dark dotted lines/curves represent the initially estimated k_indifference for two different magnitudes of rewards, and the lighter lines and curves represent decisions where one reward is specified to be more attractive. In the first figure (left), the squares represent actual choice of the SS and diamonds represent choice of the LL for the sample participant. In the second figure (right), dots represent the percentage of LL choice for each k_i, and the curve represents the fitted logistic regression model. (Left panel axes: Now amount/Delay amount as a function of Days. Right panel axes: %LL as a function of ln(k).)

STUDY 3 Methods: $20 now vs. trial-specific larger later alternative

Eighteen participants (7 male, 11 female) completed a 'Yes-No' intertemporal choice task. All participants were USC students, some of whom completed the study for course credit. Participants who did not complete the study for credit received a $20 base payment for their participation. All participants received the reward from one randomly selected option of their session.

Task

While pupil dilation was recorded with an eyetracker, participants completed 3 runs of 60 trials each, for a total of 180 trials.
In each trial, participants chose between a reward of $20 on the day of the session and a larger reward available after a delay of between 60 and 150 days. Trials were preceded by a fixation cross that appeared for five seconds, after which the LL option appeared in the middle of the screen and the participant chose between this option and '$20 today.' They did so by pressing one button to choose $20 today and another button to choose the onscreen LL option. Upon making their decision, the outline of a rectangle appeared around the on-screen text for one second before the beginning of the next trial. The rectangle was drawn in green if the participant chose the onscreen option, and in red for choice of the off-screen option. The task was preceded by an adaptive procedure that estimated the participant's discounting. This procedure was additionally used to train participants on the task.

In each trial, the delay after which the LL reward would become available varied between 60 and 150 days (random with uniform distribution), with a mean of 105 days. The delayed amount A_LL of the trial was generated by solving algebraically based on the delay and k_trial:

A_LL = 20*(1 + k_trial*D)

In each run, 20 trials were designed so that the participant would be indifferent between the two alternatives (k_trial = k_match, where the initial value of k_match is equivalent to the k_indifference estimated prior to the task), 20 trials were 1/8 log-step greater or less (10 each) than indifference (k_trial = k_match*10^(1/8), k_trial = k_match*10^(-1/8)), and 20 trials were 1/4 log-step greater or less (10 each) than indifference (k_trial = k_match*10^(1/4), k_trial = k_match*10^(-1/4)). Participants were therefore more likely to choose the LL option the more k_trial exceeded k_indifference, and more likely to choose the immediate option the more k_trial fell below k_indifference.

Figure 4: Trial design for Study 3. Trials were preceded by a 5-second fixation, followed by the presentation of a delayed reward. Participants were instructed to choose between the delayed reward shown on screen and a reward of $20 on the day of the experiment. If the delayed reward was selected (shown), a green rectangle was drawn around the reward on screen, before proceeding to the next trial. If the immediately available reward of $20 was selected, a red rectangle was drawn instead.

However, if the participant's discounting was not accurately captured during a run of the task (if either the SS or LL was chosen for more than 3/4 of trials designed to be at indifference, or if participants did not choose SS more often when the trial was below indifference or LL more often when the trial was above indifference), trials in subsequent runs were adjusted, as in Study 2, so that participants would be more likely to make both SS and LL choices in the future.

Figure 5: Trials for a sample participant in Study 3. Decisions for every participant were generated to satisfy the equation 20 = A_2/(1 + k_i*D), where 20 and A_2 are the amounts of the immediate and delayed rewards, D is the delay in days, and k_i is one of five discounting parameters, each separated by a factor of 10^(1/8). In both figures, the dotted black line/curve represents the initially estimated k_indifference, the light red and red curves represent decisions where '$20 today' is more attractive, and light green and green represent decisions where the delayed option is more attractive.
In the first figure (left), the red dots represent actual choice of the SS and green dots represent choice of the LL for the sample participant. In the second figure (right), dots represent the percentage of LL choice for each k_i, and the curve represents the fitted logistic regression model. (Left panel axes: Now amount/Delay amount as a function of Days. Right panel axes: %LL as a function of ln(k).)

Data Analysis (General)

Across studies, data were analyzed primarily using the R statistical software package. Reaction time (RT) was considered to be the time from the appearance of the two options on the screen until the moment when a button was pressed indicating a choice being made. In the case of the yes-no task, where only one option appeared on screen, RT was considered to be the time between when the delayed option appeared and the moment 'yes' or 'no' was selected. All reported models used hierarchical regression using the "lme4" library (Gelman & Hill, 2007), where choice (either the likelihood of choosing the LL option, or the likelihood of choosing the same option as on the previous trial) was the dependent variable and participant was treated as a random intercept (allowing each participant a different overall likelihood of choosing LL that is not otherwise modeled).

Choice Probability

As explained above, the tasks are designed such that trials can be grouped into bins by k. That is, for every trial in a bin, k_trial = (A_LL - A_immediate)/(A_immediate*D_LL) with some rounding error (all variables other than k are integers), where A_immediate and A_LL are the amounts of the immediate and delayed rewards, and D_LL is the delay. Using the choices for trials within each bin, I produce a probability of choosing LL (or SS) for each bin. In some analyses, the k_trial corresponding to each trial, which is larger when the delayed option is larger and/or the delay is smaller, is used as a variable of interest. In particular, I use a logistic regression to create a model that predicts choice based on k_trial. The estimates of this fitted model for each k = k_trial can be compared to the mean of choices for the bin it corresponds to.

Results

First, I intend to show that the data we have collected are similar to data that have been published by our lab and elsewhere. To do so, I intend to confirm the following:

1) Participants are more likely to choose the delayed reward the larger its subjective value (SV) relative to the immediate reward. Using SV modeled under the assumption of hyperbolic discounting, decisions are uncertain when the values of the alternatives are similar, with the probability of choosing the delayed alternative increasing as its value increases (Wileyto et al., 2004; Rodriguez et al., 2014; Vincent, 2016).

2) Reaction time is longer when the difference in subjective value between the delayed and immediate reward is small (Rodriguez et al., 2014; Krajbich, Bartling, Hare, & Fehr, 2015).

Choice is a function of subjective value

Recent studies have demonstrated that intertemporal choice can be modeled probabilistically based on the relationship in value between rewards (Dai & Busemeyer, 2014; Vincent, 2016). Typically, a logistic regression with either a probit or logit link function is used to predict the likelihood that one reward will be chosen, based on the relationship in value between rewards.
For example, this model would predict that a participant who is indifferent between receiving $20 immediately and waiting for $32 in 120 days would be somewhat more likely than chance to choose the delayed reward if it were $33 in 120 days, and less likely than chance to choose the delayed reward if it were $31 in 120 days.

Using hierarchical logistic regression, I demonstrate that participants are more likely to choose the delayed reward when it is more valuable in relation to the immediate reward. I treat the choice (whether the larger, delayed option was chosen) as the dependent variable and the relation in value as the independent variable. In order to set some equivalence across participants, the IV was transformed so that matched trials are coded as zero, trials mismatched so that the participant is more likely to choose the SS are negative, and trials where the participant is more likely to choose the LL are positive. Moreover, the IV was coded according to the k_match that was used to generate trials in the final run of trials for each participant, such that cases where k_match was changed between runs are reflected in the coding. Across all studies and all participants, choice of the immediate option was more likely when A_1 > A_2/(1 + k_match*D_2), and less likely when A_1 < A_2/(1 + k_match*D_2), for immediate option 1 and delayed option 2.

Table 1 shows the results from logistic regressions (using a probit link function) testing whether the likelihood of LL choice increases when the LL alternative is relatively larger in value. The probit model, written with the cumulative normal function Φ(), models the probability of LL choice as Φ(intercept + uncertainty × value difference); an intercept near zero indicates that choice probability is near 0.5 when the options are matched (value difference of zero), and a larger uncertainty coefficient indicates a sharper curve, such that a smaller increase in value corresponds to the same increase in choice probability. The results in the table therefore suggest that, at the study level, indifference does not consistently deviate from the k_indiff we estimated for each participant, since none of the intercept statistics deviate from zero. On the other hand, the uncertainty statistics significantly deviate from zero in each study, indicating that participants are in fact more likely to choose the LL reward as the value difference (A_2 - A_1) becomes more positive. Figure 6 displays the fitted curves for the regression models corresponding to each study, along with binned averages for each participant at each level of value difference.

                          Study 1    Study 2    Study 3
Participants                   72         37         18
Total # ITC trials           3288       4654       3230
Probit regression model
  Intercept                  0.09       0.26      -0.03
  Uncertainty              0.34***    0.47***    1.18***

Table 1: Coefficients for probit regression models for Studies 1-3, predicting choice based on value difference. The independent variable (value difference) is coded such that 0 represents k_match, the discounting parameter used to generate trials where immediate and delayed options had approximately equal subjective value.

Figure 6: Percentage of trials where the LL option was chosen, by difference in subjective value between options. A value difference of zero represents trials where the SS and LL options are matched (i.e., individually estimated to be equally valued), and each unit i of value difference represents a 10^(i/8) increase in k for two alternatives where A_1 = A_2/(1 + k_trial*D_2).
Fitted curves represent the probit model corresponding to the probability of LL choice in each study. Study 1 – black; Study 2 – red; Study 3 – blue.

Reaction Time is longer when the rewards are 'matched'

Using initial estimates of the indifference-k, I show that RT is longer for trials where A_1 = A_2/(1 + k_indiff*D_2) is approximately true, for reward A_1 available immediately and reward A_2 available after D_2 days. For Studies 1 and 2, I use hierarchical linear regression to demonstrate that RT is longer for trials where rewards are matched, as opposed to mismatched. For Study 3, RT is longer for trials where rewards are most closely matched, in comparison to where they are less closely matched. I show this by first creating a choice uncertainty variable for the participant's choice, with scores close to zero representing trials where the participant is almost certain to choose one of the two options, and scores close to one for choices where the participant is relatively equally likely to choose either option. I then use this variable as a predictor of reaction time in a hierarchical linear regression model.

The results, presented in Table 2, suggest that participants take longer to make a decision in trials where there is more uncertainty in the decision. In Study 1, reaction time increases by 0.72 seconds between trials where it is predictable which alternative will be chosen and trials where both options are equally likely to be chosen. In Studies 2 and 3, the difference is 0.91 seconds and 0.75 seconds (on the same scale), respectively. The boxplots in Figure 7 demonstrate that RTs are longest for "matched" trials, where the participant's decision is designed to be most uncertain.

                               Study 1    Study 2    Study 3
Participants                        59         37         18
Total # ITC trials                2832       4654       3230
Effect of uncertainty on RT
  Intercept                       1.73       1.39        1.6
  Trial uncertainty            0.72***    0.91***    0.75***

Table 2: Hierarchical linear regressions predicting reaction time based on choice uncertainty in each study. Choice uncertainty is coded as how close the probability of choosing an option is to 0.5.

Figure 7 (A-C): Reaction time correlates with uncertainty of decision. Decisions take longest for trials where the SS and LL options are designed to be equally preferable ("matched" trials), and become increasingly shorter for trials where one option is designed to be increasingly more attractive. In Study 1 (A) and Study 2 (B), RTs for "matched" trials are longer than for "mismatched" trials, and there is no difference between mismatched trials favoring the SS option and those favoring the LL option. In Study 3 (C), non-matched trials are not as mismatched as in Studies 1 and 2, but it still appears that RTs are shortest when the value difference between trial options is larger. (Panels A-C: boxplots of reaction time, in seconds, by value difference category for Studies 1-3.)

In this chapter, I introduced three intertemporal choice studies we completed in the lab and modeled the uncertainty in participant responses under the assumption of independence, additionally demonstrating reaction time as an indicator of uncertainty in choice. While similar results have been demonstrated in data from an increasing number of laboratories using the ITC paradigm, I re-establish these findings as a framework for the next section, where I will seek to test the validity of the assumption of independence.
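Before turning to that question, the two models summarized in Tables 1 and 2 can be written compactly with the lme4 library mentioned in the Data Analysis section. The data frame and variable names below are placeholders (simulated here so the snippet runs on its own); the calls are generic templates rather than the exact analysis scripts used for these studies.

```r
library(lme4)

# Simulated stand-in data; the variable names (chose_LL, value_diff,
# uncertainty, RT, subject) are assumptions, not the studies' actual files.
set.seed(2)
n_subj <- 20; n_trial <- 60
df <- data.frame(
  subject    = factor(rep(seq_len(n_subj), each = n_trial)),
  value_diff = rep(seq(-4, 4, length.out = n_trial), times = n_subj)
)
subj_int <- rnorm(n_subj, 0, 0.5)                        # per-subject intercepts
p_LL     <- pnorm(subj_int[as.integer(df$subject)] + 0.4 * df$value_diff)
df$chose_LL    <- rbinom(nrow(df), 1, p_LL)
df$uncertainty <- 1 - 2 * abs(p_LL - 0.5)                # 1 = maximally uncertain choice
df$RT          <- 1.5 + 0.8 * df$uncertainty + rnorm(nrow(df), 0, 0.3)

# Choice model: hierarchical probit regression with a random intercept per
# participant (cf. Table 1).
choice_model <- glmer(chose_LL ~ value_diff + (1 | subject),
                      family = binomial(link = "probit"), data = df)

# Reaction-time model: hierarchical linear regression on trial uncertainty
# (cf. Table 2).
rt_model <- lmer(RT ~ uncertainty + (1 | subject), data = df)

summary(choice_model)
summary(rt_model)
```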
In this chapter, I introduced three intertemporal choice studies we completed in the lab, and modeled the uncertainty in participant responses under the assumption of independence, additionally demonstrating reaction time as an indicator of uncertainty in choice. While similar results have been demonstrated in data from an increasing number of laboratories using the ITC paradigm, I re-establish these findings as a framework for the next section, where I will test the validity of the assumption of independence.

Chapter III. Nonindependence: Streakiness in Intertemporal Choice

In the previous section, I presented three studies of intertemporal choice and demonstrated that, despite some variation in how stimuli were presented, behavior in this task is consistent and similar to previously published work. In this section I address a question that has not previously been considered: whether there are sequential effects in ITC. In other words, is there any evidence that a decision is statistically dependent on one or more previous decisions?

As demonstrated, intertemporal choices can be modeled probabilistically. Given a reasonably accurate model of overall behavior, there are many choices for which there is considerable uncertainty about what the individual will choose, especially when the difference in subjective value between options (V_SS − V_LL) is small. This uncertainty can be quantified as a probability of choosing one option over the other, and this probability is fixed if we assume independence. In this section, I use data from three studies to test whether there is evidence against independence in a choice context, by examining whether the probability of a choice changes depending on the previous choice that was made. To test for independence, I compare the calculated probability of choosing an option with whether the option was actually chosen. Specifically, I look at whether the choice made was the same type of choice as on the previous trial (i.e., SS after SS; LL after LL).

For Study 1 (the reward study), I did this looking only at blocks of trials where the choice alternatives were matched so as to be similarly valued by the participant. As these trials occurred in blocks of three, I specifically analyzed the second and third trials to test whether they were the same as the previous (matched) trial. I found that participants chose the same option as on the previous trial 66% of the time. However, while this may seem to indicate "streakiness" in preference, greater than 50% repetition of preference on matched trials can be caused by imperfections in the procedure used to tailor matched trials to participants' discounting. A participant who chooses the LL option on only 20% of a set of similarly matched trials will, for example, choose the same option on consecutive trials with probability 0.68 for any given sequence, even if each trial is independent of the previous one (0.8^2 + 0.2^2 = 0.68).

Thus, in order to test whether participants were more likely to choose the same reward type on consecutive trials, I compared the number of times they did so to a simulated estimate of how many times they would do so if each choice were independent. Specifically, I simulated sequences of the task and then compared the simulated number of times a choice was repeated with the actual number. Simulation was used because an analytic computation would not have been straightforward, given that trials assigned to different bins were interleaved with each other. If individuals are inclined to choose the same option twice in a row either more or less often than the simulation predicts, that would indicate non-independence in behavior (see Appendix I for further details).
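Before turning to the details, the core of the simulation test can be sketched in R for a single participant (a simplified illustration with hypothetical variable names; the actual procedure, described next, also bins trials by magnitude and preserves trial order):

    set.seed(1)
    # choices: observed 0/1 (SS/LL) choices on matched trials for one participant, in order
    # pairs:   list of index pairs for consecutive matched trials within a block
    count_repeats <- function(ch, pairs) sum(sapply(pairs, function(p) ch[p[1]] == ch[p[2]]))
    observed <- count_repeats(choices, pairs)
    p_LL     <- mean(choices)                      # overall probability of an LL choice
    simulated <- replicate(1000, {
      sim <- rbinom(length(choices), 1, p_LL)      # independent choices with the same p_LL
      count_repeats(sim, pairs)
    })
    expected <- mean(simulated)                    # per-participant expectation under independence;
                                                   # observed vs. expected compared with a paired t-test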
To start, I binned each participant's decisions, categorizing them based on the magnitude of the reward (small: $21-35, and medium: $46-60). In the design for this study, all choices in each bin were generated such that A_1 = A_2/(1 + k_bin*D_2), up to rounding error, for a k_bin determined by the titration task and amounts A and delays D corresponding to alternatives 1 and 2 (where 1 is the immediately available alternative). Using each participant's overall behavior (%LL choice for the relevant magnitude bin) as the probability that they would choose the LL on any given trial, I generated a thousand sequences of choices for each participant, where each sequence contained the same number of matched choices, in the same order, as the participant actually received. In each sequence, I then summed the number of times that the same choice was simulated consecutively within a block of matched trials, just as was counted for the participant. I use the resulting 1000 sums as a probability distribution for how many times each participant would choose the same option as on the previous trial if they were choosing independently.

Next, I ran a paired t-test comparing the number of expected versus actual repeats in choice across participants. Participants were more likely to choose the same option on successive trials than would be predicted if each choice were an independent event with probability equal to the overall probability of choices observed (t(71) = 2.6, p = 0.01). With the expected number of repetitions normed as 100%, participants' actual repetition ranged between 49.9% and 136.6% of expectation, with a mean of 105.4%. For 17 participants, observed streakiness was larger than the 50th percentile of the distribution generated by simulation, whereas for 7 participants it was smaller than the 50th percentile, and the remaining participants were exactly as streaky as the median of the distribution of simulations. (Given the relatively small number of sequences between two matched trials, the range of simulated outcomes was, for many participants, not large; as a result, a large number of participants tied the median of the simulations.) It is important to note that the presentation side for the SS and LL options was randomized, so making the same button press on consecutive trials (e.g., as might happen if a participant was disengaged from the task) would not result in apparent non-independence.

Next, I sought to replicate this result using data from the second and third studies. Given that Study 1 was a secondary analysis, the design was not specifically tailored to questions about trial dependence. For instance, since matched trials were presented in blocks of three, I could only obtain information from consecutive Hard (matched) trials between the 1st and 2nd trial and between the 2nd and 3rd trial, in other words for 2/3 of the total number of Hard trials. This drawback was addressed in Study 2 by using longer blocks (6 trials/block) to allow for more pairs of consecutive trials (a block of six trials yields five pairs of consecutive trials). Also, drift over a sequence of choices could potentially be revealed over longer blocks more readily than over shorter blocks. Thus, for the second study, I again put trials into bins such that in each bin, every decision was made between alternatives of similar amounts (small and medium) and similar difference in attractiveness (i.e.,
A_1 = A_2/(1 + k_bin*D_2) resolves to approximately the same k_bin for all trials in each bin). Again, I calculated the probability of LL choice in each bin as the percentage of LL choices in that bin, and then simulated each choice 1000 times based on these probabilities, in the order presented. For each simulation of the task, I counted the number of times that the same choice was generated twice in a row, and then compared this simulated distribution to the actual number of times the same option was chosen on consecutive trials of the experiment. I then ran a t-test comparing the expected and actual repeated choices. For the 20 participants included in this analysis, the same choice was repeated on consecutive trials at a rate of 96.6% to 125.4% of the predicted number of repetitions, with an average of 108.8%; 14 participants were more streaky than the 50th percentile of the simulated distribution, and only one was less streaky. A paired t-test comparing the simulated results with the actual data suggested that participants chose the same option in succession more often than would be predicted by chance (t(19) = 4.4, p < 0.001), replicating the results of Study 1.

Finally, I used data from Study 3 to attempt to replicate the finding from Studies 1 and 2. However, as the design was somewhat different, I tested for trial independence in a slightly different way. In this study, an initial estimate of the participant's best-fitting hyperbolic discounting parameter was calculated in a manner similar to the previous studies. However, whereas the previous studies consisted of trials where the alternatives were either matched (the indifference-k was used to generate options) or "mismatched" (the indifference-k was multiplied by either 0.1 or 10), trials in this study were either matched or only somewhat mismatched (the indifference-k was multiplied by 10^(-2/8), 10^(-1/8), 10^(1/8), or 10^(2/8)). In this sense, the study was designed so that the participant would experience some amount of ambivalence in every decision. Consider, for example, a participant for whom I estimate an indifference-k of 0.005: in Studies 1 and 2, she might be asked to choose between receiving $20 today and $32 in 120 days in the matched condition, and between $20 today and either $21 or $140 in 120 days in the mismatched condition. In Study 3, she might instead be asked to choose between $20 today and one of $27, $29, $32, $36, or $41 in 120 days.

For this study, I included all trials to test whether participants were again more likely to repeat the same option in sequence than an assumption of independence would predict. This was done in two ways: 1) similar to the first two studies, I binned the trials based on the levels of preference in the design, such that in each bin the alternatives in all trials resolved A_1 = A_2/(1 + k_bin*D_2) for the same k_bin, and calculated overall probabilities of LL choice by bin; and 2) I fit each participant's decisions using a logistic regression framework in which the decision was the dependent variable and the independent variable was the log-transformed value of k used to generate the alternatives, and then used the fitted values of this model as the probability of LL choice for each trial.
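A sketch of the model-based variant in R (hypothetical names, not the actual analysis code):

    # chose_LL: 0/1 choice; log_k_trial: log of the k used to generate each trial's options
    fit        <- glm(chose_LL ~ log_k_trial, data = itc3, family = binomial)
    p_LL_trial <- fitted(fit)                                 # model-based per-trial probability of LL
    sim_choice <- rbinom(length(p_LL_trial), 1, p_LL_trial)   # one simulated, independent sequence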
I then simulated decisions based on the generated probabilities and, as with Studies 1 and 2, compared the number of times the same choice was made sequentially in the simulations with the actual data. The 18 participants in Study 3 chose the same option on consecutive trials an average of 102.0% as often as expected under both methods of calculating probabilities, with a range of 89.2% to 110.8% for the model-based method and 91.1% to 111.0% for the model-free method. Under both methods, 14 participants were more streaky than the median of the simulations, whereas only one participant was less streaky. A paired t-test between actual and simulated counts of repeated choice provides some evidence that participants are more likely to choose the same option on consecutive trials than would be predicted by chance: t(17) = 2.1, p = 0.048 using the model-free method and t(17) = 1.9, p = 0.08 using the model-based method.

Figure 8: Intertemporal choice is streaky. For choices where alternatives are generated to be similarly preferred, participants are more likely to choose the same option in consecutive trials, relative to a simulation generated from the probability of choosing the same option under an assumption of independence (observed repeats are plotted against simulated repeats; streakier participants fall above the y = x line). In Study 1 (left), participants chose the same option as on the previous trial an average of 5.4% more often than expected (t(71) = 2.6, p = 0.01). In Study 2 (center), the same choice was repeated 8.8% more often than expected (t(19) = 4.4, p < 0.001), and in Study 3 (right), choice was repeated 2.0% more often than expected (model-free: t(17) = 2.1, p = 0.048; model-based: t(17) = 1.9, p = 0.08).

Discussion

Based on significant evidence in two studies and consistent, but sub-threshold, evidence in a third, I conclude that intertemporal choice behavior is autocorrelated: participants are more likely to choose the same option repetitively than in a probabilistically independent fashion. This evidence holds across varying trial designs; in two studies, presentation is randomized, whereas in the third, participants choose against a constant immediate value. However, based on this evidence alone, it remains unclear what kind of mechanism(s) are responsible for these results.

One possibility is that prior choice influences the participant's current preference. For example, one line of research relating to the theory of cognitive dissonance suggests that after making a difficult decision, the individual seeks to justify that they made the correct decision by seeking confirming evidence for the superiority of their chosen option. A subset of this literature, referred to as choice-induced preference, finds that preference for chosen options increases between pre-choice valuation and post-choice evaluation. A second way that prior choice could affect current preference has been referred to as "behavioral momentum." This idea metaphorically frames sequential behavior as obeying laws analogous to those of physical systems, particularly with regard to concepts such as momentum, velocity, and inertia.
Critically, once there is an established relationship between stimulus and reinforcement, the behavior has a "resistance to change" that is evidenced by the difficulty of extinction or satiation (Nevin, 1995; Nevin & Grace, 2000). The introduction of a novel reward can also loosen the contingency of the original behavior, but there is evidence that whether the relationship can be "disrupted" by an alternate reward depends on the maintenance of the existing S-R relationship, rather than on attributes of the novel reward schedule (Nevin, Tota, Torquato, & Shull, 1990).

Moment-to-moment discounting

Alternatively, I consider the possibility that choices do not have a causal effect on subsequent choices. If choice is not responsible, one plausible remaining hypothesis for autocorrelation is that it reflects changes in discounting over the course of the experimental session. Consider an individual whose time preference changes over the course of an ITC task; that is, as the session progresses, their preference for either immediate or delayed rewards becomes stronger. The individual's subjective value SV for any given delayed reward A in D days is therefore changing over time. Assuming hyperbolic discounting, we can say that the change in SV reflects changes in the latent discounting parameter k, such that SV_i = A/(1 + k_i*D) at any time i, since A and D are fixed. Especially given that our methodology presents participants with rewards that are similarly valued, the individual will be more likely to choose an SS reward during periods when k_i is relatively larger, and more likely to choose an LL reward during periods when k_i is smaller. If this moment-to-moment variance in discounting is autocorrelated, such that measurements taken close in time are more alike than measurements taken further apart, then gradual changes may be observed in elicitations that reveal preference, resulting in selections tending to "clump" together.

Figure 9: Two alternative hypotheses for autocorrelation in intertemporal choice. In A (left), consecutive choices {ch1, ch2, ch3} are based on a valuation of options (not represented) that follows from a latent discounting parameter k. The second and third choices are each dependent on the previous choice; that is, streakiness may arise from a choice affecting the subsequent choice. In B (right), latent, moment-to-moment discounting parameters {k1, k2, k3} are dependent on the overall discount parameter k but are also autocorrelated (correlated with the moment-to-moment k at the adjacent timepoint). In this framework, choices are streaky because of autocorrelation in the discounting parameter at the moments in which the choices happen.

The mechanisms behind the two alternative hypotheses are illustrated in Figure 9. In the diagram on the left there is one fixed discounting parameter k for the individual; in this case, where the hyperbolic model is assumed, I refer to this as the k-parameter. Each choice is based on how the alternatives (not shown) are valued using the discounting model, but each choice after the first is also causally related to the previous choice in the sequence. Alternatively, in the diagram on the right, I consider that there is a unique set of parameters, here k1, k2, and k3, which represent states that contribute to discounting at the moment of sampling. Again, choices are based on how the alternatives are valued at that moment in time. I suggest that these states are autocorrelated, so that k1, k2, and k3 are connected. In this model, choice is streaky not because a choice directly influences the subsequent choice, but because there is persistence in the individual's moment-to-moment states that determine time preference.
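A toy simulation of this second hypothesis, under an assumed AR(1) drift in log(k) (all values here are illustrative, not fitted to data), shows how autocorrelated discounting alone can make choices on matched trials streaky:

    set.seed(2)
    n <- 120; D <- 120; A1 <- 20
    k_indiff <- 0.005; A2 <- A1 * (1 + k_indiff * D)   # matched delayed amount ($32)
    phi <- 0.9                                         # persistence of the latent state
    log_k <- numeric(n); log_k[1] <- log(k_indiff)
    for (t in 2:n)                                     # AR(1) drift around log(k_indiff)
      log_k[t] <- log(k_indiff) + phi * (log_k[t - 1] - log(k_indiff)) + rnorm(1, sd = 0.15)
    choose_LL <- A2 / (1 + exp(log_k) * D) > A1        # LL preferred whenever momentary k < k_indiff
    p <- mean(choose_LL)
    c(repeat_rate = mean(choose_LL[-1] == choose_LL[-n]),
      independence_benchmark = p^2 + (1 - p)^2)

With phi near zero, the simulated repeat rate falls back to the independence benchmark; with strong persistence, it exceeds it, even though no choice influences any other choice.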
In upcoming sections I will further examine whether the data can distinguish between these two hypotheses about streakiness. In the next section, I use data from an intertemporal matching study to test whether autocorrelation can be observed in a context that does not involve binary choice. Following that, I use fMRI data corresponding to Study 1 to examine whether brain activity is predictive of autocorrelation.

Chapter IV. Non-Independence without choice? Can Intertemporal Matching imply streakiness?

Using data from three intertemporal choice studies, I have demonstrated that sequences of choices violate the assumption of independence, in that individuals are more likely to choose the same type of option in consecutive trials; in other words, they appear to be streaky. The choice context naturally connects to a sizeable literature on behavior as a consequence of the act of committing to a decision, cognitive dissonance being the most prominent of these theories. From the perspective of classical economic theory, decisions are a process that "reveals preference," and an intertemporal choice would be said to reveal time preference. A cognitive dissonance explanation for streakiness in intertemporal choice would then suggest that preference changes as a byproduct of choice, which is reflected in the subsequent choice having an increased probability of being the same as the previous one. Under this type of framework, non-independence should no longer be present if the choice context were removed. However, as previously discussed, if time preference itself varied from trial to trial in a manner that was autocorrelated, as opposed to being drawn independently from a distribution, then some amount of streakiness would be observed both in a choice task and in tasks that elicit discounting using methods other than binary choice.

In this section, I present an analysis of an intertemporal matching study, in which participants "bid" on a reward by entering an amount, available at a different time, that they would consider equivalent (Thaler, 1981; Read & Roelofsma, 2003). I attempt to determine whether discounting is autocorrelated; in other words, whether measurements of discounting are more closely correlated with other measurements taken closer in time. This methodology allows me to do so by providing an estimate of discounting for each trial, as opposed to the choice method, which only reveals time preference to be greater or less than the temporal discount implied by the trial (although, as previously discussed, reaction time can be informative as an indicator of ambivalence). More importantly, since the matching procedure does not include choice, it allows a partial dissociation of the drift-based vs. choice-based explanations of non-independence. While at a glance it may seem that matching should be the preferred method given its ability to estimate discounting more precisely, it tends to be less popular as a task: a recent comparison of methods in discounting studies found published choice studies to be more common, outnumbering matching studies by approximately a 5:3 ratio.
There is additionally some evidence that elicitation via choice has higher ecological validity, showing higher correlation with real-world outcomes theorized to involve a temporal discounting component (Hardisty, Thompson, Krantz, & Weber, 2013).

Methods

Participants

Forty participants (18 male, 22 female) completed an intertemporal matching task. Behavioral data collection for twenty-six participants (14 male, 12 female) was accompanied by the acquisition of functional imaging data. Participants had a median age of 25 (27 for fMRI participants), with a range of 19 to 47. Those who completed the study in the fMRI scanner were right-handed, had normal or corrected-to-normal vision, and were free of psychological and neurological disorders.

Task

The task was administered using MATLAB with the Psychophysics Toolbox. In each trial, participants were presented with a reward that varied in two attributes: amount and immediacy (delay). The participant then entered a bid on this option at a different delay; that is, they entered an amount at a specified delay that was subjectively equivalent in value to the initially presented reward. In roughly 50% of trials, participants were asked to enter an immediate amount that was equally preferred to a delayed reward (bid-now), and in the other roughly 50% of trials they were asked to enter an amount at a given delay that was equally preferred to an immediate reward (bid-later). Delays presented in the two conditions were 14, 28, 40, or 80 days. The amounts of the delayed rewards were randomly drawn from a uniform distribution between $16 and $49. In the bid-now condition, participants specified a present amount that was equivalent to the randomly generated reward at its delay. In order to approximately balance value across the bid-now and bid-later conditions, the immediate amount offered in the bid-later condition was generated by first selecting a "shadow" delayed amount according to the same procedure as in the bid-now condition, and then computing the immediate amount that would be equally valued by the participant given their prior discounting behavior during the task. A hyperbolic functional form was assumed for the purposes of modeling the participant's discounting and generating immediate offers in the bid-later condition. Again, the purpose of this procedure was to minimize divergence in value across the experimental conditions.

Participants were trained to understand the task and completed a four-minute practice session. Data were collected during two subsequent ten-minute runs, either inside or outside of the scanner. During each run, participants completed as many trials as fit into the allotted time, at their own pace. The stimuli of each trial were as follows. Participants initially saw a fixation cross for an amount of time drawn from an exponential distribution with a mean of 2 seconds, bounded by a minimum of 0.5 seconds and a maximum of 5 seconds. In the next phase of the trial, a variant of the following statement appeared: "N$ today is as good as ?$ in d days." The participant was instructed to think of an amount appropriate to the question mark and to press a button when they had a bid in mind. Once the button was pressed, the ? was replaced by zeros: "N$ today is as good as 00$ in d days." The bid was entered by button presses that incremented the tens and ones digits. Other buttons were used to reset the counter in case of an error and to confirm the bid.
Once the match was confirmed, the program proceeded to the next trial. Participants were instructed that one trial would be chosen at random as a bonus for the participant's payment. For 19 of the 40 participants, this task was administered in addition to another task, and the probability of a trial from the present task being chosen for the bonus was 0.5; for the remaining participants, this task alone was used for payout. Payment from the chosen trial was determined through a modified Becker-DeGroot-Marschak (BDM) auction (Becker, Degroot, & Marschak, 1964). A random number x between 15 and 30 was generated; if this number was larger than the bid amount the participant had entered during the trial, the participant was paid the random number, in dollars, as their reward. If, however, the random number was smaller than the participant's bid, the original default offer from that trial was paid out to the participant. Under this method, a participant maximizes their utility by entering the subjective-value equivalent in each trial; the reason for this was carefully explained to participants, and they were walked through several concrete examples of the procedure to resolve possible confusion.

Behavioral analysis

Matching data were analyzed using the R statistical package. Hierarchical models used either the lmer function from the lme4 library or the lme function from the nlme package, in order to account for the repeated-measures aspect of the data. There are instances where I used both functions to run the identical model. This was done for some base models, which are compared to more complex hierarchical models that utilize special features of the specific functions (lmer vs. lme). In all cases where a model is run using both lmer() and lme(), the model fits are near-identical (difference in BIC of less than 0.5) (Pinheiro & Bates, 2001; Gelman & Hill, 2007).

Modeling of Discounting Behavior

In each trial, participants are shown an alternative and asked to produce an alternative that is equally valuable. Each alternative has two attributes, amount and delay, and in each trial one alternative is available today; in other words, it has Delay = 0. Data were modeled both by fitting functional forms of discounting (model-based) and without assuming a functional form. For the model-based approach, the data were analyzed by fitting a hyperbolic discounting function. Specifically, for each trial we solved for k in the equation

A_1/(1 + k*D_1) = A_2/(1 + k*D_2)

where each side of the equation, A_i/(1 + k*D_i), represents the subjective present value of either the default or the bid alternative i. As an alternative to modeling the data under the hyperbolic assumption, I considered the discount to be the ratio of the sooner (immediate, I) reward to the delayed (D) reward, A_I/A_D, regardless of which reward was entered as the bid. In other words, the discount is the proportion by which the reward diminishes in value as a result of its delay. Analysis examined whether this discount differs by condition (whether the immediate or later reward was being bid on), magnitude, and length of delay.
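Because one alternative in every trial is immediate (D_1 = 0), the trial-level k has a closed form; a small sketch of the computation (the example numbers mirror the $20-now vs. $32-in-120-days case used earlier):

    # With D1 = 0, the matching identity A1 = A2 / (1 + k * D2) gives k = (A2 / A1 - 1) / D2
    implied_k <- function(A_now, A_delayed, D_delayed) (A_delayed / A_now - 1) / D_delayed
    implied_k(20, 32, 120)   # 0.005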
Tests for Autocorrelation

I test for autocorrelation using the nlme package in R. Specifically, I specify an autocorrelation structure in the data and test whether such a model substantially improves model fit over one with no autocorrelation structure specified. The DV in this set of models is the z-score of the implied discount rate log(k) estimated from each trial. More specifically, I binned the data by individual, condition, and delay, and obtained residuals of a linear model estimating the effect of log(reward magnitude) on discounting. The Bayesian information criterion (BIC) is used to compare the fit of different models; it is calculated as −2*log-likelihood + ln(N)*p, where p is the number of parameters and N is the total number of trials. The BIC was chosen over other tools for model selection because of its balance between goodness of fit and model complexity (Schwarz, 1978; Vanderckhove, Matzke, & Wagenmakers, 2015).

(Autoregressive) Moving average model

In order to test whether the discounting implied by bids is temporally independent, we fitted the data with an (AR)MA model. We consider two options: 1) that the error in the bid is not temporally independent, and/or 2) that the errors in the slope and intercept of the discounting parameter computation, m_p and c_p, are not temporally independent. In terms of 2), a slope and an intercept are fitted, with error, to the implied discounting parameter and its dependence on reward size; temporal dependence in these errors would imply that discounting is dependent.

In an autoregressive model, the value z_t is predicted from previous value(s) z_{t−1}, ..., z_{t−p}:

z_t = φ_1*z_{t−1} + φ_2*z_{t−2} + ... + φ_p*z_{t−p} + a_t

The fitted autoregression coefficients φ_1, ..., φ_p determine the influence that previous values have on the current value, which is one indicator of the extent of nonindependence, and a_t represents the residual "white noise" error. In a moving average model, the value z_t is predicted by errors from the current (a_t) and previous (a_{t−1}, ..., a_{t−q}) trials:

z_t = a_t − θ_1*a_{t−1} − θ_2*a_{t−2} − ... − θ_q*a_{t−q}

Here, the moving average coefficients θ_1, ..., θ_q determine the weight of the error terms from the previous trials. An autoregressive moving average model combines the two:

z_t = φ_1*z_{t−1} + ... + φ_p*z_{t−p} + a_t − θ_1*a_{t−1} − ... − θ_q*a_{t−q}

An (AR)MA(p, q) model therefore fits the observation z_t using autoregression terms 1 through p and moving average terms 1 through q, based on the observations and errors of the previous p or q trials (Box, Jenkins, Reinsel, & Ljung, 2016).

Results

Participants (n = 40) completed between 57 and 123 trials in the 20 minutes they were each given, with a mean of 84, for a total of 3359 trials; 13 trials were subsequently removed because the participant did not enter a response. Trials were evenly divided between those in which participants generated delayed-equivalent amounts (bid-later) and those in which they generated present-equivalent amounts (bid-now). In 0.87% of trials, responses indicated negative delay discounting (in other words, a larger immediate amount was judged equivalent to a smaller delayed amount), and in 1.34% of trials, responses indicated no delay discounting. All trials where a bid greater than zero was entered were used for analysis, except where otherwise indicated.

Hyperbolic Discounting

Using hierarchical linear modeling, I take the hyperbolic discounting parameter that can be fit for each trial and look for its predictors. There is a main effect of condition, with k-values larger in the bid-later than in the bid-now condition ((B, SE) = (0.006, 0.0009), p < 0.001). Next I looked at hyperbolic discounting separately by condition. In the bid-later condition, trial-by-trial hyperbolic fits trend toward, but are not significantly predicted by, delay ((B, SE) = (−0.00005, 0.00002), p = 0.053), while in the bid-now condition, trial-by-trial hyperbolic fits are predicted by delay ((B, SE) = (−0.0003, 0.00002), p < 0.001) (Figure 10).
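These hierarchical models can be sketched in R roughly as follows (hypothetical data frame and column names, not the actual analysis code):

    library(lme4)
    # k_trial: hyperbolic k implied by each bid; condition: "bid-now" vs. "bid-later"
    m_cond  <- lmer(k_trial ~ condition + (1 | subject), data = match_dat)
    m_now   <- lmer(k_trial ~ delay + (1 | subject), data = subset(match_dat, condition == "bid-now"))
    m_later <- lmer(k_trial ~ delay + (1 | subject), data = subset(match_dat, condition == "bid-later"))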
Figure 10: Hyperbolic fits, separately by condition. The y-axis shows the immediate amount as a proportion of the delayed amount and the x-axis shows delay. Dots are jittered to show the density of amounts at each delay. Hierarchical linear models are fitted for each delay, with participant as a random intercept, to test for magnitude predicting hyperbolic discounting. Fitted curves proceed from the mean delayed amount (at a delay of zero) to the delay for which the model was fit. Fitted curves appear different from one another in the Bid-Now condition (left) but similar in the Bid-Later condition (right), suggesting that hyperbolic discounting is not as good a fit in the Bid-Now condition.

Discounting in the matching task therefore differs by condition, and furthermore sometimes deviates from the hyperbolic assumption, particularly in the bid-now condition.

Autocorrelation in Discounting

In order to test for autocorrelation, I ran hierarchical regressions using both the lmer function from the R lme4 package and the lme function from the nlme package, and compared model fit. These models attempt to predict trial-by-trial discounting. As I observe systematic differences in discounting due to condition, delay, and magnitude, I first constructed linear models for each participant that predicted discounting with each of these sources as independent variables. I subsequently use the trial-by-trial residuals from these individual models as the dependent variable to test for autocorrelation in discounting; in other words, sources of variance related to study design are controlled for. The models to be compared are:

1) a base model that fits a random intercept for every participant;
2) a model where trial number (per participant) is the IV, in other words fitting a slope to test for a trend in discounting over the course of the task, across all participants;
3) a model with a random slope for trial number, to test for separate trends in discounting over the course of the task, by participant;
4) a model fitting an autoregression coefficient with a lag of one trial;
5) a model fitting a moving average coefficient with a lag of one trial;
6) a model fitting an ARMA correlation structure, allowing discounting to vary over the course of the task.

The results are reported in Table 3. For Models 1 and 2, both the lmer and lme functions were used, yielding near-identical results. The base model is improved by adding a random slope for trial number (Model 3), suggesting that discounting can drift in one direction over the course of the task. Adding a fixed effect for slope (Model 2) does not improve the model, suggesting that participants are not more likely to trend in one particular direction (i.e., becoming more patient vs. more impatient). Also, while the autoregressive and moving average models specified with a lag of one trial (Models 4 and 5, respectively) each only debatably improved model fit (the BIC penalty for the additional parameter roughly offsets the variance explained), a model fitting both autoregression and moving average parameters (Model 6) offers some improvement in model fit over both the base model and the models with only one autocorrelation parameter.
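The correlation structures in Models 4-6 can be specified with nlme roughly as follows (a sketch under assumed variable names, not the exact analysis code):

    library(nlme)
    # resid_disc: design-adjusted discounting residual; trial: trial index within participant
    base   <- lme(resid_disc ~ 1, random = ~ 1 | subject, data = match_dat, method = "ML")
    ar1    <- update(base, correlation = corAR1(form = ~ trial | subject))
    ma1    <- update(base, correlation = corARMA(form = ~ trial | subject, p = 0, q = 1))
    arma11 <- update(base, correlation = corARMA(form = ~ trial | subject, p = 1, q = 1))
    BIC(base, ar1, ma1, arma11)   # compared as in Tables 3-5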
Model                               Free parameters    BIC
Random intercept by participant           3            9229
Discounting trend                         4            9234
Discounting trend by participant          5            9186
AR(1)                                     4            9228
MA(1)                                     4            9229
ARMA(1,1)                                 5            9211

Table 3: Nonindependence in discounting in an intertemporal matching task. Modeling an autocorrelation structure, and individual trends in discounting, allowed a better fit to the data.

Next, I ran the hierarchical regressions separately for the two bidding conditions, using trials of each bidding condition that were preceded by a trial from the same condition. For the bid-now condition, only the model that tests for a trend in discounting by participant (discounting trending in one direction over time) improves model fit over the base model. As with the analysis of the complete dataset, since Model 2 fails to improve model fit, there is not sufficient evidence of a fixed trend across participants, only evidence at the individual level, where participants are roughly equally prone to trending in either direction (more patient vs. more impatient). The results are presented in Table 4:

Model                               Free parameters    BIC
Random intercept by participant           3            4579
Discounting trend                         4            4586
Discounting trend by participant          5            4572
AR(1)                                     4            4586
MA(1)                                     4            4586
ARMA(1,1)                                 5            4594

Table 4: Modeling autocorrelation for sequences of bid-now trials (matching an immediate amount to a delayed reward). Modeling an individual trend improves fit, but the autoregression and moving average models do not.

For the bid-later condition (see Table 5), I again find that Model 2 (fixed slope of trial number) fails to improve fit, while Model 3 (random slope of trial number by individual) fits better than the base model. Additionally, there is evidence of autocorrelation in that the AR(1), MA(1), and ARMA(1,1) models all improve model fit over the base model, with the ARMA model providing the best fit:

Model                               Free parameters    BIC
Random intercept by participant           3            4670
Discounting trend                         4            4674
Discounting trend by participant          5            4630
AR(1)                                     4            4630
MA(1)                                     4            4639
ARMA(1,1)                                 5            4618

Table 5: Modeling autocorrelation for sequences of bid-later trials (matching a delayed amount to an immediate reward and a specified time delay). Modeling both individual trends and autocorrelation (autoregression and moving average models) improved fit.

Table 6 presents the autoregression and moving average coefficients corresponding to these time-series analyses.

                 All data          Bid-now condition    Bid-later condition
Model            φ_1      θ_1      φ_1       θ_1        φ_1       θ_1
AR(1)            0.05     --      -0.01      --         0.25      --
MA(1)            --       0.04     --       -0.01       --        0.22
ARMA(1,1)        0.87    -0.82    -0.45      0.44       0.81     -0.65

Table 6: Autoregression and moving average coefficients corresponding to the models included in Tables 3-5.

Notably, in the ARMA models fit both on the whole dataset and on the bid-later condition only, the autoregression coefficient φ and the moving average coefficient θ have opposite signs.

Finally, I tested a novel method for addressing discrepancies in discounting across the two frames. For each pair of consecutive trials in which one trial was in the "bid-now" frame and the other was in the "bid-later" frame, I recoded the discounting residual as the mean of the pair of data points from the two trials. I then fitted these new data points using time-series models, the results of which are presented in Table 7.
Model                               Free parameters    BIC
Random intercept by participant           3            5466
Discounting trend                         4            5472
Discounting trend by participant          5            5426
AR(1)                                     4            5452
MA(1)                                     4            5455
ARMA(1,1)                                 5            5446

Table 7: Modeling autocorrelation for a dataset in which bid-now and bid-later trials were averaged when they occurred in sequence. Modeling both individual trends and autocorrelation increased fit.

Again, I find that a model that allows participants to have linear trends over the course of the task improves fit, but a model that implies such a trend is unidirectional (towards either more or less discounting) does not. The models fitting autoregressive and moving average behavior all further improve model fit. Autoregression and moving average coefficients for these models are shown in Table 8, suggesting that both previous estimates and previous errors influence current behavior, assuming that combining trials from different conditions reasonably approximates the underlying behavior.

Combined frames      φ_1      θ_1
AR(1)                0.10     --
MA(1)                --       0.09
ARMA(1,1)            0.78    -0.69

Table 8: Autoregression and moving average coefficients corresponding to the dataset in which bid-now and bid-later trials were averaged when occurring in sequence.

Overlapping participants

I next compare streakiness in choice with autocorrelation in matching by looking specifically at the ten participants who completed both the matching task and the choice task of Study 1. For each overlapping participant, I used the implied hyperbolic discounting from the first 72 trials of the matching task to impute decisions in the choice task. These decisions were imputed following the rule that if the discounting implied in the matching trial was smaller than that implied by the choice trial, the immediate option would be chosen, with the later option being chosen otherwise. In essence, this process is an attempt to determine how these participants would make decisions in the choice task based only on the trial-by-trial preferences revealed in the matching task. Using this set of decisions, I then looked at sequences of trials in what would be the matched (choice) trial condition and compared them to a simulation, as was done in the previous sections. In doing so, I did not find any evidence of streakiness: participants would have chosen the same option as on the previous trial an average of 98.7% as often as expected, or 1.3% less than expected. In the actual Study 1 choice task, this subset of participants chose the same option as on the previous trial 103.5% as often as expected (compared to 105.4% for the entire sample).

Discussion

In the previous sections, I presented evidence for non-independence in intertemporal choice behavior, specifically finding that participants were streaky, being more likely to choose the same option (SS or LL) on consecutive trials than they should under an assumption of independence. I considered two types of explanation: 1) that streakiness arises from mechanisms related to choice itself, and/or 2) that there is autocorrelation in the latent, moment-to-moment discounting function, which causes the revealed preference (choice) to appear streaky. In this section I focused on the second explanation by looking at the moment-to-moment discounting implied by an intertemporal matching task.
I find evidence for autocorrelation when consecutive trials are both in the bid-later condition, that is, when participants are entering a delayed reward that is equivalent to an immediate reward, but not in the bid-now condition, where an immediate reward is entered to match a delayed reward. Using a method to address the different behavior across the two frames, I found additional evidence of autocorrelation across sequences in which consecutive trials were presented in different frames. In leveraging participants who completed both tasks, however, I was not able to demonstrate that the magnitude of autocorrelation observed in the matching task is sufficient to produce any streakiness in the choice task, let alone to account for the amount of streakiness I observed in choice. It should be pointed out that for this demonstration I used data from both conditions of the matching task, whereas autocorrelation was only observed in one. It is also worth noting that a large amount of variability in discounting could be attributed to magnitude, condition, and length of delay, none of which was controlled for when using matching to impute choice. Finally, while matching and choice are two possible methodologies for eliciting individual discounting, there is some evidence that the procedures lead to systematically different estimates, most notably that discounting is steeper (i.e., participants are more impatient) in the choice context (Hardisty et al., 2013; Read & Roelofsma, 2003). Indeed, I observed substantially different discounting rates across tasks in a few of the participants who completed both, although some of this difference could also be because the sessions took place months apart.

Chapter V. Neural correlates of streakiness in intertemporal choice

In order to further explore the behavioral results presented in the previous chapters, I will discuss the use of neuroimaging to identify correlates of non-independence. In previous work, I did not find neural correlates of trial-by-trial variance in discounting during an intertemporal matching task (Hsu, Brocas, & Monterosso, 2013). For the current chapter, I analyzed fMRI data corresponding to Study 1 to see whether any brain activity corresponded to choosing the same option in consecutive trials. As in the behavioral analysis of Study 1, I focus exclusively on trials within "matched" blocks, in which the SS and LL alternatives were generated to be of similar subjective value given the participant's delay discounting. Trials categorized as "repeat" are those in which the same response was made as on the previous trial, and trials categorized as "switch" are those in which a different response was made from the previous trial.

I first conducted an analysis using an event-related design, in which I contrasted signal during the decision period of trials where individuals repeated the previous matched choice with the decision period of trials where they switched. The decision periods of interest began with the onset of the two alternatives and ended at the moment a selection was made. For the purpose of regressing out other sources of variance, I also modeled, as regressors of no interest, the time periods corresponding to 1) mismatched trials, 2) control (non-ITC) trials, and 3) matched trials that were not preceded by a matched trial, leaving the ITI for all trials as part of an implicit baseline.
Unfortunately, response latencies, which are necessary for identifying the events of interest during the imaging sessions, were not recorded for the first 15 participants in this sample, and so these participants could not be included in this MRI analysis. An additional participant was dropped due to poor anatomical coverage. Thus, 56 participants were included in this analysis. For 45 of these participants, two runs of the task were included in the analysis; for the remaining 11 participants, only one task run could be included because responses during the other run were either all repeat or all switch.

In addition, a parametric block-based analysis was carried out, which allowed us to include participants for whom response times were not recorded. In this analysis, I weighted each block of matched trials based on the number of repeated selections in the block. That is, for each block of 3 matched trials, I counted the number of times the participant chose the same option as on the previous (matched) trial, which could range from zero (e.g., SS-LL-SS or LL-SS-LL) to two (e.g., SS-SS-SS or LL-LL-LL). After obtaining a count for each block, I created a parametric block regressor for matched-trial blocks in which each block was scaled relative to the counts from all matched-trial blocks in that run. In the same analysis, I included a similar parametric regressor for mismatched-trial blocks, as well as non-weighted block regressors for both matched and mismatched trials. I then modeled the correlation between brain activity and "streakiness" in both matched and mismatched trials. Seventy-one participants were used for this analysis. For 19 participants, one of the two runs was dropped because there was no variance in streakiness by block within the run.
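The block-level weights for this parametric regressor can be computed with a few lines of R (a sketch with hypothetical inputs; the mean-centering shown is my assumption about how blocks were scaled within a run, not a detail stated here):

    # choices: "SS"/"LL" responses on the matched trials of one run, in order
    # block:   block index (1, 2, ...) for each of those trials (three trials per block)
    repeats_per_block <- tapply(choices, block,
                                function(ch) sum(ch[-1] == ch[-length(ch)]))  # 0, 1, or 2
    weights <- repeats_per_block - mean(repeats_per_block)   # assumed mean-centering across blocks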
fMRI analysis

Imaging data were analyzed using FEAT (fMRI Expert Analysis Tool) version 6.00, part of the Oxford University Centre for Functional MRI of the Brain (FMRIB) Software Library (www.fmrib.ox.ac.uk/fsl). fMRI data were preprocessed using spatial smoothing (Gaussian kernel of 5 mm full-width at half-maximum), temporal smoothing with a high-pass filter (100-second cutoff), and motion correction. The preprocessed data were then submitted to a general linear model used to analyze the contributions of the experimental factors to BOLD signal. All within-subject statistical analyses were performed in each subject's own image space and then transformed to standard space before higher-level analysis. Echo planar images were realigned to the anatomical images acquired within each scanning session and normalized to a standard brain using affine transformation (Jenkinson & Smith, 2001). All regressors were convolved with a canonical double-gamma hemodynamic response function, and temporal derivatives were added as well. A fixed-effects model was used for cross-run analysis (Beckmann, Jenkinson, & Smith, 2003). Cross-run analysis results were input to group-level analysis using a mixed-effects model (Woolrich, Behrens, Beckmann, Jenkinson, & Smith, 2004).

Results

For the trial-based analysis, the repeat>switch contrast revealed four significant clusters of activity. As shown in Figure 11 and Table 9, significant increases in activity were observed during repeat decisions (relative to switch) in 1) the left ventral striatum, 2) the posterior cingulate cortex/left precuneus, 3) the right lateral occipital cortex, and 4) a cluster that included a section of the parietal operculum, the right PCC, and the right precentral gyrus. There was no activity that survived cluster correction for the switch>repeat contrast.

Region                                              MNI coordinates     Max z-score
R parietal operculum/R PCC/R precentral gyrus       (36, -24, 24)          4.25
Ventral striatum                                    (-14, 12, -12)         3.90
R LOC                                               (34, -88, 22)          3.47
PCC/L precuneus                                     (-10, -52, 42)         3.68

Table 9: Whole-brain analysis for the repeat>switch contrast. Peak coordinates (MNI) for the decision-period contrast repeat>switch (Z > 2.3, p < 0.05, cluster-level correction).

Figure 11: Significant clusters corresponding to the repeat>switch contrast. Activity is observed in the ventral striatum and the posterior cingulate cortex, as well as in the right operculum.

Next, I carried out the block-based analysis described above in the larger data set (N = 71). Group analysis again revealed activity associated with repeated selections (either LL → LL or SS → SS) in the PCC and VS, as well as activity in the vmPFC, a cluster including the right hippocampus and the right posterior insula, and a cluster that included parts of the right precuneus and operculum. As in the trial-based analysis, I did not observe any activity associated with response switching. I also looked at neural correlates of switching in mismatched blocks. Here, variance in switching is almost completely driven by the series of random, computer-generated events (which option is generated to be more attractive from one trial to the next). At the group level, no activity was observed to correlate with streakiness in these mismatched trials.

Region of interest                                  MNI coordinates     Max z-score
Posterior cingulate/precuneus                       (-6, -42, 34)          4.41
Ventral striatum                                    (6, 0, -2)             4.01
R hippocampus/superior temporal gyrus/insula        (34, -26, -8)          3.92
vmPFC                                               (4, 40, -12)           3.72
R operculum                                         (32, -34, 20)          3.85

Table 10: Whole-brain analysis. Peak coordinates (MNI) for the parametric block regressor for streakiness (Z > 2.3, p < 0.05, cluster-level correction).

Figure 12: Significant clusters observed in correlation with the amount of streakiness in each three-trial block. Activity is observed in the posterior cingulate cortex (PCC), the ventral striatum (VS) including the left nucleus accumbens, and the ventromedial prefrontal cortex (vmPFC), as well as in the right hippocampus.

Discussion

I conducted secondary fMRI analyses corresponding to data from Study 1. Relative to switching, repeated preference (LL → LL or SS → SS) was associated with increased activity in a group of regions related to tracking subjective value, including the VS and PCC (for meta-analyses, see Bartra et al., 2013; Clithero & Rangel, 2013). Repetition in choice was also associated with a relative increase in the lateral occipital cortex, which has also been shown to track value when stimuli are presented visually (Persichetti, Aguirre, & Thompson-Schill, 2015). In order to allow inclusion of a larger number of participants, I followed up by conducting a block-based analysis in which each group of 3 trials was coded based on the number of switches. Again I observed an association between repeating preference and activity in regions previously linked to value tracking. This included activation in bilateral VS and PCC (both overlapping the clusters observed in the trial-based analysis) as well as the vmPFC. Early studies examining the neural correlates of intertemporal choice reported these regions to be correlated with the subjective value of rewards (McClure et al., 2004; Kable & Glimcher, 2007), and sometimes with the choice of the more immediate option (McClure et al., 2007).
In evaluating results correlating streakiness with value-related regions, one consideration is that these analyses may actually be capturing variance related to effects previously demonstrated in the ITC literature. These value-related regions may be more active in some cases when the individual is increasingly likely to repeatedly choose the larger, delayed option, and in other cases when the individual is increasingly likely to repeatedly choose the smaller, immediate option. More generally, if one option is preferred on most trials, subjective value may tend to be lower on the minority of trials in which the other alternative is selected. Importantly, in the case that "matched" ITC trials are not actually well matched, switch trials will always include one choice of the less preferred option (and a repeated choice is more likely to be a sequence in which the generally preferred option is chosen twice in a row). To test for this possibility, I extracted mean signal from the VS and PCC in the event-related analysis, and examined signal change among the half of participants for whom there was less evidence of mismatch (LL and SS choices between 33% and 67%). Using independent-samples t-tests, I found no evidence that the effect was diminished in the well-matched subsample relative to the less well-matched subsample (VS: t(60) = -0.06, p = 0.98 for the block design, t(48) = 0.02, p = 0.98 for the event-related design; PCC: t(60) = -1.4, p = 0.16 for the block design, t(48) = -0.34, p = 0.74 for the event-related design; vmPFC: t(60) = -0.41, p = 0.66 for the block design).

These imaging results may connect to fMRI studies using cognitive dissonance designs, which have observed activity in value-tracking regions of the brain, including the PCC (Kitayama et al., 2013) and various substructures of the striatum (Sharot et al., 2009; Izuma et al., 2010), as part of the process that strengthens preference for previously chosen alternatives. One interpretation that has been offered is that, subsequent to a difficult choice, the individual seeks to justify the chosen option, which is observed as increased activity in value-tracking regions. While this is a possible explanation for the correlation of the PCC, vmPFC, and VS with streakiness in ITC, the present analysis is not precise enough to demonstrate the time course of the increased activity. Therefore, we do not know whether these value-tracking regions activate subsequent to choice in a way that influences the choice on the next trial, as the cognitive dissonance/choice-induced preference account would predict.

One other complication, specific to the trial-based analysis, for interpreting differences in activity between repeat and switch trials is that there is a systematic difference in reaction time between the two types of trials; specifically, reaction times for switch trials are longer. While this has recently been proposed as an interesting finding in related work on perceptual decision-making (Urai, Braun, & Donner, 2017), for the current study this effect appears to follow from the fact that when matched ITC trials closely reflect true indifference between options, reaction times corresponding to SS and LL choices are similar, whereas when either SS or LL is more likely to be chosen, reaction time corresponding to the less likely choice is systematically longer (Rodriguez et al., 2014; Krajbich et al., 2015).
In Study 1, linear regression significantly predicts reaction time from the observed probability of selecting the chosen option (i.e., the percentage of time the same choice was made within its bin of trials; see Chapter II), where the β for choice probability is -0.30 (t = -3.8, p < 0.01). This implies, for example, that the average individual will take approximately 0.15 seconds longer to choose an option they are only 25% likely to choose than to choose an option they are 75% likely to choose. Because unlikely choices are infrequent, they have, as noted above, a greater than 50% chance of being switch trials. Switch trials may thus tend to have longer RTs because there are more unlikely choices in this category than would be expected by chance. Consistent with this logic, I found that mean RT by participant for switch trials was on average longer than for repeat trials (Mean (CI) = 0.14 (0.04, 0.24); t(55) = 2.9, p = 0.005), and participants who chose the same option in sequence more often tended to respond more slowly on switch trials relative to repeat trials (ρ = 0.3, n = 56, p = 0.03). Since time-on-task has been shown to correlate with BOLD signal differences, it is possible that our observed differences are influenced by time on task rather than by anything relating to the two types of sequential decisions. I tested whether average signal by participant in the event-related analysis (where decision time may have been systematically different for the events being contrasted) was associated with the average participant difference in reaction time between repeat and switch trials, and did not find evidence of a correlation (ρ = 0.08, d.f. = 50, p = 0.59).

In sum, fMRI analysis of streakiness in ITC revealed that choosing the same option sequentially was correlated with brain activity in regions implicated in value representation, including the PCC, VS, and vmPFC. To the extent that this reflects a bias towards streakiness in choice, it may arise through a mechanism similar to choice-induced preference, a subset of the cognitive dissonance literature.

Chapter VI. General Discussion

In a series of intertemporal choice studies, I demonstrate that there is nonindependence in this type of decision-making: in choosing between smaller, immediate rewards and larger, delayed rewards, individuals are more likely to choose the same option in consecutive trials than would be predicted based on a model of their overall decision-making behavior. This is shown using different experimental designs, lessening the likelihood that the observed effects are related to idiosyncrasies of individual experiments, such as presentation order. In attempting to understand the mechanisms that lead to this finding, I consider two main possibilities: 1) that there is a behavioral phenomenon responsible for a causal effect of prior behavior on subsequent choice when the options are similarly attractive, and 2) that there exists, at the individual level, a latent, moment-to-moment, autocorrelated preference function for evaluating delayed rewards, and that in revealing temporary drifts in preference within individuals, choices will appear to clump. These two explanations are potentially, but not necessarily, in competition, as they could both occur at the same time, and I find unique evidence for both.
For example, fMRI results suggest that on trials where participants switch choices, regions of the brain associated with constructing value show increased activity. In support of the hypothesis that preference drifts over the course of a task, I show some evidence from an intertemporal matching task that discounting is autocorrelated, especially in the condition in which individuals are repeatedly asked to produce delayed equivalents of present rewards.

Individual Differences

One lingering question from these results is whether it is possible to predict which individuals are more "streaky" than others in their choices. Some motivation for this question stems from results by Xue and colleagues, who found that participants scoring high in trait urgency were more likely to accept a monetary gamble immediately after a trial on which they had declined to take a risk, suggesting that reversals of preference may result from momentary 'urges'. Higher-urgency participants also exhibited more activation in the insular cortex, a region implicated in the integration of bodily states (Xue et al., 2010). In concert with the experiments demonstrating streakiness in intertemporal choice, we administered a number of psychometric questionnaires, including the Barratt Impulsiveness Scale (Patton, Stanford, & Barratt, 1995), the Behavioral Inhibition/Behavioral Activation Scale (Carver & White, 1994), the UPPS Impulsive Behavior Scale (Whiteside & Lynam, 2001), and the Temperament and Character Inventory (Cloninger, Svrakic, & Przybeck, 1993). Some of these scales were chosen as measures of impulsivity, a construct that individual differences in delay discounting have also been suggested to index. Indeed, correlations have been observed between behavior in ITC tasks and trait measures of impulsivity (de Wit, Flory, Acheson, McCloskey, & Manuck, 2007; Mobini, Grant, Kass, & Yeomans, 2007). Other measures, particularly the positive and negative urgency subscales of the UPPS and the novelty-seeking subscale of the TCI, were administered because I hypothesized a potential relationship with streakiness in decision-making. However, I did not observe consistent evidence across studies that a propensity for streakiness is related to specific trait characteristics. One possible explanation is the amount of noise involved in identifying individual differences in streakiness, which contributes to the likelihood of type II error. In Studies 1 and 2, we attempted to present participants with choices on which either option was equally likely to be chosen; however, due to a number of factors also observed in similar studies conducted elsewhere (see Figure 3B of Rodriguez et al., 2015 for an example), participants often showed fairly strong preference for one option over the other. The possibility that streakiness is not equally likely across trials with differing levels of ambivalence suggests that some of the variance attributed to individual differences may instead be the result of design characteristics.

Future Directions

While I discuss many experiments investigating nonindependence in delay discounting, there remain many questions that are either insufficiently resolved or yet to be addressed. Near the end of Chapter V, I attempt to connect two separate delay discounting studies recruiting the same participants in order to translate from autocorrelation in the matching task to streakiness in the choice task.
This analysis was conducted with the logic that precise measurement of autocorrelation in an individual's moment-to-moment discounting from a matching task could determine how much streakiness should be observed in choice, assuming that streakiness in choice is the direct result of drifts in time preference. Unfortunately, only a small sample of participants completed both tasks, and the tasks were administered far apart in time, without counterbalancing of order. One suggestion for future work would be to administer both tasks in the same session, or to combine matching and choice trials within one task.

Another remaining question concerns the generalizability of these findings across domains. For example, is there similar evidence of nonindependence in other domains of value-based decision-making? In the domain of decision-making under risk, it is well known that gamblers are more likely to bet on outcomes that have been repeatedly observed (the hot-hand effect) or on outcomes that are "due," having not been observed for longer than expected (the gambler's fallacy) (Sundali & Croson, 2006; Rabin & Vayanos, 2010). These behaviors suggest that belief in the non-independence of independent outcomes causes non-independence in gambling behavior. In Appendix B, I report the results of a probability discounting study with choices between certain and risky rewards, in which I do not find nonindependence in choice. However, in the study I conducted, decisions were not played out on a trial-to-trial basis, whereas in the cases where nonindependence does appear, the outcomes of decisions are realized before the next decision is made.

One other domain where generalizability may be explored is perceptual decision-making. In one recently published study, participants completed both a perceptual decision-making task and a financial decision-making task. The experimenters applied drift-diffusion modeling to the observed behavior and found that individual differences in performance on the two tasks correlated via what they attributed to a common "belief formation" process, reflected in how quickly participants respond when the current stimulus is consistent versus inconsistent with previous stimuli (Frydman & Nave, 2016). Additionally, a recently published perceptual study examined sequential effects in a task comparing the motion coherence of random-dot patterns. The authors found evidence of streakiness in choices between different pairs of patterns, and further connected these findings to reaction-time and pupillary measures (Urai et al., 2017). Identification of streakiness in perceptual tasks may be initial evidence of a common underlying mechanism. However, future examination of streakiness may benefit from pairing intertemporal discounting with perceptual tasks that may be qualitatively similar, such as tasks related to time perception.

In conclusion, I have demonstrated, in a series of studies, evidence for nonindependence in delay discounting behavior. I find that this may result partly from drift in discounting, producing series of trials in which discounting is comparatively shortsighted and other series in which it is comparatively farsighted. However, the amount of drift I observe in a choice-free context may not be sufficient to account for the nonindependence observed as streakiness in choice. To this end, I suggest that nonindependence of intertemporal decisions may also result from decisions having a causal effect on subsequent decisions.
References

Augustine, A. A., & Larsen, R. J. (2011). Affect regulation and temporal discounting: Interactions between primed, state, and trait affect. Emotion, 11(2), 403–412.
Bartra, O., McGuire, J. T., & Kable, J. W. (2013). The valuation system: A coordinate-based meta-analysis of BOLD fMRI experiments examining neural correlates of subjective value. NeuroImage, 76, 412–427.
Baumeister, R. F., Bratslavsky, E., Muraven, M., & Tice, D. M. (1998). Ego depletion: Is the active self a limited resource? Journal of Personality and Social Psychology, 74(5), 1252–1265.
Baumeister, R. F., Vohs, K. D., & Tice, D. M. (2007). The strength model of self-control. Current Directions in Psychological Science, 16(6), 351–355.
Becker, G. M., Degroot, M. H., & Marschak, J. (1964). Measuring utility by a single-response sequential method. Systems Research and Behavioral Science, 9(3), 226–232.
Beckmann, C. F., Jenkinson, M., & Smith, S. M. (2003). General multilevel linear modeling for group analysis in FMRI. NeuroImage, 20(2), 1052–1063.
Box, G., Jenkins, G. M., Reinsel, G. C., & Ljung, G. M. (2016). Time Series Analysis: Forecasting and Control (4th ed.). Wiley.
Brehm, J. W. (1956). Postdecision changes in the desirability of alternatives. The Journal of Abnormal and Social Psychology, 52(3), 384–389.
Carter, R. M., Meyer, J. R., & Huettel, S. A. (2010). Functional Neuroimaging of Intertemporal Choice Models: A Review. Journal of Neuroscience, Psychology, and Economics, 3(1), 27–45.
Carver, C. S., & White, T. L. (1994). Behavioral inhibition, behavioral activation, and affective responses to impending reward and punishment: The BIS/BAS Scales. Journal of Personality and Social Psychology, 67(2), 319–333.
Chavez, M. E., Villalobos, E., Baroja, J. L., & Bouzas, A. (2017). Hierarchical Bayesian modeling of intertemporal choice. Judgment and Decision Making, 12(1), 19–28.
Clewett, D., Luo, S., Hsu, E., Ainslie, G., Mather, M., & Monterosso, J. R. (2014). Increased functional coupling between the left fronto-parietal network and anterior insula predicts steeper delay discounting in smokers. Human Brain Mapping, 35(8), 3774–3787.
Clithero, J. A., & Rangel, A. (2013). Informatic parcellation of the network involved in the computation of subjective value. Social Cognitive and Affective Neuroscience, 9(9), 1289–1302.
Cloninger, C. R., Svrakic, D. M., & Przybeck, T. R. (1993). A Psychobiological Model of Temperament and Character. Archives of General Psychiatry, 50(12), 975–990.
Dai, J., & Busemeyer, J. R. (2014). A Probabilistic, Dynamic, and Attribute-Wise Model of Intertemporal Choice. Journal of Experimental Psychology: General, 143(4), 1480–1514.
de Wit, H., Flory, J. D., Acheson, A., McCloskey, M., & Manuck, S. B. (2007). IQ and nonplanning impulsivity are independently associated with delay discounting in middle-aged adults. Personality and Individual Differences, 42(1), 111–121.
Effron, D. A., Cameron, J. S., & Monin, B. (2009). Endorsing Obama licenses favoring whites. Journal of Experimental Social Psychology, 45(3), 590–593.
Ersner-Hershfield, H., Garton, M. T., Ballard, K., Samanez-Larkin, G. R., & Knutson, B. (2009). Don't stop thinking about tomorrow: Individual differences in future self-continuity account for saving. Judgment and Decision Making, 4(4), 280–286.
Festinger, L. (1957). A theory of cognitive dissonance. Stanford University Press.
Figner, B., Knoch, D., Johnson, E. J., Krosch, A. R., Lisanby, S. H., Fehr, E., & Weber, E. U. (2010). Lateral prefrontal cortex and self-control in intertemporal choice. Nature Neuroscience, 13(5), 538–539.
Frydman, C., & Nave, G. (2016). Extrapolative beliefs in perceptual and economic decisions: Evidence of a common mechanism. Management Science.
Gelman, A., & Hill, J. (2007). Data analysis using regression and multilevel/hierarchical models. New York: Cambridge University Press.
Hagger, M. S., Chatzisarantis, N. L. D., Alberts, H., Anggono, C. O., Batailler, C., Birt, A. R., … Zwienenberg, M. (2016). A Multilab Preregistered Replication of the Ego-Depletion Effect. Perspectives on Psychological Science, 11(4), 546–573.
Hardisty, D. J., Thompson, K. F., Krantz, D. H., & Weber, E. U. (2013). How to measure time preferences: An experimental comparison of three methods. Judgment and Decision Making, 8(3), 236–249.
Hare, T. A., Hakimi, S., & Rangel, A. (2014). Activity in dlPFC and its effective connectivity to vmPFC are associated with temporal discounting. Frontiers in Neuroscience, 8, 50.
Hsu, E., Brocas, I., & Monterosso, J. R. (2013, September). Frame-Based Models of Delay-Discounting and the Neural Correlates. Presented at the Society for Neuroeconomics, Lausanne, Switzerland.
Izuma, K., Matsumoto, M., Murayama, K., Samejima, K., Sadato, N., & Matsumoto, K. (2010). Neural correlates of cognitive dissonance and choice-induced preference change. Proceedings of the National Academy of Sciences, 107(51), 22014–22019.
Jarmolowicz, D. P., Cherry, J. B. C., Reed, D. D., Bruce, J. M., Crespi, J. M., Lusk, J. L., & Bruce, A. S. (2014). Robust relation between temporal discounting rates and body mass. Appetite, 78, 63–67.
Jenkinson, M., & Smith, S. (2001). A global optimisation method for robust affine registration of brain images. Medical Image Analysis, 5(2), 143–156.
Kable, J. W., & Glimcher, P. W. (2007). The neural correlates of subjective value during intertemporal choice. Nature Neuroscience, 10(12), 1625–1633.
Kitayama, S., Chua, H. F., Tompson, S., & Han, S. (2013). Neural mechanisms of dissonance: An fMRI investigation of choice justification. NeuroImage, 69, 206–212.
Krajbich, I., Armel, C., & Rangel, A. (2011). Visual fixations and the computation and comparison of value in simple choice. Nature Neuroscience, 14(9).
Krajbich, I., Bartling, B., Hare, T., & Fehr, E. (2015). Rethinking fast and slow based on a critique of reaction-time reverse inference. Nature Communications, 6, 7455.
Laibson, D. (1997). Golden Eggs and Hyperbolic Discounting. Quarterly Journal of Economics, 112(2), 443–477.
Loewenstein, G., & Prelec, D. (1992). Anomalies in Intertemporal Choice: Evidence and an Interpretation. The Quarterly Journal of Economics, 107(2), 573–597.
Luo, S., Ainslie, G., & Monterosso, J. R. (2014). The behavioral and neural effect of emotional primes on intertemporal decisions. Social Cognitive and Affective Neuroscience, 9(3), 283–291.
Luo, S., Ainslie, G., Pollini, D., Giragosian, L., & Monterosso, J. R. (2012). Moderators of the association between brain activation and farsighted choice. NeuroImage, 59(2), 1469–1477.
Mackillop, J., Amlung, M. T., Few, L. R., Ray, L. A., Sweet, L. H., & Munafo, M. R. (2011). Delayed reward discounting and addictive behavior: A meta-analysis. Psychopharmacology, 216, 305–321.
Marlatt, G., & Gordon, J. (1978). Determinants of relapse: Implications for the maintenance of behavior change. In Behavioral medicine: Changing health lifestyles (pp. 410–452). Elmsford: Pergamon.
Mazur, J. E. (1987). An adjusting procedure for studying delayed reinforcement. Quantitative Analyses of Behavior, 5, 55–73.
McClure, S. M., Ericson, K. M., Laibson, D. I., Loewenstein, G., & Cohen, J. D. (2007). Time discounting for primary rewards. The Journal of Neuroscience, 27(21), 5796–5804.
McClure, S. M., Laibson, D. I., Loewenstein, G., & Cohen, J. D. (2004). Separate neural systems value immediate and delayed monetary rewards. Science, 306(5695), 503.
Mischel, W. (2014). The marshmallow test: Understanding self-control and how to master it (1st ed.). Little, Brown, and Company.
Mischel, W., Ebbesen, E. B., & Zeiss, A. (1972). Cognitive and attentional mechanisms in delay of gratification. Journal of Personality and Social Psychology, 21(2), 204–218.
Mischel, W., Shoda, Y., & Peake, P. K. (1988). The nature of adolescent competencies predicted by preschool delay of gratification. Journal of Personality and Social Psychology, 54(4), 687–696.
Mobini, S., Grant, A., Kass, A. E., & Yeomans, M. R. (2007). Relationships between functional and dysfunctional impulsivity, delay discounting and cognitive distortions. Personality and Individual Differences, 43(6), 1517–1528.
Nevin, J. A. (1995). Behavioral economics and behavioral momentum. Journal of the Experimental Analysis of Behavior, 64(3), 385–395.
Nevin, J. A., & Grace, R. C. (2000). Behavioral momentum and the law of effect. Behavioral and Brain Sciences, 23(1), 73–130.
Nevin, J. A., Tota, M. E., Torquato, R. D., & Shull, R. L. (1990). Alternative reinforcement increases resistance to change: Pavlovian or operant contingencies? Journal of the Experimental Analysis of Behavior, 53(3), 359–379.
Nicolle, A., Bach, D. R., Driver, J., & Dolan, R. J. (2011). A Role for the Striatum in Regret-related Choice Repetition. Journal of Cognitive Neuroscience, 23(4), 845–856.
Patton, J. H., Stanford, M. S., & Barratt, E. S. (1995). Factor structure of the Barratt Impulsiveness Scale. Journal of Clinical Psychology, 51(6), 768–774.
Persichetti, A. S., Aguirre, G. K., & Thompson-Schill, S. L. (2015). Value is in the eye of the beholder: Early visual cortex codes monetary value of objects during a diverted attention task. Journal of Cognitive Neuroscience, 27(5), 893–901.
Pinheiro, J. C., & Bates, D. M. (2001). Mixed-effects models in S and S-PLUS. Springer.
Rabin, M., & Vayanos, D. (2010). The Gambler's and Hot-Hand Fallacies: Theory and Applications. The Review of Economic Studies, 77(2), 730–778.
Rachlin, H., Raineri, A., & Cross, D. (1991). Subjective probability and delay. Journal of the Experimental Analysis of Behavior, 55(2), 233.
Ratcliff, R., & McKoon, G. (2008). The Diffusion Decision Model: Theory and Data for Two-Choice Decision Tasks. Neural Computation, 20(4), 873–922.
Read, D., & Roelofsma, P. H. M. P. (2003). Subadditive versus hyperbolic discounting: A comparison of choice and matching. Organizational Behavior and Human Decision Processes, 91, 140–153.
Rodriguez, C. A., Turner, B. M., & McClure, S. M. (2014). Intertemporal Choice as Discounted Value Accumulation. PLoS ONE, 9(2), e90138.
Rodriguez, C. A., Turner, B. M., Van Zandt, T., & McClure, S. M. (2015). The neural basis of value accumulation in intertemporal choice. European Journal of Neuroscience, 42(5), 2179–2189.
Samuelson, P. A. (1938). A note on the pure theory of consumer's behavior. Economica, 5(17), 61–71.
Schwarz, G. (1978). Estimating the Dimension of a Model. The Annals of Statistics, 6(2), 461–464.
Sharot, T., De Martino, B., & Dolan, R. J. (2009). How Choice Reveals and Shapes Expected Hedonic Outcome. The Journal of Neuroscience, 29(12), 3760–3765.
Shoda, Y., Mischel, W., & Peake, P. K. (1990). Predicting Adolescent Cognitive and Self-Regulatory Competencies From Preschool Delay of Gratification: Identifying Diagnostic Conditions. Developmental Psychology, 26(6), 978–986.
Sundali, J., & Croson, R. (2006). Biases in casino betting: The hot hand and the gambler's fallacy. Judgment and Decision Making, 1(1), 1–12.
Thaler, R. H. (1981). Some empirical evidence on dynamic inconsistency. Economics Letters, 8(3), 201–207.
Urai, A. E., Braun, A., & Donner, T. H. (2017). Pupil-linked arousal is driven by decision uncertainty and alters serial choice bias. Nature Communications, 8.
Van den Bergh, B., Dewitte, S., & Warlop, L. (2008). Bikinis Instigate Generalized Impatience in Intertemporal Choice. Journal of Consumer Research, 35(1), 85–97.
van den Bos, W., Rodriguez, C. A., Schweitzer, J. B., & McClure, S. M. (2014). Connectivity strength of dissociable striatal tracts predict individual differences in temporal discounting. Journal of Neuroscience, 34(31), 10298–10310.
Vandekerckhove, J., Matzke, D., & Wagenmakers, E.-J. (2015). Model Comparison and the Principle of Parsimony. In J. R. Busemeyer, Z. Wang, J. T. Townsend, & A. Eidels (Eds.), The Oxford Handbook of Computational and Mathematical Psychology (pp. 300–317).
Vincent, B. T. (2016). Hierarchical Bayesian estimation and hypothesis testing for delay discounting tasks. Behavior Research Methods, 48(4), 1608–1620.
Weber, B. J., & Huettel, S. A. (2008). The neural substrates of probabilistic and intertemporal decision making. Brain Research, 1234, 104–115.
Weber, E. U., Johnson, E. J., Milch, K. F., Chang, H., Brodscholl, J. C., & Goldstein, D. G. (2007). Asymmetric discounting in intertemporal choice: A query-theory account. Psychological Science, 18(6), 516–523.
Whiteside, S. P., & Lynam, D. R. (2001). The five factor model and impulsivity: Using a structural model of personality to understand impulsivity. Personality and Individual Differences, 30, 669–689.
Wileyto, E. P., Audrain-McGovern, J., Epstein, L. H., & Lerman, C. (2004). Using logistic regression to estimate delay-discounting functions. Behavior Research Methods, Instruments, & Computers, 36(1), 41–51.
Woolrich, M. W., Behrens, T. E. J., Beckmann, C. F., Jenkinson, M., & Smith, S. M. (2004). Multilevel linear modelling for FMRI group analysis using Bayesian inference. NeuroImage, 21(4), 1732–1747.
Xue, G., Lu, Z., Levin, I. P., & Bechara, A. (2010). The impact of prior risk experiences on subsequent risky decision-making: The role of the insula. NeuroImage, 50(2), 709–716.

Appendices

Appendix A. Simulation test

In this test I simulate decisions based on a choice set and the participant's overall pattern of decisions over that choice set. In each iteration of the simulation, I compute the number of times a participant chooses the same reward as in the previous trial (SS after SS, or LL after LL). I then take the mean number of repeated choices simulated for each participant over all iterations. This number was then compared to a count of the number of times each participant actually repeated their previous choice over the real choice set. While the ITC studies conducted herein utilize a number of different designs, one commonality is that at the beginning of each run of each task, a best guess of the participant's indifference-k is either entered or computed.
That is, we estimate the value of k in the hyperbolic subjective value (SV) model for a delayed reward, SV = A/(1 + k*D), where A is the amount of the reward and D is the delay until the reward is available, at which the participant is indifferent. In some cases, multiple indifference-k values are estimated to account for different sets of rewards (mainly different magnitudes of reward size). In each task, participants are presented with decisions tailored to their preference; that is, for alternative 1 available immediately and alternative 2 available after a delay, the following holds (with small rounding error): A_1 = A_2/(1 + k_i*D_2). In most cases, this is done by computing an immediate reward A_1 that resolves this equation for a given k and a randomly generated A_2 and D_2, although in some cases it is A_2 that is computed to resolve the equation.

In some tasks, we present options on which participants are expected to show some degree of preference for either the immediate or the delayed reward over the alternative. This is done by generating trials that resolve the above equation based on k_i = k_indiff * 10^n, where n = +/- 1/8 or +/- 1/4. Presenting a participant with a decision where k_i > k_indiff creates the expectation that the participant is more likely than chance to choose the LL reward, whereas presenting a decision where k_i < k_indiff should result in choosing the SS reward at greater than chance. While I intended to present participants with decisions on which some ambivalence would be displayed, some participants consistently chose SS or LL for a given k_i (this is not to say that there was no ambivalence, only that it was not evident in behavior, since decisions in that bin were consistent). In other cases, participants are presented with decisions where n = +/- 1. Here participants are expected to show overwhelming preference for either the SS or the LL reward. These choices are referred to elsewhere as easy choices or trials and were expected to generate very little ambivalence. Such trials are ignored in this analysis, unless otherwise stated.

To conduct this simulation, I consider, for each trial, the probability that the participant will choose the same reward as in the previous trial. Choices are then simulated based on these probabilities and aggregated. Choice probabilities are computed as follows: 1) I divide all trials into bins such that every trial in a bin was generated based on the same k_i (again, with small amounts of rounding error). I then estimate the probability p_LL that the participant will choose the LL reward on each trial as the percentage of instances the LL option was chosen among trials in that bin. Next, 2) for each trial I compute the probability that the participant will choose the same reward as the actual choice on the previous trial: p_same = p_LL if the previous choice was LL, and 1 - p_LL if the previous choice was SS. I refer to this as a model-free approach to determining p_LL and p_same. Alternatively, as step 1) I fit a logistic regression to each participant's behavior, with the decision as the dependent variable and the log-transformed value of k_trial as the independent variable. I treat the fitted values of this regression as the p_LL for each trial and use these probabilities to complete step 2) as before, in order to compute p_same. I refer to this as the model-based approach to computing the probability of choice.
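The following is a minimal R sketch of this procedure for a single participant, not the original analysis code; the data frame `d`, its columns (`k_bin`, `chose_ll`), and the number of iterations are hypothetical stand-ins.

# Minimal sketch of the simulation test for one participant (illustrative only).
# Assumes a hypothetical data frame `d`, in trial order, with columns k_bin (the
# k_i used to generate each trial) and chose_ll (1 if LL was chosen, 0 if SS).
set.seed(1)
n_iter <- 10000

# Step 1, model-free: p_LL for each trial is the proportion of LL choices in its bin
p_ll <- ave(d$chose_ll, d$k_bin, FUN = mean)
# Model-based alternative: p_ll <- fitted(glm(chose_ll ~ log(k_bin), family = binomial, data = d))

# Step 2: probability of repeating the previous trial's actual choice
prev <- c(NA, d$chose_ll[-nrow(d)])
p_same <- ifelse(prev == 1, p_ll, 1 - p_ll)   # NA on the first trial
p_same <- p_same[!is.na(p_same)]

# Observed repeats vs. the distribution of repeats simulated under independence
observed_repeats <- sum(d$chose_ll[-1] == prev[-1])
simulated_repeats <- replicate(n_iter, sum(rbinom(length(p_same), 1, p_same)))
mean(simulated_repeats)                      # expected repeat count under independence
mean(simulated_repeats >= observed_repeats)  # proportion of simulations at least as streaky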
[Figure A1. Two ways of determining probability of choice. The dots represent the percentage of LL choices at each unit of log(k) distance from the input k, taken as the model-free p(LL); the fitted logistic curve (based on all participant data) is the model-based p(LL).]

Both methods of computing probabilities assume independence within each choice set. Therefore, I suggest that a demonstration in which actual behavior diverges from the results of a simulation based on these assumptions is evidence that behavior is nonindependent. Once I have computed these probabilities for each participant, I simulate sequences of choices based on the probability for each choice, in the actual order it occurs in the study. For each such sequence, which represents one simulation of the actual study, I count the number of times the same option was produced consecutively. These counts of repeated choice are then compared to the actual sequence of choices to determine whether there is evidence that the actual sequence of choices is nonindependent.

Appendix B. Streakiness in Probability Discounting

Methods

66 participants (33 male, 33 female) were recruited from the USC subject pool, as well as through word of mouth. Participants who were not undergraduates participating in the subject pool were all graduate students at USC. Those not participating in the subject pool received a base payment of $20 for their participation.

Task

Participants made decisions between certain and risky rewards in two contexts: Matched, where the sure and risky options have similar subjective utility, and Mismatched, where either the sure or the risky option carries substantially larger utility. Utility is measured using a hyperbolic utility model: V = A/(1 + h*θ), where θ = (1 - p)/p. In this model, V represents subjective value, A the amount of the probabilistic reward, θ the odds against receiving a reward of probability p, and h a free parameter that determines implied discounting, analogous to the way the k parameter is used in delay discounting. Increasingly larger h implies risk aversion, a tendency to avoid probabilistic rewards in favor of smaller certain rewards. Critically, h = 1 implies risk neutrality, the case in which a certain reward and a probabilistic reward of equal expected value have equal subjective value; it follows that h < 1 implies risk-seeking behavior and h > 1 implies risk aversion.

For each participant, I obtained a parameter h for two ranges of reward magnitudes (small: a probabilistic reward between $21 and $35; medium: a probabilistic reward between $46 and $60) that approximates subjective utility at the individual level. For this purpose, the adaptive procedure used for intertemporal choice was modified for probability discounting. In the main task, Matched and Mismatched choices were presented in blocks of six trials (six Matched choice trials in succession, followed by six Mismatched choice trials) in two runs of 48 trials. Each block included three each of small and medium rewards, randomized in order. On each trial, participants viewed a fixation cross for 3 seconds followed by a choice screen. Participants chose between a certain option, presented on one side of the screen as "100% chance of $X," and a risky option, presented on the other side of the screen as "Y% chance of $Z." Y was always a number between 25 and 75, corresponding to a probability of receiving the reward between 0.25 and 0.75.
For the majority of participants, trial probabilities ended in either 5 or 0 (in addition to 33% and 67%), but 50% was not used, as it was reasoned that 50% would be easier for calculation and would thus bias reaction time. In Study 2, a large percentage of participants were excluded for failing to display ambivalence during Matched choices. I also observed this occurring with some of the initial participants of this study; thus, to prevent it from continuing to happen, participants' utility parameters were sometimes recalculated after every block of six Matched trials. This was done as follows: if a participant always chose the same option for the three trials of either reward size, the parameter for that reward size was adjusted so that the participant would be more likely to display indifference during the subsequent Matched trial block. One trial from this task was randomly played out for each participant at the end of the study. If the participant selected a probabilistic reward in the randomly chosen trial, I used a prize wheel to determine whether the reward was won. This was done by drawing a 'win' region of the wheel that matched the probability of the reward, and then spinning the wheel. If the arrow of the wheel landed in the 'win' region, the participant received the probabilistic reward.

Analysis

Data were preprocessed using R. Reaction time was measured as the time to response following presentation of the choice options. Analysis of risky decision-making again used the hyperbolic utility model. As in the intertemporal choice tasks, trials were grouped into bins based on the h parameter used to generate the trial alternatives. For every trial in a bin, h_trial = (A_risky - A_certain)/(A_certain*θ). This h parameter is then used to predict individual choices between certain and risky options, using logistic regression.

Results

62 of 66 participants are used for this analysis. Excluded participants appeared to choose at random, and often failed to choose the expected option in the Mismatched condition. For included participants, I first demonstrate that the hyperbolic (probability) discounting model offers an adequate fit to the data. To show this, I used logistic regression to model choice based on the h_trial parameter implied by the attributes of the trial alternatives, for the equation A_1 = A_2/(1 + h_trial*θ_2).

[Figure B1. Percentage of trials on which the risky option was chosen, by difference in subjective value between options. A value difference of zero represents trials where the certain and risky options are matched (i.e., individually estimated to be equally valued), and each unit i of value difference represents a 10^(i/8) increase in h for two alternatives where A_1 = A_2/(1 + h_trial*θ_2), with θ = (1 - p)/p for risky-reward probability p.]

No evidence of streakiness

In order to test for the existence of streakiness in decision-making, I used a simulation test similar to those used for intertemporal choice, in particular to Study 3. Specifically, I used both "model-free" and "model-based" methods to simulate the probability of choosing the same option as the previous trial, for every trial.
For the model-free method, trials were binned based on the levels of preference built into the design (where, during the task, the level of preference was adjusted when behavior was consistent during blocks of Matched trials), such that for all trials in a bin, A_certain = A_risky/(1 + h_bin*θ_risky). I then calculated the overall probability of risky choice separately for each bin. For the model-based method, I fit each participant's decisions separately for small and medium rewards, using a logistic regression framework in which the decision was the dependent variable and the independent variable was the log-transformed value of h used to generate the alternatives. Fitted values from this model were then used as the probability of choosing the risky option on each trial. For both methods, decisions were simulated based on the generated probabilities for each trial, and the number of times the same choice was actually made consecutively was compared to the distribution of repeated choice produced by the simulation.

Using the model-based method, participants on average chose the same option about as often as the mean of their simulated distribution, with individual ratios of observed to simulated repeats ranging from 71% to 181%; 22 participants were more streaky than the median of their simulations and 30 were less streaky. Using the model-free method, participants chose the same option 98% as often as the mean of the simulated distribution, with a range between 66% and 143%; 17 participants were more streaky than the median of their corresponding simulated distribution, and again 30 were less streaky. Paired t-tests between actual and simulated counts of repetition in choice provided no evidence of a tendency for or against repeating choices in sequence (t(61) = -1.3, p = 0.20 for the model-free method; t(61) = -0.15, p = 0.88 for the model-based method).

[Figure B2. Choice between certain and risky rewards is not streaky. Two methods of simulation (model-based and model-free, shown in separate panels) were used to generate distributions of how much repetition would be expected under the assumption of independence. Comparison of behavior with the mean of the simulations did not show behavior to systematically deviate from independence: dots representing individual participants (Simulated Repeats on the x-axis, Observed Repeats on the y-axis) do not systematically fall on either side of the y = x line.]
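As a final illustration, the group-level comparison summarized above could be carried out as in the following minimal R sketch (not the original analysis code); the data frame `repeats` and its columns `observed` and `simulated` are hypothetical names for per-participant observed repeat counts and mean simulated repeat counts.

# Minimal sketch of the group-level check, assuming a hypothetical data frame
# `repeats` with one row per participant: observed (actual repeat count) and
# simulated (mean repeat count across that participant's simulations).
t.test(repeats$observed, repeats$simulated, paired = TRUE)

# Figure B2-style check: points above the y = x line are streakier than simulated
plot(repeats$simulated, repeats$observed,
     xlab = "Simulated Repeats", ylab = "Observed Repeats")
abline(0, 1)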