University of Southern California Dissertations and Theses
Behavioral and neural evidence of state-like variance in intertemporal decisions
BEHAVIORAL AND NEURAL EVIDENCE OF STATE-LIKE VARIANCE IN INTERTEMPORAL DECISIONS

by Shan Luo

A Dissertation Presented to the FACULTY OF THE USC GRADUATE SCHOOL, UNIVERSITY OF SOUTHERN CALIFORNIA, In Partial Fulfillment of the Requirements for the Degree DOCTOR OF PHILOSOPHY (PSYCHOLOGY)

December 2012

Copyright 2012 Shan Luo

Acknowledgements

Life never fails to surprise me. As I am about to finish my Ph.D. degree in Brain and Cognitive Sciences, it feels unbelievable. I always wanted to be an engineer doing all sorts of fancy and complicated math during graduate school, but life opened a different door for me: a graduate program in Psychology. Surprisingly, I fell in love with researching interesting and even more complicated human behavior. Life surprised me again and again over the past five years. I started graduate school in a lab doing vision research, but then I became very excited about studying addiction. Just as I finished my first year of graduate school, my mentor Dr. John Monterosso came to USC; he is an expert in the neuroscience of addiction. Out of plain luck, we started researching together. Ever since then, I have had so much fun and joy working with John. It is hard to describe in words how thankful I am for John's unconditional support: the financial support for my projects and my traveling around the world to present our work, all the interesting and inspiring conversations, all the early-morning and weekend work parties... Beyond these outward things, John left many invisible treasures in my heart. He taught me to pursue things with perseverance.
When I was frustrated because data did not look good or results fell short of my expectations, he would always encourage me: "Keep moving and you will eventually stumble onto something." He taught me to be humble, kind, considerate, and willing to sacrifice myself for others... Unfortunately, all things come to an end, but I believe this has been one of the best times of my life so far. I am truly thankful for my mentor, Dr. John Monterosso.

It takes a village of people to produce a Ph.D. Here I give special thanks to my parents, who gave me a lot of freedom to explore the world; without them I would not have been able to come all the way from China to the States to pursue this degree. I thank my husband, Allen Tsao, who loves me with his whole heart and is always with me through ups and downs; my brothers and sisters from church in Los Angeles; my friends; my labmates Louise Cosand, Eustace Hsu, and Peggy Lin; my lab's research assistants Lisa Giragosion and Xochitl Cordova; my collaborators Dr. George Ainslie, Dr. Gui Xu, Dr. Bernd Figner, and Dr. Alex Pine; and my dissertation committee members Dr. Antoine Bechara, Dr. Richard John, and Dr. Giorgio Coricelli. Last but definitely not least, I am grateful to God for carrying me through these wonderful yet challenging five years with sufficient grace.
Table of Contents

Acknowledgements ii
List of Tables v
List of Figures vi
Abstract vii
Chapter 1: Functional Components and Associated Brain Structures of Intertemporal Decision-making 1
  Functional components of delay discounting task 3
  Contextual factors influence valuation of distant expectancies 6
  How values are integrated into intertemporal decision-making 9
  Brain structures relevant to produce patient choices 17
  Concerns and future studies 25
Chapter 2: Incidental Emotions' Effect on Intertemporal Choice 28
  Introduction 28
  Materials and methods 33
  Results 43
  Discussion 53
  Limitations 62
Chapter 3: Stochasticity in Intertemporal Choice 63
  Introduction 63
  Materials and methods 69
  Results 75
  Discussion 84
References 89

List of Tables

Table 1: Individual AIC estimates 76
Table 2: Parameter estimates based on probit model 77

List of Figures

Figure 1: Scanner task 38
Figure 2: Mean and SE of individual % LL for each condition 44
Figure 3: Mean and SE of individual % LL for each condition during trials where participants made accurate judgment on secondary task 45
Figure 4: Mean and SE of individual mean RT for each condition during decision-making 46
Figure 5: Mean and SE of individual mean RT for each condition during SS choice and LL choice 47
Figure 6: Neural representation of fear vs. happy facial expression 49
Figure 7: Neutral prime effect on choice 50
Figure 8: Correlation between behavioral prime effect and neural prime effect 51
Figure 9: PPI analysis result 52
Figure 10: Behavioral task with motivation primes 71
Figure 11: Prime effect on consistency of choice 79
Figure 12: Fitted data 80
Figure 13: Prime effect on choice 82
Figure 14: Reaction time 83

Abstract

Humans are distinct from other species in their capacity to pursue temporally distant goals. One paradigm ("delay discounting") used to investigate this capacity in lab settings requires people to make trade-offs between the magnitude and immediacy of outcomes.
In my dissertation, I first review discounting studies from a two-stage (valuation and choice) model perspective, presenting four different hypotheses regarding the connection between valuation and choice, as well as a summary of brain structures relevant to the facilitation of preferences for temporally distal rewards. Second, I examine an important yet underexplored area in intertemporal decision-making: state-like variance in discounting behavior, as opposed to the traditionally emphasized trait-like variance (study 1), as well as state-like variance in the consistency of intertemporal preference (study 2). In study 1, twenty-two participants completed a dual task in which they were required to make intertemporal choices while simultaneously holding an expressive face in memory. From trial to trial, the facial expression of the priming stimulus varied among three alternatives: 1) fearful, 2) happy, and 3) neutral. Brain activity was recorded using functional Magnetic Resonance Imaging for sixteen participants. Across participants, the fearful prime was associated with greater preference for delayed rewards (vs. the happy prime), and was also associated with greater signal change in the right dorsolateral prefrontal cortex (BA 9). In addition, fearful faces recruited greater brain activation in the posterior sector of the anterior cingulate cortex (ACC) (BA 24) during decision-making. Furthermore, individual brain signal change in this region predicted the behavioral difference in preference for delayed options during the fear prime vs. the happy prime. Given study 1's findings suggesting some effect of approach-/withdrawal-related motivation on intertemporal preference, in study 2, anticipatory gain- and loss-related approach/withdrawal motivation primes were introduced while participants performed an individualized intertemporal choice task.
Primes varied on a trial-by-trial basis among anticipation of gain, anticipation of loss, anticipation of either gain or loss, and a neutral condition without anticipation of gain or loss. Participants' choice data were modeled using probit regression, and variability of choice was measured by the steepness of the probit function. In order to measure stochasticity with minimal influence of the functional form of discounting, choices during study 2 were always between an immediate option and a four-month-delayed option. Across participants, there was a trend toward higher consistency in intertemporal choice in the loss-related motivation prime condition (relative to the gain prime). Taken together, these findings suggest that situational affect influences intertemporal behavior. Fear (relative to happiness) increased farsightedness, possibly through "inhibition spillover", and the posterior ACC is a potential neural correlate of this hypothesized biasing signal. Anticipatory loss-related motivation reduced stochasticity in preference, possibly through increased emotional responses or value signaling.

Chapter 1: Functional Components and Associated Brain Structures of Intertemporal Decision-making

Willpower draws on the evaluation of a prospect, and that evaluation may not take place if attention is not properly driven to both the immediate trouble and the future payoff, to both the suffering now and the future gratification. Remove the latter and you will remove the lift from under your willpower's wings (Damasio, 1994, page 177).

Choices we make in our daily life often involve tradeoffs between short-term and long-term outcomes, such as choosing between going to Hawaii now or saving money for retirement, or between eating French fries now and having a healthy body in the long run.
The motivations we have in relation to temporally distant consequences of behavior allow us to forgo immediate gratification for greater future payoff, or to take on immediate hardship for future gratification. This motivation is essential for willpower and of great advantage in human life. For instance, it is of no use to know that sowing seeds in the spring will bring a harvest in the autumn unless we are sufficiently motivated by the prospect to take the action. But how our brain accomplishes this remains an open question in cognitive/affective neuroscience. In everyday life, the degree to which behavior is intentionally directed at temporally distant consequences is not always clear. Take the example of a medical school student toiling at school. A superficial analysis would presume that the student is going through immediate hardship in order to reach some distant gain. But the social standing of the individual is immediately increased when he or she is admitted and enrolled in medical school. It is not clear whether what appears to be future-oriented behavior is actually motivated by immediate social approval. Scientists who are interested in future-oriented behavior often use the delay discounting paradigm. In a typical experimental situation, subjects are required to choose between a smaller-sooner (SS) reward and a larger-later (LL) reward (e.g., "Would you like $20 today or $30 in a month?").
Various kinds of rewards have been used in such studies: points that can be exchanged for money (Forzano & Logue, 1994), health outcomes (Chapman, 1996; 2000; Chapman et al., 2001; Van der Pol & Cairns, 2001), hypothetical drug or alcohol (Madden, Petry, Badger, & Bickel, 1997; Odum & Rainaud, 2003), hypothetical money with context (e.g., "you just won some amount at a casino") (Thaler, 1981; Bohm, 1994; Chapman, 1996; Chesson & Viscusi, 2000), hypothetical money without context (Fuchs, 1982; Ainslie & Haendel, 1983; Madden, Petry, Badger, & Bickel, 1997; Monterosso, Ehrman, Napier, O'Brien, & Childress, 2001), actual money (Ainslie & Haendel, 1983; Kirby & Herrnstein, 1995; Richards, Zhang, Mitchell, & Wit, 1999; Crean, Wit, & Richards, 2000), consumer goods (Kirby & Herrnstein, 1995), food (Mischel & Grusec, 1967; Forzano & Logue, 1994), and juice (McClure, Ericson, Laibson, Loewenstein, & Cohen, 2007; Jimura, Myerson, Hilgard, Braver, & Green, 2009). Less frequently, choices among punishments have been used, including shocks (Cook & Barnes, 1964; Hare, 1966; Mischel, Grusec, & Masters, 1969) and aversive noise (Navarick, 1982). The number of alternatives can also be more than two. Rewards lose some of their value when they are delayed; how much future rewards are devalued is typically quantified by a temporal discounting function, and it differs across individuals. There has been a growing set of delay discounting studies paired with functional Magnetic Resonance Imaging (fMRI) (most reviewed in Carter, Meyer, & Huettel, 2010; Monterosso & Luo, 2010). However, no consensus has yet been reached regarding what has emerged from these studies. One of the challenges delay discounting studies face stems partly from the fact that the functional components involved in producing choices are not easy to specify, and indeed may vary considerably across participants.
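The temporal discounting function mentioned above is often modeled hyperbolically, V = A / (1 + kD), where A is the reward amount, D its delay, and k an individual discount-rate parameter. This is one common form from the discounting literature, offered here only as an illustration rather than as the specific function used in this dissertation; the parameter values below are likewise arbitrary.

```python
def hyperbolic_value(amount, delay_days, k):
    """Hyperbolic discounting: subjective value falls with delay.

    k is an individual discount-rate parameter; larger k means
    steeper devaluation of delayed rewards.
    """
    return amount / (1.0 + k * delay_days)

def prefers_larger_later(ss_amount, ss_delay, ll_amount, ll_delay, k):
    """Predict an SS-vs-LL choice by comparing discounted values."""
    return hyperbolic_value(ll_amount, ll_delay, k) > hyperbolic_value(ss_amount, ss_delay, k)

# "Would you like $20 today or $30 in a month?"
# A shallow discounter (k = 0.005) takes the delayed $30;
# a steep discounter (k = 0.05) takes the immediate $20.
print(prefers_larger_later(20, 0, 30, 30, k=0.005))  # True
print(prefers_larger_later(20, 0, 30, 30, k=0.05))   # False
```

Fitting k to a participant's choices yields the individual discount rate that the studies reviewed below relate to brain activity.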
Broadly, decision-making can be divided into two stages: the first stage is valuation of all alternatives, and the second stage is choosing among alternatives (Kable & Glimcher, 2009). In the first half of this review, I will present delay discounting studies in the context of this two-stage framework.

Functional components of delay discounting task

Valuation of intertemporal rewards

Rewards have values: not only the objective values of the stimuli themselves, but also subjective values derived from particular characteristics of individuals, such as attitudes and beliefs. Valuation of rewards provides a basis for making choices. Behavioral economic theory posits that humans value future rewards based on their subjective values (an integration of both the magnitude and the delay of rewards). A growing body of neuroimaging studies in humans suggests that the subjective value of rewards across different temporal delays (seconds, days, months, years) is encoded in regions including the ventral striatum (VS), medial prefrontal cortex (mPFC), and posterior cingulate cortex (PCC) during intertemporal choice (McClure, Laibson, Loewenstein, & Cohen, 2004; Kable & Glimcher, 2007; McClure, Ericson, Laibson, Loewenstein, & Cohen, 2007; Gregorios-Pippas, Tobler, & Schultz, 2009; Peters & Buchel, 2009; Pine, et al., 2009; Prevost, Pessiglione, Metereau, Cléry-Melin, & Dreher, 2009; Sripada, Gonzalez, Phan, & Liberzon, 2010). Although a prior study had paired fMRI with intertemporal choices (McClure et al., 2004), the first fMRI experiment investigating subjective valuation of intertemporal rewards in humans was conducted by Kable and Glimcher (2007). In that study, participants faced a series of binary intertemporal choices between a fixed immediate reward ($20 now) and delayed options constructed from one of six delays (6h–180d) and one of six amounts ($20.25–$110). The authors observed that subjective value contributed to enhanced activity in the VS, mPFC, and PCC.
More strikingly, when neural data were compared to value inferred from behavior, they found a correspondence between participants' subjective preferences and neural activity in the VS, mPFC, and PCC. However, in their design, it is not clear what kind of subjective value signal the VS, mPFC, and PCC are tracking. Since one alternative was held constant across all trials ($20 now), 1) the subjective value of the later reward, 2) the sum of both rewards' subjective values, and 3) the difference in subjective value between the reward pair (subjective value of LL minus subjective value of SS, or vice versa) are linear transformations of each other, and could not be distinguished without assuming a particular scaling between fMRI signal change and value. A later study (Sripada et al., 2010) addressed this issue by varying the magnitude and delay of the earlier rewards, so that the subjective value of the later rewards was not perfectly collinear with either the difference or the sum of the subjective values of the two rewards. Significant signal changes in the VS, mPFC, and PCC were positively correlated with the trial-by-trial difference in subjective value between the later and earlier rewards, but did not significantly scale with the other two regressors (the sum of the subjective values of the two rewards and the larger subjective value of the two alternatives). This result implies that activity presumed to reflect valuation at least in part codes value differences, and may actually map the results of a value comparison. Subjective valuation of rewards has been examined in the absence of choice as well. Kobayashi and Schultz (2008) looked at the effect of delay on value signals in dopamine neurons, and reported that responses of dopamine neurons to Pavlovian conditioned stimuli were influenced by the subjective value of rewards, decreasing as a function of delay and increasing as reward magnitude became larger. The role of dopamine neurons in intertemporal choice has been explored further.
Roesch and his coauthors (Roesch, Calu, & Schoenbaum, 2007) recorded dopamine activity from the ventral tegmental area in rats performing a time-discounting task in which the size and delay of rewards were manipulated independently. During trials in which rats could choose freely, dopamine neurons initially responded more strongly for the better available option (either the reward with the bigger magnitude or the reward with the shorter delay), irrespective of the response selected. The authors argued that dopamine plays a role in signaling the prospect of taking a certain action in the early stage of decision-making. Pine and his colleagues (Pine, Shiner, Seymour, & Dolan, 2010) measured neural signal changes in the dopamine-rich striatum induced by the dopamine precursor L-dopa, and observed that preference for SS options increased, and that this increase was statistically mediated by L-dopa's effect on activation within the striatum. They proposed that the dopamine system influences intertemporal choices by selectively controlling the incorporation of delay into the subjective value computation.

Contextual factors influence valuation of distant expectancies

Self-relevance of temporally distant rewards

There are functional and physiological distinctions between processing related to the self vs. others, and the extent to which temporally distant expectancies are processed as occurring to the self vs. others may affect intertemporal choices. If people tend to think of their future selves as others, this tendency may lead them to be more shortsighted. Mitchell and his colleagues (Mitchell, Schirmer, Ames, & Gilbert, 2011) measured the attenuation of activity in the ventromedial prefrontal cortex (vmPFC) when participants were thinking about enjoyable events in the future vs. in the present.
They observed that those who showed the greatest attenuation in vmPFC activation for the delayed events (which they interpreted functionally as attenuated self-referential processing) later exhibited steeper delay discounting. In a similar study, participants were required to judge whether trait words appearing on the screen applied to targeted persons (self vs. others) at the current time or in a future epoch. Rostral anterior cingulate cortex (rACC) engagement was greater during the 'current self' condition relative to the 'future self' condition, and individual differences in this activation predicted delay discounting (less engagement of the rACC for the future self predicted steeper discounting of future rewards), implying that consideration of the future self (not just the future in general) is associated with discounting future rewards (Ersner-Hershfield, Wimmer, & Knutson, 2009).

Episodic imagery

A recent meta-analysis (based on 13 fMRI delay discounting studies) using the activation likelihood estimation (ALE) method observed two reliable brain networks involved in intertemporal choice, one of which was interpreted as a future-thinking network (Carter, Meyer, & Huettel, 2010). This network has been shown to overlap dramatically with regions implicated in episodic memory, including the vmPFC, the medial temporal lobes (hippocampus and parahippocampal gyrus), the precuneus, and the PCC (Addis, Wong, & Schacter, 2007; Botzung, Denkova, & Manning, 2008; Spreng, Mar, & Kim, 2011). Consistent with the hypothesized role of episodic memory in farsightedness, a recent fMRI study (Peters & Büchel, 2010) interjected episodic future imagery tags during a delay discounting task. The authors observed a shift towards shallower delay discounting that was accompanied by greater recruitment within the episodic imagery network (including the PCC and vmPFC).
They also reported significantly greater functional coupling between the ACC and bilateral hippocampus during the imagery tag condition relative to a standard delay discounting condition. Moreover, the degree of functional coupling across individuals was associated with the degree to which discounting behavior shifted towards increased patience in the episodic tag condition. The authors suggested that the ACC might play a controlling function during intertemporal decision-making that interacts with neural correlates of episodic imagery (in particular the hippocampus). This study demonstrated that manipulation of episodic imagery can affect the evaluation of rewards during intertemporal choice; however, it remains to be seen 1) what determines spontaneous variance in episodic imagery (in the absence of experimental manipulation) and 2) the extent to which spontaneous variance in episodic imagery (in the normal condition without tags) affects intertemporal choices.

Other factors

Valuation of distant rewards is very complex, and it varies across people. Several factors, such as individual differences in intelligence (Shamosh & Gray, 2008), sensitivity to rewards (Hariri, Brown, et al., 2006), genetic variance (Boettiger, Mitchell, Tavares, Robertson, Joslyn, D'Esposito, & Fields, 2007), and age (Christakou, Brammer, & Rubia, 2011), affect this evaluation process. For instance, people who are more responsive to rewards (reflected as greater VS activity for positive vs. negative feedback) have been reported to discount future rewards more steeply (Hariri, Brown, et al., 2006); individuals with higher intelligence exhibited lower discounting rates for distant rewards (Shamosh & Gray, 2008), and this association is suggested to be partially mediated by working memory-related activity in the left anterior prefrontal cortex (Shamosh, DeYoung, Green, Reis, et al., 2008).

How values are integrated into intertemporal decision-making

Does choice follow the output of valuation?
Valuation processing has been studied extensively, but very few imaging studies have investigated the mechanics of how intertemporal decisions are produced. A simplistic hypothesis is that choices follow the output of valuation. For example, if value A is greater than value B, then option A will be chosen over option B, and brain signals from valuation center(s) can be directly mapped onto decisions. If this is the case, regions responsible for producing choices should largely overlap with regions tracking values. In one relevant study (Roesch, Singh, Brown, Mullins, & Schoenbaum, 2009), rats decided between rewards of different amounts (smaller vs. bigger) or delays (shorter vs. longer) available on the left or right, indicated by different odor cues, while single-unit recordings captured the firing of VS neurons. The authors found that cue-evoked activity in VS neurons was greater when rewards were of higher expected value and subsequently chosen by executing directional responses, implying that the VS might serve as a transformer, changing value signals into actions to obtain the rewards. Several studies in humans observed similar findings. Lebreton et al. (2009) conducted an fMRI study in which participants were required to rate either the pleasantness or the age of pictures (faces, houses, and paintings) while viewing the stimuli. Outside of the scanner, subjects were presented with the same pictures in pairs and asked to decide which one appeared more pleasant. Interestingly, brain signal changes in regions including the vmPFC, VS, hippocampus, and PCC during pleasantness rating predicted subjects' subsequent choices (some of which were made a month after the MRI scans). Moreover, these value signals were present during age rating, when valuation was not required, which implies the occurrence of automatic valuation (supporting the idea that valuation is antecedent to, and the likely basis of, choices).
Similar patterns of results were replicated while people made choices between economic goods. In a previous fMRI study (Knutson, Rick, Wimmer, Prelec, & Loewenstein, 2007), subjects were required to choose whether or not to buy consumer goods after presentation of the items and a price display. Value signals in the striatum during presentation of the goods and in the mPFC during display of the price predicted participants' subsequent purchase choices, expressed a few seconds later. Another recent study (Levy, Lazzaro, Rutledge, & Glimcher, 2011) looked at the possibility of predicting individual decisions between consumer goods based on value signals in a non-decision context. In their fMRI experiment, participants first passively viewed different kinds of consumer goods (e.g., DVDs, CDs, posters) while thinking about how much the goods were worth; later, outside of the scanner, they made all possible pairwise choices between the same items viewed during the fMRI task. Participants were not required to make any choice during passive viewing and were not aware that they would have to decide which goods to purchase after scanning. It was observed that signals in predefined value-tracking regions such as the VS and mPFC in a non-choice context (passive viewing without making a choice) predicted later purchase decisions; prediction accuracy was highest (80%) when the relative activation yielded by a pair was greatest, and lowest when the pair exhibited similar levels of value signal change. These data suggest that there is a single neural mechanism representing values in both decision and decision-free contexts, and that such value signals predict later choices.

Transformation of subjective value to relative subjective value during choice

Decision scientists have proposed that our choices are based on relative values (Kahneman & Tversky, 1979).
Previous animal work identified a neural signal in the lateral intraparietal (LIP) area resembling a relative subjective value (RSV) variable during goal-directed choice tasks. RSV is defined as the subjective value associated with a reward divided by the sum of the subjective values associated with all available rewards. In a study (Platt & Glimcher, 1999) where monkeys freely chose between two eye-movement responses (gazing left or right) associated with juice rewards varying both in magnitude and probability, neuronal activity in LIP was correlated with behaviorally derived estimates of the relative value associated with each response (measured by the frequency with which the response was chosen). This result was corroborated in experimental situations where monkeys were engaged in a foraging task (Dorris & Glimcher, 2004; Sugrue, Corrado, & Newsome, 2004). In a recent study where monkeys chose between SS and LL alternatives, LIP activity initially covaried with the relative delay-discounted value of the available rewards independent of choice probability (modeled by a logistic function), and late in the decision process the same neurons came to encode the actual alternative the monkey selected (Louie & Glimcher, 2010). The transformation of SV to RSV is adaptive because a boundless range of potential reward magnitudes must be mapped onto the finite dynamic range of neuronal firing. Animal studies thus provide convincing evidence that LIP encodes subjective value in a relative fashion. However, it is not established whether such RSV processing is also present during human decision-making.
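The RSV transformation defined above, each option's subjective value divided by the summed subjective value of all available options, can be sketched as follows. This is a minimal illustration of the definition, not a model taken from any of the cited studies.

```python
def relative_subjective_values(subjective_values):
    """Divisive normalization: map each option's subjective value (SV)
    to a relative subjective value RSV = SV / sum of all SVs.

    This compresses any range of reward magnitudes into [0, 1],
    mirroring how a bounded firing-rate range could represent
    arbitrarily large values.
    """
    total = sum(subjective_values)
    return [sv / total for sv in subjective_values]

# Two choice sets with very different absolute magnitudes
# yield identical relative values.
print(relative_subjective_values([20, 30]))      # [0.4, 0.6]
print(relative_subjective_values([2000, 3000]))  # [0.4, 0.6]
```

The invariance to absolute magnitude in the example is exactly the adaptive property noted above: the same bounded code serves choice sets of any scale.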
Several fMRI studies in humans provide evidence that the medial orbitofrontal cortex (mOFC) encodes relative value, both in non-choice settings, where people rated the pleasantness of an odor in the context of a relatively more or less pleasant odor (Grabenhorst & Rolls, 2009) or responded to a monetary reward cue in a relative value context (Elliott, Agnew, & Deakin, 2008), and in a choice setting where people decided between an amount of money and a bundle of items (FitzGerald, Seymour, & Dolan, 2009). As discussed above, Sripada et al. (2010) showed that value signals in the VS, mPFC, and PCC parametrically track differences in subjective value during intertemporal decision-making, which implies that valuation is not independent of value comparison (a critical process for choice selection).

Top-down control of value signals

Recent cognitive neuroscience studies have implicated the PFC in top-down modulation through which value signals are influenced (Heatherton & Wagner, 2011). Such top-down control could lead to dissociation between valuation and choice if it is differentially engaged during decision-making. For instance, a mouthwatering red velvet cupcake may get devalued, particularly when one is deciding between it and a healthy alternative. Arguably, the choice context provides an occasion for this top-down influence to be exerted. In one study, the signal in the left dorsolateral prefrontal cortex (dlPFC) was reported to indirectly mediate activation in the vmPFC (a region that tracks goal value at the time of decision-making) while participants were avoiding liked-but-unhealthy food items (Hare, Camerer, & Rangel, 2009). Diekhof and Gruber (2010) showed that activity in the reward system (VS) and the anteroventral prefrontal cortex (avPFC) was negatively correlated when people were confronted with a proximal reward option in the pursuit of a distal goal.
Moreover, the success of overcoming the temptation of immediate rewards in order to achieve distant goals was correlated with the degree of functional connectivity between these two regions. A similar finding was observed when smokers were instructed to think about the long-term consequences of smoking while their craving was evoked by cigarette cues (Kober, Mende-Siedlecki, et al., 2010). Their self-reported craving scores were reduced using this strategy. In addition, the reduction in craving was accompanied by decreased brain activity in the VS and increased activity in the dlPFC, with the VS mediating the association between the dlPFC and craving. Taken together, these results suggest that higher-level cognition can alter the valuation of rewards (liked food items, proximal rewards, the immediate pleasure of smoking) through a top-down control mechanism, evidenced by a functional interaction between the PFC and the reward system, including the mPFC and VS.

Choice diverges from valuation

What the models presented above have in common is that valuation guides decision-making, although different neural systems may contribute to the evaluation of rewards. An alternative idea is that something (e.g., goals, reasoning, and other higher-level cognitive abilities) apart from valuation (or apart from one system's valuation, if there are multiple valuation systems) leads to choices. McClure and his colleagues (McClure, Laibson, Loewenstein, & Cohen, 2004) proposed that choice is governed by a competition between a short-run-oriented valuation system and a long-run-oriented valuation system within decision makers, and that these two systems map onto two distinguishable and separate neural networks.
The key here is that choices can be reached by either of the two systems, and indeed they observed that participants' intertemporal choices were driven by the relative activation of the two competing valuation systems (limbic/paralimbic systems preferentially activated for choice pairs involving immediate options vs. lateral prefrontal-parietal regions that exhibited similar activity for all choices irrespective of delay). Specifically, greater lateral PFC activation was associated with making LL choices, whereas greater activation in limbic/paralimbic regions was associated with making SS choices. These data suggest that choices diverged from one type of valuation. Their dual-valuation interpretation has been disputed (Kable & Glimcher, 2007), but research on "multiple valuation systems" remains active. This type of model (i.e., dual valuation) is an intuitive fit with the subjective phenomenon of ambivalence: for instance, the subjective experience of imagining a "high" feeling from taking a cigarette now vs. worrying about negative long-term consequences, or of craving an appetizing dessert vs. being afraid of gaining weight. Advocates of dual systems generally believe that there is a system within the decision maker, independent from the system responding to visceral valuation (the biologically hardwired automatic response directly evoked by stimuli), that guides behavior toward long-term benefits. Models such as "reason" vs. "passion", "ego" vs. "id", "planner" vs. "doer", and "hot" vs. "cool" are in this spirit. A related idea, the "strength model of self-control", proposes that self-control can be exerted by a control system to overwhelm the visceral valuation system, but that it has limited resources: it can be depleted by a range of activities and also strengthened by training and exercise, like a muscle (Muraven & Baumeister, 2000). For limitations of this model, see Ainslie (1996) and Kurzban (2011).
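The dual-system account of McClure et al. (2004) is often formalized in behavioral economics as quasi-hyperbolic (beta-delta) discounting, in which a beta parameter captures an immediacy premium layered on top of exponential delta discounting. The sketch below uses that standard formalization with illustrative parameter values; it is not a model fitted anywhere in this dissertation.

```python
def beta_delta_value(amount, delay, beta=0.7, delta=0.99):
    """Quasi-hyperbolic (beta-delta) discounting.

    Immediate rewards (delay == 0) keep their full value; any delayed
    reward is scaled by beta (the immediacy premium, the 'impatient'
    system's contribution) times delta**delay (standard exponential
    discounting). beta and delta here are illustrative, not estimates.
    """
    if delay == 0:
        return amount
    return beta * (delta ** delay) * amount

# With beta < 1, a small immediate reward can beat a larger delayed
# one even when exponential discounting alone would favor waiting.
print(beta_delta_value(20, 0))   # 20
print(beta_delta_value(30, 30))  # ~15.5
```

The discontinuity at delay zero is what gives the model its dual-system reading: one term applies only to immediate rewards, the other to all rewards.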
Choice as input to valuation

Above I presented evidence, particularly compelling in the animal literature, that valuation is independent of and antecedent to choice. On this view, choice is just the winner of a valuation comparison. An alternative hypothesis holds that choice itself feeds back into the valuation process. Several self-control models, including intertemporal bargaining (Ainslie, 1992) and self-signaling (Bodner & Prelec, 2003), imply that choice itself can add value to decision-making, and that decisions should be more farsighted than would be predicted by the valuation of rewards considered outside of a decision context. For instance, a dieter facing an overwhelmingly tempting dessert perceives the contingency between her current choice of turning down this food and the increased likelihood of reaping the benefits of her diet generally, and so has more incentive to resist temptation than if the only cost were the effect of a single transgression. In other words, sticking with her diet may take on positive value beyond the direct caloric consequences of one piece of cake, because the choice itself is perceived to be significant. Specifically, according to the intertemporal bargaining perspective, her current choice serves as input for the valuation of expected future behavior. If choice itself can add value to decision-making, an individual who chooses $60 in two months over $45 now might nevertheless show behavioral and brain evidence that the value of the latter is higher if it is encountered outside the choice context. Luo et al. (2009) measured individual valuation of intertemporal rewards (two preference-matched reward pairs inferred from decision context) using a monetary incentive task in which participants were motivated to respond to each reward prize as quickly as possible in order to win bonus money.
I observed that participants responded faster to immediate rewards, and that reward-associated brain regions were differentially activated for immediate rewards vs. preference-matched delayed rewards. I argued that this discrepancy in incentive might be attributable to a self-control process recruited during decision-making but absent when rewards are evaluated individually, outside of a decision context. Taking this a step further, Figner et al. (2010) reported data suggesting that the dissociation between the valuation of rewards considered individually and intertemporal decisions has some causal connection to the left dlPFC. When low-frequency repetitive Transcranial Magnetic Stimulation (rTMS) was used to disrupt functioning in this region, the evaluation of individual immediate and delayed rewards (based on ratings of how attractive each reward was) did not change, but intertemporal decisions became less farsighted (immediate choices were made more often). This greater tendency towards immediately tempting rewards was strongest when the two alternatives were equally valued. The findings above suggest that valuation in a non-choice context differs from valuation in the context of choice. However, it remains to be seen whether such a discrepancy is due to choice-induced value signals that are absent in a choice-free context. Such signals have not yet been identified directly, and in fact they might be challenging for fMRI to detect because 1) they are temporally entangled with other signals (e.g., value signals from stimuli), and 2) fMRI is not an ideal tool to tease apart temporally adjacent events because of its low temporal resolution. Taking a step back, if it is the case that choice itself can function as input to valuation processing, it might be true that the action of choice itself can modulate valuation.
Sharot and his colleagues (Sharot, Martino, & Dolan, 2009; Sharot, Shiner, & Dolan, 2010) measured brain signal changes while participants were imagining events before and after choosing between them, and reported that after choices, value signals (especially in the caudate) increased for positive events participants chose to experience in the future and decreased for chosen negative events, compared to prior to choice. Although these data do not exactly address the idea intertemporal bargaining posits, namely choice as part of valuation, they do indicate that choice influenced the subsequent valuation of rewards in a non-choice context.

Brain structures relevant to producing patient choices

Much of the interest in the delay discounting task has to do with the underlying expectation that it reflects the self-control struggles people experience in the real world. The struggle between enjoying the immediate pleasure of taking a cigarette and suffering long-term health problems can be characterized as a trade-off between the magnitude and immediacy of rewards. Studies have shown that individual differences in the degree of discounting are correlated with individual differences in real-world behavior. Children who were able to wait longer for delayed but larger rewards during preschool scored higher on the SAT, were described as having high social-cognitive competence, and achieved significantly higher education levels later in life (Mischel, Ayduk, et al., 2010).
Moreover, abnormally steep delay discounting has been observed in groups with a higher incidence of problematic behavior, including drug addicts (Bickel, Odum, & Madden, 1999; Kirby, Petry, & Bickel, 1999; Baker, Johnson, & Bickel, 2003; Odum & Rainaud, 2003; Kirby & Petry, 2004; Heil, Johnson, Higgins, & Bickel, 2006; Higgins, Heil, Sugarbaker, et al., 2007; Hoffman, Schwartz, Huckans, McFarland, et al., 2008; Johnson, Hollander, & Kenny, 2008), pathological gamblers (Petry & Casarella, 1999; Petry, 2001; Dixon, Marley, & Jacobs, 2003), and binge eaters (Weller, Cook, Avsar, & Cox, 2008; Davis, Patte, Curtis, & Reid, 2010). Some researchers consider the delay discounting paradigm an experimental proxy for the self-control struggle, and there has been an increasing amount of neuroimaging work examining the neural substrates that mediate decision-making to produce patient choices. In the second half of this review, I will first present the three different approaches these studies used, and then summarize the brain structures associated with greater patience. I will consider these findings within the context of the “somatic marker hypothesis”, which provides a cohesive neural framework.

Choice of LL-SS

Selection of later options requires a varying degree of farsightedness, so regions differentially responding to LL vs. SS choices may be candidates for regions that bias choice in the direction of patience. Combining the results of several studies, several regions, including the dlPFC, insula, ACC, PCC, parietal cortex, occipital cortex, and temporal cortex, as well as the cerebellum, were more activated when people chose delayed alternatives (McClure, Laibson, Loewenstein, & Cohen, 2004; Wittmann, Leland, & Paulus, 2007; Weber & Huettel, 2008; Rubia, Halari, Christakou, & Taylor, 2009; Christakou, Brammer, & Rubia, 2011; Luo et al., 2012).
Moreover, increased functional coupling between the vmPFC and the dlPFC, parietal, and insular cortices was reported to be associated with the choice of delayed alternatives from early adolescence into mid-adulthood (Christakou, Brammer, & Rubia, 2011). A downside of this approach is the difficulty of dissociating regions that are causally involved in LL choices from regions reflecting the consequences of choosing LL. If a given region is causally relevant to making patient choices, arguably it will be more engaged when LL is selected despite SS being more tempting than when choosing LL is predicted by the individual’s overall behavior. Quantitatively, a logistic function can be used to model an individual’s overall behavior. If LL choices can be separated into trials on which the model predicts that the probability of choosing SS is high (“hard LL”) vs. low (“easy LL”), then brain regions that respond preferentially to hard LL choices are more likely to be those causally involved in farsighted choices. However, there are no existing reports of this analysis.

Correlation with discounting rate / group differences between healthy controls and temporally myopic people

A second approach researchers have taken is to look for brain structures whose activity is correlated with shallower discounting, or that distinguish healthy controls from clinical populations (usually populations that exhibit temporally myopic behavior, such as drug addicts). The idea is that patient people (generally shallower discounters/controls) should recruit brain regions related to patient choices during delay discounting tasks. Shallower discounters recruited more brain activity in the PFC network (dlPFC, lateral OFC, inferior frontal gyrus (IFG), ventrolateral prefrontal cortex) during intertemporal decision-making (Boettiger, Mitchell, et al., 2007; Monterosso, Ainslie, et al., 2007; Wittmann, Leland, et al., 2007).
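The hard-LL vs. easy-LL partition proposed above could be implemented roughly as follows. This is an illustrative sketch under my own assumptions: it pairs a hyperbolic value model with a logistic (softmax) choice rule whose temperature parameter is hypothetical, none of which is prescribed by the studies discussed here.

```python
import math

def hyperbolic_value(amount, delay_days, k):
    """Hyperbolic subjective value, V = A / (1 + k*D) (Mazur, 1987)."""
    return amount / (1 + k * delay_days)

def classify_ll_choice(ss, ll, k, temp=1.0):
    """Label an LL choice 'hard LL' when the fitted model predicts a high
    probability of choosing SS, and 'easy LL' otherwise.

    ss and ll are (amount, delay_in_days) tuples; k is the individual's
    fitted discount rate; temp is a hypothetical softmax temperature.
    """
    v_ss = hyperbolic_value(*ss, k)
    v_ll = hyperbolic_value(*ll, k)
    p_ss = 1.0 / (1.0 + math.exp(-(v_ss - v_ll) / temp))
    return "hard LL" if p_ss > 0.5 else "easy LL"
```

Trials labeled “hard LL” would then be contrasted against “easy LL” trials in the imaging analysis.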
Interestingly, the same tendency was associated with greater proportional lateral PFC volume (calculated as lateral PFC gray matter volume over whole cerebral brain volume) (Bjork, Momenan, & Hommer, 2009). In addition, comparing healthy controls with clinical populations (e.g., attention deficit hyperactivity disorder (ADHD) patients, methamphetamine abusers) yielded several significant clusters, including the prefrontal-parietal cortex, dorsal striatum, ACC, PCC, precuneus, cuneus, occipital cortex, and cerebellum, either when they chose LL vs. SS (Rubia et al., 2009) or during “hard choices”, in which the two alternatives were similarly valued given the individual’s discounting rate, vs. “easy choices”, in which the values of the two alternatives were very different (Monterosso, Ainslie, et al., 2007; Hoffman, Schwartz, et al., 2008).

Lesion and TMS studies

Functional imaging studies are useful for establishing correlations between brain structures and patient choices, but they cannot provide definitive information about whether a given region is causally involved. A third method researchers use is to examine how abnormality of certain regions, caused by TMS or lesions, affects preference for immediate rewards. The rationale behind this approach is that if a given region were causally relevant to making farsighted choices, functional abnormality of that region induced by TMS or lesion would produce impulsive choices. A recent TMS study on delay discounting (Figner, Knoch, Johnson, Krosch, et al., 2010), mentioned earlier, suggested a causal role of the left dlPFC in patient choices: when it was temporarily disrupted, choices of SS increased. In another relevant study (Cho, Ko, Pellecchia, Van Eimeren, et al., 2010), in which continuous Theta Burst Stimulation (cTBS) was applied to excite the right dlPFC, the discounting rate was reduced compared with a sham condition.
Also of interest, disruption of dlPFC activity (especially the right dlPFC) was causally associated with risky choices (more risky decisions when it was suppressed (Knoch, Gianotti, et al., 2006; Knoch & Fehr, 2007), and fewer when it was activated (Fecteau et al., 2007)). Besides the dlPFC, Sellitto et al. (2010) have also shown that the medial orbitofrontal cortex (mOFC) is engaged in the valuation of and preference for future rewards in intertemporal choice. Patients with mOFC lesions exhibited greater preferences for immediate rewards than did healthy people and patients with lesions outside of the frontal cortex. This study contrasts with a previous lesion study (Fellows & Farah, 2005) that reported no discounting abnormality among patients with frontal lobe injuries (in either the vmPFC or the dlPFC). Sellitto and his colleagues argued that the lesions of some patients in Fellows & Farah (2005) spared the OFC, and that perhaps this was the reason they failed to see an effect of lesions on temporal discounting. Two general issues of lesion studies related to this negative finding are that 1) other intact brain regions could compensate for damaged ones, and 2) sample sizes are typically small (N=12 for the vmPFC group, N=13 for the dlPFC group in Fellows and Farah, 2005), so correlational results are influenced by extreme values to a large extent. Damage to the vmPFC has been associated with dysfunction characterized clinically as shortsightedness (Damasio, 1994), and the degree of shortsightedness increases as lesions move from anterior to posterior vmPFC (Hochman, Yechiam, & Bechara, 2010). According to the somatic marker hypothesis (Damasio, 1994), future-oriented decisions rely heavily on the intactness of neural systems that activate somatic markers (bodily sensations), which are visceral responses signaling the prospective consequences of an action.
The vmPFC, one of the key brain structures responsible for triggering somatic states, was associated with patience, as the studies presented above demonstrated (Ballard & Knutson, 2009; Sellitto, Ciaramelli, et al., 2010; Christakou, Brammer, et al., 2011). In order for somatic signals to affect future-oriented behavior, they need to act on appropriate brain systems. Here three relevant systems are outlined. The memory system, including the hippocampus (storing distant past information) and the dlPFC (working memory: bringing information online so a particular representation of bodily states can be strengthened or weakened), is linked with the generation of affective states, and past emotional signals (from the distant or near past) can be used to forecast the future. The importance of the memory system in farsighted behavior is suggested by the characterization of the temporal lobe lesion patient K.C., who suffered a profound loss of episodic memory. This loss was accompanied by a similarly dramatic loss in the ability to think about his future. Of his patient, Endel Tulving wrote, “… he cannot tell the questioner what he is going to do later on that day, or the day after. Or any time in the rest of his life.” (Tulving, 2002). In addition, the hippocampus has been suggested to play a role in patient choices during a delay discounting task coupled with episodic future imagery tags (Peters & Büchel, 2010).
The importance of working memory (related to maintaining information “on-line”) is suggested (Levy & Goldman-Rakic, 2000; Curtis & D'Esposito, 2003), and it has been shown that the dlPFC contributes to farsighted behavior: temporary impairment of the dlPFC induced by rTMS leads to exaggerated preferences for immediate rewards (Figner, Knoch, et al., 2010), and the degree of its recruitment was correlated with selection of LL choices (McClure, Laibson, et al., 2004; Weber & Huettel, 2008; Christakou, Brammer, et al., 2011; Luo et al., 2012) and shallower discounting (Ballard & Knutson, 2009; Bjork, Momenan, et al., 2009). The second critical system can be referred to as the emotion system (e.g., the insula), which is involved in mapping the bodily states that help generate feelings, and is crucial for patient behavior. Naqvi and his colleagues (Naqvi, Rudrauf, Damasio, & Bechara, 2007) reported that smokers who had damage to the insula (either left or right) had a higher likelihood of experiencing a disruption of addiction than smokers whose brain damage did not involve it, and they argued that abnormality in the insula weakened the conscious urge to smoke. In risky decision-making, patients with damage to the insula failed to adjust their gambles according to the probability of winning and thus experienced more bankruptcies (Clark, Bechara, Damasio, Aitken, et al., 2008), and robust insula activation was observed when decision-making impairments led to increased risk (Venkatraman, Chuah, Huettel, & Chee, 2007; Paulus, Lovero, Wittmann, & Leland, 2008). In intertemporal decision-making, an association between the degree of insular engagement and patient choices was indicated by correlational studies (Wittmann, Leland, & Paulus, 2007; Christakou, Brammer, et al., 2011; Luo et al., 2012). It is not clear whether the insula facilitates farsighted choice or represents the consequences of choosing LL options.
On initial consideration, these results (greater insula activity associated with more LL choices) may appear to contradict Naqvi and his colleagues’ findings. However, there are differences between quitting smoking and making LL choices. At least one key difference is that the insula lesion patients did not quit smoking because they did well at resisting the immediate urge to smoke; rather, they reported an absence of smoking urges. So their behavior was not analogous to choosing an LL reward over a tempting SS. The insula damage caused smokers to stop craving rather than making them better at weighing future consequences. The third important system is the striatum system, into which the representation of somatic signals is fed. It is generally considered that the striatum is involved in initiating and maintaining motivated actions such as approach or withdrawal behaviors. One recent lesion study (Gill, Castaneda, & Janak, 2010), using a sustained response paradigm in which rats had to maintain actions progressively in order to receive greater rewards, reported that rats with lesions in the core of the nucleus accumbens (AcbC) reduced their total amount of responding to reinforcers, and the authors attributed this to the role of the AcbC in initiating and maintaining motivation-related actions. Similar action-related motivation signals were observed in single-unit recording studies in which monkeys performed a delay discounting task (Roesch et al., 2009; Cai et al., 2011). Additionally, there is direct and indirect evidence suggesting a role of the striatum in patience. An earlier lesion study in rats found that damage to the AcbC increased impulsivity, characterized by reduced preference for delayed reinforcers (Robbins & Everitt, 2001), whereas lesions of two of its afferents (the ACC and mPFC) did not have any effect on this.
People who discount less steeply recruited the VS more for larger magnitudes of future rewards during a delay discounting task (Ballard & Knutson, 2009).

Concerns and future studies

From the ancient Athenians to modern behavioral scientists, the effect of delay on motivation has been considered one of the central sources of irrational behavior. The tendency to favor immediate pleasure over the potential benefits of delaying gratification is generally conceived of as a conflict in individual development. Research scientists carry out the delay discounting research described above in part because of their interest in this central topic of human irrationality, and research findings are discussed in a manner that implicitly presumes this connection. A number of factors might give us some doubt regarding the generalizability from delay discounting research to the broad domain of behavior in which delay is relevant. 1) Narrowness of the choice set: delay discounting research almost always includes a very small and well-defined alternative space, and participants are generally presented with two discrete alternatives within the same reward domain. This of course differs from the open-ended aspect of real life, in which people can pursue countless alternatives. 2) Coolness of the environment: most rewards in delay discounting studies, especially those paired with fMRI, are presented in a context that does not include intense emotion. This is quite different from the emotionally/motivationally hot context in which a temptation must be considered against its long-term costs. 3) Explicitly quantified alternatives: unlike in the typical discounting experiment, the long-term costs of many important intertemporal decisions do not lend themselves to explicit quantification. For example, damaging one’s reputation by flaking on a deadline, damaging a relationship by losing one’s temper with a friend, or even the cost to health of indulging in drug use.
Given the narrowness of these discounting studies, future research investigating self-control/delay of gratification might need to consider developing decision-making tasks that are more like the critical real-world situations in which people struggle with self-control. For example, in Hare et al. (2009), participants made food choices pitting taste against health. This task resembles choices people generally face in daily life, and has an implicit “time” component. Conceptually speaking, choosing healthy-but-less-tasty food items is similar to choosing LL, and choosing tasty-but-unhealthy food is similar to choosing SS. Another approach future studies might take is to look for systematic deviations in behavior related to time. People generally do not have consistent preferences along the dimension of time. The basis or bases of irrationality related to time are a matter of ongoing debate, but the investigation of irrationality in behavior related to time will provide clues about how rational goal-directed action works. Several questions are worthy of investigation along this line: (1) What are the factors influencing time-relevant irrationality? And on which dimensions (valuation or choice) do these factors exert influence? (2) People exhibit variability to a certain extent in their choices (sometimes they choose irrationally, but sometimes not). It might be illuminating to examine this intrasubject variability and its neural correlates. (3) Some individuals struggle with self-defeating behavior (e.g., drug addiction), but some do not. What distinguishes individuals in their time-relevant behavior?

Chapter 2: Incidental Emotions’ Effect on Intertemporal Choice

Introduction

Humans are distinct from other species in their capacity to pursue temporally distant goals.
One paradigm (“delay discounting”) used to investigate this capacity in lab settings requires people to trade off between the magnitude and immediacy of expected gains (for example, would you prefer $25 today or $50 in 40 days?) or, less frequently, of expected losses. Research on human discounting has focused on stable individual differences rather than situational factors. For example, researchers have compared discounting behavior among socially or clinically relevant groups of people (e.g., healthy controls vs. drug users, pathological gamblers, or binge eaters) (Kirby, Petry, & Bickel, 1999; Kirby & Petry, 2004; Petry, 2001; Dixon, Marley, & Jacobs, 2003; Critchfield & Kollins, 2001; Weller et al., 2008; Davis, Patte, Curtis, & Reid, 2010). This work emphasizes the trait-like component of the behavior, supported by its relative stability over long periods of time (test–retest correlation = .71 across a 1-year span; Kirby, 2009). Recently, there has been increased interest in investigating situational factors that modulate intertemporal choice at both the behavioral and neural level. It has been shown that appetitive stimuli increase preference for immediate rewards within the same consumption domain (Loewenstein, 1996). For example, people are more impatient to eat desserts when exposed to the sight and smell of the desserts. Recent studies have extended this work by examining how delay discounting is affected by motivational states that are not directly related to the intertemporal alternatives presented to the participants (Wilson & Daly, 2004; Van den Bergh, Dewitte, & Warlop, 2008; Li, 2008).
Specifically, looking at pictures of attractive women made men discount monetary rewards more steeply (Wilson & Daly, 2004), and a similar study suggests that the effect is moderated by sensitivity to reward, as measured by the Behavioral Activation Scale (BAS), which is designed to measure an individual’s trait approach motivation (Van den Bergh, Dewitte, & Warlop, 2008). Along similar lines, exposure to appetizing desserts produced an increased preference for immediate monetary rewards, and being in a cookie-scented room made individuals less happy with delayed monetary gains (Li, 2008). A recent fMRI study (Murawski, Harris, Bode, Dominguez, & Egan, 2012) reported that subliminally presented Apple logos biased participants’ preferences towards immediate rewards, and this priming effect was associated with neural encoding of subjective values in the mPFC. The precise mechanism(s) behind the priming effect of cues on discounting behavior remains an open question. The first goal of this study was to examine whether “incidental emotions” (emotions the individual experiences that are not linked to the anticipated outcomes; Loewenstein & Lerner, 2003) affect intertemporal choices. I hypothesized that happiness (as elicited by the prime of a smiling face) might function similarly to elevated visceral responses, which undermine willingness to wait. However, with regard to the effect of fear (as elicited by the prime of a fearful face) on intertemporal choice, the weight of prior evidence is less clear. It may be that emotional primes, whether happy or fearful, undermine willingness to wait for larger rewards.
If, as has been suggested (McClure, Laibson, Loewenstein, & Cohen, 2004; Metcalfe & Mischel, 1999), intertemporal decision making is based on the combined influence of a farsighted, analytical, and emotionally “cool” system and a temporally myopic, visceral, and emotionally “hot” system, it could be that emotional arousal of any type leads to steeper discounting through excitation of the hot system. Alternatively, it could be the case that particular emotions have particular effects on discounting behavior. Such emotion-specific mapping could occur if 1) intertemporal choices are conceived of as multi-attribute decisions (i.e., immediacy and amount as distinct attributes), and 2) the relative utility of the attributes is moderated by emotion. If, for example, the attribute of immediacy has greater value when individuals are happy, and lower value when afraid (in both cases, relative to the absence of an emotional state), then these particular emotions would be expected to have opposite effects on intertemporal choice. Indeed, the alteration of the valuation of specific outcomes is, according to one evolutionary framework, central to what emotions are (Tooby & Cosmides, 2008). Another hypothesis proposed is that efforts at affect regulation can result in emotion-specific effects on intertemporal choice. In particular, it has been suggested that people are more likely to choose immediate rewards when in a sad mood, which has been interpreted as behavior directed toward the alleviation of negative affect (Augustine & Larsen, 2011; Lerner, Lib, & Weber, n.d.). Along a similar line, query theory (Weber et al., 2007) posits that the order in which the decision-maker attends to choice options makes a difference in discounting behavior. For example, people who first think about the delayed option have been shown to be more patient (Weber et al., 2007), and a particular emotion could moderate which option is attended to first (Lerner, Lib, & Weber, n.d.).
Alternatively, specific emotions may interfere, in idiosyncratic ways, with the valuation of immediate and delayed rewards. If particular emotions have particular effects on intertemporal choice, fearful emotional primes may have an effect on intertemporal choice that is opposite to that of happy emotional primes, increasing willingness to wait for larger rewards. The second goal of this study was to identify a possible neural mechanism(s) of the emotional prime effect on discounting behavior. There has been a spate of neuroimaging studies connecting discounting behavior to system-level neural functioning. Research has converged upon a common set of brain regions that appear to track value during intertemporal choice tasks. These include the VS, PCC, and mPFC. There are at least two different hypotheses about the role these regions play in intertemporal choice: 1) According to the “dual system model” (McClure, Laibson, Loewenstein, & Cohen, 2004; McClure, Ericson, Laibson, Loewenstein, & Cohen, 2007), this network is specialized in representing the value of immediate options, and competes against a more cognitive, prefrontal-based brain network to bias choices towards immediate rewards. 2) According to the “single valuation model”, this network tracks the subjective value of both immediate and delayed rewards. The single valuation model was evidenced first in Kable & Glimcher (2007), and later replicated in other studies (Gregorios-Pippas, Tobler, & Schultz, 2009; Peters & Buchel, 2009; Pine et al., 2009; Prevost, Pessiglione, Metereau, Cléry-Melin, & Dreher, 2009; Sripada, Gonzalez, Luan Phan, & Liberzon, 2010). A recently proposed “self-control model” combines the idea (central to the dual-valuation model) that the lateral PFC facilitates farsighted behavior, while still hypothesizing that value-tracking regions (including the VS, mPFC, and PCC) represent overall value (Hare, Camerer, & Rangel, 2009).
This model proposes that the dlPFC plays a role in the top-down moderation of value signaling in structures such as the mPFC (rather than acting as an alternative value system, as suggested by the “dual system model”). Consistent with either the dual system or the self-control account, the lateral sector of the PFC and other regions implicated in cognitive control (e.g., insular cortex, ACC) have been repeatedly observed to be associated with choice of LL over SS alternatives (McClure et al., 2004; Wittmann, Leland, & Paulus, 2007; Weber & Huettel, 2008; Rubia, Halari, Christakou, & Taylor, 2009; Christakou, Brammer, & Rubia, 2011). In a recent TMS study on delay discounting, Figner and colleagues (2010) observed reduced preference for LL alternatives when the functioning of the left dlPFC was temporarily disrupted, suggesting the left dlPFC (around MNI coordinate: x = -38, y = 36, z = 22) has some causal role in farsighted intertemporal decisions. If incidental emotions do affect intertemporal choice behavior, they may do so through an effect on these value-tracking and (or) choice-related regions. In the present study, I used a novel dual-task paradigm in which participants were required to make intertemporal choices while maintaining different emotional faces in working memory. While my ultimate interest is in the effect of incidental emotions on decision-making, I note from the outset that the relationship between the perception of an emotional face and the experience of an emotion is imperfect (a point that I return to in the discussion below). Emotional primes varied from trial to trial between fearful, happy, and neutral. An adaptive intertemporal choice procedure (Luo, Ainslie, Pollini, Giragosian, & Monterosso, 2012) was used to create an environment with maximum variability in choice, thereby increasing the power to observe potential emotional priming effects.
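A staircase of the following kind is one way an adaptive procedure can hold choices near indifference, assuming the hyperbolic model V = A/(1 + kD) used in this literature. This is a sketch under my own assumptions; the published procedure (Luo et al., 2012) may differ in its exact pair construction and update rule.

```python
def trial_pair(ss_amount, delay_days, k_trial):
    """Build an SS/LL pair that is exactly indifferent under discount
    rate k_trial: ss_amount = ll_amount / (1 + k_trial * delay)."""
    return ss_amount, ss_amount * (1 + k_trial * delay_days)

def update_k(k_trial, chose_ll, step=0.1):
    """Staircase update (illustrative): an LL choice implies the
    participant's discount rate lies below k_trial, so decrease it;
    an SS choice raises it. Multiplicative steps keep k positive."""
    return k_trial * (1 - step) if chose_ll else k_trial * (1 + step)
```

Iterating these two functions drives the trial-level k toward the participant's indifference point, so that SS and LL are chosen with roughly equal frequency.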
These data allowed me to assess 1) whether emotional primes affected intertemporal choices, and 2) whether any effects on behavior were related to observed changes in neural functioning. With regard to the latter, I hypothesized that the effects of incidental emotions on intertemporal choice would be mediated by the regions discussed above, which have previously been shown to be associated with intertemporal choice and (or) with the mediation of value-tracking activation.

Material and Method

Participants

Twenty-two healthy volunteers participated in the study (mean age = 33.6 ± 7.7 years, 9 females). All participants were right-handed, had normal or corrected-to-normal vision, were in good health, and were free from any psychological and neurological disorders. The study was approved by the Institutional Review Board of the University of Southern California (USC). Participants provided written informed consent and were paid for participation. Of the 22 participants, 16 underwent MRI scanning. One participant was excluded from analysis because of excessive head motion during MRI scanning.

Study Procedure Overview

For each individual, pre-testing was used to identify intertemporal choices in which the SS and LL options were similarly preferred. Following this, participants completed an individualized intertemporal choice task in the scanner, in which a “dual-task” method (described below) was used to introduce emotional primes during decision-making. All participants additionally completed a questionnaire related to variability in behavioral activation and inhibition levels (behavioral activation system (BAS) and behavioral inhibition system (BIS); Carver & White, 1994).

Pre-testing

Delay discounting was assessed first using a computerized version of the Monetary-Choice Questionnaire developed by Kirby et al. (1999), and then an adaptive delay discounting choice task (Luo, Ainslie, Giragosian, & Monterosso, 2009).
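Choices from such pre-testing are commonly fitted by choosing the discount rate k that best reproduces the observed decisions under the hyperbolic model V = A/(1 + kD) (Mazur, 1987). The grid-search sketch below is illustrative only; the grid and the best-count criterion are my own assumptions, and the cited studies' estimation procedures may differ.

```python
def fit_k(choices, k_grid=None):
    """Grid-search estimate of the hyperbolic discount rate k: return the
    k on the grid that correctly predicts the most observed choices.

    Each choice is ((ss_amount, ss_delay), (ll_amount, ll_delay), chose_ll).
    """
    if k_grid is None:
        # Illustrative logarithmic grid from ~1e-4 up past 1.0 per day.
        k_grid = [0.0001 * (1.1 ** i) for i in range(100)]

    def value(amount, delay, k):
        return amount / (1 + k * delay)  # hyperbolic present value

    def n_correct(k):
        # A choice is 'predicted' when the higher-valued option was taken.
        return sum(
            (value(*ll, k) > value(*ss, k)) == chose_ll
            for ss, ll, chose_ll in choices
        )

    return max(k_grid, key=n_correct)
```

The best-fit k can then seed the equivalence-k for the scanner task.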
Individuals' choices were fitted to a simple hyperbolic discount function:

V = A / (1 + kD) (Eq. 1)

where V = the time-discounted value (i.e., "present value") of a delayed amount, A = amount, D = specified delay (in days), and k is a fit parameter (Mazur, 1987). The best-fit parameter k was used to generate an initial "indifference point" for the subsequent intertemporal choice task in the scanner. The "indifference point" is the k-value at which both options of a choice would be, on average, equally preferred (referred to as the equivalence-k, or "k_e", throughout). Although I used this model, it should be noted that it assumes a linear relationship between amount and value; concavity in the actual mapping from amount to value will inflate estimates of the discount parameter k (Ho et al., 1999; Pine et al., 2009; Pine et al., 2010). Moreover, while the hyperbolic functional form has been found to fit choice data in similar paradigms as well as or better than other simple alternatives such as exponential or beta-delta discounting (Johnson & Bickel, 2002; McKerchar et al., 2009; Pine et al., 2009), individuals vary in how "hyperbolic" their decisions are (Coller, Harrison, & Rutström, 2011). Since my procedure utilizes a hyperbolic functional form in generating indifference pairs, individual differences in discounting functional form will affect how successful the procedure is.

Scanner task

In the scanner, participants performed an individualized adaptive intertemporal choice task intermixed with different emotional primes (Figure 1). Each trial started with a fixation cross ("+") presented for a variable duration of 0.5 to 5 seconds. Participants were instructed to focus on the center of the screen and get ready for the trial. Then a face generated with FaceGen Modeller 3.2 (http://www.facegen.com) was displayed on the screen for 1 to 2 seconds. Participants were instructed to remember the facial expression of the individual.
The facial expression varied from trial to trial among fearful, happy, and neutral. After the face disappeared, the intertemporal choice options were presented on either side of the screen, with the side of the more and less immediate alternative randomized. Participants were asked to choose between the two options by pressing either the left or right button on a response pad, corresponding to the side of the display on which their preferred alternative was presented. Participants were free to respond at any time during presentation of the choice alternatives, and the text of the selected option turned from white to yellow after their selection. If a response was not made within 4 seconds, "please respond" appeared at the bottom of the screen. After 6 seconds, regardless of whether participants had responded, the alternatives were cleared from the screen, indicating that an intertemporal choice could no longer be made, and a second facial expression was presented on the screen. In all cases, the second face was of the same individual as the first, but the intensity of the facial expression differed on half of the trials. Participants were asked to press the left button if the second emotion was at the same intensity as the first one, or the right button if it was at a different intensity level. Feedback was not given, in order to minimize emotional responses related to task performance (i.e., frustration over a wrong judgment or excitement over an accurate response). However, if participants failed to respond to the second facial expression, "no response" appeared on the screen. I will refer to the delay discounting task as the "primary task" and the emotional expression task as the "secondary task". It is important to note that this secondary judgment task was designed to force participants to hold the target expression in memory, with the goal of increasing its impact (relative to what would be observed given mere presentation of the prime stimulus).
The participant's accuracy also allowed us to verify engagement in the priming task. At the end of the session, participants received one of the rewards they had selected, chosen at random; accuracy on the secondary task did not affect payment. The delay of the LL ranged from 14 to 56 days, and the delay of the SS varied from today to 13 days. In half of the trials there was an immediate ("today") alternative, and in half both alternatives were delayed. The amount of the LL was randomly generated from $20 to $65. The SS amount on each trial was generated using a value of k_e that was continually adjusted throughout the session (detailed explanation below) in order to keep alternative pairs closely matched given the delay discounting exhibited by the participant. Each run comprised 51 trials, 17 for each of the three emotional prime conditions. The 51 trials were divided into 17 blocks, and each block comprised the three conditions in a random order. Choice alternatives within each block had the same k_e, which was adjusted on a block-by-block basis. Specifically, k_e was adjusted upward a quarter step on a log10 scale if participants chose SS options twice or more within a block, and consequently the relative value of the LL alternatives was increased for the next block. Conversely, if participants chose LL options twice or more within a block, k_e was adjusted downward a quarter step on a log10 scale, increasing the relative value of the SS alternatives for the next block. Otherwise, k_e remained the same on the subsequent block. This procedure ensured that the SS and LL alternatives would be nearly equally preferred (near the "indifference point") for all participants. Participants completed two runs of this task separated by a structural scan (4 min); each run was 11 minutes long, for a total of 26 minutes in the scanner.
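As an illustration, the hyperbolic valuation (Eq. 1) and the block-wise k_e staircase described above can be sketched as follows. This is a minimal sketch; the function names, and the derivation of the SS amount from k_e, are my own illustrative assumptions, not the actual task code.

```python
import math

def present_value(amount, delay_days, k):
    # Mazur (1987) hyperbolic discount function (Eq. 1): V = A / (1 + k*D)
    return amount / (1.0 + k * delay_days)

def ss_amount(ll_amount, ll_delay, ss_delay, ke):
    # Illustrative assumption: pick the SS amount so that, at the current
    # equivalence-k, the SS and LL options have equal discounted value.
    v_ll = present_value(ll_amount, ll_delay, ke)
    return v_ll * (1.0 + ke * ss_delay)

def adjust_ke(ke, block_choices):
    # Quarter-step staircase on a log10 scale, per block of three trials:
    # >= 2 SS choices -> k_e moves up (LL made relatively better next block);
    # >= 2 LL choices -> k_e moves down; otherwise unchanged.
    step = 0.25
    if block_choices.count("SS") >= 2:
        return 10.0 ** (math.log10(ke) + step)
    if block_choices.count("LL") >= 2:
        return 10.0 ** (math.log10(ke) - step)
    return ke
```

By construction, the staircase keeps each participant near indifference: persistent SS choosing raises k_e (sweetening the LL) and persistent LL choosing lowers it.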
They were instructed that one of the trials would be randomly selected at the end of the task, and that they would receive whichever option they had selected on that trial (available after the indicated delay).

Figure 1: Scanner task. An example of a trial participants encountered in the scanner. First, participants were presented with a facial expression (in this trial, a fearful expression) for 1 to 2 seconds. They were then required to choose between the smaller-sooner (SS) and larger-later (LL) alternatives, and received one of their preferred options as determined by a random selection at the end of the task. A few seconds later, participants were instructed to make a judgment once the second facial expression appeared on the screen.

Task stimuli

Fearful, happy, and neutral facial expressions were generated using FaceGen Modeller 3.2 (http://www.facegen.com). The intensity level of the emotional faces that appeared first in each run varied from 71% to 100% of the maximum setting in FaceGen (mean = 85.4%, SD = 9.0%). When the target face was non-neutral (fearful or happy), the second face was at the same intensity level on half of the trials, and at an intensity level 60 percentage points lower on the other half (e.g., from 80% of "maximal happy" during target presentation to 20% of maximal happy during test presentation). In the neutral condition, the second expression was either again neutral, or was 60% maximal happy or 60% maximal fearful. Different faces were used across runs in order to avoid familiarity-related adaptation, and all faces had the same characteristics (race: European; age: 25-30 years). Subsequent to the scanner task, participants were asked to rate the facial expressions they had seen, according to this instruction: "Now I want you to tell me how you felt about those facial expressions you saw in the scanner task. Facial expressions you saw in the scanner will be displayed one by one on the screen.
After viewing each expression, please report: What was the intensity of the emotion you felt when you saw this facial expression, from 1 to 10, where 1 means the emotion was strongly fearful, 5 means neutral, and 10 means the emotion was strongly happy." Due to a programming error, two participants' post-task rating data were not recorded.

BAS/BIS scale

Participants completed the BAS/BIS scale (Carver & White, 1994), a questionnaire that assesses individual variability in trait approach and withdrawal motivation. The BAS scale includes three subscales: BAS drive (e.g., "When I want something, I usually go all-out to get it"), BAS fun seeking (e.g., "I crave excitement and new sensations"), and BAS reward responsiveness (e.g., "When I see an opportunity for something I like, I get excited right away"). The BIS scale was designed to probe sensitivity to punishment; one example of a BIS item is "I worry about making mistakes."

MRI acquisition

fMRI scanning was performed using a Siemens 3T Magnetom Trio MRI system equipped with a quadrature radiofrequency head coil in the Dana and David Dornsife Cognitive Neuroscience Imaging Center at USC. Participants lay supine on the scanner bed, viewing stimuli through a mirror mounted on the head coil. An echo planar imaging (EPI) sequence was used to measure the blood oxygen level-dependent (BOLD) response. A total of thirty-two axial slices covering the whole brain with no gap were acquired using these parameters: repetition time (TR) = 2 s, echo time (TE) = 30 ms, flip angle = 90°, FOV = 192 mm, in-plane resolution = 64 × 64. The slices were tilted 30° along the anterior commissure-posterior commissure plane to obtain better signal in orbitofrontal cortex. Anatomical images (256 × 256 × 176) with 1 × 1 × 1 mm³ resolution were acquired using a T1-weighted three-dimensional magnetization-prepared rapid gradient echo (MPRAGE) sequence (inversion time, 900 ms; TR, 1950 ms; TE, 2.26 ms; flip angle, 90°).
fMRI analyses

Imaging data were analyzed using FEAT (fMRI Expert Analysis Tool) version 5.98, part of the Oxford University Centre for Functional MRI of the Brain (FMRIB) Software Library (www.fmrib.ox.ac.uk/fsl). fMRI data were preprocessed using spatial smoothing (Gaussian kernel of 5 mm full-width at half-maximum), temporal high-pass filtering (100-second cutoff), and motion correction. The preprocessed data were then submitted to a general linear model (GLM) used to analyze the contributions of the experimental factors to the BOLD signal. All within-subject statistical analyses were performed in each subject's own image space and then transformed to standard space before higher-level analysis. Echo planar images were realigned to the anatomical images acquired within each scanning session and normalized to a standard brain [Montreal Neurological Institute (MNI)] using affine transformation (Jenkinson & Smith, 2001).

My primary analyses targeted brain signal changes 1) during presentation of the emotional prime, and 2) during intertemporal decision-making. The emotional prime period was jittered by 1 to 2 seconds in order to isolate this period from decision-making. These two periods were modeled separately for subsequent analyses. First, I analyzed differences in the condition regressors. Seven events were entered into the model: fearful, happy, and neutral primes during the emotional prime period (starting from the onset of the prime and ending when the choice alternatives appeared); and fearful, happy, and neutral primes during decision-making (starting from the presentation of the choice pair and ending when a selection was made). Events during the emotional prime period were orthogonalized to events during the choice period, so that shared variance was attributed to the latter. An additional event was added to account for variance associated with completion of the secondary task.
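To make the modeling step concrete, the construction of one such event regressor can be sketched as follows. This is a minimal illustration with common default double-gamma parameters, not the actual FEAT implementation; the function names and parameter values are assumptions.

```python
import math
import numpy as np

def gamma_pdf(t, shape):
    # gamma density with unit scale (sufficient for shaping the HRF)
    return np.where(t > 0, t ** (shape - 1) * np.exp(-t) / math.gamma(shape), 0.0)

def double_gamma_hrf(t, peak=6.0, undershoot=16.0, ratio=1.0 / 6.0):
    # canonical double-gamma HRF: a peak response minus a late undershoot
    h = gamma_pdf(t, peak) - ratio * gamma_pdf(t, undershoot)
    return h / h.max()

def build_regressor(onsets, durations, n_scans, tr=2.0, dt=0.1):
    # boxcar for the events at high temporal resolution, convolved with the
    # HRF, then sampled at the TR: the basic recipe behind a GLM regressor
    n_hi = int(n_scans * tr / dt)
    box = np.zeros(n_hi)
    for onset, dur in zip(onsets, durations):
        box[int(onset / dt): int((onset + dur) / dt)] = 1.0
    hrf = double_gamma_hrf(np.arange(0.0, 32.0, dt))
    conv = np.convolve(box, hrf)[:n_hi]
    return conv[:: int(tr / dt)]

# e.g., a prime-period regressor: two prime onsets with 1-2 s durations
regressor = build_regressor(onsets=[10.0, 40.0], durations=[1.5, 2.0], n_scans=100)
```

One regressor of this form is built per event type (here, seven plus the secondary-task event), and the set forms the columns of the GLM design matrix.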
The goal of this analysis was to identify regions differentially associated with the emotional primes during the prime period and during intertemporal decision-making (i.e., brain regions in which there was greater recruitment during choice for an emotional prime condition vs. the neutral condition, or for the fear prime vs. the happy prime). Thus the following contrasts were performed: fearful vs. happy, fearful vs. neutral, and happy vs. neutral. Each was examined separately for the period in which the emotional prime was presented and for the subsequent period in which the intertemporal choice was made. These contrasts were thresholded using cluster detection statistics, with a height threshold of Z > 2.3 and a cluster probability of p < 0.05, corrected for the multiple comparisons associated with whole-brain analysis. A correlation analysis was also conducted relating individual differences in BAS/BIS scores to BOLD signal-change differences between emotional primes. Next, I correlated individual differences in the behavioral prime effect with the neuroimaging data. This was performed in FSL by a regression analysis on the subject-level contrast images of the neural prime effect (i.e., fear > happy during the emotion prime period and during decision-making), using the behavioral prime effect (i.e., the difference in LL choice between the fear and happy primes) as the independent variable. Finally, I used psychophysiological interaction (PPI) analysis (Friston et al., 1997) to examine functional coupling (during decision-making) between 1) regions associated with the effect of emotional primes on preference and 2) the rest of the brain. The goal of this connectivity analysis was to identify network activity associated with any observed effects of the emotional primes on intertemporal choice behavior. All of the above regressors were convolved with the canonical double-gamma hemodynamic response function (HRF), and temporal derivatives were added as well.
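At its core, the PPI regressor is an interaction term: the product of a seed region's time course and the psychological contrast of interest. A minimal sketch follows (names are assumptions; full implementations such as FSL's also deconvolve the seed signal to the neural level before forming the product, a step omitted here):

```python
import numpy as np

def ppi_regressor(seed_ts, psych, hrf):
    # seed_ts: BOLD time course extracted from the seed cluster
    # psych: psychological contrast per time point (e.g., fear = +1,
    #        happy = -1, all other time points 0)
    # interaction = mean-centered seed x HRF-convolved psychological regressor
    psych_conv = np.convolve(psych, hrf)[: len(psych)]
    seed = seed_ts - seed_ts.mean()
    return seed * psych_conv
```

The GLM then includes the seed time course, the psychological regressor, and this interaction term; regions whose signal loads on the interaction are those whose coupling with the seed differs between the fear and happy conditions.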
A fixed-effects model was used for cross-run analysis by forcing the random-effects variance to zero in FLAME (FMRIB's Local Analysis of Mixed Effects) (Beckmann et al., 2003). Cross-run results were entered into a group-level analysis using a mixed-effects model (Woolrich et al., 2004).

Results

Prime effect on choice

The percentage of trials on which the LL was chosen (%LL) was calculated for each condition. On average, %LL was 47.06 (SD = 8.63), 52.80 (SD = 8.50), and 51.26 (SD = 6.81) for the happy, fearful, and neutral primes, respectively. The main effect of emotional prime on %LL was significant (F(2,40) = 3.371, p = .044) according to a repeated-measures analysis of variance (ANOVA) with condition (fearful, happy, neutral) as a within-subject factor (Figure 2). Paired t-tests indicated that the fearful prime yielded significantly higher %LL than the happy prime (t(20) = 2.692, p = .014), but neither emotional prime differed significantly from the neutral condition (p = .56 and p = .06 for fearful vs. neutral and happy vs. neutral, respectively).

Figure 2: Mean and SE of individual %LL for each condition. The individual %LL was computed as the distance from each participant's overall %LL. The order of conditions: happy, fear, and neutral prime (from left to right).

I then re-examined the association between prime and choice excluding trials on which an error was made on the secondary task, since such errors suggested that the prime had not been maintained in working memory. As discussed in more detail below, across conditions, errors were made on 26.1% of secondary-task trials. Excluding these error trials, %LL was 47.48 (SD = 7.91), 55.08 (SD = 10.86), and 52.13 (SD = 8.97) for the happy, fearful, and neutral primes, respectively. A repeated-measures ANOVA of %LL as a function of condition revealed a significant main effect (F(2,40) = 4.492, p = .017) (Figure 3).
%LL during the fear prime was significantly higher than during the happy prime (t(20) = 2.985, p = .007), but neither emotional prime condition differed significantly from the neutral prime (p = .311 and p = .053 for fearful vs. neutral and happy vs. neutral, respectively).

Figure 3: Mean and SE of individual %LL for each condition, during trials on which participants made an accurate judgment on the secondary task. The individual %LL was computed as the distance from each participant's overall %LL. The order of conditions: happy, fear, and neutral prime (from left to right).

Mean RT for each subject during decision-making was modeled using a repeated-measures ANOVA with condition as a within-subject factor. There was no significant main effect of condition (p = .264) (Figure 4).

Figure 4: Mean and SE of individual mean RT for each condition during decision-making. The individual mean RT was computed as the distance from each participant's overall mean RT. The order of conditions: happy, fear, and neutral prime (from left to right).

RT was further examined separating trials by whether participants chose the SS or the LL. A repeated-measures ANOVA of RT as a function of condition and choice showed no significant main effects (p = .256 and p = .065 for condition and choice, respectively) and no two-way interaction (p = .619) (Figure 5), though there was a suggestive trend toward faster RTs for LL choices than SS choices.

Figure 5: Mean and SE of individual mean RT for each condition during SS and LL choices. The individual mean RT was computed as the distance from each participant's overall mean RT. RT for SS choices is shown in green, and RT for LL choices in red. The order of conditions: happy, fear, and neutral prime (from left to right).

Accuracy and RT on secondary task

The percentage of trials on which participants made an accurate judgment on the secondary task was 70.87% (SD = 14.19), 74.37% (SD = 14.65), and 76.47% (SD = 18.81) for the happy, fearful, and neutral primes, respectively.
There was no significant difference in accuracy across conditions (p = .280). Mean RT on the secondary task (from the time the second face appeared on the screen until subjects responded) was calculated. A repeated-measures ANOVA with condition as a within-subject factor yielded no significant main effect of condition on RT (p = .250).

Neuroimaging results

Emotional prime period: contrast map of "fear > happy" faces

Relative to happy faces, fearful faces recruited greater activation in the right dlPFC (BA 9), precentral/postcentral gyrus, and left parietal cortex (Z > 2.3, p < .05, corrected for whole-brain analysis) (Figure 6A). The reverse contrast did not indicate any regions in which signal was greater when participants viewed happy (relative to fearful) faces. Individual brain signal change in the right dlPFC cluster for the fear > happy contrast showed a positive trend-level correlation with individual variability in BIS score, a measure of trait withdrawal motivation (Spearman's rho = .473, p = .075) (Figure 6B). A covariate analysis relating individual differences in the size of the behavioral prime effect (%LL_fear - %LL_happy) to the fear > happy activity difference during the emotional prime period (within the clusters showing differential activation for fear > happy, as identified in Figure 6A) did not reveal any significant result.

Figure 6: Neural representation of fear vs. happy facial expressions. A) Significant clusters were observed in the right dorsolateral PFC (dlPFC), precentral/postcentral gyrus, and left parietal cortex during fear > happy prime (Z > 2.3, p < .05, cluster-level correction). B) Participants with greater self-reported behavioral inhibition recruited greater brain activation in the right dlPFC for fear vs. happy faces (p = .075).
Decision-making period: neural prime effect on intertemporal choice

During intertemporal decision-making, the fear (relative to happy) face prime was associated with greater signal change in the ACC, supplementary motor area (SMA), bilateral precentral/postcentral gyrus, PCC/precuneus cortex, and left parietal cortex (Z > 2.3, p < .05, corrected for whole-brain analysis) (Figure 7). No significant differences were observed for the happy > fear contrast during the intertemporal choice period.

Figure 7: Neural prime effect on choice. Areas showing greater signal for the fear > happy prime during intertemporal decision-making (Z > 2.3, p < .05, cluster-level correction) included the anterior cingulate cortex (ACC), supplementary motor area (SMA), bilateral precentral/postcentral gyrus, posterior cingulate cortex (PCC)/precuneus cortex, and left parietal cortex.

Behavioral prime effect vs. neural prime effect

I conducted a covariate analysis relating individual differences in the size of the behavioral prime effect (%LL_fear - %LL_happy) to the neural prime effect (the fear > happy contrast during the decision-making period, within the clusters showing a neural prime effect as identified in Figure 7). This analysis revealed a significant positive association between the behavioral and neural prime effects in the posterior ACC and SMA (Figure 8A).

Figure 8: Correlation between behavioral prime effect and neural prime effect. A) Correlation between the behavioral prime effect (%LL_fear - %LL_happy) and the neural prime effect (regions showing greater association with the fear vs. happy prime) in the posterior anterior cingulate cortex (ACC) and supplementary motor area (SMA) (Z > 2.3, p < .05, cluster-level correction). B) Axial sections of the same contrast at a liberal threshold (Z > 2, p = 0.2, corrected for multiple comparisons), without restricting the search space to the clusters identified by the main "fear > happy" contrast during decision-making.
C) Scatter plot showing the positive relationship between the behavioral and neural prime effects in the posterior ACC and SMA.

Psychophysiological interaction (PPI) analysis

Next, I analyzed the functional coupling between the SMA/ACC cluster from the above regression contrast and other brain regions during trials on which participants were deciding which alternative to select under the fear prime vs. the happy prime. This PPI analysis suggested that ACC/SMA activation was associated with less right anterior dlPFC (BA 10) recruitment during intertemporal choice in the fear condition (relative to the happy condition) (Figure 9). However, a covariate analysis using the difference in LL preference (%LL_fear - %LL_happy) as a covariate for this PPI finding indicated that this ACC/SMA-right dlPFC coupling did not modulate the behavioral prime effect on choice.

Figure 9: PPI analysis result. An inverse relationship was observed between the posterior sector of the anterior cingulate cortex (ACC)/supplementary motor area (SMA) (the seed used for the PPI analysis) and the right anterior dorsolateral prefrontal cortex (BA 10) during intertemporal choice under the fear prime (vs. the happy prime) (Z > 2.3, p < .05, cluster-level correction).

Post-task rating results

Participants reported feeling emotionally happy when they saw the happy faces that appeared first in the scanner task (mean = 8.00, SD = 1.06), emotionally fearful for the fearful faces (mean = 2.73, SD = .85), and neutral for the non-emotional faces (mean = 4.91, SD = .34).

Discussion

The present study investigated behavioral and neural emotional prime effects on intertemporal choice using a dual-task design in which participants made selections between rewards received at different points in time, after the presentation of happy, fearful, and neutral faces. I found that fearful and happy primes were differentiated in both choice behavior and associated brain activity.
Specifically, a fearful face prime (relative to a happy face) was associated with greater brain signal change in the right dlPFC, precentral/postcentral gyrus, and left parietal cortex; recruited greater activity in the ACC, SMA, PCC/precuneus cortex, bilateral precentral/postcentral gyrus, and left parietal cortex during intertemporal decision-making; and was also associated with greater preference for delayed rewards. Moreover, I observed that neural signal in the posterior ACC/SMA during fearful vs. happy primes in intertemporal choice predicted individual behavioral differences in LL preference between these conditions, and PPI analysis suggested that the posterior part of the ACC/SMA was inversely correlated with the right anterior dlPFC (BA 10) during decision-making under the fear prime (relative to the happy prime). Exposure to emotional faces cannot, of course, be presumed to elicit the emotions they depict. However, there is suggestive evidence that perception of emotionally expressive faces induces emotional processes (Wild, Erb, & Bartels, 2001). The emotional primes introduced in my study are incidental to the decisions (Loewenstein & Lerner, 2003). Nevertheless, previous studies have suggested that people incorporate incidental emotions into their decision-making (Bodenhausen, 1993; Loewenstein & Lerner, 2003). For example, Winkielman et al. (2005) presented subliminal happy and sad faces before participants poured and consumed a beverage. The authors found that thirsty people poured and consumed more of the drink after seeing subliminal smiles; they were also willing to pay more for the drink and reported greater wanting of it. Subliminal sad faces showed the opposite effect. The authors argued that the affective primes changed the incentive value of the beverage even though the two events were unrelated. Specific-emotion effects (as opposed to valence effects) have also been observed (Lerner & Keltner, 2000).
People perceived greater risk after being induced to feel fear than after being induced to feel anger (Lerner & Keltner, 2001). Incidental sadness made people favor high-risk options, whereas incidental anxiety increased preference for low-risk options (Raghunathan & Pham, 1999). No study has directly tested the effect of incidental emotions in the intertemporal choice domain at both the behavioral and neural levels. However, related studies suggest that exogenous cues such as sexual stimuli, a highly regarded brand logo, and appetizing desserts make people more likely to choose immediate rewards (Wilson & Daly, 2004; Van den Bergh, Dewitte, & Warlop, 2008; Li, 2008; Murawski et al., 2012). In addition to research examining the effects of appetitive cues on intertemporal choice, aversive "incidental" factors such as bladder pressure have also been considered. Tuk et al. (2011) observed that people with higher levels of bladder pressure (requiring greater maintenance of inhibitory control) showed greater preference for delayed rewards. The authors hypothesized that this finding reflected what Berkman, Burkland, and Lieberman (2009) referred to as the "inhibition spillover effect." In support of this, Tuk and colleagues observed that individual differences in this effect were positively associated with BIS scores as measured by a self-report scale. The differential effects that appetitive and aversive cues have on intertemporal choice might be associated with distinct approach and withdrawal motivation systems (Schneirla, 1959; Lang, Bradley, & Cuthbert, 1990; Gray, 1994). According to this model, the approach system mediates responses to potential rewards, whereas the withdrawal system mediates responses to potential punishment.
This dichotomy has been mapped at the neural-systems level, with the left prefrontal cortex preferentially active during approach and the right preferentially active during withdrawal drives (Davidson, Ekman, Saron, Senulis, & Friesen, 1990; Sutton & Davidson, 1997). Although most supporting evidence for this prefrontal asymmetry hypothesis came from electroencephalography (EEG) work (Davidson et al., 1990), some recent fMRI studies (Herrington et al., 2005; Berkman & Lieberman, 2010) have also observed PFC asymmetries for approach and withdrawal motivation. Berkman and Lieberman (2010) found that this lateralization is specific to approach and withdrawal motivational direction rather than emotional valence. One possible account of the differential effect of happiness and fear on choice might thus relate to approach- and withdrawal-related motivation. Alternatively, different emotions might influence the order in which attention is paid to the choice options. According to query theory (Weber et al., 2007), the option people attend to first substantially influences which option is chosen. Another possible account might relate to the interaction between emotional prime effects and cognition. I introduced emotional primes by asking participants to hold them in working memory. Prior delay discounting studies have suggested that preference for LL decreases when participants are placed under working memory load (Hinson, Jameson, & Whitney, 2003). Compared to other studies without the prime manipulation, preference for LL in my study might therefore be relatively attenuated. However, working memory load was ostensibly equivalent across prime conditions, and I did not observe differences in secondary task performance (i.e., accuracy) between the happy and fear prime conditions. Moreover, %LL differences between the happy and fear prime conditions were not correlated with secondary-task performance differences between the two conditions (p = 0.327).
My results are not consistent with affect regulation findings in which negative emotion was associated with increased preference for SS alternatives (Augustine & Larsen, 2011; Lerner, Li, & Weber, n.d.). Affect regulation theory hypothesizes that a negative affect state triggers the individual to engage in regulation. In an intertemporal choice context, it was further argued, choosing immediately available rewards serves as a regulatory response to alleviate negative affect (Augustine & Larsen, 2011; Lerner, Li, & Weber, n.d.). However, affect state was primed on a trial-by-trial basis in my study, as opposed to the prolonged affect induction (spanning the entire test session) used in previous studies (Augustine & Larsen, 2011; Lerner, Li, & Weber, n.d.). It is plausible that a regulatory response to affect would not be recruited in the present study, given the transience of the emotion induction. The present data are also not explained by the hot-cold dual-system account of intertemporal choice, in which farsighted choice is attributed to the cool system and preference for immediate rewards to the hot system. The hot-cold dual-system account (McClure, Laibson, Loewenstein, & Cohen, 2004; Metcalfe & Mischel, 1999) posits that discounting behavior is driven by a competition between an emotionally "hot" (impatient) system and an emotionally "cold" (patient) system, with an imbalance between the two producing either impatient behavior (hot system > cold system) or patient behavior (hot system < cold system). Any type of emotional arousal (i.e., happy or fearful primes) would presumably increase the engagement of the hot system, and as a result, steeper discounting would be expected. However, I did not observe impatient behavior under either emotional prime. It should be noted, however, that the present study did not include any assessment of the visceral arousal produced by the primes (e.g., psychophysiological measurement).
Unlike previous studies that demonstrated prime effects on discounting, in the present study I also observed prime effects on brain function simultaneously with behavior. First, I found that the right dlPFC and parietal cortex were more responsive to fearful faces than to happy faces. Veling et al. (2011) reported that fearful faces slowed restrained eaters' unintentionally evoked motor responses toward appetizing food rewards, and suggested that fear can act as an intrinsic withdrawal signal against impulsive responses toward tempting rewards. Consistent with this finding, the right dlPFC has been implicated in mediating withdrawal behavior (Davidson & Irwin, 1999; Davidson, 2002). When the right dlPFC was temporarily disrupted, people failed to withdraw attention from angry faces (d'Alfonso et al., 2000). Increased activity in the right dlPFC induced by high-frequency rTMS reduced cocaine craving (Camprodon et al., 2007). Tonic activity in the right dlPFC measured by EEG was correlated with individual variability in behavioral inhibition as measured by the BIS scale (Shackman et al., 2009). I also observed that BOLD signal change in the right dlPFC was positively correlated with individual sensitivity of the behavioral withdrawal system as measured by the BIS, which further supports the interpretation that the right dlPFC activation generated by the fearful prime was relevant to a withdrawal response. The right dlPFC has also been implicated more generally in processing negative affect, including fearful expressions (Davidson, 2002; Murphy, Nimmo-Smith, & Lawrence, 2003; Nitschke, Sarinopoulos, Mackiewicz, Schaefer, & Davidson, 2006; Baeken et al., 2010). Threat-related stimuli are strong competitors for attention. I observed a greater parietal response to fearful faces, consistent with the role of parietal cortex in emotional attention (for a review, see Vuilleumier, 2005).
Taken together, fearful facial expressions might trigger an inhibition response represented in the right dlPFC. Second, I observed that fear (relative to happiness) was associated with greater signal change in the ACC, SMA, PCC, bilateral precentral/postcentral gyrus, precuneus cortex, and parietal cortex during intertemporal choice, and that the neural prime signal in the posterior ACC and SMA was linked to individual variance in the observed prime effect on behavior. The SMA has been implicated in the preparation of intentional action (Libet, Gleason, Wright, & Pearl, 1983). The dorsal sector of the ACC (BA 24) is connected to prefrontal cortex, motor areas, and parietal cortex, and is relevant to processing both top-down and bottom-up stimuli. It has been implicated in detecting conflict (Botvinick, Cohen, & Carter, 2004) and in signaling the need for engagement of executive control. Neuroimaging studies of delay discounting have reported greater activation in the ACC when LL alternatives were selected (Luo et al., 2012), when delay discounting was reduced (Peters & Büchel, 2010), and when value differences between alternatives were smaller (greater task difficulty/conflict) (Marco-Pallares et al., 2010; Pine et al., 2009). Another suggested functional role of the posterior part of the ACC is the detection of internal states such as pain (especially the affective aspect of pain) (Peyron, Laurent, & Garcia-Larrea, 2000; Rainville, 2002). It might be the case that withdrawal signals (represented in the right dlPFC) led to increased recruitment of the posterior part of the ACC during immediately subsequent decision-making, through inhibition spillover. Furthermore, I observed an inverse correlation between the posterior ACC/SMA and the right anterior dlPFC (BA 10) during intertemporal decision-making when the fear prime was introduced vs. the happy prime.
The ACC and dlPFC have both been associated with cognitive control, and the specific roles of these two regions are still a matter of debate (Botvinick, Cohen, & Carter, 2004). However, a widely accepted hypothesis is that the ACC is more involved in monitoring external and internal states and in assigning appropriate control to other regions, and less directly involved in top-down control than the dlPFC (MacDonald, Cohen, Stenger, & Carter, 2000; Kerns et al., 2004; Carter & van Veen, 2007). An inverse coupling between the ACC and dlPFC was observed in the suppression of memories (Kuhl, Dudukovic, Kahn, & Wagner, 2007) and was interpreted as a reduced demand for control. Given these findings, it might be the case that the ACC signaled a reduced self-control demand when it detected the extra inhibition signal elicited by fearful facial expressions. Turning to the possible neural mechanism of this prime effect on choice, I observed withdrawal-related brain signal change in the right dlPFC when participants saw fearful facial expressions vs. happy ones. The dlPFC (especially BA 9 and BA 10) has been associated with "executive control" functions in decision-making, such as biasing behavior toward the goal (Miller & Cohen, 2001), intentional inhibition of prepotent but goal-inconsistent responses (Aron, Robbins, & Poldrack, 2004), and planning (Fincham, Carter, van Veen, Stenger, & Anderson, 2002). This is in accordance with findings from delay discounting studies paired with fMRI that reported differential engagement of the dlPFC during LL choice (McClure, Laibson, Loewenstein, & Cohen, 2004; Wittmann, Leland, & Paulus, 2007; Weber & Huettel, 2008; Rubia, Halari, Christakou, & Taylor, 2009; Luo et al., 2012; Christakou, Brammer, & Rubia, 2011), as LL choice to some extent implicates self-control deployment.
When continuous theta burst stimulation (cTBS) was applied to excite the right dlPFC, the discounting rate was reduced compared with a sham condition (Cho, Ko, Pellecchia, Van Eimeren, et al., 2010). In contrast, decreased activation in the right dlPFC combined with increased activation in the left dlPFC, induced by bifrontal direct current stimulation, caused an increase in SS preference (relative to a sham condition) (Hecht, Walsh, & Lavidor, 2012). Taken together, it is plausible that fearful facial expressions generated withdrawal-related right dlPFC activation, a region causally relevant to farsighted behavior, and that consequently more LL choices were elicited. However, the observation that individual differences in preference for LL rewards between the fear and happy conditions were not significantly associated with signal change in the right dlPFC on fear-prime vs. happy-prime trials makes this account less favored. Alternatively, the analysis relating the neural prime effect on intertemporal choice to the behavioral prime effect speaks directly to the neural mechanisms mediating the observed preference shift toward LL choice during the fear prime (relative to the happy prime). Prime-related brain signal change in the posterior ACC predicted reduced discounting in the fear condition vs. the happy condition. As mentioned above, the posterior sector of the ACC is not only implicated in monitoring conflict, but is also involved in the detection of internal states (Peyron, Laurent, & Garcia-Larrea, 2000; Rainville, 2002). Inhibition signals elicited by fearful primes might be represented in the ACC. My data suggest that the posterior ACC moderated the effect that fear primes had on choice, possibly through inhibition spillover. On this hypothesis, greater activation in the posterior ACC indicated greater inhibition spillover and led to a greater increase in farsighted choice. However, any specific functional interpretation is, at present, speculative.
Furthermore, the PPI analysis suggested that the ACC might function in conjunction with the right dlPFC (BA 10) during intertemporal decision-making when fear was present vs. happiness. But the functional coupling between these two regions did not modulate the behavioral difference between the fear and happy conditions, which leaves the role of this functional pathway in the observed prime effect on choice unclear. In conclusion, my data suggest that emotional primes of fear and happiness differentially affect intertemporal choice. The increased preference for LL rewards after viewing fearful faces (relative to happy faces) was associated with neural activity that I interpret as a relative increase in withdrawal-related signaling produced by the fear prime and carried over to decision-making. The posterior ACC might be a potential neural correlate of inhibition spillover.

Limitations

There are several limitations of the current study. First, the sample size was relatively small. Related to this (especially with regard to the associated risk of Type II error), neither the fear nor the happy prime's effect on choice was significantly different from the neutral condition. Second, the difference in preference for LL rewards between the fear and happy primes might be related to emotional valence or to the specific nature of the emotions. Future studies could introduce other emotions (e.g., disgust, sadness) and examine whether there are differences in discounting behavior between fear and disgust (or other negative emotions).

Chapter 3: Stochasticity in Intertemporal Choice

Introduction

Valuation of rewards typically varies positively with magnitude and negatively with delay. The mapping between delay and value is referred to as "delay discounting." An individual's level of delay discounting is typically modeled by assuming some functional form and then estimating one or more fit parameters that best capture the individual's choices.
This functional form is often referred to as the discount function. What constitutes "good" behavior in the domain of intertemporal choice? From the perspective of neoclassical economics, the functional form (rather than the individual fit parameter) of the discount function is the key. Rational discounting requires that the plans one makes now are the ones that he or she will actually carry out when the future arrives (Strotz, 1956). In other words, if on Monday an individual plans to stay at home on Friday night (forgoing the pleasure of socializing) in order to study for a final exam (in order to improve eventual job prospects in the more distant future), she cannot, if she is rational, change her preference on Friday (at least not in the absence of new information). This idea is consistent with Rational Choice Theory (RCT) (Becker, 1976), which models the individual as maximizing utility based on a stable set of preferences. An individual who devalues all future rewards at the same fixed rate per unit of time ("exponential discounting") does not violate the axioms of Rational Choice Theory, even if that fixed discounting rate is extremely high. By contrast, an individual who discounts future expectancies proportionally to anticipated delay ("hyperbolic discounting") will exhibit dynamic preference inconsistency. That is, for some SS vs. LL pairs, he will reverse an initial preference for the LL reward when the SS reward becomes near at hand. For example, if, in accordance with hyperbolic discounting, the valuation difference between rewards delayed by 100 vs. 101 days is proportionally smaller than the valuation difference between rewards delayed by 0 vs. 1 day, then there are choice pairs that could be offered at 100 and 101 days respectively (e.g., $50 and $55) in which the LL is preferred, but in which preference switches to the SS given the mere passage of time (i.e., after 100 days, when the choice becomes $50 now vs.
$55 in 1 day). So the former type of discounter (exponential) is, from this point of view, rational, and the latter (hyperbolic) irrational. Psychologists focus less on functional form and more on the degree of delay discounting exhibited. Good behavior with respect to this standard, as Jevons put it, requires that "all future pleasures or pains should act upon us with the same force as if they were present" (Jevons, 1871, p. 76). In other words, a rational agent should be moved equally by delayed and immediate expectancies. An organism that discounts future expectancies steeply will not do well over time. And even if this ideal is never completely realized, it serves as the standard: individuals who discount more steeply are designated "impulsive," or as behaving badly. A third potential standard of good intertemporal behavior, in addition to functional form and degree of discounting, is consistency of intertemporal decision-making. Consider an individual performing a typical delay discounting task in which she is asked to choose between SS and LL monetary alternatives. If she values both magnitude and immediacy, there are points at which the trade-off between the two is in balance, and so, if an alternative pair is presented several times with sufficient other alternatives presented in between to prevent her from remembering her past response, she will choose the two alternatives equally often. This is referred to as the point of "stochastic indifference," or just "indifference." Moreover, if one were to present alternatives that diverge slightly from this balance point, again with many trials intermixed, the more preferred alternative will be chosen more than 50% of the time (by definition) but, if the imbalance is sufficiently small, less than 100% of the time. If one graphs the probability of choosing LL against some measure of the trade-off between SS and LL along the X-axis, the typical individual's behavior will result in an "S"-shaped function.
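The preference reversal described above can be verified numerically. The sketch below is my own illustration (not from the dissertation), using the standard hyperbolic form V = A/(1 + kD) and exponential form V = A·d^D with arbitrary demo parameters k = 0.2 and d = 0.99:

```python
def hyperbolic(amount, delay, k=0.2):
    # Hyperbolic discounting: V = A / (1 + k*D); k is an arbitrary demo value.
    return amount / (1 + k * delay)

def exponential(amount, delay, d=0.99):
    # Exponential discounting: V = A * d**D; d is an arbitrary demo value.
    return amount * d ** delay

# Viewed 100 days in advance, the hyperbolic discounter prefers the LL ($55)...
print(hyperbolic(55, 101) > hyperbolic(50, 100))  # True

# ...but once the SS is immediate, preference reverses to the SS ($50 now).
print(hyperbolic(55, 1) > hyperbolic(50, 0))      # False

# The exponential discounter's ordering is unchanged by the passage of time:
print((exponential(55, 101) > exponential(50, 100))
      == (exponential(55, 1) > exponential(50, 0)))  # True
```

Because the exponential discounter devalues both options by the same factor per day, the ratio of their discounted values is constant; only the hyperbolic discounter flips.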
Stochasticity can be characterized with a second layer of modeling that maps the consistency of preference as a function of the distance in discounted value between the alternatives. The more gradual an individual's transition between very low and very high probabilities of a particular choice, the greater the stochasticity in her behavior. A steep discounter can exhibit either high or low stochasticity, as can a shallow discounter. At least conceptually, overall discounting is orthogonal to the level of stochasticity in intertemporal behavior. Prior research has linked the degree of discounting to socially and clinically relevant behavior. For example, it has been hypothesized that individuals exhibiting greater delay discounting are vulnerable to engaging in behaviors that are immediately gratifying but that entail negative future consequences, and empirical reports appear to support this for behaviors including smoking, illicit drug use, pathological gambling, and overeating (Bickel, Odum, & Madden, 1999; Kirby, Petry, & Bickel, 1999; Baker, Johnson, & Bickel, 2003; Odum & Rainaud, 2003; Kirby & Petry, 2004; Heil, Johnson, Higgins, & Bickel, 2006; Higgins, Heil, Sugarbaker, et al., 2007; Hoffman, Schwartz, Huckans, McFarland, et al., 2008; Johnson, Hollander, & Kenny, 2008; Petry & Casarella, 1999; Petry, 2001; Dixon, Marley, & Jacobs, 2003; Weller, Cook, Avsar, & Cox, 2008; Davis, Patte, Curtis, & Reid, 2010). It is also plausible that the degree of stochasticity has relevance to real-world behavior. Just as there are points of stochastic indifference (points where the probability of choosing LL is 50%), there are also points of, for example, 95% LL preference, and of 5% LL preference. In domains where success requires consistent farsightedness (for example, losing weight or maintaining the integrity of one's reputation), successful response to intertemporal trade-offs may be more related to points of 95% LL preference than to points of 50% LL preference.
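The role of the steepness parameter in this second layer of modeling can be illustrated with a logistic choice rule. This is a toy sketch of my own (names and parameter values are illustrative); the x-axis stands in for any measure of the LL's discounted-value advantage:

```python
import math

def p_ll(value_gap, beta):
    """Probability of choosing LL as a logistic function of the signed
    advantage of LL over SS in discounted value; beta sets the steepness."""
    return 1.0 / (1.0 + math.exp(-beta * value_gap))

# Two agents with the same indifference point (gap = 0 -> P = .5) but
# different consistency: small beta = stochastic, large beta = consistent.
for beta in (0.5, 4.0):
    curve = [round(p_ll(gap, beta), 2) for gap in (-2, -1, 0, 1, 2)]
    print(beta, curve)
```

Both curves cross 50% at the same point, so the agents discount identically; only the sharpness of the "S" differs, which is exactly the sense in which stochasticity is orthogonal to degree of discounting.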
In other words, greater stochasticity would, in these domains, be associated with worse outcomes. Conversely, in domains in which success depends only on occasional farsightedness (e.g., booking a flight for a vacation or committing to an automatic savings plan), higher stochasticity might be associated with better outcomes. The goal of this study was to explore whether the consistency of individuals' preferences could be changed from one motivational state to the next. Previous data (including Study 1 of this dissertation) indicate that situational factors affect intertemporal choices (Wilson & Daly, 2004; Van den Bergh, Dewitte, & Warlop, 2008; Li, 2008; Tuk, Trampe, & Warlop, 2011; Murawski et al., 2012). It is plausible that state-like variance exists not only in discounting behavior but also in the stochasticity of individuals' preferences. To investigate, I developed a procedure to elicit intertemporal choice pairs with maximum variability in choice, so that the influence of situational factors on discounting behavior would be more evident. During this procedure, participants were required to make choices between pairs of SS and LL options that ranged from those in which the SS option was just sufficiently large to elicit 100% preference to those in which the LL was just sufficiently large to elicit 100% preference. In Study 1, intertemporal choice pairs involved multiple delays, which makes the stochasticity estimate contingent on a particular form of the discount function. The use of, for example, a hyperbolic functional form to model exponential (or some other non-hyperbolic) discounting will inflate the stochasticity estimate if multiple delays are used.
In order to minimize the extent to which the analyses depended on a particular form of the discount function, which varies between individuals (Coller, Harrison, & Rutström, 2011) and which may also vary within individuals based on state, all choices in the present study were between an immediate amount and a four-month-delayed amount. Also, given Study 1's finding that emotional primes' effect on choice might be associated with approach- and withdrawal-related motivation, I introduced approach- and withdrawal-related motivation primes using a novel "dual-task" method. Interleaved with intertemporal choices (Task 1), participants were instructed that some trials would be interrupted by the presentation of a target requiring a fast approach or withdrawal response (Task 2). The Task 2 target was in some cases an opportunity to gain points (which determined a cash bonus) by rapidly pulling in on a joystick, and in other cases a signal of the threat of losing points, which could be avoided by rapidly pushing out on the joystick. Importantly, the background color during each Task 1 trial indicated to the participant whether the possible Task 2 interruption for that trial was (1) gain ("approach"), (2) loss ("withdrawal"), (3) either gain or loss (approach or withdrawal), or (4) no target possible ("neutral"). As in Task 1, the incentives used in Task 2 were monetary (gain or loss of $0.50 of bonus on each Task 2 trial). Although all Task 1 intertemporal choice trials occurred in one of the above four prime contexts, the Task 2 interruption occurred on only 25% of trials. Half of the time that the Task 2 interruption occurred (1/8th of trials), it appeared just before presentation of the Task 1 alternative pair; on the other half of these trials, the Task 2 target appeared immediately after the Task 1 choice was made.
The trials used in the primary Task 1 analyses were the 87.5% of trials in which Task 1 choices were made without a preceding Task 2 interruption (although I will separately consider Task 1 performance on the 1/8th of trials in which choices followed a Task 2 interruption). All participants additionally completed a questionnaire assessing variability in responsiveness to reward and punishment (BAS and BIS) (Carver & White, 1994). Although the emphasis of the analysis is on the issue of stochasticity, as in Study 1 the data also allow assessment of the effect of the experimental manipulation on intertemporal choice behavior. I originally hypothesized that the approach primes would function similarly to positive emotion faces and the withdrawal primes similarly to negative emotion faces. However, as will be discussed below, there are, in hindsight, important differences, including the fact that in the present experiment the same domain (money) was used in both the primary and secondary tasks.

Materials and Methods

Subjects

Forty-eight healthy volunteers participated in the behavioral study with prime manipulation (mean age = 34.7 ± 8.0 years; 24 females). Their years of education ranged from 12 to 22 (mean = 16 ± 2.5), and their mothers' from 6 to 21 (mean = 14.2 ± 3.5). They were all right-handed, had normal or corrected-to-normal vision, were in good health, and were free from psychological and neurological disorders. All subjects provided written consent to participate in this study and were compensated. The study protocol was approved by the Institutional Review Board of the University of Southern California.

Training before the behavioral task with prime manipulation

Participants completed extensive training before the experimental task.
First, they were trained to respond to different colored backgrounds by pulling the joystick toward them or pushing it away (a green background indicated pulling the joystick toward them, red indicated pushing it away, blue indicated they could choose either direction, and grey indicated they should not respond). During this training session, one of the four colored backgrounds was presented on the screen at random, and participants were asked to respond as required as soon as they saw it. Feedback was provided immediately after the participant's response: a smiley face for a fast and correct response, a sad face for a slow but correct response, and the word "error" for a wrong response (for example, pushing the joystick away when the screen was green). After the initial training, participants were instructed to pull the joystick toward them when a pleasant sound played, and to push it away when an unpleasant sound played. Feedback was given in the same way as before. Finally, participants learned to respond to a combination of colored background and pleasant/unpleasant sound. Each trial of this practice session started with a colored background display indicating the required direction of joystick movement. A movement was required only when a sound also played. When it did, participants were instructed to pull the joystick toward them if the sound was pleasant, and to push it away if it was unpleasant. A green background was always paired with the pleasant sound, a red background with the unpleasant sound, and a blue background with either sound. The word "Error" appeared on the screen if participants moved the joystick when no sound played.
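The training feedback contingencies just described can be summarized in a small sketch. This is my own reconstruction, not the original task code; the function and label names are illustrative:

```python
def feedback(required, response, fast_enough):
    """Training feedback rule: wrong direction -> "error"; correct and fast
    -> smiley face; correct but slow -> sad face."""
    if response != required:
        return "error"
    return "smiley" if fast_enough else "sad"

print(feedback("toward", "away", True))     # error (wrong direction)
print(feedback("toward", "toward", True))   # smiley (fast and correct)
print(feedback("toward", "toward", False))  # sad (slow but correct)
```

On blue-background trials either direction was acceptable, so `required` would simply match whatever direction the participant chose; that case is omitted here for brevity.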
Lastly, to familiarize participants with the intertemporal choice task environment, they answered a few questions about which of two monetary alternatives they preferred, moving the joystick to the left if they preferred the option on the left and to the right if they preferred the option on the right. The choices were between an amount of money today and another amount at a later date (see below for a specific description).

Behavioral intertemporal choice task with motivation prime manipulation

After training, participants performed an individualized adaptive intertemporal choice task intermixed with different motivation primes. Each trial began with the appearance of one of the four colored backgrounds from training, which indicated one of four motivational primes (gain, loss, gain+loss, neutral). Following the 1-second colored background display, the choice alternatives were presented on the screen separated by a line, and participants selected the option they preferred by moving the joystick to the left or right. There was no time limit for the decision, and the chosen option was highlighted in yellow after the response.

Figure 10: Behavioral task with motivation primes. The display outlines three possible sequences of events with the same background color. The first and simplest occurred on 3/4 of Approach (green) trials. The middle and right sequences each occurred on 1/8 of Approach (green) trials.

On some trials (one quarter), a target (a pleasant or unpleasant sound) appeared either before (50%) or after (50%) the presentation of the choice alternatives. When it did, participants were required to make a quick approach or withdrawal response (a pleasant sound indicated pulling the joystick toward them; an unpleasant sound indicated pushing it away).
If a pleasant sound played, participants had an opportunity to win money by rapidly pulling in on the joystick; if an unpleasant sound played, participants had an opportunity to avoid losing money by rapidly pushing the joystick out. Feedback was given after the response: a smiley face plus $0.50 for a fast and accurate "approach" response, a sad face and a $0.50 loss for a slow "withdrawal" response, "no win" for a slow approach response, "no loss" for a fast withdrawal response, and an "error" message if the response was in the wrong direction (for example, approaching when the unpleasant sound played). Feedback related to reaction time was predetermined, such that on half of the trials participants were given feedback consistent with being fast enough, and on the other half feedback consistent with being too slow. The exceptions were trials on which the response time was slower than five seconds, which were always classified as "too slow." The delay of the LL was always 120 days and the delay of the SS was always zero. The LL amount was randomly generated from $25 to $85. The SS amount on each trial was generated using a value of k_e that was continually adjusted throughout the session (detailed explanation below) in order to keep alternative pairs closely matched given the delay discounting exhibited by the participant. Subjects were given 64 trials in a single run, with 16 trials for each of the four prime conditions. Among these 64 trials, there were 16 on which a target sound played (half pleasant, half unpleasant); half of the targets appeared before the choice alternatives and the other half after. The 64 trials were divided into 16 blocks, each comprising one trial of each of the four conditions (approach, withdrawal, approach/withdrawal, neutral) in random order. Choice alternatives within a block had the same k_e, and k_e was adjusted on a block-by-block basis.
Specifically, k_e was adjusted upward a quarter step on a log10 scale if the participant chose the SS option more than twice within a block, so that the relative value of the LL alternatives increased for the next block. Conversely, if the participant chose the LL option more than twice within a block, k_e was adjusted downward a quarter step on the log10 scale, increasing the relative value of the SS alternatives for the next block. If the participant chose the LL and SS alternatives equally often within a block (twice each), k_e remained the same for the subsequent block. Participants completed two runs of this task and were instructed that one trial would be randomly selected at the end of the task, and that they would receive whichever option they had selected on that trial (available after the indicated delay).

Behavioral analysis

The probability of choosing the LL alternative was modeled as a function of log-transformed k_e, as follows:

P(LL) = 1 / (1 + e^(-(α + β·log(k_e))))   (Eq. 2)

in which the α and β parameters specify the relationship between the log-transformed k_e associated with a given alternative pair and an individual's probability of choosing the LL. The parameter β captures the marginal effect of a one-unit increase in log-transformed k_e and reflects the stochasticity of the individual's performance: the larger the β, the less stochastic. The parameter α represents the mean of all other relevant observable factors not explicitly included in the model. It is important to note that k_e is a property of an SS vs. LL alternative pair rather than of the individual's behavior. If k* represents the value that, using Eq. (1), characterizes an individual's underlying tendency to discount (that is, the value of k that satisfies P(LL) = 50%), then she will generally choose the LL option if k_e > k* and the SS option if k_e < k*. However, actual choice will sometimes be at odds with predicted preference, particularly when k_e and k* are similar.
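The block-by-block k_e staircase described above can be sketched as follows. This is a minimal illustration under my own naming, not the original task code:

```python
import math

def update_ke(ke, n_ss, n_ll):
    """Quarter-step staircase on the log10 scale, per the block rule:
    more than two SS choices -> raise k_e (LL becomes relatively better);
    more than two LL choices -> lower k_e; a 2-2 split leaves k_e alone."""
    step = 0.25
    log_ke = math.log10(ke)
    if n_ss > 2:
        log_ke += step
    elif n_ll > 2:
        log_ke -= step
    return 10 ** log_ke

ke = 0.01                                # arbitrary starting value
ke = update_ke(ke, n_ss=3, n_ll=1)       # SS-heavy block -> k_e increases
print(round(math.log10(ke), 2))          # -1.75 (was -2.0)
```

Because k_e generates the SS amount for the next block, this rule keeps the alternative pairs hovering near the participant's own indifference point k*, which is what maximizes the variability in choice that the stochasticity analysis needs.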
The parameter estimates were calculated in R using maximum likelihood estimation. The data were fitted using logit and probit models separately. The logit model assumes that the underlying distribution is logistic, whereas the probit model assumes a normal distribution. Results from the two models were compared based on the Akaike Information Criterion (AIC). AIC has been widely used in model comparison and selection (Burnham & Anderson, 2002; Wagenmakers & Farrell, 2004). Unlike the maximized log-likelihood, which takes only modeling accuracy into account, AIC also considers the number of free parameters (N) in the model, since over-fitted models with many parameters can identify spurious effects. AIC is calculated as:

AIC = -2 ln L + 2N   (Eq. 3)

Thus, the smaller the AIC, the better the model. Model comparisons were carried out in two ways. Using an information-theoretic approach (Pine, 2009), the AIC was summed over all subjects for each model separately, and the absolute difference between the two models was calculated (referred to as ΔAIC). As a rule of thumb, it has been suggested that an absolute difference greater than 2 favors the better-fitting model. Alternatively, a one-sample t-test was performed on the comparisons of the individual AIC scores under each model as a complementary test. The data were then modeled at the subject level, but this time, instead of fitting the data separately for each prime condition, I included "prime" in Eq. 2, as follows:

P(LL) = 1 / (1 + e^(-(α + β·log(k_e) + γ·prime)))   (Eq. 4)

where γ is the effect each prime condition had on the model, and prime is a categorical variable taking the value 1 for the gain prime, 2 for the loss prime, 3 for the gain-or-loss prime, and 4 for the neutral prime.

Results

Prime effect on stochasticity

Trials with no target interruption were included in this analysis; regression models failed to converge for two subjects' data.
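The logit vs. probit comparison via Eq. 3 can be sketched as follows. This is an illustrative toy evaluation on synthetic choices at fixed (not fitted) parameter values, not the dissertation's R code; the probit link is built from the normal CDF via the error function:

```python
import math

def logit_p(x, a, b):
    # Logistic link: P = 1 / (1 + exp(-(a + b*x)))
    return 1.0 / (1.0 + math.exp(-(a + b * x)))

def probit_p(x, a, b):
    # Probit link: P = Phi(a + b*x), with Phi built from erf
    return 0.5 * (1.0 + math.erf((a + b * x) / math.sqrt(2)))

def aic(choices, xs, p_fn, a, b, n_params=2):
    """AIC = -2 ln L + 2N for Bernoulli choice data under a given link."""
    ll = sum(math.log(p_fn(x, a, b)) if c else math.log(1 - p_fn(x, a, b))
             for c, x in zip(choices, xs))
    return -2.0 * ll + 2 * n_params

# Synthetic data: x stands in for log(k_e) centered on the indifference
# point; two "noisy" choices keep the likelihoods below their maximum.
xs = [-2, -1.5, -1, -0.5, 0, 0.5, 1, 1.5, 2]
choices = [0, 0, 0, 1, 0, 1, 1, 1, 1]

print(aic(choices, xs, logit_p, a=0.0, b=2.0))
print(aic(choices, xs, probit_p, a=0.0, b=2.0))
```

Both models carry the same two parameters here, so the AIC comparison reduces to the likelihood term; in the dissertation's analysis the per-subject AICs were additionally summed across subjects to form ΔAIC.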
The best-fit β parameter was calculated for each prime condition for each subject separately. AIC was also calculated for each prime, each subject, and each model in order to compare which model (logit vs. probit) fit the data better. ΔAIC was 2.45 in favor of the probit model (the sum of AIC across subjects was 1565.79 for the probit model and 1568.24 for the logit model) (Table 1). A one-sample t-test on the comparison of the individual AIC scores between the two models (for each participant, using the mean across the four primes) was also significant (t(1,45) = -4.676, p < .001). Given the superior fit of the probit model, it is the basis for all analyses below (Table 2).

Table 1: Individual AIC estimates. [Per-subject AIC values, averaged across prime conditions, for the logit and probit models (46 subjects); totals were 1568.24 (logit) and 1565.79 (probit).]

Table 2: Parameter estimates based on the probit model. [Per-subject best-fit β for each prime condition (gain, loss, gain+loss, neutral; 46 subjects).]

On average, the β parameter was 4.37 (SD = 2.53), 6.79 (SD = 8.90), 4.56 (SD = 2.83), and 4.48 (SD = 2.27) for the gain, loss, gain+loss, and neutral primes, respectively. A repeated-measures ANOVA of stochasticity as a function of prime condition indicated a significant main effect (F(3,135) = 3.044, p = .031). The best-fit β parameter during gain primes was compared with that during loss primes, and a paired t-test suggested significantly greater consistency in choice during the loss vs. gain prime (t(1,45) = 2.017, p = .050). I then compared the best-fit β difference between gain and neutral, as well as between loss and neutral. The loss prime tended to make participants more consistent in their choices than the neutral prime (t(1,45) = 1.829, p = .074) (Figure 11). There were no significant associations between individual differences in BAS/BIS and stochasticity.
Given the skewed distributions of the best-fit β parameters, I examined the association between prime and stochasticity again using the non-parametric Friedman test. The main effect of prime on stochasticity was not significant (p = .56). A Wilcoxon signed-rank test suggested that β tended to be higher during the loss prime than during the gain prime, although not significantly (p = .142).

Figure 11: Prime effect on consistency of choice. The individual best-fit β was derived from the probit model and demeaned for each participant. Conditions are ordered gain, loss, gain+loss, and neutral prime (from left to right).

At the single-subject level, prime had an effect on the model for 35 of 46 subjects (76.1%) at a liberal threshold of p < .2, and for 13 participants (28.3%) at p < .05. Figure 12 displays four subjects' data for illustrative purposes.

Figure 12: Fitted data. Four subjects' actual choices and the fitted probit functions are shown by condition in different colors. Black: gain prime; red: loss prime; green: gain+loss prime; blue: neutral prime.

The percentage of trials on which the LL alternative was chosen (%LL) was also calculated for each condition (Figure 13A). On average, %LL was 52.15 (SD = 8.29), 49.11 (SD = 8.05), 52.26 (SD = 9.72), and 49.02 (SD = 7.82) for the gain, loss, gain+loss, and neutral conditions, respectively. Individual %LL data were subjected to a repeated-measures ANOVA with prime as a within-subject factor. The main effect of prime on %LL was not significant (F(3,141) = 2.158, p = .096). To explore the data further, paired t-tests were conducted to test for differences in choice between prime conditions. The gain prime tended toward a higher %LL than the loss prime (t(47) = 1.829, p = .074), and %LL also tended to be higher during the gain prime than during the neutral prime (t(47) = 1.710, p = .094), although neither comparison was significant. Additionally, %LL was calculated for trials on which the target appeared before participants made their choices.
The %LL difference between the gain and loss prime conditions on these trials was in the same direction as on the other trials, but of considerably greater magnitude (Figure 13B). Individual differences in BAS/BIS were not significantly related to the choice findings.

Figure 13: Prime effect on choice. The individual %LL was calculated as a deviation from each individual's overall %LL. Panel A includes trials without target interruption; Panel B includes trials on which the target appeared before the choice was made. Conditions are ordered gain, loss, gain+loss, and neutral prime (from left to right).

Reaction time (RT) for making decisions was calculated for each condition as the duration between the time the alternatives were presented on the screen and the time the selection was made. A repeated-measures ANOVA with prime as a within-subject factor indicated a significant main effect (F(3,141) = 3.252, p = .024). Paired t-tests indicated that decision-making RT was longer during the gain+loss prime condition than during the gain prime condition (p = .002) and the neutral condition (p = .030). However, there was no RT difference between the gain and loss prime conditions (p = .106) (Figure 14).

Figure 14: Reaction time. The individual mean reaction time (RT) was calculated as a deviation from each participant's overall RT. Conditions are ordered gain, loss, gain+loss, and neutral prime (from left to right).

Discussion

In this study, I explored whether the consistency of preference in intertemporal choice changes from one motivational state to the next. A dual-task design was used to introduce reward- and punishment-related motivation primes while participants made intertemporal choices between alternatives of similar present value. The data suggested a trend toward higher consistency of intertemporal preference during the anticipatory loss-related motivation prime (relative to the gain-related motivation prime).
With regard to the effects of the primes on choice, I also observed a trend toward steeper discounting during the anticipation of loss relative to the anticipation of gain. Given my hypothesis that these prime conditions would correspond to the fearful and happy emotional face primes, respectively, these results appear to be at odds with the observation in Study 1 that fearful faces were associated with shallower delay discounting relative to happy faces. However, it is important to consider that in Study 1, the emotional primes (emotional faces) were unrelated to the intertemporal decisions (which involved money). In this study, the primes (introduced by cues signaling an opportunity to win money or to avoid losing money) and the intertemporal choice task were in the same domain (i.e., money). In addition to approach and withdrawal motivation, there might be something about monetary prizes per se that affects discounting behavior. It is not clear whether the observed findings were associated with the priming of approach vs. withdrawal motivation, or with something specific about posing monetary choices while participants were poised to win or lose money in the unrelated secondary task. Several pieces of evidence suggest that the latter may be the more parsimonious account of the observed findings. First, previous studies that used primes in unrelated domains observed that approach motivation-related situational factors increased preference for SS rewards (Wilson & Daly, 2004; Van den Bergh, Dewitte, & Warlop, 2008; Li, 2008; Murawski et al., 2012), whereas withdrawal motivation-related factors increased preference for LL rewards (Tuk et al., 2011). The pattern I observed in this study was in the opposite direction. Second, the difference in choice behavior between the gain and loss prime conditions was more pronounced on trials in which the target appeared before individuals made their choices.
Arguably, approach-/withdrawal-related motivation might be weaker once participants have received their outcomes (Davidson, 1994). If approach/withdrawal motivation were the driving force behind the observed findings, the %LL difference should therefore have been smaller on the 1/8 of trials in which the target appeared before the choice, which is the opposite of what I observed. Taken together, it is likely that the primes' effect on discounting was related to the anticipation of monetary prizes per se, rather than to the priming of approach and withdrawal motivation. That is to say, it may be a framing effect in which the chance of winning or losing an immediate monetary prize (or actually winning or losing it) affects intertemporal monetary choices. According to prospect theory, the location of the status quo reference point (which can be affected by framing) contributes to preference (Kahneman & Tversky, 1979). The potential gain- and loss-related primes might influence reference points, which in turn might affect participants' subsequent choices. Specifically, an opportunity to gain some money (immediately) would make the SS less attractive, since the marginal gain of additional immediate money would be reduced given the decelerating gain function. In contrast, after encountering an opportunity to lose some money, the appeal of the SS would, according to prospect theory, be marginally greater than in the absence of the potential loss, since the slope of the value function is steeper in the loss domain. For each SS and LL choice pair, participants evaluated the two alternatives and then expressed their preference. Such an evaluation process is likely to fluctuate; previous studies have shown that there is enormous variability in discounting (Chapman, 2000; Ainslie & Monterosso, 2003; Frederick, 2002). In this study, the prime context in which the choice was made was a potential biasing factor contributing to variability in discounting behavior.
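The prospect-theoretic reasoning above can be made concrete with the standard value function, v(x) = x^α for gains and v(x) = -λ(-x)^β for losses (Kahneman & Tversky, 1979). The parameter values below (α = β = 0.88, λ = 2.25) are Tversky and Kahneman's conventional 1992 estimates, used here purely for illustration; note that α and β here are curvature parameters, not the choice-consistency β of the preceding analyses.

```python
def pt_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory value function: concave for gains, steeper (loss-averse) for losses."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

# Diminishing marginal value of gains: an extra $10 is worth less on top of $100,
# so a pending immediate win should make the SS reward's increment less attractive.
print(pt_value(10))                       # value of $10 alone
print(pt_value(110) - pt_value(100))      # smaller: marginal value of $10 on top of $100

# Loss aversion: a $10 loss looms larger than a $10 gain,
# so after a potential loss the immediate SS amount carries more weight.
print(abs(pt_value(-10)) / pt_value(10))  # = lam = 2.25 when alpha == beta
```

This reproduces the two directional claims in the text: the decelerating gain function dilutes the appeal of additional immediate money after a prospective win, while the steeper loss limb magnifies the appeal of immediate money after a prospective loss.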
No prior study has examined the association between anticipatory gain- and loss-related motivation primes and variability in intertemporal choices. However, relevant research from behavioral economics and the neuroscience of decision-making might provide some hints. People are in general more sensitive to losses than to gains (Kahneman & Tversky, 1979), so it might be the case that the loss-related motivation prime reduces stochasticity by increasing the "gain" (in the engineering sense) on value signaling. Lee et al. (2009) reported that people express more consistent preferences when the context elicits greater emotional responses. In an ALE meta-analysis of 12 fMRI studies involving anticipation of both gains and losses (manipulations similar to my primes: cues signaling potential monetary gains and losses), anticipated potential losses activated brain regions such as the bilateral anterior insula to a greater degree than potential wins (Knutson & Greer, 2008). The insula was also more activated during intertemporal choices involving losses than during those involving gains (Xu, Liang, Wang, Li, & Jiang, 2009). The insula is a critical component of the brain's emotional circuitry, involved especially in mapping bodily states that help generate feelings (Damasio, 1994; Naqvi & Bechara, 2009). Given these findings, it is plausible that the loss-related motivation prime triggered a greater emotional response (possibly encoded in the anterior insula), in turn reducing the stochasticity of intertemporal choice behavior. People have difficulty placing value on future rewards. Perhaps this has something to do with the fact that future rewards are, as has been noted, "intangible" (Rick & Loewenstein, 2008) and psychologically "fragile" (Ebert & Prelec, 2007).
Ebert and Prelec (2007) reported that intertemporal choices are relatively insensitive to time, and that such time sensitivity can easily be either enhanced or compromised by incidental factors (e.g., time pressure). The vulnerability of the time domain could be one of the factors contributing to stochasticity in intertemporal choice. Neuroimaging work on delay discounting suggests that the vmPFC is involved in the subjective evaluation of the effect time has on value (Kable & Glimcher, 2007; Peters & Büchel, 2009; Pine et al., 2009). This region also appears causally relevant to expressing consistent preferences: patients with vmPFC damage were significantly more inconsistent in a simple preference judgment task than healthy control subjects (Fellows & Farah, 2007). These findings suggest that the vmPFC may be one of the critical brain regions involved in making consistent evaluations of delayed rewards. On this account, a stronger value signal in the vmPFC would be associated with greater consistency of preference. It is plausible that the subjective value of $1 becomes greater after losing some money, or even after encountering an opportunity to lose some amount of money. This increased value signal (possibly encoded in the vmPFC) might be relevant to the observed increase in consistency. In conclusion, this study suggested a trend toward increased consistency of intertemporal preference during the anticipation of loss (relative to the anticipation of gain). This may be associated with greater emotional responses or increased value signaling elicited by a loss-related prime, which may reduce the noise observed in intertemporal decisions.

References

Addis, D. R., Wong, A. T., & Schacter, D. L. (2007). Remembering the past and imagining the future: Common and distinct neural substrates during event construction and elaboration. Neuropsychologia, 45(7), 1363–1377. Ainslie, G., & Haendel, V. (1983). The motives of the will. In: Gottheil, E., Druley, K., Skodola, T.
& Waxman, H. Etiology aspects of alcohol and drug abuse. Ainslie, G. (1992). Picoeconomics: The strategic interaction of successive motivational states within the person. Cambridge University Press Cambridge. Ainslie, G. (1996). Studying self-regulation the hard way. Psychological Inquiry, 7(1), 16–20. Ainslie, G., & Monterosso, J. (2003). Hyperbolic discounting as a factor in addiction: A critical analysis. In R. Vuchinich & N. Heather (Eds.), Choice, Behavioral Economics, and Addiction. Oxford: Elsevier. Aron, A. R., Robbins, T. W., & Poldrack, R. A. (2004). Inhibition and the right inferior frontal cortex. Trends in Cognitive Sciences, 8(4), 170-177 Augustine, A. A., & Larsen, R. J. (2011). Affect regulation and temporal discounting: Interactions between primed, state, and trait affect. Emotion, 11(2), 403–412. Baeken, C., De Raedt, R., Van Schuerbeek, P., Vanderhasselt, M. A., De Mey, J., Bossuyt, A., & Luypaert, R. (2010). Right prefrontal HF-rTMS attenuates right amygdala processing of negatively valenced emotional stimuli in healthy females. Behavioural Brain Research, 214(2), 450–455. Baker, F., Johnson, M. W., & Bickel, Warren K. (2003). Delay discounting in current and never-before cigarette smokers: Similarities and differences across commodity, sign, and magnitude. Journal of Abnormal Psychology, 112(3), 382-392. Ballard, K., & Knutson, B. (2009). Dissociable neural representations of future reward magnitude and delay during temporal discounting. NeuroImage, 45(1), 143-150. Becker, G. S. (1976). The Economic Approach to Human Behavior. Chicago: University of Chicago Press. Berkman, E. T., Burklund, L., & Lieberman, M. D. (2009). Inhibitory spillover: Intentional motor inhibition produces incidental limbic inhibition via right inferior frontal cortex. NeuroImage, 47(2), 705–712. 90 Beckmann, C. F., Jenkinson, Mark, & Smith, S. M. (2003). General multilevel linear modeling for group analysis in FMRI. NeuroImage, 20(2), 1052-1063. Berkman, E. 
T., & Lieberman, M. D. (2010). Approaching the bad and avoiding the good: Lateral prefrontal cortical asymmetry distinguishes between action and valence. Journal of cognitive neuroscience, 22(9), 1970–1979. Bickel, W. K, Odum, A. L, & Madden, G. J. (1999). Impulsivity and cigarette smoking: delay discounting in current, never, and ex-smokers. Psychopharmacology, 146(4), 447–454. Bickel, W. K., Yi, R., Landes, R. D., Hill, P. F., & Baxter, C. (2011). Remember the future: working memory training decreases delay discounting among stimulant addicts. Biological psychiatry, 69(3), 260–265. Bjork, J. M., Momenan, R., & Hommer, D. W. (2009). Delay Discounting Correlates with Proportional Lateral Frontal Cortex Volumes. Biological Psychiatry, 65(8), 710-713. Bodenhausen, G. V. (1993). Emotions, arousal, and stereotypic judgments: A heuristic model of affect and stereotyping. Academic Press. Retrieved from http://psycnet.apa.org/psycinfo/1993-97388-001 Bodner, R., & Prelec, D. (2003). Self-signaling and diagnostic utility in everyday decision making. The psychology of economic decisions, 1, 105–26. Boettiger, C. A., Mitchell, J. M., Tavares, V. C., Robertson, M., Joslyn, G., DʼEsposito, M., & Fields, H. L. (2007). Immediate Reward Bias in Humans: Fronto-Parietal Networks and a Role for the Catechol-O-Methyltransferase 158Val/Val Genotype. J. Neurosci., 27(52), 14383-14391. Botvinick, M. M., Cohen, J. D., & Carter, C. S. (2004). Conflict monitoring and anterior cingulate cortex: an update. Trends in cognitive sciences, 8(12), 539–546. Bohm, P. (1994). Time Preference and Preference Reversal Among Experienced Subjects: The Effects of Real Payments. The Economic Journal, 104(427), 1370- 1378. Botzung, A., Denkova, E., & Manning, L. (2008). Experiencing past and future personal events: Functional neuroimaging evidence on the neural bases of mental time travel. Brain and Cognition, 66(2), 202-212. 91 Camprodon, J. 
A., Martínez-Raga, J., Alonso-Alonso, M., Shih, M.-C., & Pascual-Leone, A. (2007). One session of high frequency repetitive transcranial magnetic stimulation (rTMS) to the right prefrontal cortex transiently reduces cocaine craving. Drug and Alcohol Dependence, 86(1), 91–94. Carter, C., & van Veen, V. (2007). Anterior cingulate cortex and conflict detection: An update of theory and data. Cognitive, Affective, & Behavioral Neuroscience, 7(4), 367–379. Carter, R. M., Meyer, J. R., & Huettel, S. A. (2010). Functional Neuroimaging of Intertemporal Choice Models: A Review. Journal of Neuroscience, Psychology, and Economics, 3(1), 27-45. Carver, C. S., & White, T. L. (1994). Behavioral inhibition, behavioral activation, and affective responses to impending reward and punishment: The BIS/BAS Scales. Journal of Personality and Social Psychology, 67(2), 319-333. Chapman, G. B, Brewer, N. T., Coups, E. J., Brownlee, S., Leventhal, H., & Leventhal, E. A. (2001). Value for the future and preventive health behavior. Journal of experimental psychology applied, 7(3), 235–250. Chapman, Gretchen B. (1996). Temporal Discounting and Utility for Health and Money. Journal of Experimental Psychology: Learning, Memory, and Cognition, 22(3), 771-791. Chapman, Gretchen B. (2000). Preferences for improving and declining sequences of health outcomes. Journal of Behavioral Decision Making, 13(2), 203-218. Chesson, H., & Viscusi, W. K. (2000). The heterogeneity of time-risk tradeoffs. Journal of Behavioral Decision Making, 13(2), 251-258. Cho, S. S., Ko, J. H., Pellecchia, G., Van Eimeren, T., Cilia, R., & Strafella, A. P. (2010). Continuous theta burst stimulation of right dorsolateral prefrontal cortex induces changes in impulsivity level. Brain Stimulation, 3(3), 170-176. Christakou, A., Brammer, M., & Rubia, K. (2011). Maturation of limbic corticostriatal activation and connectivity associated with developmental changes in temporal discounting. NeuroImage, 54(2), 1344-1354. 
Clark, L., Bechara, A., Damasio, H., Aitken, M. R. F., Sahakian, B. J., & Robbins, T. W. (2008). Differential effects of insular and ventromedial prefrontal cortex lesions on risky decision-making. Brain, 131(5), 1311-1322. Coller, M., Harrison, G. W., & Rutström, E. E. (2011). Latent process heterogeneity in discounting behavior. Oxford Economic Papers. Cook, J. O., & Barnes Jr., L. W. (1964). Choice of delay of inevitable shock. Journal of Abnormal and Social Psychology, 68(6), 669-672. Crean, J. P., de Wit, Harriet, & Richards, Jerry B. (2000). Reward Discounting as a Measure of Impulsive Behavior in a Psychiatric Outpatient Population. Experimental and Clinical Psychopharmacology, 8(2), 155-162. Critchfield, T. S., & Kollins, S. H. (2001). Temporal discounting: basic research and the analysis of socially important behavior. Journal of Applied Behavior Analysis, 34(1), 101-122. Curtis, C. E., & D'Esposito, M. (2003). Persistent activity in the prefrontal cortex during working memory. Trends in Cognitive Sciences, 7(9), 415-423. d'Alfonso, A. A., van Honk, J., Hermans, E., Postma, A., & de Haan, E. H. (2000). Laterality effects in selective attention to threat after repetitive transcranial magnetic stimulation at the prefrontal cortex in female subjects. Neuroscience Letters, 280(3), 195–198. Damasio, A. R. (1994). Descartes' error: Emotion, reason, and the human brain. New York: Quill. Davidson, R. J., Ekman, P., Saron, C. D., Senulis, J. A., & Friesen, W. V. (1990). Approach-withdrawal and cerebral asymmetry: Emotional expression and brain physiology: I. Journal of Personality and Social Psychology, 58(2), 330-341. Davidson, R. J. (1994). Asymmetric brain function, affective style, and psychopathology: The role of early experience and plasticity. Development and Psychopathology, 6, 741–741. Davidson, R. J., & Irwin, W. (1999). The functional neuroanatomy of emotion and affective style. Trends in Cognitive Sciences, 3(1), 11–21. Davidson, R. J. (2002).
Anxiety and affective style: role of prefrontal cortex and amygdala. Biological Psychiatry, 51(1), 68–80. Davis, C., Patte, K., Curtis, C., & Reid, C. (2010). Immediate pleasures and future consequences. A neuropsychological study of binge eating and obesity. Appetite, 54(1), 208-213. 93 Diekhof, E. K., & Gruber, O. (2010). When Desire Collides with Reason: Functional Interactions between Anteroventral Prefrontal Cortex and Nucleus Accumbens Underlie the Human Ability to Resist Impulsive Desires. The Journal of Neuroscience, 30(4), 1488 -1493. Dixon, M. R., Marley, J., & Jacobs, E. A. (2003). Delay discounting by pathological gamblers. Journal of Applied Behavior Analysis, 36(4), 449-458. Dorris, M. C., & Glimcher, Paul W. (2004). Activity in Posterior Parietal Cortex Is Correlated with the Relative Subjective Desirability of Action. Neuron, 44(2), 365-378. Ebert, J. E. J., & Prelec, D. (2007). The fragility of time: Time-insensitivity and valuation of the near and far future. Management Science, 53(9), 1423–1438. Elliott, R., Agnew, Z., & Deakin, J. F. W. (2008). Medial orbitofrontal cortex codes relative rather than absolute value of financial rewards in humans. European Journal of Neuroscience, 27(9), 2213-2218. Ersner-Hershfield, H., Wimmer, G. E., & Knutson, B. (2009). Saving for the future self: Neural measures of future self-continuity predict temporal discounting. Social Cognitive and Affective Neuroscience, 4(1), 85 -92. Fecteau, S., Knoch, Daria, Fregni, F., Sultani, N., Boggio, P., & Pascual-Leone, A. (2007). Diminishing Risk-Taking Behavior by Modulating Activity in the Prefrontal Cortex: A Direct Current Stimulation Study. The Journal of Neuroscience, 27(46), 12500 -12505. Fellows, L. K., & Farah, M. J. (2005). Dissociable elements of human foresight: a role for the ventromedial frontal lobes in framing the future, but not in discounting future rewards. Neuropsychologia, 43(8), 1214-1221. Fellows, L. K., & Farah, M. J. (2007). 
The Role of Ventromedial Prefrontal Cortex in Decision Making: Judgment under Uncertainty or Judgment Per Se? Cerebral Cortex, 17(11), 2669–2674. Figner, B., Knoch, Daria, Johnson, E. J., Krosch, A. R., Lisanby, S. H., Fehr, Ernst, & Weber, E. U. (2010). Lateral prefrontal cortex and self-control in intertemporal choice. Nat Neurosci, 13(5), 538-539. Fincham, J. M., Carter, C. S., van Veen, V., Stenger, V. A., & Anderson, J. R. (2002). Neural mechanisms of planning: A computational analysis using event-related fMRI. Proceedings of the National Academy of Sciences, 99(5), 3346 -3351. 94 FitzGerald, T. H. B., Seymour, B., & Dolan, R. J. (2009). The Role of Human Orbitofrontal Cortex in Value Comparison for Incommensurable Objects. The Journal of Neuroscience, 29(26), 8388 -8395. Forzano, L. B., & Logue, A. W. (1994). Self-Control in Adult Humans: Comparison of Qualitatively Different Reinforcers. Learning and Motivation, 25(1), 65-82. Frederick, S. (2002). Time Discounting and Time Preference: A Critical Review. Journal of Economic Literature, 40, 351-401. Friston, K. ., Buechel, C., Fink, G. ., Morris, J., Rolls, E., & Dolan, R. . (1997). Psychophysiological and Modulatory Interactions in Neuroimaging. NeuroImage, 6(3), 218–229. Fuchs, V. R. (1982). Time Preference and Health: An Exploratory Study. National Bureau of Economic Research Working Paper Series, No. 539. Gill, T. M., Castaneda, P. J., & Janak, P. H. (2010). Dissociable Roles of the Medial Prefrontal Cortex and Nucleus Accumbens Core in Goal-Directed Actions for Differential Reward Magnitude. Cerebral Cortex, 20(12), 2884 -2899. Grabenhorst, F., & Rolls, E. T. (2009). Different representations of relative and absolute subjective value in the human brain. NeuroImage, 48(1), 258-268. Gray, J. A. (1994). Three fundamental emotion systems. The nature of emotion: Fundamental questions, 243–247. Gregorios-Pippas, L., Tobler, P. N., & Schultz, W. (2009). 
Short-Term Temporal Discounting of Reward Value in Human Ventral Striatum. Journal of Neurophysiology, 101(3), 1507 -1523. Hare, R. D. (1966). Preference for delay of shock as a function of its intensity and probability. Psychonomic science. Hare, T. A., Camerer, C. F., & Rangel, A. (2009). Self-Control in Decision-Making Involves Modulation of the vmPFC Valuation System. Science, 324(5927), 646- 648. Hariri, A. R., Brown, S. M., Williamson, D. E., Flory, J. D., de Wit, H., & Manuck, S. B. (2006). Preference for Immediate over Delayed Rewards Is Associated with Magnitude of Ventral Striatal Activity. J. Neurosci., 26(51), 13213-13217. Hecht, D., Walsh, V., & Lavidor, M. (2012). Bi-frontal direct current stimulation affects delay discounting choices. Cognitive Neuroscience, 0(0), 1–5. 95 Heatherton, T. F., & Wagner, D. D. (2011). Cognitive neuroscience of self-regulation failure. Trends in Cognitive Sciences, 15(3), 132-139. Heil, Sarah H., Johnson, M. W., Higgins, Stephen T., & Bickel, Warren K. (2006). Delay discounting in currently using and currently abstinent cocaine-dependent outpatients and non-drug-using matched controls. Addictive Behaviors, 31(7), 1290-1294. Hinson, J. M., Jameson, T. L., & Whitney, P. (2003). Impulsive decision making and working memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 29(2), 298–306. Ho, M.-Y., Mobini, S., Chiang, T.-J., Bradshaw, C. M., & Szabadi, E. (1999). Theory and method in the quantitative analysis of ”impulsive choice” behaviour: implications for psychopharmacology. Psychopharmacology, 146(4), 362-372. Hochman, G., Yechiam, E., & Bechara, Antoine. (2010). Recency gets larger as lesions move from anterior to posterior locations within the ventromedial prefrontal cortex. Behavioural Brain Research, 213(1), 27-34. Hoffman, W., Schwartz, D., Huckans, M., McFarland, B., Meiri, G., Stevens, A., & Mitchell, S. (2008). 
Cortical activation during delay discounting in abstinent methamphetamine dependent individuals. Psychopharmacology, 201(2), 183-193. Herrington, J. D., Mohanty, A., Koven, N. S., Fisher, J. E., Stewart, J. L., Banich, M. T., Webb, A. G., et al. (2005). Emotion-Modulated Performance and Activity in Left Dorsolateral Prefrontal Cortex. Emotion, 5(2), 200-207. Jenkinson, M, & Smith, S. (2001). A global optimisation method for robust affine registration of brain images. Medical Image Analysis, 5(2), 143-156. Jevons, W. (1871). A Theory of Political Economy. London & New York: Macmillan and Co. Jimura, K., Myerson, J., Hilgard, J., Braver, T. S., & Green, L. (2009). Are people really more patient than other animals? Evidence from human discounting of real liquid rewards. Psychonomic Bulletin & Review, 16(6), 1071-1075. Johnson, M. W., & Bickel, W. K. (2002). Within-subject comparison of real and hypothetical money rewards in delay discounting. Journal of the Experimental Analysis of Behavior, 77(2), 129–146. 96 Johnson, P. M., Hollander, J. A., & Kenny, P. J. (2008). Decreased brain reward function during nicotine withdrawal in C57BL6 mice: evidence from intracranial self- stimulation (ICSS) studies. Pharmacology Biochemistry and Behavior, 90(3), 409–415. Kable, J. W., & Glimcher, Paul W. (2007). The neural correlates of subjective value during intertemporal choice. Nat Neurosci, 10(12), 1625-1633. Kable, J. W., & Glimcher, Paul W. (2009). The Neurobiology of Decision: Consensus and Controversy. Neuron, 63(6), 733-745. Kahneman, D., & Tversky, A. (1979). Prospect Theory: An Analysis of Decision under Risk. Econometrica, 47(2), 263-291. Kerns, J. G., Cohen, J. D., MacDonald III, A. W., Cho, R. Y., Stenger, V. A., & Carter, C. S. (2004). Anterior cingulate conflict monitoring and adjustments in control. Science, 303(5660), 1023–1026. Kirby, Kris N., & Herrnstein, R. J. (1995). Preference Reversals Due to Myopic Discounting of Delayed Reward. 
Psychological Science, 6(2), 83 -89. Kirby, K. N, Petry, N. M, & Bickel, W. K. (1999). Heroin addicts have higher discount rates for delayed rewards than non-drug-using controls. Journal of Experimental Psychology: General, 128(1), 78–87. Kirby, K. N, & Petry, N. M. (2004). Heroin and cocaine abusers have higher discount rates for delayed rewards than alcoholics or non-drug-using controls. Addiction, 99(4), 461–471. Kirby, K. N. (2009). One-year temporal stability of delay-discount rates. Psychonomic Bulletin & Review, 16(3), 457-462. Knoch, D., & Fehr, E. (2007). Resisting the Power of Temptations: The Right Prefrontal Cortex and Self-Control. Annals of the New York Academy of Sciences, 1104(1), 123-134. Knoch, Daria, Gianotti, L. R. R., Pascual-Leone, A., Treyer, V., Regard, M., Hohmann, M., & Brugger, P. (2006). Disruption of Right Prefrontal Cortex by Low- Frequency Repetitive Transcranial Magnetic Stimulation Induces Risk-Taking Behavior. The Journal of Neuroscience, 26(24), 6469 -6472. Knutson, B., Rick, S., Wimmer, G. E., Prelec, Drazen, & Loewenstein, G. (2007). Neural Predictors of Purchases. Neuron, 53(1), 147-156. 97 Knutson, Brian, & Greer, S. M. (2008). Anticipatory affect: neural correlates and consequences for choice. Philosophical Transactions of the Royal Society B: Biological Sciences, 363(1511), 3771 -3786. Kobayashi, S., & Schultz, W. (2008). Influence of Reward Delays on Responses of Dopamine Neurons. The Journal of Neuroscience, 28(31), 7837 -7846. Kober, H., Mende-Siedlecki, P., Kross, E. F., Weber, J., Mischel, W., Hart, C. L., & Ochsner, K. N. (2010). Prefrontal–striatal pathway underlies cognitive regulation of craving. Proceedings of the National Academy of Sciences, 107(33), 14811 - 14816. Kuhl, B. A., Dudukovic, N. M., Kahn, I., & Wagner, A. D. (2007). Decreased demands on cognitive control reveal the neural processing benefits of forgetting. Nature Neuroscience, 10(7), 908–914. Kurzban, R. (2011). 
Why everyone (else) is a hypocrite: Evolution and the modular mind. Princeton University Press.
Lang, P. J., Bradley, M. M., & Cuthbert, B. N. (1990). Emotion, attention, and the startle reflex. Psychological Review, 97(3), 377–395.
Lebreton, M., Jorge, S., Michel, V., Thirion, B., & Pessiglione, M. (2009). An automatic valuation system in the human brain: Evidence from functional neuroimaging. Neuron, 64(3), 431–439.
Lee, L., Amir, O., & Ariely, D. (2009). In search of homo economicus: Cognitive noise and the role of emotion in preference consistency. Journal of Consumer Research, 36(2), 173–187.
Lerner, J. S., & Keltner, D. (2000). Beyond valence: Toward a model of emotion-specific influences on judgement and choice. Cognition & Emotion, 14(4), 473–493.
Lerner, J. S., & Keltner, D. (2001). Fear, anger, and risk. Journal of Personality and Social Psychology, 81(1), 146–159.
Lerner, J. S., Li, Y., & Weber, E. U. (n.d.). Sadder, but not wiser: The myopia of misery. Retrieved from http://scholar.harvard.edu/jenniferlerner/publications/sadder-not-wiser-myopia-misery
Levy, I., Lazzaro, S. C., Rutledge, R. B., & Glimcher, P. W. (2011). Choice from non-choice: Predicting consumer preferences from blood oxygenation level-dependent signals obtained during passive viewing. The Journal of Neuroscience, 31(1), 118–125.
Levy, R., & Goldman-Rakic, P. S. (2000). Segregation of working memory functions within the dorsolateral prefrontal cortex. Experimental Brain Research, 133(1), 23–32.
Li, X. (2008). The effects of appetitive stimuli on out-of-domain consumption impatience. Journal of Consumer Research, 34(5), 649–656.
Libet, B., Gleason, C. A., Wright, E. W., & Pearl, D. K. (1983). Time of conscious intention to act in relation to onset of cerebral activity (readiness-potential): The unconscious initiation of a freely voluntary act. Brain, 106(3), 623–642.
Loewenstein, G. (1996). Out of control: Visceral influences on behavior. Organizational Behavior and Human Decision Processes, 65(3), 272–292.
Loewenstein, G., & Lerner, J. S. (2003). The role of affect in decision making. In Handbook of affective science (pp. 619–642).
Louie, K., & Glimcher, P. W. (2010). Separating value from choice: Delay discounting activity in the lateral intraparietal area. The Journal of Neuroscience, 30(16), 5498–5507.
Luo, S., Ainslie, G., Giragosian, L., & Monterosso, J. R. (2009). Behavioral and neural evidence of incentive bias for immediate rewards relative to preference-matched delayed rewards. The Journal of Neuroscience, 29(47), 14820–14827.
Luo, S., Ainslie, G., Pollini, D., Giragosian, L., & Monterosso, J. R. (2012). Moderators of the association between brain activation and farsighted choice. NeuroImage, 59(2), 1469–1477.
MacDonald, A. W., Cohen, J. D., Stenger, V. A., & Carter, C. S. (2000). Dissociating the role of the dorsolateral prefrontal and anterior cingulate cortex in cognitive control. Science, 288(5472), 1835–1838.
Madden, G. J., Petry, N. M., Badger, G. J., & Bickel, W. K. (1997). Impulsive and self-control choices in opioid-dependent patients and non-drug-using control participants: Drug and monetary rewards. Experimental and Clinical Psychopharmacology, 5(3), 256–262.
Mazur, J. E. (1987). An adjusting procedure for studying delayed reinforcement. Quantitative Analyses of Behavior, 5, 55–73.
McClure, S. M., Ericson, K. M., Laibson, D. I., Loewenstein, G., & Cohen, J. D. (2007). Time discounting for primary rewards. The Journal of Neuroscience, 27(21), 5796–5804.
McClure, S. M., Laibson, D. I., Loewenstein, G., & Cohen, J. D. (2004). Separate neural systems value immediate and delayed monetary rewards. Science, 306(5695), 503–507.
McKerchar, T. L., Green, L., Myerson, J., Pickford, T. S., Hill, J. C., & Stout, S. C. (2009). A comparison of four models of delay discounting in humans. Behavioural Processes, 81(2), 256–259.
Metcalfe, J., & Mischel, W. (1999). A hot/cool-system analysis of delay of gratification: Dynamics of willpower. Psychological Review, 106, 3–19.
Miller, E. K., & Cohen, J. D. (2001). An integrative theory of prefrontal cortex function. Annual Review of Neuroscience, 24, 167–202.
Mischel, W., & Grusec, J. (1967). Waiting for rewards and punishments: Effects of time and probability on choice. Journal of Personality and Social Psychology, 5(1), 24–31.
Mischel, W., Grusec, J., & Masters, J. C. (1969). Effects of expected delay time on the subjective value of rewards and punishments. Journal of Personality and Social Psychology, 11(4), 363–373.
Mischel, W., Ayduk, O., Berman, M. G., Casey, B. J., Gotlib, I. H., Jonides, J., Kross, E., et al. (2010). "Willpower" over the life span: Decomposing self-regulation. Social Cognitive and Affective Neuroscience.
Mitchell, J. P., Schirmer, J., Ames, D. L., & Gilbert, D. T. (2011). Medial prefrontal cortex predicts intertemporal choice. Journal of Cognitive Neuroscience, 23(4), 857–866.
Monterosso, J., Ehrman, R., Napier, K. L., O'Brien, C. P., & Childress, A. R. (2001). Three decision-making tasks in cocaine-dependent patients: Do they measure the same construct? Addiction, 96(12), 1825–1837.
Monterosso, J. R., Ainslie, G., Xu, J., Cordova, X., Domier, C. P., & London, E. D. (2007). Frontoparietal cortical activity of methamphetamine-dependent and comparison subjects performing a delay discounting task. Human Brain Mapping, 28(5), 383–393.
Muraven, M., & Baumeister, R. F. (2000). Self-regulation and depletion of limited resources: Does self-control resemble a muscle? Psychological Bulletin, 126(2), 247–259.
Murawski, C., Harris, P. G., Bode, S., Domínguez D., J. F., & Egan, G. F. (2012). Led into temptation? Rewarding brand logos bias the neural encoding of incidental economic decisions. PLoS ONE, 7(3), e34155.
Murphy, F., Nimmo-Smith, I., & Lawrence, A. (2003). Functional neuroanatomy of emotions: A meta-analysis. Cognitive, Affective, & Behavioral Neuroscience, 3(3), 207–233.
Naqvi, N. H., Rudrauf, D., Damasio, H., & Bechara, A. (2007). Damage to the insula disrupts addiction to cigarette smoking. Science, 315(5811), 531–534.
Naqvi, N. H., & Bechara, A. (2009). The hidden island of addiction: The insula. Trends in Neurosciences, 32(1), 56–67.
Navarick, D. J. (1982). Negative reinforcement and choice in humans. Learning and Motivation, 13(3), 361–377.
Nitschke, J. B., Sarinopoulos, I., Mackiewicz, K. L., Schaefer, H. S., & Davidson, R. J. (2006). Functional neuroanatomy of aversion and its anticipation. NeuroImage, 29(1), 106–116.
Odum, A. L., & Rainaud, C. P. (2003). Discounting of delayed hypothetical money, alcohol, and food. Behavioural Processes, 64(3), 305–313.
Paulus, M. P., Lovero, K. L., Wittmann, M., & Leland, D. S. (2008). Reduced behavioral and neural activation in stimulant users to different error rates during decision making. Biological Psychiatry, 63(11), 1054–1060.
Peters, J., & Büchel, C. (2009). Overlapping and distinct neural systems code for subjective value during intertemporal and risky decision making. The Journal of Neuroscience, 29(50), 15727–15734.
Peters, J., & Büchel, C. (2010). Episodic future thinking reduces reward delay discounting through an enhancement of prefrontal-mediotemporal interactions. Neuron, 66(1), 138–148.
Petry, N. M. (2001). Pathological gamblers, with and without substance abuse disorders, discount delayed rewards at high rates. Journal of Abnormal Psychology, 110(3), 482–487.
Petry, N. M., & Casarella, T. (1999). Excessive discounting of delayed rewards in substance abusers with gambling problems. Drug and Alcohol Dependence, 56(1), 25–32.
Peyron, R., Laurent, B., & Garcia-Larrea, L. (2000). Functional imaging of brain responses to pain: A review and meta-analysis (2000). Neurophysiologie Clinique/Clinical Neurophysiology, 30(5), 263–288.
Pine, A., Seymour, B., Roiser, J. P., Bossaerts, P., Friston, K. J., Curran, H. V., & Dolan, R. J. (2009). Encoding of marginal utility across time in the human brain. The Journal of Neuroscience, 29(30), 9575–9581.
Pine, A., Shiner, T., Seymour, B., & Dolan, R. J. (2010). Dopamine, time, and impulsivity in humans. The Journal of Neuroscience, 30(26), 8888–8896.
Platt, M. L., & Glimcher, P. W. (1999). Neural correlates of decision variables in parietal cortex. Nature, 400(6741), 233–238.
Prevost, C., Pessiglione, M., Metereau, E., Cléry-Melin, M.-L., & Dreher, J.-C. (2009). Different valuation systems for delay versus effort discounting in the human brain. NeuroImage, 47(Suppl. 1), S117.
Raghunathan, R., & Pham, M. T. (1999). All negative moods are not equal: Motivational influences of anxiety and sadness on decision making. Organizational Behavior and Human Decision Processes, 79, 56–77.
Rainville, P. (2002). Brain mechanisms of pain affect and pain modulation. Current Opinion in Neurobiology, 12(2), 195–204.
Richards, J. B., Zhang, L., Mitchell, S. H., & de Wit, H. (1999). Delay or probability discounting in a model of impulsive behavior: Effect of alcohol. Journal of the Experimental Analysis of Behavior, 71(2), 121–143.
Rick, S., & Loewenstein, G. (2008). Intangibility in intertemporal choice. Philosophical Transactions of the Royal Society B: Biological Sciences, 363(1511), 3813–3824.
Robbins, T. W., & Everitt, B. J., with Sugathapala, C. L. (2001). Impulsive choice induced in rats by lesions of the nucleus accumbens core. Science, 292(5526), 2499–2501.
Roesch, M. R., Calu, D. J., & Schoenbaum, G. (2007). Dopamine neurons encode the better option in rats deciding between differently delayed or sized rewards. Nature Neuroscience, 10(12), 1615–1624.
Roesch, M. R., Singh, T., Brown, P. L., Mullins, S. E., & Schoenbaum, G. (2009). Ventral striatal neurons encode the value of the chosen action in rats deciding between differently delayed or sized rewards. The Journal of Neuroscience, 29(42), 13365–13376.
Rubia, K., Halari, R., Christakou, A., & Taylor, E. (2009). Impulsiveness as a timing disturbance: Neurocognitive abnormalities in attention-deficit hyperactivity disorder during temporal processes and normalization with methylphenidate. Philosophical Transactions of the Royal Society B: Biological Sciences, 364(1525), 1919–1931.
Schneirla, T. C. (1959). An evolutionary and developmental theory of biphasic processes underlying approach and withdrawal.
Sellitto, M., Ciaramelli, E., & di Pellegrino, G. (2010). Myopic discounting of future rewards after medial orbitofrontal damage in humans. The Journal of Neuroscience, 30(49), 16429–16436.
Shackman, A. J., McMenamin, B. W., Maxwell, J. S., Greischar, L. L., & Davidson, R. J. (2009). Right dorsolateral prefrontal cortical activity and behavioral inhibition. Psychological Science, 20(12), 1500–1506.
Shamosh, N. A., DeYoung, C. G., Green, A. E., Reis, D. L., Johnson, M. R., Conway, A. R. A., Engle, R. W., et al. (2008). Individual differences in delay discounting. Psychological Science, 19(9), 904–911.
Shamosh, N. A., & Gray, J. R. (2008). Delay discounting and intelligence: A meta-analysis. Intelligence, 36(4), 289–305.
Sharot, T., De Martino, B., & Dolan, R. J. (2009). How choice reveals and shapes expected hedonic outcome. The Journal of Neuroscience, 29(12), 3760–3765.
Sharot, T., Shiner, T., & Dolan, R. J. (2010). Experience and choice shape expected aversive outcomes. The Journal of Neuroscience, 30(27), 9209–9215.
Spreng, R. N., Mar, R. A., & Kim, A. S. N. (2009). The common neural basis of autobiographical memory, prospection, navigation, theory of mind, and the default mode: A quantitative meta-analysis. Journal of Cognitive Neuroscience, 21(3), 489–510.
Sripada, C. S., Gonzalez, R., Luan Phan, K., & Liberzon, I. (2010). The neural correlates of intertemporal decision-making: Contributions of subjective value, stimulus type, and trait impulsivity. Human Brain Mapping.
Strotz, R. (1956). Myopia and inconsistency in dynamic utility maximization. Review of Economic Studies, 23, 165–180.
Sugrue, L. P., Corrado, G. S., & Newsome, W. T. (2004). Matching behavior and the representation of value in the parietal cortex. Science, 304(5678), 1782–1787.
Sutton, S. K., & Davidson, R. J. (1997). Prefrontal brain asymmetry: A biological substrate of the behavioral approach and inhibition systems. Psychological Science, 8(3), 204–210.
Thaler, R. (1981). Some empirical evidence on dynamic inconsistency. Economics Letters, 8(3), 201–207.
Thaler, R. H., & Johnson, E. J. (1990). Gambling with the house money and trying to break even: The effects of prior outcomes on risky choice. Management Science, 36(6), 643–660.
Tooby, J., & Cosmides, L. (2008). The evolutionary psychology of the emotions and their relationship to internal regulatory variables. Retrieved from http://doi.apa.org/?uid=2008-07784-008
Tuk, M. A., Trampe, D., & Warlop, L. (2011). Inhibitory spillover: Increased urination urgency facilitates impulse control in unrelated domains. Psychological Science, 22(5), 627–633.
Tulving, E. (2002). Episodic memory: From mind to brain. Annual Review of Psychology, 53(1), 1–25.
Van den Bergh, B., Dewitte, S., & Warlop, L. (2008). Bikinis instigate generalized impatience in intertemporal choice. Journal of Consumer Research, 35(1), 85–97.
Van der Pol, M., & Cairns, J. (2001). Estimating time preferences for health using discrete choice experiments. Social Science & Medicine, 52(9), 1459–1470.
Veling, H., Aarts, H., & Stroebe, W. (2011). Fear signals inhibit impulsive behavior toward rewarding food objects. Appetite, 56(3), 643–648.
Venkatraman, V., Chuah, Y. M., Huettel, S. A., & Chee, M. W. (2007). Sleep deprivation elevates expectation of gains and attenuates response to losses following risky decisions. Sleep, 30(5), 603–609.
Vuilleumier, P. (2005). How brains beware: Neural mechanisms of emotional attention. Trends in Cognitive Sciences, 9(12), 585–594.
Weber, B. J., & Huettel, S. A. (2008). The neural substrates of probabilistic and intertemporal decision making. Brain Research, 1234, 104–115.
Weber, E. U., Johnson, E. J., Milch, K. F., Chang, H., Brodscholl, J. C., & Goldstein, D. G. (2007). Asymmetric discounting in intertemporal choice: A query-theory account. Psychological Science, 18(6), 516–523.
Weller, R. E., Cook, E. W., III, Avsar, K. B., & Cox, J. E. (2008). Obese women show greater delay discounting than healthy-weight women. Appetite, 51(3), 563–569.
Wild, B., Erb, M., & Bartels, M. (2001). Are emotions contagious? Evoked emotions while viewing emotionally expressive faces: Quality, quantity, time course and gender differences. Psychiatry Research, 102(2), 109–124.
Wilson, M., & Daly, M. (2004). Do pretty women inspire men to discount the future? Proceedings of the Royal Society B: Biological Sciences, 271(Suppl. 4), S177–S179.
Winkielman, P., Berridge, K. C., & Wilbarger, J. L. (2005). Unconscious affective reactions to masked happy versus angry faces influence consumption behavior and judgments of value. Personality and Social Psychology Bulletin, 31(1), 121–135.
Wittmann, M., Leland, D., & Paulus, M. (2007). Time and decision making: Differential contribution of the posterior insular cortex and the striatum during a delay discounting task. Experimental Brain Research, 179(4), 643–653.
Woolrich, M. W., Behrens, T. E. J., Beckmann, C. F., Jenkinson, M., & Smith, S. M. (2004). Multilevel linear modelling for FMRI group analysis using Bayesian inference. NeuroImage, 21(4), 1732–1747.
Xu, L., Liang, Z.-Y., Wang, K., Li, S., & Jiang, T. (2009). Neural mechanism of intertemporal choice: From discounting future gains to future losses. Brain Research, 1261, 65–74.
Yoon, J. H., Higgins, S. T., Heil, S. H., Sugarbaker, R. J., Thomas, C. S., & Badger, G. J. (2007). Delay discounting predicts postpartum relapse to cigarette smoking among pregnant women. Experimental and Clinical Psychopharmacology, 15(2), 176–186.