Toward a more realistic understanding of decision-making

Alex Holt
Master of Arts in Economics
University of Southern California
December 2017

Table of Contents

Introduction
Neoclassical approaches to decision-making
Challenges to neoclassical approaches to decision-making
    Heuristics and Biases
    Coherent Arbitrariness
    The Attraction Effect
Descriptive, algorithmic approaches to decision-making
    Expected Utility Theory and Subjective Utility Theory
    Subjectively Weighted Utility Theory, Prospect Theory and Jagda's New Theory of Cardinality
    Rank-Dependent Expected Utility Theory, Cumulative Prospect Theory and Configural Weight Theory
    Decision Field Theory
    Proportional Difference
Computational approaches to decision-making
    Accumulator Models
    Neural Substrates of Value and Self-Control in Simple Choice
    Neural Substrates of Complex Choice
    Multi-attribute choice, Confidence, Dieting, Arithmetic
    Divisive Normalization
Conclusion
References

Introduction

Whether one thought of economics in terms of optimization or in terms of allocating scarce resources, decision-making seemed to be at the heart of the matter. If decision-making was at the heart of the matter, then how individuals decided mattered. It mattered for the way economists modeled the dynamics of the market, and it mattered for the kinds of predictions that were consequently feasible. There were various visions of how individuals decided jostling for supremacy. The normative, homo economicus vision put forth by neoclassical economists imagined that the individual was unboundedly rational. To be unboundedly rational meant to have preferences that were complete and transitive, and to be consistent always. While this made sense logically, it was not exactly accurate, as nothing like a homo economicus existed outside of a microeconomics textbook or a computer simulation.
Pointing out glaring deviations between homo economicus behavior and homo sapiens behavior, behavioral economists offered various descriptive, algorithmic visions of how the short-cut seeking, overwhelmed-by-information homo sapiens made decisions. Neuroeconomists supplemented this scholarship by offering computational frameworks showing where and how decision-making occurred inside the brain. This required understanding the underlying mechanisms of value. Understanding the underlying mechanisms of value was therefore vital for understanding decision-making, thus making it vital for producing feasible economic models, offering precise predictions, and prescribing effective policies.

Neoclassical approaches to decision-making

As was evident from the terminology, neoclassical economics stood on the shoulders of classical economics. It might therefore have seemed reasonable to presume that classical economics had an even more aloof conception of the relationship between economics and human psychology. And yet behavioral economists hastened to mention that some of their own supposedly deviant ideas, whether loss aversion or social utility functions, were propounded by classical economists such as Adam Smith (loss aversion) and Francis Edgeworth (social utility) (Camerer et alii, 2011). That neoclassical orthodoxy did not make allowances for considerations of fairness and human psychology attested to the twentieth-century dispersal of social scientists into separate, insulated disciplines, economists becoming economists and psychologists becoming psychologists (Camerer et alii, 2011). Whereas Adam Smith may have mused about human motivation in The Theory of Moral Sentiments, the mathematically-driven neoclassical economists of the twentieth century did not by and large incorporate Freud's death drive or factor Jung's collective unconscious into their utility functions.
To neoclassical economists, individuals were rational agents, and that was that. One of the tenets of neoclassical economics was that individuals had "rational preferences among outcomes" (Weintraub, 1993). What it meant to have rational preferences was a matter of consistency. The "sharpest" definition of rationality as consistency was provided by revealed preference theory (Blume and Easley, 2008). According to revealed preference theory, the choices individuals made illuminated the preferences they had. This intuitive insight by economist Paul Samuelson was also a new avenue by which to approach decision-making. The previous avenue was unobservable utility functions, which Samuelson's approach reconsidered, suggesting that unobservable utility functions obfuscated the point. The point was to discern preferences. And the optimal way to discern preferences was to observe choices (Glimcher and Fehr, 2014).

To be considered rational, preferences had to be consistent. And to be considered consistent, preferences had to satisfy the weak axiom of revealed preferences, a necessary condition (Samuelson, 1938). Per the weak axiom of revealed preferences, preferences were fixed as long as budget sets did not change. Consider, for example, two bundles, both affordable according to a fixed budget set. One bundle comprised a soda and a burger; the other comprised water and kale salad. Once a person chose the soda-and-burger bundle, as long as the budget set did not change, she could not choose the water-and-kale-salad bundle without violating the weak axiom of revealed preferences. That was the essence of rationality as consistency. The weak axiom of revealed preferences was not the only axiom Samuelson presented; the strong axiom of revealed preferences was another.
What made it strong was a matter of indifference; whereas the weak axiom of revealed preferences allowed individuals to be indifferent between two bundles, the strong axiom of revealed preferences did not: one bundle needed to be strictly preferred over another. In addition to these two criteria came a third, the generalized axiom of revealed preferences, introducing transitivity into the conceptual framework of rational preferences. Consider the previously mentioned bundles and add another: lemonade-and-sushi. If lemonade-and-sushi was preferred to soda-and-burger then, per transitivity, lemonade-and-sushi was preferred to water-and-kale-salad.

The final element in what we consider to be the essential criteria for rationality was the axiom of independence of irrelevant alternatives (Arrow, 1951; Paramesh, 1973). What this meant in traditional choice theory, in which individuals maximized their utility in a context-free way, was that the addition or subtraction of some alternative in a choice set should not have affected previously established preferences. If a choice set for "a night on the town" included only "opera" and "theater" as alternatives, the introduction of another alternative ("cinema," for example) should not affect whether opera was preferred to theater, or vice versa. A corollary to the axiom of independence of irrelevant alternatives was regularity. Regularity was the idea that adding an alternative should not have increased the choice share of any existing alternative. If "opera" and "theater" each had fifty percent choice share, the addition of "cinema" could draw share away from either, but it could not push the share of "opera" or of "theater" above fifty percent.

It was thus not incorrect to assert that, based on neoclassical criteria of rationality, values and preferences were not fluid but fixed. And it was not incorrect to assert that, based on neoclassical criteria of rationality, vexing human concerns such as self-control did not factor into neoclassical notions of value.
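These consistency conditions lend themselves to a mechanical check. The sketch below is an illustration, not part of the original scholarship; it reuses the bundle names from the examples above, computes the transitive closure of a set of strict revealed preferences, and flags cycles, which would violate rationality as consistency:

```python
# Hypothetical observed strict preferences, using the bundles from the text.
preferences = [
    ("lemonade-and-sushi", "soda-and-burger"),    # lemonade-and-sushi preferred
    ("soda-and-burger", "water-and-kale-salad"),  # chosen when both were affordable
]

def transitive_closure(pairs):
    """Return every preference implied by transitivity."""
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

def violates_consistency(pairs):
    """A revealed-preference cycle (x over y and y over x) violates consistency."""
    closure = transitive_closure(pairs)
    return any((b, a) in closure for (a, b) in closure)

closure = transitive_closure(preferences)
print(("lemonade-and-sushi", "water-and-kale-salad") in closure)  # True: implied by transitivity
print(violates_consistency(preferences))  # False: no cycle, the choices are consistent
```

Adding a reversed preference, say water-and-kale-salad over lemonade-and-sushi, would create a cycle and the check would flag the data as inconsistent.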
Nor was it incorrect to assert that, based on neoclassical criteria of rationality, the reality of ever-shifting contexts did not seep into the romantic world of neoclassical economics.

Challenges to neoclassical approaches to decision-making

Heuristics and Biases

A host of experiments demonstrating lapses in human judgment chipped away at the armor of the homo economicus, revealing chinks of human idiosyncrasy. Some of the most cited assaults were led by Daniel Kahneman and Amos Tversky, the former of whom received the Economics Nobel in 2002 "for having integrated insights from psychological research into economic science, especially concerning human judgment and decision-making under uncertainty" (2014). Looking askance at the homo economicus construct, Kahneman and Tversky suggested that in many domains the individual was an uncertain and unmoored animal. Such an uncertain and unmoored animal, according to Kahneman and Tversky, relied on heuristics and corollary biases to make decisions in a complicated world (1974).

The first heuristic Kahneman and Tversky located was "representativeness." Representativeness was the heuristic used when an individual, uncertain whether something or someone belonged to some category, relied on stereotypes to classify that something or someone. Was a married-with-no-children thirty-year-old man named Dick, a man who was "well-liked by his colleagues," a man of "high ability" and "high motivation" who "promise[d] to be successful in his field," an engineer or a lawyer? Subjects were asked this, specifically: What was the probability that Dick was an engineer? Subjects were given a sample breakdown, either seventy lawyers and thirty engineers, or thirty lawyers and seventy engineers. Regardless of the breakdown they were given, subjects submitted fifty-fifty as their answer; there was a fifty percent chance that Dick was an engineer, they reasoned.
This despite the fact that nothing in the previously cited snatch of "worthless" information should have swayed them. In fact, when subjects were not provided with that snatch of "worthless" biographical information, they correctly submitted either three-tenths or seven-tenths as their answer. Indeed, this experiment demonstrated an "insensitivity to prior probability of outcomes," because the irrelevant information caused subjects to disregard the only salient information: the prior probabilities of outcomes.

Yet it seemed that Kahneman and Tversky misclassified their own finding, mislocating the cause of "insensitivity to prior probability of outcomes" in the representativeness heuristic. What Kahneman and Tversky deemed to be "worthless" information, a biographical description that was "totally uninformative," the subjects deemed to be somehow vital to answering the question. And since the "worthless" information did not fit into a stereotype, subjects prone to relying on the representativeness heuristic should have rejected the information. Yet they did not. Thus it appeared that the experiment was better designed to demonstrate how individuals erroneously read into matters, trying to construct a narrative out of random information, to find order in the chaos. Even if Kahneman and Tversky mislocated the cause, they nonetheless demonstrated that underneath the theoretical armor of the homo economicus was someone who did not think critically, someone who misread information, someone who could be tricked.

In a related experiment, Kahneman and Tversky inverted the previously discussed experiment, instead providing seemingly informative biographical information (1983). Subjects were told of a thirty-one-year-old named Linda who had majored in philosophy.
Subjects were told that Linda was "single," "outspoken," "very bright." Subjects were told that Linda was "deeply concerned with issues of social justice" and had "participated in anti-nuclear demonstrations." Eighty-five percent of subjects given this description believed that Linda was more likely a feminist bank clerk than just a bank clerk. Statistically this did not make sense, as the likelihood of being a bank clerk was greater than the likelihood of being a feminist and a bank clerk. Calling this bias the conjunction fallacy, Kahneman and Tversky attempted to show how stereotyping blinded rational probabilistic thinking. And it demonstrably did.

Yet this experiment was not without its critics. Wrote philosopher Jim Holt (2011): "Our everyday conversation takes place against a rich background of unstated expectations—what linguists call 'implicatures.' Such implicatures can seep into psychological experiments. Given the expectations that facilitate our conversation, it may have been quite reasonable for the participants in the experiment to take 'Linda is a bank clerk' to imply that she was not in addition a feminist. If so, their answers weren't really fallacious." But they were fallacious. That implicatures existed suggested that economists should not ignore linguistics. That such implicatures "seep[ed] into psychological experiments" demonstrated human irrationality. This was the point of Kahneman and Tversky: Individuals were not rational agents; when expected to answer rationally, they answered fallaciously. Individuals, because of their wiring, could be tricked; and in the inverted Linda experiment, it was clear that they were wired with helpful representative stereotypes.

In their next experiment, Kahneman and Tversky linked representativeness to "insensitivity to sample size." In this experiment, university undergraduates were told of a town with two hospitals, one large and one small.
In the larger hospital, forty-five babies were delivered every day; in the smaller hospital, fifteen babies were delivered every day. The undergraduates were also told that, in the population, approximately half of the babies delivered were boys. Furthermore, the undergraduates were told that both the small and the large hospital recorded days when sixty percent or more of the babies born were boys. Which hospital recorded more such days within a year, Kahneman and Tversky asked. Twenty-two percent of the undergraduates answered that the larger hospital recorded more such days. Twenty-two percent answered that the smaller hospital recorded more such days. The remaining fifty-six percent answered that both hospitals recorded about the same number of days of abnormal boy births.

The majority was wrong. According to theory, a smaller sample would statistically be more likely to deviate from the population mean, making the smaller hospital the more likely location for more days with an abnormal number of boy births. That this "fundamental notion of statistics [was] evidently not a part of people's repertoire of intuitions," even when those individuals were university undergraduates, was perhaps not surprising. In fact, it seemed as if Kahneman and Tversky's experiments operated better as studies confirming that there was low statistical literacy in the population, rather than as rebukes to economics. Yet the insight that correct probabilistic thinking should not be taken for granted was a major blow to the homo economicus armor, and it was to Kahneman and Tversky's credit that they stressed this human characteristic.
However, the problem in this experiment was (again) that Kahneman and Tversky attempted to link the subjects' wrong answers to the "representativeness" heuristic when it could have been equally plausible that the undergraduates misremembered elementary statistics, or believed the query to be a trick question. Explanations abounded.

Other foibles were discussed, such as the gambler's fallacy: the deluded idea that independent events were not independent but connected by a string of self-correction. While some gamblers seemed to believe in a string of self-correction, other individuals failed to grasp the concept of regression to the mean and expected no correction at all, as subjects were surprised when an above-average performance was followed by a below-average one. As already suggested, many of the results could be interpreted as individuals wanting to discern a pattern where there was none; and in a related experiment, Kahneman and Tversky showed that predictive power seemed magnified when patterns seemed to become apparent. Subjects, for example, felt more confident about predicting the future performance of a straight-B student than that of a student whose report card had a combination of A's and C's. Kahneman and Tversky called this the illusion of validity. In a related experiment, subjects were provided with a paragraph summarizing a student's performance in a single class session. Half of the subjects were then asked to rate the performance; the other half were asked to predict future performance. The evaluations were not different from the forecasts, demonstrating that individuals took for granted that the description had complete predictive power. Perhaps this was all evidence that universities should emphasize critical thinking courses and elementary statistics courses. And yet in another experiment Kahneman and Tversky offered withering evidence showing that even trained psychologists ignored sample sizes.
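The hospital claim itself can be checked with a direct binomial computation. The sketch below is an illustration, not part of the original study; it assumes each birth is an independent fifty-fifty draw and computes the probability of a "sixty percent or more boys" day at each hospital:

```python
from math import comb

def prob_at_least(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# A "sixty percent or more boys" day means at least 9 of 15 births
# at the small hospital, or at least 27 of 45 at the large one.
small = prob_at_least(15, 9)   # roughly 0.30
large = prob_at_least(45, 27)  # roughly 0.12

print(f"small hospital: {small:.3f}")
print(f"large hospital: {large:.3f}")
```

The smaller hospital is more than twice as likely to record such a day, exactly as the regression-to-the-mean logic in the passage predicts.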
In addition to the representativeness heuristic, Kahneman and Tversky outlined the "availability" heuristic. Availability referred to available anecdotal or imaginary evidence that could clue a person into judging the frequency or probability of some event occurring. For example, someone who knew someone whose small business failed could judge the frequency of small business failure to be high. The error occurred when the individual conflated the availability of evidence with the actual frequency of the event. Again, Kahneman and Tversky demonstrated how individuals, who were not intuitively apt at assessing probabilities, relied on imperfect human heuristics as a proxy for probabilities.

The availability heuristic lent itself to a variety of biases. There were retrievability biases, in which "the size of a class [was] judged by the availability of its instances." Usually this occurred because of familiarity, salience, or recency. In a study demonstrating the familiarity bias, experimenters read off names of men and women. The names belonging to one of the genders were famous. Later, when asked whether more men's names or more women's names had been read, subjects answered with the gender associated with the famous individuals. Non-experimental evidence was introduced to discuss recency and salience biases. Witnessing a car crash, rather than reading about a car crash, could do more to sway an individual's estimate of the frequency of accidents. This rang true and was tangentially applicable to consumer decisions, a likely story being: On the way to a food court, someone strictly preferring a soda-and-burger to a water-and-kale-salad spotted a famous person enjoying a water-and-kale-salad; and by the time she reached the food court she had become an irrational agent, suddenly preferring the water-and-kale-salad to the soda-and-burger. There were also biases due to the effectiveness of a search set.
This was when one search set, being more available than another, led individuals to believe that the outcome associated with the more available search set was more frequent. Subjects asked whether abstract nouns appeared more often than concrete nouns said the former, being able to think of more contexts for them. Subjects asked whether words were more likely to begin with a consonant such as "r" or "k," or to have that consonant in the third letter position, said the former, as it was easier to think of words that began with "r" than of words that had "r" in the third letter position.

Kahneman and Tversky also suggested biases of imaginability, in which a subject's ability to imagine one solution or scenario better than another misinformed her perceived frequency. In their experiment, Kahneman and Tversky asked subjects the following GRE-like question: How many different committees could be formed from ten individuals when each committee could have between two and eight members (inclusive)? The results demonstrated that subjects thought that more committees could be formed when each committee had fewer members. Which was false. Committees of five yielded the most variations, while committees of two and committees of eight yielded equal numbers of variations.

Did this experiment actually demonstrate a bias of imaginability, or did it demonstrate that binomial coefficients were not intuitive? Did economic rationality actually entail a firm grasp of binomial coefficients? Perhaps one was no less rational if, upon seeing the committee problem, she consulted a textbook to figure out how to correctly derive the answer. Kahneman and Tversky offered a more practical example of the imaginability bias, suggesting that "vividly portrayed" difficulties could cause someone to overestimate the burden of some task. Imagining someone's parachute malfunctioning, for example, could dissuade a potential skydiver from participating.
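The committee question has a closed-form answer: the number of committees of size k drawn from ten people is the binomial coefficient C(10, k). A quick computation (an illustration, not part of the original experiment) shows why the intuitive answer fails:

```python
from math import comb

# Number of distinct committees of size k drawn from ten individuals.
counts = {k: comb(10, k) for k in range(2, 9)}
print(counts)
# {2: 45, 3: 120, 4: 210, 5: 252, 6: 210, 7: 120, 8: 45}
```

Committees of five are the most numerous, and by symmetry (choosing two members is the same as choosing eight non-members) sizes two and eight tie at forty-five, matching the passage.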
The final heuristic discussed was alternately called "adjustment" or "anchoring." Both described a tendency to rely on a starting value to answer questions regarding unfamiliar subject matter. In an anchoring experiment, subjects were asked whether the percentage of African nations in the United Nations was greater or less than a randomly generated value, ten or sixty-five, that was generated wheel-of-fortune style. Subjects given the higher anchor submitted a median estimate of forty-five percent; subjects given the lower anchor submitted a median estimate of twenty-five percent. In both cases subjects adjusted away from the anchor, but insufficiently, so the arbitrary starting value dragged the final estimate toward itself.

In an example of the "insufficient adjustment bias," two sets of subjects were asked to estimate either the product of eight times seven times six times five times four times three times two times one, or the product of one times two times three times four times five times six times seven times eight. When the digits were presented in ascending order, and the subjects saw the one first, the median guess was five hundred twelve. Which was not exactly a rounding error away from the correct answer of over forty thousand. When the digits were presented in descending order, and the subjects saw the eight first, the median guess was two thousand two hundred fifty. Which was also not exactly a rounding error away from the correct answer, but significantly larger than five hundred twelve.

Again the psychologists successfully tricked their subjects, exposing their naiveté. And again the experiment was artificial, with subjects given five seconds to answer. Yet being manipulated by a framing device was not trivial. If consumers whose preferences flipped once a question was reframed were prevalent, then the armor of the homo economicus was already fractured.
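The arithmetic behind the insufficient-adjustment example is easy to verify (a quick illustration; the median guesses are the figures reported above):

```python
from math import factorial

true_value = factorial(8)  # 8 * 7 * 6 * 5 * 4 * 3 * 2 * 1
print(true_value)          # 40320

ascending_guess = 512      # median guess when "1 x 2 x ... x 8" was shown
descending_guess = 2250    # median guess when "8 x 7 x ... x 1" was shown

# Both anchored guesses fall far short of the true product; even the
# higher, eight-first guess is off by a factor of roughly eighteen.
print(round(true_value / descending_guess))  # 18
```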
Coherent Arbitrariness

It was important to discuss the findings of Kahneman and Tversky in detail because, however flawed some of their methods and conclusions, their contribution to economics was great, insofar as they rerouted economics' course toward a more realistic understanding of decision-making. Moreover, some of their experiments provided inspiration for other key scholarship that challenged economic orthodoxy. For example, while Kahneman and Tversky's anchoring experiment was geared toward unfamiliar information, Ariely et alii (2003; 2006) conducted a handful of experiments elucidating anchoring effects for information that should have been relatively familiar, especially for business students.

In one anchoring experiment, Ariely et alii asked fifty-five business students to evaluate assorted products in two ways (2003). Given a product, the business students were asked whether they would or would not pay the dollar amount equal to the last two digits of their Social Security number, an arbitrary value. The business students were also asked to state a willingness-to-pay value for the product. Later, one product was randomly chosen. Also randomly chosen was the sales price of the product; it was based either on the Social Security number or on the willingness-to-pay figure. In accordance with the arbitrariness hypothesis, subjects with Social Security numbers in the top quintile provided willingness-to-pay values that were from fifty-seven to one-hundred-seven percent higher than the willingness-to-pay values provided by subjects with below-median Social Security numbers. For example, a student with a Social Security number in the top quintile was willing to pay on average fifty-six dollars for a cordless computer keyboard. Compare that to the sixteen dollars offered by a student whose Social Security number was in the bottom quintile.
Despite this clear anchoring effect, subjects still ordered items in a reasonable manner, valuing a keyboard more than a mouse, and a midrange wine lower than a high-range wine. These results therefore vindicated the notion of coherent arbitrariness. Subjects initially had arbitrary valuations that were "highly responsive to both normative and non-normative influences." And that initial arbitrary valuation became "locally coherent" as subjects attempted to organize similar future decisions. Although Ariely et alii went further than Kahneman and Tversky did toward devising a consumer-esque experiment that challenged economic orthodoxy, perhaps the business students were not interested in electronics paraphernalia, and would have acted more rationally had they been dealing with items they actually wanted. Moreover, even interested consumers might not have been privy to the base price of some item.

Moving away from a consumer-esque experiment, Ariely et alii constructed an experiment that gauged whether anchoring effects existed for appraisals of pain. This time, one-hundred-forty-three business students were randomly assigned to one of six conditions. After listening to an annoying sound for thirty seconds, subjects in the "high-anchor" condition were asked if they would accept fifty cents to listen to the sound again, subjects in the "low-anchor" condition were asked if they would accept ten cents, and subjects in the "no-anchor" condition were not asked anything. Thereafter, all subjects were asked their willingness-to-accept value (how much they would need to be compensated) to bear ten, thirty, and sixty seconds of the annoying sound. Two follow-up questions were asked to ensure consistency, not something to be taken for granted in such experiments. Subjects were informed that the computer would randomly select some amount, with lower amounts weighted more than higher amounts.
If the amount was greater than what the subject had stated as her willingness-to-accept for the specified duration, then the subject would hear the sound for the specified duration and be paid accordingly. There were three sets of three trials, one trial for each duration. The results confirmed both parts of the coherent arbitrariness hypothesis. Preferences were arbitrary, as willingness-to-accept values were significantly higher in the high-anchor condition than in the low-anchor or no-anchor conditions. And preferences were coherent insofar as willingness-to-accept values increased directly with the duration of the sound.

Yet the above experiment could be charged with being trivial, in terms of the duration of the pain and the reward for suffering it. Furthermore, there were concerns that students, wanting to preserve their hearing more than they wanted fifty cents, were overly cautious. As such, in the next iteration of the pain experiment, assurances were made that however unpleasant the sound was, it was not deleterious. The durations were multiplied ten-fold (one hundred seconds, three hundred seconds, six hundred seconds). The payouts linked to the anchor were generated by asking subjects to convert the last three digits of their Social Security number into a cash payment in dollars and cents (two-hundred-thirteen converted into two dollars and thirteen cents). Again the hypothesis was vindicated, with coherent willingness-to-accept values that were initially arbitrary. Further underscoring the arbitrariness of the initial value was the fact that the willingness-to-accept values of individuals who conducted the trials in descending order (six hundred seconds, three hundred seconds, then one hundred seconds) diverged significantly from the willingness-to-accept values of individuals who conducted the trials in ascending order (one hundred seconds, three hundred seconds, six hundred seconds).
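The random-payment rule described above resembles a Becker-DeGroot-Marschak (BDM) elicitation, under which stating one's true willingness-to-accept is the optimal strategy. The sketch below is a minimal illustration under that reading, not code from the study, and it abstracts away the weighted draw distribution:

```python
def bdm_trial(stated_wta, drawn_payment):
    """One trial of a BDM-style elicitation, per the rule in the text:
    if the randomly drawn payment exceeds the subject's stated
    willingness-to-accept, she hears the sound and is paid the draw."""
    if drawn_payment > stated_wta:
        return ("listen", drawn_payment)  # endures the sound, paid the drawn amount
    return ("skip", 0.0)                  # no sound, no payment

# Truth-telling is optimal: overstating one's WTA forgoes draws that would
# have been worth taking, while understating it risks enduring the sound
# for less than it is worth to the subject.
print(bdm_trial(stated_wta=0.50, drawn_payment=0.75))  # ('listen', 0.75)
print(bdm_trial(stated_wta=0.50, drawn_payment=0.25))  # ('skip', 0.0)
```

Because the drawn payment, not the stated value, determines what the subject is paid, the stated willingness-to-accept only ever acts as a threshold, which is what makes the elicited values interpretable as true valuations.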
To make the above experiment more economic-esque, the researchers changed the payoff mechanism. Instead of a value being randomly generated and then compared to each subject's willingness-to-accept value, experimenters looked to a dynamic market mechanism to determine the payoffs. Subjects were pitted against one another in an auction, with the subjects stating the three lowest willingness-to-accept values being compensated at the fourth-lowest willingness-to-accept value. If market effects existed, there would have been convergence in willingness-to-accept values amongst subjects with different anchors. There was not.

Given their interesting results, Ariely et alii were curious to what degree individuals' preferences were malleable and vulnerable to framing, a subject they investigated in "Tom Sawyer and the construction of value" (2006). The paper's title referred to an episode from Mark Twain's The Adventures of Tom Sawyer in which the main character, tasked with painting a fence, presented the chore as something desirable, and in effect induced others to pay for the privilege of painting the fence. Whether this episode was merely literary farce or actual possibility interested the researchers. Could individuals even judge the valence of something, the researchers wondered, devising three experiments to locate an answer.

In the first experiment, seventy-five (of one-hundred-forty-six) undergraduates at Berkeley were asked if they would be willing to pay two dollars to attend a poetry recital (this was the so-called Pay Group). Three percent agreed. The other seventy-one Berkeley students were asked if they would attend on the condition of receiving two dollars (this was the so-called Accept Group). Sixty percent agreed. Subsequently, students were told that the reading would in fact be free, and then both groups were asked if they were interested in attending. In the Accept Group, attendance interest dropped from sixty percent to eight percent.
In the Pay Group, attendance interest spiked from three percent to thirty-five percent. The experimenters thus suggested that the first question influenced or primed the response to the second question. In the second experiment, the experimenters again showed that subjects’ attitude to the poetry was contingent on priming. This time, the subjects in the Pay Group were asked to pay ten dollars to listen to ten minutes of poetry and then asked the monetary value of one, three, and six minutes of listening time. The subjects in the Accept Group were offered ten dollars to listen to ten minutes of poetry and then asked the monetary value of one, three, and six minutes of listening time. Results revealed that in both cases subjects extrapolated a direct relationship between time and value (more time meant more value). The experiment inverted the scenario in its next stage. In this stage, subjects previously in the Pay Group were put in the Accept Group, and vice versa. Also in this stage, instead of poetry, subjects were asked about paying to or being paid to participate in a ten-minute decision-making task and then asked to value one, three and six minutes of participation. Regardless of their condition in the first stage of the second experiment, subjects in the second stage of the second experiment behaved as if they were primed for the first time by the scenario in the second stage, demonstrating re-priming, as well as the fading effects of the initial priming. In the third experiment, the experimenters demystified the poetry by playing a sample of the recital. Then subjects were split into Accept and Pay groups, this time determined by the parity of their Social Security number’s final digit. Of the subjects in the Accept Group, sixty-three percent were willing to attend if compensated the dollar value of their Social Security number’s final digit.
Of the subjects in the Pay Group, only twenty percent were willing to attend for the price of the dollar value of their Social Security number’s final digit. As before, subjects in the Accept Group, upon learning that they would not be paid, lost interest in poetry, their attendance-interest dropping to nine percent from sixty-three percent. As before, subjects in the Pay Group, upon learning that they could attend for free, developed a sudden interest in poetry, their attendance-interest spiking from twenty percent to almost fifty percent. Next, subjects in both conditions were asked twenty-one pricing questions. For example: “Would you pay ten dollars to attend? Nine dollars? Eight dollars…Zero dollars? Would you be paid one dollar? Two dollars? Three dollars…Ten dollars?” Subjects in the Pay Group had a mean value of negative one dollar and thirteen cents, meaning they on average preferred to be paid one dollar and thirteen cents to attend. Subjects in the Accept Group had a mean value of negative four dollars and forty-six cents, meaning they on average preferred to be paid four dollars and forty-six cents to attend. The inter-Group difference in mean values attested to an anchoring effect. This difference was significant, even if the results left something to be desired—namely, results showing subjects in the Pay Group having a positive valuation instead of a negative one. In spite of this, the experimenters demonstrated anchoring and priming, again undermining neoclassical conceptions of value while demonstrating that the Tom Sawyer episode was less farce than fact. Describing the Ariely et alii experiments in detail showed that, while Ariely et alii did not make an open-and-shut case against fundamental value, their less stilted and more economic-esque experiments—in service to their coherent arbitrariness hypothesis—posed a significant challenge to the neoclassical idea of fundamental value.
Fundamental value was not a trivial concept in neoclassical economics, as it was assumed that products had fundamental values, not arbitrary ones. For example, in the Theory of General Equilibrium, equilibrium resulted from the interaction of technologies, endowments, and preferences (Arrow and Debreu, 1954). Preferences were assumed to be exogenous, based on fundamental values outside the system. It was a challenge, then, if values were not fundamental but in fact endogenous, determined by other components (such as price) within the system.

The Attraction Effect

In addition to being slothful and arbitrary, individuals were also inconsistent, according to scholarship that found them violating the Axiom of the Independence of Irrelevant Alternatives. In an experiment, one-hundred-fifty-three business students chose between three comparable products which were ranked by two attributes: quality and price (Huber et alii, 1982). Two of the products were equally attractive—one was called the “target,” the other the “competitor”; the third was an asymmetrically dominated decoy, completely dominated on both attributes by the “target” product, but only dominated on one attribute by the “competitor.” Theoretically, the decoy should not have affected the proportion of choices for the “target” or “competitor,” and there should not have been any preference reversals. Yet there were. After two weeks, the subjects completed the product choice task again, this time without the decoy. The results demonstrated preference reversals—a violation of Regularity and the Independence of Irrelevant Alternatives Axiom. These preference reversals were labeled the Attraction Effect. The Attraction Effect was defined as an increase in the share of a brand that was similar to, yet in all ways better than, an asymmetrically dominated alternative (the decoy) introduced into the choice set.
The decoy therefore tricked the “rational” agent into thinking that the target was better than the competitor. Huber and Puto suggested that the Attraction Effect was a form of a larger effect called the Similarity Effect (1983). The other forms of the Similarity Effect were the Substitution Effect and Proportionality. If the Attraction Effect positively affected the “target,” then the Substitution Effect negatively affected the “target,” with more market share going to the less similar “competitor”; meanwhile, the Proportional Effect had a neutral effect, taking away market share in equal proportion from both “target” and “competitor.” To test this conceptual framework Huber and Puto designed a few experiments (1983). As before, subjects were asked to answer surveys to indicate a preference for one of three items—target, competitor, decoy. As before, a few weeks after the initial session, subjects answered another survey without the decoy. As before, an Attraction Effect was demonstrated. Yet the negative Similarity Effect (the Substitution Effect) was not significant; and when this experiment was replicated with four-choice items compared on three attributes, there was still no evidence of a Substitution Effect, endangering the new terminology. Huber and Puto cautioned that their terminology might be vindicated yet—if not in the lab, then in the field. Following up, Heath and Chatterjee analyzed the Attraction Effect phenomenon in a meta-analysis (1995). They put forward two hypotheses correctly, one hypothesis incorrectly, and one hypothesis partially correctly. The first hypothesis they put forward was that expensive brands were better inoculated from the Attraction Effect than cheaper brands, and this was supported by the evidence. The second hypothesis they put forward was that the Attraction Effect was stronger when the asymmetrically dominated decoy was more extreme, and this too was supported by the evidence.
The third hypothesis they put forward was that perceived risk magnified the Attraction Effect, and that consequently the Attraction Effect was stronger for less risky durables than more risky nondurables; this was unsupported by the evidence. The fourth hypothesis they put forward was that the target’s share would increase more when an asymmetrically dominated decoy was introduced than when a viable decoy was introduced, and this was supported by the evidence, but only for relatively expensive brands. Yet Heath and Chatterjee’s conclusions were complicated by two factors—income and familiarity. For urban subjects, the Attraction Effect held for expensive brands; for rural subjects, the Attraction Effect held for cheaper brands (1995). With regard to familiarity, Ratneshwar et alii demonstrated that elaboration eliminated the Attraction Effect in all product categories but beer (1987). A caveat to this was that when a “competitor” was superior on a concrete attribute but inferior on a vaguer attribute, it received a higher ranking thanks to the decoy. So, perhaps a Substitution Effect existed, and Huber and Puto’s terminology might be vindicated yet. There may or may not have been a Substitution Effect, but there was more terminology yet. Simonson suggested a Compromise Effect in which an individual who was uncertain about which attribute was more important decided on a “compromise alternative” that was relatively strong on both attributes, because the compromise alternative was “easiest to justify” (1989). An experiment, in which some subjects were asked to justify their rankings to their classmates while others were not, yielded evidence of Compromise and Attraction effects.
Subjects who were asked to justify their rankings to their classmates were more liable to the Attraction Effect than control subjects, because asymmetric domination decreased both the perceived likelihood of criticism and the difficulty of justifying a choice of that alternative, whereas a compromise choice decreased only the perceived likelihood of criticism, not the difficulty of justifying a choice. In a follow-up experiment, in which marketing students were asked to perform a similar task but to “think aloud their choices,” the results showed that compromise alternatives took longer deliberation. Yet Simonson’s experiments did not contradict Ratneshwar et alii, whose research asserted that elaboration eliminated the Attraction Effect. Elaboration was not the same as deliberation. The former provided useful information, the latter a path to rationalization. As Ratneshwar et alii investigated whether elaboration mitigated the Attraction Effect, so too did Mishra et alii (1993). They too found support that information perceived to be of “high quality” could be useful for mitigating the Attraction Effect. Yet if Mishra et alii found anything, it was that even mitigating the Attraction Effect was case-by-case, contingent. “Product class knowledge,” for example, was chiefly useful for mitigating the Attraction Effect for beers. Meanwhile, decoy popularity increased the Attraction Effect for cars, televisions, and beers, while greater market shares of the decoy were conducive to the Attraction Effect in cars and beers. Thus Ratneshwar et alii were overstating when they claimed that rationality thrived with greater available information. The bigger point was the idea of contingency. A decision was based on a host of attributes, some of which were vague.
Even when the attributes were concrete, certain considerations—such as whether the decision was to be kept private or made public, or whether the decision’s rationale had to be verbalized or kept to the swirl of one’s internal monologue—influenced the decision-making of the individual. What seemed to recur in the research was a dearth of consistency and a surfeit of contingency. And contingency was inconsistent with homo economicus.

Descriptive, algorithmic approaches to decision-making

Expected Utility Theory and Subjective Utility Theory

If value was set arbitrarily—and if individuals were not homo economicus but inconsistent, liable to trickery, and dependent on shortcuts—and if choices were not deterministic but probabilistic, did there exist some framework to understand decision-making? Expected Utility Theory was one such theory. Proposed by von Neumann and Morgenstern in 1947, it stated that individuals chose “between risky or uncertain prospects by comparing…the weighted sums obtained by adding the utility values of outcomes multiplied by their respective probabilities” (Mongin, 1997). Expected Utility Theory allowed for risk-neutral, risk-loving and risk-averse individuals. Subjective Utility Theory was another. It stated that individuals used a personal utility function and a (personal) Bayesian probability distribution, and that individuals’ decisions could be arranged such that convex combinations of the individual’s decisions would preserve preferences (Savage, 1954). Both Expected Utility Theory and Subjective Utility Theory were challenged by a gambling scenario presented by Maurice Allais (1953). This scenario presented two sets of gambles. In the first set of gambles, subjects chose between one million dollars with complete certainty and the following: one million dollars with an eighty-nine percent chance, five million dollars with a ten percent chance, and zero dollars with a one percent chance.
Typically, individuals chose the first gamble, preferring one million dollars with complete certainty. In the second set, subjects chose between two gambles. The first gamble offered a million dollars with an eleven percent chance and zero dollars with an eighty-nine percent chance. The second gamble offered zero dollars with a ninety percent chance and five million dollars with a ten percent chance. Typically, individuals chose the second gamble, even though they had a slightly higher chance of going home with nothing. This combination of typical choices was irrational according to Expected Utility Theory, because the gambles could be restructured according to two principles. The first principle was the principle of coalescing: Equal outcomes could be combined by adding probabilities. The second principle was the principle of branch independence: When Gamble A and Gamble B had a common branch, a preference for Gamble A or Gamble B had to be independent of that common branch—a common branch being defined as a common outcome of a common event with a common probability. In the first set of gambles, the first choice was partitioned as a coalescing of a million dollars at eighty-nine percent and a million dollars at eleven percent. Then, compared to one million dollars at eighty-nine percent, zero dollars at one percent, and five million dollars at ten percent, the one million at eighty-nine percent canceled out, leaving one million dollars at eleven percent versus five million at ten percent and zero dollars at one percent. The second gamble was restructured as zero dollars at eighty-nine percent and one million dollars at eleven percent versus zero dollars at one percent, zero dollars at eighty-nine percent, and five million dollars at ten percent.
Here, the zero dollars at eighty-nine percent canceled out, leaving one million at eleven percent versus five million at ten percent and zero dollars at one percent. Thus the two sets of gambles were the same. Yet individuals behaved contradictorily, because of how the bets appeared at first blush. This again demonstrated that individuals were inconsistent, not adroit with probabilities, and/or slothful. Criticism that this experiment was trivial—because however well-endowed the economics department may have been, million-dollar payoffs were unfeasible—was rebutted when similar results were reproduced with less fanciful figures (Kahneman and Tversky, 1979).

Subjectively Weighted Utility Theory, Prospect Theory and Jagda’s New Theory of Cardinality

To reconcile the Allais Paradox other theories arose. One theory was Subjectively Weighted Utility Theory (Edwards, 1954; 1962; Karmarkar, 1978). According to this theory, probabilities themselves were non-linearly weighted such that “low probabilities [were] overstated and high probabilities understated” (Karmarkar, 1978). This was why individuals appeared to act irrationally when faced with the Allais gamble, when in fact they were simply weighing probabilities that affected their overall utilities. Another theory with a similar thrust was Prospect Theory (Kahneman and Tversky, 1979; 1986). In Prospect Theory, framing mattered. Losses imbued individuals with more pain than winning imbued them with pleasure. This led to an asymmetric, s-shaped value function which was steeper in the losses’ dimension than the gains’. And like Subjectively Weighted Utility Theory, Prospect Theory also represented probabilities as being subjectively perceived, and thus transformed them non-linearly in its formulation of value. Another theory in this vein was Jagda Handa’s New Theory of Cardinality, also proposing a framework in which probabilities were non-linearly weighted (1977).
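The restructuring of the Allais gambles by coalescing and branch cancellation, described earlier, can be verified mechanically. The sketch below is illustrative and not from the source: it represents each gamble as a map from outcomes to probabilities (exact fractions avoid rounding error), subtracts the probability mass common to a pair of gambles outcome by outcome, and confirms that both Allais choice problems reduce to the same comparison, one million dollars at eleven percent versus five million at ten percent and zero at one percent.

```python
from fractions import Fraction as F

def cancel_common(a, b):
    """Remove the common branches of two gambles: for each shared
    outcome, subtract the smaller probability from both sides."""
    a, b = dict(a), dict(b)
    for outcome in set(a) & set(b):
        common = min(a[outcome], b[outcome])
        a[outcome] -= common
        b[outcome] -= common
    # Drop branches whose probability has been fully canceled.
    a = {x: p for x, p in a.items() if p > 0}
    b = {x: p for x, p in b.items() if p > 0}
    return a, b

# First Allais choice: $1M for sure versus the mixed gamble.
a1 = {1_000_000: F(1)}
b1 = {1_000_000: F(89, 100), 5_000_000: F(10, 100), 0: F(1, 100)}

# Second Allais choice.
a2 = {1_000_000: F(11, 100), 0: F(89, 100)}
b2 = {5_000_000: F(10, 100), 0: F(90, 100)}

# Both choices reduce to: $1M at 11% versus ($5M at 10%, $0 at 1%).
print(cancel_common(a1, b1))
print(cancel_common(a2, b2))
```

Subtracting the shared probability mass per outcome performs the coalescing and the cancellation in a single step; a certain million is implicitly split into eighty-nine percent plus eleven percent before the common branch is removed.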
Yet these probability weighting theories, while explaining the Allais Paradox, presented their own contradictions. Peter Fishburn described such a contradiction in a mathematical proof, showing that probability weighting theories violated “transparent dominance” (1978). Transparent dominance was the intuitive idea that if Gamble A had outcomes strictly better than Gamble B, then Gamble A should be preferred to Gamble B. Transparent dominance was also the intuitive idea that if Gamble A and Gamble B had the same outcomes, while Gamble A had higher probabilities of better outcomes, then Gamble A should be preferred to Gamble B. Common sense. Yet not according to the probability weighting theories. As an example, consider the following set of gambles. Gamble A: a fifty-fifty shot at either one-hundred dollars or two-hundred dollars. Gamble B: a ninety-nine percent chance at one-hundred dollars and a one percent chance at two-hundred dollars (Birnbaum, 1999). Certainly most individuals without thinking too hard chose Gamble A, since the outcomes in Gamble A were the same as in Gamble B while the two-hundred-dollar outcome was more likely in Gamble A. Yet according to theories in which probabilities were subjectively weighted, individuals found greater value in the transparently weaker Gamble B. As another example, consider another set of gambles. Gamble C: a fifty-fifty chance of getting either one-hundred-ten dollars or one-hundred-twenty dollars. Gamble D: a ninety-eight percent chance of winning one-hundred-three dollars, a one percent chance of winning one-hundred-two dollars, and a one percent chance of winning one-hundred-one dollars (Birnbaum, 1999). Certainly most individuals without thinking too hard chose Gamble C, since both outcomes in Gamble C were of higher value than any of the outcomes in Gamble D. Yet according to theories in which probabilities were subjectively weighted, individuals found greater value in the transparently weaker Gamble D.
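The second pair of gambles shows concretely how branch-by-branch weighting can break dominance. The sketch below is not from Birnbaum’s paper: it assumes an inverse-S weighting function of the form later popularized by Tversky and Kahneman (with γ = .61) and applies it to each branch’s probability separately, in the spirit of Subjectively Weighted Utility. Under those assumed weights, the dominated Gamble D comes out ahead of Gamble C.

```python
def w(p, gamma=0.61):
    """Inverse-S probability weight: small probabilities are
    overweighted, large probabilities underweighted."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def swu(gamble):
    """Subjectively weighted utility: weight each branch's probability
    separately, so the weights need not sum to one."""
    return sum(w(p) * x for x, p in gamble.items())

# Birnbaum's (1999) second example: C dominates D outright,
# since C's worst outcome ($110) beats D's best ($103).
C = {110: 0.50, 120: 0.50}
D = {103: 0.98, 102: 0.01, 101: 0.01}

# Branch-by-branch weighting ranks the dominated gamble D higher.
print(round(swu(C), 2), round(swu(D), 2))
```

The mechanism is that the two one-percent branches in D are heavily overweighted while C’s two fifty-percent branches are both underweighted, so D’s total weight exceeds C’s even though every outcome in C is better.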
Indeed these examples were extreme. Nonetheless they did contradict frameworks based on subjectively weighted probability.

Rank-Dependent Expected Utility Theory, Cumulative Prospect Theory and Configural Weight Theory

Other frameworks emerged to patch up subjectively weighted probability theories’ drawbacks. One new framework was Rank-Dependent Expected Utility (Quiggin, 1982). While still positing that individuals perceived probabilities in a subjectively biased way, Rank-Dependent Expected Utility deemphasized the idea that individuals overweighted the probability of an unlikely event, such as winning two-hundred dollars with a one percent chance. Instead it suggested that individuals overweighted unlikely extreme outcomes, such as major natural disasters. This was more than a semantic change; it changed the way in which probabilities were transformed. Instead of transforming the probabilities themselves, the theory proposed transforming the cumulative probability distribution function. In this new framework, the utility of a gamble was expressed such that the utility of Outcome A was multiplied by the weighted cumulative probability that the outcome was greater than or equal to Outcome A, minus the weighted cumulative probability that the outcome was strictly greater than Outcome A. In concert with this insight, Kahneman and Tversky updated their Prospect Theory, changing its name to Cumulative Prospect Theory. From the framing effect to loss aversion, most insights remained intact from the previous iteration of the theory. What had changed was that their probability weighting function became a cumulative probability weighting function, which looked as follows: W(P) = P^γ / [P^γ + (1 − P)^γ]^(1/γ), where γ = .61 (Tversky and Kahneman, 1992). A follow-up model included a risk aversion index c: W(P) = cP^γ / [cP^γ + (1 − P)^γ]^(1/γ) (Tversky and Wakker, 1995).
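A minimal sketch of the cumulative approach, assuming the weighting function just given with γ = .61: each outcome’s decision weight is the weighted probability of doing at least that well, minus the weighted probability of doing strictly better. The outcomes stand in for utilities here (the value function is omitted for brevity). Because the weights sum to W(1) = 1, the dominated Gamble D from the earlier transparent-dominance example can no longer outrank the dominating Gamble C.

```python
def W(p, gamma=0.61):
    """Cumulative weighting function W(P) = P^γ / [P^γ + (1−P)^γ]^(1/γ)."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def rank_dependent_value(gamble):
    """Rank-dependent valuation: transform the decumulative
    distribution rather than the individual branch probabilities."""
    value, p_better = 0.0, 0.0
    for x in sorted(gamble, reverse=True):   # best outcome first
        p_at_least = p_better + gamble[x]    # P(outcome >= x)
        weight = W(p_at_least) - W(p_better) # decision weight for x
        value += weight * x
        p_better = p_at_least
    return value

# Decision weights now sum to one, so transparent dominance is restored.
C = {110: 0.50, 120: 0.50}
D = {103: 0.98, 102: 0.01, 101: 0.01}
print(rank_dependent_value(C), rank_dependent_value(D))
```

The design point is that overweighting attaches to the tails of the ranked distribution rather than to every small branch probability, which is exactly the shift from Subjectively Weighted Utility to the rank-dependent family.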
Building on this with experimental work, Wu and Gonzalez found that the probability weighting curve was concave up to a probability of about forty percent, and convex beyond that point (1996). Yet another challenge emerged—violations of Branch Independence. Branch Independence was defined earlier: When Gamble A and Gamble B had a common branch, a preference for Gamble A or Gamble B had to be independent of that common branch, which was defined as a common outcome of a common event with a common probability. In other words, “if two gambles [had] branches…that [were] identical, these common branches [had to] have no effect on the ordering produced by other branches” (Birnbaum et alii, 1992). More formally, (x,y,z) was preferred to (x’,y’,z) if and only if (x,y,z’) was preferred to (x’,y’,z’) (Birnbaum and McIntosh, 1996). For example, consider the following gambles (Birnbaum and Chavez, 1997): The first gamble: a fifty percent chance of winning two dollars, a twenty-five percent chance of winning forty dollars and a twenty-five percent chance of winning forty-four dollars. The second gamble: a fifty percent chance of winning two dollars, a twenty-five percent chance of winning ten dollars and a twenty-five percent chance of winning ninety-eight dollars. Experimentally, most individuals preferred the first gamble. Furthermore, consider the following two gambles (Birnbaum and Chavez, 1997): The first gamble: a twenty-five percent chance of winning forty dollars, a twenty-five percent chance of winning forty-four dollars and a fifty percent chance of winning one-hundred-eight dollars. The second gamble: a twenty-five percent chance of winning ten dollars, a twenty-five percent chance of winning ninety-eight dollars, and a fifty percent chance of winning one-hundred-eight dollars. Experimentally, most individuals preferred the second gamble.
These empirical findings contradicted the predictions of Cumulative Prospect Theory and Rank-Dependent Utility Theory, as individuals had not canceled out the independent branches consistently. Having shown fault with Cumulative Prospect Theory and the like, Birnbaum steered the conversation toward the Configural Weight Model, his previously proposed framework from 1974. The Configural Weight Model suggested magnifying the probability of an outcome in inverse proportion to the magnitude of the outcome, thereby magnifying the probabilities of the smallest outcomes (Birnbaum, 1974). After each probability was multiplied, the magnified (or reduced) probabilities were summed. Then each magnified probability was divided by the sum, generating a new weighted probability for each outcome. When this new weighted probability was multiplied by the outcome value, it yielded a modified outcome value. Once all the modified outcome values were summed, the adjusted value of the gamble was evident. This adjusted value of the gamble could be compared to the adjusted value of another gamble, thus determining which gamble was optimal. This process was borne out for the first set of gambles previously discussed. In the first gamble, two dollars was the least amount, followed by forty dollars, followed by forty-four dollars. According to the Configural Weight Model, the fifty percent probability—attached to two dollars, the smallest outcome—was magnified, while the probabilities attached to the larger outcomes were reduced. Once that was done, the magnified and reduced probabilities were summed. Then each magnified or reduced probability was divided by that sum, yielding a new weighted probability for each outcome.
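The procedure just described can be sketched in a few lines. The rank multipliers below (three for the smallest outcome, two for the middle, one for the largest) are an assumption chosen to reproduce the weights and adjusted values discussed here, not Birnbaum’s published parameters.

```python
def configural_value(gamble, multipliers=(3, 2, 1)):
    """Configural weighting for a three-branch gamble: magnify each
    branch's probability in inverse relation to its outcome's rank,
    renormalize, and take the weighted sum of outcomes."""
    ranked = sorted(gamble)  # smallest outcome first
    magnified = [gamble[x] * m for x, m in zip(ranked, multipliers)]
    total = sum(magnified)
    weights = [m / total for m in magnified]
    return sum(w * x for w, x in zip(weights, ranked))

# First pair of Birnbaum-Chavez gambles: the model ranks g1 above g2.
g1 = {2: 0.50, 40: 0.25, 44: 0.25}
g2 = {2: 0.50, 10: 0.25, 98: 0.25}

# Second pair: the model ranks g4 above g3, matching observed choices.
g3 = {40: 0.25, 44: 0.25, 108: 0.50}
g4 = {10: 0.25, 98: 0.25, 108: 0.50}

print(round(configural_value(g1), 2), round(configural_value(g2), 2))
```

With these assumed multipliers, the first gamble’s weights come out to roughly sixty-seven, twenty-two, and eleven percent, and its adjusted value to about fifteen dollars and eleven cents, matching the worked figures that follow.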
The new weighted probability for the two-dollar outcome became sixty-seven percent, while the new weighted probability for the forty-dollar outcome became twenty-two percent, and the new weighted probability for the forty-four-dollar outcome became eleven percent. Once these new probabilities were multiplied by their respective outcome values, and the respective products were summed, the adjusted value of the gamble was fifteen dollars and eleven cents. In the second gamble, this process yielded an adjusted value of fourteen dollars and forty-four cents, which was lower than the adjusted value of the first gamble. Meanwhile, in the second set of gambles, the second gamble had a higher adjusted value than the first gamble. Thus, in this case, the Configural Weight Model comported with human behavior, which had not comported with the predictions of Cumulative Prospect Theory. Despite this, the Configural Weight Model was not the end-all of decision models, and it remained to be seen what its applicability was beyond accounting for a paradox.

Decision Field Theory

A more recent framework was Decision Field Theory, in which “a decision maker’s preference for each option evolved during deliberation by integrating a stream of comparisons of evaluations among options on attributes over time” (Roe et alii, 2001). Each attribute was represented as a vector, the attention paid to each attribute was another vector, and the product of these vectors yielded a matrix that measured the relative valence of each option. (With appropriate adjustments, this matrix was scaled up to account for many attributes.) Valences were integrated into a preference matrix. The preference matrix was then multiplied by a feedback matrix. The feedback matrix accounted for two connections. The first connection the feedback matrix accounted for was self-connection, relating memory to the evolution of preferences.
Set at zero, self-connection was nil, untethering future preferences from past preferences. Set at one, self-connection was total, tying future preferences to past preference values. The second connection the feedback matrix accounted for was inter-connection, relating preference for one item over another. Inter-connection could be negative, denoting a preference for one item over another item and connoting the zero-sum nature of some choices. Alternatively, inter-connection could be zero, denoting a neutral relationship between items. Inter-connection was generally negatively related to distance in terms of attributes. For example, a Lexus sedan and a BMW sedan were more interconnected than a BMW sedan and a Ford Pinto. This model was temporal, and therefore preferences evolved over time, starting with an initial preference, until reaching a threshold. A random error term was included. The key idea according to this theory was that the underlying mechanisms of decision-making were driven by irregularities in attention that dictated shifting valence.

Proportional Difference

Another stochastic model was Claudia Gonzalez-Vallejo’s Probabilistic and Context-Sensitive Model of Choice Behavior (2002). This model was influenced by Thurstone’s law of comparative judgment (1927). Yet where Thurstone’s framework compared stimuli, Gonzalez-Vallejo’s compared attributes of the stimuli. Gonzalez-Vallejo’s Proportional Difference framework used a function to calculate a Proportional Difference which was positive or negative. To calculate the Proportional Difference, the functional form utilized a maximum or minimum value that was theoretically from the set of attributes being compared, but in fact may not have been, Gonzalez-Vallejo conceded, stating that individuals often made idealizations as they thought of attributes that weren’t there.
An adjustment for idealization was possible, however, as in the case of a person comparing two salaries to some idealized salary. In this model, a decision threshold described how sensitive an individual was to attribute disparity. The individual made “trade-offs” as she compared each corresponding pair (or more) of attributes, with varying conditions effecting increased or decreased sensitivity to differences among attributes. The usefulness of Proportional Difference originated in its ability to account for lapses in rationality, such as violations of independence. Through gambling experiments with real payoffs, Myers et alii discovered that “Given the alternative of a sure loss, subjects tend[ed] to gamble more when the risk [was] low, but given the alternative of a sure gain, subjects tend[ed] to gamble more when the risk [was] high” (1965). What they discovered could be termed a violation of stochastic independence, as subjects reacted differently to the same probabilities because the gain domain had shifted to a loss domain, or vice versa. Furthermore, Proportional Difference modeled and parametrized violations of independence. When probability was fifty-fifty, the threshold was close to zero; when probability was at twenty-eighty, the threshold was positive; and when the probability was at eighty-twenty, the threshold value was negative. (Variance remained constant.) The usefulness of Proportional Difference also originated in its accounts of violations of stochastic transitivity, defined as p(x,y) > ½ and p(y,z) > ½ implies p(x,z) > ½ (Tversky, 1969). For illustration, consider the following three lotteries: The first lottery offered one-hundred-eighty dollars at twenty percent, the second lottery offered sixty dollars at forty percent, and the third offered twenty dollars at sixty percent. In terms of expected value, the first lottery was better than the second, which was better than the third.
Of course, individuals, wanting to sleep at night, preferred the third lottery to the first, even while preferring the first to the second. Such violations had been identified (Birnbaum and Navarette, 1998; Tversky, 1969). Proportional Difference accounted for them, modeling violations of both strong and weak stochastic transitivity. However varied and sophisticated, the aforementioned theories challenging normative economic orthodoxy were descriptive in nature. Individuals did not behave as they should have, and theories such as Cumulative Prospect Theory and Decision Field Theory provided algorithmic processes to describe their idiosyncratic decision processes. This was useful. Up to a point. Because it still did not get at the underlying mechanisms of decision-making, and by extension, value.

Computational approaches to decision-making

Accumulator Models

To get at the underlying mechanisms, behavioral economists and neuroscientists collaborated on another tack—the computational approach. One version of it adapted an earlier accumulator model, the Drift Diffusion Model. In the Drift Diffusion Model, there existed a threshold-like bound for each choice, such that when a bound was reached, a decision was made in favor of one of the choices, signifying that enough evidence had accumulated (Ratcliff, 1978). The model was visualized as a band, with the top of the band representing the boundary for one choice, and the bottom of the band representing the boundary for the second choice. In between the top and bottom bands was the zero point of a drift rate; any drift toward the top band was positive, whereas any drift toward the bottom band was negative (Ratcliff, 1985). The process in this accumulator model was assumed to be stochastic and noisy, allowing for varying response times and wrong choices (Gold and Shadlen, 2007). The rate at which information was accumulated was the drift rate.
The drift rate was fast or slow depending on the quality of the information (Ratcliff and McKoon, 2008). For example, when first explicated, the model tested recognition memory, testing the ability to identify formerly encountered words (Ratcliff, 1978). In such an experiment, words were read off and then read off again, and some words were read off more often than others. Fast drift applied to words read off more often, as recognition was more likely in that event than if a word had only been read off once. Yet the Drift Diffusion Model was limited. The model only accounted for simple, single-stage red-pill-or-blue-pill decisions—not intermediary, chess-player decisions that were part of a bigger, more complex macro-decision. The model only accounted for relatively fast decisions lasting one to one-and-a-half seconds. These fast, not exactly deliberative decisions concerned choices that were random, new, or novel—not habitual. And the model did not factor in dynamic processes such as feedback. What was interesting and more widely applicable was the conceptualization of decision-making in classical economic terms, cost and benefit analysis, which were recast as Subjective Value and Action Costs. For example, in the experiments of the Drift Diffusion Model, in which a subject decided whether to perform some costly task to receive a treat, it was theorized that subjects had a Subjective Value signal for the positive value reward, and an Action Cost signal for the value of the work involved in obtaining the reward (Rangel and Clithero, 2013). Subjective values took into account the probability of receiving the reward, and whether it was delayed; price was incorporated into the Action Cost of a good (Rangel and Hare, 2010). Both Subjective Value and Action Costs were considered brain signals, although they were calculated separately.
These Stimulus Values were ascertained à la logistic choice (Luce, 1959), or by an incentive-compatible survey (Becker, DeGroot and Marschak, 1964), or by a combination of the two (Hare, Camerer and Rangel, 2009). The integration of Subjective Value and Action Cost was tantamount to subtracting the Action Cost from the Stimulus Value, or benefit less cost. More formally: Action Value = E[Discounted Stimulus Value|Action] – E[Discounted Action Cost|Action] (Rangel and Hare, 2010). In a variant Drift Diffusion Model, the relative Decision Value signal evolved depending on where the fixation was. Its slope was “proportional to the weighted difference between the values of the fixated and unfixated [sic] items” (Krajbich et alii, 2010). This model was used in an experiment in which hungry subjects made simple choices between foods while hooked up to an eye-tracker device. This model revealed biases caused by visual saliency, suggesting that irrelevant piquant alternatives could skew decision-making. Further investigating the role of attention, researchers devised an experiment in which hungry subjects were scanned with functional magnetic resonance imaging and outfitted with an eye-tracker while making simple binary choices between food items, one of which they were told to focus on (Lim et alii, 2011). Results revealed that during decision-making, the ventral striatum and the ventromedial prefrontal cortex “encode[d] a relative value signal…equal to the difference in value between the attended and unattended items.” Indeed, computational accumulator models could be enhanced with insights from neuroeconomics, a field that acknowledged the systemic fantasies of normative economic theories propounded by neoclassicists, as well as the limitations of descriptive theories offered up by the psychologists and behavioral economists. Certainly, it was not unhelpful to have models that accurately predicted human behavior.
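The Action Value formula above, expected discounted benefit less expected discounted cost, can be made concrete with toy numbers. The function below is a hypothetical sketch of that computation, not code from Rangel and Hare; its parameters (probability, discount factor) are assumptions:

```python
def action_value(stimulus_value, action_cost, prob=1.0, discount=1.0):
    """Net value of an action: expected discounted Stimulus Value minus
    expected discounted Action Cost."""
    expected_benefit = discount * prob * stimulus_value
    expected_cost = discount * action_cost
    return expected_benefit - expected_cost

# A treat worth 10 units, delivered with probability 0.8, requiring 3 units of
# effort, yields a net Action Value of 5 units.
net = action_value(stimulus_value=10.0, action_cost=3.0, prob=0.8)
```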
Neural Substrates of Value and Self-Control in Simple Choice Neuroeconomics wedded itself with various paradigms while locating the neural basis of decision-making. One paradigm was Reinforcement Learning, which delineated simple-choice decision-making into a process (Sutton and Barto, 1998; Rushmore et alii, 2009). Gauging the value of the reward occurred as the brain represented the expected value of the reward; these expectations included the Stimulus Value and the Decision Value/Action Value that measured the net value of a goal/good. Once the good/goal was obtained, there was some prediction error—the discrepancy between the expected value and the actual value of the reward—which led to updating of preferences. In electrophysiology experiments in which thirsty monkeys chose between juices, researchers found that orbitofrontal cortex neurons encoded value (Padoa-Schioppa and Assad, 2006). These researchers hypothesized a paradigm in which primates, instead of making comparisons between the utilities/values of various actions, made comparisons between the utilities/values of the goods/rewards that were the outcomes of the respective actions. With humans, researchers conducted a willingness-to-pay experiment in which willingness-to-pay was a proxy for Subjective Value (Plassmann et alii, 2007). Hungry subjects were scanned with functional magnetic resonance imaging while bidding on fifty different salty and sweet junk foods. Results suggested that willingness-to-pay was encoded in the right medial orbitofrontal cortex. In a variation of this experiment that examined memory, after bidding on appetitive and aversive food, subjects outfitted in electroencephalography caps rated the same items on a four-point scale (Harris et alii, 2011).
Analysis revealed a “flow of information from temporal cortex to [ventromedial prefrontal cortex],” with activity shifting from posterior to anterior sensors in the first two hundred fifty milliseconds, from parietal to central sensors in the next three hundred milliseconds, and to the frontal sensors in the next two hundred fifty milliseconds. What this indicated, according to the researchers, was that value signals in the ventromedial prefrontal cortex, in this case, “appear[ed] relatively late in the choice process, and seem[ed] to reflect the integration of incoming information from sensory and memory related regions.” In another experiment, hungry subjects were scanned with functional magnetic resonance imaging while learning instrumental tasks associated with food delivery (Valentin et alii, 2007). In the next stage, subjects were fed one of the foods. In the final stage, they were asked to repeat some instrumental task associated with food delivery. As hypothesized, the researchers discovered modulation of the value of the food that had been fed to the subject, its value decreasing. The modulation was correlated with activity in the medial orbitofrontal cortex. In another experiment, researchers examined the neural substrates of Subjective Value for various kinds of rewards (Levy and Glimcher, 2011). In their experiment, hungry and thirsty subjects were scanned with functional magnetic resonance imaging while making simple choices between three reward types: water, food, and money. “Risk preferences across the reward types were highly correlated,” results revealed. Yet each reward type was linked to a “partially distinct” neural network, with the hypothalamic region linked to food rewards and the posterior cingulate cortex linked to money rewards.
These partially distinct valuation networks converged on a unified valuation network, researchers suggested, since both food and money were represented in the ventromedial prefrontal cortex and the striatum, and since the ventromedial prefrontal cortex represented the different types of rewards on a common scale. Researchers examined neural correlates in other contexts of uncertainty. A gambling experiment was devised to examine the neural correlates of value when risk was a factor (Tom et alii, 2007). Specifically, this tested the concept of loss aversion and the idea that, in a fifty-fifty gamble, individuals would want the upside to be roughly twice the downside (Tversky and Kahneman, 1992). In this experiment, subjects were scanned with functional magnetic resonance imaging while accepting or rejecting fifty-fifty gambles of varying quantities. Gains ranged from ten to forty dollars in increments of two, losses ranged from five to twenty dollars in increments of one, and subjects were endowed with thirty dollars and told that one of the decisions would be honored. The results not only vindicated loss aversion and the two-to-one win-to-loss ratio, but also showed that, while coding of gains was correlated with activity in the ventromedial prefrontal cortex, losses were associated with decreased activity. Other researchers, wondering if there was a different neural mechanism for uncertainty—when probabilities were not known to the subject—and risk—when they were—devised an experiment in which subjects were scanned with functional magnetic resonance imaging while performing gambling tasks in which the probabilities were known and unknown (Levy et alii, 2010). Results revealed that subjective value was represented in a common system linked to the medial prefrontal cortex and the striatum.
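The two-to-one win-to-loss intuition tested by Tom et alii follows from the prospect theory value function. The sketch below uses Tversky and Kahneman's (1992) median parameter estimates (alpha = 0.88, lambda = 2.25), which the thesis does not itself report; the function names are invented for illustration:

```python
def pt_value(x, alpha=0.88, lam=2.25):
    """Prospect theory value function: concave for gains, steeper for losses
    by the loss-aversion coefficient lam."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def accepts_gamble(gain, loss):
    """A fifty-fifty gamble is attractive only if its prospect-theory value,
    averaged over the two equally likely outcomes, is positive."""
    return 0.5 * pt_value(gain) + 0.5 * pt_value(-loss) > 0
```

With these parameters, a ten-dollar downside needs an upside of somewhat more than twice that amount: accepts_gamble(30, 10) holds, while accepts_gamble(15, 10) does not.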
Another context involved delayed versus immediate rewards; and wondering if the neural mechanisms for delayed rewards were similarly computed, researchers devised an experiment in which subjects were scanned with functional magnetic resonance imaging while choosing between a smaller, immediate reward and a larger, delayed reward (Kable and Glimcher, 2007). Results revealed that Subjective Value was computed for delayed rewards, and this activity was linked to the ventromedial prefrontal cortex, which was adjacent to the medial prefrontal cortex. Consistent with theory, subjects who discounted future rewards more steeply were less likely to delay immediate gratification. In another delayed-reward experiment, researchers scanned subjects with functional magnetic resonance imaging while they made simple choices between indifference pairs of two rewards—one delayed, one immediate (Luo et alii, 2009). Results revealed that response was greater and reaction time faster for immediate rewards than delayed rewards. Explaining this neural discrepancy was the idea that self-control processes were “recruited during decision-making” yet “absent when rewards [were] individually anticipated,” as with indifference pairs. While a meta-analysis found that a sub-region of the orbitofrontal cortex/ventromedial prefrontal cortex was the “principle brain area associated with…common representation” of value (Levy and Glimcher, 2012), competing explanations arose for how Action Costs were computed. One model was the Additive Model. According to the Additive Model, utilities and disutilities were summed linearly, similarly to the “Subjective Value less Action Cost” paradigm (Green and Goldfried, 1965). The other model was the Interactive Model (Cacioppo and Berntson, 1994). According to the Interactive Model, positive and negative values co-occurred and interacted, resulting in attenuation of stimulus values.
In an experiment in which subjects were scanned with functional magnetic resonance imaging while being offered rewards in exchange for various levels of discomfort or pain, researchers found attenuation of Stimulus Values in the medial orbitofrontal cortex, which supported the Interactive Model (Talmi et alii, 2009). In a similar experiment, subjects were scanned with functional magnetic resonance imaging while “accepting and rejecting choice options that were combinations of monetary reward and physical pain” (Park et alii, 2011). Here, results showed that a model in which values interacted predicted neural activity better than a model that subtracted the disutility from the utility of an option. Furthermore, modulations in subgenual anterior cingulate cortex-amygdala coupling were linked to variations in valuations of an item. Yet the Additive Model was not without neuroscientific support. In an experiment in which hungry subjects bid on junk food, researchers used functional magnetic resonance imaging to find neural substrates for Action Costs (Hare et alii, 2008). They found that while the medial orbitofrontal cortex encoded the stimulus value of the junk food, a more lateral area encoded the action cost, which they defined as net value, or the value of the junk food less its given price. Experiments with monkeys sought to locate the neural substrates of Action Values. In one study, Action Values were linked to the striatum as monkeys’ neurons were recorded while they participated in a probabilistic binary choice task (Samejima et alii, 2005). In another probabilistic choice task with monkeys, this time with delayed rewards, researchers linked discounted Action Values to the dorsolateral prefrontal cortex, although a majority of the neurons encoded the discounted value of only one of the two actions (Kim et alii, 2008).
And in a neurophysiology experiment in which monkeys were trained to “capture a series of visual targets presented on a computer screen by manipulating a joystick,” researchers found Action Values linked to single supplementary motor area neurons (Sohn and Lee, 2007). Furthermore, noting that both the orbitofrontal cortex and the anterior cingulate cortex sulcus had been implicated in reinforcement-guided decisions, researchers attempted to dissociate the functions (Rudebeck et alii, 2008). From an experiment using monkeys with selective orbitofrontal cortex lesions, researchers found that both regions were necessary for reinforcement-guided decisions, but that action selection was linked to the anterior cingulate cortex sulcus, because monkeys with lesions in the anterior cingulate cortex sulcus showed an impairment in action selection. With humans, in a functional magnetic resonance imaging experiment in which subjects were rewarded for an eye movement or for pressing a lever, researchers found Action Values correlated with the supplementary motor cortex (Wunderlich et alii, 2009). These researchers also found activity in the dorsolateral prefrontal cortex “that resemble[d] the output of a decision comparator.” What happened once Action Values were encoded was a source of further scholarship. In a functional magnetic resonance imaging study tracing the neural mechanisms from “valuation to action,” researchers asked thirsty subjects to choose between various types and amounts of liquids that were later delivered to the subjects. Once the comparison process had completed, its output “modulate[d] activity in the motor cortex,” thus implementing the choice (Hare et alii, 2011). In another experiment, hungry subjects were scanned with functional magnetic resonance imaging while making simple choices between foods that they had ranked according to taste, health, and likability in a previous session (Sokol-Hessner et alii, 2012).
One treatment group was allotted one second to make a decision, the other treatment group four seconds. Results revealed that in both treatments, there was immediate onset of activity in the ventromedial prefrontal cortex. This activity was prolonged in the long-decision treatment. Activity in the dorsolateral prefrontal cortex lagged slightly. Researchers suggested that there existed information sharing across the dorsolateral prefrontal cortex and the ventromedial prefrontal cortex. Most interesting was that, regardless of the treatment—fast or slow—there was relative parity in terms of consistency and accuracy. Finally, once the reward was obtained, researchers looked to find prediction errors. In an experiment already discussed, prediction errors measuring the deviation from the predicted value of the goal were correlated with activity in the ventral striatum (Hare et alii, 2008). Furthermore, insofar as “goal-directed behavior involves monitoring of ongoing actions and performance outcomes, and subsequent adjustments of behavior and learning,” researchers investigated the interactions engaged in cognitive control (Ridderinkhof et alii, 2004). Reviewing various meta-analyses, as well as primate and human studies, the researchers discovered that an “extensive” part of the posterior medial frontal cortex was linked to “detecting unfavorable outcomes, response errors, response conflict, and decision uncertainty.” The monitoring in the posterior medial frontal cortex was then linked to activity in the lateral prefrontal cortex, a signal that “implement[ed] the performance adjustments.” And hence neuroeconomists found neural substrates for mechanisms underlying simple choice as conceived by Reinforcement Learning. Yet there were other kinds of choices, and their substrates were located as well. Neural Substrates of Complex Choice Different from a simple choice, a complex choice could have involved self-control, as individuals weighed attributes such as taste and health against each other.
In a study, hungry subjects were scanned with functional magnetic resonance imaging while doing the following: First, they rated fifty foods in terms of healthiness and tastiness; second, based on their ratings, they were shown a neutral, reference food item and asked to compare it to the other foods, one by one (Hare et alii, 2009). The study was interested in self-control, which in this case was represented by higher weights for healthiness than for tastiness. That the orbitofrontal cortex nevertheless encoded the value of each food demonstrated that the locus of self-control was elsewhere. Elsewhere was a region of the left dorsolateral prefrontal cortex, which appeared to modulate the value of healthy items. In another self-control task, hungry subjects first ranked appetitive foods, then were asked to either indulge, decrease, or react regularly to some food on which they would bid (Hutcherson et alii, 2012). During the “downregulation,” or indulgence, researchers noted decreased activation in the dorsolateral prefrontal cortex, whereas during the “upregulation” researchers noted the opposite. And in another self-control task, hungry subjects were scanned with functional magnetic resonance imaging as they made food choices while exogenous cues relating to healthiness appeared (Hare et alii, 2011). Results revealed that the exogenous attention cues activated self-control, demonstrating that individuals could be manipulated for good as well as for gluttony. Another kind of complex choice involved charity. In a charity-giving study, researchers discovered that, as with the food self-control experiment, value was encoded in the orbitofrontal cortex regardless of other considerations (Hare et alii, 2010).
Whereas in the food self-control experiment the dorsolateral prefrontal cortex modulated value to satisfy self-control considerations, in the charity-giving experiment the anterior insula and the posterior superior temporal cortex—areas associated with social cognition—interacted with the orbitofrontal cortex. These findings were insightful, as they provided clarity to the decision-making picture, isolating the regions associated with specific mechanisms. Further experiments considered miscellaneous factors—from dieting (already discussed via self-control) to confidence and multi-attribute choices. Multi-attribute choice, Confidence, Dieting, Arithmetic Studies of multi-attribute choice examined how value was computed for multi-attribute items. In one experiment, subjects were scanned with functional magnetic resonance imaging while evaluating tee-shirts on two dimensions—color and logo (Lim et alii, 2013). Color was linked to visual aspects; logo was linked to semantic aspects. Results revealed that activity in the fusiform gyrus, an area associated with visual processing, was correlated with encoding value only for the visual aspect (color); meanwhile, activity in the posterior superior temporal gyrus, an area associated with semantic processing, was correlated with value encoding for the semantic aspect (logo).
Both areas “exhibited functional connectivity” with the ventromedial prefrontal cortex, giving credence to the idea that “some attribute values [were] computed in cortical areas specialized in the processing of such features, and that those attribute-specific values [were] then passed to the [ventromedial prefrontal cortex] to be integrated into an overall stimulus value.” In another experiment considering how value was computed for multi-attribute items, subjects were first conditioned to learn the value of three different shapes, three different colors, and three different patterns—each attribute was presented by itself and with a monetary value (Kahnt et alii, 2010). A few days later, as they were scanned with functional magnetic resonance imaging, subjects rated multi-attribute items composed of a shape, a color, and an internal pattern. Results revealed that the combined value of an item was coded in “distributed” patterns in the ventromedial prefrontal cortex, whereas the variability of the “value predictions of the individual attributes [was] encoded in the dorsolateral prefrontal cortex.” The combined value guided choices, the researchers noted, whereas the variability signaled an ambiguity that was an impediment to value-integration. Wondering what computational role confidence played in decision-making, researchers examined blood-oxygen-level-dependent responses in functional magnetic resonance imaging studies in which subjects made simple choices (Rolls et alii, 2010). Discernibility was considered a proxy for confidence—the greater the discernibility between two items, the easier the choice was considered, and the more confident one was assumed to be while making it. What the researchers located was a “signature” present in the medial prefrontal cortex such that, the easier the choice, the greater the blood-oxygen-level-dependent activation.
The presence of this signature was, however, not found in the orbitofrontal cortex, which was linked to scaling of value rather than direct decision-making. Wondering how various arithmetic operations registered in the brain, researchers examined brain patterns of Chinese undergraduates who were scanned with functional magnetic resonance imaging while either adding or multiplying (Zhou et alii, 2006). Results revealed that addition correlated with greater activity in the intraparietal sulcus and middle occipital gyri in the right hemisphere and the superior occipital gyri in both hemispheres. Results furthermore revealed that multiplication correlated with greater activity in the precentral gyrus, supplementary motor areas, and posterior and anterior superior temporal gyrus in the left hemisphere. What this “pattern of activation” meant was that multiplication was more verbal-based while addition was more visual-based. While behavioral economists had shown that marketers could manipulate individuals’ decision-making, Plassmann et alii set out to show what happened neurologically when such manipulation occurred (2008). In their experiment, subjects were scanned with functional magnetic resonance imaging while judging wines with accompanying prices that were in fact inaccurate. The researchers found greater blood-oxygen-level-dependent activity in the medial orbitofrontal cortex, which was known for encoding pleasantness, when the reported price of the wine was relatively high. Likewise, researchers found higher “subjective reports of flavor pleasantness” when the reported price of the wine was relatively high. These findings therefore showed where in the brain the manipulation registered when individuals’ subjective values were manipulated. Curious about the interplay of diet and the neural systems underlying choice, Antonio Rangel examined three systems associated with diet and their neural substrates (2013).
He looked at Pavlovian Control, in which “pre-programmed” responses were deployed, and found that lesions to the amygdala, orbitofrontal cortex, and ventral striatum impaired the “expression of appetitive Pavlovian responses.” Rangel looked at Habitual Control, characterized by “more flexible…responses generated using stimulus-action associations.” In the Habitual Control framework, each action corresponded to a state, and each action-state pair corresponded to a discounted value. When a particular state was cued, the highest-valued action was taken. Notably, the habitual system was backward-looking and had “difficulty learning the future consequences of actions.” The dorsolateral prefrontal cortex was implicated as key for Habitual Control, as was the motor cortex, which was implicated in ascertaining the cues. Finally, already discussed at length was Goal-Directed Control, which was even more flexible. This model built on the action-state value model of Habitual Control, but made it probabilistic. Unlike Habitual Control, Goal-Directed Control was forward-looking, with values not contingent on experiences. Here, the dorsomedial striatum was implicated in representing action-outcome associations. While reviewing investigations into phenomena such as arithmetic seemed a digression from decision-making, the next model—divisive normalization—brought the thinking full circle back to neoclassical economics and optimization. Divisive Normalization According to the Relative Value Model, a precursor to Divisive Normalization, firing rates for neurons were expressed as v1/(v1 + v2), demonstrating that the firing rate was a function of the Stimulus Values and decreased when the values of the other surrounding items increased (Platt and Glimcher, 1999). Enhancing this insight mathematically was the Dynamic Divisive Normalization Model (Glimcher, 2014; Louie et alii, 2014).
Like the Relative Value Model, Dynamic Divisive Normalization also factored in context, assuming that perceiving a reward created an initial excitatory state that was then inhibited by the covariance matrix; the covariance matrix computed the weighted sum of other inputs (distractions, essentially) adjacent to the main reward. Mathematically, the firing rate for some item was expressed as the excitatory value of that item divided by one plus the double sum of the inhibitory covariance matrix (weighted such that current value-related activity was biased over older value-related activity). In an eye movement experiment with monkeys trained to look at certain rewards, researchers found support for divisive normalization, with results showing that the lateral intraparietal area encoded “value in a context-dependent manner, incorporating the values of both saccade to the [Response Field] and other alternative saccades,” and “model comparison revealed an underlying divisive normalization process” (Louie et alii, 2011). The major insight of the Divisive Normalization Model was that neoclassical economics was not necessarily wrong about optimization. It was wrong, however, to assume that individuals had infinite neurological resources to optimize. This model then posited that the scarcity of neurological resources explained why human behavior resulted in inefficiencies. For example, an asymmetrically dominated irrelevant alternative could have caused noise, resulting in an irrational decision. More support for the idea that individuals did act in the manner of a homo economicus came from an experiment which showed that humans who had damaged ventromedial frontal lobes, which were implicated in the computation of Stimulus Value, were more likely to violate the Generalized Axiom of Revealed Preference, which, as discussed earlier, was a necessary and sufficient condition for classical utility maximization (Camille et alii, 2011).
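The static core of this computation is the relative-value ratio; the dynamic model adds the weighted covariance terms described above. The sketch below implements only the static form, with an optional baseline constant sigma as an assumed extra parameter:

```python
def normalized_rate(values, i, sigma=0.0):
    """Divisive normalization, static form: the firing rate for option i is its
    value divided by sigma plus the summed values of the whole choice set."""
    return values[i] / (sigma + sum(values))

# Adding a third alternative suppresses the rate coding the best option,
# which is how an irrelevant alternative can inject noise into a choice.
two_options = normalized_rate([3.0, 1.0], 0)         # 3/4
three_options = normalized_rate([3.0, 1.0, 1.0], 0)  # 3/5
```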
That individuals thought economically, but were limited computationally, underscored why it was important to understand the underlying mechanisms of decision-making. Conclusion Reviewing the conceptions of humans from neoclassical economics to behavioral economics to neuroeconomics painted a shifting portrait—a dispassionate homo economicus was repainted as a fickle and slothful homo sapiens, who was redrawn as an optimizing agent with limited neurological resources to optimize. Meanwhile, the conception of rationality as contingent and limited helped define the contours of human decision-making, which were of a piece with discoveries of the underlying mechanisms of value. In sum, the better the underlying mechanisms of value were understood, and the more precisely the parameters of the human mind were understood, the more economists could adjust the definition of rationality, and the more informed economists’ policy prescriptions could be. References 1. Allais, M. (1953). “Le comportement de l’homme rationnel devant le risque: critique des postulats et axiomes de l’école Américaine.” Econometrica, 21(4), 503–546. 2. Ariely, D., Loewenstein, G., & Prelec, D. (2006). Tom Sawyer and the construction of value. Journal of Economic Behavior & Organization, 60, 1-10. 3. Ariely, D., Loewenstein, G., & Prelec, D. (2003). “Coherent arbitrariness”: Stable demand curves without stable preferences. The Quarterly Journal of Economics, 118(1), 73-106. 4. Arrow, K. J. (2012). Social choice and individual values (Vol. 12). Yale University Press. 5. Arrow, K., & Debreu, G. (1954). Existence of an Equilibrium for a Competitive Economy. Econometrica, 22(3), 265-290. doi:10.2307/1907353 6. Becker, G. M., DeGroot, M. H., & Marschak, J. (1964). Measuring utility by a single-response sequential method. Systems Research and Behavioral Science, 9(3), 226-232. 7. Bettman, James R. (1979), An Information Processing Theory of Consumer Choice, Reading, MA: Addison-Wesley. 8.
Birnbaum, M. H. (1974). The nonadditivity of personality impressions. Journal of Experimental Psychology, 102(3), 543. 9. Birnbaum, M. H. (1999). Paradoxes of Allais, stochastic dominance, and decision weights. Decision science and technology: Reflections on the contributions of Ward Edwards, 27-52. 10. Birnbaum, M. H., & Chavez, A. (1997). Tests of theories of decision making: Violations of branch independence and distribution independence. Organizational Behavior and Human Decision Processes, 71(2), 161-194. 11. Birnbaum, M. H., & McIntosh, W. R. (1996). Violations of branch independence in choices between gambles. Organizational Behavior and Human Decision Processes, 67(1), 91-110. 12. Birnbaum, M. H., & Navarrete, J. B. (1998). Testing descriptive utility theories: Violations of stochastic dominance and cumulative independence. Journal of Risk and Uncertainty, 17(1), 49-79. 13. Birnbaum, M. H., Coffey, G., Mellers, B. A., & Weiss, R. (1992). Utility measurement: Configural-weight theory and the judge's point of view. Journal of Experimental Psychology: Human Perception and Performance, 18(2), 331. 14. Birnbaum, M. H., Patton, J. N., & Lott, M. K. (1999). Evidence against rank-dependent utility theories: Tests of cumulative independence, interval independence, stochastic dominance, and transitivity. Organizational Behavior and Human Decision Processes, 77, 44–83. 15. Blume, Lawrence E. and David Easley. Rationality. The New Palgrave Dictionary of Economics. Second Edition. Eds. Steven N. Durlauf and Lawrence E. Blume. Palgrave Macmillan, 2008. 16. Cacioppo, J. T., & Berntson, G. G. (1994). Relationship between attitudes and evaluative space: A critical review, with emphasis on the separability of positive and negative substrates. Psychological Bulletin, 115(3), 401. 17. Camerer, C. F., Loewenstein, G., & Rabin, M. (Eds.). (2011). Advances in behavioral economics. Princeton University Press. 18. Camille, N., Griffiths, C. A., Vo, K., Fellows, L.
K., & Kable, J. W. (2011). Ventromedial frontal lobe damage disrupts value maximization in humans. Journal of Neuroscience, 31(20), 7527-7532. 19. Daniel Kahneman - Facts. Nobelprize.org. Nobel Media AB 2014. Web. 20 Jul 2017. <http://www.nobelprize.org/nobel_prizes/economic-sciences/laureates/2002/kahneman-facts.html> 20. Edwards, W. (1954). The theory of decision making. Psychological Bulletin, 51, 380–417. 21. Edwards, W. (1962). Subjective probabilities inferred from decisions. Psychological Review, 69. 23. Fishburn, P. C. (1978). On Handa’s “New theory of cardinal utility” and the maximization of expected return. Journal of Political Economy, 86, 321–324. 24. Freud, S. (2015). Civilization and its discontents. Broadview Press. 25. Glimcher, P. (2014). Understanding the Hows and Whys of Decision-Making: From Expected Utility to Divisive Normalization. In Cold Spring Harbor symposia on quantitative biology, 79, 169-176. Cold Spring Harbor Laboratory Press. 26. Glimcher, P. W., & Fehr, E. (Eds.). Neuroeconomics: Decision Making and the Brain. Elsevier Science. 27. Gold, J. I., & Shadlen, M. N. (2007). The neural basis of decision making. Annu. Rev. Neurosci., 30, 535-574. 28. Gonzalez-Vallejo, C. (2002). Making trade-offs: A probabilistic and context-sensitive model of choice behavior. Psychological Review, 109(1), 137. 29. Green, R. E., & Goldfried, M. R. (1965). On the bipolarity of semantic space. Psychological Monographs: General and Applied, 79(6), 1. 30. Handa, J. (1977). Risk, probabilities, and a new theory of cardinal utility. Journal of Political Economy, 85(1), 97-122. 31. Hare, T. A., Camerer, C. F., & Rangel, A. (2009). Self-control in decision-making involves modulation of the vmPFC valuation system. Science, 324(5927), 646-648. 33. Hare, T. A., Malmaud, J., & Rangel, A. (2011).
Focusing attention on the health aspects of foods changes value signals in vmPFC and improves dietary choice. Journal of Neuroscience, 31(30), 11077-11087. 34. Hare, T. A., O’Doherty, J., Camerer, C. F., Schultz, W., & Rangel, A. (2008). Dissociating the role of the orbitofrontal cortex and the striatum in the computation of goal values and prediction errors. Journal of neuroscience, 28(22), 5623-5630. 35. Hare, T. A., Schultz, W., Camerer, C. F., O'Doherty, J. P., & Rangel, A. (2011). Transformation of stimulus value signals into motor commands during simple choice. Proceedings of the National Academy of Sciences, 108(44), 18120-18125. 36. Hare, T., Camerer, C., Knoepfle, D., O’Doherty, J., & Rangel, A. (2010). Value computations in VMPFC during charitable decision-making incorporate input from regions involved in social cognition. J Neurosci, 30(2), 583-590. 37. Harris, A., Adolphs, R., Camerer, C., & Rangel, A. (2011). Dynamic construction of stimulus values in the ventromedial prefrontal cortex. PloS one, 6(6), e21074. 52 38. Heath, T.B. & Chatterjee, S. (1995). Asymmetric Decoy Effects on Lower-Quality Versus Higher-Quality Brands: Meta-Analytic and Experimental Evidence. Journal of Consumer Research, 22(3), 268-284. Retrieved from http://www.jstor.org/stable/2489613. Accessed: 23-04-2016 23:36 UTC 39. Hedgcock, W. & Rao, A.R. (2009). Trade-off Aversion as an Explanation for the Attraction Effect: A Functional Magnetic Resonance Imaging Study. Journal of Marketing Research, 46(1), 1-13. Retrieved from http://www.jstor.org/stable/20618865. Accessed: 14-04-2016 00:58 UTC 40. Holt, Jim (2011). “Two Brains Running.” New York Times Sunday Book Review. http://www.nytimes.com/2011/11/27/books/review/thinking-fast-and-slow-by-daniel- kahneman-book-review.html 41. Houthakker HS. Revealed preference and the utility function. Economica. 1950;17( 66): 159– 174. 42. Huber, J. & Puto, C. (1983). 
Market Boundaries and Product Choice: Illustrating Attraction and Substitution Effects. Journal of Consumer Research, 10(1), 31-44. Oxford University Press. Retrieved from http://www.jstor.org/stable/2488854. Accessed: 14-04- 2016 00:41 UTC 43. Huber, J., Payne, J. W., & Puto, C.. (1982). Adding Asymmetrically Dominated Alternatives: Violations of Regularity and the Similarity Hypothesis. Journal of Consumer Research, 9(1), 90–98. Retrieved from http://www.jstor.org.libproxy2.usc.edu/stable/2488940 44. Hutcherson, C. A., Plassmann, H., Gross, J. J., & Rangel, A. (2012). Cognitive regulation during decision making shifts behavioral control between ventromedial and dorsolateral prefrontal value systems. Journal of Neuroscience, 32(39), 13543-13554. 45. Jung, C. G. (2014). The archetypes and the collective unconscious. Routledge. 46. Kable, J. W., & Glimcher, P. W. (2007). The neural correlates of subjective value during intertemporal choice. Nature neuroscience, 10(12), 1625. 47. Kahneman, D. (2011). Thinking, fast and slow. Macmillan. 48. Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica: Journal of the econometric society, 263-291. 49. Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica: Journal of the econometric society, 263-291. 50. Kahnt, T., Heinzle, J., Park, S. Q., & Haynes, J. D. (2011). Decoding different roles for vmPFC and dlPFC in multi-attribute decision making. Neuroimage, 56(2), 709-715. 51. Karmarkar, U. S. (1978). Subjectively weighted utility: A descriptive extension of the expected utility model. Organizational behavior and human performance, 21(1), 61-72. 52. Karni, Edi. “Savage's subjective expected utility model.” The New Palgrave Dictionary of Economics. Second Edition. Eds. Steven N. Durlauf and Lawrence E. Blume. Palgrave Macmillan, 2008. The New Palgrave Dictionary of Economics Online. Palgrave Macmillan. 53. Kim, S., Hwang, J., & Lee, D. 
(2008). Prefrontal coding of temporally discounted values during intertemporal choice. Neuron, 59(1), 161-172. 54. Krajbich, I., & Rangel, A. (2011). Multialternative drift-diffusion model predicts the relationship between visual fixations and choice in value-based decisions. Proceedings of the National Academy of Sciences, 108(33), 13852-13857. 53 55. Krajbich, I., Armel, C., & Rangel, A. (2010). Visual fixations and the computation and comparison of value in simple choice. Nature neuroscience, 13(10), 1292-1298. 56. Levy, D. J., & Glimcher, P. W. (2011). Comparing apples and oranges: using reward- specific and reward-general subjective value representation in the brain. Journal of Neuroscience, 31(41), 14693-14707. 57. Levy, D. J., & Glimcher, P. W. (2012). The root of all value: a neural common currency for choice. Current opinion in neurobiology, 22(6), 1027-1038. 58. Levy, I., Snell, J., Nelson, A. J., Rustichini, A., & Glimcher, P. W. (2010). Neural representation of subjective value under risk and ambiguity. Journal of neurophysiology, 103(2), 1036-1047. 59. Lim, S. L., O'Doherty, J. P., & Rangel, A. (2013). Stimulus value signals in ventromedial PFC reflect the integration of attribute value signals computed in fusiform gyrus and posterior superior temporal gyrus. Journal of Neuroscience, 33(20), 8729-8741. 60. Lim, S. L., O’Doherty, J. P., & Rangel, A. (2011). The decision value computations in the vmPFC and striatum use a relative value code that is guided by visual attention. Journal of Neuroscience, 31(37), 13214-13223. 61. Louie, K., Grattan, L. E., & Glimcher, P. W. (2011). Reward value-based gain control: divisive normalization in parietal cortex. Journal of Neuroscience, 31(29), 10627-10639. 62. Louie, K., LoFaro, T., Webb, R., & Glimcher, P. W. (2014). Dynamic divisive normalization predicts time-varying value coding in decision-related circuits. Journal of Neuroscience, 34(48), 16046-16057. 63. Luce, R. D. (2005). 
Individual choice behavior: A theoretical analysis. Courier Corporation. 64. Luo, S., Ainslie, G., Giragosian, L., & Monterosso, J. R. (2009). Behavioral and neural evidence of incentive bias for immediate rewards relative to preference-matched delayed rewards. Journal of Neuroscience, 29(47), 14820-14827. 65. Mishra, S., Umesh, U.M., & Stem, Jr., D.E. (1993). Antecedents of the Attraction Effect: An Information-Processing Approach. Journal of Marketing Research, 30(3), 331-349. Retrieved from http://www.jstor.org/stable/20618865. Accessed: 14-04-2016 00:58 UTC 66. Mongin, P. (1997). Expected utility theory. Handbook of economic methodology, 342350. 67. Myers, J. L., Suydam, M. M., & Gambino, B. (1965). Contingent gains and losses in a risk-taking situation. Journal of Mathematical Psychology, 2(2), 363-370. 68. Padoa-Schioppa, C., & Assad, J. A. (2006). Neurons in orbitofrontal cortex encode economic value. Nature, 441(7090), 223. 69. Paramesh, R. (1973). Independence of Irrelevant Alternatives. Econometrica, 41(5), 987- 991. Retrieved from http://www.jstor.org/stable/1913820. Accessed: 24-04-2016 00:01 UTC 70. Park, S. Q., Kahnt, T., Rieskamp, J., & Heekeren, H. R. (2011). Neurobiology of value integration: when value impacts valuation. Journal of Neuroscience, 31(25), 9307-9314. 71. Plassmann, H., O'Doherty, J., & Rangel, A. (2007). Orbitofrontal cortex encodes willingness to pay in everyday economic transactions. Journal of neuroscience, 27(37), 9984-9988. 72. Plassmann, H., O’Doherty, J., Shiv, B., & Rangel, A. (2008). Marketing actions can modulate neural representations of experienced pleasantness. Proceedings of the National Academy of Sciences, 105(3), 1050-1054. 54 73. Platt, M. L., & Glimcher, P. W. (1999). Neural correlates of decision variables in parietal cortex. Nature, 400(6741), 233. 74. Quiggin, J. (1982). A theory of anticipated utility. Journal of Economic Behavior & Organization, 3(4), 323-343. 75. Rangel, A. (2013). 
Regulation of dietary choice by the decision-making circuitry. Nature neuroscience, 16(12), 1717-1724. 76. Rangel, A., & Clithero, J. A. (2013). The computation of stimulus values in simple choice. Neuroeconomics: Decision making and the brain, 2, 125-147. 77. Rangel, A., & Hare, T. (2010). Neural computations associated with goal-directed choice. Current opinion in neurobiology, 20(2), 262-270. 78. Rangel, A., & Hare, T. (2010). Neural computations associated with goal-directed choice. Current opinion in neurobiology, 20(2), 262-270. 79. Ratcliff, R. (1978). A theory of memory retrieval. Psychological review, 85(2), 59. 80. Ratcliff, R. (1985). Theoretical interpretations of the speed and accuracy of positive and negative responses. Psychological review, 92(2), 212. 81. Ratcliff, R., & McKoon, G. (2008). The diffusion decision model: theory and data for two-choice decision tasks. Neural computation, 20(4), 873-922. 82. Ratneshwar, S., Shocker, A.D., & Stewart, D.W. (1987). Toward Understanding the Attraction Effect: The Implications of Product Stimulus Meaningfulness and Familiarity. Journal of Consumer Research, 13(4), 520-533. Retrieved from http://www.jstor.org/stable/2489372. Accessed: 13-04-2016 23:04 UTC 83. Ridderinkhof, K. R., Ullsperger, M., Crone, E. A., & Nieuwenhuis, S. (2004). The role of the medial frontal cortex in cognitive control. science, 306(5695), 443-447. 84. Rolls, E. T., Grabenhorst, F., & Deco, G. (2010). Choice, difficulty, and confidence in the brain. Neuroimage, 53(2), 694-706. 85. Rudebeck, P. H., Behrens, T. E., Kennerley, S. W., Baxter, M. G., Buckley, M. J., Walton, M. E., & Rushworth, M. F. (2008). Frontal cortex subregions play distinct roles in choices between actions and stimuli. Journal of Neuroscience, 28(51), 13775-13785. 86. Rushworth, M. F., Mars, R. B., & Summerfield, C. (2009). General mechanisms for making decisions?. Current opinion in neurobiology, 19(1), 75-83. 87. Samejima, K., Ueda, Y., Doya, K., & Kimura, M. 
(2005). Representation of action- specific reward values in the striatum. Science, 310(5752), 1337-1340. 88. Samuelson, P.A. (1938). “A Note on the Pure Theory of Consumer's Behaviour.” Economica, 5(17), 61-71. 89. Savage, Leonard J. 1954. The Foundations of Statistics. New York, Wiley. 90. Seymour, B., & McClure, S. M. (2008). Anchors, scales and the relative coding of value in the brain. Current opinion in neurobiology, 18(2), 173-178. 91. Simonson, I. (1989). Choice Based on Reasons: The Case of the Attraction Effect Journal of Consumer Research, 16(2), 158-174. Retrieved from http://www.jstor.org/stable/2489315. Accessed: 14-04-2016 00:51 UTC 92. Smith, A. (2010). The theory of moral sentiments. Penguin. 93. Sohn, J. W., & Lee, D. (2007). Order-dependent modulation of directional signals in the supplementary and presupplementary motor areas. Journal of Neuroscience, 27(50), 13655-13666. 55 94. Sokol ‐Hessner, P., Hutcherson, C., Hare, T., & Rangel, A. (2012). Decision value computation in DLPFC and VMPFC adjusts to the available decision time. European Journal of Neuroscience, 35(7), 1065-1074. 95. Sutton, R. S., & Barto, A. G. (1998). Reinforcement learning: An introduction(Vol. 1, No. 1). Cambridge: MIT press. 96. Talmi, D., Dayan, P., Kiebel, S. J., Frith, C. D., & Dolan, R. J. (2009). How humans integrate the prospects of pain and reward during choice. Journal of Neuroscience, 29(46), 14617-14626. 97. Thurstone, L. L. (1927). A law of comparative judgment. Psychological review, 34(4), 273. 98. Tom, S. M., Fox, C. R., Trepel, C., & Poldrack, R. A. (2007). The neural basis of loss aversion in decision-making under risk. Science, 315(5811), 515-518. 99. Tversky, A. (1969). Intransitivity of preferences. Psychological Review, 76, 31–48. 100. Tversky, A., & Kahneman, D. (1975). Judgment under uncertainty: Heuristics and biases. In Utility, probability, and human decision making (pp. 141-162). Springer Netherlands. 101. Tversky, A., & Kahneman, D. (1983). 
Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. Psychological review, 90(4), 293. 102. Tversky, A., & Kahneman, D. (1986). Rational choice and the framing of decisions. Journal of business, S251-S278. 103. Tversky, A., & Kahneman, D. (1992). Advances in prospect theory: Cumulative representation of uncertainty. Journal of Risk and uncertainty, 5(4), 297-323. 104. Tversky, A., & Kahneman, D. (1992). Advances in prospect theory: Cumulative representation of uncertainty. Journal of Risk and uncertainty, 5(4), 297-323. 105. Tversky, A., & Wakker, P. (1995). Risk attitudes and decision weights. Econometrica: Journal of the Econometric Society, 1255-1280. 106. Tversky, Amos, and Daniel Kahneman. "Advances in prospect theory: Cumulative representation of uncertainty." Journal of Risk and uncertainty 5.4 (1992): 297-323. 107. Valentin, V. V., Dickinson, A., & O’Doherty, J. P. (2007). Determining the neural substrates of goal-directed learning in the human brain. Journal of Neuroscience, 27(15), 4019-4026. 108. Von Neumann, J., & Morgenstern, O. (1947). Theory of games and economic behavior, 2nd rev. 109. Weintraub, E. Roy. Neoclassical Economics. The Concise Encyclopedia of Economics. 1993. Library of Economics and Liberty. 110. Wu, G., & Gonzalez, R. (1996). Curvature of the probability weighting function. Management science, 42(12), 1676-1690. 111. Wunderlich, K., Rangel, A., & O'Doherty, J. P. (2009). Neural computations underlying action-based decision making in the human brain. Proceedings of the National Academy of Sciences, 106(40), 17199-17204. 112. Zhou, X., Chen, C., Zang, Y., Dong, Q., Chen, C., Qiao, S., & Gong, Q. (2007). Dissociated brain organization for single-digit addition and multiplication. Neuroimage, 35(2), 871-880.
Asset Metadata
Creator: Holt, Alex Julian (author)
Core Title: Toward a more realistic understanding of decision-making
School: College of Letters, Arts and Sciences
Degree: Master of Arts
Degree Program: Economics
Publication Date: 09/25/2017
Defense Date: 09/22/2017
Publisher: University of Southern California (original); University of Southern California. Libraries (digital)
Tags: behavioral economics, decision theory, decision-making, Economics, experimental economics, neoclassical economics, neuroeconomics, Neuroscience, OAI-PMH Harvest
Language: English
Contributor: Electronically uploaded by the author (provenance)
Advisors: Brocas, Isabelle (committee chair); Carrillo, Juan (committee member); Coricelli, Giorgio (committee member)
Creator Email: alexholt@usc.edu, alexjulianholt@gmail.com
Permanent Link (DOI): https://doi.org/10.25549/usctheses-c40-431498
Unique Identifier: UC11264342
Identifier: etd-HoltAlexJu-5753.pdf (filename); usctheses-c40-431498 (legacy record id)
Legacy Identifier: etd-HoltAlexJu-5753.pdf
Dmrecord: 431498
Document Type: Thesis
Rights: Holt, Alex Julian
Type: texts
Source: University of Southern California (contributing entity); University of Southern California Dissertations and Theses (collection)
Access Conditions: The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the a...
Repository Name: University of Southern California Digital Library
Repository Location: USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA