The Cost of Missing Objectives in Multiattribute Decision Modeling

by Sarah A. Kusumastuti

A Dissertation Presented to the FACULTY OF THE USC GRADUATE SCHOOL, UNIVERSITY OF SOUTHERN CALIFORNIA, in Partial Fulfillment of the Requirements for the Degree DOCTOR OF PHILOSOPHY (PSYCHOLOGY)

August 2021

Copyright 2021 Sarah A. Kusumastuti

Dedication

This dissertation is dedicated to my late father, who passed away one year before this was published.

Acknowledgements

I would first of all like to acknowledge the invaluable support and guidance of my advisor, Dr. Richard S. John, who made me realize that the most important factor in thriving and getting through the toughest times in graduate school is the mentorship and resourcefulness of an advisor. I would like to express my appreciation for the members of my committee, Dr. John Monterosso, Dr. Detlof von Winterfeldt, Dr. Antoine Bechara, and Dr. Morteza Dehghani, for making the defense a real blast (of all the ways I imagined the defense would go, fun was probably the last thing I expected). I would also like to thank all the exceptional people I had the chance to work with over the course of my graduate career: Dr. Heather Rosoff, Dr. Jim Blythe at ISI, Dr. Ben Files and the team at ARL-West, as well as all the researchers and staff at the CREATE lab, who have been a vital part of my development as a research scientist throughout my time at USC. I would also like to express my appreciation for the friends and colleagues who, through my trials and tribulations at USC and in Los Angeles, provided encouragement, feedback, support, and help in just getting through life: Dr. Tracy Cui, Dr. Zhiqin Chen, Dr. Kevin Nguyen, Dr. Jessica Zhao, Vardan Ambartsumyan, Renee Barrera, Katie Byrd, Lai Xu, the people at American Cinematheque, and many others.

A significant source of support in my life has also come from the people I have met and bonded with in various online communities, with a special shout-out to everyone in PBTT. I would be significantly less sane without the support of all these people. Finally, I would like to express unending appreciation to my family, in particular my brother Taufiq Adhie Wibowo, who is always there to support me, especially when I need it the most.

"I'll break them down, no mercy shown, heaven knows it's got to be this time." -Joy Division/New Order

Table of Contents

Dedication
Acknowledgements
List of Tables
List of Figures
Abstract
Chapter 1: Omission of objectives in multiattribute decision models
Chapter 2: Formulation of a multiattribute decision problem
Chapter 3: Study I: Cost of omitted attributes from applied case studies
    Methods
        Analysis procedures
        Selection of cases
        Problem descriptions
    Results
        Selected application analysis: Container ports (De Icaza, Parnell, & Pohl, 2019)
        Aggregated results
    Discussion
Chapter 4: Study II: Monte Carlo simulation of decision problems analyzing factors influencing the cost of omitted attributes
    Methods
        Simulation procedures
        Characteristics of decision space
    Results
        Number of objectives x attribute intercorrelation
        Number of objectives x number of alternatives
        Weighting method x attribute intercorrelation
    Discussion
Chapter 5: Study III: Behavioral study of cost of omitting objectives
    Methods
        Participants
        Procedures
            Survey
            Multiattribute analysis
    Results
        Multiattribute analysis
        Comparing relative weights of objectives
        Comparison with direct judgment
    Discussion
Chapter 6: General Discussion and Conclusions
References
Appendices
    Appendix A: Aggregated parameter values for study I
    Appendix B1: Airline competitiveness
    Appendix B2: Cargo delivery
    Appendix B3: Climate change policy
    Appendix B4: Health care treatment
    Appendix B5: Historical buildings
    Appendix B6: Power plants
    Appendix B7: Radioactive waste
    Appendix C: Survey materials for Study III
    Appendix D: Attribute measures and consequence table for study III

List of Tables

Table 3.1: Published multiattribute applications included in the analysis
Table 3.2: Performance parameters at each level of reduction for port selection application
Table 3.3: Full model and complete list of attribute values for port selection application
Table 3.4: Value loss real unit equivalence for port selection
Table 4.1: Simulation design
Table 5.1: Step-by-step procedure of study III
Table 5.2: Master list of objectives in step 2 of study III

List of Figures

Figure 3.1: Hit rate for each application at every reduction level
Figure 3.2: Value loss for each application at every reduction level
Figure 3.3: Mean convergence value for each application at every reduction level
Figure 4.1: Median hit rate, value loss, and mean convergence values based on attribute intercorrelation and number of objectives
Figure 4.2a: Hit rate distributions by number of objectives, attribute intercorrelation, and attribute reduction level
Figure 4.2b: Value loss distribution based on number of objectives and attribute intercorrelation type
Figure 4.2c: Mean convergence value distribution based on number of objectives and attribute intercorrelation type
Figure 4.3: Median hit rate, value loss, and mean convergence values based on number of alternatives and number of objectives
Figure 4.4a: Hit rate value distribution based on number of objectives and number of alternatives
Figure 4.4b: Value loss distribution based on number of objectives and number of alternatives
Figure 4.4c: Mean convergence value distribution based on number of objectives and number of alternatives
Figure 4.5: Median hit rate, value loss, and mean convergence values based on weighting method and attribute intercorrelation
Figure 4.6a: Hit rate value distribution based on attribute intercorrelation and weighting method
Figure 4.6b: Value loss distribution based on attribute intercorrelation and weighting method
Figure 4.6c: Mean convergence value distribution based on attribute intercorrelation and weighting method
Figure 5.1: Distribution of interattribute correlation values among set of objectives
Figure 5.2: Distribution of participant hit rate based on the number of objectives in both full and reduced model, fitted with a regression line for each hit rate category
Figure 5.3: Cumulative density of value loss for both true and standardized value loss, and cumulative density for convergence values
Figure 5.4: Distribution of weight composition of full models across participants; each bar represents a participant's full model
Figure 5.5: Comparison of cities based on the proportion of being within the top five according to decision models

Abstract

Multiattribute decision analysis requires decision makers to identify their objectives as a critical part of sound decision making, and failure to do so may result in poor decisions (Keeney & Raiffa, 1976). Research has shown that decision makers are often ill equipped to identify objectives and could not generate more than half of the objectives they later recognized to be important (Bond, Carlson, & Keeney, 2008, 2010). However, previous studies that examined missing attributes and missing objectives found that excluded attributes are likely to be unimportant, or that they are deliberately excluded as a heuristic. Three approaches are presented to examine the consequences of missing objectives in multiattribute models: (1) analysis of existing multiattribute models from published applications; (2) Monte Carlo simulations evaluating the decision quality of reduced models.
The simulations are also used to test the effects of model reduction under various model characteristics, such as the number of objectives and alternatives, intercorrelated objective sets, and different weighting methods; (3) analysis of data collected from a behavioral assessment of multiattribute models constructed from both the list of objectives generated by decision makers themselves and an expanded list that also includes objectives identified from a reference list. The results of each study provide a variety of outcomes concerning the consequences of missing objectives in multiattribute models. In general, the largest determinant of the impact of a missing objective is the attribute intercorrelation within the decision space. However, missing objectives are not necessarily a detriment to making optimal decisions: as long as the set of objectives sufficiently captures the essential trade-offs, models with missing objectives can still produce satisfactory outcomes.

Chapter 1: Omission of objectives in multiattribute decision models

Many of the challenging parts of life involve making decisions with multiple, conflicting objectives. One prescriptive approach to mapping out the components of a decision space involves identifying our goals or objectives and measuring, through attributes, the extent to which an option fulfills those objectives. This approach was introduced by Keeney & Raiffa (1976) in "Decisions with Multiple Objectives: Preferences and Value Tradeoffs". The method used to structure such a decision problem is known as Multiattribute Value (MAV) analysis when uncertainty (in other words, risk) is not involved and Multiattribute Utility (MAU) analysis when it is. Multiattribute value/utility analysis is a decision analytic method used to evaluate complex decision problems that may involve multiple conflicting criteria and a wide variety of variables to consider in choosing among several alternatives. These methods are particularly beneficial in situations with a considerable amount of risk and/or multiple stakeholders. For example, multiattribute methods have been used to examine decision problems in a wide variety of contexts, such as energy (Merkhofer & Keeney, 1987; Brothers et al., 2009), social programs (Edwards & Newman, 1982), medicine and health (Torrance, Boyle, & Horwood, 1982; Feeny, Furlong, Boyle, & Torrance, 1995), management (Youngblood & Collins, 2003; Cheung & Suen, 2002), and engineering (Sabatino, Frangopol, & Dong, 2015). This decision analytic approach remains widely used and taught, and is expected to continue to grow in popularity among academics and practitioners, particularly due to its strong foundations in the behavioral aspects of decision making (Wallenius et al., 2008). Because of this widespread use, it is important that multiattribute methods be refined and improved according to new findings in behavioral studies.

One critical component of structuring a decision problem in multiattribute analysis is identifying objectives. Objectives reflect a decision maker's values by framing the decision problem in terms of what is hoped to be achieved by making the decision. Knowing one's objectives is considered a critical step in structuring any decision problem, and making poor decisions is associated with not having a clear understanding of what the objectives are (Nutt, 1998; Payne et al., 1999; Bond, Carlson, & Keeney, 2008). Striving for better decisions in a multiattribute context therefore implies having a more comprehensive set of objectives. However, identifying objectives requires thorough introspective thinking, and studies by Bond et al. (2008, 2010) have shown that decision makers are able to identify only 30-50% of the objectives they consider important. Decision makers may thus be committing a major blunder even before identifying or evaluating options.
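The additive model at the heart of MAV analysis can be illustrated with a minimal sketch; the attributes, ranges, and weights below are hypothetical examples, not drawn from any study discussed here.

```python
# Additive multiattribute value (MAV) model:
#   V(x) = sum_i w_i * v_i(x_i)
# where the weights w_i sum to 1 and each single-attribute value
# function v_i maps an attribute level onto a 0-1 scale.

def linear_value(worst, best):
    """Linear value function: worst level -> 0, best level -> 1."""
    return lambda x: (x - worst) / (best - worst)

def additive_value(levels, weights, value_functions):
    """Overall value of one alternative under an additive MAV model."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights should sum to 1"
    return sum(w * v(x) for x, w, v in zip(levels, weights, value_functions))

# Hypothetical apartment choice on two conflicting attributes:
cost_v = linear_value(worst=2000, best=800)   # monthly cost: lower is better
space_v = linear_value(worst=300, best=1200)  # square feet: more is better

v = additive_value(levels=[1200, 750],        # $1200/month, 750 sq ft
                   weights=[0.6, 0.4],
                   value_functions=[cost_v, space_v])
# v = 0.6 * (2/3) + 0.4 * 0.5 ~ 0.6
```

Note that cost and space here are exactly the kind of negatively correlated attribute pair discussed later in this chapter: larger apartments tend to cost more, so an alternative that scores well on one attribute tends to score poorly on the other.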
Limitations in the objective self-generation process

Each decision problem involves a unique set of objectives that must be determined by the decision makers. Decision makers may use a wish list of desirable outcomes, deconstruct available alternatives into the components that make them attractive or unattractive options, identify shortcomings caused by the decision problem, think about the consequences of the alternatives, and so on (Keeney, 1996, pp. 56-65). Access to different viewpoints or to a trained facilitator would be immensely helpful, but may not always be available.

Bond et al. (2008) discovered that the objective self-generation process is often insufficient for identifying all the key objectives of a decision problem. Their study aimed to examine people's capability to self-identify objectives by comparing the list of objectives that decision makers can generate by themselves with the list of objectives they can identify from a more comprehensive list. If a decision maker is well equipped to articulate her objectives in a decision problem, then the self-generated list and the list of objectives identified from a master list should have only minor disparities. Bond et al. (2008) used variations of the following method: decision makers are asked to generate objectives for a decision problem (this study in particular used relevant personal decision problems, such as choosing an MBA program or internship). The participants then compare their self-generated objectives to a comprehensive "master list" of objectives provided by the experimenter. Finally, participants compile a final list of objectives incorporating both their self-generated objectives and the objectives they marked as important from the master list, and rank each objective by importance.

The results show that participants failed to generate at least half of the objectives appearing in their combined lists, with some failing to generate up to two-thirds of the objectives they considered important. This phenomenon was observed even in experts. The researchers speculated that decision makers do not think broadly or deeply enough and are constrained by a mental representation of relevant knowledge organized in clusters, which makes it difficult to retrieve information across categories.

The follow-up study by Bond et al. (2010) attempted to formulate interventions to improve the objective generation process, designed to encourage broader categorical thinking or to increase motivation in thinking about objectives. The study replicated the initial rate of objective self-generation (around 30-50%), with the intervention methods showing generally limited success. Interestingly, the post-intervention self-generated lists were judged to have the same quality (measured by average rank) as the pre-intervention lists, meaning that omitted objectives may be just as important as the objectives in the initial model.

The studies by Bond et al. (2008, 2010) present results from a new paradigm on the limitations of, and the mechanisms behind, the formulation of objectives, a critical step in structuring decision problems through value-focused thinking (VFT). While the possibility of incomplete objective and/or attribute sets in a decision problem has been acknowledged and studied to a limited extent, the general assumption seems to be that omissions of objectives are minor errors, and that the objectives omitted are likely to be of lesser importance. A review of the current literature on omitted objectives is outlined in the next section.
Missing objectives in the multiattribute literature

Studies that attempt to measure the effect of an incomplete objective or attribute set on decision quality have been relatively limited, and much of the argument for the importance of a comprehensive set of objectives has been prescriptive, meaning that the action is recommended based on a prediction of the outcome (Keeney, 1996, pp. 33-34; Keeney, 2013; Montibeller & von Winterfeldt, 2015). The formulation of objectives determines which attributes are included in the analysis (since attributes are used to measure the achievement of objectives); thus, the existing studies focus on missing attributes, and in general the omission of an objective always results in the omission of attributes.

Aschenbrenner (1977) conducted an empirical study on the robustness of multiattribute models against attribute variation by presenting a decision problem of choosing apartments to two groups who independently constructed their attribute sets. One group included 16 attributes in their model while the other included 14, which were then used to judge 15 choice alternatives. The primary question was the extent to which the results of the two models agree with each other, referred to as their convergence, computed as the Pearson rank order correlation of the alternatives as rated by the two models. One key finding was that the models have notably higher convergence when a cost attribute is eliminated, because cost is an attribute whose value judgment is negatively correlated with most other attribute values in the model. For example, in comparing cost and space, one would like to maximize space but minimize cost; since the larger the apartment, the higher the cost, the objectives conflict with each other and the values of the attributes are negatively correlated.
This suggests that failing to include an attribute whose value or utility is negatively correlated with the others may affect model performance more dramatically than omitting an attribute with a positive or no correlation with the attributes in the model.

A paper by Barron & Kleinmuntz (1986) appears to be the first to explicitly investigate the decision quality of reduced attribute sets. The study compared the decision quality of a reduced model (a model that uses only a subset of attributes) to a full model using the convergence between the models (the measure used in the Aschenbrenner study described above) and value loss. Value loss measures the difference in value between the top choice picked by the full model and the one picked by the reduced model. If the full and reduced models pick the same top alternative, there is no value loss; value loss is high when the reduced model picks a top alternative that is judged to be inferior (to have lower value) in the full model.

Barron & Kleinmuntz introduced value loss as an alternative measure of model performance, arguing that convergence might not be the right measure for comparing the decision quality of a reduced model against the full model: the first and foremost goal of a multiattribute model is selecting the best alternative, not producing a rank order of choices. A measure that compares the top choices between models, such as value loss, is therefore more appropriate for measuring the consequences of a decision model than convergence. The study used the Places Rated Almanac dataset (Boyer & Savageau, 1985), which measures the desirability of an area of living based on 9 criteria (which function essentially as objectives): climate, housing, health care, crime, transportation, education, recreation, arts, and economics. The attribute values measuring these criteria are composite measures based on various statistics.
For the study, a value function was applied to normalize the attribute values such that the best level of each attribute is valued at 1 and the worst at 0. Barron & Kleinmuntz constructed reduced models consisting of 8, 7, or 6 attributes picked from the full model in all possible combinations, for a total of 129 reduced models. Equal weights were used for both the full and reduced models, meaning each attribute was treated as equally important. The models were then used to judge 18 datasets of 15 alternatives each.

The results indicate that value loss increases as more attributes are omitted, and confirm that convergence and value loss may provide different measures of the performance of reduced models. A high performing reduced model is expected to have high convergence with the full model as well as low value loss, and vice versa; yet there are cases where models have both high convergence and high value loss, as well as low convergence with small value loss. Ultimately, models can still have high convergence when attributes are omitted, but value loss always suffers.

Barron (1987) investigated this result further by observing the effects of attribute intercorrelation on convergence and value loss. The experiment from Barron & Kleinmuntz (1986) was partially replicated using reduced models with 6 attributes, with the datasets containing the alternatives' values picked such that the attributes included in the reduced model and the attributes left out have negative, positive, or no correlation.
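As an aside, the count of 129 reduced models in Barron & Kleinmuntz's design follows directly from enumerating all subsets of 8, 7, or 6 of the 9 Places Rated criteria, which a few lines of code can verify:

```python
from itertools import combinations

criteria = ["climate", "housing", "health care", "crime", "transportation",
            "education", "recreation", "arts", "economics"]

# Every reduced model keeps 8, 7, or 6 of the 9 criteria.
reduced_models = [subset
                  for k in (8, 7, 6)
                  for subset in combinations(criteria, k)]

print(len(reduced_models))  # C(9,8) + C(9,7) + C(9,6) = 9 + 36 + 84 = 129
```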
The results showed that convergence and value loss have a weak inverse relationship and, in contrast to Aschenbrenner (1977), did not show worse model performance when attributes are negatively correlated.

These studies used equal weights for all models, a very simplistic assumption that may not adequately capture the complexity of a multiattribute decision problem. Barron & Barrett (1999) revisited the question of reduced models and weighting, using a different dataset with 15 alternatives and 9 attributes weighted by a rank-based method, and constructed reduced models with 6, 7, and 8 attributes. This paper used a simulation approach, generating 500 value matrices, and used hit rate as the main measure of performance. Hit rate measures the proportion of cases in which the top choice picked by the reduced model matches the top choice picked by the full model. The study constructed reduced models by eliminating the lowest ranked attributes and found that hit rate decreases as more attributes are omitted, and that rank-based reduced models perform better than reduced models using equal weights.

Fry, Rinks & Ringuest (1997) examined the effects of omitted attributes in a study of the decision quality of various elicitation methods under noisy decision making conditions, in which use of the model incorporates potential errors made during the elicitation process, such as missing attributes and measurement error. The study tested both additive and multiplicative models (the latter including an extra parameter and terms that account for possible non-independence among attribute value ranges) using a dataset with 6 attributes and 8 alternatives, and examined value loss when 1, 2, or 3 attributes were omitted. The weights for the model were elicited from decision makers, and the attributes with the smallest weights were omitted in the reduced model. The results replicated the findings of Barron & Kleinmuntz (1986): value loss increased as more attributes were omitted, for both additive and multiplicative models and across all elicitation methods compared.
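The three performance measures that recur in these studies, hit rate, value loss, and convergence, can be sketched in a few lines of code. The two score vectors below are purely illustrative; the functions follow the definitions given in the text (ties are not handled in this sketch).

```python
# Compare a reduced model against the full model, given the overall value
# each model assigns to the same set of alternatives.

def ranks(values):
    """Rank positions (1 = highest value); no tie handling in this sketch."""
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
    r = [0] * len(values)
    for pos, i in enumerate(order, start=1):
        r[i] = pos
    return r

def pearson(x, y):
    """Plain Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def hit(full, reduced):
    """1 if both models pick the same top alternative, else 0."""
    return int(full.index(max(full)) == reduced.index(max(reduced)))

def value_loss(full, reduced):
    """Full-model value of the full model's top choice minus the full-model
    value of the reduced model's top choice (0 when the top picks agree)."""
    return max(full) - full[reduced.index(max(reduced))]

def convergence(full, reduced):
    """Rank order correlation between the two models' rankings."""
    return pearson(ranks(full), ranks(reduced))

full = [0.82, 0.75, 0.60, 0.40]     # full-model values of 4 alternatives
reduced = [0.70, 0.78, 0.55, 0.42]  # reduced-model values, same alternatives
# The reduced model picks alternative 1 while the full model picks
# alternative 0: hit = 0 and value_loss = 0.82 - 0.75 ~ 0.07, even though
# the rankings agree everywhere else (convergence = 0.8).
```

The example also shows why the two measures can disagree: the rankings are nearly identical (high convergence), yet the single disagreement happens to be at the top, which is where value loss is incurred.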
The elimination of attributes in decision making has also been examined through the paradigms of information overload and heuristics. Fasolo, McClelland, & Todd (2007) devised a missing-attribute study using both real-world and simulated datasets with negatively or positively correlated attributes. Each dataset consisted of 6 attributes and 21 alternatives and used either equal weights or unequal weights based on rank order centroid (ROC) weights. The study tested reduced models omitting up to five of the attributes; for unequal weights, the omission of attributes started from the lowest ranked. Model performance was measured by the proportion of non-dominated options and by value loss.

The results show that reduced models performed worse on negatively correlated attribute sets than on positively correlated ones, consistent with Aschenbrenner (1977) and inconsistent with Barron (1987). Additionally, model performance on real-world attribute data more closely resembled performance on the negatively correlated attribute sets, suggesting that real-life attribute data are more similar to negatively correlated sets than to positively correlated ones.

The researchers claim that the results show people can still make good decisions with only one attribute under certain conditions, but this conclusion may depend on the somewhat ad hoc performance metrics reported and on unique properties of the particular decision problem. One main problem is that the reasonableness of the models was evaluated using an arbitrary threshold of value loss: any model yielding less than .1 of value loss (10% of the 0-to-1 range) was considered reasonable, a standard which was not elaborated further and
Using this threshold, the researchers determined that if the attributes are positively correlated or if the attributes are unequally important, then one attribute is enough to select a good option. In practice, however, a set of alternatives often does not span the entire 0 to 1 range, meaning that a .1 value loss might have a much larger impact. For example, in a set of alternatives where the best choice has a value of .75 and the worst choice a value of .25, a .1 value loss represents 20% of the range used by the set of alternatives. Given that this study was conducted in a relatively low stakes consumer marketing context, the question of what level of value loss is acceptable went unconsidered and should be acknowledged. In more complex and high stakes problems, even a small difference in value can represent a large loss. For example, in a $10 billion investment problem, .1 is equivalent to a $1 billion difference, which can be considered a significant loss. The phrase "unequally important weights" is also rather vague, as the unequal weights they used were specifically ROC weights, and they systematically eliminated attributes from the least to the 2nd most important. The weight of the first ranked attribute out of six under ROC is .408, which is more than twice the equal weight (.167) under that condition. This is a far more specific condition than simply saying that the attributes are unequally important. While the study provides interesting results on how different levels of reduction affect the decision quality of models across differently intercorrelated attribute sets and weighting methods, the qualitative claims are limited by the use of ROC weights.
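The arithmetic behind this objection can be made concrete with a short sketch. The helper below is hypothetical (not from Fasolo et al.); it simply expresses an absolute value loss as a fraction of the value range actually spanned by the alternatives:

```python
def relative_value_loss(value_loss, best, worst):
    """Express an absolute value loss as a fraction of the value range
    actually spanned by the alternatives.
    (Hypothetical helper for illustration, not from the original study.)"""
    return value_loss / (best - worst)

# A .1 value loss among alternatives spanning .25 to .75 consumes 20% of
# the range those alternatives actually use:
print(relative_value_loss(0.1, best=0.75, worst=0.25))  # 0.2
```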
From the collection of missing attribute studies described above, one consistent result is that the number of objectives omitted proportionately affects the performance of a reduced model, observed by measuring either value loss or hit rate (but not necessarily convergence). However, there are substantial methodological variations in these studies, such as the size of the value matrix (number of attributes and alternatives), the performance measure used, and the source and characteristics of the value matrix (empirical or simulated), which limit more generalized conclusions. These variations may even be the cause of some of the more inconsistent results. Considering the results from Bond et al. (2008; 2010), this question should be revisited. The effects of missing attributes should be framed within the paradigm derived from the empirical observation that objectives are omitted due more to a lack of depth and breadth of thinking, and that omitted objectives may be as important as the objectives included in the full model. However, the previous studies suggest that the importance of having the proper set of objectives may depend not only on the weights, but also on the correlational structure among attribute values. Omitting a negatively correlated attribute has been shown to impact decision quality more than omitting a positively correlated or uncorrelated attribute. Current inconsistencies in the literature should be rectified by constructing a series of studies covering previous cases and examples, in order to replicate the findings and explain the previously inconsistent results.
Three types of studies are proposed to cover these issues:

(1) Analysis of multiattribute models from existing case studies. This study will utilize multiattribute models and value matrices from published decision analytic case studies, constructing reduced models from the existing model and evaluating their performance against the decision yielded in the original case study. This study serves to demonstrate the potential effects and consequences of omitted objectives in applied decision contexts.

(2) Monte Carlo simulation. This study involves generating multiattribute models and value matrices that can be specified to size (in terms of number of attributes and/or alternatives) as well as relevant characteristics such as correlations between attributes. Simulation methods also allow control over characteristics of the model, such as attribute weights. This study serves to identify the factors that may affect the decision quality of models when objectives are omitted, by varying parameters and observing the change in decision quality yielded by the reduced model.

(3) Empirical study of objective self-generation. This study is an extension of the Bond et al. (2008, 2010) objective self-generation studies: in addition to comparing the list of self-generated objectives and the list of objectives identified from a master list, multiattribute models will be constructed for each participant based on both lists, which will function as the reduced and full model, respectively. This behavioral study serves to replicate the findings in Bond et al., produce a quantitative analysis of the implications of omitted objectives, and verify patterns that may be observed from the factors analyzed in study (2).
The next chapter outlines the principles behind analyzing a decision problem using multiattribute value modeling, the principles behind the formulation of objectives, the use of attributes as a way to measure the achievement of objectives, the limitations of the objective generation process, and the procedures for assessing multiattribute model parameters.

Chapter 2: Formulation of a Multiattribute Decision Problem

One of the main selling points of multiattribute decision methods is their strong behavioral foundation: they focus on deciding what is important to the decision maker instead of merely comparing alternatives against each other on all available metrics. Practitioners of multiattribute methods consider the structuring of a decision problem to be as important as the analysis itself, and it is important to understand how to properly structure a decision problem in order to produce meaningful results (Keeney, 2007; Keeney & Raiffa, 1976, pp. 31-65; Montibeller & von Winterfeldt, 2015; von Winterfeldt, 1980; von Winterfeldt & Edwards, 1986, pp. 26-62; von Winterfeldt & Edwards, 2007; von Winterfeldt & Fasolo, 2009).

Value focused thinking

The principle behind value focused thinking (VFT) is to frame a decision problem by thinking about what one cares about in making the decision and examining how to achieve it, instead of examining a decision problem by thinking about available choices and choosing among them. Structuring decision problems through VFT encourages broader thinking that truly reflects what one wants, as opposed to alternative focused thinking, which works within the boundaries of what is believed to be the available options (Keeney, 1996, pp. 29-30). There are two important components in the initial process of framing a decision problem through VFT, referred to together as the decision frame. The decision frame is derived from setting the decision context and the fundamental objectives.
A decision context defines the set of alternatives appropriate to consider for a specific decision situation. Fundamental objectives capture the key values relevant to the decision context and define metrics to assess the degree to which those values are fulfilled. This is in contrast to means objectives, which serve the purpose of achieving a more essential objective. Fundamental objectives can be considered end objectives, which are more useful in capturing the big picture of the decision problem, while means objectives are mainly useful in identifying more specific consequences of fundamental objectives. For example, a film producer may have an objective of minimizing filming days as a means objective for the larger fundamental objective of minimizing cost. Keeney (1996, p. 49) specifies a prescriptive sequence for structuring a decision problem according to VFT as follows: (1) recognize a decision problem, (2) specify values, (3) create alternatives, (4) evaluate alternatives, (5) select an alternative. Alternative based thinking has the order of (2) and (3) reversed, thereby constraining the formulation of values to the alternatives that are immediately available. Considering values before alternatives actually encourages the formulation of alternatives, as Siebert & Keeney (2015) have empirically demonstrated that objectives can be used to generate more and better alternatives.

Desirable properties for defining objectives

Through identifying values, we can define objectives by specifying two things: (1) what is hoped to be achieved; and (2) the directionality of attaining that particular achievement, usually expressed as maximizing or minimizing a certain characteristic or consequence. A general example would be minimizing cost and maximizing benefits. This example also demonstrates the naturally conflicting nature of objectives in a multiattribute objective set in any decision analytic problem.
Capturing this type of value tradeoff is essential in eliciting preferences from decision makers, and failure to do so may result in poor decisions (Keeney, 2002). There are nine desired properties of fundamental objectives, as outlined by Keeney (1996, p. 82). The set of fundamental objectives should be: (1) essential, to indicate consequences in terms of the fundamental reasons for interest in the decision situation; (2) controllable, to address consequences that are influenced only by the choice of alternatives in the decision context; (3) complete, to include all fundamental aspects of the consequences of the decision alternatives; (4) measurable, to define objectives precisely and to specify the degrees to which objectives may be achieved; (5) operational, to render the collection of information required for an analysis reasonable considering the time and effort available; (6) decomposable, to allow the separate treatment of different objectives in the analysis; (7) nonredundant, to avoid double-counting of possible consequences; (8) concise, to reduce the number of objectives needed for the analysis of a decision; and (9) understandable, to facilitate generation and communication of insights for guiding the decision making process. A set of objectives in a decision problem typically has a hierarchical structure, often organized as a value tree, which helps in identifying and organizing objectives, as well as in mapping the relationship between means and fundamental objectives (Keeney & Raiffa, 1976, p. 41; von Winterfeldt & Edwards, 1986, p. 26). Mapping the objective hierarchy into higher and lower level objectives may be done through top down or bottom up procedures, or a combination of both. A top down approach may start with a general objective and break it down into more specific contexts or circumstances. A bottom up procedure may involve grouping objectives together and categorizing them under a larger theme (Eisenfuhr, Weber, & Langer, 2010, p. 68).
In applications of multiattribute analysis, not all objectives may necessarily be included in the multiattribute model. Different objectives may serve different purposes. An objective may be used in a more qualitative manner, such as screening inferior alternatives, addressing consequences of specific alternatives, and helping to generate alternatives (Siebert & Keeney, 2015). When an objective is chosen to be included in the model, the next important step is to decide how to measure how well an alternative achieves that objective by picking appropriate attributes.

Attributes as operationalization of objectives

Attributes serve to operationalize what we mean by achieving each objective and provide a metric to score the performance of each alternative. In other words, attributes should allow us to define the best or worst consequences of the objectives. Keeney & Gregory (2005) outline five desirable properties of a set of attributes, some of which follow from the desirable properties of objectives: (1) unambiguous, where a clear relationship exists between consequences and descriptions of consequences using the attribute; (2) comprehensive, where the attribute levels cover the range of possible consequences for the corresponding objective and value judgments implicit in the attribute are reasonable; (3) direct, where the attribute levels directly describe the consequences of interest; (4) operational, where information to describe consequences can be obtained and value trade-offs can reasonably be made in practice; and (5) understandable, where consequences and value trade-offs made using the attribute can readily be understood and clearly communicated. Despite these similar properties, objectives and attributes are not interchangeable in the decision structuring process. A study by Butler, Dyer & Jia (2006) compared the performance of a model that structures a decision problem based on attributes with one that does so based on objectives.
They discovered that the model based on objectives performs better and produces less judgment error than the model based on attributes. Attributes can be measured directly through natural or constructed attributes. Natural attributes are measures that already exist for measuring the achievement of a value, such as cost measured in dollars or euros (or any other currency for that matter). Constructed attributes are measures formulated specifically to measure an achievement as accurately as possible, such as measuring pain through a rating scale. Attributes can also be measured indirectly using proxy attributes. An example would be using the level of air pollution to measure the objective of minimizing health effects from air pollution, rather than directly measuring illnesses and deaths due to air pollution. It is recommended that proxy attributes be avoided unless absolutely necessary, due to potential confounds and biases that may occur (Fischer, Damodaran, Laskey & Lincoln, 1987; Keeney & Gregory, 2005; Montibeller & von Winterfeldt, 2015). An attribute always maps to an objective, and several attributes may map to the same objective. In other words, an objective may have several attributes. In some cases, an attribute can be used to measure more than one objective, but it may be evaluated differently, such that it has different value functions for different objectives. For example, a company during a hiring process may aim to maximize the quality of a candidate yet minimize wage demand. Both of these can be measured by years of experience. Regardless of the number of attributes and objectives, the main takeaway is that the omission of an objective always results in the omission of one or more attributes.
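The hiring example can be made concrete with a minimal sketch of one attribute serving two objectives through different value functions. The linear forms and the 10-year saturation point below are purely illustrative assumptions, not from any cited study:

```python
def candidate_quality_value(years):
    """Value function for 'maximize candidate quality': more experience is
    better. Linear form with a 10-year cap (an assumed, illustrative shape)."""
    return min(years / 10.0, 1.0)

def wage_demand_value(years):
    """Value function for 'minimize wage demand': more experience implies a
    higher expected wage demand, so fewer years score better (also assumed)."""
    return 1.0 - min(years / 10.0, 1.0)

# The same attribute level (4 years) is valued differently under each objective:
print(candidate_quality_value(4))  # 0.4
print(wage_demand_value(4))        # 0.6
```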
Operationalizing omission of objectives in a multiattribute model

A key element of any missing attribute study is the construction of a "full" multiattribute model, which accounts for all relevant decision objectives and/or attributes, and the construction of a reduced version of the full model obtained by eliminating a certain number of attributes. The two models are then used to judge a set of alternatives, and the results of these judgments are compared to each other. Before going into the details, it should be understood that the only difference between a multiattribute value (MAV) analysis and a multiattribute utility (MAU) analysis is the presence of uncertainty or risk in the decision problem. In the formulation of the model, this distinction is expressed by the use of either value functions (often denoted $v(x)$) for MAV problems or utility functions (often denoted $u(x)$) for MAU problems. In this section, the model will be structured as an MAV problem. Consider a riskless decision problem with $n$ alternatives to choose from, denoted as the set $X = \{X_1, X_2, \dots, X_n\}$. Each alternative is associated with $m$ attributes, denoted as the set $X_i = \{x_1^i, x_2^i, \dots, x_m^i\}$ for $i = 1, 2, \dots, n$, where $x_j^i$ denotes the $j$th attribute of the $i$th alternative. The function $v(x)$ is generally used to denote a value function for some variable $x$. Value functions serve to translate real world attribute values (for instance, cost in dollars) to a value contained in the interval $[0, 1]$, where 0 represents the worst possible value for the attribute and 1 represents the best value. In this case, the value function $v(X_i)$ is used to represent the value of an alternative $X_i$, while $v_j(x_j^i)$ represents the value function of attribute $j$ in some alternative $i$.
The value of alternative $X_i$ is denoted as

$v(X_i) = f[v_1(x_1^i), v_2(x_2^i), \dots, v_m(x_m^i)]$ (1)

When comparing among alternatives, an alternative $X_y$ is preferable to an alternative $X_z$, with $y, z \in \{1, 2, \dots, n\}$, if and only if $v(X_y) > v(X_z)$. The function $f$ denotes a method of aggregating the value of each attribute. There have been several approaches to expressing the collective value of an alternative across its attributes. The additive model is the most commonly used function and is sufficient when the objective set adequately represents the fundamental objectives (Keeney & Raiffa, 1976, p. 91). An expansion of equation (1) using the additive model is expressed as

$v(X_i) = v_1(x_1^i) + v_2(x_2^i) + \cdots + v_m(x_m^i) = \sum_{j=1}^{m} v_j(x_j^i)$ (2)

The expansion above expresses the additive nature of the model, yet it is still not quite in the form of a proper additive model. An important property of this model is the use of scaling parameters or weights, which denote the relative valuation of each attribute to accommodate the fact that one objective might not be of equal importance to another. Each attribute is associated with its own scaling parameter or weight $w_j$. With the set of weights $W = \{w_1, w_2, \dots, w_m\}$, the proper form of the additive model for (2) is expressed as

$v(X_i) = w_1 v_1(x_1^i) + w_2 v_2(x_2^i) + \cdots + w_m v_m(x_m^i) = \sum_{j=1}^{m} w_j v_j(x_j^i)$ (3)

The weights are also constrained by the conditions

$\sum_{j=1}^{m} w_j = 1, \quad 0 < w_j < 1$ (4)

Both value functions and weights are elicited from decision makers and may vary from person to person.

Constructing reduced models

The underlying idea behind computing the effects of incomplete objective sets is that an incomplete set of objectives is simply a subset of a more complete objective set. The extent to which a reduced set of objectives affects overall decision quality can be observed by sampling various subsets of the initial objective set.
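The additive model of equation (3) can be sketched in a few lines of code. The weights and single-attribute values below are hypothetical; the dissertation's analyses were run in R, but the same logic is shown here in Python:

```python
def additive_value(attribute_values, weights):
    """Additive MAV model, equation (3): v(X_i) = sum_j w_j * v_j(x_j^i).
    `attribute_values` holds the single-attribute values v_j(x_j^i) in [0, 1]."""
    assert len(attribute_values) == len(weights)
    assert abs(sum(weights) - 1.0) < 1e-9  # weight constraint, equation (4)
    return sum(w * v for w, v in zip(weights, attribute_values))

# Two hypothetical alternatives scored on three attributes:
weights = [0.5, 0.3, 0.2]
x_y = [0.9, 0.4, 0.1]  # single-attribute values for alternative X_y
x_z = [0.2, 0.8, 1.0]  # single-attribute values for alternative X_z
# X_y is preferred to X_z if and only if v(X_y) > v(X_z):
print(additive_value(x_y, weights) > additive_value(x_z, weights))  # True
```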
Consider the value of some alternative $X_i$ with $m$ attributes associated with the set of weights $W = \{w_1, w_2, \dots, w_m\}$, which will be referred to as the full set of weights. Suppose that a decision maker omits several objectives for one reason or another; the decision maker then has a reduced set of $m - \ell$ parameters, where $\ell$ represents the number of attributes associated with the objectives not included in the decision maker's model. This model is referred to as the reduced model, with the reduced weight set $W' = \{w'_1, w'_2, \dots, w'_{m-\ell}\}$. Two variations of weighting methods commonly used in applied multiattribute studies will be explored in the series of studies included in this proposal. The first is ratio scale weighting, which preserves the relative importance of one objective to another. The second is approximate weighting, represented by equal weights and rank-based weighting methods. Ratio weights are the more ideal way to obtain multiattribute weights because they preserve the relative valuation of each attribute against the others; in other words, $w_i/w_j$ is constant for any $i, j = 1, 2, \dots, m$. Ratio weights are also better at accounting for the range of the attribute values, since they take into account the best and worst possible values of each attribute among the alternatives. For example, in a travel decision problem where one of the objectives is minimizing travel time, one set of alternatives may range from 1 to 8 hours between worst and best, while another set ranges from 1 to 2 hours. Despite being the same objective for the same decision problem, the model for the first set of alternatives would likely assign a larger weight to that attribute than the model for the other set, because a wider range typically requires more tradeoff than a narrower one.
Ratio weights can be obtained using techniques such as indifference methods or swing weighting (von Winterfeldt & Edwards, 1986, pp. 272-273). In this case, the set $W'$ is simply obtained by collecting the weights of the attributes in the full model that correspond to the objectives included in the reduced model. Note that since the sum of the members of $W'$ does not equal 1, being a subset of $W$, it needs to be normalized by dividing each member of $W'$ by the sum of $W'$. Most previous missing attribute studies utilized approximate methods of attribute weighting, namely equal and rank-order based weights. Equal weighting is the simplest method; it provides only the binary information of whether an attribute is relevant, while assuming all attributes are of equal importance (Dawes & Corrigan, 1974). The weight of each attribute in a model with $m$ attributes is simply

$w_i = \frac{1}{m}, \quad i = 1, 2, \dots, m$ (5)

A rank-order weighting method is an approximate way of assigning weights to objectives. It provides ordinal information on the value of objectives and is relatively intuitive to understand, such that it may be more practical in situations such as having multiple decision makers (Stillwell, Seaver, & Edwards, 1981). There are various kinds of rank-order weighting methods, such as rank-sum and rank reciprocal models, but rank-order centroid (ROC) is considered the best performing of all rank-based weighting methods (Barron & Barrett, 1996; Jia, Fischer, & Dyer, 1998). For objectives ranked ordinally such that $w_1 > w_2 > \cdots > w_m$, the ROC weights are calculated as

$w_j = \frac{1}{m} \sum_{i=j}^{m} \frac{1}{i}, \quad j = 1, 2, \dots, m$ (6)

Weighting method researchers have argued that the effects of imprecise weights may be negligible and yield similar decision quality.
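Equations (5) and (6), together with the renormalization of a reduced weight set $W'$, can be sketched as follows. This is a minimal illustration; note that the top-ranked ROC weight for six attributes reproduces the .408 figure quoted earlier:

```python
def equal_weights(m):
    """Equation (5): w_i = 1/m for every attribute."""
    return [1.0 / m] * m

def roc_weights(m):
    """Equation (6): rank-order centroid weights,
    w_j = (1/m) * sum_{i=j}^{m} 1/i, for ranks j = 1..m."""
    return [sum(1.0 / i for i in range(j, m + 1)) / m for j in range(1, m + 1)]

def renormalize(reduced):
    """Rescale a reduced weight set W' so that it sums to 1."""
    total = sum(reduced)
    return [w / total for w in reduced]

w = roc_weights(6)
print(round(w[0], 3))  # 0.408, more than twice the equal weight of 1/6 (~.167)
```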
Simulation studies on different weighting models find that some models perform marginally better quantitatively (Barron & Barrett, 1996; Jia et al., 1998; Ahn & Park, 2008), but the differences are of such small magnitude that they may not even matter in applications of the model. One can argue that the omission of objectives could also be considered a problem of weighting variation, as omitting an objective is equivalent to assigning it a weight of 0.

Measuring model performance

Measuring the impact of omitted attributes in a multiattribute model is essentially a sensitivity analysis in which the weights of omitted attributes are set to zero and the weights of the remaining attributes are renormalized. Thus, performance measures used in studies of missing attributes are measures commonly used in the sensitivity analysis literature. The various measures that have been used to assess the performance of reduced models in the existing literature have produced mixed results. These measures detect differences in distinct aspects of model performance, with varying degrees of relevance to examining the consequences of missing attributes. Three measures will be used to assess decision quality: (1) Hit Rate (HR) – Hit rate is the proportion of models in the distribution of reduced models that select the same top choice as the full model, the top choice being the alternative with the highest value. For example, if the full model picks alternative Xi as the highest ranked choice and, out of a simulation of 1000 reduced models, 690 of them rank Xi highest, then the hit rate would be .690. This measure was used by Fry, Rinks, and Ringuest (1996) as well as Barron and Barrett (1997). (2) Average Value Loss (AVL) – Average value loss describes the difference in value between the best alternative picked by the full model and the best alternative picked by the reduced model, using the MAV function of the full model.
If $x_{\max,full}$ represents the best alternative of the full model and $x_{\max,reduced}$ the best alternative of the reduced model, value loss is calculated as

$v_{loss} = v_{full}(x_{\max,full}) - v_{full}(x_{\max,reduced})$ (7)

When the full and reduced models pick the same alternative, there is no value loss. Value loss is high when a reduced model picks a top choice that is judged to have a low value by the full model. A low value loss indicates higher performance of the reduced model. Value loss is useful for interpreting model differences in context, because numerical value differences can be converted to attribute values using the inverse of the value function. This measure was used by Barron and Kleinmuntz (1986), Barron (1987), Fry et al. (1996), and Fasolo et al. (2007). (3) Convergence – Convergence refers to the Spearman rank order correlation between the rank ordering of alternatives from the full model and that from the reduced model. In contrast to the previous measures, which focus on the top choice made by the models, convergence considers how the model judges the entire set of alternatives as a whole, which may be relevant in situations where decision makers need to rank priorities or preferences over a set of choices. This measure was used by Aschenbrenner (1977) and Barron and Kleinmuntz (1986). However, Barron (1987) found that convergence yields inconsistent results. The intuition for the effect of omitted objectives based on this conceptualization is that the effect may likewise be marginal, or at the very least not much worse than the imprecision of weights. However, there are other factors to evaluate that may produce more systematic effects than the imprecision of model parameters. The extent to which these variations impact real life decision problems may also depend on the structure of the objectives in a decision problem and on the value matrices of the choice alternatives themselves.
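The three measures can be sketched as simple functions. These are minimal illustrations of the definitions above, not code from the cited studies; the closed-form Spearman formula assumes no tied ranks, and the toy values are hypothetical:

```python
def hit_rate(full_top, reduced_tops):
    """Proportion of reduced models whose top choice matches the full model's."""
    return sum(top == full_top for top in reduced_tops) / len(reduced_tops)

def value_loss(v_full, top_full, top_reduced):
    """Equation (7): the full model's value of its own top choice minus the
    full model's value of the reduced model's top choice."""
    return v_full[top_full] - v_full[top_reduced]

def spearman_rho(ranks_a, ranks_b):
    """Convergence: Spearman rank correlation between two rank orderings,
    rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)), assuming no ties."""
    n = len(ranks_a)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks_a, ranks_b))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Toy example with four alternatives valued by a hypothetical full model:
v_full = {"A": 0.9, "B": 0.7, "C": 0.5, "D": 0.2}
print(hit_rate("A", ["A", "A", "B", "A"]))       # 0.75
print(value_loss(v_full, "A", "B"))              # ~0.2
print(spearman_rho([1, 2, 3, 4], [2, 1, 3, 4]))  # 0.8
```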
One way to study the role of decision context is to evaluate the systematic effects of omitted objectives on established multiattribute case studies. Published multiattribute value/utility studies provide decision models and choice alternative values in specific decision contexts that reflect how multiattribute analysis is applied in various scenarios. By constructing reduced models for each case study, the effects and consequences of omitted objectives can be examined by comparing the outcome of the initial model with those of the reduced models.

Chapter 3: Study I: Cost of omitted attributes from applied case studies

The wide usage of the multiattribute approach to decision analysis has produced a great breadth of empirical data in the published literature (Edwards & Newman, 1982; Merkhofer & Keeney, 1987; Youngblood & Collins, 2003; Brothers et al., 2009). Previous studies on missing attributes have included data from real life decision problems (Barron, 1987; Fry et al., 1997; Fasolo et al., 2007). For the most part, these studies acknowledge the limits of their results, given that their analyses are bound to the characteristics of the decision problem, e.g., the number of attributes and alternatives, the weight parameters, and the covariance structure of the values. This chapter outlines a comparative analysis of the effects of model reduction across multiple cases in the applied decision analysis literature. Cases from existing decision problems provide data from real life decision environments that capture the characteristics of the decision spaces one might encounter in applications of multiattribute analysis techniques. Comparing the effects of model reduction across various applied decision settings allows us to observe the potential empirical implications of the insufficient self-generation of objectives observed in Bond et al.'s studies and to examine the variation that may exist across applications of multiattribute analysis.
3.1 Methods

The analysis of each application begins with a review of the decision context of the multiattribute analysis. This is followed by an examination of the process involved in constructing the model, such as the formulation of objectives, selection of alternatives, weight elicitation methods, relevant stakeholders involved, etc. A brief description of the results of the original analysis follows. The model used in the original analysis is referred to as the full model, which acts as the baseline. For the analysis in this study, a new multiattribute model is constructed by omitting some of the objectives included in the full model. The decision problem is then reexamined using the new reduced model, and the results of the reduced model are compared against those of the full model. The implications of the difference (or lack thereof) between the full and reduced models are then discussed.

3.1.1 Analysis Procedures

The empirical literature on objective self-generation finds that the degree of omission ranges around 50-70%; therefore, reduced models will be constructed at three levels: the omission of (roughly) 1/3, 1/2, and 2/3 of the available objectives from the full model. The empirical results also show little difference in the quality of self-generated and recognized objectives; therefore, multiple reduced models will be obtained from each possible combination of the subset of objectives. For example, if a decision problem has a full set of 12 objectives, the 1/3 reduced model will utilize 8 objectives sampled from the full set, giving $\binom{12}{8} = 495$ possible reduced models included in the analysis. Hit rate is calculated across all trials. Value loss is averaged across all reduced models. Convergence is obtained from each reduced model, and the range and median of the convergence values will be used to describe the results. The analysis was executed on the R statistical programming platform.
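The combinatorial step that yields the 495 reduced models for the 12-objective example can be sketched as follows. The original analysis was executed in R; this Python sketch, with an illustrative function name, shows only the enumeration and renormalization logic:

```python
from itertools import combinations

def reduced_weight_sets(weights, keep):
    """Yield every reduced model that keeps `keep` of the original attributes,
    with each retained weight subset renormalized to sum to 1.
    (Illustrative helper, not code from the original R analysis.)"""
    for idx in combinations(range(len(weights)), keep):
        subset = [weights[i] for i in idx]
        total = sum(subset)
        yield idx, [w / total for w in subset]

# A full model with 12 equally weighted objectives, keeping 8 (the 1/3 reduction):
full = [1.0 / 12] * 12
models = list(reduced_weight_sets(full, keep=8))
print(len(models))  # 495, i.e. C(12, 8)
```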
3.1.2 Selection of cases

In order to understand the potential impact of missing objectives in practice, the analysis procedures explained in the previous section will be applied to various applications of multiattribute methods to real life decision problems. The applications will be selected from published literature demonstrating the use of multiattribute methods in various decision contexts. There are four criteria for selecting the cases included in the study:

1. Presents a multicriteria decision problem that utilizes an additive multiattribute model.
2. Lists all objectives considered in the model and the attributes (measurements) used for each objective, as well as the attribute weights.
3. Lists the set of alternatives considered in the problem along with their normalized value/utility for each objective, or includes both the attribute values and the value/utility function used.
4. The results of the multiattribute analysis as presented in the paper are reproducible.

An important aim of this study is to sample applications from various fields spanning multiple interdisciplinary journals. The academic search engine Google Scholar, complemented by the ProQuest journal database, was utilized for the search process. The literature search involved the keywords "multi-attribute", "application", and "model", as well as "weights" and "table", in order to filter for studies that include the value matrices used in their analyses. Around 100 abstracts were examined in the literature search process. The most common reason for excluding a study was that it did not present all the required data for the analysis; for example, a study may not show the full set of alternatives and their attribute values. Some studies were eliminated for being more theoretical in nature and lacking an applicable decision context, although applications with hypothetical data were accepted as long as they had a clear applicable decision context.
Another consideration in selecting cases is that the applications should illustrate the variety of topics to which multiattribute methods are applied, giving the opportunity to describe how different scenarios may be impacted by the omission of objectives. The complexity of the value matrices, i.e., the number of objectives/attributes and alternatives, should also vary across the different decision contexts to help illustrate the diversity of real-life applications of multiattribute methods. The 8 applications of multiattribute methods selected for analysis in this study are shown in Table 3.1. The applications are listed alphabetically by application name.

Table 3.1: Published multiattribute applications included in the analysis

Application name | Decision context | Citation
Airline competitiveness | Evaluating airline competitiveness | Chang, Y. H., & Yeh, C. H. (2001). Evaluating airline competitiveness using multiattribute decision making. Omega, 29(5), 405-415.
Cargo delivery | Planning of cargo delivery in disaster aftermath | Cavalcanti, L. B., Mendes, A. B., & Yoshizaki, H. T. Y. (2017). Application of multi-attribute value theory to improve cargo delivery planning in disaster aftermath. Mathematical Problems in Engineering, 2017.
Climate change policy | Evaluation of climate change mitigation policy | Konidari, P., & Mavrakis, D. (2007). A multi-criteria evaluation method for climate change mitigation policy instruments. Energy Policy, 35(12), 6235-6257.
Container ports | Selection of container ports in the Gulf Coast for shipping lines | De Icaza, R. R., Parnell, G. S., & Pohl, E. A. (2019). Gulf Coast Port Selection Using Multiple-Objective Decision Analysis. Decision Analysis, 16(2), 87-104.
Health care treatment | Value of health care treatment options | Phelps, C. E., & Madhavan, G. (2017). Using multicriteria approaches to assess the value of health care. Value in Health, 20(2), 251-255.
Historical buildings | Choice of historical buildings to repurpose | Ferretti, V., Bottero, M., & Mondini, G. (2014). Decision making and cultural heritage: An application of the Multi-Attribute Value Theory for the reuse of historical buildings. Journal of Cultural Heritage, 15(6), 644-655.
Power plant | Options for power plant development | Jacobi, S. K., & Hobbs, B. F. (2007). Quantifying and mitigating the splitting bias and other value tree-induced weighting biases. Decision Analysis, 4(4), 194-210.
Radioactive waste | Immobilizing radioactive liquid waste | Brothers, A. J., Mattigod, S. V., Strachan, D. M., Beeman, G. H., Kearns, P. K., Papa, A., & Monti, C. (2009). Resource-limited multiattribute value analysis of alternatives for immobilizing radioactive liquid process waste stored in Saluggia, Italy. Decision Analysis, 6(2), 98-114.

3.1.3 Problem descriptions

Airline competitiveness (Chang & Yeh, 2001)

Chang and Yeh (2001) examine the performance of several domestic air carriers in Taiwan. The goal of this study is not to choose between alternatives, but to compare the performance of different domestic airlines and evaluate the strengths and weaknesses of each airline based on the distribution of attribute values that contribute to the evaluation of airline quality. This in turn provides the airline companies some guidance in determining strategies to improve their competitiveness. The application uses 11 performance measures as attributes, based on 5 main dimensions: cost, productivity, service quality, price, and management. The performance measures are natural attributes based on airline performance records during the years 1992-1997.

Cargo delivery (Cavalcanti, Mendes, & Yoshizaki, 2017)

The specific decision context for this study is choosing between combinations of disaster aid strategies. The alternatives compared in the problem involve directing the focus of the strategy to various key factors given the constraints of time and resources.
For example, there would be a plan that focuses on response speed, or a plan that focuses on deploying the appropriate cargo type and quantity. The objectives in the problem were extensively elicited from various experts with experience in humanitarian aid and/or peacekeeping, as well as from the available literature on the topic. A single decision maker (a humanitarian expert) selected the appropriate objectives for the model from the pool of objectives. The set of objectives was then narrowed down to 5 main objectives included in the analysis: (1) safety of the cargo and workers, (2) stability with local communities, (3) quantity delivered proportional to needs, (4) speed of urgent cargo delivery, and (5) demand fulfillment of priority cargo. The attributes to measure performance were also elicited from the decision maker. The weights for the attributes were elicited using the rank order centroid (ROC) method.

The alternatives were selected through a Value-Focused Thinking (VFT) approach, in which alternatives are formulated based on the type of aid deployment strategy that would fulfill one objective at a time. The alternative formulation process also utilizes the Analysis of Interconnected Decision Areas (AIDA) method (Howard, 1988), which divides a strategy into sets of small decisions, each set representing a decision area.

Climate change policy (Konidari & Mavrakis, 2007)

This application is an example of using multiattribute methods to compare alternatives rather than to pick a top choice. Konidari and Mavrakis (2007) utilized various MCDA methods, including MAUT, to evaluate climate change mitigation policies from various European nations. The researchers gathered policies from various European nations and judged them based on criteria selected from objectives identified in the literature concerning energy policymaking.
They organized the criteria into three main objectives: (1) environmental performance, (2) political acceptability, and (3) feasibility of implementation. From the three main objectives, several lower-level objectives are developed, amounting to a total of 11 objectives utilized in the multiattribute model. The attribute weights are formulated based on the expressed preferences of three stakeholder groups: policy makers, researchers, and target groups.

Container ports (De Icaza, Parnell, & Pohl, 2019)

In this study, De Icaza, Parnell, and Pohl (2019) evaluate and compare five ports located across the Gulf Coast. The analysis incorporates eighteen objectives and corresponding attribute measures gathered from the port selection literature with input from experts. Utility functions were used to obtain normalized attribute values, with 0 representing the worst case and 1 representing the best (for this study's analysis, the utility functions are linearized for simplicity). The researchers elicited weights from experts using the swing weight matrix (Parnell & Trainor, 2009), an expansion of the swing weighting method that accounts for both the importance of and the variation in each attribute value measure. Three levels of importance were used to classify the value measures (critical, moderate, and minor), and three levels were used to classify the variation in the attribute value measures (small, medium, and large).

Health care treatment (Phelps & Madhavan, 2017)

In contrast to the organizational-level decision making prevalent in the previous examples, this study examines decision making on an individual level. The decision problem centers on a cancer patient choosing between 5 treatment options.
There are 9 attributes included in the analysis: (1) probability of remission, (2) expected months of remission-free survival, (3) probability of hair loss, (4) probability of nausea, (5) pain, (6) total cost, (7) patient cost, (8) advancing knowledge, and (9) quality of evidence about benefits. Five types of decision makers are presented in the application, with decision maker goals based on archetypes such as focusing on avoiding pain, increasing longevity, or maximizing the chance of being cured. Many of the attributes are natural measurements that are very sensitive to differences in value/utility and easily translatable into their impact on a patient's quality of life.

Historical buildings (Ferretti, Bottero, & Mondini, 2014)

This study provides an example of a multiattribute application examining options for the preservation of historical buildings. In this decision problem, decision makers from different specialties (history, planning, restoration, and economics) are asked to evaluate seven different historical buildings and determine the building most suitable for restoration for tourism. There are five objectives considered in the selection: (1) quality of the context, (2) economic activities, (3) flexibility of the building, (4) accessibility, and (5) conservation level. The objectives were chosen based on the recommendation of a panel of experts. Value functions are constructed for each objective based on the attributes used to measure the objectives, using either quantified measures (for economic activities and accessibility) or qualitative judgment (quality of context, flexibility, and conservation level). The weights utilized in the problem are elicited through the swing weighting method (a ratio-based method) from each expert, and the results of the experts are then compared against one another.
In addition to the individual expert models, a model that uses the mean of the attribute weights across all experts is also constructed.

Power plant (Jacobi & Hobbs, 2007)

The decision problem in this study involves energy management executives choosing between long-term plans for energy generation and conservation. The initial decision structuring process involved a group of 11 experts (mostly planners or midlevel executives) brainstorming to identify plan alternatives for the expansion of the energy company. The alternatives consist of various changes and improvements that could be made to current power plant operations, such as focusing more on renewable energy or improving the efficiency of current generators. There are 14 proposed plans in addition to the status quo, for a total of 15 alternatives. The set of objectives/attributes was provided by the researchers; examples include minimizing 20-year capital expenditures, minimizing carbon dioxide emissions, and minimizing regional job loss. The main purpose of the original study was to examine the presence of splitting biases during the weight elicitation process in the formulation of a multiattribute model. The splitting bias occurs when the model weights elicited from the decision maker are influenced by the structure of the value tree used to formulate the set of objectives/attributes. The researchers compared the sets of weights elicited from 11 decision makers who were presented with different forms of value trees for the same problem. For this analysis, the set of weights from subject 5 was chosen because every weight in that model is nonzero.

Radioactive waste (Brothers et al., 2009)

This study examines the decision problem of evaluating options for the processing and disposal of liquid radioactive waste in Saluggia, Italy.
The three alternatives were proposed by an expert in the design and management of nuclear waste, who presented three waste disposal plans to consider: (1) vitrification and cementation at the site, (2) cementation at the site, and (3) transport abroad. There are 11 objectives included in the analysis, many of which relate to the risks involved in the consequences of each alternative, e.g., technical risk and schedule risk; thus, this decision problem would be classified as a multiattribute utility problem. The objectives were outlined to account for concerns from various stakeholders such as management, technicians, and the public. The attribute values used to measure the objectives were a mix of engineering data and expert judgments. The attribute values are converted using a value or utility function to numerical values bounded on [0,1], where 0 represents the worst value and 1 represents the best value. The attributes are mostly positively correlated with each other, with the notable exception of the objective final product confinement, which is negatively correlated with all remaining attributes. The weights for this model were elicited using a modified simple multiattribute rating technique (SMART) devised by Edwards and Barron (1994), yielding a ratio-based weight set.

3.2 Results

3.2.1 Selected application analysis: Container ports (De Icaza, Parnell, & Pohl, 2019)

This section presents a representative example of the qualitative and quantitative analysis that was performed on each application. The analyses for the 7 other applications can be found in Appendices B1-B7.

The top choice according to the full model is Houston with a value of 0.356, and the lowest is Gulfport with a value of 0.169. The value range for the set of considered alternatives is 0.189. Due to the large number of objectives, this application provides the highest number of possible reduced models. Table 3.2 shows the performance parameters for each level of model reduction.
Despite the large number of possible models, the top choice stays relatively robust compared to decision problems with the same number of alternatives. The initial top choice of Houston remains ahead of the vast majority of other options even though it is not particularly close to dominating them, having the highest attribute value in only 10 of the 18 available attributes. The full model and complete list of attribute values for each alternative are displayed in Table 3.3.

Table 3.2: Performance parameters at each level of reduction for the port selection application

Parameter                          1/3 reduced    1/2 reduced    2/3 reduced
Number of objectives (out of 18)   12             9              6
Number of possible models          18,564         48,620         18,564
Hit rate                           0.916          0.806          0.682
Average value loss                 0.0082         0.0207         0.0377
Convergence (median)               0.90           0.90           0.70
Convergence (mean)                 0.87           0.76           0.61
Convergence (range)                -0.80 to 1.00  -1.00 to 1.00  -1.00 to 1.00

Table 3.3: Full model and complete list of attribute values for the port selection application

Objective                                  Houston  New Orleans  Mobile  Gulfport  Tampa  Weight
Maximize potential of bigger ships         0.47     0.47         0.47    0.00      0.37   0.110
Maximize vessel arrival                    0.38     0.05         0.05    0.38      0.00   0.077
Maximize berth capacity                    0.05     0.00         0.00    0.15      0.00   0.105
Maximize cargo handling performance        0.29     0.06         0.03    0.00      0.00   0.099
Maximize port capacity                     0.21     0.00         0.04    0.10      0.03   0.050
Maximize refrigerated containers capacity  0.23     0.14         0.03    0.26      0.00   0.006
Maximize market/cargo volume               0.28     0.05         0.02    0.02      0.00   0.094
Maximize ship calls                        0.87     0.44         0.10    0.05      0.00   0.083
Maximize container handling capacity       0.34     0.34         0.34    0.00      0.28   0.061
Maximize shipping lines serviced           0.42     0.16         0.19    0.00      0.02   0.066
Maximize intermodal service                0.25     0.01         0.56    0.10      0.79   0.055
Maximize rail connectivity                 0.20     1.00         0.80    0.20      0.00   0.039
Minimize landside congestion               0.00     0.82         0.97    1.00      0.66   0.033
Maximize environmental protection          0.75     0.25         0.00    0.36      0.50   0.022
Minimize severe weather                    1.00     1.00         0.00    0.60      0.20   0.044
Minimize disasters situation               0.00     0.59         0.46    0.38      0.71   0.017
Minimize tornadoes presence                0.00     0.82         0.78    0.78      0.62   0.011
Minimize precipitation condition           0.00     0.24         0.30    0.49      0.61   0.028

Interpretation of reduced model performance

The results affirm the competitiveness of the top choice, Houston. This is consistent with the Monte Carlo analysis from the original study: even when accounting for uncertainty in the attribute values, Houston remains on top in most cases. In addition to the value calculation, the original study also included a Monte Carlo simulation of cost, kept separate from the multiattribute value model due to stakeholder preferences for comparing cost directly against value; the cost model also involves a higher level of uncertainty, which is why a Monte Carlo method was utilized in the original analysis. Houston unsurprisingly had the highest cost, as value is often negatively correlated with cost in a decision problem, which is why value tradeoffs need to be calculated. Therefore, any judgment about the overall best option for investment should determine the balance between cost and value, keeping in mind that the value of the Houston port is significantly better than that of the other options.

Many of the attributes in the decision problem utilize natural measurements, so value loss is easily interpretable as a real-unit equivalent. The three attributes with the largest weights are discussed in this section; the complete list of real-unit equivalents of value loss is displayed in Table 3.4. Attribute weights with the largest values indicate the biggest gap between the best and worst attribute levels, so value loss affects these attributes more than attributes with smaller weights. Consider the attribute with the highest weight, which represents the objective of maximizing potential of bigger ships, measured by depth in feet.
The real-unit equivalent of the value loss at 1/3, 1/2, and 2/3 reduction corresponds to 0.16, 0.39, and 0.72 feet, respectively, which is not a substantial difference when considering the sizes of ships harboring in such ports. The next attribute is for the objective maximize berth capacity, represented by the length of the berths in feet. The real-unit equivalent of the value loss at 1/3, 1/2, and 2/3 reduction corresponds to 227.38, 573.99, and 1,045.38 feet. Differences in berth length affect the size of ships that can dock in the harbor; these differences could mean that the port would have to reject any boat too long to reasonably dock, so this particular value loss may have a larger impact than the value loss for the objective with the largest weight. The attribute with the third largest weight is the number of cranes, which represents the objective maximize cargo handling performance. The real-unit equivalent of the value loss at 1/3, 1/2, and 2/3 reduction corresponds to 0.57, 1.45, and 2.64 cranes. The real-unit equivalents of the 1/2 and 2/3 reduction value losses represent differences in whole units of cranes, and an extra crane may lead to additional operational/maintenance and/or labor costs for the port. Since this decision model is used to pick the top choice, convergence value is not a relevant measure for this application.

Table 3.4: Value loss real-unit equivalence for the port selection application (linearized real-unit difference corresponding to the value loss at 1/3, 1/2, and 2/3 model reduction)

Objective / Attribute (attribute range): 1/3, 1/2, 2/3
Maximize potential of bigger ships / Depth, feet (19): 0.16, 0.39, 0.72
Maximize vessel arrival / No. of berths (21): 0.17, 0.43, 0.79
Maximize berth capacity / Berth length, feet (27,729): 227.38, 573.99, 1,045.38
Maximize cargo handling performance / No. of cranes (70): 0.57, 1.45, 2.64
Maximize port capacity / Port capacity, acres (1,608): 13.19, 33.29, 60.62
Maximize refrigerated containers capacity / No. of refrigerated slots (3,414): 28, 70.67, 128.71
Maximize market/cargo volume / Container traffic, TEUs (5,874,366): 48,169.8, 121,599.38, 221,463.6
Maximize ship calls / No. of ship calls (1,099): 9.01, 22.75, 41.43
Maximize container handling capacity / Maximum ship capacity call, TEUs (16,877): 138.39, 349.35, 636.26
Maximize shipping lines serviced / No. of shipping lines calling at terminal (43): 0.35, 0.89, 1.62
Maximize intermodal service / % of intermodalism used for shipments in state (15): 0.12, 0.31, 0.57
Maximize rail connectivity / No. of class 1 railroads (5): 0.04, 0.1, 0.19
Minimize landside congestion / Landside annual traffic delay, hours (199,173): 1,633.22, 4,122.88, 7,508.82
Maximize environmental protection / No. of environmental protection policies (8): 0.07, 0.17, 0.3
Minimize severe weather / Severe weather data inventory, no. (5): 0.04, 0.1, 0.19
Minimize disasters situation / Billion-dollar weather and climate disaster events, no. (56): 0.46, 1.16, 2.11
Minimize tornadoes presence / Average annual no. of tornadoes (144): 1.18, 2.98, 5.43
Minimize precipitation condition / Precipitation ranks (70): 0.57, 1.45, 2.64

3.2.2 Aggregated Results

From the eight applications, a total of 16 decision models were analyzed, as the health care treatment and historical buildings restoration applications each involve five decision makers. The tables of all values used in the graphs are included in Appendix A.

Hit Rate

Figure 3.1 shows the hit rate of each application at every reduction level. Hit rate generally decreases as more attributes are taken out of the model, with the exception of decision maker 2 in the health care treatment case. In most but not all cases, the drop in hit rate from 1/3 to 1/2 reduction is less severe than the drop from 1/2 to 2/3.
The variation is not solely due to differences in the decision problem, as different models for the same decision problem vary in how much reduction impacts the model parameters.

Figure 3.1: Hit rate for each application at every reduction level

Value Loss

Figure 3.2 shows the value loss of each application at every reduction level. In contrast to hit rate, value loss increases as more attributes are taken out, because it is a measure of deviation from the top choice of the original model. For the most part, value loss shows an inverse relationship with hit rate, although it is possible for models with similar hit rates to have differing levels of value loss; this is because value loss depends on how the full model evaluates the attribute values. The value loss increase from 1/2 to 2/3 reduction appears higher than the increase from 1/3 to 1/2, which is consistent with the pattern of hit rate decrease. Note that the value loss for the climate change policy case is notably higher than for the other cases, which may be due to the competitiveness of the alternatives in addition to the more spread-out distribution of alternative values.

Figure 3.2: Value loss for each application at every reduction level

Mean Convergence

Figure 3.3 shows the mean convergence value of each application at every reduction level. The analysis at each level of model reduction produces a distribution of convergence values. Mean convergence is the most sensitive to changes in rank order variation, especially in smaller sets of alternatives where there are limited combinations of rank orders. Mean convergence values express a more consistent pattern of decrease between each level of model reduction.

Figure 3.3: Mean convergence value for each application at every reduction level

3.3 Discussion

The analysis of this wide variety of applications shows that the effects of missing objectives may not be as simple as previous literature has suggested. Several patterns in the results challenge findings from previous literature. Some considerations that arise from the results are as follows:

(1) The characteristics of the set of alternatives are just as important when measuring the impact of missing objectives. While missing objectives do generally lead to different results in multiattribute models, there is much variability in how severe this impact is on the model. One factor that is apparent in decision problems, particularly when observing problems with the same number of objectives in the model (meaning that they experience the same level of reduction), is the characteristics of the alternatives considered. Based on what has been observed in the case evaluations, here are some characteristics that may lead to more pronounced effects of missing objectives:

a. Competitive set of alternatives. Competitiveness of alternatives represents how close the values of the alternatives are to each other, such that the smallest variation in the model may lead to a change in the rank order of values. This is not necessarily represented by the attribute value range, as a large value range may be due to an alternative with an outlier value whose relative position with respect to the other alternatives does not shift easily even at a high level of reduction.

b. Large set of alternatives. Following the competitiveness of alternatives, the size of the set of alternatives also affects reduced model performance to some extent. This characteristic particularly affects the measure of convergence value, since a longer list of alternatives leads to more combinations of alternative rank orders. However, if the alternative set contains dominated alternatives, model performance is not adversely affected at all.
(2) The relationships between objectives/attribute values may matter more than the number of objectives themselves. As outlined in the results section, there are cases where a large set of objectives does not necessarily lead to a large reduction in model performance. The application with the largest set of objectives (De Icaza, Parnell, & Pohl, 2019) still shows a relatively high hit rate compared to other cases with smaller numbers of objectives, while some problems with a small number of objectives have worse parameter values (Ferretti, Bottero, & Mondini, 2014). Other than the characteristics of the set of alternatives considered, one possible factor related to the set of objectives in the decision problem is the relationship between attribute values. As discussed in previous studies (Aschenbrenner, 1977; Fasolo et al., 2007), reduced models with more conflicting objectives, i.e., negative correlations between attribute values, are more sensitive to the omission of objectives.

(3) The magnitude of the impact of missing objectives may vary dramatically between models for the same decision problem. As demonstrated in the applications with multiple decision makers, the extent of parameter change varies across decision makers. Some models still perform close to the full model even after a substantial amount of reduction, while others deviate dramatically from the full model even with a small level of reduction. Variation of performance across models occurs even in rank-order-based decisions, where the difference between models is merely the arrangement of the weights while the set of weight values stays constant. This may be related to the previous factor of relationships between objectives, where some value conflicts are more emphasized depending on the rank ordering of objectives. It may be the case that objectives with larger weights happen to be positively correlated with each other and thus are less detrimental to the model when removed.
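A minimal, hypothetical two-attribute example illustrates the role of conflicting objectives described in point (2): when attributes are perfectly in conflict, any single-attribute reduced model ignores the tradeoff entirely and can flip the top choice, whereas redundant (positively correlated) attributes leave the winner unchanged. All numbers below are invented for illustration.

```python
def best(values, weights, keep):
    """Index of the top alternative using only the kept attributes (weights renormalized)."""
    total_w = sum(weights[j] for j in keep)
    scores = [sum(weights[j] / total_w * alt[j] for j in keep) for alt in values]
    return max(range(len(values)), key=scores.__getitem__)

weights = [0.5, 0.5]
conflicting = [[1.0, 0.0], [0.0, 0.9]]   # negatively correlated attribute values
redundant = [[1.0, 0.9], [0.0, 0.1]]     # positively correlated attribute values

# Both full models pick alternative 0 (value 0.50 vs 0.45, and 0.95 vs 0.05).
assert best(conflicting, weights, (0, 1)) == 0
assert best(redundant, weights, (0, 1)) == 0

# In the conflicting matrix, the winner depends entirely on which attribute survives:
single_attr_winners = [best(conflicting, weights, (j,)) for j in range(2)]  # [0, 1]

# The redundant matrix is insensitive to the same omission:
assert [best(redundant, weights, (j,)) for j in range(2)] == [0, 0]
```

Dropping attribute 1 from the conflicting matrix keeps alternative 0 on top, but dropping attribute 0 flips the choice to alternative 1; the redundant matrix returns the same winner either way.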
It should be noted that all the factors above work in conjunction with each other; thus, identifying the exact factor that causes differences in performance is a challenging task. The effect of model reduction in a decision problem is highly dependent on the interaction between the properties of the decision model, the set of objectives, and the set of alternatives. The uniqueness of each decision problem makes it difficult to provide a definitive answer on how big a problem missing objectives in a multiattribute model are. However, it is important to note that missing objectives do not necessarily lead to disastrous consequences.

This study presented several scenarios in which the omission of objectives may affect decision problems evaluated through multiattribute analysis, from large-scale policy decisions to personal decision making. The analyses of the various studies provide insight into how different magnitudes of objective omission may meaningfully affect decision outcomes at different levels of model reduction. One possible direction for future research on this topic is a more systematic approach to selecting multiattribute cases to evaluate. This may be done through a stricter selection of cases bounded by the nature of the decision problem, such as the number of objectives and alternatives. Another possible approach is through simulations of decision problems. Simulations provide a level of control for specifying the characteristics of a decision problem while systematically varying a specific factor.

Chapter 4: Study II: Monte Carlo simulation of factors influencing the cost of omitted attributes

Following the analysis of existing multiattribute data in Study I, a simulation further generalizes the findings by investigating the robustness of the effects of certain conditions through a wide variety of cases with adjustable parameters.
Simulations provide a level of control in specifying the type and magnitude of variation in a case that is not present in studies of existing cases. Simulation approaches are commonly used to compare the decision quality of various weighting models (Barron, 1988; Barron & Barrett, 1996; Jia et al., 1998; Butler et al., 2006; Ahn & Park, 2008; Durbach & Stewart, 2009), and have been applied in a limited way in previous missing-attribute studies (Barron & Barrett, 1999; Fasolo et al., 2007). Using Monte Carlo simulation to study the effects of missing objectives is advantageous in that various parameters of a decision model can be controlled and value matrices simulated, yielding a distribution of potential outcomes that describes the range of consequences of omitting objectives. Additionally, in contrast to previous studies where missing attributes were systematically excluded based on the smallest weights, this study uses repeated random sampling to select the attributes that make up the reduced models, giving a distribution of possible outcomes based on the proportion of objectives that are omitted.

4.1 Methods

4.1.1 Simulation procedures

The general method of simulation studies in the context of multiattribute models involves generating a matrix of choice alternative values and a baseline ("true") model, then comparing the performance of a test model against the baseline model. In this study, the baseline model will be
referred to as the full model, which uses all relevant attributes in analyzing the decision problem. The test model will be referred to as the reduced model, which omits a proportion of the attributes included in the baseline model; that is, the reduced model uses only a subset of the attributes from the full model. The simulation procedures adapted for this study are outlined as follows:

Step 1. Specify weights for the full model. The distribution of the weights needs to be defined on the interval (0,1), and the sum of the m weights must always equal 1. Therefore, the weights will be defined by a uniform distribution over a joint m-variate Dirichlet distribution. The Dirichlet distribution is a multivariate generalization of the Beta distribution in which values are bounded on the interval (0,1) and sum to 1. The Dirichlet distribution provides flexibility in determining the characteristics of a distribution of values by changing a single parameter for each component, making it attractive to use in a multiattribute context (Jia et al., 1998; Wang & Bier, 2011). This includes other commonly used weighting methods: equal weights and rank order centroid weights.

Step 2. Specify choice alternatives. After specifying a model with m attributes, a value matrix will be generated to represent the set of alternatives considered in the decision problem. An m x n value matrix will be generated for each model, where n is the number of alternatives considered. Attribute values are normalized such that 0 is the worst (minimum) value and 1 is the best (maximum) value. These values will be derived from a correlation matrix so that the relationships among attribute values and the level of tradeoff can be specified.

Step 3. Create reduced models. A reduced model is obtained by selecting a subset of m - ℓ objectives, where ℓ represents the number of objectives omitted from the full model. The magnitude of model reduction follows from the rate of objective self-generation observed in the Bond et al. studies, where decision makers generate 30-50% of their final list of objectives. Thus, the reduced test models will omit 1/3, 1/2, or 2/3 of the objectives included in the full model. The subset of weights included in the reduced model will be renormalized such that they sum to 1. Another finding from Bond et al.
suggests that omitted objectives were not necessarily the lowest ranked ones; therefore, to account for all possible combinations of omitted attributes, (m choose m − ℓ) models will be constructed for each simulation level. For example, in a simulation scenario with 6 attributes in the full model and a reduction level of 1/3 (giving a reduced model that includes 4 attributes), (6 choose 4) = 15 possible reduced models with 4 attributes will be obtained.

Step 4. Compute the preferred alternative of the reduced model and compare it to the full model

The reduced model is then used to analyze the set of choice alternatives, yielding the overall value for each alternative, which determines the best alternative as well as a rank order of the remaining alternatives. This result will then be compared with the results from the full model.

4.1.2 Characteristics of decision space

The simulations will be conducted for the 12 combinations of decision contexts defined by the number of attributes (m = 4, 6, 8) and the number of alternatives (n = 5, 10, 15, 20). For each of the 12 decision contexts (also referred to as cases), 1000 iterations are simulated by generating 100 sets of full and reduced models for each of 10 generated value matrices. The three performance metrics are collected from each set of full and reduced models.

Table 4.2 outlines all the factors that are varied in each case of the analysis. Since attributes are used to define objectives, I will use the term attribute to mean both the objective and its definition (operationalization). The five factors with different levels amount to 324 cases. The total number of models generated across all cases is 324,000.
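The four simulation steps outlined above can be sketched in code. The actual simulations were implemented in R; the following is a minimal Python sketch under the same design (function and variable names are my own) that samples full-model weights uniformly over the simplex, generates a value matrix, enumerates every possible reduced model, and tallies how often the reduced model's top choice matches the full model's.

```python
import itertools
import random

def simulate_case(m=6, n=10, omit=2, seed=0):
    """One simulation iteration: build a full model, then score every
    possible reduced model that drops `omit` of its m attributes."""
    rng = random.Random(seed)
    # Step 1: full-model weights, uniform over the simplex. Normalized
    # draws from Gamma(1, 1) are equivalent to a symmetric Dirichlet.
    g = [rng.gammavariate(1.0, 1.0) for _ in range(m)]
    w = [x / sum(g) for x in g]
    # Step 2: m x n value matrix with attribute values in [0, 1].
    v = [[rng.random() for _ in range(n)] for _ in range(m)]

    def top_choice(weights, attrs):
        scores = [sum(wt * v[a][j] for wt, a in zip(weights, attrs))
                  for j in range(n)]
        return scores.index(max(scores))

    full_top = top_choice(w, range(m))
    # Step 3: every subset of m - omit attributes is one reduced model.
    subsets = list(itertools.combinations(range(m), m - omit))
    hits = 0
    for keep in subsets:
        total = sum(w[a] for a in keep)
        w_red = [w[a] / total for a in keep]  # renormalize to sum to 1
        # Step 4: does the reduced model pick the same top alternative?
        hits += top_choice(w_red, keep) == full_top
    return len(subsets), hits / len(subsets)

n_models, hit_rate = simulate_case(m=6, n=10, omit=2)
print(n_models)  # 15, matching (6 choose 4)
```

In the full study this inner loop sits inside further loops over the 324 cases, the 10 value matrices, and the 100 weight sets per matrix.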
Table 4.2: Simulation Design

Factor | Levels
Objective reduction level | 1/3, 1/2, 2/3
Number of alternatives | 5, 10, 15, 20
Number of objectives | 4, 6, 8
Attribute intercorrelation | Positive, Negative, Noncorrelated
Attribute weighting method | Ratio, Rank Order Centroid (ROC), Equal weights

In addition to the variation in the size (the number of objectives and alternatives) of the value matrices, the simulation will also be used to measure the effects of the omission of attributes on decision problems with specific characteristics, which have been briefly explored in previous studies.

Attribute value intercorrelation

Simulation makes it possible to control the correlational structure between attribute pairs as reflected in the correlation matrix between attributes. If most of the correlations between attributes are positive, then there is little need for tradeoffs due to the redundancy between attributes. This correlational structure is also more likely to produce dominated alternatives. If there are negative correlations, then more tradeoff is required to determine the best alternative (Keeney & Raiffa, 1976, p. 66). It is also possible for the attributes to be mostly independent of each other, where the correlations are very weak and/or around 0.

Generating a correlation matrix is the first step in constructing the attribute value matrix. An m × m correlation matrix (where m is the number of attributes) can be randomly generated with values within the interval (-1,1). Additional constraints are applied in the generation of correlation values depending on the condition:

1) Positively correlated (low tradeoff): The values in the correlation matrix are constrained to be between weak (r = .3) and very high (r = 1) positive correlation.

2) Negatively correlated (high tradeoff): The values in the correlation matrix will contain at least one very high negative correlation.
There is a larger consideration in generating correlation values for the high tradeoff condition, because it is more difficult to generate a correlation matrix with negative values that fulfills the requirement that correlation matrices be positive definite. However, the idea of negatively correlated attributes is less a mathematical requirement than a reflection of the presence of high tradeoff between attributes. Recall that there are at least two main conflicting values, cost and benefit. Thus, a high conflict value matrix would be a value matrix that contains both high negative and high positive correlations. The parameter set for this condition is that the correlation matrix must contain at least two moderately negative or stronger correlations (r < -0.5).

3) Non-correlated (independent): The values in the correlation matrix are constrained to be between weak negative (r = -.3) and weak positive (r = .3) correlations.

Attribute model weighting methods for reduced models

While the weighting for the baseline full model will be ratio weights predetermined from the Dirichlet distribution, both ratio weights and approximate weights will be used for the construction of the reduced models. The methods differ in how the renormalized weights for the reduced set are obtained.

For ratio weights, reduced model weights are simply obtained by renormalizing the sampled weight set such that it sums to 1. For example, say the weight set of a full model with m attributes is denoted by the set W = {w1, w2, …, wm}. The set of weights of a reduced model derived from that model, which omits ℓ attributes, would be some set W′ = {w′1, w′2, …, w′m−ℓ} where W′ ⊂ W.
Normalizing the set of weights for the reduced model consists of dividing each weight by the sum of W′.

For rank order centroid (ROC) weights, the normalized weights for the reduced model are obtained by ranking the attribute weights of the reduced model from largest to smallest and computing the ROC weights based on the ranking. For example, a set of weights {.15, .1, .3, .05} obtained from the full model will be normalized to {.27, .15, .52, .06} in the reduced model.

For equal weights, all the weights in the reduced model will be 1/(m − ℓ), where m − ℓ is the number of remaining attributes, regardless of their magnitudes in the full model.

The advantage of Monte Carlo methods is that they account for a wide range of decision contexts, including variables that may be derived from different statistical distributions. There is a wide variety of distribution shapes across combinations of categories, including a fair number of skewed distributions. This presents a challenge in making interpretations based on summary statistics, so I will observe the aggregated distribution of parameter values across various cases. I will use the median as the summary measure to represent the typical value in each combination of decision characteristics. Observing the median value should be treated as insight into the typical parameter output, which allows for a detailed look at the pattern of value changes across conditions. However, it is important to keep in mind that the simulated data involve values that are sampled from a decision space constrained by the specified characteristics. Thus, the interpretation of the data should consider the larger pattern across all conditions, instead of focusing on specific value differences.

4.2 Simulation Results

The simulation was conducted with R, an open-source statistical software (R Core Team, 2019), computed on the University of Southern California Center for Advanced Research Computing's high-performance machines.
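The three reduced-model weighting schemes described in the preceding section can be sketched in code. The simulation itself was written in R; the following Python sketch (function names are my own) reproduces the worked ROC example from the text.

```python
def ratio_weights(w):
    """Renormalize the surviving ratio weights so they sum to 1."""
    total = sum(w)
    return [x / total for x in w]

def roc_weights(w):
    """Re-rank the surviving weights and assign rank order centroid
    weights: the attribute ranked k out of K receives
    (1/K) * (1/k + 1/(k+1) + ... + 1/K)."""
    K = len(w)
    order = sorted(range(K), key=lambda i: -w[i])  # rank 1 = largest weight
    rank = {i: r + 1 for r, i in enumerate(order)}
    return [sum(1.0 / j for j in range(rank[i], K + 1)) / K
            for i in range(K)]

def equal_weights(w):
    """Ignore magnitudes entirely: each surviving attribute gets 1/(m - l)."""
    return [1.0 / len(w)] * len(w)

# The worked example from the text: full-model weights {.15, .1, .3, .05}
# become ROC weights {.27, .15, .52, .06} in the reduced model.
print([round(x, 2) for x in roc_weights([0.15, 0.10, 0.30, 0.05])])
# [0.27, 0.15, 0.52, 0.06]
```

Note that ratio and ROC weights both preserve the rank order of the surviving attributes; equal weights discard that information, which is what drives their weaker performance in the results below.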
The following R packages were used in the simulation process: MASS (Version 7.3-51.3; Venables & Ripley, 2002), gtools (Version 3.8.1; Warnes, Bolker & Lumley, 2018), clusterGeneration (Version 1.3.5; Qiu & Joe, 2020), reshape2 (Version 1.4.3; Wickham, 2007), ggplot2 (Version 3.3.2; Wickham, 2016), and ggridges (Version 0.5.2; Wilke, 2020).

4.2.1 Number of objectives x Attribute intercorrelation

Figure 4.1 shows the median value for each performance metric based on the number of objectives and the type of attribute intercorrelation in the decision space. The median values of each performance measure for decision problems with mostly positive attribute intercorrelation indicate better performance than for decision problems that contain non-correlated or negatively correlated attributes. Non-correlated and negatively correlated cases have similar performance across all conditions. The hit rates for cases with positive attribute intercorrelation are vastly higher than for the other cases. The value losses for cases with positive attribute intercorrelation are lower than for the other cases. The mean convergence values for cases with positive attribute intercorrelation are higher than for the other cases.

A higher number of objectives tends to correspond to better performance, especially at higher levels of reduction. The improvement from four to six objectives appears to be more dramatic than the improvement from six to eight objectives. Reduced model performance in decision spaces with non-correlated and negatively correlated attributes tends to be similar at higher numbers of objectives. However, in cases with a lower number of objectives, performance tends to be slightly higher with negatively correlated attributes than with non-correlated attributes for hit rate and value loss, while the opposite is true for mean convergence.
Figure 4.1: Median hit rate, value loss, and mean convergence values based on attribute intercorrelation and number of objectives

Figures 4.2a, 4.2b, and 4.2c show the distributions of hit rate, value loss, and mean convergence for every combination of number of attributes and attribute intercorrelation in a 3 by 3 graphical facet, broken down by level of reduction within each facet. The grouping on the x axis represents the number of objectives, while the grouping on the y axis represents the type of attribute intercorrelation.

The value distributions for cases with the lowest number of objectives (four) display distinct distribution shapes with multiple sharp peaks for hit rate. This is an artefact of the limited number of possible values from the existing combination of models. For instance, a full model of 4 objectives at the 1/3, 1/2, and 2/3 levels of reduction will have 4, 6, and 4 possible reduced models, respectively. The possible hit rate values for cases with four possible reduced models are then 0, 1/4, 2/4, 3/4, or 1, resulting in a value distribution with 5 peaks. Similarly, for cases with six reduced models, there are seven possible hit rate values. Essentially, the cases with four reduced models have a somewhat discrete distribution of hit rate values.

In the distributions for hit rate, an increase in the level of reduction generally results in a decrease of higher values, i.e., the peak of the distribution is pushed left. The cases with positive correlations have large concentrations at the highest values of hit rate, in other words, a left skewed distribution that gradually spreads out as the reduction level increases. In contrast, cases where attributes are mostly non-correlated or contain negative correlations show a much less skewed distribution: at the initial level of 1/3 reduction the distribution has a roughly even shape that becomes more right skewed as more attributes are taken out.
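The counting argument behind those discrete peaks can be checked directly with a short Python sketch:

```python
from math import comb

# A full model with m = 4 objectives at the 1/3, 1/2, and 2/3 reduction
# levels keeps 3, 2, and 1 objectives, giving these counts of reduced models:
counts = [comb(4, keep) for keep in (3, 2, 1)]
print(counts)  # [4, 6, 4]

# With k reduced models per iteration, hit rate can only take the k + 1
# values 0/k, 1/k, ..., k/k, hence the five peaks when k = 4:
hit_rate_values = [i / 4 for i in range(5)]
print(hit_rate_values)  # [0.0, 0.25, 0.5, 0.75, 1.0]
```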
Value loss and mean convergence display similar patterns in regard to how the variation in factors affects the shape of the distribution. Their distributions tend to concentrate at the optimal values (zero for value loss and one for mean convergence), with each level of reduction shifting the peak of the distribution away from the optimal value. Decision problems with mostly non-correlated attributes or with negative correlations both perform worse than cases with mostly positive correlations. As demonstrated by the median values in Figure 4.1, the distributions of parameter values in the non-correlated and negatively correlated cases are quite similar, with the minor exception of mean convergence, where negative attribute intercorrelation shows a somewhat bimodal distribution of values at smaller numbers of objectives; these differences diminish as the number of objectives increases.

Figure 4.2a: Hit rate distributions by number of objectives, attribute intercorrelation, and attribute reduction level

Figure 4.2b: Value loss distribution based on number of objectives and attribute intercorrelation type

Figure 4.2c: Mean convergence value distribution based on number of objectives and attribute intercorrelation type

4.2.2 Number of objectives x number of alternatives

Figure 4.3 shows the median value for each performance metric based on the number of objectives and the number of alternatives in the decision space. Hit rate and value loss suggest that a lower number of alternatives corresponds to better performance. Variation in the number of objectives does not show a definitive pattern for hit rate, while a higher number of objectives decreases value loss at higher levels of reduction.
Mean convergence appears to be least affected by variation in the number of alternatives; if anything, it suggests that a higher number of alternatives corresponds to slightly better performance, which is the opposite of what hit rate and value loss suggest. Similar to value loss, a higher number of objectives corresponds to better performance at higher levels of reduction.

Figure 4.3: Median hit rate, value loss, and mean convergence values based on number of alternatives and number of objectives

Figures 4.4a, 4.4b, and 4.4c show the hit rate, value loss, and mean convergence for every combination of number of objectives and number of alternatives in a 4 by 3 graph facet, broken down by level of reduction within each facet. The grouping on the x axis represents the number of objectives, while the grouping on the y axis represents the number of alternatives.

The concentration of hit rate values tends to shift toward lower values as the number of alternatives increases. The parameter value distribution across the different numbers of alternatives does not seem to vary dramatically for value loss. The level of reduction appears to affect each case combination in a similar fashion, where performance parameters become less concentrated near the optimal value.

The number of alternatives appears to affect mean convergence more than the other performance measures, which would be expected considering that the convergence value considers the entire set of alternatives instead of just focusing on the top choice. A distinct pattern in the differences due to the number of objectives and/or alternatives in mean convergence is the emergence of a bimodal distribution of values as the number of objectives/alternatives increases. Overall, the differences in performance due to the number of objectives/alternatives are relatively minor, especially at higher levels of reduction.
Figure 4.4a: Hit rate value distribution based on number of objectives and number of alternatives

Figure 4.4b: Value loss distribution based on number of objectives and number of alternatives

Figure 4.4c: Mean convergence value distribution based on number of objectives and number of alternatives

4.2.3 Weighting method x Attribute intercorrelation

Ratio, ROC, and equal weights have relatively similar performance in conditions with positively correlated attributes across all indicators. In the non-correlated and negatively correlated attribute conditions, equal weights result in significantly lower performance compared to ratio and ROC weights. In general, ratio and ROC weights appear to perform similarly across all conditions by all indicators. At higher levels of reduction, the gap between the performance of ROC/ratio weights and equal weights tends to narrow.

Figure 4.5: Median hit rate, value loss, and mean convergence values based on weighting method and attribute intercorrelation

Figures 4.6a, 4.6b, and 4.6c show the hit rate, value loss, and mean convergence for every combination of weighting method and attribute intercorrelation in a 3 by 3 graph facet, broken down by level of reduction within each facet. The grouping on the x axis represents the attribute correlational structure, while the grouping on the y axis represents the different weighting methods.

There is relatively little difference in distribution among the different weight types with regard to hit rate. Other parameters show slight variations in certain conditions. The distributions of value loss and mean convergence for models with equal weights show more variation in parameter values, in addition to overall lower performance compared to ratio and rank order weights, when the attribute intercorrelations are low or contain negative correlations. These differences seem to disappear at higher levels of reduction.
Figure 4.6a: Hit rate value distribution based on attribute intercorrelation and weighting method

Figure 4.6b: Value loss distribution based on attribute intercorrelation and weighting method

Figure 4.6c: Mean convergence value distribution based on attribute intercorrelation and weighting method

4.3 Discussion

The simulation provides results that help to fill the gap in the previous literature, as well as new insights from the additional factors included in the simulation. In general, the greater the proportion of objectives that are omitted, the more likely a reduced model leads to a different outcome from the full model across all cases. However, the extent to which the omission of objectives affects reduced model performance depends on other properties of the decision context. This variation is related to certain characteristics of the decision problem. It is important to examine these characteristics, as they are foundational to constructing multiattribute models, as well as being the main components of the analysis itself.

The most apparent factor that causes differences in model performance is the nature of the attribute intercorrelations within the consequence matrix. Decision problems where most attribute values are positively correlated are less affected by missing objectives than decision problems where the attribute values are mostly non-correlated or contain significant negative correlations among attributes. Decision problems that require more tradeoff calculation are more sensitive to changes in the model composition. The exclusion of a negatively correlated objective essentially eliminates the need for tradeoffs, which is often the most critical part of examining a decision problem from a multiattribute perspective. On the flip side, decision problems that have mostly positively correlated objectives are not as affected by model reduction.
This decision characteristic is more likely to result in dominated alternatives, in which an alternative is bested by other alternatives on every attribute, such that there is no situation in which the dominated alternative can be the top choice. In this case, the top choice or rank order of alternatives from reduced models is less likely to change compared to the full model.

The nature of attribute correlation is one of the factors that has been explored in the literature, and these results are consistent with previous findings (Aschenbrenner, 1977; Fasolo et al., 2007). In addition, the results of this simulation provide additional insight into how a decision problem with a set of objectives that are not correlated is affected by model reduction. The performance for uncorrelated attributes tends to be closer to the performance of models with negatively correlated attributes than to that of models with positively correlated attributes. In practice, this result encourages decision makers to be more aware of the relationships between their objectives, not only when they are conflicting, but also when they may not be correlated with each other.

A smaller number of objectives in a multiattribute model corresponds to a higher spread of performance values across the distribution. This spread indicates a higher likelihood of lower performing models compared to models with more objectives. The effect of the number of alternatives does not appear to be as clear cut. The distributions for hit rate and value loss suggest that a smaller number of alternatives leads to better performance of the reduced model, while convergence values suggest that performance is better when there are more alternatives.

With the exception of conditions with mostly positive attribute intercorrelations, models with equal weights appear to be more affected by missing objectives than models using the ratio or ROC method, whose performances are relatively close to each other.
Equal weighting only registers whether an objective is relevant to the model or not, treating all attribute ranges from worst to best as equivalent in value. Equal weights are therefore more prone to measurement error due to the lack of information regarding the relative importance of the objectives. As the results demonstrate, this error is less of a problem for decision problems with mostly positive attribute intercorrelation, which is often not the case in practice when analyzing a decision problem.

Compared to existing missing attribute studies, a key difference in how the simulations are performed in this study is that they take into account every possible combination of missing attributes, not just the ones with the smallest weights. This way of eliminating weights is more consistent with the empirical evidence by Bond, Carlson, and Keeney (2008, 2010) that omitted objectives are not always the ones judged to be less relevant. Accounting for all possible combinations allows consideration of a range of possible models, and the distribution of the range of values is similar for ratio and ROC weights. Keep in mind that this similarity in performance pertains only to the possible missing information due to missing objectives.

In conclusion, the most apparent impact of model reduction depends on the correlations between objectives/attributes. Since decision problems generally involve negative attribute intercorrelation, decision makers should focus on identifying the ways in which the attribute values that define objectives relate to each other in order to minimize the impact of possible missing objectives. When it comes to the number of objectives and alternatives considered in the decision problem, having a larger set of objectives and/or alternatives may mitigate this effect to a limited extent.
In terms of weighting method, having information on the relative importance of each objective by using ratio or ROC weights vastly lessens the impact of missing objectives compared to using equal weights, except when the objectives are mostly positively correlated. That situation is unlikely in decision problems that necessitate multiattribute analysis, because mostly positive correlations tend to produce dominated alternatives that are outperformed by other alternatives in every aspect. Overall, this study demonstrates how missing objectives/attributes affect multiattribute decision models at various levels of complexity. Therefore, the next step in exploring the effects of missing objectives is to involve decision makers in evaluating a decision problem and to compare the results of their full model and the reduced model resulting from missing objectives. This study is presented in the next chapter.

Chapter 5: Study III: Behavioral study of cost of omitting objectives

However sophisticated a decision-making study using existing or simulated data may be, any attempt to investigate a phenomenon observed in decision making processes would be incomplete without an empirical behavioral study. For this reason, the final component of this proposal is a replication and expansion of the studies by Bond et al. (2008, 2010). Replicability is central to validating results from behavioral experiments; therefore, replicating the experiment is the first step in the behavioral component of this proposed study. This study also goes further than the original studies by estimating the cost of omitted objectives: multiattribute models are constructed for each participant based on their self-identified objectives using the same paradigm from Bond et al. described above.
Consequences of the omitted objectives can be quantitatively examined by comparing choices made using the self-generated list of objectives with choices made using the complete list, given a set of alternative values.

5.1 Methods

5.1.1 Participants

Participants were recruited from Prolific Academic, an online survey recruitment platform. Participants were required to be over the age of 18, fluent in English, and residents of the United States. They received $5 for completing the survey. In total, responses were collected from 61 participants (female = 40, male = 21, mean age = 30.87, SD = 11.5). The completion time for the survey ranged from 5.5 to 24 minutes, with an average of 9 minutes and 53 seconds.

5.1.2 Procedure

5.1.2.1 Survey

This study uses the experimental framework introduced by Bond et al. (2008, 2010). The general procedure involves instructing decision makers to generate objectives for a relevant decision problem, then giving them a master list of objectives from which they can choose the relevant objectives. Decision makers are then asked to compare and merge the two lists, creating a finalized objective list, and then to rate the importance of the finalized list by assigning a rank ordering. The decision problems used in the initial studies were customized to the specific population of the sample, such as prospective MBA students generating objectives for programs they would be interested in applying for, or PhD students generating objectives for choosing a dissertation topic. The question for this study is determining objectives for choosing a city in which to live. This question was chosen in consideration of the more general population sampled for the study.
Even if there is no immediate need to relocate, people often have personal values and lifestyle choices that can be reflected in the objectives they choose in imagining an ideal place to live. A general step by step outline of the survey format is shown in Table 5.1. More detailed procedures and survey materials are included in appendix C.

Table 5.1: Step by step procedure of study III

Step 1: Prompt to generate as many relevant objectives as they can (self-generated objective 1, self-generated objective 2, ...).
Step 2: DMs see the master list and check all objectives that are relevant (master list objective 1, master list objective 2, master list objective 3, ...).
Step 3: DMs map objectives from Step 1 to the master list. Checked items that map back are self-generated objectives; all others are recognized.
Step 4: DMs rate the importance of their checked objectives by assigning each a rank (e.g., self-generated objective 1: rank 2; master list objective 1: rank 3; master list objective 3: rank 1).
Step 5: DMs directly choose and rank their top choices from the list of alternatives (e.g., Alternative 2: rank 1; Alternative 3: rank 2; Alternative 1: rank 3).

The procedures for the first step of the study are as follows: participants are prompted to list objectives for choosing an ideal city to live in. In this step, participants generate their objectives independently without any aid or cue. There are 18 entry boxes available for their responses. Participants can proceed to the next step of the survey after three minutes have elapsed. The objectives obtained in this step are referred to as the self-generated objectives.

In the second step, participants are given a master list of objectives and are asked to check the objectives relevant to the previous question.
The contents of the master list were sourced from the literature on urban desirability, measures used to form the rankings in the Places Rated Almanac (Boyer & Savageau, 1985), and various websites compiling best places to live, such as bestplaces.net (2021) and livability.com (2021). The objectives obtained from this step are referred to as the recognized objectives. Table 5.2 shows the entries in the master list included in the survey.

Table 5.2: Master list of objectives in step 2 of study III

1. Low cost of living
2. Low rent prices
3. Low house prices
4. More distance to family
5. Less distance to family
6. More sunny days
7. Less sunny days
8. More snowy days
9. Less snowy days
10. Low taxes
11. Low unemployment rate
12. Low crime rate
13. More comprehensive public transport
14. Less commute time
15. More walkability
16. More biker friendly infrastructure
17. More older residents
18. More younger residents
19. More diversity in racial demographics
20. More night life options
21. More available dating pool
22. More access to coasts
23. More options for outdoor activity
24. Higher quality of K-12 schools
25. Higher quality of higher education institutions (university, college, etc.)
26. More access to religious communities
27. More Democratic party voters
28. More Republican party voters
29. More access to quality health care facilities
30. High population density
31. Low population density

In the third step, participants are presented with the lists of both the self-generated objectives (obtained from step one) and the recognized objectives (obtained from step two). They are then asked to identify self-generated objectives that are similar to or overlap with their recognized objectives, i.e., self-generated objectives that map to the recognized objectives.

In the fourth step, participants are once again presented with both their self-generated and recognized objectives.
In this step, they are asked to create a final list by choosing from the combined set of objectives, assigning a rank order of importance, and discarding the objectives that are redundant or irrelevant to their final list. An additional step in this study is a direct judgment of alternatives: participants are presented with a list of ten cities in the United States and asked to rank their top five cities based on their preference. The cities presented in this step are the alternatives used in the consequence table for the multiattribute analysis.

5.1.2.2 Multiattribute modeling

An element of this study that expands on the original study is a multiattribute analysis constructed from the participants' lists of objectives, comparing the results of the model consisting of the self-generated (reduced) list of objectives and the model consisting of the finalized (full) list.

Two multiattribute models are constructed for each of the 61 participants: a reduced model consisting of only the mapped self-generated objectives and a full model that incorporates both the mapped self-generated objectives and the recognized objectives. The rank order information obtained from the final step is used to derive weights for the model using the rank order centroid (ROC) method; the simulation study in the previous chapter demonstrated that model reduction performance under ROC weights is similar to that of ratio weights.

Each self-generated objective marked by participants as mapping to the master list (obtained in step 3) is evaluated and coded according to which objective in the master list it maps to. In total, 474 self-generated objectives were mapped into 292 instances of the 31 master list objectives. An individual can have multiple self-generated objectives that map to the same master list objective and vice versa.
For the purposes of the multiattribute analysis, only objectives that map to entries in the master list are taken into consideration; idiosyncratic objectives that do not map are discarded. The set of alternatives is based on real cities located in the United States, obtained by randomly choosing ten states and selecting one city/town from each state. The cities chosen for the consequence table are (1) Boulder, CO, (2) Decatur, IL, (3) Wichita, KS, (4) Reno, NV, (5) Baton Rouge, LA, (6) Minneapolis, MN, (7) Raleigh, NC, (8) Allentown, PA, (9) Virginia Beach, VA, and (10) Olympia, WA. These cities were presented as the alternatives in the last step of the survey.

The statistics and ratings for each city that are relevant to the objectives were obtained to form the consequence table, using the datasets provided by the city rating websites Sperling's Best Places (2021), Niche (2021), and Walk Score (2021). These sources aggregate data from polling and census datasets as well as proprietary ratings of certain aspects of a place of living. The ratings/statistics are then converted into utility values bounded on [0,1] using a linear transformation. The complete list of attribute measures used and the standardized consequence table are included in appendix C.

Figure 5.1 shows the distribution of the 465 correlations among the set of 31 objectives: 29% (N = 137) are weak to strong negative correlations (-1 < r < -0.3), 42% (N = 197) are relatively non-correlated (-0.3 < r < 0.3), and 28% (N = 131) are weak to strong positive correlations (0.3 < r < 1).

Principal Component Analysis on the list of 31 objectives identified ten principal components (PC), where the most significant principal component (PC1) accounts for 39% of the variance, PC2 accounts for 19%, and PC3 accounts for 16%. In other words, three principal components account for around 74% of the variance.
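The linear transformation of raw city statistics into [0,1] utilities can be sketched as follows. This is an illustrative Python sketch with made-up numbers (function name and the direction-handling flag are my own assumptions), not the actual procedure used to build the consequence table.

```python
def to_utility(values, higher_is_better=True):
    """Linearly rescale a column of raw city statistics to [0, 1], with
    0 for the worst value and 1 for the best among the alternatives."""
    lo, hi = min(values), max(values)
    scaled = [(x - lo) / (hi - lo) for x in values]
    return scaled if higher_is_better else [1.0 - u for u in scaled]

# Hypothetical crime-rate column (lower is better) for three cities:
print(to_utility([20, 50, 35], higher_is_better=False))  # [1.0, 0.0, 0.5]
```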
The complete table of principal components is included in appendix B4.

Figure 5.1: Distribution of interattribute correlation values among the set of objectives

5.3 Results

Across the sixty-one participants, the number of self-generated objectives ranges from 3 to 15 with a mean of 7.7. For recognized objectives, participants chose an average of 9.3 objectives from the master list. The average proportion of self-generated objectives that map to the master list of objectives (obtained in step 2) is 0.65. The finalized list obtained in the last part contains 5 to 24 objectives with an average of 12.3. On average, participants included 3 self-generated objectives in their final list, an average proportion of 0.26. Six participants did not include any self-generated objectives in their final list. Among the 61 participants, 41% (N=25) have a recognized objective as their most important objective and 89% (N=54) have at least one recognized objective in their top three.

Multiattribute models

After mapping self-generated objectives to master list objectives, the average number of objectives in the reduced model is 3.6, while the average number of objectives in the full model is 10.5. The average proportion of self-generated objectives included in the full model is 0.36, slightly higher than in the overall responses. Across all participants, the reduced model produces the same top choice as the full model for 27 participants, corresponding to a hit rate of 0.44. The top choice of the reduced model was present in the top three choices of the full model for 72% (N=44) of participants. Figure 5.2 displays the distribution of participants based on the number of objectives in their full and reduced models, distinguished by whether the top choice of the full model matches the top choice of the reduced model.
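The hit rate above counts how often the reduced model's best alternative coincides with the full model's. A minimal sketch of that comparison for a single participant, using invented consequence values rather than study data:

```python
def top_choice(consequences, weights):
    """Index of the alternative with the highest weighted additive value.

    consequences[a][j] is the [0, 1] utility of alternative a on objective j.
    """
    scores = [sum(w * u for w, u in zip(weights, row)) for row in consequences]
    return scores.index(max(scores))

# Toy full model: 3 alternatives x 4 objectives.
full = [[0.9, 0.2, 0.8, 0.1],
        [0.4, 0.9, 0.3, 0.9],
        [0.6, 0.5, 0.6, 0.5]]
full_w = [0.4, 0.3, 0.2, 0.1]

# The reduced model keeps only the first two (self-generated) objectives,
# with their weights renormalized to sum to 1.
reduced = [row[:2] for row in full]
reduced_w = [0.4 / 0.7, 0.3 / 0.7]

hit = top_choice(full, full_w) == top_choice(reduced, reduced_w)
# In this toy case the reduced model picks a different top alternative,
# so this participant would not count toward the hit rate.
```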
Figure 5.2: Distribution of participant hit rate based on the number of objectives in both full and reduced models, fitted with a regression line for each hit rate category.

The range of value loss is 0 to 0.23 with a median of 0.0057 (IQR = 0 - 0.07). When value loss is standardized by dividing it by the range between the best and worst alternative, the range of value loss is 0 to 0.7 with a median of 0.03 (IQR = 0 - 0.25): the median reduced model loses 3% of the full model's value range, but a reduced model can lose up to 70%. However, value loss does not correlate with the proportion of missing objectives (ρ=0.10, p=0.43). The convergence value for participants ranges from -0.43 to 0.98 with a median of 0.68 (IQR = 0.49 - 0.85). Convergence value has a moderately positive correlation (ρ=0.41, p<0.001) with the proportion of objectives missing in the full model. Figure 5.3 shows the cumulative density for both true and standardized value loss, as well as the cumulative density for convergence values.

Figure 5.3: Cumulative density of value loss for both true and standardized value loss and cumulative density for convergence values

Comparing the relative weights of generated and recognized objectives in full models

The sum of weights for generated objectives in a participant's full model ranges from 0.09 to 0.96. Self-generated objectives make up more than half of the weights in the full model for 51% (N=31) of participants. Overall, across all participants' models, the cumulative weights of self-generated and recognized objectives are 0.51 and 0.49 respectively. The distribution of the composition of weights by objective type across participants is shown in figure 5.4.

Figure 5.4: Distribution of weight composition of full models across participants. Each bar represents a participant's full model.

Comparison with direct judgment

The results of both multiattribute models are compared to the top five cities chosen in step 5.
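The three performance measures used in this chapter can be sketched as follows. This is my reading of the definitions, not code from the study: value loss is measured on the full model's value scale, standardized value loss divides by the full model's best-worst range, and convergence is taken here to be a Spearman rank correlation between the two models' orderings of the alternatives (the measure is reported as ρ):

```python
def value_loss(full_scores, reduced_top):
    """Loss, on the full model's scale, from taking the reduced model's
    top choice instead of the full model's top choice."""
    return max(full_scores) - full_scores[reduced_top]

def standardized_value_loss(full_scores, reduced_top):
    """Value loss as a fraction of the full model's best-worst range."""
    rng = max(full_scores) - min(full_scores)
    return value_loss(full_scores, reduced_top) / rng

def convergence(full_scores, reduced_scores):
    """Spearman rank correlation between the alternative orderings
    implied by the two models (assumes no ties)."""
    n = len(full_scores)
    def ranks(scores):
        order = sorted(range(n), key=lambda i: -scores[i])
        r = [0] * n
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    d2 = sum((a - b) ** 2
             for a, b in zip(ranks(full_scores), ranks(reduced_scores)))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Toy model values for five alternatives (invented numbers):
full_scores = [0.83, 0.38, 0.33, 0.35, 0.48]
reduced_scores = [0.55, 0.60, 0.30, 0.40, 0.70]

r_top = max(range(5), key=reduced_scores.__getitem__)  # reduced model's pick
value_loss(full_scores, r_top)               # 0.83 - 0.48 = 0.35
standardized_value_loss(full_scores, r_top)  # 0.35 / 0.50 = 0.70
convergence(full_scores, reduced_scores)     # 0.70
```

A convergence of 1 means the two models order the alternatives identically; negative values, which occur for some participants, mean the orderings are inverted more often than not.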
Less than 10% of participants picked the same top choice as their multiattribute model: 5% (N=3) for full models and 8% (N=5) for reduced models. Comparing more broadly, the top choice of the full model is present among the five directly chosen cities for 50% (N=30) of participants, while the top choice of the reduced model is present in 47% (N=28) of participants' top five. A comparison of the top five rankings across the different decision models is presented in figure 5.5. The value loss of the direct judgment's top choice under the full model ranges from 0 to 0.70 with a median of 0.19 (IQR = 0.10 - 0.29); the standardized value loss ranges from 0 to 1 with a median of 0.56 (IQR = 0.30 - 0.82). The value loss of the direct judgment's top choice under the reduced model ranges from 0 to 1 with a median of 0.27 (IQR = 0.14 - 0.42); the standardized value loss ranges from 0 to 1 with a median of 0.48 (IQR = 0.29 - 0.76).

Figure 5.5: Comparison of cities based on the proportion of being within the top five according to the decision models

5.4 Discussion

The decision problem offered in this experiment has a wider scope than the previous missing objectives studies, both in the decision problem itself and in the subject population participating in the study. The results align with what has been previously observed: decision makers do often omit objectives relevant to their decision problem. The extent to which decision makers incorporate more recognized than self-generated objectives in their full model varies, but on average, the percentage of self-generated objectives included in the final model is 26%, comparable to the 30-50% recorded in previous studies (Bond et al., 2008; 2010). The analysis portion of the study reveals more insight into the consequences of missing objectives.
The relationship between the extent of objective omission and model performance is not quite straightforward. The overall hit rate across participants is below half (44%), and there is no definitive pattern in the number of self-generated objectives, relative to the full model, that makes the top results match. Full models with both low and high proportions of self-generated objectives produce top results that match or do not match, regardless of how many total objectives are in the full model itself. Likewise with value loss: a higher level of omission does not necessarily correspond to more value loss. Convergence is the only measure that shows evidence of being impacted by the proportion of missing objectives. Neither the number of objectives in the reduced/full model nor the completeness of one's set of objectives is shown to be the definitive factor in determining how close the performance of the reduced model is to that of the full model. The analysis of the weight components shows that it is not necessarily the case that participants omit less important objectives: examining full model weights by how the objectives were generated shows that only around half of participants weigh their set of self-generated objectives more heavily than recognized (i.e., forgotten) objectives in their full model. The weights aggregated across participants show a roughly even split between self-generated and recognized objectives, 0.51 and 0.49 respectively. This supports the findings in Bond et al. regarding the relative importance of omitted objectives, where the quality of recognized objectives was judged to be similar to the quality of self-generated objectives. Thus, any future research on this topic should reconsider the idea that omitted/forgotten objectives are less important than self-generated objectives.
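The weight-composition measure discussed above is simply the share of full-model weight carried by the self-generated objectives. A toy illustration (the objective names and weights are invented; the study's aggregate split was 0.51 vs 0.49):

```python
# Hypothetical full-model weights for one participant, tagged by how
# each objective entered the model.
weights = {"commute": 0.25, "weather": 0.20, "jobs": 0.15,   # self-generated
           "schools": 0.20, "safety": 0.20}                  # recognized
self_generated = {"commute", "weather", "jobs"}

self_share = sum(w for k, w in weights.items() if k in self_generated)
# 0.60: this hypothetical participant weighs self-generated objectives
# more heavily, which held for only about half of participants.
```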
Comparing the results of the two multiattribute analyses with the direct judgment reveals a more complicated picture. Both the reduced and the full model failed to produce outcomes consistent with the top choices from the direct judgment, with both having less than a 10% hit rate. Even looking more broadly at the top five cities from the direct judgment, the top choices from the reduced and full models are present for only around half of the participants. Normalized value loss indicates that the choices from direct judgment are closer to the results of the reduced model than to those of the full model. Overall, the study replicates the findings in Bond et al. on the extent to which decision makers are often unable to articulate the full set of objectives that are important to them in making a decision. The multiattribute analysis presents the variety of outcomes for decision makers when they omit non-self-generated objectives in analyzing their decision problem. The relationship between the performance of the reduced model and the full model is not quite straightforward, with the exception of convergence values, which do suffer when the proportion of missing objectives is higher. The results demonstrate that the consequences of missing objectives might not necessarily be disastrous. It is still very much possible for decision makers to reach a satisfactory outcome despite not being able to articulate all the objectives that matter to them. Omissions may not be a big deal when the self-generated objectives sufficiently capture the key tradeoffs.

Chapter 6: General Discussion and Conclusion

The three studies presented in this dissertation demonstrate the consequences of omitted objectives in multiattribute models.
The studies cover various existing gaps in the literature regarding the specific decision problem characteristics that affect the consequences of missing objectives, as well as the variety of empirical consequences of missing objectives. A summary of the results produced by the three studies is as follows:

Study 1: Application analysis

The applications examined a variety of missing objectives scenarios across decision contexts. The applications vary in the number of alternatives and objectives involved, and some feature multiple decision makers for the same problem. The nature of the set of alternatives examined in the applications plays a significant role in the analysis. Decision problems with more competitive sets of alternatives (alternatives that are close in value) tend to experience worse effects of missing objectives. On the other side of the spectrum, decision problems that feature dominated or near-dominated alternatives are of less concern regarding the effects of missing objectives. In the selected container ports application (De Icaza, Parnell, & Pohl, 2019), there is a large number of possible models, yet the top choice stays relatively robust. In decision problems with multiple decision makers, the impact of missing objectives may differ significantly from one decision maker to another. When using convergence value to measure performance, mean convergence appears to be more useful than median convergence, especially in decision problems with a small number of alternatives and consequently a limited number of rank order combinations. Mean convergence is more sensitive to changes at the tail ends of the convergence value distribution.

Study 2: Monte Carlo simulation

Aside from the obvious effect of the level of reduction, the factor most apparent in determining the magnitude of the impact of missing objectives in the simulation is the correlational structure of the consequence matrix.
This result concurs with findings by Aschenbrenner (1977) and Fasolo et al. (2007). The performance measures of cases with negatively correlated or non-correlated attributes are notably worse than those of cases with positively correlated attributes. The size of the consequence matrix itself, i.e., the number of objectives and alternatives, modestly affects reduced model performance. Having more objectives may decrease the impact of missing objectives on model performance across all measures. The effect of the number of alternatives is not as clear-cut: a higher number of alternatives may lead to better performance in hit rate and value loss, but convergence values tend to suffer with more alternatives. However, all these differences in performance diminish at higher levels of reduction. Varying the weighting method when measuring the impact of missing objectives has been commonly investigated in the previous literature (Barron & Barrett, 1996; Fry et al., 1997; Fasolo et al., 2007), and the results from these studies point in the same direction: models with equal weights are more impacted than models with unequal weights. The use of unequal weights in the present analysis has been refined to include both ratio and rank order weights, as well as to consider all possible combinations of missing objectives instead of just eliminating the objectives with the smallest weights. Comparisons made by the simulation studies suggest that models with equal weights perform worse overall than models with either ratio or rank order weights, and the two types of unequal weights have near identical performance.

Study 3: Behavioral study

The replication of the Bond et al. (2008; 2010) studies solidifies how missing objectives may occur in the decision modeling process.
The self-generated objectives tend to be only a fraction of the objectives in the full model, with the average in the replication being 26%, slightly lower than the previously recorded 30-50%. Another replicated result is that objectives not produced in the self-generation process (i.e., forgotten objectives) are not necessarily considered less important: the weight composition of the full model suggests that, on average, the weights of self-generated and recognized objectives are about even. The correlational structure of the consequence matrix used in the multiattribute analysis contains a fair range of correlation values among the thirty-one attributes, with an even number of positive and negative correlations. The results of both the application review and the behavioral study suggest a wide variety of consequences, more similar to the simulated performance of cases with negatively correlated attributes. The observation that the performance of empirical decision models resembles that of simulated negatively correlated cases was also previously made by Fasolo et al. (2007). However, Fasolo et al. (2007) made the large leap of asserting that using one objective/attribute would then be enough to make good decisions, which I do not believe to be the case. The caveat to producing well-performing models that are missing some objectives is that the objectives included in the model should sufficiently capture the key tradeoffs of the decision problem, and identifying a tradeoff requires at least two objectives. The additional survey procedures included in the behavioral study provide insight into the empirical mechanisms and consequences of missing objectives in multiattribute modeling. The number of objectives in the full model and the proportion of objectives missing from the reduced model vary across participants.
However, a higher level of reduction does not necessarily lead to worse performance in terms of hit rate and value loss, although it is likely to impact performance as measured by convergence value. These empirical results are somewhat at odds with the application analysis and simulation results, in which higher levels of reduction always lead to worse performance. This disparity emphasizes the need to validate technical analyses (which rely on certain behavioral assumptions) with empirical studies that reflect the reality of formulating decision models to solve problems. The addition of direct judgment of alternatives at the end of the behavioral study explores the relationship between the decision models and direct assessment of alternatives. The analysis results for both the reduced and full models are likely to differ wildly from the direct ranking of alternatives, but the results of the reduced model are likely to be closer to the direct judgment.

Limitations and future research

While the three studies complement each other in covering how decision models are impacted by missing objectives, each has its limitations. The variety of studies examined in the application review is rather limited, so it would be beneficial to examine more applications with a more systematic approach to selecting cases, particularly with regard to the variety in the sets of alternatives. The characteristic values involved in the simulation were subject to limitations of computational power. Notably, the range of the number of objectives in the analysis is quite narrow (4, 6, and 8 objectives). Some of the model components could be refined to involve more sophisticated characteristics such as nonlinear utility functions. A rework of the behavioral study would benefit from featuring a more specific decision problem.
One of the main limitations of the survey is that the decision problem is quite general and may not be something participants have particularly thought about. In addition, the direct judgment portion of the behavioral study could be refined so that the preference for the top choices is not confounded with familiarity with the alternatives. Since all results point to the correlational structure between attributes being the most important factor in determining the impact of missing objectives, future studies could examine this phenomenon more closely. For instance, how does the correlational structure of the reduced model compare to that of the full model? Does the omission of attributes cause the reduced model to have more positive, more negative, or closer-to-zero correlations? The empirical approach to this question is to examine the correlation between objectives that are self-generated and objectives that are recognized.

Conclusion

Looking at the results across all three types of studies, several key observations emerge regarding the nature of missing objectives and its consequences:

(1) The relationship between objectives and value tradeoffs is the key to determining the effects of missing objectives. The underlying idea of utilizing multiattribute models is to judge the relative value of attributes and quantify the worth of an advantage in one objective over another. Having attribute values that are mostly positively correlated means that the value tradeoff calculation is minimal, and the exclusion of an attribute would not have a significant effect on the outcome of the model (Aschenbrenner, 1977). However, if a noncorrelated or negatively correlated attribute is involved, then its exclusion removes a tradeoff and would likely change the outcome of the model.

(2) The competitiveness of alternatives is also a substantial factor that contributes to the impact of missing objectives.
Having to choose between alternatives that are close to each other in value means that the top choices and rank ordering are more sensitive to any changes in the model, especially the omission of objectives.

(3) The number of objectives and alternatives involved may have some effect on reduced model performance, but any variation due to these factors is relatively small compared to the effect of the attribute intercorrelation.

(4) The manner in which objectives are omitted in the identification process is unique from person to person, even for the same problem. It is not necessarily the case that the least relevant objectives are the ones that are omitted.

(5) While both the application and simulation studies definitively demonstrate that larger proportions of missing objectives correspond to worse model performance, the empirical results from the behavioral experiment suggest that this might not always be the case.

I would propose that we reconsider the idea that the inability to articulate the entire set of relevant objectives necessarily leads to disastrous consequences. This idea is not particularly novel, as it was presented before by Phillips (1984) in the theory of requisite models, where he suggests that a model can omit aspects of social reality that do not provide additional insight into the problem and still be sufficient for solving it, as long as the complex relations are properly approximated in the model. All three studies, but most importantly the behavioral study, demonstrate that it is still very much possible to arrive at satisfactory outcomes despite having an incomplete set of objectives. Of course, this does not mean that we should not encourage decision makers to think thoroughly about the objectives that are relevant to their decision problems.
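Observation (1) above can be illustrated with a small equal-weights example (the numbers are invented for illustration): when attributes are positively correlated, dropping one tends to leave the winner unchanged, whereas dropping a negatively correlated (tradeoff) attribute can flip it.

```python
def best(matrix, weights):
    """Index of the highest-value alternative under an additive model."""
    scores = [sum(w * u for w, u in zip(weights, row)) for row in matrix]
    return scores.index(max(scores))

# Positively correlated attributes: good alternatives are good across the
# board, so omitting the third attribute does not change the winner.
pos = [[0.9, 0.8, 0.85],
       [0.5, 0.4, 0.45],
       [0.2, 0.3, 0.25]]
best(pos, [1/3] * 3) == best([r[:2] for r in pos], [0.5, 0.5])  # True

# The third attribute is negatively correlated with the first two,
# encoding a tradeoff; omitting it flips the recommendation.
neg = [[0.9, 0.8, 0.0],
       [0.5, 0.5, 0.9],
       [0.2, 0.3, 0.8]]
best(neg, [1/3] * 3)                    # alternative 1 wins on the tradeoff
best([r[:2] for r in neg], [0.5, 0.5])  # alternative 0 wins once it is gone
```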
Missing objectives generally result in an incomplete model, as does any missing information in defining the decision problem, but its impact should be considered in a manner akin to sensitivity analysis rather than treated as a fatal model flaw.

References

Ahn, B. S., & Park, K. S. (2008). Comparing methods for multiattribute decision making with ordinal weights. Computers & Operations Research, 35(5), 1660-1670.
Aschenbrenner, K. M. (1977). Influence of attribute formulation on the evaluation of apartments by multi-attribute utility procedures. In Decision making and change in human affairs (pp. 81-97). Dordrecht: Springer.
Barron, F. H. (1987). Influence of missing attributes on selecting a best multiattributed alternative. Decision Sciences, 18(2), 194-205.
Barron, F. H. (1988). Limits and extensions of equal weights in additive multiattribute models. Acta Psychologica, 68(1), 141-152. https://doi.org/10.1016/0001-6918(88)90051-0
Barron, F. H., & Barrett, B. E. (1996). Decision quality using ranked attribute weights. Management Science, 42(11), 1515-1523.
Barron, F. H., & Barrett, B. E. (1999). Linear inequalities and the analysis of multi-attribute value matrices. In Decision Science and Technology (pp. 211-225). Springer, Boston, MA.
Barron, F. H., & Kleinmuntz, D. N. (1986). Sensitivity in value loss of linear models to attribute completeness. In B. Brehmer, H. Jungermann, P. Lorens, & G. Sevon (Eds.), New directions in research on decision making. Amsterdam: Elsevier.
Bond, S. D., Carlson, K. A., & Keeney, R. L. (2008). Generating objectives: Can decision makers articulate what they want? Management Science, 54(1), 56-70.
Bond, S. D., Carlson, K. A., & Keeney, R. L. (2010). Improving the generation of decision objectives. Decision Analysis, 7(3), 238-255.
Boyer, R., & Savageau, D. (1985). Places rated almanac: Your guide to finding the best places to live in America. Rand McNally & Company.
Brothers, A. J., Mattigod, S. V., Strachan, D. M., Beeman, G.
H., Kearns, P. K., Papa, A., & Monti, C. (2009). Resource-limited multiattribute value analysis of alternatives for immobilizing radioactive liquid process waste stored in Saluggia, Italy. Decision Analysis, 6(2), 98-114.
Butler, J. C., Dyer, J. S., & Jia, J. (2006). Using attributes to predict objectives in preference models. Decision Analysis, 3(2), 100-116.
Cavalcanti, L. B., Mendes, A. B., & Yoshizaki, H. Y. (2017). Application of multi-attribute value theory to improve cargo delivery planning in disaster aftermath. Mathematical Problems in Engineering, 2017.
Chang, Y. H., & Yeh, C. H. (2001). Evaluating airline competitiveness using multiattribute decision making. Omega, 29(5), 405-415.
Chang, Y. H., & Yeh, C. H. (2002). A survey analysis of service quality for domestic airlines. European Journal of Operational Research, 139(1), 166-177.
Cheung, S. O., & Suen, H. C. (2002). A multi-attribute utility model for dispute resolution strategy selection. Construction Management & Economics, 20(7), 557-568.
Dawes, R. M., & Corrigan, B. (1974). Linear models in decision making. Psychological Bulletin, 81(2), 95.
De Icaza, R. R., Parnell, G. S., & Pohl, E. A. (2019). Gulf Coast port selection using multiple-objective decision analysis. Decision Analysis, 16(2), 87-104.
Durbach, I. N., & Stewart, T. J. (2009). Using expected values to simplify decision making under uncertainty. Omega, 37(2), 312-330.
Edwards, W., & Newman, J. (1982). Multiattribute evaluation. Beverly Hills: Sage Publications.
Edwards, W., Miles Jr, R. F., & Von Winterfeldt, D. (Eds.). (2007). Advances in decision analysis: From foundations to applications. Cambridge University Press.
Eisenfuhr, F., Weber, M., & Langer, T. (2010). Rational decision making. Berlin: Springer.
Fasolo, B., McClelland, G. H., & Todd, P. M. (2007). Escaping the tyranny of choice: When fewer attributes make choice easier. Marketing Theory, 7(1), 13-26.
Feeny, D., Furlong, W., Boyle, M., & Torrance, G. W. (1995).
Multi-attribute health status classification systems. Pharmacoeconomics, 7(6), 490-502.
Ferretti, V., Bottero, M., & Mondini, G. (2014). Decision making and cultural heritage: An application of the multi-attribute value theory for the reuse of historical buildings. Journal of Cultural Heritage, 15(6), 644-655.
Fischer, G. W., Damodaran, N., Laskey, K. B., & Lincoln, D. (1987). Preferences for proxy attributes. Management Science, 33(2), 198-214.
Fry, P. C., Rinks, D. B., & Ringuest, J. L. (1997). Comparing the predictive validity of alternatively assessed multi-attribute preference models when relevant decision attributes are missing. European Journal of Operational Research, 94(3), 599-609.
Jacobi, S. K., & Hobbs, B. F. (2007). Quantifying and mitigating the splitting bias and other value tree-induced weighting biases. Decision Analysis, 4(4), 194-210.
Jia, J., Fischer, G. W., & Dyer, J. S. (1998). Attribute weighting methods and decision quality in the presence of response error: A simulation study. Journal of Behavioral Decision Making, 11(2), 85-105.
Keeney, R. L. (1996). Value-focused thinking. Harvard University Press.
Keeney, R. L. (2002). Common mistakes in making value trade-offs. Operations Research, 50(6), 935-945.
Keeney, R. L. (2007). Developing objectives and attributes. In Edwards, W., Miles Jr, R. F., & Von Winterfeldt, D. (Eds.), Advances in decision analysis: From foundations to applications (pp. 81-103). New York, NY: Cambridge University Press.
Keeney, R. L. (2013). Identifying, prioritizing, and using multiple objectives. EURO Journal on Decision Processes, 1(1-2), 45-67.
Keeney, R. L., & Gregory, R. S. (2005). Selecting attributes to measure the achievement of objectives. Operations Research, 53(1), 1-11.
Keeney, R. L., & Raiffa, H. (1976). Decisions with multiple objectives: Preferences and value trade-offs. Cambridge University Press.
Konidari, P., & Mavrakis, D. (2007).
A multi-criteria evaluation method for climate change mitigation policy instruments. Energy Policy, 35(12), 6235-6257.
Leon, O. G. (1999). Value-focused thinking versus alternative-focused thinking: Effects on generation of objectives. Organizational Behavior and Human Decision Processes, 80(3), 213-227.
Merkhofer, M. W., & Keeney, R. L. (1987). A multiattribute utility analysis of alternative sites for the disposal of nuclear waste. Risk Analysis, 7(2), 173-194.
Montibeller, G., & Von Winterfeldt, D. (2015). Cognitive and motivational biases in decision and risk analysis. Risk Analysis, 35(7), 1230-1251.
Niche.com (2021). 2021 best places to live in America. https://www.niche.com/places-to-live/search/best-places-to-live/
Nutt, P. C. (1998). Evaluating alternatives to make strategic choices. Omega, 26(3), 333-354.
Payne, J. W., Bettman, J. R., Schkade, D. A., Schwarz, N., & Gregory, R. (1999). Measuring constructed preferences: Towards a building code. In Elicitation of preferences (pp. 243-275). Dordrecht: Springer.
Phelps, C. E., & Madhavan, G. (2017). Using multicriteria approaches to assess the value of health care. Value in Health, 20(2), 251-255.
Phillips, L. D. (1984). A theory of requisite decision models. Acta Psychologica, 56(1-3), 29-48.
Qiu, W., & Joe, H. (2020). clusterGeneration: Random cluster generation (with specified degree of separation). R package version 1.3.5. Retrieved from https://CRAN.R-project.org/package=clusterGeneration
R Core Team (2019). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. Retrieved from https://www.R-project.org/
Sabatino, S., Frangopol, D. M., & Dong, Y. (2015). Sustainability-informed maintenance optimization of highway bridges considering multi-attribute utility and risk attitude. Engineering Structures, 102, 310-321.
Siebert, J., & Keeney, R. L. (2015). Creating more and better alternatives for decisions using objectives.
Operations Research, 63(5), 1144-1158.
Sperling's Best Places (2021). Best places to live. https://www.bestplaces.net/
Stillwell, W. G., Seaver, D. A., & Edwards, W. (1981). A comparison of weight approximation techniques in multiattribute utility decision making. Organizational Behavior and Human Performance, 28(1), 62-77.
Torrance, G. W., Boyle, M. H., & Horwood, S. P. (1982). Application of multi-attribute utility theory to measure social preferences for health states. Operations Research, 30(6), 1043-1069.
Venables, W. N., & Ripley, B. D. (2002). Modern applied statistics with S (4th ed.). Springer, New York. ISBN 0-387-95457-0
von Winterfeldt, D. (1980). Structuring decision problems for decision analysis. Acta Psychologica, 45(1-3), 71-93.
von Winterfeldt, D., & Edwards, W. (1986). Decision analysis and behavioral research. Cambridge University Press.
von Winterfeldt, D., & Edwards, W. (2007). Defining a decision analytic structure. In Edwards, W., Miles Jr, R. F., & Von Winterfeldt, D. (Eds.), Advances in decision analysis: From foundations to applications (pp. 81-103). New York, NY: Cambridge University Press.
von Winterfeldt, D., & Fasolo, B. (2009). Structuring decision problems: A case study and reflections for practitioners. European Journal of Operational Research, 199(3), 857-866.
Walkscore.com (2021). Walk Score. https://www.walkscore.com/
Wallenius, J., Dyer, J. S., Fishburn, P. C., Steuer, R. E., Zionts, S., & Deb, K. (2008). Multiple criteria decision making, multiattribute utility theory: Recent accomplishments and what lies ahead. Management Science, 54(7), 1336-1349.
Wang, C., & Bier, V. M. (2011). Target-hardening decisions based on uncertain multiattribute terrorist utility. Decision Analysis, 8(4), 286-302.
Warnes, G. R., Bolker, B., & Lumley, T. (2018). gtools: Various R programming tools. R package version 3.8.1. Retrieved from https://CRAN.R-project.org/package=gtools
Wickham, H. (2007). Reshaping data with the reshape package.
Journal of Statistical Software, 21(12), 1-20. URL http://www.jstatsoft.org/v21/i12/
Wickham, H. (2016). ggplot2: Elegant graphics for data analysis. Springer-Verlag New York. ISBN 978-3-319-24277-4. Retrieved from https://ggplot2.tidyverse.org
Wilke, C. O. (2020). ggridges: Ridgeline plots in 'ggplot2'. R package version 0.5.2. Retrieved from https://CRAN.R-project.org/package=ggridges
Youngblood, A. D., & Collins, T. R. (2003). Addressing balanced scorecard trade-off issues between performance metrics using multi-attribute utility theory. Engineering Management Journal, 15(1), 11-17.

Appendix A: Aggregated Parameter Values

Level of reduction   Hit rate   Value loss   Mean convergence   Application
1/3                  1          0            0.95               Radioactive waste
1/2                  0.987      0.0058       0.89               Radioactive waste
2/3                  0.818      0.0902       0.72               Radioactive waste
1/3                  0.8        0.0129       0.91               Cargo delivery
1/2                  0.7        0.0189       0.89               Cargo delivery
2/3                  0.7        0.0189       0.82               Cargo delivery
1/3                  1          0            0.76               Airline competitiveness
1/2                  0.981      0.0068       0.69               Airline competitiveness
2/3                  0.861      0.0587       0.6                Airline competitiveness
1/3                  0.916      0.0082       0.87               Container ports
1/2                  0.806      0.0207       0.76               Container ports
2/3                  0.682      0.0377       0.61               Container ports
1/3                  0.4        0.0301       0.79               Historical buildings
1/2                  0.2        0.0462       0.54               Historical buildings
2/3                  0.1        0.1048       0.38               Historical buildings
1/3                  0.6        0.0276       0.74               Historical buildings
1/2                  0.5        0.078        0.5                Historical buildings
2/3                  0.4        0.1161       0.34               Historical buildings
1/3                  0.6        0.0482       0.61               Historical buildings
1/2                  0.4        0.0867       0.35               Historical buildings
2/3                  0.3        0.1285       0.21               Historical buildings
1/3                  0.8        0.022        0.57               Historical buildings
1/2                  0.5        0.0626       0.35               Historical buildings
2/3                  0.4        0.0759       0.27               Historical buildings
1/3                  0.6        0.0457       0.75               Historical buildings
1/2                  0.5        0.0599       0.59               Historical buildings
2/3                  0.4        0.091        0.45               Historical buildings
1/3                  0.429      0.0103       0.71               Power plant
1/2                  0.328      0.0136       0.59               Power plant
2/3                  0.297      0.0169       0.41               Power plant
1/3                  0.636      0.1101       0.78               Climate change policy
1/2                  0.455
0.1728 0.64 Climate change policy "2/3" 0.273 0.2451 0.43 Climate change policy "1/3" 0.452 0.0141 0.3 Health care treatment "1/2" 0.381 0.0186 0.18 Health care treatment "2/3" 0.31 0.0262 0.02 Health care treatment "1/3" 0.333 0.0191 0.52 Health care treatment "1/2" 0.437 0.0172 0.45 Health care treatment "2/3" 0.571 0.0184 0.34 Health care treatment "1/3" 0.667 0.0622 0.42 Health care treatment 99 "1/2" 0.555 0.0868 0.3 Health care treatment "2/3" 0.333 0.1425 0.14 Health care treatment "1/3" 0.333 0.0422 0.37 Health care treatment "1/2" 0.238 0.0561 0.29 Health care treatment "2/3" 0.012 0.0837 0.1 Health care treatment "1/3" 0.464 0.0648 0.42 Health care treatment "1/2" 0.413 0.0718 0.36 Health care treatment "2/3" 0.321 0.0896 0.17 Health care treatment 100 Appendix B1: Application analysis: Airline competitiveness (Chang & Yeh, 2001) Table B1.1: Full model and complete list of attribute values for airline competitiveness application Objective A1 A2 A3 A4 A5 Weight Cost Unit operating cost Productivity Labour Fleet Passenger load Service quality On-time performance Safety Flight frequency Price Average fare Management Revenue growth Net profit margin Market share 1.00 0.66 1.00 0.92 0.71 1.00 0.93 0.87 1.00 0.05 1.00 0.12 0.17 0.50 0.00 1.00 1.00 0.79 0.00 0.59 0.00 0.00 0.00 0.43 0.29 0.77 0.36 0.00 0.00 0.09 0.03 1.00 0.69 0.06 0.00 0.07 0.23 0.00 0.67 1.00 1.00 0.00 0.34 0.47 0.36 1.00 0.00 1.00 0.21 1.00 0.02 0.84 0.03 0.37 0.45 0.091 0.091 0.091 0.091 0.091 0.091 0.091 0.091 0.091 0.091 0.091 Using the full model, alternative A1 has the best performance by quite far with a value of 0.832. The next best alternative is A5 with a value of 0.481. The worst alternative is A3 with a value of 0.333, giving a value range of 0.499 for the whole set of alternatives. Despite being an outlier, alternative A1 does not dominate any other alternative and only has the highest attribute value in 5 of the 11 attributes. 
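The reduced-model analysis used throughout these appendices can be reproduced directly from such a matrix. The sketch below (written in Python for illustration; the dissertation's original computations were done in R) evaluates every reduced model that keeps a fixed number of the 11 equally weighted attributes and computes the hit rate and average value loss. The attribute values are transcribed from Table B1.1.

```python
from itertools import combinations

# Attribute-value matrix for the airline application (Table B1.1):
# rows = alternatives A1..A5, columns = the 11 attributes, equal weights.
values = [
    [1.00, 0.66, 1.00, 0.92, 0.71, 1.00, 0.93, 0.87, 1.00, 0.05, 1.00],  # A1
    [0.12, 0.17, 0.50, 0.00, 1.00, 1.00, 0.79, 0.00, 0.59, 0.00, 0.00],  # A2
    [0.00, 0.43, 0.29, 0.77, 0.36, 0.00, 0.00, 0.09, 0.03, 1.00, 0.69],  # A3
    [0.06, 0.00, 0.07, 0.23, 0.00, 0.67, 1.00, 1.00, 0.00, 0.34, 0.47],  # A4
    [0.36, 1.00, 0.00, 1.00, 0.21, 1.00, 0.02, 0.84, 0.03, 0.37, 0.45],  # A5
]
n_attr = 11
weights = [1 / n_attr] * n_attr

def overall(alt, attrs, w):
    """Weighted additive value over a subset of attributes, with the
    surviving weights renormalized to sum to 1."""
    total_w = sum(w[j] for j in attrs)
    return sum(w[j] * alt[j] for j in attrs) / total_w

full = [overall(a, range(n_attr), weights) for a in values]
best = max(range(5), key=lambda i: full[i])  # top choice of the full model

def hit_rate_and_value_loss(k):
    """Evaluate every reduced model that keeps k of the 11 attributes."""
    hits, losses = 0, 0.0
    subsets = list(combinations(range(n_attr), k))
    for attrs in subsets:
        top = max(range(5), key=lambda i: overall(values[i], attrs, weights))
        hits += (top == best)
        losses += full[best] - full[top]  # loss is measured on the full model
    return hits / len(subsets), losses / len(subsets)
```

Running `hit_rate_and_value_loss(7)` enumerates the 330 possible 1/3-reduced models; because A1's advantage survives every 7-attribute subset, the hit rate is 1 and the value loss is 0, matching Table B1.2.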
The analysis is presented in Table B1.2.

Table B1.2: Performance parameters at each level of reduction for airline competitiveness application

Parameter                         1/3 reduced   1/2 reduced   2/3 reduced
Number of objectives (out of 11)  7             5             3
Number of possible models         330           462           165
Hit rate                          1             0.981         0.861
Average value loss                0             0.0068        0.0587
Convergence values
  Median                          0.70          0.70          0.70
  Mean                            0.76          0.69          0.60
  Range                           0.10 – 1.00   0.00 – 1.00   -0.20 – 1.00

Given that the value of A1 is substantially higher than that of the other alternatives, the hit rate in this application remains high across all levels of model reduction, and the value loss remains correspondingly small. The median convergence is robust across all levels of reduction while the mean decreases. This indicates that the main variation in convergence values lies in the proportion of differing rank orders, which to a lesser extent is also reflected in the decreasing minimum convergence value.

Interpretation of reduced model performance

The results demonstrate that alternative A1 is the most competitive by far, and this remains relatively constant across levels of model reduction. More competition occurs among the remaining alternatives, as indicated by the moderate mean and median convergence values that persist despite the high hit rate. Other airlines may want to evaluate and compare their operational approach with the approach taken by airline A1. Because the model uses equal weights, the real unit value loss is a constant proportion of the attribute range. For the 1/2 reduced value loss, 0.0068/0.091 amounts to 0.075 of the range, and for the 2/3 reduced value loss, 0.0587/0.091 amounts to 0.645 of the range (almost 2/3 of the entire attribute range). All of the attributes are natural measurements, so value loss can be readily interpreted in real units for every objective.
For example, the value loss for 1/2 omitted objectives is equivalent to a difference of 70 flights scheduled per year or 1 flight delayed per 1,000 flights, while for 2/3 omitted objectives it is a difference of 612 flights per year or 9 flights delayed per 1,000. The complete real unit value loss equivalence for each attribute is included in Table B1.3. One consideration in interpreting the real unit value loss equivalence, however, is that attributes may have nonlinear utility/value functions, so the magnitude of the loss may differ depending on the alternative. For example, an increase in the number of incidents from 0 to 1 may be perceived as more detrimental than an increase from 50 to 51 incidents. Convergence values may be more useful for lower-ranked airlines: the considerable variation in rank order despite the relatively consistent top choice means that there is more competition among the airlines that are not the top choice. Instead of looking at the big picture and aiming to be the overall best, it might be more viable to strategize toward becoming the dominant airline on certain routes or in certain markets. For those purposes, examining the closer competition and the fluctuation of rankings may be more useful.
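The real-unit equivalences in Table B1.3 follow from the equal-weights relationship described above: a value loss V on a model with per-attribute weight 0.091 corresponds to a swing of (V / 0.091) × range on any single attribute. A minimal sketch (Python, for illustration):

```python
def value_loss_in_real_units(value_loss, weight, attribute_range):
    # If the entire overall value loss were absorbed by a single attribute,
    # the equivalent swing on that attribute is (loss / weight) * range.
    return value_loss / weight * attribute_range

# Flight frequency: range of 936 flights per route per year (Table B1.3),
# average value loss of 0.0068 for the 1/2-reduced models (Table B1.2).
flights = value_loss_in_real_units(0.0068, 0.091, 936)  # about 70 flights
```

The same call with the 2/3-reduced loss of 0.0587 recovers the roughly 612-flight difference cited in the text.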
Table B1.3: Complete real unit value loss equivalence for each attribute in airline competitiveness application

Objective (attribute definition)                                          Attribute range  Real unit difference,  Real unit difference,
                                                                                           1/2 reduction          2/3 reduction
Unit operating cost (total operating cost/available seat-kilometers)      1.05             0.079                  0.686
Labour productivity (total passenger revenue/total employee numbers)      888.62           66.469                 580.979
Fleet productivity (total revenue passenger km/total aircraft numbers)    16.4             1.228                  10.722
Passenger load factor (total passengers carried/total seats available)    0.13             0.010                  0.085
On-time performance (1 - (total flights delayed/total flights departed))  0.014            0.001                  0.009
Safety (number of accidents/million hours flown)                          91.27            6.827                  59.672
Flight frequency (total number of flights/total number of routes)         936.00           70.013                 611.957
Average fare (total fare revenue/total revenue passenger kilometers)      0.60             0.045                  0.394
Revenue growth (annual growth rate of total operating revenues)           0.18             0.016                  0.118
Net profit margin (total net profit/total operating revenue)              0.70             0.052                  0.456
Market share (total passengers carried/total passengers in the market)    0.34             0.025                  0.220

Appendix B2: Application analysis: Cargo delivery (Cavalcanti, Mendes, & Yoshizaki, 2017)

Table B2.1: Full model and attribute value matrix for cargo delivery application

Objective             A1    A2    A3    A4    A5    A6    Rank  ROC weight
Safety                0.65  0.50  0.85  0.45  0.30  0.05  3     0.157
Stability             0.35  0.55  0.85  0.25  0.30  0.10  5     0.040
Quantity              0.25  0.75  0.80  0.05  0.00  0.75  4     0.090
Cargo urgency         0.75  1.00  0.70  0.70  0.90  0.10  2     0.257
Destination priority  0.65  0.85  1.00  0.35  0.30  0.45  1     0.457

The multiattribute analysis using the full model determined that alternative 3 (conditions at the destination) is the top choice with an overall value of .876 and that alternative 6 (FIFO) is the least desirable choice with a value of .311, giving a range of .565 between the top and the last choice.
Looking closer at the attribute value matrix shown in Table B2.1, alternative 3 dominates alternatives 4 and 6, and the highest value on each attribute always belongs to either alternative 2 or alternative 3. Therefore, no subset of the attribute set can produce a model in which the highest utility belongs to any other alternative: the only possible top choice for a reduced model is either alternative 2 or 3. The results of the analysis are displayed in Table B2.2.

Table B2.2: Performance parameters at each level of reduction for cargo delivery application

Parameter                        1/3 reduced   1/2 reduced   2/3 reduced
Number of objectives (out of 5)  4             3             2
Number of possible models        5             10            10
Hit rate                         0.8           0.7           0.7
Average value loss               0.0126        0.0189        0.0189
Convergence values
  Median                         0.94          0.91          0.83
  Mean                           0.91          0.89          0.82
  Range                          0.83 – 0.94   0.77 – 0.94   0.54 – 0.94

As expected, parameter values degrade as more objectives are omitted, but some attention should be paid to the performance measures of the 1/2 and 2/3 reduced models. Observe that the 1/2 reduced and 2/3 reduced models perform identically in terms of hit rate and, consequently, value loss. The convergence values, however, tell a slightly different story: there is a difference between the performance of the 1/2 and 2/3 reduced models. In contrast to Barron's assertion that hit rate and value loss are more reliable measures of model performance than convergence, here neither can detect the difference between the 1/2 and 2/3 reduced models, while the convergence values (mean, median, and to some extent the range) can.

Interpretation of reduced model performance

As previously discussed, the only competitive alternatives in this decision problem are 2 and 3, and the initial top choice, alternative 3 (focusing on conditions at the destination), edges out alternative 2 (focusing on cargo type) most of the time when there are missing objectives.
However, the combinations of actions underlying the two alternatives are quite distinct, so switching strategies would require a fair amount of effort. It should also be noted that the one objective on which alternative 2 has an advantage over alternative 3 is its ability to address cargo urgency; the situations in which alternative 2 becomes the top choice are therefore those in which this particular objective is emphasized. The weights for each attribute were elicited from a rank ordering, and none of the attributes was tied to a quantifiable natural measure, so value loss cannot be contextually interpreted for this decision problem. This, however, is mostly a function of how the decision structuring was approached rather than of the choice of attributes for measuring each objective. Some objectives, such as cargo quantity, could be converted into quantifiable measures, which would help make the differences more meaningful. Convergence values are not especially useful to examine here: with only two competitive alternatives, most of the variation in rank order occurs among the lower-ranked alternatives.
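The rank-based weights in Table B2.1 match the rank-order centroid (ROC) scheme, in which the weight for rank i out of n objectives is w_i = (1/n) Σ_{k=i..n} (1/k). A short sketch (Python, for illustration):

```python
def roc_weights(n):
    """Rank-order centroid weights for ranks 1..n (rank 1 = most important)."""
    return [sum(1.0 / k for k in range(i, n + 1)) / n for i in range(1, n + 1)]

print([round(w, 3) for w in roc_weights(5)])
```

For n = 5 this reproduces the weights in Table B2.1: 0.457 for rank 1, then 0.257, 0.157, 0.090, and 0.040 for ranks 2 through 5.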
Appendix B3: Application analysis: Climate change policy (Konidari & Mavrakis, 2017)

Table B3.1: Full model and attribute value matrix for climate change policy application (rows are the leaf objectives; the full objective hierarchy is listed at the end of this appendix)

Objective  DK    GE    GR    IT    NL    PT    SE     UK    Weight
DGR        0.40  0.36  0.34  1.00  0.65  0.00  0.96   0.13  0.140
ANC        0.40  0.36  0.34  1.00  0.65  0.00  0.96   0.13  0.028
CE         0.01  0.27  0.18  0.03  1.00  0.01  0.01   0.00  0.350
DCE        0.13  0.13  0.13  0.13  0.13  0.13  0.13   0.13  0.135
CP         0.21  0.03  0.03  0.03  0.13  0.21  0.13   0.21  0.063
EQ         0.15  0.12  0.10  0.12  0.12  0.12  0.15   0.12  0.129
FL         0.18  0.18  0.03  0.18  0.11  0.11  0.11   0.18  0.038
SNC        0.13  0.13  0.13  0.13  0.13  0.13  0.13   0.13  0.024
INC        0.19  0.23  0.04  0.01  0.19  0.01  0.09   0.23  0.029
AF         0.19  0.15  0.01  0.01  0.19  0.02  0.15   0.19  0.055
FF         0.25  0.02  0.19  0.02  0.19  0.19  0.061  0.06  0.010

The policies of eight European countries are evaluated using this model. Using the full set of objectives, the policy of the Netherlands is judged to have the highest value, 0.525, followed by the policy of Sweden with a value of 0.227. The lowest value belongs to the policy of Portugal, at 0.060, giving an overall range of 0.464 for the set of alternatives. The results of the analysis are outlined in Table B3.2.

Table B3.2: Performance parameters at each level of reduction for climate change policy application

Parameter                          1/3 reduced   1/2 reduced   2/3 reduced
Number of objectives (out of 11)   7             5             3
Number of possible reduced models  330           462           165
Hit rate                           0.636         0.455         0.273
Average value loss                 0.1101        0.1728        0.2451
Convergence values
  Median                           0.86          0.81          0.57
  Mean                             0.78          0.64          0.43
  Range                            -0.17 – 1.0   -0.49 – 1.0   -0.57 – 0.98

Despite a gap between the best and second-best choice of more than half the value range, there is a fair amount of degradation in model performance even when only 1/3 of the objectives are omitted. This decision problem has the highest value loss among the cases examined in this study, exceeding 10% of the value range with the omission of only 1/3 of the attributes.
The attribute values of the alternatives are actually closer than the overall value of each alternative suggests, as most alternatives have identical attribute values on certain objectives but dominate on one or two attributes. This closeness in attribute values, in combination with the more even distribution of attribute weights, leads to a more varied array of top choices and a larger value loss. Consequently, it also leads to more varied rank orderings of the choices, which lowers the convergence values.

Interpretation of reduced model performance

There is substantial variability in the top choices in this application, though it should be noted that the purpose of the analysis is more comparative than it is to pick the best policy. Still, a higher-ranked country may be delegated greater authority in setting regional laws and standards. Some countries hold advantages on particular attributes, so different countries' leadership may lead to different approaches and priorities in tackling climate policy. For this reason, hit rate and convergence values are similar indicators of how dynamic the approaches to a regional climate policy are. The high value loss corresponds to the higher stakes involved in each policy. Considering that many of the attributes are quantifiable measures with considerable impact, especially for a time-sensitive topic such as climate change, decision makers should be more mindful of what it means to have value differences between options. In this study's case, many of the attribute measures are ratings that aggregate various relevant indicators, so interpreting the value loss in real units requires a deconstruction of the grading criteria for each objective.

List of objectives
1. Environmental performance (EP)
   a. Direct contribution to GHG emission reductions (DGR)
   b. Ancillary (ANC)
2. Political acceptability (PA)
   a. Cost efficiency (CE)
   b. Dynamic cost efficiency (DCE)
   c. Competitiveness (CP)
   d. Equity (EQ)
   e. Flexibility (FL)
   f. Stringency for non-compliance (SNC)
3. Feasibility of implementation (FI)
   a. Implementation network capacity (INC)
   b. Administrative feasibility (AF)
   c. Financial feasibility (FF)

Appendix B4: Application analysis: Health care treatment (Phelps & Madhavan, 2017)

Table B4.1: Full model and attribute value matrix for health care treatment application

Attribute                                         T1    T2    T3    T4    T5
Probability of remission (PR)                     0.80  0.40  0.40  0.20  0.00
Expected months of remission free survival (RFS)  0.33  0.67  0.33  1.00  0.00
Hair loss (HL)                                    0.20  0.80  0.70  0.20  1.00
Nausea (N)                                        0.70  0.20  0.60  0.30  1.00
Pain (P)                                          0.20  0.80  0.50  0.30  1.00
Total cost (TC)                                   0.51  0.51  0.20  0.81  0.99
Patient cost (PC)                                 0.50  0.50  1.00  0.80  0.90
Advancing knowledge (AK)                          0.00  0.00  1.00  0.00  0.00
Quality of evidence on benefits (QE)              0.80  1.00  0.40  1.00  0.80

Table B4.2: Performance parameters at each level of reduction for health care treatment application

Parameter                        1/3 reduced    1/2 reduced    2/3 reduced
Number of objectives (out of 9)  6              5              3
Number of possible models        84             126            84
P1  Hit rate                     0.452          0.381          0.31
    Average value loss           0.0141         0.0186         0.0262
    Convergence median           0.60           0.50           -0.25
    Convergence mean             0.30           0.18           0.02
    Convergence range            -0.90 – 1.00   -1.00 – 1.00   -0.90 – 1.00
P2  Hit rate                     0.333          0.437          0.571
    Average value loss           0.0191         0.0172         0.0184
    Convergence median           0.60           0.50           0.30
    Convergence mean             0.52           0.45           0.34
    Convergence range            -0.30 – 0.90   -0.30 – 0.90   -0.50 – 0.70
P3  Hit rate                     0.667          0.555          0.333
    Average value loss           0.0622         0.0868         0.1425
    Convergence median           0.50           0.30           0.10
    Convergence mean             0.42           0.30           0.14
    Convergence range            -0.90 – 1.00   -0.90 – 1.00   -0.80 – 0.90
P4  Hit rate                     0.333          0.238          0.012
    Average value loss           0.0442         0.0561         0.0837
    Convergence median           0.30           0.20           0.00
    Convergence mean             0.37           0.29           0.10
    Convergence range            -0.40 – 1.00   -0.70 – 1.00   -0.70 – 0.90
P5  Hit rate                     0.464          0.413          0.321
    Average value loss           0.0648         0.0718         0.0896
    Convergence median           0.60           0.60           0.10
    Convergence mean             0.42           0.36           0.17
    Convergence range            -0.6 – 1.00    -0.90 – 1.00   -0.90 – 1.00

Similar to the other application in this study that involves multiple decision makers, there is wide variation in how model reduction impacts each decision model and the evaluation of the alternatives. The results for each model do not appear particularly robust; there are only a few instances in which the hit rate is above one half. One curious result is the set of parameter values for P2, which contradicts every other analysis of model reduction by having better hit rates as more objectives are omitted from the model. The other parameters for this decision model add to a more complicated story. The value loss for P2 is smallest in the 1/2 reduced condition despite its lower hit rate relative to the 2/3 reduced condition. This means that in the instances where the 2/3 reduced models did not pick the same top choice as the full model, they were more likely to pick lower-value alternatives than were the top choices of the 1/2 reduced models. Both the mean and the median convergence values follow the usual pattern of decreasing as more attributes are taken out. One distinguishing feature of this application, which likely contributed to the greater impact of model reduction, is its use of rank-order-based weights. While recalculating ratio-based weights retains the relative valuation among attributes, recalculating rank-order-based weights requires readjusting the distribution of weights, which distorts the comparative valuation among attributes even further. This likely contributes to the lower overall performance of the reduced models in this application compared to reduced models that use ratio weights.

Interpretation of reduced model performance

Even though each decision maker is based on an archetype that points to a grander goal, the top choices from their original models do not strongly dominate the other alternatives.
One factor that likely contributed to this is that many of the objectives conflict with one another; articulating the level of tradeoff is therefore key in decision problems such as this one, especially when they concern sensitive quality-of-life measures such as pain and expected months of remission-free survival. An artifact of the model weights being rank-order based is that value loss is less meaningful to interpret, as the weights were not calculated in consideration of the attribute value ranges. Convergence values are also of limited relevance, since this decision problem is about choosing a type of treatment to undergo as a time-sensitive matter. One aspect to note is that since several of the attribute values are expressed as probabilities, they are subject to biases in risk perception (Fischoff, 1999; Slovic et al., 2004). The interpretation of value loss with these parameters should consider the extent to which patients understand marginal differences in health risk.

Appendix B5: Application analysis: Historical buildings (Ferretti, Bottero, & Mondini, 2014)

Table B5.1: Attribute value matrix for historical buildings application

Objective               La Carignana  Caudano  Bona  Montrucco  Belgrado  Dupre  Gianelli
Quality of the context  1.00          1.00     0.50  0.50       0.00      0.00   0.00
Economic activity       0.64          0.18     0.00  0.00       0.27      0.73   1.00
Building flexibility    0.50          0.00     1.00  1.00       1.00      0.50   0.00
Accessibility           0.00          0.00     0.00  0.00       0.00      0.38   1.00
Conservation level      0.67          0.33     0.33  1.00       1.00      1.00   0.00

The analysis determined that 3 out of the 4 experts agree on the top choice (La Carignana), which is also the top choice of the mean-weights model. In addition to the standard reduced-model analysis, the level of agreement between the experts will be examined as a consequence of the omitted objectives. The full model uses 5 objectives, the smallest set of objectives among the applications analyzed in this study.
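The experts' disagreement over the top choice can be checked directly from the attribute values and weights reported in Tables B5.1 and B5.2. The sketch below (Python, for illustration; expert labels shortened) evaluates each expert's full additive model; under these weights the History expert's top choice differs from that of the other three experts.

```python
# Attribute values from Table B5.1, in the order: quality of the context,
# economic activity, building flexibility, accessibility, conservation level.
alternatives = {
    "La Carignana": [1.00, 0.64, 0.50, 0.00, 0.67],
    "Caudano":      [1.00, 0.18, 0.00, 0.00, 0.33],
    "Bona":         [0.50, 0.00, 1.00, 0.00, 0.33],
    "Montrucco":    [0.50, 0.00, 1.00, 0.00, 1.00],
    "Belgrado":     [0.00, 0.27, 1.00, 0.00, 1.00],
    "Dupre":        [0.00, 0.73, 0.50, 0.38, 1.00],
    "Gianelli":     [0.00, 1.00, 0.00, 1.00, 0.00],
}
# Weights from Table B5.2, one list per expert, in the same objective order.
experts = {
    "History":     [0.290, 0.065, 0.226, 0.161, 0.258],
    "Planning":    [0.292, 0.083, 0.125, 0.167, 0.333],
    "Restoration": [0.264, 0.226, 0.358, 0.113, 0.038],
    "Economics":   [0.262, 0.246, 0.200, 0.215, 0.077],
}

# Top choice of each expert's full model (additive value, no reduction).
tops = {
    expert: max(alternatives,
                key=lambda a: sum(w * v for w, v in zip(ws, alternatives[a])))
    for expert, ws in experts.items()
}
```

Evaluating these models shows Planning, Restoration, and Economics agreeing on La Carignana, while the History weights favor Montrucco, consistent with the 3-out-of-4 agreement noted above.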
The reduced models will incorporate 2, 3, and 4 objectives.

Table B5.2: Full models for each decision maker in the historical buildings application

Objective               History of    Spatial   Restoration  Economic    Mean
                        architecture  planning               evaluation  weights
Quality of the context  0.290         0.292     0.264        0.262       0.277
Economic activity       0.065         0.083     0.226        0.246       0.155
Building flexibility    0.226         0.125     0.358        0.200       0.227
Accessibility           0.161         0.167     0.113        0.215       0.164
Conservation level      0.258         0.333     0.038        0.077       0.177

Table B5.3: Performance parameters at each level of reduction for historical buildings application

Parameter                        1/3 reduced    1/2 reduced    2/3 reduced
Number of objectives (out of 5)  4              3              2
Number of possible models        5              10             10
Hit rate
  History                        0.4            0.2            0.1
  Planning                       0.6            0.5            0.4
  Restoration                    0.6            0.4            0.3
  Economics                      0.8            0.5            0.4
  Mean                           0.6            0.5            0.4
Average value loss
  History                        0.0301         0.0462         0.1048
  Planning                       0.0276         0.0780         0.1161
  Restoration                    0.0482         0.0867         0.1285
  Economics                      0.0220         0.0626         0.0759
  Mean                           0.0457         0.0599         0.0910
Convergence median
  History                        0.79           0.66           0.59
  Planning                       0.96           0.59           0.40
  Restoration                    0.79           0.39           0.33
  Economics                      0.61           0.41           0.34
  Mean                           0.68           0.63           0.52
Convergence mean
  History                        0.79           0.54           0.38
  Planning                       0.74           0.50           0.34
  Restoration                    0.61           0.35           0.21
  Economics                      0.57           0.35           0.27
  Mean                           0.72           0.59           0.45
Convergence range
  History                        0.50 – 0.96    -0.18 – 0.89   -0.36 – 0.76
  Planning                       0.29 – 0.96    -0.36 – 0.89   -0.37 – 0.88
  Restoration                    -0.07 – 0.99   -0.32 – 0.95   -0.34 – 0.77
  Economics                      0.07 – 0.95    -0.11 – 0.71   -0.32 – 0.68
  Mean                           0.63 – 0.89    0.23 – 0.77    -0.16 – 0.77

Observe that while the parameters do degrade with greater reduction for every decision maker, comparing the decision makers with one another reveals that the magnitude of the degradation and the relationships among the parameters are not straightforward. For example, in the 1/3 reduced condition, the Planning and Restoration experts have the same hit rate of .6, but their value losses are .0276 and .0482, respectively.
Compare these with the performance measures of the History expert, who has a lower hit rate of .4: the History expert's value loss of .0301 is higher than the Planning expert's but lower than the Restoration expert's. Several factors may contribute to such results: (1) value loss is obtained by calculating the difference between the value of the full model's top choice and the values of the reduced models' top choices, all evaluated on the full model, so differences between the decision makers' models influence it directly; and (2) for the reduced-model top choices that differ from the full model's top choice, the rank of the chosen alternative (and consequently the value difference) may differ. For example, a hit rate of .8 over 10 reduced models means that the top choice of the reduced model differs from that of the full model twice; the value loss will be higher when the top choices of those 2 reduced models are ranked 4th and 5th rather than 2nd and 3rd, because the former have lower values. As previously mentioned, each decision maker has a different model, so the parameters are relative to each decision maker's own model. Every model, however, uses the same attribute value matrix; the main difference is in the sets of weights, the source of the variation in each alternative's value. This demonstrates that the effect of model reduction is sensitive to the weights; the main question is how these variations affect the consequences of the model.

Interpretation of reduced model performance

There is wide variation in how much the performance of the reduced models deviates from the full models, and the small number of reduced models also contributes to the variation in parameter values. In this decision problem the top choice is less consistent, showing how open the options are. This applies to all of the experts, which indicates that even though each has a specialty that they may prioritize over other objectives, model reduction shifts their top choice regardless. Two of the five objectives use a numeric attribute measure, economic activity and accessibility, so value loss is interpretable through these objectives. Converting value loss to a real unit equivalence depends on the weight given to the particular attribute in a model, so each expert has a different equivalent loss; the maximum and minimum losses are discussed here. The largest loss on the economic activity attribute is for the economics expert's model, where it is equivalent to 3.1 units of economic activity, and the smallest is for the history expert's model, which loses 0.7 units of economic activity. For accessibility, the largest loss is also from the economics expert's model, where the value loss is equivalent to an extra 1.7 minutes to access the facility, and the smallest is from the restoration expert's model, where it is equivalent to an extra 0.9 minutes. Convergence values also show a high level of variation, but since the purpose of this study is to select a single alternative to restore, they are less relevant to interpret. The main takeaway from this application, which has multiple decision makers, is that the effect of model reduction is not easily predictable even for the same decision problem (i.e., the same attributes and value matrix); the consequences of model reduction may vary from model to model.

Appendix B6: Application analysis: Power plant (Jacobi & Hobbs, 2008)

Table B6.1: Full model and set of attribute values for power plant application (rows are alternatives; the attributes X1-X13 are defined at the end of this appendix)

Alt     X1    X2    X3    X4    X5    X6    X7    X8    X9    X10   X11   X12   X13
Ref     0.89  0.72  0.77  0.88  0.39  0.41  0.41  1.00  0.89  0.55  1.00  0.86  0.98
A       0.89  0.72  0.76  0.88  0.46  0.42  0.41  1.00  0.89  0.55  1.00  0.86  0.98
B       1.00  0.90  0.77  0.93  0.39  0.40  0.41  1.00  0.93  0.73  1.00  0.93  0.81
C       0.76  0.69  0.77  0.83  0.43  0.45  0.47  1.00  0.17  0.59  1.00  0.72  0.98
D       0.62  0.21  0.75  0.77  0.38  0.41  0.41  1.00  0.89  0.48  1.00  0.71  0.98
E       0.87  0.81  0.76  0.87  0.37  0.39  0.41  0.00  0.92  0.70  1.00  0.81  0.98
F       0.74  0.70  0.77  0.82  0.70  0.25  0.53  1.00  0.86  0.49  1.00  0.70  0.90
G       0.73  0.70  0.77  0.82  0.76  0.59  0.53  1.00  0.86  0.49  1.00  0.69  0.90
H       0.74  0.70  0.79  0.82  0.69  0.49  0.53  1.00  0.86  0.49  1.00  0.69  0.98
I       0.63  0.40  0.76  0.78  0.96  0.87  0.82  1.00  0.89  0.51  0.00  0.67  0.85
J       0.86  0.61  0.67  0.80  0.48  0.50  0.47  1.00  0.92  0.70  1.00  1.00  1.00
K       0.81  0.56  0.71  0.79  0.75  0.62  0.65  1.00  0.86  0.50  1.00  0.96  0.92
L       0.68  0.54  0.71  0.73  0.85  0.68  0.65  1.00  0.14  0.48  1.00  0.82  0.92
M       0.76  0.65  0.77  0.88  0.37  0.33  0.35  1.00  0.89  0.58  1.00  0.79  0.96
N       0.60  0.63  0.77  0.83  0.64  0.50  0.47  1.00  0.86  0.50  1.00  0.62  0.87
Weight  0.011 0.014 0.397 0.158 0.067 0.023 0.023 0.034 0.023 0.045 0.011 0.100 0.095

The researchers compared sets of weights elicited from 11 decision makers who were presented with different forms of value trees for the same problem. For this analysis, the set of weights from subject 5 was chosen because every weight in that model is nonzero. Using the full set of weights, alternative B has the highest value at .790 (out of a maximum of 1), followed by A and H, both with a value of .780. The bottom choice is alternative D with a value of .725, giving a range of about .065 between the best and worst choices, which is 6.5% of the total range of values.
Table B6.2 displays the results of the analysis.

Table B6.2: Performance parameters at each level of reduction for power plant application

Parameter                          1/3 reduced   1/2 reduced    2/3 reduced
Number of objectives (out of 13)   9             7              4
Number of possible reduced models  715           1716           715
Hit rate                           0.429         0.328          0.297
Average value loss                 0.0103        0.0136         0.0169
Convergence values
  Median                           0.72          0.60           0.42
  Mean                             0.71          0.59           0.41
  Range                            0.08 – 0.99   -0.04 – 0.98   -0.24 – 0.93

At every reduction level the hit rate is below .5, indicating that the process of formulating objectives has major consequences for the decisions made from this model. It also indicates that the set of alternatives being evaluated is competitive and sensitive to changes in the model. The median convergence values are almost identical to the means. These results may be influenced by the facts that (1) the set of alternatives is large and (2) the range of values is quite narrow, which means the alternatives are highly competitive and even small changes in the model may produce a different top choice or rank order.

Interpretation of reduced model performance

The alternatives in this application represent ways to allocate resources for the future of the power plant. The operational approaches represented by the alternatives are based on fuel costs (an economic approach) or on a combination of economic and emissions approaches. For the most part, many of the alternatives are variations of one another; for example, alternatives G, H, and I are variations of alternative F with additional investments in different energy sources. The lower hit rate may therefore not be much of a problem if transitioning between alternatives does not require major changes. In fact, the best alternative may change over time due to changes in standards or new technology.
In interpreting value loss, every attribute can be expressed in real unit equivalents. The three attributes with the largest weights are discussed in this section, with the complete list of real unit value loss equivalences given in Table B6.3. The attribute with the largest weight in the model used in this analysis represents levelized rates for the first 6 years of operation, measured in $/megawatt-hour; the value losses for 1/3, 1/2, and 2/3 model reduction are 0.15, 0.19, and 0.24, respectively. The second largest weight represents the long-term levelized rates, defined over the first 20 years of operation; the value losses for 1/3, 1/2, and 2/3 model reduction are 0.18, 0.24, and 0.3, respectively, slightly higher than for the short-term rates. While these are all on the order of cents on the dollar, given the scale of energy generated by a power plant, the difference in costs may have a wide impact on the regional economy. The third largest weight represents regional job losses; the value losses for 1/3, 1/2, and 2/3 model reduction are equivalent to 478, 632, and 785 jobs, respectively.

Table B6.3: Complete real unit value loss equivalence for each attribute in power plant application

Attribute  Attribute range  Real unit difference,  Real unit difference,  Real unit difference,
                            1/3 reduction          1/2 reduction          2/3 reduction
X1         380              3.91                   5.17                   6.42
X2         306              3.15                   4.16                   5.17
X3         14.1             0.15                   0.19                   0.24
X4         17.5             0.18                   0.24                   0.3
X5         46,440           478.33                 631.58                 784.84
X6         7,484            77.09                  101.78                 126.49
X7         17               0.18                   0.23                   0.29
X8         1                0.01                   0.01                   0.02
X9         1,223            12.6                   16.63                  20.67
X10        5,909            60.86                  80.36                  99.86
X11        600              6.18                   8.16                   10.14
X12        46,453           478.47                 631.76                 785.06
X13        126              1.3                    1.71                   2.13
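The entries in Table B6.3 can be reproduced by multiplying the average value loss at each reduction level (Table B6.2) by the attribute's range. This relationship is inferred from the tabulated numbers rather than stated explicitly; note that the airline appendix instead divides the loss by the attribute weight before scaling by the range. A spot-check for the job-loss attribute X12 (Python, for illustration):

```python
# Average value loss at each reduction level (Table B6.2) and the range of
# attribute X12, regional job losses (Table B6.3).
value_loss = {"1/3": 0.0103, "1/2": 0.0136, "2/3": 0.0169}
x12_range = 46453

# Real-unit equivalence as tabulated: value loss times attribute range.
jobs = {level: vl * x12_range for level, vl in value_loss.items()}
```

Rounding the three results recovers the 478, 632, and 785 jobs cited in the text.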
List of attributes:
X1: Levelized annual revenue requirements, years 0-20
X2: Average capital expenditures, years 0-20
X3: Levelized rates, years 0-6
X4: Levelized rates, years 0-20
X5: Average SO2 emissions, years 0-20
X6: Average CO2 emissions, years 0-20
X7: Average NOx emissions, years 0-20
X8: Number of new sites for coal ash disposal required
X9: Total land needed for new generation, years 0-20 (acres)
X10: Remotely sited new generation capacity
X11: Nuclear power (megawatts)
X12: Job losses, region
X13: Average emergency power, years 0-20

Appendix B7: Application analysis: Radioactive waste (Brothers et al., 2009)

Table B7.1: Full model and set of attribute values for radioactive waste application

Objective                  Alternative A (1)  Alternative B (2)  Alternative C (3)  Weight
Total cost                 0.08               1.00               0.00               0.083
Schedule                   1.00               1.00               0.00               0.056
Worker exposure            0.50               1.00               0.00               0.111
Public health              0.75               1.00               0.00               0.139
Environmental impact       0.00               1.00               0.90               0.111
Final product amount       0.00               1.00               0.30               0.083
Final product confinement  1.00               0.00               1.00               0.111
Technical risk             0.00               1.00               0.00               0.111
Regulatory risk            0.80               1.00               0.00               0.097
Cost risk                  0.40               1.00               0.00               0.069
Schedule risk              0.20               1.00               0.00               0.028

The multiattribute model that includes every objective in the set (referred to hereafter as the full model) determines that alternative B is the top choice with a value of 0.880, followed by alternative A with a value of 0.444 and, lastly, alternative C with a value of 0.269. The range of the alternative valuations (the difference between the top and last choice) is .611. Consider that the alternative set for this problem has some distinct properties that may affect reduced-model performance. The most important is that alternative B dominates the other alternatives on all but one objective (final product confinement).
The implication for constructing reduced models is that, for alternative B not to dominate the other choices, that particular objective must be excluded from the model. Additionally, the weight on the non-dominated objective would have to be large enough to counter the domination on the other attributes. Another notable characteristic is that the number of alternatives is relatively small (in fact the fewest of all cases included in this study), with a relatively wide difference in value between the top and the least favored choice according to the full model. The results of the analysis are displayed in Table B7.2.

Table B7.2
Performance parameters at each level of reduction for radioactive waste application

Parameter | 1/3 reduced | 1/2 reduced | 2/3 reduced
Number of objectives (out of 11) | 7 | 5 | 3
Number of possible reduced models | 330 | 462 | 165
Hit rate | 1.000 | 0.987 | 0.818
Average value loss | 0 | 0.0058 | 0.0902
Convergence values: median | 1.00 | 1.00 | 1.00
Convergence values: mean | 0.95 | 0.89 | 0.72
Convergence values: range | 0.50 – 1.00 | 0.50 – 1.00 | −1.00 – 1.00

Observe that in the 1/3 reduced condition, the performance of the reduced model is identical to the full model on almost every parameter. The convergence values still allow for some variation, but because the reduced model always picks the top choice, the only possible rank order variation occurs between the 2nd and 3rd ranked choices. In the 1/2 reduced condition, the model still has a very high hit rate of 0.987, i.e., less than a 2% likelihood of selecting a different top choice. Value loss is also quite small, around 0.6% of the entire range of value. While the median and range of convergence values are identical to the 1/3 reduced level, the lower mean indicates a larger proportion of rank order variation, most likely between the 2nd and 3rd choices. In the 2/3 reduced condition, hit rate drops further and value loss grows relative to the previous condition.
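The hit rates in Table B7.2 can be reproduced, to within rounding of the published weights, by brute-force enumeration: for every subset of objectives of the given size, rescore the alternatives on the retained objectives and check whether alternative B is still the top choice. A sketch using the rounded Table B7.1 entries:

```python
from itertools import combinations

weights = [0.083, 0.056, 0.111, 0.139, 0.111, 0.083,
           0.111, 0.111, 0.097, 0.069, 0.028]
alts = {
    "A": [0.08, 1.00, 0.50, 0.75, 0.00, 0.00, 1.00, 0.00, 0.80, 0.40, 0.20],
    "B": [1.00, 1.00, 1.00, 1.00, 1.00, 1.00, 0.00, 1.00, 1.00, 1.00, 1.00],
    "C": [0.00, 0.00, 0.00, 0.00, 0.90, 0.30, 1.00, 0.00, 0.00, 0.00, 0.00],
}

def hit_rate(n_keep):
    """Fraction of n_keep-objective submodels whose top choice is still B."""
    subsets = list(combinations(range(11), n_keep))
    hits = 0
    for s in subsets:
        # Renormalizing by the retained weight sum is a positive scaling,
        # so it cannot change which alternative scores highest; skip it.
        score = {k: sum(weights[i] * v[i] for i in s) for k, v in alts.items()}
        if score["B"] >= max(score.values()):
            hits += 1
    return len(subsets), hits / len(subsets)

for keep in (7, 5, 3):   # 1/3, 1/2, and 2/3 reduced, respectively
    n, hr = hit_rate(keep)
    print(f"keep {keep}: {n} submodels, hit rate {hr:.3f}")
```

With these rounded inputs the enumeration yields exactly 330, 462, and 165 submodels, a perfect hit rate at 7 retained objectives, 0.987 at 5, and a value near the reported 0.818 at 3.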
Reduced models still yield the original top choice in a little over 4 out of 5 possible reduced models. However, the value loss jumps by a factor of roughly 15 (from 0.0058 to 0.0902). Convergence values indicate that this level of reduction allows even more variation in alternative rank order, in some scenarios inverting the original rank order. Most of the time, however, the rank order obtained from the full model is robust against all levels of model reduction, as indicated by every median convergence value being 1.

Interpretation of reduced model performance

The original study was commissioned as a reevaluation of a decision problem faced by the Italian authorities in charge of nuclear waste management. Sensitivity analysis from the original study determined that the only way for B not to outperform the other alternatives is to increase the weight of final product confinement from 11.1% to 38.3%, more than three times the original evaluation, which is a relatively major modification. The present analysis supports the conclusion that alternative B is superior to the other alternatives by a wide margin, demonstrated by the hit rate remaining very high even when the set of objectives is substantially reduced.

In describing the consequences of value loss, the factor that is most meaningful in explaining differences in value points is their effect on cost (von Winterfeldt & Edwards, 1986, p. 419). While a utility/value point is essentially unitless, it can be converted to tangible measures such as a dollar amount when the weights and attribute ranges are known. In this application, for the attribute of total cost, the difference between the total cost valued at 1 (most desirable) and the total cost valued at 0 (least desirable) is €51,020,000. Cost is weighted 0.11; therefore one unit of overall value difference is valued at €463,636,364.
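This conversion can be sketched directly: one overall value point is worth (attribute range) / (attribute weight) in the attribute's natural units, and a value loss is that amount scaled down. Using the reported range of €51,020,000 and weight of 0.11 (the reported €463,636,364 implies slightly different rounding than these two figures, so the outputs agree only approximately):

```python
# Convert an overall value loss into one attribute's natural units.
def value_loss_in_units(value_loss, attr_range, weight):
    return value_loss * attr_range / weight

cost_range, cost_weight = 51_020_000, 0.11        # euros, reported weight
one_point = cost_range / cost_weight              # euros per overall value point
loss_half = value_loss_in_units(0.0058, cost_range, cost_weight)
loss_two_thirds = value_loss_in_units(0.0902, cost_range, cost_weight)
print(f"1 value point = about {one_point:,.0f} euros")
print(f"1/2 reduced:  {loss_half:,.0f} euros "
      f"({loss_half / cost_range:.2%} of the budget)")
print(f"2/3 reduced:  {loss_two_thirds:,.0f} euros "
      f"({loss_two_thirds / cost_range:.2%} of the budget)")

# The same conversion for the schedule attribute (range 1 year, weight 0.056):
print(f"schedule loss, 2/3 reduced: "
      f"{value_loss_in_units(0.0902, 1, 0.056):.2f} years")
```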
Value loss for the 1/2 and 2/3 reduced models is 0.0058 and 0.0902 respectively, which is equivalent to €2,689,091 and €41,820,000. The value loss from the 1/2 reduced level corresponds to 5.27% of the total budget, while the value loss from the 2/3 reduced level corresponds to 82% of the entire budget.

Another attribute in the application utilizing a natural measure is scheduling. The difference between the most desirable and least desirable value is 1 year, with a weight of 0.056. Therefore, one unit of overall value difference is valued at 17.86 years (roughly 17 years and 10.5 months). The value loss of 0.0058 is equivalent to 0.10 years (about 5 weeks) and the value loss of 0.0902 to 1.61 years (about 1 year and 7 months), which is larger than the actual range of values between the alternatives. The other attributes in the model are either constructed measures or a mix of natural and constructed measures, which are difficult to analyze and interpret in this way.

Since this application is concerned with picking the top choice among alternatives, the convergence values are not of much concern, as any rank order variation affecting the top choice is captured by the hit rate. While the range and median of convergence values are not reliable measures of model performance here, the mean convergence values could be more indicative. A caveat for drawing conclusions from this particular application is that the attribute values displayed here are extreme in terms of the level of domination between alternatives. This characteristic may not properly reflect the effect of the omission of objectives on typical decision problems.

Appendix C: Survey Materials

Page 1 (Initial generation):

Imagine that you have been accepted for a new job. The job requires you (and any family/dependents) to relocate to a new city. The company that hires you has multiple locations across the country. Thus, you are given the option to choose between several locations.
To help your transition to your new location, a hiring administrator has been assigned to aid you in choosing the location that best suits your lifestyle.

As a starting point, the administrator asks you to list as many characteristics of a location as you can that influence your decision on where to live. Focus on what's really important; think about characteristics that could be a tiebreaker in the event that all other important characteristics are equal among your possible choices. Be specific in describing what you would like from your ideal place to live. For example, write "low cost of living" instead of just "cost of living".

Please list them below (note: these are all text entry boxes)

1. _(self-generated objective 1)_
2. _(self-generated objective 2)_
3. _(self-generated objective 3)_
4. …

Page 2 (Checking off master list):

To help guide you in the process, the administrator then shares with you the list that was compiled from previous employees who had to be relocated. Check off all characteristics in the list that are relevant to you (even if you have previously written them down in the previous section).

□ Low cost of living
□ Low rent prices
□ Low house prices
□ More distance to family
□ Less distance to family
□ More sunny days
□ More snowy days
□ Low taxes
□ Low unemployment rate
□ Low crime rate
□ More comprehensive public transport
□ Less commute time
□ More walkability
□ More biker friendly infrastructure
□ More older residents
□ More younger residents
□ More diversity in racial demographics
□ More night life options
□ More available dating pool
□ More access to coasts
□ More options for outdoor activity
□ Higher quality of K-12 schools
□ Higher quality of higher education institutions (university, college, etc.)
□ More access to religious communities
□ More Democratic party voters
□ More Republican party voters
□ More access to quality health care facilities
□ High population density
□ Low population density

Page 3 (Mapping):

You are asked to review the list you checked off in the previous step, listed below:

1. (checked master list objective 1)
2. (checked master list objective 2)
3. (checked master list objective 3)
4. …

On the left you will see the list of characteristics you generated in the first step:

1. _(self-generated objective 1)_
2. _(self-generated objective 2)_
3. _(self-generated objective 3)_
4. …

Drag them to the box on the right if the entries overlap with the list above.

(Box where participants drag and drop entries from the self-generated list that map to the checked list)

Page 4 (Finalizing):

Now it's time to finalize your list. The administrator tells you to do so by sorting the characteristics you've written down or chosen from the previous list. Place the relevant characteristics in the box on the top right and rank them by order of importance as best as you can. Place any redundant or irrelevant ones in the box below it.

1. _(self-generated objective 1)_
2. _(self-generated objective 2)_
3. _(self-generated objective 3)_
4. …
5. (checked master list objective 1)
6. (checked master list objective 2)
7. …

(Box where participants drag and drop the final list of objectives from the list on the left)
(Box where participants drag and drop the discarded list of objectives from the list on the left)

Page 5 (Direct judgment):

Choose five cities from the list below and rank them according to your preference:

Boulder, Colorado
Decatur, Illinois
Wichita, Kansas
Reno, Nevada
Baton Rouge, Louisiana
Minneapolis, Minnesota
Raleigh, North Carolina
Allentown, Pennsylvania
Virginia Beach, Virginia
Olympia, Washington

(Box where participants drag and drop their top five cities according to their preference)

Appendix D: Attribute measures and consequence table for study III

Objective | Parameter
Low cost of living | Cost of Living index
Low rent prices | Average rent price for 2 bedroom
Low house prices | Average cost of house
More distance to family | Randomly generated
Less distance to family | 1 − value of "more distance from family"
More sunny days | Days per year with some sun
Less sunny days | Days per year with some sun
More snowy days | Snowfall in inches
Less snowy days | Snowfall in inches
Low taxes | Sum of sales and income tax
Low unemployment rate | Unemployment rate in 2016
Low crime rate | Violent and property crime index
More comprehensive public transport | Walkscore transit rating
Less commute time | Average commute time
More walkability | Walkscore walkability rating
More biker friendly infrastructure | Walkscore biker friendliness rating
More older residents | Average age of residents
More younger residents | Average age of residents
More diversity in racial demographics | Niche.com diversity rating
More night life options | Niche.com nightlife rating
More available dating pool | Percent of single residents
More access to coasts | Distance to coasts
More options for outdoor activity | Niche.com outdoor activity rating
Higher quality of K-12 schools | Niche.com public school rating
Higher quality of higher education institutions (university, college, etc.) | Availability and quality of the higher education institutes in the area
More access to religious communities | Percent of residents who are religious adherents
More Democratic party voters | Percentage of residents voting for the Democratic party in last major election
More Republican party voters | Percentage of residents voting for the Republican party in last major election
More access to quality health care facilities | Physicians per 100,000 people
High population density | Residents per square mile
Low population density | Residents per square mile

Consequence table (normalized attribute values for each city):

City | Boulder, CO | Decatur, IL | Wichita, KS | Reno, NV | Baton Rouge, LA | Minneapolis, MN | Raleigh, NC | Allentown, PA | Virginia Beach, VA | Olympia, WA
Low cost of living | 0 | 1 | 0.869521 | 0.521916 | 0.784913 | 0.620795 | 0.663609 | 0.801223 | 0.629969 | 0.624873
Low rent prices | 0 | 1 | 0.902287 | 0.58316 | 0.784823 | 0.529106 | 0.634096 | 0.672557 | 0.438669 | 0.608108
Low house prices | 0 | 1 | 0.901169 | 0.544163 | 0.849534 | 0.691818 | 0.687232 | 0.895547 | 0.695813 | 0.62169
More distance to family | 0.921732 | 0.831297 | 0.425667 | 0.031231 | 0.915105 | 0.039844 | 0.146228 | 0.011228 | 0.634672 | 0.460083
Less distance to family | 0.078268 | 0.168703 | 0.574333 | 0.968769 | 0.084895 | 0.960156 | 0.853772 | 0.988772 | 0.365328 | 0.539917
More sunny days | 0.939655 | 0.543103 | 0.732759 | 1 | 0.672414 | 0.534483 | 0.663793 | 0.560345 | 0.663793 | 0
Less sunny days | 0.060345 | 0.456897 | 0.267241 | 0 | 0.327586 | 0.465517 | 0.336207 | 0.439655 | 0.336207 | 1
More snowy days | 1 | 0.18169 | 0.177465 | 0.307042 | 0 | 0.73662 | 0.050704 | 0.409859 | 0.076056 | 0.083099
Less snowy days | 0 | 0.81831 | 0.822535 | 0.692958 | 1 | 0.26338 | 0.949296 | 0.590141 | 0.923944 | 0.916901
Low local taxes | 0.274878 | 0.173974 | 0.313152 | 1 | 0 | 0.052192 | 0.410578 | 0.702853 | 0.514962 | 0.911621
Low unemployment rate | 1 | 0 | 0.536585 | 0.731707 | 0.268293 | 0.780488 | 0.585366 | 0.02439 | 0.902439 | 0.292683
Low crime rate | 0.863346 | 0.555556 | 0 | 0.542784 | 0.10728 | 0.075351 | 0.731801 | 0.661558 | 1 | 0.515964
More comprehensive public transport | 0.859649 | 0 | 0.350877 | 0.491228 | 0 | 1 | 0.526316 | 0.631579 | 0.368421 | 0.614035
Less commute time | 0.633842 | 1 | 0.757282 | 0.585298 | 0.316227 | 0.104022 | 0.040222 | 0.018031 | 0 | 0.443828
More walkability | 0.666667 | 0.025641 | 0.102564 | 0.179487 | 0.25641 | 1 | 0 | 1 | 0.025641 | 0.205128
More biker friendly infrastructure | 1 | 0.044444 | 0.155556 | 0.244444 | 0.133333 | 0.955556 | 0 | 0.066667 | 0.066667 | 0.355556
More older residents | 0 | 1 | 0.579439 | 0.672897 | 0.261682 | 0.35514 | 0.448598 | 0.327103 | 0.682243 | 0.934579
More younger residents | 1 | 0 | 0.420561 | 0.327103 | 0.738318 | 0.64486 | 0.551402 | 0.672897 | 0.317757 | 0.065421
More diversity in racial demographics | 0 | 0.636364 | 0.636364 | 0.636364 | 0.636364 | 0.636364 | 1 | 0.636364 | 0.636364 | 0.272727
More night life options | 1 | 0.428571 | 0 | 0.428571 | 0.428571 | 1 | 0.714286 | 0.714286 | 0.714286 | 0.428571
More available dating pool | 0.971561 | 0.459614 | 0.198045 | 0.465575 | 1 | 0.914781 | 0.536868 | 0.74278 | 0 | 0.464598
More access to coasts | 0 | 0.5 | 0 | 0.25 | 0.75 | 1 | 1 | 0.75 | 1 | 0.75
More options for outdoor activity | 0.777778 | 0.222222 | 0.388889 | 1 | 0 | 0.555556 | 0.555556 | 0.555556 | 0.777778 | 0.777778
Higher quality of K-12 schools | 1 | 0.291667 | 0.833333 | 0.833333 | 0 | 0.416667 | 1 | 0.166667 | 1 | 1
Higher quality of higher education institutions (university, college, etc.) | 1 | 0.2 | 0.8 | 1 | 1 | 1 | 1 | 0.7 | 0.6 | 0.3
More access to religious communities | 0.319185 | 0.500118 | 0.58629 | 0.005551 | 1 | 0.658663 | 0.350973 | 0.558954 | 0.282789 | 0
More Democratic party voters | 1 | 0.054787 | 0 | 0.296434 | 0.470998 | 0.788345 | 0.619234 | 0.401706 | 0.250835 | 0.439751
More Republican party voters | 0 | 1 | 0.955951 | 0.682628 | 0.621962 | 0.182905 | 0.447244 | 0.686768 | 0.778184 | 0.41986
More access to quality health care facilities | 0.866165 | 0.080451 | 0.041353 | 0.254887 | 0.603759 | 1 | 0 | 0.814286 | 0.176692 | 0.130075
High population density | 0.391933 | 0 | 0.114956 | 0.079716 | 0.152342 | 1 | 0.230701 | 0.86848 | 0.017366 | 0.170166
Low population density | 0.608067 | 1 | 0.885044 | 0.920284 | 0.847658 | 0 | 0.769299 | 0.13152 | 0.982634 | 0.829834
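The consequence-table entries are linear rescalings of the raw attribute measures to [0, 1] over the ten cities, with "low"/"less" objectives reverse-scored so that 1 is always best; this is why each more/less pair (e.g., more vs. less sunny days) sums to 1 for every city. A sketch of that rescaling; the raw sunny-day counts below are inferred to be consistent with the table's rescaled values, not taken from the original data sources:

```python
def rescale(xs, higher_is_better=True):
    """Min-max rescale raw attribute measures to [0, 1] across alternatives."""
    lo, hi = min(xs), max(xs)
    vals = [(x - lo) / (hi - lo) for x in xs]
    return vals if higher_is_better else [1 - v for v in vals]

# Inferred raw data: days per year with some sun, Boulder through Olympia.
sunny_days = [245, 199, 221, 252, 214, 198, 213, 201, 213, 136]
more_sunny = rescale(sunny_days, higher_is_better=True)
less_sunny = rescale(sunny_days, higher_is_better=False)
# The two orientations are complements, as in the Appendix D table.
print([round(m, 6) for m in more_sunny])
print([round(m + l, 6) for m, l in zip(more_sunny, less_sunny)])
```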
Abstract
Multiattribute decision analysis requires decision makers to identify their objectives as a critical part of sound decision making, and failure to do so may result in poor decisions (Keeney & Raiffa, 1976). Research has shown that decision makers are often ill equipped to identify objectives and could not generate more than half of the objectives they later recognized to be important (Bond, Carlson & Keeney, 2008; 2010). However, previous studies examining missing attributes and missing objectives found that excluded attributes are likely to be unimportant, or that they are deliberately excluded as a heuristic.

Three approaches are presented to examine the consequences of missing objectives in multiattribute models: (1) analysis of existing multiattribute models from published applications; (2) Monte Carlo simulations evaluating the decision quality of reduced models, also used to test the effects of model reduction under various model characteristics, such as the number of objectives and alternatives, intercorrelated objective sets, and different weighting methods; (3) analysis of data collected from a behavioral assessment of multiattribute models constructed from both the list of objectives generated by decision makers and the list that includes objectives identified from a reference list.

The results of each study provide a variety of outcomes concerning the consequences of missing objectives in multiattribute models. In general, the largest determinant of the impact of a missing objective is the attribute intercorrelation within the decision space. However, missing objectives might not necessarily be a detriment to making optimal decisions. As long as the set of objectives sufficiently captures the essential trade-offs, it is still very much possible to produce satisfactory outcomes from models with missing objectives.