ENCHANTED ALGORITHMS
The Quantification of Organizational Decision-Making

by

Vern L. Glaser

A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the Requirements for the Degree
DOCTOR OF PHILOSOPHY (BUSINESS ADMINISTRATION)

December 2014

ABSTRACT

Organizations quantify increasingly diverse types of decision-making processes. Traditionally, many scholars conceptualize quantification as a means of rationalizing decision-making processes, wherein organizational leaders strive to control business processes and eliminate human bias. Recently, however, other scholars have suggested that the quantification of decision-making processes often reflects political compromises that resolve conflicts between competing interests. To develop theory that explains the tension between these perspectives, I ask the research question: how do organizations quantify decision-making processes? To answer this question, I engage in participant observation of Gaming Expert and Algo-Security, two organizations that use game theory to quantify decision-making in the security industry. While prior research emphasizes the role of quantification in rationalizing business processes or resolving political conflict, my study shows that quantifying decision-making processes leads to a different outcome. Specifically, quantification functions as a cultural tool, an “enchanted algorithm,” that organizational actors draw on to develop innovative approaches to perennial organizational challenges. By offering an empirically grounded analysis of how organizations quantify decision-making, I contribute to strategy research on the emergence of routines and capabilities, as well as to research on organizational decision-making more broadly.

TABLE OF CONTENTS

Abstract
List of Tables
List of Figures
Acknowledgements
Chapter 1 – Introduction
  Motivation
  Core Research Questions
  Structure of Dissertation
  Data and Methods Overview
    Methodological Approach
    Empirical Context
    Data Collection
    Data Analysis
  Contributions
Chapter 2 – Theoretical Perspectives on Quantification
  The Rationalization Perspective
    Frederick Taylor and Scientific Management
    W. Edwards Deming and Total Quality Management
    Thomas Davenport and the “Analytics” Literature
    Summary of the Rationalization Perspective
  The Sociology of Quantification Perspective
    Theodore Porter and Trust in Numbers
    Wendy Espeland and Commensuration
    Summary of the Sociology of Quantification Perspective
  Limitations of Existing Perspectives
    Limitations of the Rationalization Perspective
    Limitations of the Sociology of Quantification Perspective
    The Theoretical Gap
  Conclusion
Chapter 3 – Constructing Strategy Tools to Quantify Decision-Making
  Introduction
  Theoretical Background
  Methods
    Empirical Context
    Data Collection
    Data Analysis
  Findings: Developing a Game-Theoretic Tool for Strategic Decision-Making
    Prototyping
    Pinging
    Contextualizing
  A Theoretical Model of How Professionals Inscribe Quantification Expertise into Strategy Tools
  Discussion
Chapter 4 – The Use of Statistical Expertise to Automate Decision-Making Routines
  Introduction
  Theoretical Background
  Methods
    Empirical Context
    Data Collection
    Data Analysis
  Findings
  Scheduling Security Resources
    The Metropol Scheduling Routine
    The Gaming Expert Game-Theoretic Scheduling Algorithm
    Two Distinct Approaches to Security Scheduling
  Bridging Metropol and Gaming Expert Approaches to Decision-Making Routines
    Modeling and Mapping
    Establishing Algorithmic Jurisdiction
    Constructing Validation
  Discussion
    Limitations and Opportunities for Further Study
    Conclusion
Chapter 5 – Conclusion
  Contributions
  Conclusion
References

LIST OF TABLES

Table 1 – The Deadly Diseases of Western Management
Table 2 – Deming’s 14 Points
Table 3 – Davenport and Harris’ Five-Stage Process
Table 4 – A Comparative Synopsis of the Rationalization Perspective
Table 5 – A Comparative Synopsis of the Sociology of Quantification Perspective
Table 6 – Categorizing Theoretical Perspectives on Quantification
Table 7 – Algo-Security Target Markets
Table 8 – Algo-Security Sensegiving Documents
Table 9 – Proposed Website Comparing Transportation and Event Security Markets
Table 10 – Representative Data Supporting Interpretations
Table 11 – Comparison of Metropol and Gaming Expert Approaches to Scheduling Routines

LIST OF FIGURES

Figure 1 – Breakdown of Analytical Expertise
Figure 2 – Data Structure, Constructing Strategy Tools for Quantification of Decision-Making
Figure 3 – Strategy Comparison for Effectiveness
Figure 4 – Theoretical Model, How Professionals Construct Strategy Tools to Quantify Decision-Making
Figure 5 – Data Structure, The Use of Statistical Expertise to Automate Decision-Making Routines
Figure 6 – Theoretical Model, How Organizations Use Statistical Expertise to Automate Decision-Making Routines
Figure 7 – Simple Example of a Game Matrix for Security Scheduling

ACKNOWLEDGEMENTS

During the last five years, my dissertation has been influenced and shaped by many friends and colleagues, and I would like to acknowledge their invaluable contributions. First and foremost, I would like to thank my advisor and committee chair, Peer Fiss. Throughout my doctoral education, Peer consistently challenged me to think deeply about my research and always encouraged me to pursue field sites that offered a plethora of latent possibility. He displayed incredible wisdom in guiding me through the creative process of discovering and developing a theoretical contribution of significance. Ultimately, Peer’s generative coaching and critique have provided me with a wonderful foundation for my academic career.

I would also like to acknowledge the other members of my dissertation committee. With her creative and inquiring questions, Nina Eliasoph continually pushed my thinking about cultural sociology generally and the development of the theoretical model in my dissertation more specifically. I also appreciated her consistent exhortation to write more clearly. Kyle Mayer provided me with a tremendous amount of coaching and guidance about how to build relationships in the Academy, and helped me think through strategies for connecting my research to secondary audiences. Omar El Sawy provided me with invaluable encouragement and advice about how to develop my long-term research stream related to analytics and Big Data. Taken as a whole, I am incredibly grateful for the support, critique, and encouragement my committee gave me throughout my dissertation research.

I would also like to acknowledge the vital contribution of all of the individuals represented with pseudonyms in this dissertation. I had no prior experience in the law enforcement industry, and had limited experience in complex game theory and statistical modeling.
As a result, I asked many “stupid” questions that participants humored by providing me with patient explanations, explanations that enabled me to appreciate and experience these diverse social worlds. During my fieldwork, I was particularly impressed by the professionalism, competence, and curiosity displayed by many different people playing many different roles in law enforcement activities. I hope that my dissertation provides readers with a taste of the passion with which the individuals working in Gaming Expert, Algo-Security, and Metropol pursued their quest to make society safer.

I am also quite grateful for the stimulating environment of the University of Southern California’s Marshall School of Business and the Department of Sociology. Close friendships with fellow students – particularly Derek Harmon and Mariam Krikorian – helped me make it through the dissertation. They spent countless hours providing informal and formal feedback on various versions of this research, feedback that has subtly and significantly shaped the end product. Additionally, many current and former faculty members spent a tremendous amount of energy nurturing my nascent academic career. In particular, I would like to thank Alexandra Michel, Mark Kennedy, and Sandy Green for their passion for research and their formative influence on my academic thinking. I would also like to acknowledge Paul Lichterman for his clear exposition of the methodological principles used to craft excellent qualitative research. I am very grateful to Tom Cummings for his constant support of my research. The atmosphere at USC was quite conducive to developing creative research, and I know that conversations with diverse faculty such as Paul Adler, Nate Fast, Victor Bennett, Nan Jia, Lori Yue, and many others formed a backdrop for my dissertation.

Finally, I would like to thank my family and friends for their unwavering support and expressions of confidence and encouragement during this lengthy, grueling process.
Peter Brewin’s provocative questioning almost fifteen years ago probably paved the way for my entry into the Academy; his mentoring and friendship certainly helped me persevere through challenging times during my dissertation. Many close friends helped me cultivate and maintain a passion for research and a love for learning: in particular, Howard, Roberta, Chris, and Joe. I thank my mom Barbara, and my sister Kathy, for their constant support; I wish my dad could celebrate this with me – I miss him, and am more grateful to him than words can express. I thank my three sons, David, Ethan, and James, for their curiosity, love for learning, energy, and enthusiasm, which lifted me up more during this process than they realize. Most of all, I thank my wife DeAna: her passion to live in and create a just, kind, and caring society strengthens and motivates me, and it is to her that I dedicate this dissertation.

I dedicate this dissertation to my wife, DeAna

CHAPTER 1 – INTRODUCTION

“Baseball – of all things – was an example of how an unscientific culture responds, or fails to respond, to the scientific method.” (Lewis, 2004)

Motivation

In his bestselling book Moneyball, Michael Lewis describes how Billy Beane and the Oakland A’s used statistics to revolutionize their strategy for winning baseball games (Lewis, 2004). At the heart of this story lies a simple – but deep and profound – conflict between two different types of experts. The first type of expert, baseball’s talent scout, evaluated individual talent by observing players and forming interpretive judgments about their future potential based on an assessment of raw talent. The second type of expert, baseball’s statistical analyst, evaluated individual talent by analyzing players’ quantitative, historical records and forming interpretive judgments about their future potential based on statistical analyses.
These experts provided the organization with clashing recommendations on fundamental strategic questions. For example, the talent scouts advocated investing in high-cost, high-potential high school players, while the statistical analysts advocated investing in low-cost, proven college and minor league players. The talent scouts evaluated players based on their “gut” and their “intuition”; the statistical analysts evaluated players based on the “facts” and the “numbers.” As the story unfolds, Lewis resolves this tension between domain experts and statistical experts: the statistical experts win, as their methods propel the A’s to an unprecedented twenty-game winning streak.

Moneyball illustrates a phenomenon of theoretical and practical interest for organizational scholars: a macro-cultural shift towards expertise requiring the quantification of organizational decisions. By using the term quantification of organizational decisions, I refer to the ways in which organizations make decisions using numbers and measurements they create and analyze. This trend towards quantification has begun to influence increasingly diverse organizational decision-making dilemmas. For example: how do organizations select, promote, and retain talented individuals for career development? Many popular consultants and leadership gurus emphasize the role of the executive in appraising the “fit” of an employee with the “culture” of the organization (Lencioni, 2010). In contrast, however, some leading human resource scholars advise organizational leaders to incorporate statistical analysis into their talent-related human resources business practices (Boudreau & Jesuthasan, 2011). Another example: how do executives make major strategic decisions? Some famous executives highlight the power of the intuitive “gut” decision (Welch & Byrne, 2003). Others, however, argue for a quieter, more analytical and mathematical approach to strategic decision-making (Davenport, Harris, & Morison, 2010).
This battle for decision-making predominance between a leader’s experience-based intuition and an organization’s analysis of quantified “facts” features prominently in popular business publications and significantly impacts organizational performance.

A large body of management research argues that the quantification of decision-making processes leads to superior organizational performance. Frederick Taylor’s philosophy of Scientific Management, for example, argued that when organizations quantify the activities of their employees, accountability improves and production increases (Taylor, 1911). Similarly, W. Edwards Deming’s philosophy of Total Quality Management suggested that the most productive organizations identify quantifiable measurements for quality, and then track and monitor performance with statistical control charts (Deming, 1982). Recently, Thomas Davenport and colleagues have extended the work of these classic management scholars by suggesting that when organizations use quantifiable analytical methods to inform all significant business decisions, they can develop competitive advantage (Davenport & Harris, 2007; Davenport et al., 2010; Davenport, 2014). In developing the link between the quantification of decision-making and organizational performance, these scholars praise the many virtues that accompany quantification: improved objectivity, increased accountability, and the creation of a common language that crosses departmental or geographic boundaries.

Other scholars, however, highlight the negative consequences of quantification. By distilling characteristics of a complex environment into quantitative measurements, organizational actors simplify the world and strip away meaningful information (Porter, 1996) central to the tacit knowledge of workers (Lave, 1996).
When organizations rely on quantitative performance measurements such as rankings, they may focus their attention in unintended and unhealthy directions (Sauder & Espeland, 2009). Powerful organizational actors may use quantified processes to dominate less powerful actors (Porter, 1996; Zuboff, 1988). Quantification becomes embedded in technologies of rationality that can hamper creativity and mis-specify situations in ways that produce significant disasters (March, 2006; Taleb, 2010). Ultimately, the quantification of organizational decision-making may commensurate items or values that are not commensurable, leading the organization to drift away from historical ideals and values (Espeland & Stevens, 1998; Weber, 1958). Taken as a whole, this research suggests that the trend towards quantification possesses a dark underbelly.

Although scholars have studied and debated the consequences of quantification, surprisingly little research has investigated the processes by which organizations quantify their decision-making (Cabantous & Gond, 2011). Normative research advocating the quantification of decision-making recommends particular sets of implementation practices, but fails to study the process of implementation itself (Taylor, 1911; Deming, 1982; Davenport & Harris, 2007). Recently, some scholars have suggested creating a research agenda to investigate “the sociology of quantification,” but this work remains nascent and has only begun to develop some foundational concepts of quantification (Espeland & Stevens, 2008). Similarly, scholars have suggested that additional study should investigate how decision tools incarnate the expertise of quantification systems such as economics (Cabantous & Gond, 2011; Jarzabkowski & Kaplan, 2014). My dissertation seeks to build on such calls to research the phenomenon of the quantification of decision-making. I suggest that this research is important for several reasons.
First, much existing research conceptualizes the quantification of organizational decision-making in a way that assumes quantification can reflect a decision environment objectively and accurately. For example, Taylor (1911) suggests that employees make many organizational decisions using “rules of thumb” that represent individual, biased, and inaccurate perceptions of the environment; Scientific Management seeks to replace such biased decision-making processes with quantified, objective ones. Deming (1982) critiques idiosyncratic, reactive decision-making; he recommends that organizations consider decision-making a component of an objective, stable business process. Davenport et al. (2010) highlight the limitations of intuition and experience in decision-making; they advocate overcoming these limitations by using particular analytical techniques that fit the organization’s decision-making environment. Each of these research streams assumes that a quantified decision problem more accurately represents reality and that organizations can “optimize” decisions using quantification techniques. Other research has shown, however, that the quantification of decision-making inherently simplifies and defines reality in subtle and important ways (Porter, 1996). For example, using economic techniques to value an environmental disaster re-shapes societal understanding of the importance of the environment (Fourcade, 2011). Two specific critiques of rationalized, quantified decision-making recur in the academic literature (March, 2006). First, such an emphasis on rationality leads to overly conventional thinking that lacks creativity (Steuerman, 2003). Second, when organizations incorporate quantified decision-making into overly complex systems, they may mis-specify situations and produce major disasters (Albin & Foley, 1998).
To understand this tension between the positive and negative impacts of the quantification of decision-making, scholars have specifically highlighted the importance of developing theory about decision-making processes in the context of organizational practices and strategic decision-making tools (Cabantous & Gond, 2011; Jarzabkowski & Kaplan, 2014).

Second, quantification leads to a fundamental shift in the expertise used in decision-making. When domain experts make decisions, they draw on their historical experience of the domain. Scholars and lay people alike describe decision-making based on such historical experience with adjectives such as “intuitive” or “gut-based.” The quantification of decision-making requires the delineation and explication of such tacit processes in mathematical models; to utilize massive quantities of data and analyze complicated problems, organizations must rely on sophisticated algorithms that require specialized expertise in “data science” (Miller, 2013; Nisbet, Elder IV, & Miner, 2009). This modeling process inherently forces an interaction between domain expertise and mathematical expertise, with unclear effects. For example, is the process of quantification simply an automation of expert knowledge? Or does quantification shift expertise from one professional jurisdiction to another (Abbott, 1988)? Such political conflict emerges from the nature of quantification, and developing theory about the processes of quantification can help scholars better understand its consequences.

Third, the movement towards the quantification of decision-making has become increasingly central to management discourse with the rise of “Big Data.” The movement towards Big Data resembles other administrative innovations such as Scientific Management, Total Quality Management, Business Process Re-engineering, or Six Sigma.
We have a limited understanding, however, of the processes by which such administrative innovations get instantiated in organizational practices (Kennedy & Fiss, 2009). This limited understanding hampers our ability to identify the antecedents of organizational adoption of administrative innovations and the fidelity and extensiveness of their adoption (Ansari, Fiss, & Zajac, 2010). Consequently, scholars have called for more process studies to investigate the design of such innovations and how organizations actually implement them (Ansari et al., 2010).

Fourth, the quantification of decision-making tends to be examined from the point of view of the organization doing the quantifying. Quantifying decision-making, however, usually involves a professional organization – a consultant or some form of statistical expert. The professional organization needs to translate the principles of an abstract decision rationality such as economics or statistics into the tools and social interactions of the organization quantifying its decision-making process (Jarzabkowski & Kaplan, 2014). Most existing research revolves around the study of the organization’s decision, but little research studies how professional organizations construct a logic of quantification and embed their expertise in the practices and tools of client organizations (Cabantous & Gond, 2011; Jarzabkowski & Kaplan, 2014; Greenwood, Suddaby, & Hinings, 2002).

Finally, the processes by which organizations embed quantified decision-making into software-encoded algorithms are not well understood (D’Adderio, 2011; Pentland & Feldman, 2008). By embedding a decision-making process into a routine, the encoded, quantified decision-making process undermines the ability of organizational actors to draw on their tacit knowledge and adjust their performance of the routine to match contextual circumstances (D’Adderio,
2008). Consequently, when organizational actors attempt to design a routine with a durable algorithm, unintended consequences may emerge during implementation (Pentland & Feldman, 2008). Scholars do not have a developed understanding of the processes used to design and implement such software artifacts (Pentland & Feldman, 2008). An inductive process study offers an opportunity to shed light on this phenomenon.

Core Research Questions

To fill these gaps in scholars’ understanding of quantification, I seek to answer the following overarching question in my dissertation: How do organizations quantify their decision-making processes? To answer this overarching question in more detail, I ask the following specific research questions:

1. How have scholars historically understood the impact of quantification on organizational decision-making processes?
2. How do professionals inscribe their expertise in quantification into strategy tools?
3. How do organizations quantify specific decision-making processes?

By answering these three research questions, I seek to develop a comprehensive theoretical account of the processes by which organizations quantify decision-making processes.

Structure of Dissertation

My dissertation consists of this introductory chapter, three studies that answer the aforementioned research questions, and a concluding chapter. I now briefly describe each of the core studies. In Chapter Two, I review literature from organizational and sociological perspectives that deals directly with the phenomenon of quantification. I first describe the research of three mainstream management scholars (Taylor, Deming, and Davenport) who approach quantification from a normative perspective, suggesting that quantification leads to improved business performance. Next, I describe research from two sociological scholars (Porter and Espeland) who directly attempt to describe the sociology of quantification.
Finally, I close this chapter by showing the limitations of both of these research streams. Specifically, I highlight the assumptions each of these perspectives makes, and argue that scholars should develop theory about the processes by which organizations quantify decision-making. In Chapter Three, I study the process by which a professional organization constructs a strategy tool to quantify decision-making. To study this process, I conduct a 15-month study of a professional organization that markets and develops a game-theoretic decision-making tool to the security industry. Relying on data from participant observation, I analyze the design and construction of a game-theoretic strategy tool from prototype to commercial deployment. I find that to construct the strategy tool, the professional organization engages in three processes: prototyping, pinging, and contextualizing. In prototyping, the professional organization applies a system of quantification (i.e., game theory) to a particular domain by developing and testing a decision-making model. In pinging, the professional organization alternates between probing to understand their environment and updating their conceptualization of the strategy tool. In contextualizing, the professional organization integrates the tool into a client culture – while maintaining portability of the tool (i.e., making sure they can easily adapt the tool to other markets). While prior research suggests that professionals primarily design strategy tools of quantification to represent the decision environment accurately, my findings suggest that pinging and contextualizing significantly alter the nature of the strategy tool. Specifically, the quantification performed by the strategy tool shifts from being a mechanism to represent a decision environment to being a mechanism through which organizational actors seek to resolve significant organizational or industry-level problems. 
In other words, quantification as performed by the strategy tool does not rationalize organizational processes; instead, quantification as performed by the strategy tool becomes a tool used by domain experts to magnify control over their environment. In Chapter Four, I examine how an organization deploys the logic of quantification to automate decision-making routines. To study this phenomenon, I observe a law enforcement organization and an expert in game theory designing an algorithm to automate a decision-making routine for scheduling and deploying security resources. I find that the law enforcement organization reconciles their grounded, embedded, and pragmatic approach to the routine with the algorithmic expert's inherently abstract, disembedded, and mathematical approach to the routine. Specifically, they perform this reconciliation through three bridging practices: modeling and mapping, establishing algorithm jurisdiction, and constructing validation. While prior research highlights the role of algorithms in rationalizing decision-making processes, my findings suggest that algorithms compel organizational experts to inscribe their dynamic, multi-faceted domain expertise into an "enchanted" algorithm. I close my dissertation in Chapter Five, reviewing the results and summarizing the theoretical implications and contributions.

Data and Methods Overview

Since no existing theoretical framework addresses my research questions, I engage in a process of theoretical discovery rather than verification. Consequently, I use a grounded theory methodology, where "theory is derived from data and then illustrated by characteristic examples of data" (B. Glaser & Strauss, 1967, p. 5). To outline the data and methods used in my dissertation, I first describe my overall methodological approach. I then describe the empirical context for my dissertation. I close this overview by detailing the processes used for data collection and analysis.
Methodological Approach

In a grounded theory approach, the researcher approaches the data without a pre-existing theoretical framework. Rather than apply pre-existing theoretical categories from particular streams of literature, Glaser and Strauss (1967) advise the researcher to identify concepts tightly linked with their empirical context.

An effective strategy is, at first, literally to ignore the literature of theory and fact on the area under study, in order to assure that the emergence of categories will not be contaminated by concepts more suited to different areas. Similarities and convergences with the literature can be established after the analytic core of categories has emerged. (B. Glaser & Strauss, 1967, p. 37)

Conceptual categories emerge as evidence from the empirical context and are also used by the researcher for illustration (Strauss & Corbin, 1998, p. 23). The goal of the research is to identify hypotheses of generalized relations among different categories (Strauss & Corbin, 1998; Gioia, Corley, & Hamilton, 2012). The researcher selects the data used to generate concepts and categories. Unlike traditional theory verification research, the data for grounded theory is collected to a large extent at the discretion of the researcher. Glaser and Strauss use the term "theoretical sampling" to describe the approach the researcher uses for data collection.

The basic question in theoretical sampling (in either substantive or formal theory) is: what groups or subgroups does one turn to next in data collection? and for what theoretical purpose? (B. Glaser & Strauss, 1967, p. 47)

With theoretical sampling, as the research progresses, the type of data and the method of data collection progress as well. As the research unfolds, the goal of the researcher is to develop theoretical understandings of the relationships between concepts.
While traditional variance theorizing generates "know-that" knowledge, my research questions call for theorizing "know-how" knowledge that requires an understanding of the connections between concepts (Langley, Smallman, Tsoukas, & Van de Ven, 2013, p. 4). Specifically, the theory should pay attention to the dynamic relationships between data concepts. My research design builds on the methodological techniques used by other management scholars to develop theory grounded in data, interpreted in a way that can be validated by the scholarly community, and generalizable to other contexts (Eliasoph & Lichterman, 1999).

Empirical Context

To study the quantification of decision-making, I use the empirical context of the security industry: public law enforcement organizations and private security companies that provide security services that protect citizens. This industry provides an appropriate setting in which to develop novel theory about the quantification of decision-making for several reasons. First, security organizations traditionally allocate and deploy security resources with decision-making routines on a daily basis. Second, due to technological advances related to "Big Data," security organizations increasingly rely on large quantities of data from crime databases, video recordings, and sensor data to inform the decisions that shape the performance of the routines allocating security resources. Finally, the security industry serves as an attractive context due to its substantive importance: in the United States, for example, public spending on law enforcement related activities reached almost $300 billion in 2013 (Chantrill, 2013) and private security industry spending was projected to grow to $45 billion in 2013 (IBISWorld, 2012).1

1 Revenue numbers include NAICS codes 56161 (security services) and 56162 (security alarm services).
Within the security industry, I study Gaming Expert, a research organization based in a large metropolitan university, and Algo-Security, an entrepreneurial start-up founded by Gaming Expert. Gaming Expert specializes in conducting academic research related to the security industry. Founded approximately ten years ago, the organization develops research applicable to real-world problems in the security industry. Structurally, Gaming Expert consists of a director, business development staff members, faculty, administrative support, and various post-doctoral researchers and graduate students. During my project, I worked with the Gaming Expert department that used game theoretic algorithms to automate scheduling decisions. Algo-Security was founded as an entrepreneurial start-up to commercialize technology developed by Gaming Expert. Founders of the company include an academic professor who developed the intellectual property while doing research at Gaming Expert and a seasoned Chief Executive Officer with a proven entrepreneurial track record. Other managers and graduates from Gaming Expert also hold advisory and staff positions with Algo-Security. Gaming Expert and Algo-Security use game theory to help security organizations optimally respond to criminal or terrorist activities. Game theory analyzes strategic decision-making by employing formal mathematical models that assume intelligent decision-makers. Specifically, Gaming Expert and Algo-Security use game theory to model security forces defending targets from the attacks of adversaries (i.e., criminals or terrorists). In such competitive situations, the defender and the adversary attempt to outthink each other. Game theory formally models these competitive interactions by defining strategies and payoffs for each player.
Such game theoretic approaches have attracted security organizations because they provide a simplified framework that analyzes the competitive interactions between security providers and adversaries such as criminals or terrorists. The game-theoretic scheduling algorithm models a scheduling decision as a mathematical optimization problem for a security provider and represents a quantification of a decision-making process. Theoretically, a security provider could choose a variety of strategies within the structure of the game matrix. For example, a security provider might choose to use part of their limited resources to protect a particularly high value target like a central hub station all of the time. For their scheduling algorithm, however, Gaming Expert and Algo-Security generated recommended strategies that optimized a metric called expected defender utility. Expected defender utility provided a mathematically optimum metric that represented the expected value of the payoffs based on the defender's strategy and the adversary's potential responses. By calculating and optimizing expected defender utility, Gaming Expert and Algo-Security assume that both the defender and the adversary will act in their self-interest by maximizing their expected potential gain. This mathematical construct of expected defender utility thus functions as the scheduling objective in Gaming Expert and Algo-Security's approach to decision-making. By using game theory to make security-scheduling decisions, Gaming Expert and Algo-Security quantify decision-making processes in the security industry. These decision-making processes historically relied on the judgment and experience of domain experts. In order to deploy a game theoretic decision-making process, quantification had to take place. My fieldwork with Gaming Expert and Algo-Security enabled me to observe social interactions that resulted in the quantification of decision-making processes.
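To make the construct of expected defender utility concrete, consider a deliberately simplified sketch of the underlying logic. This is my own illustration rather than Gaming Expert or Algo-Security's actual model: the target names and payoff values are hypothetical, and real deployments involve many more targets, resources, and constraints. In this minimal setup, the defender commits to a coverage probability for each target, a rational adversary observes that mixed strategy and attacks the target maximizing its own expected payoff, and expected defender utility is evaluated at that best response:

```python
# Illustrative sketch of expected defender utility in a simple security game.
# All target names and payoff numbers below are hypothetical.

def expected_defender_utility(coverage, payoffs):
    """Given the defender's coverage probability per target and a payoff
    table, assume the adversary observes the coverage and attacks the
    target that maximizes the adversary's own expected payoff; return the
    defender's expected utility at that best-response target."""
    def attacker_eu(target):
        c = coverage[target]
        p = payoffs[target]
        return c * p["att_covered"] + (1 - c) * p["att_uncovered"]

    # A self-interested adversary best-responds to the observed strategy.
    attacked = max(coverage, key=attacker_eu)
    c = coverage[attacked]
    p = payoffs[attacked]
    return c * p["def_covered"] + (1 - c) * p["def_uncovered"]

# Hypothetical two-target example: a high-value hub and a smaller station.
payoffs = {
    "hub":     {"def_covered": 5, "def_uncovered": -10,
                "att_covered": -5, "att_uncovered": 10},
    "station": {"def_covered": 2, "def_uncovered": -4,
                "att_covered": -2, "att_uncovered": 4},
}
# Guarding the hub most of the time leaves the station attractive to attack.
coverage = {"hub": 0.8, "station": 0.2}
```

Optimizing expected defender utility then amounts to searching over coverage probabilities for the mix that makes the adversary's best response least damaging, which is why randomized rather than fixed schedules emerge from such models.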
Data Collection

My primary data source is field notes from participant observation of various business activities for Gaming Expert and Algo-Security. I entered the field as a business school doctoral student with a consulting background. In exchange for helping Gaming Expert and Algo-Security with various projects, I received access to their business activities and projects. Specifically, I helped Gaming Expert implement their game-theoretic scheduling algorithm at Metropol, a large West Coast metropolitan law enforcement agency. I also helped Algo-Security with business development activities such as the development of marketing collateral and investor pitch materials. I spent approximately ten hours per week in the field from February 2013 through June 2014. During this time, I participated in various types of activities such as meetings, on-site project updates, conference calls, and field trials of their game theoretic algorithm. I spent time shadowing people doing their job. During this time, natural opportunities emerged for me to interview participants of Gaming Expert, Algo-Security, and client organizations. I conducted these as informal interviews in the field, asking individuals to describe their activities while avoiding "why" or motive questions (Kvale & Brinkmann, 2008). Additionally, I engaged in e-mail conversations and conference calls on topics of theoretical interest. During my participant observation, I took extensive field notes. I used the field notes to record my experiences and provide a logged repository to analyze the data in the future. As Lofland et al. (2005) note:

The Logging Record actually constitutes the data. The data are not the researcher's memories which, in and of themselves, can never be subjected to systematic analysis. Rather, the data consists of whatever is logged and thus is available for careful and systematic inspection. (Lofland et al., 2005, p.
82)

My goal was to create field notes with social realism that minimized bias, while still recognizing the importance of my identity as an observer. I took detailed notes during all of my field interactions, carrying a notebook and pencil with me at all times. As soon as possible after my time in the field (typically within a day), I transferred my handwritten notes into electronic field notes. By taking such detailed field notes, I made my data available for future coding (Lofland et al., 2005). I also gathered supplemental archival data. I obtained and catalogued publicly available information about Gaming Expert, Algo-Security, and client organizations. This general information enabled me both to familiarize myself with industry terminology and to understand the cultural background of each organization. During participant observation, I collected artifacts used for the scheduling routine (i.e., Standard Operating Procedures, checklists, etc.). I also observed Gaming Expert lectures that introduced and explained the algorithm to academic audiences. Additionally, I accumulated archival materials relevant to the design process such as pictures of the mobile software and dashboards developed to represent various outputs from the algorithmic scheduling routine.

Data Analysis

To analyze this collected data, I coded my field notes and archival materials following generally accepted qualitative methods (Gioia et al., 2012; B. Glaser & Strauss, 1967; Strauss & Corbin, 1998; Van Maanen, 1979). In coding my field notes, I looked for concepts of relevance. Gioia et al. (2012) define concepts in the following way:

By 'concept,' we mean a more general, less well-specified notion capturing qualities that describe or explain a phenomenon of theoretical interest. (Gioia et al., 2012)

Concepts provide the basis from which the researcher can develop interpretations and theoretical analyses.
During the coding process, the goal is to develop a data structure that can be used to analyze the data. Following the principles of grounded theory, the data structure should be independent of a pre-existing theoretical framework (Gioia et al., 2012; B. Glaser & Strauss, 1967). To develop this data structure, I follow the recommendations from Gioia et al. (2012). I begin by identifying first order concepts. First order concepts come directly from the field context, and are tightly grounded in the empirical data (Van Maanen, 1979). In this phase, I use the constant comparative method to identify many codes (Strauss & Corbin, 1998), which I eventually distill into first order concepts (Gioia et al., 2012). After developing first order concepts, I identify second order themes that are more theoretical, and relate to my phenomenon of interest. Finally, I consolidate second order themes into aggregate dimensions. This data structure is represented in Figure 2 for Chapter Three's study, and Figure 5 for Chapter Four's study. In the final step of data analysis, I seek to explain how the concepts in the data structure relate to each other. During this time, I pay attention to relationships within the data. This process is dynamic. As Langley (1999) notes,

…process thinking involves considering phenomena dynamically – in terms of movement, activity, events, change, and temporal evolution. (Langley, 1999, p. 271)

The goal of this process is to make the data-to-theory connections clear. In this last phase of data analysis, I show the transformation from the data structure into a dynamic, inductive model. The dynamic theoretical models I develop are represented in Figure 3 for Chapter Three's study, and Figure 6 for Chapter Four's study.

Contributions

My dissertation makes several contributions to management research.
First, I show that clarifying scholars' assumptions regarding the nature of quantification highlights an important gap in our theoretical understanding of quantification. Specifically, prior research either assumes the active efforts of people and treats numbers as reflecting reality (the rationalization perspective) or assumes that numbers have agency, shaping social interactions by both reflecting and defining reality (the sociology of quantification perspective). I argue that conceptualizing quantification from a different set of assumptions – one that emphasizes the active efforts of people while recognizing that numbers both reflect and define reality – provides a better way to study quantification of complex decision-making processes. Second, I show that portrayals of the battle between domain experts and statistical experts do not reflect situations featuring quantification of complex decision-making. Instead, my study shows that the processes required to fuse these different forms of expertise lead to a significant shift in the fundamental nature of expertise. Quantification functions like an educational canon – domain experts learn how the quantification process works and then build their new culture on top of the quantification process – without necessarily understanding the inner mechanics of the quantification itself. This account differs from existing literature, which either highlights the eventual dominance and automation by the statistical expert of the domain expert (i.e., the Moneyball story) or highlights the ways in which the quantification process is inappropriately applied to social phenomena that cannot be quantified. Ironically, quantification becomes a part of culture, and as such should be understood as an integral cultural component rather than a battle between different forms of expertise.
Third, existing research on the implementation of practices focuses on the ways in which practices contribute either to the efficiency or the legitimacy of the organization. Interestingly, although the quantification of decision-making can be viewed as an administrative technology spreading throughout the business world, the adoption of quantified decision-making practices may reflect a subtly different trend. Rationalization accounts highlight the ways that quantification automates existing processes to make them more efficient, but the findings of this dissertation suggest that the nature of quantification of complex decision-making processes may impel creative exploration of new ways of addressing perennial problems. Recent research by Ansari et al. (2010) highlights the ways that practices are adapted to fit politically, culturally, and technically. I extend this research by highlighting the way in which the nature of an administrative adoption may result in transformation rather than adaptation. Fourth, I show how the professional organization goes about inscribing their expertise in quantification into strategy tools. By describing this process, I shed insight into the quantification process from a different perspective – that of the professional organization. I show how the nature of the professional organization inspires analogical thinking that results in the transformation of a tool for quantifying decisions into a tool embedded in client organizations for addressing broader, more fundamental organizational and industry-level problems. In other words, the quantification functions as a core but perhaps unrelated component of organizational business processes. Finally, I develop theory to explain dynamics related to how quantification gets instantiated in software and algorithms.
Current research emphasizes the ways in which quantification becomes hard-wired into many organizational practices, where the quantification becomes hidden and deterministically influences subsequent organizational activities. I show, however, that the complexity and integrated nature of the underlying decisions being quantified require the actors designing the algorithmic decision-making process to visualize future possibilities. Consequently, rather than constraining future performances of the routine, the design of the algorithm shifts the locus of agency for future performances of the routine.

CHAPTER 2 – THEORETICAL PERSPECTIVES ON QUANTIFICATION

The current trend of quantifying decision-making processes extends efforts by management scholars and practitioners to create more efficient organizational routines (Taylor, 1911; Deming, 1982; Haeckel & Nolan, 1993; Davenport & Harris, 2007). Scholars suggest that statistical expertise rationalizes business processes by replacing biased, human decision-making processes with unbiased, mathematical decision-making processes. Specifically, quantifying decision-making processes helps organizations control foundational business processes by representing a process or decision in mathematical terms and using math to identify an optimal response. This normative approach argues that organizations that quantify decision-making processes make better decisions than those that use traditional forms of human expertise. I label this perspective the rationalization perspective. Other scholars, however, view the quantification of decision-making from a less utopian perspective (Porter, 1996; Desrosieres, 2001; Espeland & Stevens, 2008). Specifically, these scholars highlight several negative features inherent in quantification processes. First, quantification oversimplifies complex phenomena, often in a way which undermines the tacit expertise of individuals with decision-making experience (Porter, 1996).
Second, because quantification requires individuals to make judgments about how to simplify such complex phenomena, quantification can focus the attention of organizational actors in unintended and potentially negative locations (Sauder & Espeland, 2009). Third, the categorization processes that accompany quantification can reify existing power structures and marginalize minority perspectives (Porter, 1996; Zuboff, 1988). Fourth, quantification of decision-making processes often becomes embedded in "technologies of rationality" such as computer software that can solidify a dominant perspective, mis-specify situations, and potentially lead to major disasters (March, 2006). These negative consequences arise from the inherent difficulty actors have in the quantification of items or values that may not be commensurable (Espeland & Stevens, 1998; Weber, 1958). I label this perspective the sociology of quantification perspective. In this chapter of my dissertation, I provide a historical perspective on scholarly literature that studies the process of quantification from each of these perspectives. I begin by describing the rationalization perspective. Specifically, I summarize three streams of research that illustrate this perspective: Frederick Taylor and Scientific Management (1911), W. Edwards Deming and Total Quality Management (1982), and Thomas Davenport's perspective on Analytics (Davenport & Harris, 2007; Davenport et al., 2010; Davenport, 2014). Although many different academic disciplines and theoretical perspectives address facets of the quantification of decision-making processes (such as research in operations management, business statistics, or administrative innovations such as Six Sigma), each of the three focal perspectives addressed in my dissertation has impacted western business practices significantly.
For each of these perspectives I describe the organizational problem they identify, the status quo they argue against, and their normative solution that relies on statistical expertise and quantification. I then contrast this perspective with the sociology of quantification perspective. Although many different scholars peripherally deal with the issues of quantification, I highlight two scholars whose research specifically addresses the phenomenon of quantification: Theodore Porter (1996) and Wendy Espeland (Espeland & Stevens, 1998, 2008; Sauder & Espeland, 2009). For each of these perspectives, I describe the organizational problem they suggest quantification deals with, the way in which quantification overcomes the problem, and the potential negative (and often unintended) consequences emerging from the use of quantification. After I describe each of these perspectives, I describe the limitations of both the rationalization perspective and the sociology of quantification perspective. I categorize the perspectives along two dimensions: locus of agency (i.e., does the human use the number, or does the number use the human?) and the ontology of numbers (i.e., does the number represent reality, or do numbers both represent and define reality?). The rationalization perspective features a human-based locus of agency (organizational actors strategically use numbers) and a representational ontology of numbers. I argue that the rationalization perspective's reliance on a representational ontology limits its ability to explain quantification of complex concepts and values. In contrast, the sociology of quantification perspective features a mutually constitutive ontology of numbers in which numbers both define and create reality, but relies on a numbers-based locus of agency. In other words, the tone and emphasis of the sociology of quantification perspective privileges the ways in which quantification as a phenomenon produces unintended consequences.
Drawing on this categorization, I suggest that scholars should study the quantification of decision-making from a perspective that relies on an ontology of numbers allowing numbers to both represent and define reality and that privileges a human locus of agency. In other words, I suggest that we need to develop an understanding of quantification that allows for the vagaries inherent in quantifying complex social phenomena while recognizing the importance of the ways that organizational actors strategically use quantification to control their environment.

The Rationalization Perspective

Frederick Taylor and Scientific Management

In the early 20th century, Frederick Taylor introduced the notion of scientific management. Taylor suggests that society should pursue a simple ideal: maximizing productivity. He argues that by maximizing productivity, society can provide individuals with an optimal life: "maximum prosperity can exist only as the result of maximum productivity" (Taylor, 1911, p. L32). Taylor suggests that workers and management can work together to achieve this prosperity, where management leads and the workers follow. The central problem for management to solve is "that of obtaining the best initiative of every workman" (Taylor, 1911, p. L207). He describes his vision with a utopian flair, inviting readers passionate about scientific management to visit him at his house in Pennsylvania. In his text, Taylor describes the problem scientific management can solve, critiques the way current management practices attempt to solve the problem, and proposes a normative solution that leads to optimal productivity. According to Taylor, the main problem that hampers organizational productivity is soldiering. He defines soldiering as "underworking…deliberately working slowly so as to avoid doing a full day's work" (Taylor, 1911, p. L45). Taylor suggests that there are two basic reasons for soldiering. First, there is a general conflict between management and workers.
Here, Taylor suggests that workers think that if they increase their output, management will reduce staffing. Second, workers use inefficient rule-of-thumb methods to do their jobs. Unlike scholars that emphasize the benefits of tacit knowledge (Lave, 1996), Taylor emphasizes the ways in which workers learn and internalize sub-optimal work practices. Essentially, Taylor equates individuals with inefficient machines in need of repair. Taylor describes the dominant management philosophy of his time as "Initiative and Incentive Management." This philosophy presumes that management's primary objective is to inspire the initiative and the effort of the individual workers. This philosophy assumes that individual workers understand techniques about doing their job that management does not know and cannot know. As a result, the Initiative and Incentive Management philosophy recommends management practices that emphasize the role of leadership in optimizing worker productivity. In other words, the role of the manager is to inspire the worker. To provide inspirational leadership, good managers use a toolkit that relies on special incentives such as promotions, favorable schedules, premium pay, and better working conditions. Such incentives help managers combat a worker's natural tendency to shirk. Taylor, however, pays cursory attention to this issue of motivation. He suggests that inspiring workers requires leaders to understand two important concepts. First, the worker's personal characteristics should "fit" the job. For example, Taylor suggests that finding workers to handle pig-iron requires identification of a person biologically suited to do the job: "the selection of the man…does not involve finding some extraordinary individual, but merely picking out from among very ordinary men the few who are especially suited to this type of work" (Taylor, 1911, p. L458).
Second, he suggests that management should distribute the rewards from an individual's work in a fair and balanced manner. In essence, Taylor advocates a profit-sharing incentive compensation structure. Taylor thus assumes a moral benevolence of management (he ignores possible abuses of power). He also assumes management has an ability to "fit" the nature of the task with the inherent, genetically imprinted personality and ability of the individual. Taylor subordinates this inspirational role of the manager, however, to another role: streamlining work tasks. He suggests that the manager analyze job tasks in a detailed way that does not rely on worker knowledge or initiative. Taylor thus shifts the emphasis away from the management role of combatting shirking by prioritizing the role of management in overcoming worker reliance on inefficient rules of thumb. He describes his solution as a "scientific" approach to address issues related to soldiering. Taylor's solution consists of four duties performed by the manager:

First. [Managers] develop a science for each element of a man's work, which replaces the old rule-of-thumb method.
Second. They scientifically select and then train, teach, and develop the workman, whereas in the past he chose his own work and trained himself as best he could.
Third. They heartily cooperate with the men so as to insure all of the work being done in accordance with the principles of the science which has been developed.
Fourth. There is an almost equal division of the work and the responsibility between the management and the workmen. The management take over all work for which they are better fitted than the workmen, while in the past almost all of the work and the greater part of responsibility were thrown upon the men. (Taylor, 1911, p. L228)

The focus of this "scientific" solution is to deconstruct the worker's job into specific task components.
Management plans the tasks workers do and controls the manner in which the workers perform them. A system of incentives exists only to ensure that workers adhere to management's philosophy. Ultimately, Taylor suggests that certain personalities are required to do this type of abstract planning work.

Within his four-step method for management, Taylor emphasizes the importance of quantification and measurement. Specifically, he argues that the establishment of numbers and metrics should replace the rule-of-thumb practices of the workman.

    The development of a science, on the other hand, involves the establishment of many rules, laws, and formula which replace the judgment of the individual workman and which can be effectively used only after having been systematically recorded, indexed, etc. (Taylor, 1911, p. L244)

In other words, numbers replace tacit knowledge.

Taylor provides several illustrations of the utility of the principles of scientific management. He explains the process for handling pig iron at Bethlehem Steel. He also explains how scientific management positively impacted an array of other organizational forms, including a bricklaying company, a manufacturer of bicycle balls, a machine shop, and a surgeon's practice. In each of these examples, he shows how a consultant (trained in the principles of scientific management) enters the business and replaces rule-of-thumb practices with quantified, measured ways of implementing business processes.

With his philosophy of scientific management, Taylor suggests that the fundamental problem with organizations is soldiering. He argues that existing management philosophies rely problematically on inspiring employees through compensation schemes. He emphasizes the importance of combating a second dimension of soldiering: individuals following inefficient rules of thumb in their daily work practices.
Taylor thus develops a method whereby management deconstructs and reconstructs a work process to optimize productivity. Inherent in his recommendation: managers quantify job tasks by relying on mathematics and analytics to develop standardized best practices. Taylor advocates that decision-making shift from the tacit responsibility of the experienced individual to the explicit responsibility of the manager, who uses quantification to reformulate the decision-making process.

W. Edwards Deming and Total Quality Management

W. Edwards Deming extends the tradition of Taylor by continuing to focus on the importance of managing processes rather than effort. Whereas Taylor emphasizes the importance of productivity, Deming highlights the importance of quality.

    Improvement of quality transfers waste of man-hours and of machine-time into the manufacture of good product and better service. The result is a chain reaction – lower costs, better competitive position, happier people on the job, jobs, and more jobs. (Deming, 1982, p. 1)

Deming also describes his management philosophy in utopian language by suggesting that quality management practices directly result in national wealth: "the wealth of a nation depends on its people, management and government, more than on its natural resources – the problem is where to find good management" (Deming, 1982, p. 5). Like Taylor, Deming sets up a problem, critiques the current trends in management, and advocates a proposed solution for how to manage organizations.

The fundamental problem, according to Deming, is quality (rather than people, as for Taylor). Deming critiques the traditional folklore that places quality and production efficiency in opposition to each other. He develops a consumer-evaluated standard of quality, but suggests that producers, not consumers, should drive quality production. Quality comes from producing the product right the first time, not from having a quality control process that identifies defects.
To illustrate the importance of quality, Deming draws an analogy between quality manufacturing and teaching. He observes that the best teachers often are appreciated long after their students have left. In the same way, producers should lead the effort to produce quality goods that "lead" consumer notions of quality and last well beyond the initial purchase of a product or service.

Known for his support of Japanese management practices and their role in Japan's post-World War II resurgence, Deming critiques Western management practices. He suggests that "deadly diseases…afflict most companies in the Western world" (Deming, 1982, p. 97). These diseases, listed in Table 1 below, "require the total reconstruction of Western management" (Deming, 1982, p. 97).

Table 1 – The Deadly Diseases of Western Management
Deming's list of "diseases" that infect Western management practices (Deming, 1982):
1. Lack of constancy of purpose
2. Emphasis on short-term profits
3. Evaluation of performance (a critique of the annual performance/merit review system)
4. Mobility of management
5. Management by use of visible figures only
6. Excessive medical costs
7. Excessive costs of legal liability

Within these diseases, Deming highlights several themes that indirectly valorize the quantification of decision-making processes. First, he offers a critique of the short-term focus of Western capitalism. By focusing on short-term results, Western capitalism unintentionally forms barriers between workers and management. He makes this point ironically by observing the positive effects of the Japanese system:

    With no lenders nor stockholders to press for dividends, this effort became an undivided bond between management and production workers. (Deming, 1982, p. 3)

A short-term focus, then, leads to diverging interests between management and production workers. Deming also critiques the current system for its focus on short-term fixes rather than substantive, long-term commitments to quality.
For example, he critiques the way organizations focus on inspection to produce quality instead of using the process to create quality. Similarly, he attacks the adoption of machinery and gadgets as an immediate, unreflective, gut reaction that does not address the root cause of the problem (Deming, 1982, p. 12).

Second, he suggests that Western organizations often use measurements in a way that avoids addressing the root cause of organizational problems. He notes, for example, "measurements of productivity are like statistics on accidents: they tell you all about the number of accidents in the home, on the road, and at the work place, but they do not tell you how to reduce the frequency of accidents" (Deming, 1982, p. 14). He also says:

    …management by numerical goal is an attempt to manage without knowledge of what to do, and in fact is usually management by fear. Anyone may now understand the fallacy of 'management by the numbers.' (Deming, 1982, p. 75)

Deming thus argues that numbers and goals prevent people from thinking through the root cause of a situation, and that "numerical goals…have effects opposite to the effects sought" (Deming, 1982, p. 68).

Third, Deming attacks the reliance of Western organizations on annual performance reviews. Interestingly, he focuses on the way the annual performance review directs managerial attention to the end result rather than the process; he suggests that such reviews result in managers becoming "managers of defects" (Deming, 1982, p. 102). His critique of the merit rating system derives from its failure to encourage individuals to think about the root causes of problems:

    Merit rating rewards people that do well in the system. It does not reward attempts to improve the system. Don't rock the boat. (Deming, 1982, p. 102)

Deming is particularly critical of the numerical goals that characterize such performance management methods.
He notes,

    …numerical goals set for other people, without a road map to reach the goal, have effects opposite to the effects sought. (Deming, 1982, p. 68)

In many ways, Deming's critique aligns with Kerr's earlier famous caution about "the folly of rewarding A, while hoping for B" (Kerr, 1975).

Deming's solution for organizations is to focus on the process. He outlines 14 points of quality that form the backdrop to his system.

Table 2 – Deming's 14 Points
Deming's 14 Points (Deming, 1982):
1. Create constancy of purpose for improvement of product and service
2. Adopt the new philosophy
3. Cease dependence on mass inspection
4. End the practice of awarding business on the basis of price tag alone
5. Improve constantly and forever the system of production and service
6. Institute training
7. Adopt and institute leadership
8. Drive out fear
9. Break down barriers between staff areas
10. Eliminate slogans, exhortations, and targets for the workforce
11. Eliminate numerical quotas for the workforce, and for people in management
12. Remove barriers that rob people of pride of workmanship
13. Encourage education and self-improvement for everyone
14. Take action to accomplish the transformation

By using the word "process," Deming refers to the system through which an organization organizes activities. To focus on the process, organizational members need to "understand the distinction between a stable system and an unstable system" (Deming, 1982, p. 309). In a stable system, organizational personnel can discover and understand variations in performance and productivity. In unstable systems, they cannot discover or understand the sources of variation. Consequently, organizations cannot reduce product variation and increase quality in unstable systems.
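A stability check of this kind can be sketched in a few lines of code. The sketch below uses the conventional "individuals chart" rule (center line at the process mean, limits at 2.66 times the average moving range); the 2.66 constant and the sample data are standard illustrative conventions, not values taken from Deming's text.

```python
# A minimal sketch of a process-stability check in Deming's sense.
# Conventional "individuals chart" rule: center line at the mean,
# limits at +/- 2.66 times the average moving range. The constant and
# the sample data are illustrative assumptions, not from Deming's text.
from statistics import mean

def control_limits(observations):
    """Return (lower, center, upper) limits for a measured process variable."""
    center = mean(observations)
    moving_ranges = [abs(b - a) for a, b in zip(observations, observations[1:])]
    spread = 2.66 * mean(moving_ranges)
    return center - spread, center, center + spread

def unstable_points(observations):
    """Observations outside the limits: signs of an unstable process."""
    lower, _, upper = control_limits(observations)
    return [x for x in observations if x < lower or x > upper]

# A run of daily defect counts: six routine days and one anomalous spike.
daily_defects = [10, 11, 10, 11, 10, 11, 30]
print(unstable_points(daily_defects))  # prints [30]
```

In a stable system, every point falls inside the limits and the remaining variation is attributable to the system itself; a point outside the limits signals a source of variation worth investigating separately.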
To understand and stabilize processes, managers need to develop the skill of differentiating between common causes (caused by a systemic process) and special causes (caused by an idiosyncratic event).

    Confusion between common causes and special causes leads to the frustration of everyone, and leads to greater variability and to higher costs, exactly contrary to what is needed. (Deming, 1982, p. 314)

Deming suggests that in his experience, 94% of quality problems derive from common, systemic causes, while only 6% come from idiosyncratic issues. Consequently, managers need to avoid the trap of looking for a singular cause when there is a systemic cause.

For Deming, quantifying production processes and decision-making functions as a fundamental technique for stabilizing and controlling business processes. Definitions of terms are an important prerequisite for quantification, serving as a "definition that reasonable men can agree on" (Deming, 1982, p. 276). Deming clarifies that such definitions provide a necessary foundation for organizational communication.

    Operational definitions are necessary for economy and reliability. Without operational definitions of (e.g.) unemployment, pollution, safety of goods and of apparatus, effectiveness (as of a drug), side effects, duration of dosage before side effects become apparent, such concepts have no meaning unless defined in statistical terms. (Deming, 1982, p. 285)

He highlights some examples of the importance of providing clear definitions. In the textile industry, for instance, what does it mean for cloth to be 50% wool? Individuals can interpret the standard of 50% wool in different ways, and the process-focused organization needs to remove such interpretive variation. Similarly, the metric of on-time customer deliveries admits a wide degree of variation, and thus needs to be defined in order for organizations to improve their delivery processes.
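Deming's notion of an operational definition can be illustrated with a short sketch: the claim "this cloth is 50% wool" restated as a verifiable test. The tolerance below is a hypothetical choice made for illustration; Deming's text names the problem but not these numbers.

```python
# A sketch of an operational definition in Deming's sense: "50% wool"
# restated as a test anyone can run and agree on. The tolerance is a
# hypothetical illustrative value, not a figure from Deming's text.
WOOL_TARGET = 0.50
TOLERANCE = 0.02   # accept lots measuring between 48% and 52% wool

def meets_wool_spec(wool_weight, total_weight):
    """Operational test: sampled wool fraction falls within tolerance."""
    fraction = wool_weight / total_weight
    return abs(fraction - WOOL_TARGET) <= TOLERANCE

print(meets_wool_spec(49.0, 100.0))   # prints True  (49% wool passes)
print(meets_wool_spec(40.0, 100.0))   # prints False (40% wool fails)
```

The point of the exercise is Deming's: once the sampling procedure and tolerance are written down, "50% wool" stops being a matter of interpretation and becomes a reproducible measurement.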
Once an organization puts measurements in place, control charts serve as a tool to help it understand the stability of its processes. Control charts track a process variable of interest by reporting actual values over time in the context of upper and lower limits. Deming explains why the control chart provides organizational personnel with superior information relative to other report types, such as histograms.

    A distribution (histogram) only presents accumulated history of performance of a process, nothing about its capability…The capability of a process can be achieved and confirmed by use of a control chart, not by a distribution… (Deming, 1982, p. 313)

Control charts thus help organizational personnel both determine whether existing processes are in control and maintain that control.

Deming also advocates the hiring of a quantification expert, a "leader of statistical methodology, responsible to top management" (Deming, 1982, p. 466). He draws an analogy between statistics and medical work, and suggests that organizations need this specialized expertise to maintain focus on quality and process improvement. He describes the responsibilities of this position in detail.

    There will be a leader of statistical methodology, responsible to top management. He must be a man of unquestioned ability. He will assume leadership in statistical methodology throughout the company. He will have authority from top management to be a participant in any activity that in his judgment is worth his pursuit. He will be a regular participant in any major meeting of the president and staff. He has the right and obligation to ask questions about any activity, and he is entitled to responsible answers. The choice of application for him to pursue must be left to his judgment, not to the judgment of others, though he will, of course, try to be helpful to anyone that asks for advice. The non-statistician cannot always recognize a statistical problem when he sees one. (Deming, 1982, p. 466)

Deming thus emphasizes the need to integrate expertise related to quantification into the organization. He believes that this expertise requires a dedicated position, and views the statistician as a professional exercising judgment independent of other management in the organization. Importantly, the statistical insights represent a scientific understanding of the causal processes that undergird business activities.

Deming views the problem with Western management as a lack of focus on quality. Historically, he views Japan's focus on quality as the primary reason it was able to recover successfully from World War II (and, from his perspective, eventually outcompete the United States). Deming emphasizes that quality problems occur because managers tend to focus on special causes rather than foundational processes. He critiques current management practices that focus on short-term results, noting that these practices frequently direct management attention to the wrong places. Deming advocates a different solution that emphasizes two themes: understanding and improving the process, and using statisticians to evaluate what is happening in the process. Deming believes that executing his system will result in greater quality and productivity, and ultimately lead to greater levels of national prosperity.

Thomas Davenport and the "Analytics" Literature

Recently, Thomas Davenport and colleagues have emphasized the importance of developing a business management philosophy that revolves around "analytics" (Davenport & Harris, 2007; Davenport et al., 2010; Davenport, 2014). Writing several books and serving as chair of the International Institute for Analytics (www.iianalytics.com), Davenport articulates a philosophy of management that normatively recommends replacing intuitive, biased decision-making processes with evidence-based decisions grounded in analysis and fact.
Davenport’s 33 philosophy, like Taylor’s and Deming’s, requires and revolves around a quantification of decision-making processes. Davenport highlights a fundamental problem businesses face: their tendency to use non- objective decision-making processes. Fact-based decisions employ objective data and analysis as the primary guides to decision-making. The goal of these guides is to get at the most objective answer through a rational and fair-minded process, one that is not colored by conventional wisdom or personal biases. Whenever feasible, fact-based decision makers rely on the scientific method - with hypotheses and testing - and rigorous quantitative analysis. They eschew deliberations that are primarily based on intuition, gut feeling, hearsay, or faith, although each of these may be helpful in framing or assessing a fact-based decision. (Davenport et al., 2010, p. 175) Davenport thus sets up a mild opposition between “quantitative analysis” and “intuition.” He acknowledges that “intuition” or “gut feeling” may play a role in decision-making, but privileges the role of the quantified decision-making process. Consequently, his solution is to emphasize “analytics.” By analytics we mean the extensive use of data, statistical and quantitative analysis, explanatory and predictive models, and fact-based management to drive decisions and actions. (Davenport et al., 2010, p. 7) This definition fits with Taylor’s and Deming’s perspective on the approach to decision-making processes. Davenport suggests that businesses currently approach decision-making in a way that does not resemble analytics. Specifically, he suggests that many major decisions in business are based on the manager’s “gut” rather than fact. For too long, managers have relied on their intuition or their “golden gut” to make decisions. For too long, important calls have been based not on data, but on the experience and unaided judgment of the decision maker. 
    Our research suggests that 40 percent of major decisions are based not on facts, but on the manager's gut. (Davenport et al., 2010, p. 1)

Davenport describes the consequences of this reliance on intuition in terms similar to March (2006), highlighting its potential to "go astray" or "end in disaster" (Davenport et al., 2010, p. 1). His fundamental argument, however, is that nonanalytic decisions cost organizations money.

    …nonanalytical decisions…leave money on the table: businesses price products and services based on their hunches about what the market will bear, not on actual data detailing what consumers have been willing to pay under similar circumstances in the past; managers hire people based on intuition, not on an analysis of the skills and personality traits that predict an employee's high performance; supply chain managers maintain a comfortable level of inventory, rather than a data-determined optimal level; baseball scouts zoom in on players who "look the part," not on those with the skills that – according to analytics – win games. (Davenport et al., 2010, p. 1)

He thus critiques the current establishment and suggests that "analytics" offers organizations the opportunity to make more money.

To address this problem of relying on "intuition" rather than "facts," Davenport recommends developing an analytics capability. He suggests that "analytics aren't just a way of looking at a particular problem, but rather an organizational capability that can be measured and improved" (Davenport & Harris, 2007, p. 5). He suggests that this process involves five stages, beginning in Stage 1, when the organization struggles with missing or poor-quality data, and progressing to Stage 5, when the organization maintains an analytic architecture integrated into foundational and strategic decision-making processes. These stages are described in Table 3 below.
Table 3 – Davenport and Harris' Five-Stage Process
Data and IT Capability by Stage of Analytical Competition (Davenport & Harris, 2007, p. 156). Established companies typically follow an evolutionary process to develop their IT analytical capabilities.
Stage 1: The organization is plagued by missing or poor-quality data, multiple definitions of its data, and poorly integrated systems.
Stage 2: The organization collects transaction data efficiently but often lacks the right data for better decision-making.
Stage 3: The organization has a proliferation of business intelligence tools and data marts, but most data remains unintegrated, nonstandardized, and inaccessible.
Stage 4: The organization has high-quality data, an enterprise-wide analytical plan, IT processes and governance principles, and some embedded or automated analytics.
Stage 5: The organization has a full-fledged analytic architecture that is enterprise-wide, fully automated and integrated into processes, and highly sophisticated.

Throughout these stages, the organization needs to construct data, embed analytics into the process, unify the organization around analytic decision-making, and establish analytic expertise.

Davenport and Harris note that developing analytic information requires the construction of data. They observe that data storage capabilities typically exceed analytic capabilities, and that organizations need a committed information technology team to construct the data they will eventually analyze. This data collection often simply catalogues data from existing business processes into various forms of databases. Sources of data include anything from transactional ERP data to financial data to video data to textual data from social media. Existing data frequently needs to be cleansed, as "dirty data" is a significant issue facing many organizations. Similarly, organizations need to identify specifically what information they want to capture and track.
This information then needs to be housed in a repository such as a data mart or data warehouse. Often, however, developing an analytic capability requires the creation of new metrics. Davenport and Harris note:

    …deciding what information is valuable and going out and getting proprietary data that doesn't exist in your or anybody else's organization is a different matter, and may require creating a new metric. (Davenport & Harris, 2007, p. 25)

Organizational leaders can use these new metrics to make decisions.

After putting data in place to make decisions, the organization has to learn to incorporate that data into decision-making processes. Davenport and Harris note three types of decision-making processes to consider adjusting to incorporate analytic capabilities: automation, overrides, and decision assistance. Rather than focus on any one of these decision types, however, Davenport and Harris emphasize the importance of creating an analytical culture that will shape and influence all decision-making processes.

    What is an analytical culture? Like any culture, it's the sum total of a series of individual attributes and behaviors that get repeated over time. People in an analytical culture demonstrate a set of common attributes. (Davenport & Harris, 2007, p. 137)

To develop this culture and have members internalize its principles, organizations "have to incorporate some firm (but not punitive) 'pushbacks' for people who adopt the wrong behaviors" (Davenport & Harris, 2007, p. 139).

Davenport also highlights the importance of unifying the organization and avoiding "institutional silos" (Davenport et al., 2010). In other words, many important organizational processes cross departmental boundaries and require complex coordination.
Davenport and Harris provide the example of using analytics to identify and improve customer profitability:

    [Analytic competitors] use predictive modeling to identify the most profitable customers - as well as those with the greatest profit potential and the ones most likely to cancel their accounts. They integrate data generated in-house with data acquired from outside sources for a comprehensive understanding of their customers. They optimize their supply chains and can thus determine the impact of unexpected glitches, simulate alternatives, and route shipments around problems. They analyze historical sales and pricing trends to establish prices in real time and get the highest yield possible from each transaction. They use sophisticated experiments to measure the overall impact or "lift" of advertising and other marketing strategies and then apply their insights to future analyses. (Davenport & Harris, 2007, p. 83)

Quantifiable data and analytics thus help unify different areas of the organization around a common goal of organizational profitability.

Additionally, Davenport and colleagues highlight the importance of creating a broad platform of expertise throughout the organization. They identify four levels of analysts that perform different functions at different levels of the organization (Davenport et al., 2010; Harris, Craig, & Egan, 2009). First, analytical champions are executive decision-makers. Second, analytical professionals create the algorithms that analyze data. Third, analytical semi-professionals apply those algorithms. Finally, analytical amateurs are knowledgeable consumers of analytics who apply analytics in their day-to-day jobs. These levels of expertise are distributed throughout the organization in a pyramid, with different levels of quantitative and analytical expertise (see Figure 1).
Figure 1 – Breakdown of Analytical Expertise (graphics from an Accenture report on analytical expertise; Harris et al., 2009)

Davenport and colleagues observe that this expertise can be manifested in a variety of analytical methods, including activity-based costing, Bayesian inference, biosimulation, combinatorial optimization, constraint analysis, experimental design, future value analysis, Monte Carlo simulation, multiple regression analysis, neural network analysis, textual analysis, yield analysis, CHAID, conjoint analysis, lifetime value analysis, market experience, price optimization, and time series experiments.

Interestingly, Davenport notes that this statistical expertise combines artistic and scientific styles:

    In a variety of ways, art is already built into analytics. First is the hypothesis, which is really an intuition about what's happening in the data. Hypotheses enter the realm of science when subjected to the requisite testing. (Davenport et al., 2010, p. 16)

This suggests that intuition generates the idea, and science enters the equation as the way of testing that idea. The art, in a sense, directs the focus of analytical attention, as "even the most analytically oriented company needs to target its analytical efforts where they will do the most good, because resources, especially talent, are always constrained" (Davenport et al., 2010, p. 73).

Davenport also highlights the critical role of the Chief Executive Officer and the leadership team in making analytics part of the culture.

    …if the CEO or a significant fraction of the senior executive team doesn't understand or appreciate at least the outputs of quantitative analysis or the process of fact-based decision-making, analysts are going to be relegated to the back office, and competition will be based on guesswork and gut feel, not analytics. (Davenport & Harris, 2007, p. 133)

By emphasizing the role of leadership, Davenport articulates the importance of considering analysis as part of a broader culture, and not as an isolated tool or technique to solve a particular problem. The ultimate goal is to become an analytical competitor, "an organization that uses analytics extensively and systematically to outthink and outexecute the competition" (Davenport & Harris, 2007, p. 23).

Davenport thus thinks that the primary problem with business relates to flawed decision-making. He suggests that human biases and complex environments challenge organizational decision-making processes. His prescribed solution is that organizations develop an analytics culture and capability. This involves building a repository of data in a standardized, structured format. Using this data, he recommends that organizations develop an analytical capability applying "extensive data, statistical and quantitative analysis, and fact-based decision-making" (Davenport & Harris, 2007, p. 9).

Summary of the Rationalization Perspective

From the rationalization perspective, quantification functions as a methodology that solves a problem in the status quo. For Taylor, problems arise from soldiering due to workers' use of inefficient rules of thumb. For Deming, problems emerge from the repetition of unstable, inefficient processes. For Davenport, biased and ineffective decision-making generates problems. Each of these traditions builds on its predecessor and suggests that data and quantification provide a way to fix problems and generate new solutions. For Taylor, good managers objectively quantify employee work processes to eliminate inefficient rules of thumb. For Deming, good managers construct stable business processes by using quantifiable measures of process inputs and outputs. For Davenport, good leaders eliminate bias by obtaining objective facts and using statistical, quantitative analysis in decision-making.
I summarize these perspectives in Table 4 below.

Table 4 – A Comparative Synopsis of the Rationalization Perspective

Scholar    | The Problem                                                        | The Solution
Taylor     | Worker soldiering related to the use of inefficient rules of thumb | Managers quantify worker performance to identify best practices
Deming     | Unstable and/or inefficient business processes                     | Construct stable business processes using quantified measurements of performance
Davenport  | Flawed decision-making                                             | Leaders make decisions by statistically and quantitatively analyzing objective facts

The Sociology of Quantification Perspective

Theodore Porter and Trust in Numbers

Theodore Porter addresses the phenomenon of quantification in his book Trust in Numbers (Porter, 1996). Porter suggests that a pre-eminent problem in the modern world is how to connect local networks into the global community. He argues that the language of quantification and scientific discourse presents a way in which people can communicate across the gaps between local networks and the global community. In other words, quantification becomes a means to achieve a socially shared notion of objectivity.

Furthermore, Porter observes diversity in how quantification influences social interaction. In his book, for example, he suggests that experts can use quantification either as a tool to solve problems (the French engineers) or as a tool to defend against political pressure and accusations of bias (the American Army Corps of Engineers). He explores these dynamics of quantification and highlights some of the ambivalent effects of quantification. In the following review of Porter's work, I describe the problem of subjectivity and the way in which quantification provides a unique solution to that problem, and then describe the consequences (some unintended) of the quantification solution.

Porter suggests that social groups struggle with the problem of subjectivity.
Subjectivity allows individuals to manipulate the interpretation of a situation in a self-interested manner. He comments, for example, that in old-regime societies "there was always room for power, negotiation, and fraud in determining the size of the heap" (Porter, 1996, p. 24). Porter suggests that during the Industrial Revolution, the subjective nature of local systems and local knowledge made it difficult to execute significant national projects such as utilities or railways. The pursuit of such grand-scale projects required "a drive for rigor and standardization" that responded "to a world in which local knowledge had become inadequate" (Porter, 1996, p. 92). As the Industrial Revolution progressed, then, society and government were forced to address this issue of subjectivity.

The problem of subjectivity also emerges from the nature of individuals. Individuals often act according to "selfish desire," so "science meant…the elevation of general rules and social values over subjectivity and the selfish desires of the individual" (Porter, 1996, p. 75). As a result, the academic discipline of statistics came to be used as a tool to overcome such subjectivity. This resulted in practices such as averaging measurements (rather than selecting the best measurements) and utilizing error theory "to protect against false judgment or bias" (Porter, 1996, p. 201).

Could verbal reasoning solve this problem of subjectivity? People from different locales could gather and attempt to negotiate solutions through social interaction. Porter suggests, however, that individuals and communities struggle to overcome the challenge of subjectivity using verbal reasoning. To illustrate the difficulties inherent in verbal reasoning, Porter quotes a British empiricist:

    Verbal reasoning…is too slippery. It does not require that the premises be made clear, and it permits auxiliary hypotheses to slip in unnoticed. It provides no clear checks against errors of reasoning.
It is too imprecise for its results to be tested against those uncompromising judges, experiment and observation. (Porter, 1996, p. 52) As a consequence, verbal reasoning lacks the capacity to bridge individuals and groups tied to distinct locales. Historically, the professions have also provided another potential check on the dangers of subjectivity. Leaders from the accounting profession, for example, suggest that "rigorous objectivity and professional autonomy are opposite extremes on a continuum of possibilities" (Porter, 1996, p. 90). Strong professions that enjoyed the public's trust did not necessarily face challenges of subjectivity. Not every profession, however, had that trust. The public struggles to trust professions in both extreme cases such as parapsychology and more mainstream cases such as clinical medicine after "regulatory and disciplinary confrontations" (Porter, 1996, p. 208). The problem of subjectivity thus sometimes becomes significant for professions, particularly those "with insecure borders [and] persistent boundary problems" (Porter, 1996, p. 230). In summary, traditional means of overcoming subjectivity – verbal reasoning and professional expertise – struggle to adapt to the difficulties that arise from the more global, less local environment of modern society. Porter suggests that quantification functions as a natural solution to this challenge of subjectivity. First and foremost, quantification provides a sense of objectivity. A decision made by the numbers (or by explicit rules of some other sort) has at least the appearance of being fair and impersonal. Scientific objectivity thus provides an answer to a moral demand for impartiality and fairness. Quantification is a way of making decisions without seeming to decide. (Porter, 1996, p. 7) Quantification serves as a natural solution when "subjective discretion has become suspect" as "mechanical objectivity serves as an alternative to personal trust" (Porter, 1996, p. 89).
Objectivity related to quantification thus removes the need for subjective personal judgment. Quantification also lessens the impact of subjectivity by providing standardization. When different cultures attach different meanings to items of importance, the process of quantification can bring about a more universal consensus. Quantification is a powerful agency of standardization because it imposes order on hazy thinking…whenever a reasoning process can be made computable, we can be confident that we are dealing with something that has been universalized, with knowledge effectively detached from the individuality of its makers. (Porter, 1996, p. 85) Quantification thus becomes a way for different locales to work through situations via compromise. Porter argues, for example, that economic quantification was "an attempt to create a basis for mutual accommodation in a context of suspicion and disagreement" (Porter, 1996, p. 148). Quantification also enables actors to manage people and nature without having to choose among competing theoretical perspectives. Porter comments, for example, that Measurement and even mathematization were often favored as evasions of theory: it was not necessary to choose between substance and motion theories of heat, or to find the correct force law pertaining to capillary action. (Porter, 1996, p. 18) Put another way, quantification provided a technique through which people could measure phenomena precisely, and it has therefore been "a crucial agency for managing people and nature" (Porter, 1996, p. 50). Finally, quantification helps overcome the issue of subjectivity through its impersonal nature. Quantification has an "appeal of impersonality, discipline, and rules" (Porter, 1996, p. 32). This often manifests in decision-rules. Economists, for example, …insist that a decision can never be left to the judicious consideration of complex details, but must always be reduced to a sensible, unbiased, decision rule.
(Porter, 1996, p. 189) The impersonality that comes from quantification thus helps individuals, local communities, and state governments overcome the challenge of subjectivity. Porter suggests that several problems arise as a result of quantification. The first is that quantified measurements "necessarily involve a loss of information" (Porter, 1996, p. 44). An example of this can be seen when engineers have to evaluate "intangibles." To analyze and assess trade-offs, engineers need to value intangibles in monetary terms. This process is inherently tricky, and Porter notes that the engineers are "embarrassed" when they have to put a money value on these intangibles (Porter, 1996, p. 177). Quantification provides a "license…to ignore or reconfigure much of what is difficult or obscure" (Porter, 1996, p. 85). This simplification is important because choosing a particular method of quantification highlights one particular solution even though multiple solutions exist. Porter notes that: More than one solution is possible because more than one measurement regime is possible, and this means that there is a range of potentially valid measures. (Porter, 1996, p. 33) In other words, quantification privileges a particular measurement regime and strips away meaning that might be captured by other measurement regimes. Quantification has an ambivalent effect on political processes and discriminatory social practices. Positively, quantification provides a powerful way to challenge and break existing hierarchies. Porter highlights how quantification challenges the "old-boy network." In Europe and America, mathematics has long been gendered masculine, and this has often worked to exclude women from the sciences and engineering.
But the impersonal style of interactions and decisions promoted by heavy reliance on quantification has also provided a partial alternative to a business culture of clubs and informal contacts – an old-boy network – that was and remains a still greater obstacle to women and minorities. (Porter, 1996, p. 76) This positive ability of quantification to challenge existing discriminatory hierarchies, however, faces limits when quantification itself becomes the object of political contestation. Porter shows, for example, that in the US Army Corps of Engineers, political interests manipulated quantification processes. The Corps transgressed its customary standards most egregiously when the political forces were overwhelming, and when they were all arrayed on one side. (Porter, 1996, p. 161) Porter concludes that quantification does not really reduce subjectivity; it simply becomes a new battle arena for competing political interests. He notes somewhat ironically: The drive to eliminate trust and judgment from the public domain will never completely succeed. Possibly it is worse than futile. (Porter, 1996, p. 216) Porter also notes that quantification has the power to shape the way society thinks about the quantified concepts. For example, to conduct statistical analysis in the social sciences, categories must be formed. These categories "increasingly…form the basis for individual and collective identity" (Porter, 1996, p. 42). Quantification influences society in two subtly distinct ways. First, public statistics "help to define [social reality]" (Porter, 1996, p. 42). Second, numbers "never provide enough information to make detailed decisions," but their "highest purpose is to instill an ethic" (Porter, 1996, p. 44). In other words, quantification becomes a tool to define social reality and colonize the consciousness of individuals in society.
Because of these problematic effects of quantification, Porter suggests that society experiences a continual tension between trust in experts and trust in numbers. The complexity of reality requires contextual interpretation by individuals. He describes this inherent feature of reality: Possibly the most influential term in what is called the new sociology of science is negotiation. It conveys the idea that general principles, so-called universal scientific laws, are never sufficiently definite or concrete to apply to the richly detailed circumstances of experience and experiment. Hence the meaning of experiments, even of theories, cannot be settled by general principles, but must be worked out by a narrow group of specialists. (Porter, 1996, p. 219) A tension remains, however, about where to draw the line. Porter uses the illustration of the relationship between the state and a group of professionals, the actuaries. He highlights that there was a conflict: "the government sought a foundation for faith in numbers, while the actuaries demanded trust in their judgment as gentlemen and professionals" (Porter, 1996, p. 111). Porter uses comparative cases of national engineering organizations to show that quantification can either be subject to political manipulation (the US Army Corps of Engineers) or function as a force for trust that can "loosen the straitjacket of impersonal rules" (Porter, 1996, p. 214). In summary, Porter suggests that society faces a fundamental problem in the phenomenon of individual subjectivity. As the Industrial Revolution progressed, the increasing challenges related to subjectivity led actors to use quantification to overcome it; quantification presented a mechanism for objectivity. This solution works but is problematic, bringing many benefits but also many unintended consequences. Porter leaves his readers with a feeling of ambivalence. What should we as a society do?
A prescription is not clear, and Porter's reader gets the sense that numbers use us more than we use numbers.

Wendy Espeland and Commensuration

Wendy Espeland and colleagues extend Porter's work on the phenomenon of quantification in society. Espeland suggests that society faces a fundamental challenge of making trade-offs between different objects or activities (Espeland & Stevens, 1998, 2008). She suggests that the modern world solves this fundamental challenge through the process of commensuration, "the expression or measurement of characteristics normally represented by different units according to a common metric" (Espeland & Stevens, 1998, p. 315). The process of commensuration enables society to achieve consensus by producing compromises in situations that feature uncertain or conflicting interests. Similar to Porter, Espeland also suggests and illustrates some unintended consequences of commensuration, noting how commensuration powerfully focuses attention and inflames political maneuvering. Espeland and Stevens (1998) illustrate this fundamental social challenge of coordination through three vignettes describing common decision-making dilemmas. First, they describe the psychological processes a working mother engages in to balance the amount of time spent at work versus caring for her child. Second, they articulate a typical decision-making process organizations use to establish salaries for different roles. Finally, they portray the difficulties society faces in evaluating whether or not to construct a dam in a particular location. These types of decisions challenge individuals and communities for several reasons. First, decision-making is complex, and decision-makers often seek to impose control by simplifying their decision-making process to understand how to utilize limited resources. Second, outside audiences evaluate decisions; consequently, decision-makers seek to obtain legitimacy through their decision-making processes.
Finally, decision-makers struggle to manage uncertainty, as decisions often require consideration of unclear and/or unknown inputs. Although individuals and communities can address these decision-making challenges through traditions, standardization, or power, commensuration, a particular type of quantification, functions as an alternative mechanism for addressing them (Espeland & Stevens, 1998). In the aforementioned three vignettes, for example, the working mother may develop a ratio of work time to caregiving time; the organization might use rankings and external benchmarks to make salary decisions; a team of engineers may evaluate the prospective dam by doing a cost-benefit analysis to compare competing interests. Commensuration thus functions as a way to quantify challenging decision-making dilemmas. Espeland and Stevens formally describe commensuration as follows: Commensuration – the transformation of different qualities into a common metric – is central…whether it takes the form of rankings, ratios, or elusive prices, whether it is used to inform consumers and judge competitors, assuage a guilty conscience, or represent disparate forms of value, commensuration is crucial to how we categorize and make sense of the world. (Espeland & Stevens, 1998, p. 314) Commensuration "simplifies information" by eliminating irrelevant information and imposing a shared metric: commensuration "decontextualizes knowledge" (Espeland & Sauder, 2007, p. 17). Commensuration simplifies decision trade-offs by establishing a common metric that relates a group of entities together through "a common relationship…derived from their shared metric" (Espeland & Sauder, 2007, p. 19). Commensuration requires quantification – and a significant amount of up-front effort. Espeland and Stevens (2008, p.
408) note that "before objects can be made commensurate, they must be classified in ways that make them comparable." Objects are classified when they are measured, as "measures help transform individual experiences and events into general categories or characteristics" (Espeland & Stevens, 2008, p. 412). Concurrently, this means that objects need to be quantified, because "quantification creates relations between different entities through a common metric" (Espeland & Stevens, 1998, p. 316). Typically, the commensuration process occurs through the construction of a trade-off scheme. When used to make decisions, commensurated value is derived from the trade-offs made among the different aspects of the choice. Value emerges from comparisons that are framed in terms of how much one thing is needed to compensate for something else…The structure of value rooted in trade-offs is like that of an analogy: Its unity is based on the common relationship that two things have with a third thing, a metric. (Espeland & Stevens, 1998, p. 317) Commensuration often constructs this type of trade-off through an aggregating process. Espeland and Stevens (1998, p. 317) illustrate this aggregating process through the example of water quality, which has a measurement that collapses distinct properties such as temperature, the amount and nature of dissolved solids, turbidity, and pH. In summary, commensuration solves social problems related to decision-making by collapsing decision-making parameters into common metrics that can be more easily analyzed.
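The aggregating logic behind a measure such as a water quality index can be sketched in code. The following is a minimal, hypothetical illustration, not drawn from Espeland and Stevens: the sub-indicators, acceptable ranges, and weights are invented. The structure, however, shows the two moves the passage describes: each distinct property is first classified on a shared 0-1 metric, and the properties are then collapsed into a single number.

```python
# Hypothetical sketch of commensuration: distinct water-quality properties
# are mapped onto a common 0-1 scale, then aggregated into one index.
# All ranges and weights below are invented for illustration.

def normalize(value, low, high):
    """Classify a raw measurement on a shared 0-1 metric."""
    return max(0.0, min(1.0, (value - low) / (high - low)))

def water_quality_index(temperature_c, dissolved_solids_mgl, turbidity_ntu, ph):
    # Step 1: make incomparable properties comparable.
    scores = {
        "temperature": 1 - normalize(temperature_c, 10, 35),        # cooler scores higher
        "dissolved_solids": 1 - normalize(dissolved_solids_mgl, 0, 1000),
        "turbidity": 1 - normalize(turbidity_ntu, 0, 100),
        "ph": 1 - abs(ph - 7.0) / 7.0,                              # closer to neutral scores higher
    }
    # Step 2: aggregate. The weights embody a measurement regime;
    # changing them privileges different properties.
    weights = {"temperature": 0.2, "dissolved_solids": 0.3, "turbidity": 0.2, "ph": 0.3}
    return sum(weights[k] * scores[k] for k in scores)

index = water_quality_index(temperature_c=18, dissolved_solids_mgl=250,
                            turbidity_ntu=5, ph=7.2)
print(round(index, 3))
```

Note that the weighted sum discards exactly the information the sociology of quantification perspective worries about: two very different water samples can receive the same index, and a different weighting would rank them differently.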
This objectivity fulfills an important societal function: it enables society to control institutions. For example, ranking systems function as "one instance of widespread efforts to control public institutions and make them more accessible to outsiders" (Espeland & Sauder, 2007, p. 5). Issues arise, however, when social actors rely on quantification and commensuration to resolve social conflict. Espeland and Stevens (1998) punctuate the classic (and, to some extent, eternal) nature of this debate by observing that Plato and Aristotle held conflicting perspectives on commensuration. Plato argued that commensuration is good, causing humans to control their animalistic passions and use reason instead. According to Plato, commensuration requires an individual to perform calculations that result in a reflective, rational person. Aristotle, however, argued that commensuration is bad, causing humans to overlook the individualities and singularities that form the essence of human value and character. Espeland and Stevens (1998, p. 319) crystallize this tension by observing that "the homogeneity commensuration produces simultaneously diminishes risk and threatens the intensity and integrity of what we value." Variants of these competing perspectives exist in the social sciences. Economics-based rational choice theory, for example, follows Plato by claiming that "commensuration encourages us to believe that we can integrate all our values, unify our compartmentalized worlds, and measure our longings" (Espeland & Stevens, 1998, p. 323). Classic sociologists, however, tend to follow Aristotle. Marx laments the commodification of individuals and their labor; Weber critiques the iron cage and disenchantment that spawn from rationalization and quantification. Espeland's research attempts to delve into such timeless issues by uncovering some of the social mechanisms through which quantification and commensuration influence society.
Espeland highlights one such mechanism, which she labels "reactivity." Broadly speaking, she notes: Measurement intervenes in the social worlds it depicts. Measures are reactive; they cause people to think differently. (Espeland & Stevens, 2008, p. 412) In addition to causing people to think differently, measures cause "reactivity" by forcing people to respond to them. Because people are reflexive beings who continually monitor and interpret the world and adjust their actions accordingly, measures are reactive. Measures elicit responses from people who intervene in the objects they measure. (Espeland & Sauder, 2007, p. 2) Measures intervene in the social world when they "create and reproduce social boundaries;" these new boundaries "replac[e] murky variation with clear distinctions between categories of people and things" (Espeland & Stevens, 2008, p. 414). Although measures can be viewed as "valid, neutral depictions of the social world," they also function as "vehicles for enacting accountability and inducing changes in performance" (Espeland & Sauder, 2007). In other words, although measures are often thought of as "neutral depictions," they actually create reactions in the social worlds they measure. Quantification also impacts social interaction by disciplining social behavior. Espeland observes that: …numbers also can exert discipline on those they depict. Measures that initially may have been designed to describe behavior can easily be used to judge and control it. (Espeland & Stevens, 2008, p. 414) Quantification simplifies reality and facilitates surveillance of behavior. By simplifying, excluding, and integrating information, quantification expands the comprehensibility and comparability of social phenomena in ways that permit strict and dispersed surveillance. (Espeland & Stevens, 2008, p. 415) Such discipline results in conformity.
When quantified metrics such as rankings commensurate a group of similar objects, they initiate a set of actions and reactions that may lead to conformity. For example, "[Law school] rankings impose a standardized, universal definition of law schools which creates incentives for law schools to conform to that definition" (Espeland & Sauder, 2007, p. 15). Sauder and Espeland (2009) illustrate the power of reactivity and discipline more specifically in their study of law school rankings. Drawing on Foucault (1995) for theoretical inspiration, they explain how rankings exercise disciplinary power through mechanisms of surveillance and normalization. Sauder and Espeland suggest that rankings function as an easily transportable metric that facilitates observation from a distance: Rankings make remote surveillance possible by creating numbers that circulate easily. Because rankings are abstract, concise, and portable, and because they decontextualize so thoroughly, they travel widely and are easily inserted into new places and for new uses. (Sauder & Espeland, 2009, p. 72) Similarly, the rankings also operate through the mechanism of normalization, which propagates an ideal type of law school. Rankings can be understood as a standardized norm of excellence; they create a calculable law school by producing an abstract, ideal law school comprised of discrete, integrated components. By depicting how well and how poorly schools adhere to this abstraction, schools are encouraged to conform to this ideal. (Sauder & Espeland, 2009, p. 74) As a result of this surveillance and normalization, Sauder and Espeland (2009) describe how individuals internalize the values associated with the rankings, through mechanisms of anxiety, resistance, and allure. They conclude with a somewhat ironic description of the ranking system: "rankings create a public, stable system of stratification comprised of unstable positions" (Sauder & Espeland, 2009, p. 79).
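The notion of a stable system comprised of unstable positions can be illustrated with a toy computation. Everything in the sketch below is invented: the school names, the metrics, and the weights are hypothetical rather than taken from any actual ranking. The structural point is that the composite formula (the system) stays fixed while a marginal change in a single input reorders the positions.

```python
# Toy illustration of a composite ranking. The formula persists (a stable
# system), but a small change in one metric reorders the entities
# (unstable positions). All names, metrics, and weights are invented.

WEIGHTS = {"reputation": 0.40, "selectivity": 0.35, "placement": 0.25}

def composite_score(metrics):
    """Commensurate several metrics into one score via fixed weights."""
    return sum(WEIGHTS[k] * metrics[k] for k in WEIGHTS)

def rank(schools):
    """Order schools by composite score, highest first."""
    return sorted(schools, key=lambda name: composite_score(schools[name]),
                  reverse=True)

schools = {
    "Alpha Law": {"reputation": 0.90, "selectivity": 0.86, "placement": 0.85},
    "Beta Law":  {"reputation": 0.85, "selectivity": 0.88, "placement": 0.86},
}
before = rank(schools)

# A marginal change in one input flips the ordering, while the
# ranking apparatus itself is untouched.
schools["Alpha Law"]["placement"] = 0.78
after = rank(schools)
print(before, after)
```

The same structure also makes gaming legible: an actor who knows the weights can raise a composite score by improving whichever metric is cheapest to move, regardless of whether the underlying quality improves.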
Quantified measures of commensuration such as rankings thus influence society through mechanisms of reactivity and discipline (i.e., surveillance and normalization). Espeland observes negative social consequences of these mechanisms of quantification. One such negative consequence, which emerges from individuals' reactive responses to the disciplinary role of measurements and rankings, is "gaming." Espeland and Sauder (2007, p. 29) define gaming as "manipulating rules and numbers in ways that are unconnected to, or even undermine, the motivation behind them." In gaming, actors focus on "managing appearances" and pay less attention to "improving the characteristics the factors are designed to measure" (Espeland & Sauder, 2007, p. 29). Espeland suggests that gaming produces negative social consequences because it builds and reinforces distrust within a community (Espeland & Sauder, 2007, p. 30). Commensuration can also produce negative consequences when a quantified metric privileges particular groups or perspectives. This discriminatory privilege becomes particularly thorny when commensuration "transgresses deeply significant moral and cultural boundaries" (Espeland & Stevens, 1998, p. 326). This occurs when actors attempt to value two things that cannot be compared, that is, when there is "incommensurability." …we broadly define something as incommensurable when we deny that the value of two things is comparable. An incommensurable involves a 'failure of transitivity,' where neither of two valuable options is better than the other and there could exist some other option that is better than one but not the other. (Espeland & Stevens, 1998, p. 326) Incommensurability is often associated with "identities and crucial roles" such as children, sexual relationships, and cultural identities (Espeland & Stevens, 1998, p. 327).
When rankings aggregate incommensurable values, they may privilege values such as materialism (i.e., the starting salary of a law school graduate) over personal citizenship (i.e., the likelihood a law school graduate will volunteer in their community). Espeland also views commensuration from a positive perspective. Commensuration can be inclusive and generative. Espeland and Stevens highlight this function by looking at the role of commensuration in creating new relationships and ideas. The capacity to create relationships between virtually anything is extraordinary in that it simultaneously overcomes distance (by creating ties between things where none before had existed) and imposes distance (by expressing value in such abstract, remote ways). (Espeland & Stevens, 1998, p. 324) They provide examples of the power of commensuration to generate new ideas. Society, for example, has used quantification and commensuration to understand irregularities in society such as suicide or crime rates. Similarly, commensuration facilitates economic integration and growth. Ultimately, commensuration and quantification form a necessary platform for large-scale organizational and financial-market activity. When built into large institutions, commensurative practices are powerful means for coordinating human action and making possible automated decision-making. (Espeland & Stevens, 1998, p. 326) By being "transparent," numbers can take the obscurities of tacit knowledge and make that knowledge explicit so that new ideas can be developed and implemented (Espeland & Vannebo, 2007, p. 39). In summary, Wendy Espeland's research shows that individuals and social groups overcome decision-making challenges through quantified commensuration. Commensuration provides a quantifiable mechanism to evaluate decision-making trade-offs.
This commensuration process helps achieve objectivity, but also influences social interactions through mechanisms of reactivity and discipline (i.e., surveillance and normalization). Although these mechanisms can lead to negative consequences such as gaming and the marginalization of minorities, they can also lead to inclusive and generative ideas.

Summary of the Sociology of Quantification Perspective

Like the rationalization perspective, the sociology of quantification perspective suggests that quantification solves societal problems. For Porter, society struggles to deal with the problem of subjectivity. For Espeland, individuals and society face difficulties in making difficult trade-offs. Both Porter and Espeland suggest that quantification plays a critical role in overcoming these problems by functioning as an objective solution that facilitates social consensus. Unlike the rationalization perspective, however, the sociology of quantification perspective highlights unintended consequences that emerge from the use of quantification. Porter observes that quantification strips away local information and may exacerbate political conflict. Espeland notes that quantification and commensuration influence the social world through mechanisms of reactivity and discipline. I summarize these perspectives in Table 5 below.

Table 5 – A Comparative Synopsis of the Sociology of Quantification Perspective

Porter. Problem/Solution: Subjectivity / Quantification provides objectivity and standardization. Unintended Consequences: Quantification strips away local information and may exacerbate political conflict.
Espeland. Problem/Solution: Making difficult trade-offs / Commensuration provides a metric to evaluate these trade-offs objectively. Unintended Consequences: Quantification impacts the social world through mechanisms of reactivity and discipline.

Limitations of Existing Perspectives

These two perspectives provide insight into the phenomenon of the quantification of decision-making.
The rationalist perspective has provided a normative advocacy for quantification that has shaped how many organizational actors think about quantification. The sociology of quantification perspective has moved beyond this normative stance to articulate theoretical processes that explain how quantification impacts the social world. Each of these perspectives, however, has limitations and leaves a gap in our theoretical understanding of quantification.

Limitations of the Rationalization Perspective

The rationalist perspective assumes that numbers represent reality, or a representational ontology of numbers (Desrosieres, 2001). In other words, numbers reflect the essence of an underlying phenomenon. This ontology of numbers is grounded in the assumption that numbers operate as they do in the physical sciences. Consequently, mathematics can measure the essential "truth" of a phenomenon. Hence, when Taylor evaluates a worker handling pig iron, he can measure properties such as the strength of the person or the distance the worker needs to travel. Such measurements help organizational actors identify causal relationships in the environment that provide the basis for normative recommendations for improving productivity or quality. As a result, the rationalization perspective can provide organizations with a powerful ability to improve their productivity or quality. When numbers represent and measure objects accurately, organizational actors are better able to control their environment (Beniger, 1989). Zuboff's (1988) depiction of the information technology revolution and the "smart machine," for example, illustrated how organizations use a combination of technology and quantification to control a variety of phenomena such as paper mill production, telecommunications, or financial services. In the deployment of technology, organizations use quantification to control their environment; the environment responds predictably to that exercise of control.
In other words, the rationalization perspective functions very effectively when organizational actors can develop clear understandings of causal relationships in their environment. The rationalization perspective, however, struggles with the application of quantitative measures and analysis in areas that do not behave like the physical world and that feature ambiguous understandings of causation. Consider an example: human performance. In baseball, the on-base percentage of a batter can be derived by conceptualizing the batter as a human machine. Quantitative analyses such as sabermetrics can therefore effectively measure human performance. There is a clear human action being measured (i.e., swinging the bat) and a clear outcome being measured (i.e., hitting the ball). In an organization, however, how does this measurement of performance translate to the role of a manager? In this instance there is no single human action being measured, but rather a multitude of actions (i.e., leadership, processing paperwork, etc.). Similarly, there is no clear outcome being measured (i.e., short-term profitability? long-term profitability? treating employees with dignity and respect? preventing the organization from going out of business?). Whereas in the example of baseball and a batter clear causal connections exist, in the example of an organization and a manager complexity obfuscates clear causal connections. A representational ontology of numbers might allow numbers to measure the complex performance of the manager. The manager, for example, might be conceptualized as having a latent variable called "managerial ability." Defining this variable quantitatively, however, inherently creates additional issues. As observed by scholars from the sociology of quantification perspective, such aggregated measurements inherently privilege one measurement regime over another. Does managerial ability, for example, privilege making money, or treating employees with dignity and respect?
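The contrast between the batter and the manager can be made concrete. On-base percentage is the standard sabermetric formula (H + BB + HBP) / (AB + BB + HBP + SF), a ratio over clearly counted events; the sample statistics below are invented for illustration. No analogous formula exists for the manager.

```python
# On-base percentage: a quantified performance measure whose inputs each
# count a well-defined event. The sample counts below are invented.

def on_base_percentage(hits, walks, hit_by_pitch, at_bats, sacrifice_flies):
    """Standard OBP: times reached base divided by plate-appearance opportunities."""
    reached = hits + walks + hit_by_pitch
    opportunities = at_bats + walks + hit_by_pitch + sacrifice_flies
    return reached / opportunities

obp = on_base_percentage(hits=150, walks=60, hit_by_pitch=5,
                         at_bats=500, sacrifice_flies=5)
print(round(obp, 3))

# There is no analogous function for "managerial ability": the inputs
# (leadership? processing paperwork?) and the outcome (which horizon of
# profitability?) are neither singular nor unambiguously countable.
```

The point of the sketch is that every term in the OBP formula names an event an observer can count without judgment calls, which is precisely the property a representational ontology of numbers presupposes and which a manager's work lacks.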
The social world features complex nuances of meaning that a representational ontology of numbers struggles to model. As a consequence, although the rationalization perspective provides tools organizational actors can use to control their environment, these tools operate differently when used to control a complex and ambiguous social environment. When Taylor measures humans in their capacity as machines, scientific management works – and may even produce explosive results. When Taylor measures humans exercising nuanced judgments, the representational view struggles, as the sociology of quantification perspective has critiqued. In essence, the rationalization perspective contains boundary conditions related to the certainty and nature of the causal connections that exist in quantified decision-making.

Limitations of the Sociology of Quantification Perspective

The sociology of quantification perspective critiques the notion that numbers represent reality. This perspective specifically highlights and assumes that numbers influence the world through Espeland's mechanism of reactivity. Specifically, sociology of quantification research shows that when humans classify objects numerically through quantification, the quantification process itself may change the underlying object and the way actors interact with it. For example, in the case of measuring human performance, ranking individuals based on some type of quantified, commensurated ranking system creates responses in both the actors measuring and the actors being measured through the mechanism of reactivity. Similarly, quantified rankings influence social interactions and identities through the mechanism of discipline. The theoretical insight that quantification influences the social world through the mechanisms of reactivity and discipline has produced a better understanding of how quantification works. This research uncovers the political dynamics associated with quantification.
Additionally, this research shows that quantification provides actors with a means of obtaining consensus by giving them a way to simplify complexity and make decisions. Quantification also enables society to discover and mitigate harmful trends such as suicide or racism. The sociology of quantification perspective has thus developed a theoretical explanation for some fundamental ways in which quantification influences the social world.

Although the sociology of quantification perspective recognizes that quantification both represents and creates reality (a mutually constitutive ontology of numbers), it provides less insight into the strategic ways organizational actors use quantification. To a large extent, the perspective adopts a political lens on the quantification of decision-making: actors with competing interests manipulate numbers to achieve their interests. Although there are hints in Porter and Espeland that numbers and quantification have the ability to create generative social ideas, their research emphasizes that quantification emerges naturally as a coordination mechanism, and that the use of quantification generates a variety of unintended consequences. The sociology of quantification perspective has dropped the rationalist perspective’s idea that quantification functions as a strategic tool helping organizational actors control their world more effectively, and has replaced it with a political conception of agency.

The Theoretical Gap

I suggest that these two perspectives on quantification differ in two fundamental assumptions. The first assumption relates to the ontological status of numbers (Desrosieres, 2001). On the one hand, numbers can be viewed as representing reality: a “representational” ontology of numbers. On the other hand, numbers can be viewed as representing and defining reality: a “mutually constitutive” ontology of numbers. The second assumption relates to the locus of agency: what impels social action?
On the one hand, humans use numbers to accomplish strategic ends: a “human agentic orientation.” On the other hand, numbers subtly yet significantly shape human behavior: a “numbers agentic orientation.” I visually depict these dimensions in Table 6.

Table 6 – Categorizing Theoretical Perspectives on Quantification

Representational Ontology (Numbers Represent Reality):
- Human Agentic Orientation: Taylor, Deming, Davenport
- Numbers Agentic Orientation: N/A

Mutually Constitutive Ontology (Numbers Represent and Define Reality):
- Human Agentic Orientation: Gap
- Numbers Agentic Orientation: Porter, Espeland

The rationalization perspective assumes a representational ontology of numbers; it also assumes a human agentic orientation. In the rationalization perspective, organizational actors use numbers to quantify and thereby control the world. As they do so, they realize ends such as increased productivity, improved quality, and development of new capabilities. The sociology of quantification perspective, however, assumes a mutually constitutive ontology of numbers; it also assumes a numbers agentic orientation. In the sociology of quantification perspective, organizational actors use numbers to overcome social challenges of coordination, and the resulting uses of quantification influence people in surprising and unexpected ways.

These theoretical perspectives on quantification can be supplemented by research that invokes different assumptions about quantification. Specifically, I suggest that many uses of quantification in organizational decision-making require the assumptions of a mutually constitutive ontology of numbers and a human agentic orientation. Conducting research that relies on these assumptions becomes increasingly important with organizational reliance on analysis of Big Data and the ways in which quantification colonizes complex, multi-faceted decisions that lack clear causal understandings.
In other words, organizational actors use quantification to control their environment (à la the rationalization perspective) even when the phenomena they seek to quantify look less and less like the phenomena featured in the rationalization perspective and more like the phenomena featured in the sociology of quantification perspective. Consequently, scholars should study how organizations actually quantify decision-making processes in social interaction.

Conclusion

Society increasingly uses numbers to quantify phenomena. Scholars have studied the process of quantification from two different perspectives. In the rationalization perspective, scholars suggest that quantification provides a means for individuals and organizations to control their environment more effectively. Quantification helps overcome difficulties related to soldiering, inefficient processes, and biased decision-making. In the sociology of quantification perspective, however, scholars suggest that quantification functions as a social device to achieve consensus. They highlight the ways in which numbers both represent and construct the social world.

Both of these perspectives provide insight into the phenomenon of quantification, but leave a significant gap. While the rationalization perspective emphasizes human agency, it relies on a representational ontology of numbers. This ontology of numbers helps explain phenomena that can be cleanly modeled and quantified, but provides less aid in dealing with more complex types of quantification. Alternatively, the sociology of quantification perspective relies on a mutually constitutive ontology of numbers, but maintains an orientation towards agency that privileges the power of the number over the person. This leaves an opportunity for research that approaches quantification from a perspective highlighting the ways organizational actors strategically use complex, multi-faceted numbers to accomplish objectives.
CHAPTER 3 – CONSTRUCTING STRATEGY TOOLS TO QUANTIFY DECISION-MAKING

Introduction

With the recent advent of Big Data, organizations have leveraged advances in information technology by incorporating quantification into their business processes (Davenport, 2014). Three recent technological changes in particular spur this increasing use of quantification: the availability of massive volumes of data, the availability of different sources of unstructured data, and the ability to obtain and analyze data rapidly using sophisticated algorithms (McAfee & Brynjolfsson, 2012). This quantification, however, often features high degrees of statistical complexity and requires intense involvement from professionals with expertise in quantification (Davenport et al., 2010). To incorporate this expertise into organizational practices, professionals often develop strategy tools organizations can use to analyze this large quantity of data and concurrently quantify business decisions (Jarzabkowski & Kaplan, 2014). In the development of these tools, professional organizations seek to take their expertise in quantification and transfer it to client organizations; the strategy tools function as a device through which client organizations incorporate quantification into their decision-making processes.

Scholars approach this quantification of expertise from two different perspectives: the rationalization perspective (Taylor, 1911; Deming, 1982; Davenport & Harris, 2007) and the sociology of quantification perspective (Porter, 1996; Espeland & Stevens, 1998, 2008). According to the rationalization perspective, professional organizations inscribe their expertise in quantification by developing strategy tools that help client organizations accurately map the parameters of their environment into a mathematical model. According to the sociology of quantification perspective, however, this type of parameter mapping is not likely to be accurate.
Instead, the sociology of quantification perspective views quantification from a political perspective, with more powerful actors inscribing their interests and agendas into the parameters and functionality of the tool. These two perspectives thus present conflicting conceptions of how organizations quantify decision-making processes. To resolve these conflicting perspectives, I ask the research question: how do professional organizations inscribe their expertise in quantification into strategy tools?

To study this question, I conduct a 15-month study relying on intensive participant observation data of Algo-Security, an organization seeking to design strategy tools that quantify decision-making in the security industry. Specifically, Algo-Security draws on mathematical models from game theory to develop decision recommendations for the scheduling and deployment of patrol officers. I entered the field in February 2013, shortly after Algo-Security committed to commercialize their product, and I conclude observations after the deployment of their first commercial strategy tool, a software product deployed in June 2014.

I find that the professional organization inscribes its expertise in quantification into strategy tools through three processes. In prototyping, the professional organization hypothesizes and tests a particular “real-world” application of its theoretical expertise in quantification. In pinging, the professional organization engages its environment through iterative, focused messaging and analyses to refine its messaging and modify features of the strategy tool. In contextualizing, the professional organization attempts to insert its tool into the environment of a target market while maintaining the portability of its core strategy tool. Through these processes, the professional organization both molds its quantification models to the culture of target markets and transforms the culture of target markets through its quantified decision-making models.
This research contributes to research on strategy tools and quantification by developing a process model that explains how professionals develop strategy tools. Prior research emphasizes the manner in which strategy tools are designed to rationalize a complex or uncertain environment by representing the environment as parameters in a mathematical model. Although I observe these processes in prototyping, the subsequent activities of pinging and contextualizing challenge this theoretical account. Specifically, through the processes of pinging and contextualizing, the energy of the professional organization shifts away from accurately representing environmental parameters in the quantified decision-making model, and shifts towards iterative attempts to mitigate domain-specific problems through pragmatic manipulation of its expertise in quantification. In other words, the strategy tool does both less and more than use quantification to rationalize decision-making; instead, the professional organization uses its expertise in quantification as a malleable engine of a strategy tool that domain experts in client organizations can use to address perennial organizational and industry-level challenges in new and fresh ways.

Theoretical Background

With the advent of Big Data, organizations deal with large quantities of complex, unstructured data (Davenport, 2014). Such large quantities of data resist human analysis: organizational actors use statistical algorithms to process and analyze the data. Oftentimes, however, organizational domain experts lack the expertise to conduct such analyses. The statistical expertise required to analyze Big Data resides with individuals popularly described as “data scientists” (Davenport & Patil, 2012). These data scientists rely on specialized algorithms that use mathematical, pre-processed rules to describe and analyze data.
As a result, organizations and their data scientists rely on software technologies to perform these computations. For some analyses, data scientists might use standard statistical packages such as R or SPSS; specialized problems, however, often require outside professional organizations to deploy algorithms inside of existing software. For example, a logistics company might rely on an optimization algorithm to make shipping decisions about which truck to send where. They might incorporate an external optimization algorithm into their existing company software so that their employees can utilize the quantification expertise in decision-making, even when they are not statistical experts.

Scholars describe such software and algorithms as strategy tools (Jarzabkowski & Kaplan, 2014; Whittington, 2006). Jarzabkowski and Kaplan (2014) suggest that strategy tools feature two types of properties of theoretical interest. First, they suggest that strategy tools provide organizational actors with conceptual and material affordances (Orlikowski & Scott, 2008). Jarzabkowski and Kaplan follow Zammuto et al. (2007, p. 752) by defining affordances as “the materiality of an object [that] favors, shapes or invites, and at the same time constrains, a set of specific uses.” This notion of affordance means that the human actor can extend their power over an action or object that they want to control. The notion of affordances also, however, incorporates the concept that strategy tools have the potential of privileging certain perspectives or worldviews. Jarzabkowski and Kaplan emphasize the significance of these affordances:

Strategy tools…have both material and conceptual affordances that shape their use. Strategy tools come with choices embedded in them about what knowledge to privilege…By implication, a strategy tool is not neutral or ‘objective,’ but makes an argument about what is important to analyze strategically and, conversely, what is not. (Jarzabkowski & Kaplan, 2014, p. 6)

When professional organizations incorporate expertise in quantification into strategy tools, then, they provide client organizations with a material affordance (i.e., the ability to perform calculations relevant for a decision automatically and accurately) and conceptual affordances (i.e., by reinforcing and privileging the conceptual assumptions underlying quantified numbers and quantification techniques).

Second, Jarzabkowski and Kaplan (2014) highlight the actors that use the tools. They draw on March’s (2006) description of strategy tools as “technologies of rationality” to suggest that individuals actively use strategy tools to project rationality in social interactions within the bounds of strategy discourse (Knights & Morgan, 1991). When professional organizations incorporate expertise in quantification into strategy tools, they provide actors in client organizations with devices that can be used to generate social consensus and advance political interests.

Scholars leverage this framework to suggest that the affordances and agency provided by strategy tools influence the selection, application, and outcomes that occur from the use of the strategy tools (Jarzabkowski & Kaplan, 2014). Strategy tools thus function as the means through which expertise in quantification influences organizational activities. It is important for scholars to understand how this expertise in quantification, as mediated by strategy tools, creates material and conceptual affordances and facilitates agency. Two different theoretical perspectives on quantification – the rationalization perspective and the sociology of quantification perspective – offer different explanations for how strategy tools that quantify decision-making construct affordances and facilitate agency. The rationalization perspective suggests that strategy tools of quantification function as a material affordance.
Deming, for example, suggests that control charts provide organizational actors insight into the properties and performance of business processes (Deming, 1982). Similarly, a strategy tool of quantification might enable an actor to remove personal bias or process a large amount of information (Davenport et al., 2010; Davenport, 2014). In essence, the rationalization perspective suggests that strategy tools of quantification enable organizational actors to represent and control their environment more effectively.

The sociology of quantification perspective, however, challenges this approach. Specifically, for complex organizational decision-making processes, this perspective highlights the notion that quantification creates conceptual affordances that resolve tensions and difficulties in a complex environment (Espeland & Stevens, 1998, 2008). Because of its emphasis on conceptual affordances, the sociology of quantification perspective suggests that strategy tools of quantification function as venues for political conflict and compromise.

I illustrate the difference between these two perspectives with the example of a logistics company using an optimization algorithm. From the rationalization perspective, the optimization algorithm provides a material affordance by enabling organizational actors to calculate lowest-cost trucking routes rapidly and automatically. This perspective highlights the agency of the organizational leader or designer gaining control over the difficult decision of minimizing transportation costs. From the sociology of quantification perspective, however, the optimization algorithm provides a conceptual affordance by privileging algorithmically modeled objectives like short-term profitability over other organizational objectives such as customer service or employee satisfaction.
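A minimal sketch can make the logistics illustration concrete. The cost matrix and function names below are my own hypothetical illustration, not drawn from any organization in this study; the code simply brute-forces the lowest-cost assignment of trucks to routes – the kind of calculation the rationalization perspective treats as a material affordance:

```python
from itertools import permutations

# Hypothetical data: cost[t][r] = cost of sending truck t on route r.
# Note that only monetary cost is modeled; objectives such as driver
# satisfaction or customer service have no column in this matrix.
cost = [
    [4, 9, 3],
    [8, 7, 5],
    [6, 2, 10],
]

def cheapest_assignment(cost):
    """Brute-force the minimum-cost one-truck-per-route assignment."""
    n = len(cost)
    best_routes, best_total = None, float("inf")
    for routes in permutations(range(n)):
        total = sum(cost[t][r] for t, r in enumerate(routes))
        if total < best_total:
            best_routes, best_total = routes, total
    return best_routes, best_total

routes, total = cheapest_assignment(cost)
print(routes, total)  # truck t is sent on route routes[t]
```

Which objectives receive a column in the cost matrix is precisely the kind of embedded choice – the privileging of one measurement regime over others – that the sociology of quantification perspective emphasizes.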
This perspective highlights the political agency of different factions in the organization, and the ways in which they manipulate and react to the affordances provided by the strategy tool of quantification.

These theoretical accounts explain how professional organizations design strategy tools that quantify complex, multi-faceted decisions. According to the rationalization perspective, the efforts of the professional organization would focus on the mapping of the environment into the model. In the sociology of quantification perspective, however, the efforts of the professional organization would focus on the techniques of commensuration required to adjudicate competing interests. To evaluate this tension, I ask the research question: how do professional organizations inscribe their expertise in quantification into strategy tools?

Methods

To answer my research question, I use an inductive, grounded-theory research method (B. Glaser & Strauss, 1967; Gioia et al., 2012). To develop grounded theory, I locate an empirical context in which a professional organization inscribes expertise in quantification into strategy tools. This research question calls for the collection of data from participant observation over an extended period of time. As Eliasoph and Lichterman (1999) observe, extended observation of social interaction allows the researcher to identify concepts grounded in an empirical setting and then connect those concepts to existing theoretical perspectives. While I develop a theoretical model tightly connected with the substantive context of my field site (B. Glaser & Strauss, 1967), I also interact with existing theoretical literature to develop interpretations of my data that extend current theory (Burawoy, 1998). For my study, I select a professional organization – Algo-Security – that specializes in the application of game theory to competitive interactions in markets such as the security industry.
I now describe the empirical context and my approach to data collection and analysis.

Empirical Context

Algo-Security, an entrepreneurial company founded in early 2013, attempted to commercialize a series of patented game-theoretic algorithms. Algo-Security licensed these patents from Gaming Expert, a university-based research organization. Gaming Expert specialized in the development of security studies research. They invented game-theoretic algorithms that facilitated the quantification of decision recommendations for resource deployment in competitive situations (i.e., providing recommendations about where to send security resources like patrol officers).

The security industry, consisting of public law enforcement agencies (ranging from the police department to the Federal Bureau of Investigation) and private security organizations (ranging from providers of local manpower to staff sporting or entertainment events to large private security organizations such as Wackenhut), seeks to prevent criminal activity (ranging from petty crime to felonies to terrorism). Security organizations strive to prevent criminal activity by deploying resources (ranging from patrol officers to K-9 dog units to drones) to protect targets criminals might attack.

In order to optimize the deterrence value of their limited security resources, security organizations face two fundamental challenges. First, adversaries often observe the patrol patterns of security personnel and seek to exploit patterns of repeated behavior. Consequently, security organizations attempt to deploy resources randomly. Second, security organizations usually protect a variety of targets that cannot be protected full time. As a result, security organizations must choose which targets to prioritize. Security organizations traditionally make these decisions through officers who use intuitive, tacit knowledge gained from lengthy experience in their domain.
Algo-Security’s game-theoretic algorithms provided a quantified methodology for making security decisions that optimized these objectives. Specifically, Algo-Security used a Bayesian-Stackelberg game-theoretic approach to model adversarial security situations. These game-theoretic models contained assumptions that seemed to align with the security context. They assume two parties – a defender and an adversary – play a repeated game. The defender moves first, and the adversary is able to observe the defender. Based on watching the defender, the adversary chooses a target to attack. The Bayesian-Stackelberg methodology provides mathematically optimal recommended strategies for the players. For the defender, this results in a “mixed strategy” protecting different targets randomly. The use of game theory gives the defender the ability to deploy security resources randomly while still protecting more important targets more frequently. Calculating these optimal game-theoretic strategies can be quite challenging from a computing perspective. For example, if a security organization has ten resources it can use to protect 100 targets, it must choose a strategy from 1.73 × 10^13 potential strategies (Field Notes, Archival Materials).

Algo-Security believed their quantification expertise in game theory had a wide variety of potential applications. During my time in the field, they explored the possibility of more than twenty different potential markets (see Table 7 for a detailed list). They started, however, by seeking to obtain contracts from law enforcement organizations with which their licensor, Gaming Expert, had deployed prototypes. Specifically, Gaming Expert had built prototypes to make security scheduling decisions for patrol officers protecting airports, ports and waterways, and transportation systems (i.e., rail, bus, or airplanes).
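The 1.73 × 10^13 figure cited from the field notes corresponds to the number of ways to choose which ten of 100 targets to cover – a binomial coefficient. A quick standard-library check (my own verification, not Algo-Security's code) confirms it:

```python
import math

# Each pure defender strategy covers a distinct subset of 10 of the 100 targets,
# so the strategy space is "100 choose 10".
strategies = math.comb(100, 10)
print(f"{strategies:,}")    # 17,310,309,456,440
print(f"{strategies:.3e}")  # ≈ 1.731e+13, matching the field notes
```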
These prototypes had been funded by government research grants, and had only been deployed on a limited, research-focused basis; they were not commercially deployed. Algo-Security thus sought to convert these prototypes into paying customers. To complement these efforts, Algo-Security considered expanding to other law enforcement agencies (such as police departments or border patrol). They also considered targeting other unrelated industries with competitive interactions that could be characterized by Bayesian-Stackelberg assumptions. These other potential target markets included cyber security (where should IT specialists scan networks), social media (where to spend advertising dollars given a competitor), and auditing of medical records.

Table 7 – Algo-Security Target Markets

Public:
- Airports
- Rail
- Waterways and Rivers
- Airways
- Border Patrol
- Nuclear Facilities
- World Bank/Human Trafficking
- TSA Randomizer
- Indian Government
- Drones, Cameras, and X-Ray Machines
- Scheduling Parole Appointments

Private:
- Private Security
- Health Care Records
- Partnerships with Large Security Software Companies
- Sports Venues and Event Security
- Malls
- Test-Takers
- Cyber-Security
- Poaching/Tigers
- Social Media Analysis
- College Campus Security

To make the prototype work for a client organization, the professional organization had to convert their expertise in game-theoretic quantification into a strategy tool. The game theory was both too sophisticated for typical security decision-makers to understand and too complex even for a game theory expert to calculate. Additionally, scheduling decisions deploying patrol officers lack a direct, unique causal connection with criminal outcomes. In this context, the rationalization perspective is challenged due to the inability of a quantitative model to consider all of the calculations.
With the game-theoretic model, for example, Espeland’s notion of commensuration occurred through a metric of “expected defender utility,” and commensuration had to take place in order to assign “payoffs” to targets. Thus this empirical context serves as an extreme case (Pettigrew, 1990) and an ideal context in which to see how professionals inscribe their expertise in quantification into strategy tools.

Data Collection

My primary source of data is field notes from participant observation of Algo-Security from February 2013 through June 2014. Specifically, I primarily participated in two types of activities. First, I attended weekly business meetings facilitated by Byron Jones, the Chief Executive Officer of Algo-Security. Participants in the weekly business meetings included the founders who developed the original game-theoretic models (professor Corey Fisher and recent PhD graduates Clifton Pineda and Joseph Harris), advisors who still worked for Gaming Expert (Donald Morgan and Patrick Yearby), and a computer programmer (Danielle Perez). These meetings lasted anywhere between one and three hours, and covered topics related to prospective customers, industry networking opportunities, investor fundraising, and the design and functionality of the strategy tool. In addition to these meetings, I interacted throughout the week with various personnel to develop a variety of marketing and investor collateral used as company sensegiving materials.

Second, I observed and participated in two deployments of the strategy tool. The first deployment I participated in was the development of a prototype system for a transportation system in a large metropolitan area (described in more detail in Chapter 4). The second deployment I participated in was the development and deployment of the first commercial strategy tool, a software product deployed at Western Airport. During participant observation, I was able to observe activities and also engage in informal interviews with staff.
I recorded this data by taking field notes. Because of the organic, informal nature of my participant observation, I brought a notebook and pencil with me to all meetings, and took detailed notes during every interaction. As soon as possible after the field time, I re-created my handwritten notes in typed format to facilitate future coding and to ensure the clarity and comprehensibility of my real-time notes.

I supplemented these field notes with archival materials. Of particular importance to my research design was my capture of iterative organizational sensegiving materials. Specifically, throughout my time in the field, Algo-Security created different documents to clarify their understanding of the functionality of their products and services (i.e., their strategy tool) and their business model. I detail these documents and highlight the time of their creation in Table 8 below. In addition to these sensegiving materials, I also captured e-mail correspondence, academic presentations made by Gaming Expert personnel about the game-theoretic approach to security scheduling, and written procedures for security scheduling.

Table 8 – Algo-Security Sensegiving Documents

- December 2012: Presentation for Initial Funding
- March 2013: Marketing Two-Pager
- April 2013: Presentation: Algo-Security “Pitch Deck” 1.0
- May 2013: Website 1.0
- August 2013: Website 2.0
- September 2013: VC Two-Pager; Presentation: Algo-Security “Pitch Deck” 2.0
- October 2013: VC One-Pager; Presentation: Algo-Security “Pitch Deck” 3.0
- December 2013: Product Datasheet
- January 2014: Website 3.0; Presentation: Algo-Security “Cyber Pitch” 1.0
- February 2014: What We Do; Presentation: Western Airports Proposal
- March 2014: Presentation: Algo-Security “Cyber Pitch” 2.0
- May 2014: All Presentations Consolidated and Updated

Data Analysis

To analyze my data, I follow Gioia et al.’s (2012) recommendations for the analysis and presentation of qualitative research.
Prior to coding my data, I organized my data by time (i.e., by month) so as to identify and analyze temporal dynamics in my data. I created a narrative case history and timeline describing what happened during my fieldwork. After doing this, I coded my data using the constant comparative method (Strauss & Corbin, 1998). Specifically, I engaged in three distinct rounds of coding (Gioia et al., 2012). In the first round of coding, I identify the concepts and categories associated with Algo-Security’s design of the strategy tool using game theory to quantify decision-making. Concepts in the first round of coding are tightly tied to the substantive empirical context (Strauss & Corbin, 1998). In a second round of coding, I take a step back and look to identify more abstract, theoretically relevant codes that I label second-order themes and aggregate dimensions. The combination of the first and second rounds of coding creates the data structure, which I reflect in Figure 2 below. In a third and final round of coding, I look for the dynamic relationships between the theoretical themes, and create a dynamic model that serves as a grounded theory explaining how professionals inscribe their expertise in quantification into strategy tools.

Figure 2 – Data Structure: Constructing Strategy Tools for Quantification of Decision-Making

Findings: Developing a Game-Theoretic Tool for Strategic Decision-Making

Following methodological recommendations for qualitative research, I initially organize my data by the theoretical themes of the data structure (see Figure 2). This data structure does not represent a dynamic model, but rather represents “the core concepts and their relationships that served as the basis for the emergent theoretical framework and a full grounded theory model” (Gioia, Price, Hamilton, & Thomas, 2010, p. 13).
I organize my findings around the three aggregate dimensions that describe core concepts associated with a professional organization’s design of strategy tools: prototyping, pinging, and contextualizing. In prototyping, the professional organization applies academic insight to a particular domain, develops a quantified decision model, and deploys the decision model in a limited environment. In pinging, the professional organization makes sense of its environment by probing a variety of audiences with test messages and updating its evaluation of its market focus. It incorporates this environmental sensemaking by wordsmithing its messaging to new audiences and modifying the core features of the strategy tool. In contextualizing, the professional organization embeds the strategy tool in a specific domain and simplifies the quantified decision-making model while maintaining portability of the strategy tool to other domains.

Prototyping

In prototyping, the professional organization identified a decision-making problem that its expertise in quantification could aid. In order to do this, they had to identify an academic insight that they could apply to a problem in a particular domain. Once they obtained a deductive idea about what they could do with their expertise in quantification, they had to develop a quantified decision model. They then deployed their model in a limited environment, testing the model to ensure its effectiveness and applicability. In the prototyping phase, the professional organization aimed to represent a domain-specific problem faithfully in mathematical terms.

Applying Academic Insight. Gaming Expert and Algo-Security professionals recognized that the security industry faces a fundamental problem: how do security providers deploy limited resources effectively? Algo-Security academic experts believed that academic knowledge related to game theory could help address this problem.
Originally the Western Police Department approached Gaming Expert (led by professor Corey Fisher) to help them solve a problem: how do they (the police department) deploy their limited resources to provide security at an airport? Corey translated this into a more generic question: how do we take our limited resources and deploy them in an intelligent fashion? (Field Notes, 2/2013)

Security organizations struggled to be random – and consequently unpredictable to an intelligent adversary. Throughout the time of my fieldwork, law enforcement personnel always seemed to assume this fundamental objective. Sergeant Aponte of the Western Police Department, for example, described his main objective in deploying patrol officers as “randomness”: “my goal is to keep the criminals on their toes; we have a lot of resources – we want to make it impossible for criminals to know what teams and what tactics they might face” (Field Notes, 3/2013). Leadership in security organizations also struggled to control field security personnel in order to avoid predictable patterns of behavior. Algo-Security programmer Joseph Harris described the problem in this way:

One of the big problems with security systems is that security guards like to follow certain habits. For example, if a guard likes to get coffee at a particular shop, no matter where you schedule him to start, he will end up at that location at the particular time. You’ll see this with private security guards a lot, they’re at the donut shop. (Interview, 3/2013)

The security industry thus faced challenges related to randomness and to the control of their field personnel.

Game theory seemed to offer an ideal solution to these problems in the security domain. I described this in field notes:

[Algo-Security] applies game theory to the security industry. Specifically, they use game theory to model a security problem that can be optimized mathematically. They model the security problem as a game.
A game consists of (1) players, (2) actions that the players can take, and (3) payoffs that result from the actions that each player takes. (Field Notes, 2/2013)

What made game theory particularly attractive for the security domain was its assumption that players with different interests compete. This meant that the Algo-Security strategy tool was “more cerebral with how [it] looked at adversaries than other applications are” (Field Notes, 3/2013). Gaming Expert and Algo-Security used advanced game theory to model adversarial interactions.

Corey said that in a game theory approach they predict how the adversary reacts to their [a security provider’s] move. The traditional problem with game theory has been the cycling problem: how do you get out of the cycle of “if I do this, they do this, I do this.” Corey says that they get away from this by using Stackelberg models. The idea being that you see what I’ve done and then adapt to what I’ve done. In other words, one side [i.e., the security provider] reveals all of their cards, and everything becomes simpler to model than traditional game theory. (Field Notes, 3/2013)

This methodology enabled Algo-Security to develop a technique whereby “computer-generated randomness can remove the ability adversaries have to exploit repeatable patterns in security practices” (Field Notes, 2/2013). In summary, Gaming Expert and Algo-Security applied academic insight by recognizing that game theory provided a system that could quantify decision-making processes for the security industry. Specifically, game theory provided a mechanism security providers could use to quantify decisions to deploy field personnel to patrol locations unpredictably.

Developing a Quantified Decision Model. Once Algo-Security identified the potential of game theory to address the issue of scheduling security resources, they had to identify the relevant parameters to model the decision environment of the security provider.
Gaming Expert did this by narrowing the description of the security environment to the following questions:

If you are a new customer, I ask you the following questions: (1) can you identify a set of targets you need to protect? (2) what resources are available to you? (3) what are the capabilities of those resources? (4) can you provide some metric of how important each target is? (Interview, 3/2013)

Gaming Expert called this process developing a “game matrix.” The game matrix described the environment. They described this process for a particular example, Western Airport:

For example, at Western Airport, they sought to protect the airport against a vehicle-based IED (improvised explosive device or bomb). After this, we construct a game matrix or a payoff matrix. Basically we create scenarios where the adversary wins dollars, and we lose dollars. In creating the game matrix, we have to work with users to think about what attackers observe. We spend time understanding what is going on in the real world so we can take real-world constraints and build them into the model: we take the information about the real world and optimize the decision-making process. (Interview, 3/2013)

Gaming Expert thus created a structure that could be used to map the environment of the security provider into a Bayesian Stackelberg game-theoretic model. They also needed to identify the appropriate real-world decisions to model. Game theory enabled the modeling of a scheduling decision: the assignment of a security resource to protect a particular security target. In the aforementioned example, a security provider might protect a facility from IEDs by establishing motor vehicle checkpoints on incoming roads. The scheduling decision would assign particular security resources to protect that checkpoint at a particular time.
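The intake questions and the “adversary wins dollars, we lose dollars” logic described above can be sketched as a simple data structure. This is a hypothetical illustration only: the target names, dollar values, and the rule for generating payoffs are invented for the sketch and do not reflect Gaming Expert’s actual (patented) model.

```python
# Hypothetical sketch of a "game matrix": for each target, record the
# (defender, attacker) payoffs when the target is covered vs. uncovered.
# All names and values below are invented for illustration.

def build_game_matrix(targets, importance):
    """Map the intake answers (targets, importance metric) into payoffs.

    Scenarios where the adversary "wins dollars and we lose dollars"
    appear as negative defender / positive attacker entries; the rule
    used here (covered attacks cost the attacker half the target value)
    is an arbitrary placeholder."""
    matrix = {}
    for t in targets:
        value = importance[t]
        matrix[t] = {
            "covered":   {"defender": 0,      "attacker": -value // 2},
            "uncovered": {"defender": -value, "attacker": value},
        }
    return matrix

game = build_game_matrix(
    targets=["vehicle checkpoint", "terminal entrance"],
    importance={"vehicle checkpoint": 10, "terminal entrance": 6},
)
```

The point of the sketch is only that the four intake questions translate directly into a payoff table – the input the optimization algorithm described below consumes.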
Once the environment had been established via the construction of a game matrix and the decision output was identified, Gaming Expert had to develop a mathematical algorithm to “optimize resources.” Gaming Expert executive Donald Morgan described the algorithm:

We assign values to a bunch of different targets; we assign costs to different resources; and then in theory it becomes a very simple math set of equations. In practice, however, the computer takes a long time to solve these equations. The patent we have is on the numerical technique to program how the algorithm can be calculated. If you don’t have our technology, and something changes to the schedule, you would have to re-run the algorithm and it would take a long time. (Interview, 2/2013)

Gaming Expert thus identified the relevant parameters to model the environment, identified the appropriate real-world decisions to model, and developed a mathematical algorithm to make the decisions optimally.

Deploying in a Limited Environment. This theoretical application of game theory had to be tested. Gaming Expert performed tests in several different venues reflecting four specific applications of the general concept described above. Clifton Pineda, a Gaming Expert PhD student who eventually became the lead engineer for Algo-Security, summarized these test deployments:

Clifton says that he believes they have created four solutions: (1) setting up static checkpoints to protect static targets; (2) setting up static checkpoints to protect mobile targets; (3) setting up mobile checkpoints to protect mobile targets; (4) setting up mobile checkpoints to protect static targets. These different solutions link up with the land/air/sea/rail deployments Gaming Expert has done to date. (Field Notes, 3/2013)

I observed a particular limited deployment for rail (described in more detail in Chapter 4 of this dissertation).
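The Stackelberg logic described above – the defender commits to a randomized strategy, an observing adversary best-responds, and the defender optimizes against that response – can be illustrated with a deliberately tiny sketch. This is not Gaming Expert’s patented algorithm (whose value lies precisely in solving large games efficiently); it is a brute-force illustration with invented payoffs, assuming one defender resource and two targets.

```python
def solve_security_game(target_a, target_b, steps=1000):
    """Brute-force a two-target Stackelberg security game, one resource.

    Each target is a tuple of payoffs:
    (defender if covered, defender if uncovered,
     attacker if covered, attacker if uncovered).
    The defender (leader) commits to covering target A with probability c
    (target B gets 1 - c); the attacker (follower) observes this mixed
    strategy and attacks the target with the higher expected payoff. The
    defender searches for the c that maximizes the induced payoff.
    """
    dA_c, dA_u, aA_c, aA_u = target_a
    dB_c, dB_u, aB_c, aB_u = target_b
    best_c, best_payoff = 0.0, float("-inf")
    for i in range(steps + 1):
        c = i / steps
        # attacker's expected payoff for each possible attack
        attack_a = c * aA_c + (1 - c) * aA_u
        attack_b = (1 - c) * aB_c + c * aB_u
        # attacker best-responds; defender receives the induced payoff
        if attack_a >= attack_b:
            defender = c * dA_c + (1 - c) * dA_u
        else:
            defender = (1 - c) * dB_c + c * dB_u
        if defender > best_payoff:
            best_c, best_payoff = c, defender
    return best_c, best_payoff

# Invented payoffs: a high-value target A and a low-value target B.
coverage, payoff = solve_security_game((0, -10, -5, 10), (0, -2, -1, 2))
```

In this invented example the optimal commitment splits coverage probabilistically rather than deterministically guarding the high-value target, so an adversary gains little by watching the patrols – the “randomness” objective Sergeant Aponte described.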
Early on, Gaming Expert developed a game-theoretic model to aid Metropol, a western law enforcement organization, in their efforts to minimize fare evaders on the train system.

Their second client is Metropol and a rail system in a large western city that has 80,000 passengers a day. It’s impossible for Metropol to police more than a fraction of the cars. Gaming Expert models all of the routes as possible mathematical flow models of passengers, and then superimposes police actions on top of this passenger model. Early results have been positive – Metropol gets a higher “interdiction” rate to stop fare jumpers. But they also want to use the system eventually to patrol against other crimes such as robberies or counter-terrorism. (Field Notes, 2/2013)

In these initial deployments, Gaming Expert identified narrow, specific scheduling decisions they could automate with their game-theoretic expertise. After they identified a particular application, they had to map numbers into the model developed. In other words, Gaming Expert and their client organizations needed to fill out the game matrix. In the rail application, for example, they worked to extend the algorithm from fare evaders to counter-terrorists. To do so, they worked to construct a payoff matrix for a set of train stations.

Basically the way the algorithm works is that they establish ten different stations with three levels per station (above ground, below ground, and the mezzanine level). Each of the stations can have “higher interest” or “lower interest” depending on the “value” of the target...Each train station has a different weight that we get from Sergeant Aponte. How do you patrol them taking into account both the weights of the different stations and the theory that an adversary is watching your patrols? We solve this as a mathematical problem.
(Field Notes, 3/2013; Field Notes 4/2013)

By mapping numbers into the quantified decision model, Gaming Expert represented the security provider’s environment with a game-theoretic mathematical model. Gaming Expert worked with their client security organizations to test the model. Metropol, for example, wanted to start with fare evasion because it “offered an opportunity to test – tangibly – the ability of the Gaming Expert algorithm to optimize security deployments” (Field Notes, 4/2013). Gaming Expert and Metropol worked together to extend this test by comparing game-theoretic decision-making models with previous decision-making models.

For the test, the real question will be how the randomized deployment works relative to a non-randomized deployment. They will be performing three “sorties” – a 25% coverage, a 50% coverage, and a randomized coverage of the area. The idea is to compare resource allocations between these different deployment schedules. (Field Notes, 3/2013)

In testing the decision-making model, Gaming Expert and their client security organizations evaluated whether the quantified game-theoretic model was able to accurately represent the real environment of the security organization.

Summary. To summarize, Gaming Expert developed a prototype that provided a preliminary test of the ability of game theory to quantify decision-making processes. In developing the prototype, they had to leverage a fundamental insight about how game theory might be used in the context of the security industry. Once Gaming Expert identified a particular application, they had to develop a theoretical model to quantify a particular decision-making problem, and then test that model. They deployed the model to see whether or not this decision-making model accurately represented the dynamics of real-world security scheduling challenges. Interestingly, Algo-Security followed a representational ontology of numbers in the prototyping phase.
In this phase, they attempted to show that their methodology could be used to automate particular scheduling decisions. They did not focus much energy on commensuration processes, using broad, intuition-based judgments to commensurate parameters such as resource capabilities or payoffs. In summary, prototyping seems to reflect what might be considered a traditional approach to the development of strategy tools: a professional organization takes their expertise in quantification and uses it to solve certain problems.

Pinging

As I tracked the development of the game-theoretic strategy tool of Algo-Security over time, however, I observed theoretical themes that diverge significantly from prototyping. I group these themes under the label of “pinging.” I use the term pinging because of its association with sonar, in which navigators probe their environment with short pulses that provide rapid feedback about their environment. They can use this feedback to adjust their actions. Pinging thus consists of an actor probing their environment and adjusting their actions based on the results of the probes.

In my fieldwork, I observed Algo-Security probing their environment in two different ways. First, they probed a variety of different audiences with test messages to gauge reactions to different ways of describing their product and business model. Second, they engaged in internal dialogue about different markets in a way that not only enabled them to evaluate the relative pros and cons of different markets, but also helped them clarify and focus the direction of their product. Algo-Security responded to these probing activities in two ways: they wordsmithed their messaging to identify what concepts resonated across potential markets, and they modified the core features of their program to mitigate fundamental pains in the security industry.

Probing with Test Messages.
Algo-Security constantly probed their environment by telling their story to a variety of different audiences. One of the most common ways they would test their product was by presenting a description of the product and the business model to investors. What is particularly interesting about their presentations to investors is that Algo-Security management did not necessarily want to obtain external funding. Investors, however, provided a smart and critical audience Algo-Security could use to obtain information about how outsiders viewed their product. Early on, for example, I recorded the following field notes after an informal conversation with Byron Jones, the Chief Executive Officer of Algo-Security:

Right now the company has about $80,000 in seed funding. They need to get a plan in place to get real funding. They need to construct their story, to create a potential strategy. They have to figure out what arenas to target, how much to charge, who their competitors are, and what their strategy/positioning will be. Right now Byron’s immediate goal is to get a minor, paying maintenance contract from one of Gaming Expert’s prototype clients. (Field Notes, 2/2013)

Thus from the beginning Algo-Security planned to use the investor conversations to generate information. They did not necessarily want to avoid obtaining investor funding, but at the same time they were very comfortable with (and even preferred) self-funding from internal revenue. An example of this occurred on a trip in April 2013 when Byron went to an investor conference. He described the results of the interactions with investors:

Byron said that the investor meetings went very well and that most people had positive feedback. For example, the Boston Harbor angel association asked him to present to them. But Byron said that he is not comfortable getting in front of investors yet for a more detailed conversation.
There were two pieces of feedback that were pretty consistent from all investors: first, what does the up-front process look like? Second, how does the algorithm integrate with data analytics? (Field Notes, 4/2013)

Note that even at this early time, the investor feedback begins to push Algo-Security in the direction of analytics, which is completely absent from the prototyping phase (which is more focused on automation). This type of interaction continued throughout my fieldwork. It became particularly salient during October and November 2013, when Far West Ventures, a local venture capitalist, conducted preliminary diligence on Algo-Security. Far West asked questions about machine learning:

My question is, how does game theory apply to machine learning? As I understand, Algo-Security is using statistical models to determine most probable threats based on recent actions or events. In my experience, game theory is based on how groups of people interact, not machine learning…Trying to understand the use case. (e-mail correspondence, 9/2013)

In discussions about this correspondence, Clifton and Byron felt that the venture capitalist did not completely understand Algo-Security’s technology. I recorded an interaction between Clifton and Byron in my field notes:

Byron says that the VC isn’t technologically an expert, and Clifton agrees with that assessment based on the question [about machine learning]. Clifton says that machine learning and the game theory algorithm are two distinct concepts. He then talks about how machine learning really is about how data is analyzed. (Field Notes, 9/2013)

The interaction with the venture capitalist, then, enabled Algo-Security to present their business model to outsiders and obtain a gauge of outsider reactions. In addition to conversations with investors, Algo-Security also engaged in informal industry networking. In April 2013, for example, Byron attended a European conference on terrorism.
Also, throughout my time at Algo-Security, Patrick Yearby – a Gaming Expert business development director and an official advisor to Algo-Security – set up several informal meetings with industry experts. These informal interactions provided the opportunity for outsiders to react to Algo-Security’s products, and possibly develop into informal partnerships. One example occurred early on in July, when Byron had some informal conversations about the applicability Algo-Security’s technology might have for cyber-security. He had several informal lunch interactions with a software product manager, probing to identify whether there might be a partnering fit. Ultimately the Chief Executive Officer of the software company decided to “pass on a partnership with Algo-Security for now” because they were committed to providing “complete coverage of endpoint monitoring to our customers” (e-mail correspondence, 9/2013). Yet this set of interactions provided Algo-Security personnel with preliminary feedback about cyber-security as a potential product market.

Algo-Security also probed the environment by pitching customers in different markets. Throughout my time in the field, Algo-Security personnel constantly put on mini-presentations for potential prospects. They would explicitly not try to accomplish too much during these meetings. In June 2013, the Algo-Security team talked about a preliminary conversation with the Western airport. In discussing how to approach a long-awaited meeting – finally scheduled – to convert the prototype client into a paying customer, they emphasized the importance of going slow.

Byron says, the story is that what we have is in limbo. We have to be careful. We don’t want to close anything in this meeting; we want to get a follow-on meeting. Patrick agrees, saying that at the airport we have a six-year, not a six-day, relationship. Byron reinforces Patrick’s point: he says that this will be a first meeting, and then we want to get a follow-up meeting.
(Field Notes, 6/2013)

Customer conversations thus provided opportunities to obtain feedback from customers; they specifically did not reflect a traditional pitch of software features.

Updating Evaluation of Market Focus. Algo-Security also probed their environment by internally evaluating different markets. One way that they attempted to understand their environment was by crystallizing their understanding of “pain” in different markets. Unlike the prototyping stage, where the focus was on whether the game-theoretic decision model could represent a real-world decision, in the pinging stage the Algo-Security experts tried to answer the “so what” question by identifying concrete ways the game-theoretic algorithm could improve a particular domain. Byron started a conversation in a weekly meeting by saying, “we need to be able to answer the question, what is the pain in the market?” (Field Notes, 5/2013). Joseph, one of the Algo-Security programmers, answered this question for one of the prototype domains:

Joseph said that we solve two pains for law enforcement organizations protecting mobile targets. The first pain is scheduling. This particular client is an extreme case: not even looking at optimality, it took their scheduling people two weeks and three full people to schedule a month’s worth of officers on mobile targets. The second pain is the ability with scheduling to account for parameters such as risk, population, and scheduling constraints to schedule more intelligently. The pain that we solve is that we can analyze the constraints related to not having enough resources and too many parameters. (Field Notes, 5/2013)

By verbally thinking through the market, Byron and Joseph narrowed their focus to the ways that their tool streamlined scheduling processes and simplified a complex environment. This type of analysis crystallized in discussion about the website.
At this time, Byron was suggesting that the website feature descriptions of pains and solutions for different target markets. Two target markets highlighted on the proposed website were “transportation” and “event security.” For each market, the proposed website highlighted a core problem and corresponding solution. The two markets can be compared in Table 9 below.

Table 9 – Proposed Website Comparing Transportation and Event Security Markets

Transportation

The Problem: Security providers in the transportation industry face significant challenges: they must protect a combination of moving targets (i.e. trains, buses, airplanes) and stationary locations (i.e. train stations, bus stations and stops, airports) from a variety of threats (ranging from common criminals such as thieves or fare evaders to terrorists). They must protect these targets with limited resources. The question they must answer: how do they allocate their resources to minimize their risk exposure to these threats?

The Algo-Security Solution: Every security provider has to answer this question by scheduling their patrol officers and resources to do certain activities in particular areas. Algo-Security helps security providers do this optimally by using an algorithm to deploy available security resources to the right place at the right time, randomly! Bottom line: we enable security providers to give clients more protection with fewer resources.

Event Security

The Problem: Event security providers face significant challenges: they must protect a venue from a variety of threats for a short, intense period of time. Event security providers protect event-goers from crime, spontaneous acts of violence and destruction, and possibly terrorism. This security occurs as event-goers enter the facility, in the broader area surrounding the facility, and during the activities of the event itself. Determining where to deploy can be challenging depending on the type of event and where people are seated – and how do security providers respond to the dynamic incidents of the event itself?

The Algo-Security Solution: Every security provider has to answer this question by scheduling their patrol officers and resources to do certain activities in particular areas. Algo-Security helps security providers do this optimally by using an algorithm to deploy available security resources to the right place at the right time, randomly! Bottom line: we enable security providers to give clients more protection with fewer resources.

- Proposed Copy for Website, 5/2013

Note that in the comparison between these two markets, Algo-Security uses a description of the problems the market faces to position their solution. Algo-Security also updated their evaluation of which market to focus on by analyzing competitor capabilities in different markets. One example occurred through their discussion of PredPol, a new technology provider that offered law enforcement agencies the prospect of “predicting crime in real-time.” In early 2013, for example, PredPol started to receive a large quantity of media coverage (Healy, 2013). Byron and I struggled to differentiate between what PredPol did and what Algo-Security did. By working through this, we began to more clearly understand the capabilities and positioning of Algo-Security.

I observe that I am thinking that Algo-Security actually does everything that PredPol does, and more. I’m not sure about this, but it seems to me that the second part of the Algo-Security process – creating the game board – is actually doing the same thing that companies like PredPol do with their predictive model. It seems like you can’t allocate resources efficiently if you don’t know where the likelihood of attacks are…after an extended conversation, Byron agrees with me. He words it a little differently: it’s a chess game, and Algo-Security makes it so the adversary cannot protect himself against your move.
The question that neither of us knows is what the difference is between the way that Algo-Security sets up the gameboard and the way that PredPol would predict where future crime is. But it is clear that PredPol doesn’t tell the officers where to go – they just highlight where potential hot spots will be. They don’t assume limited resources; they don’t model the tradeoffs associated with targets/security resources. (Field Notes, 3/2013)

This topic continued to function as a way to probe the environment in Algo-Security meetings. In May 2013, for example, a business advisor asked, “how is PredPol getting so much traction?” (Field Notes, 5/2013). This question was followed by a significant amount of conversation and a consensus that “we need to really nail down what product, exactly, we are providing, and what the story behind the product is” (Field Notes, 5/2013).

Algo-Security also probed the market based on their evaluation of long-term market potential. The organization struggled to quantify the long-term potential of different market niches. For example, should Algo-Security target rail or event security? Cyber-security or cameras? During my field time, for example, Algo-Security discussed the potential of more than twenty different markets (see Table 5). Byron did not see Algo-Security pursuing all of these markets simultaneously; he valued focus on particular target markets. But the discussions of these markets helped organizational members contemplate and articulate the essential value of their intellectual property. Ideally, they wanted to articulate their technology so that they could obtain a dominant position in the security industry. This was often discussed in terms of an analogy to Oracle.

Byron then says that we want to be the Oracle of security. On Oracle’s website, you see that Oracle does two things: (1) they sell Oracle as a database, and (2) they sell Oracle as a solutions provider.
We provide security solutions for rail, for airports. (Field Notes, 7/2013)

By discussing the long-term market potential of different markets, Algo-Security sought to find similar pains and problems in different industries, and then tie their solutions to address those pains. Their goal was to understand the environment in order to make their products and services “must-haves” for their target markets (Field Notes, 1/2014).

Wordsmithing Messages. These probes of the environment were closely related to constant wordsmithing of their public messaging. The messaging subtly shifted over time as Algo-Security responded to the feedback from their probes. In their original presentation to receive funding, for example, they summarized their business model with the following phrase:

Algo-Security simplifies the intelligent randomization of security assets, minimizing risk at optimum cost. (Presentation Materials, 12/2012)

After preliminary investor meetings, however, Byron emphasized the importance of shifting away from the word randomized, and recommended replacing it with “optimize.”

The word randomized has a negative connotation. We know what it means. [By unsaid contrast, he suggested that investors did not know what it means] The word optimize is better than randomize. We optimize your security resources…Randomize we change to optimize. They say the same thing, it just has more punch to it. (Field Notes, 4/2013)

As time passed, Algo-Security elaborated on this concept. In a discussion about website copy, they summarized their products and services:

The solution Algo-Security offers is game theory intelligent analytics. Algo-Security software saves security providers time and money. Algo-Security takes data and puts it into an optimization engine that creates a schedule. In other words, Algo-Security identifies and optimizes to mitigate risk. (Field Notes, 6/2013)

The use of these terms became increasingly integrated with the identity of the company.
This identity subtly shifted away from game theory, and this change in emphasis had a significant long-term impact on the strategy tool.

Byron says that for us, this is the importance of using terms like “optimization” instead of “randomization.” Unpredictability is part of optimality, but it’s not an end in and of itself. He reiterates his thought that Algo-Security wants to be the Oracle of security. They have a database that is the core engine of their product. Then they have a solution for different sections or types of users. (Field Notes, 9/2013)

The wordsmithing thus evolved with the probing, and the underlying concepts of the company subtly shifted. Another example of wordsmithing occurred as they emphasized their creation of security policies, not scheduling or deployment. Byron emphasized after an investor meeting, “we come up with security policy, not fixed checkpoint recommendations” (Field Notes, 4/2013). Emphasizing the notion of security policy enabled Algo-Security to differentiate their services from those of other competitors.

Basically Byron highlights the fact that what we do is focus on plans and procedures and policies, whereas other people only monitor. We provide automated decisions; they provide information. (Field Notes, 9/2013)

This emphasis continued even into a much later version of the website. In February 2014, Byron emphasized again that Algo-Security always needed “to avoid words like ‘deployment’ or ‘scheduling’ – they’re too narrow” (Field Notes, 2/2014).

These types of subtle wordsmithing changes prefigured a broader shift that resulted in the company subordinating automation to analytics. As a result of the probing feedback, they shifted their description subtly by June 2013, saying that “Algo-Security does risk analytics and resource optimization, with first a risk score being calculated, and then an algorithm optimizes resource allocation” (Field Notes, 6/2013).
This changing emphasis continued as the organization worked on putting a “one-pager” together for venture capitalists.

Byron provides some feedback on the one-pager. He says that fundamentally we need to identify a solution to address. We play security games. We use all available information to put into our algorithms. Basically the types of terms he thinks we need to emphasize include “games,” “complexity,” “cyber-physical security,” and “visualization.” (Field Notes, 10/2013)

This wordsmithing continued, shifting to the following description of “what we do” by March 2014:

We augment an organization’s ability to make sense of dynamic, complex environments and to optimize the use of their resources. We exploit machine learning and patented game-theoretic algorithms to convert data into intelligence-driven strategies. Our software enhances an organization’s ability to focus their energy on understanding their competition and adversaries rather than gathering and analyzing information. (Algo-Security Website, 3/2014)

The emphasis on game theory is significantly diminished; the organization promotes agency and reflection and rhetorically subordinates both game theory and the automation of information.

Modifying Core Features. In addition to incorporating probes into the rhetoric describing the organization, the probes also inspired modifications to core features. An important example of such a modification is that Algo-Security incorporated machine learning into their product. Early versions of the prototype simply sought to represent the environment of the security organization in the mathematical matrix. Algo-Security responded to questions about machine learning from cyber-security-interested venture capitalists in a way that shifted the foundational design of their strategy tool. This shift was documented in e-mail correspondence with an investor:

Machine learning and game theory are complementary techniques.
Machine learning focuses on prediction, based on known properties learned from the available data. Examples of data collected in the case of security domains would be number of passengers at an airport terminal, or amount of Internet traffic sent through a network link. Machine learning on security data allows us to estimate values for "threats", "vulnerabilities" and "consequences" in case of an attack. However, this knowledge, while useful, does not provide us with optimal strategies for the defender to allocate his or her resources. This is where game theoretic optimization is required. Algo-Security uses machine learning to build models for the real-world domain, in what we call a "game matrix" representation. Game theoretic algorithms then accept this game matrix as input and compute optimal strategies for intelligent deployment of security -- a deployment that maximizes the effectiveness of the limited defender's resources taking into account all the threats, vulnerabilities and consequences, as well as the intelligent behavior of potential adversaries. (e-mail correspondence with investor, 9/2013) Machine learning, rather than being written off as a mistake by someone technologically uninformed, was soon incorporated into the strategy tool for cyber-security. So there would be two different modules: (1) Machine learning module, which would take information about threats and the networks, and (2) Optimization module, which would provide scanning recommendations. (Field Notes, 2013) This application of machine learning to cyber-security became part of the broader business model, as evidenced by the comment in the What We Do section of the website that “we exploit machine learning and game theoretic algorithms” (Algo-Security Website, 3/2014). I observed a similar adjustment to modify the core features of the strategy tool as Algo-Security began to incorporate simplified visuals into their offering.
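The division of labor described in the correspondence above — machine learning estimating target values, and game-theoretic optimization converting them into a randomized defense — can be sketched under simplifying assumptions. The sketch uses a stripped-down zero-sum security game (the attacker strikes the target with the highest expected payoff; the defender spreads fractional coverage to minimize that payoff) rather than Algo-Security's patented algorithms; all data and names are invented.

```python
# Hypothetical two-module pipeline: (1) a "learning" step that estimates target
# values from observed data, and (2) a game-theoretic step that turns those
# values into optimal randomized coverage. A simplified zero-sum model, not
# Algo-Security's patented algorithms.

def estimate_target_values(observations):
    """Stand-in for the machine learning module: average observed consequence scores."""
    return {t: sum(xs) / len(xs) for t, xs in observations.items()}

def optimal_coverage(values, resources):
    """Optimization module: binary-search the attacker's best payoff u, where
    covering target t with probability c_t = max(0, 1 - u / v_t) caps the
    attacker's expected payoff v_t * (1 - c_t) at u."""
    lo, hi = 0.0, max(values.values())
    for _ in range(100):
        u = (lo + hi) / 2
        needed = sum(max(0.0, 1 - u / v) for v in values.values())
        if needed > resources:
            lo = u  # not enough resources to hold the attacker down to payoff u
        else:
            hi = u
    return {t: max(0.0, 1 - hi / v) for t, v in values.items()}

# Invented "game matrix" inputs: observed consequence scores per checkpoint.
observations = {"terminal": [8, 10, 12], "garage": [4, 6, 5], "perimeter": [1, 2, 3]}
values = estimate_target_values(observations)       # e.g. terminal averages to 10.0
coverage = optimal_coverage(values, resources=1.5)  # fractional patrol coverage
```

In a deployed product the learning step would be an actual statistical model and the game a richer general-sum Stackelberg game; the point of the sketch is only the handoff the e-mail describes, from estimated threat values to an optimal randomized strategy.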
After several investor and customer meetings, Joseph suggested that Algo-Security incorporate a “heat-map” into their software. Imagine there are different types of risk…Imagine a tool that constructs each adversary type. Based on adversary types, here’s what your game-theory heat map of risk looks like. We can show them a map of what the new heat map is based on our patrol schedule, it’s a patrol and risk heat map. (Field Notes, 6/2013) This type of thinking was reinforced by resistance Algo-Security felt from their customers. Byron, for example, felt like “the bottom is bought in” but that the “top is not bought in”; his response was that “we have to emphasize that we save money, overtime, etc.” and that “the way to do this is through visualization” (Field Notes, 7/2013). In a subsequent meeting, Byron really emphasized this point. Byron says that we need to sell the top and the bottom. We need to do visualization. When we show our visualizations, they start leaning forward. When we talk about game theory, they get a blank stare [Byron uses his body to illustrate these points]. Patrick agrees, and observes that at their first prototype they needed the visualization to really buy into what Gaming Expert was doing. (Field Notes, 7/2013) Even Dr. Fisher, the academic expert in game theory who pioneered the Algo-Security technology, agreed. Visualization is critical for all people. They’re not game theorists, they need to visualize the results. So that game theory and non-game theory results can be compared side by side. (Field Notes, 8/2013) Byron summed it up by arguing that “game theory is the core,” and that with that “strong nucleus,” “visualization sells” (Field Notes, 8/2013). Another illustration of how the probing ended up modifying core features is the way in which Algo-Security linked the product to measures of effectiveness.
Initially, during prototyping, the emphasis in measuring effectiveness was on automating decision-making to achieve theoretical optimality. During the prototype, for example, Gaming Expert personnel spent a significant amount of time engaging in cost-benefit analysis, where they estimated “defender and adversary costs and benefits” and then minimized “expected defender utility” (Field Notes, 4/2013). Success became measured, however, in basic terms like coverage (i.e., how many times a security patrol viewed the object they were tasked with defending). Gaming Expert and Algo-Security periodically went back to defender expected utility, but more frequently they sought to provide broader evaluations of the success of the algorithm. As a result of this difficulty, they communicated the effectiveness of the algorithm through a graph (see Figure 3), showing the number of resources used in traditional scheduling as compared to Algo-Security scheduling. Figure 3 – Strategy Comparison for Effectiveness (effectiveness plotted against number of resources, standard scheduling versus Algo-Security scheduling) This discussion ended up shifting the emphasis away from the idea of “automation” and “optimality” to the way that they could empower security organizations through the strategy tool. The issue of multiple goals created an opportunity for a modified feature set. There is then a discussion about why changing these goals is somewhat problematic…when you change the goals, you lose theoretical optimality. Joseph thinks we should not sell the optimality, instead we should sell the landscape. The theory of it being that if it doesn’t work you switch the levers one at a time. But the idea here is that this way of visualizing risk is so much better than the current decision-makers operate, it is a massive step forward. (Field Notes, 1/2014) The discussion of the end goals thus moved away from the representational way that Gaming Expert used to approach the prototype. Summary.
Pinging thus functions as a concept in which the professional organization shifts away from the abstract, academic knowledge they used in the prototype. Whereas the prototype simply attempts to express the model in a quantification language such as game theory, in pinging the quantification process changes considerably. The professional probes the environment by deploying rapid, pulse-like messages designed to get a reaction from their environment. They complement these pulses with internal discussions and arguments about the market. As they absorb and incorporate this information about their environment, they change their offerings: they wordsmith their messaging about who they are and what they do, and they modify the core features of the strategy tool itself. Theoretically, this concept is important because it reflects a different use of quantification than either the rationalization or the sociology of quantification perspective predicts. Specifically, the rationalization perspective would conceptualize this phase of development as being primarily focused on improving the correspondence between the quantification model and the environments of client organizations. Conversely, the sociology of quantification perspective would conceptualize this phase of development as being primarily focused on improving the ways in which the strategy tool could model environmental complexities and thereby facilitate political discourse. What pinging shows, however, is that the professional organization uses their expertise in quantification as a springboard to identify and address fundamental pain points in different markets. In other words, in pinging organizations use their expertise in quantification to facilitate the discovery of new and fresh ways of solving perennial industry problems. Contextualizing In contextualizing, the professional organization works to deploy a formal strategy tool in a particular client environment.
Contextualizing is distinct from prototyping. In prototyping the professional tests their ability to use expertise in quantification on a particular problem; in contextualizing the professional seeks to deploy a strategy tool within a particular environment. The rationalization perspective, for example, would suggest that deploying the strategy tool would be a matter of improving the correspondence between the real world and the mathematical game theoretic model. The sociology of quantification perspective would suggest that this phase would reflect political conflict in the organization. In my findings, however, I show that the professional works on embedding the tool in the organization by incorporating ancillary features to build support from front-line employees and by developing relationships through multiple layers of the organizational structure. Although this might be perceived as political work, the effect aligned more with the concept of engaging the entire organization to leverage the tool to solve a variety of organizational problems. Additionally, during the contextualization process the professional organization simplifies quantification for the end user by visualizing model inputs and outputs, creating simple reports to compare new versus old decision-making practices, and reporting simplified performance metrics. During the process of contextualizing, they also seek to maintain portability by building an “engine” in a way that can apply to different “verticals” or markets. They also maintain portability by using examples from other markets or verticals to increase their overall legitimacy. Taken as a whole, in contextualizing the professional develops a strategy tool that no longer represents a direct quantification of decision-making processes, but rather reflects a process in which quantification becomes subordinated to and part of the client organizational culture. Embedding the Tool.
In order for client organizations to use the strategy tool, Algo-Security needed to embed the tool in the Western Airport environment. Once Western Airport had committed to move forward with the project, Algo-Security found that their previous prototype had not been in use for a long time. The Chief said that the reason they’re not using Algo-Security’s prototype now is because in the field they combine too many different patrol activities together. The algorithm is too narrow. Clifton said that this is where it got good for Algo-Security – they asked him what data he would need to make the algorithm apply to broader patrolling decisions. They’ll have a follow-up meeting with the Assistant Chief to flesh this out in detail. (Field Notes, 2/2014) In other words, the game theoretic model developed in the prototype effectively addressed a targeted problem, but it failed to integrate into the broader activities of the organization. As Clifton commented right after the final deployment, “the first version was not usable because of our limited understanding of the domain” (Field Notes, 6/2014). This limited understanding did not relate to the particulars of their algorithm, but rather to the way in which the practices in the organization were significantly more complex than the early theoretical models. The design of the product, as agreed to in the early meetings, differed from the prior version of the model. Specifically, Algo-Security agreed to design two products for Western Airport. We will be making 2 systems for them – system one that does checkpoints and foot patrols and system two that does scheduling for canines, code alphas, and perimeter patrols. We have identified individuals in the Western Airport police to go and talk about each of these systems, as well as about the entry of data in the system.
(Field Notes, 3/2014) The development of two systems was required to fit the quantified decision-making model of the algorithm into the Western Airport security practices. The product, as ultimately deployed in May 2014, contained a significant number of features that did not exist in the prototype. The prototype, for example, consisted of a simple schedule generator that assigned resources to particular checkpoints. The final strategy tool, however, contained a significant amount of additional information. The home screen, for example, featured a visual map of the facility highlighting the eight checkpoints graphically. Users could enable or disable each of the checkpoints, and assign each checkpoint a “priority” to reflect real-time user intelligence about the relative importance of each checkpoint. The second screen showed users the estimated traffic flow. The third and fourth screens, a vehicle management module and an assignment roster, allowed users to manage a list of vehicles and personnel available for a particular day. Interestingly, the executive leaders of Western Airport explicitly said they did not need this functionality; Algo-Security project manager Clifton decided to incorporate the functionality to make it “sticky” with the people using the product. Clifton says that the Sergeant is more interested in resource management than the chief had been. He wants resource management with respect to personnel and car hours. Simple things like this seem to be important to them. Byron says that simple things make the product a “must have.” We need to add these, because if our product can be deployed here, it can be deployed in other similar locations. (Field Notes, 3/2014) The fifth screen provided functionality for schedule generation.
This replicated earlier versions of the prototype, but included enhancements that enabled users to generate schedules for multiple days at a time, export schedules to Excel files, match color-coding systems used elsewhere in the organization, or regenerate schedules if “they don’t like the schedule” (Field Notes, 6/2014). The final screens provided administrative ability to manage officer rosters. Algo-Security anticipated that forthcoming versions would include analytics and reporting modules. In the contextualizing process, the quantification methodology becomes less and less central to the overall process. By enabling the user to regenerate schedules they dislike, for example, the possibility of bias is introduced and the very notion of randomness inherent in the original game theoretic model is somewhat challenged. The tool embodies the shifts toward analytics and away from automation that occurred during the previous pinging process. Algo-Security also embedded the tool by developing relationships throughout the organization. Doing this helped the project implementation personnel develop a broader perspective on the process. Specifically, for example, during implementation they spent time with executive leaders (i.e., the Chief, who started to “email Clifton on Sundays”); front-line supervisors (i.e., the sergeants who controlled the scheduling and the personnel); and the analytics staff (i.e., civilian support staff who analyzed data in an attempt to detect patterns in traffic flow and/or criminal activity). By interacting with these different sections of the organization, the professional implementation team had to design the tool to respond to requests from these different interests. The tool thus became a boundary object helping different segments of the organization to communicate.
Clifton says that the chief views Algo-Security as a tool that can “bridge” two different factions within the department – the analytic quants who run predictive statistical analysis (who think their work is never used and discarded by the people in the field) and the operations people, who think that the work of the quants is not particularly helpful and has already been captured and utilized in their intuitive decision-making processes. (Field Notes, 4/2014) Importantly, these communication processes happen over and around the game theoretic expertise; embedding the tool into the culture had a more pragmatic flavor, as the idea of the algorithm served as a mechanism to bridge different political factions within the organization. Simplifying Quantification. To contextualize the tool, they also had to simplify quantification. This first occurred through the visualization of model inputs and outputs. Some of this occurred through simple aesthetic design: presenting checkpoints on a facility map instead of a simple spreadsheet. Generally, however, the Algo-Security project implementation staff converted the numbers of the algorithmic calculation into graphical format to communicate with the Western Airport field personnel. Corey had foreseen the need for this after a feedback meeting during the initial prototype for the rail project. He commented: Corey says that we need to develop two new components to the algorithm, a user information tool and data analytics, where we show statistics, pie charts, and automated analyses. He also says that an important issue they will have to deal with is people who question why the algorithm makes certain recommendations. For example, they say, why is this patrol going here? Why is that patrol going there? He thinks that an expert system can provide justifications for why the system makes certain recommendations, and that this will be important.
(Field Notes, 4/2013) The timing of this comment is important, because it shows that during the contextualizing process the professional organization will naturally need to provide visualization to simplify the complexity of the model. This was also important for the Western Airport implementation. Joseph described this: The key for this project is visualization. All they have right now are tables, and they are so hard to follow – even for the officers making the tables. We can make an easy system that can help them discover data and trends. (Field Notes, 3/2014) Note that these visualizations did not necessarily refer to the algorithm itself; they simply wanted to augment the existing reporting capabilities for Western Airport so that they could enhance the decision-making ability of the security professionals. Another way that they had to simplify quantification was to create simple reports to compare new and old decision-making practices. One example of this is a visual heat map. In an analytical report, Clifton developed a numerical measure of risk effectiveness. He then superimposed contrasting three-dimensional bars on top of the facility map in order to compare a manually generated schedule with an Algo-Security algorithmically generated schedule. By providing a simple comparison, Algo-Security simplified the quantification process and made it accessible to a broader swathe of personnel in the Western Airport organization. Finally, they began to report simplified performance metrics. For example, they tracked how many people saw police officers. This metric was not optimized by the algorithm, which weighted the importance of targets independently of traffic flow, but it served as a simple measure by which security personnel could evaluate the effectiveness of the algorithmic scheduling. Maintaining Portability.
Finally, although Algo-Security worked on embedding the tool in the client organization and simplifying quantification of the algorithm, they also sought to maintain portability. Specifically, they began to conceptualize the requests made by Western Airport as being generalizable. In a meeting following the commitment by Western Airport, Byron and Clifton commented “that what they are asking for actually makes the algorithm more generalizable” (Field Notes, 2/2014). At the same time, they sought to avoid “feature creep” and “not take too much time to get the first version out” in order to make public announcements as quickly as possible (Field Notes, 3/2014). They talked, for example, about the applicability of the Western Airport software to a potential nuclear power client. Byron says that what we will do for Western Airport will probably apply to the nuclear client. He thinks that most of the work for the prospective client could take place in six months – or that Algo-Security should not let it take longer than that. There is some discussion about how the nuclear client needs differ from Western Airport’s needs. Basically the conclusion is that these differences are minimal – the nuclear client wants handheld functionality, and they need to use different data. (Field Notes, 3/2014) Similarly, Algo-Security also thought that their campus security prospect “just needs to deploy this existing product” (Field Notes, 3/2014). Ultimately, this portability was central to Algo-Security’s future, as Byron argued that “we have to show that Algo-Security has other applications, or our growth is limited” (Field Notes, 3/2014). They also maintained portability by drawing on the experiences in one domain to build support in another domain. For example, during a tour of a central rail facility, the Lieutenant in
For example, during a tour of a central rail facility, the Lieutenant in 104 charge of rail security advocated for the benefits of the Algo-Security product (based only on a prototype) to a leader in a waterways organization (Field Notes, 7/2014). Summary. In summary, Algo-Security contextualized the strategy tool to a particular environment by embedding the tool in the broader practices of Western Airport, simplifying the quantification process (to the point of hiding the math for the broader organization), all the while maintaining portability of the strategy tool to other domains. This concept also deviates from how the rationalization perspective and the sociology of quantification perspective view the embedding of quantification in strategy tools. In the rationalization perspective, the implementation process would be about improving the way in which the quantification tool modeled the problem. In my findings, however, the quantification tool is used more broadly as a spur to facilitate the improvement of a broader set of organizational activities. In the sociology of quantification perspective, the implementation would reflect political dynamics between different groups of the organization. Instead, I find that management use the quantification technology as a boundary object to align different political interests within the company around a common objective. A Theoretical Model of How Professionals Inscribe Quantification Expertise into Strategy Tools In order to inscribe expertise in quantification in strategy tools, professional organizations engage in the processes of prototyping, pinging, and contextualizing. I visually depict the dynamic interaction between these concepts in Figure 4 below. 
Figure 4 – Theoretical Model: How Professionals Construct Strategy Tools to Quantify Decision-Making Specifically, the first step to inscribe expertise in quantification is to develop a prototype. In prototyping, the professional organization has to take a fundamental academic insight and apply it to a problem in a particular domain. The fundamental academic insight comes from mathematical knowledge grounded in an abstract knowledge system such as statistics, economics, or mathematics. Once the theoretical mathematical way of approaching a problem has been deductively identified, the expert must develop a quantified decision-model. This model then has to be tested and deployed in a limited environment, validating the general applicability of the expertise in quantification to a particular domain. Subsequent to prototyping, the professional organization engages in pinging. Pinging consists of two general activities: the professional organization obtains information about its environment via probing and evaluating, and the professional organization responds to this information by wordsmithing and modifying core features of their strategy tool. These paired pinging activities occur in a rapid, iterative manner. The professional organization seeks to understand their environment; they rapidly adapt to the information they receive. In pinging, there is an interesting interplay between the ways in which the organization understands their environment. They seek to understand both by sending out probes, but also by internally discussing and evaluating different markets. These mechanisms combine in an intriguing way to provide the organization with a continually updating understanding of their dynamic environment.
Similarly, the professional organization adapts to their environment both by changing their messaging and by changing their understanding of their core product features. During this process, they shift their emphasis away from the technology of quantification and move it towards addressing the significant, perennial problems that the industry faces. Finally, the professional organization engages in contextualizing their strategy tool to a particular organization. In this process, the organization simultaneously works on embedding the tool in the client environment and simplifying quantification so that a broad audience inside the client organization can understand the strategy tool. Concurrently, the professional organization seeks to maintain the portability of their tool, ensuring that the foundational engine powering their quantification maintains applicability to other potential target markets. Discussion Existing research presents something of a paradox related to two different perspectives on quantification. In one perspective, the rationalization perspective, professional organizations provide material affordances to client organizations by using quantification to improve existing decision-making processes. According to this perspective, organizational managers or leaders use quantification as a tool to improve the efficiency of a process. According to another perspective, however, quantification as a topic is politically charged, and the professional organization provides conceptual affordances to client organizations as quantification provides a way in which the organization can commensurate conflicting or divergent interests. My findings challenge both of these accounts. Although prototyping reflects processes we might expect based on the theory undergirding the rationalization perspective, pinging and contextualizing represent surprising ways in which professional organizations use quantification.
Specifically, during these processes the actors in the professional organization shift their attention and energy away from the correspondence between their expertise in quantification and the environment of their client, and towards understanding significant, perennial problems faced by many different types of client organizations. Essentially, I find that the professional organization uses quantification, as instantiated in the strategy tool, as a device to help them develop an outside perspective aimed at fundamental, significant issues often addressed only indirectly by their underlying expertise. CHAPTER 4 – THE USE OF STATISTICAL EXPERTISE TO AUTOMATE DECISION-MAKING ROUTINES Introduction “Any sufficiently advanced technology is indistinguishable from magic.” – Arthur C. Clarke Organizations increasingly use complex algorithms to automate and optimize routines (McAfee & Brynjolfsson, 2012; Steiner, 2012). This trend extends efforts by management scholars and practitioners to create more efficient organizational routines (Taylor, 1911; Deming, 1982; Zuboff, 1988; Haeckel & Nolan, 1993; Davenport & Harris, 2007). Recent technological advances contributing to the phenomenon known as “Big Data” challenge organizations: to utilize massive quantities of data and analyze complicated problems, organizations must rely on sophisticated algorithms that require specialized expertise in “data science” (Miller, 2013; Nisbet et al., 2009). Organizations draw on this highly specialized expertise to apply complex statistical algorithms to change or replace central decision-making components of existing routines effectively (Davenport et al., 2010). Often, however, this specialized expertise lies outside existing organizational resources and makes the mechanics of the actual decision-making process largely inaccessible to organizational actors.
Understanding the processes by which organizations integrate this new type of expertise and algorithmic decision-making into their routines has the potential to uncover important insights for strategy and organizational scholars. An algorithm consists of a set of rules for solving problems, which organizations can use to automate routines (Haeckel & Nolan, 1993). Organizations use such routines to coordinate their activities (March & Simon, 1958; Nelson & Winter, 1982). Management scholars suggest that standardizing and automating routines improves efficiency and organizational performance (Haeckel & Nolan, 1993; Davenport & Harris, 2007). When organizations use algorithms to automate routines, they seek to improve efficiency by reducing discrepancies between structural or ostensive understandings of the routine and particular performances of the routine (D’Adderio, 2008; Feldman & Pentland, 2003). At the same time, recent studies of routines suggest that such efforts to control routines produce problematic side effects, as individuals enacting routines frequently tailor their performance of a routine to fit contextual circumstances by drawing on tacit knowledge learned from personal experience (Pentland & Rueter, 1994; Feldman, 2000; Howard-Grenville, 2005; Pentland & Feldman, 2008). This research implies that organizations face two considerable challenges in using algorithms to automate routines (Pentland & Feldman, 2008; D’Adderio, 2011). First, when organizations use complex statistical algorithms to automate a decision-making process, the algorithm performs a central part of a decision-making routine in a way that domain experts struggle to understand. Specifically, domain experts often make decisions based on heuristics learned over time (Bingham & Eisenhardt, 2011), but complex algorithms make decisions through sophisticated mathematical calculations (Nisbet et al., 2009).
This creates a puzzle of theoretical and practical importance: how do domain experts integrate the mathematical expertise embedded in an algorithm into their routine? Second, by embedding a decision-making process into a routine, the algorithm undermines the ability of organizational actors to draw on their tacit knowledge and adjust their performance of the routine to match contextual circumstances (D’Adderio, 2008). Consequently, when organizational actors attempt to design a routine with a durable algorithm, unintended consequences may emerge during implementation (Pentland & Feldman, 2008). This creates another puzzle of theoretical and practical importance: how can organizations use algorithms to streamline routine performances without undermining the benefits derived from individuals flexibly performing the routines? I seek to provide insight into these puzzles by examining the following research question: how do organizations use algorithms to automate routines? Following other studies in the routines literature (Pentland & Rueter, 1994; Adler, Goldoftas, & Levine, 1999; Feldman, 2000; Howard-Grenville, 2005; Rerup & Feldman, 2011), I employ a single in-depth case study to observe the detailed dynamics associated with an organization using a complex algorithm to automate a routine. I use participant observation to study a project implementation in which a law enforcement organization (Metropol, a pseudonym) and an expert service provider (Gaming Expert, a pseudonym) customized a complex game-theoretic algorithm to automate their decision-making routine used to schedule and deploy security resources to prevent or limit activities such as crime and terrorism. I find that the process by which Metropol and Gaming Expert negotiated the automation of the routine involved the bridging of two distinct approaches to decision-making: a grounded, embedded, practical approach and an abstract, disembedded, mathematical approach.
Organizational actors engaged in three bridging practices to reconcile the discrepancies between these two divergent approaches. First, organizational actors reconciled different conceptions of their theoretical understanding of the scheduling routine by modeling and mapping the law enforcement organization's environment into the game-theoretic algorithm. Second, organizational actors reconciled different conceptions of the manner in which individuals performed the routine by establishing algorithmic jurisdiction to define how much the algorithm would control the routine. Finally, organizational actors constructed validation to reconcile distinct conceptions of success. During this process, I find that domain experts inscribe their expertise into the algorithm in a process that magnifies the importance of their professional expertise by focusing it inside a software artifact. Additionally, as they engaged in these bridging practices, organizational actors used the algorithm as a device to focus human agency and to rethink the processes underlying the broader scheduling routine.

This study makes two central contributions to management research. First, prior literature suggests that organizational routines resolve political conflicts that arise from competing forms of expertise through truces (Nelson & Winter, 1982; Zbaracki & Bergen, 2010). This literature suggests that algorithms are designed to make decisions that reflect politically negotiated compromises. I show, however, that the extreme gap between the algorithmic and organizational approaches to the routine compels organizational actors to engage in bridging practices. Consequently, domain experts inscribe their grounded, embedded, and pragmatic expertise into an artifact that instantiates abstract, disembedded, mathematical expertise.
This inscription does not rationalize organizational expertise by transferring political control away from domain experts; rather, it magnifies the complexity of domain expertise and "enchants" the algorithm. Second, prior literature suggests that artifacts such as algorithms that automate decision-making processes possess a durable nature that inherently threatens the ability of individuals to perform routines flexibly (Pentland & Feldman, 2008; D'Adderio, 2011; Cacciatori, 2012; Turner & Rindova, 2012; Howard-Grenville, 2005). I show, however, that the integration of an algorithm into an organizational routine requires the actors designing the algorithmic decision-making process to visualize future possibilities (Emirbayer & Mische, 1998). Consequently, rather than constraining future performances of the routine, the design of an algorithm shifts the locus of agency for future performances of the routine.

Theoretical Background

At the most fundamental level, organizations enable groups of people to work together to achieve common goals and objectives (March & Simon, 1958). Routines provide individuals with motivational goals that adjudicate between different functional interests within an organization (Nelson & Winter, 1982; Gibbons, 2006; Zbaracki & Bergen, 2010). Inside organizations, routines also coordinate individual behavior by providing members with a cognitive understanding of appropriate behavior (Cohen & Bacdayan, 1994). Routines are a particularly important phenomenon for strategy and organizational scholars to understand because they serve as the micro-foundational building blocks of capabilities (Dosi, Nelson, & Winter, 2000; Salvato, 2009; Parmigiani & Howard-Grenville, 2011; Felin, Foss, Heimeriks, & Madsen, 2012).
Although some depictions of routines emphasize their stable, gene-like properties (Nelson & Winter, 1982), other accounts portray routines as generative processes (Pentland & Rueter, 1994; Adler et al., 1999; Feldman, 2000; Feldman & Pentland, 2003). Specifically, scholars suggest that routines possess a dual nature: an ostensive or structural aspect that reflects the general, abstract idea of a routine, and a performative aspect that refers to the particular performances of a routine (Feldman & Pentland, 2003). This dual nature of routines has enabled scholars to overcome several theoretical challenges, such as explaining how routines change (Feldman, 2000) and explaining how routines exist with flexible and stable properties concurrently (Howard-Grenville, 2005; Turner & Rindova, 2012). Recent research has also described how some objects ("artifacts") provide organizational actors with an abstract, ostensive understanding of how actors should perform a routine while other objects provide organizational actors with a means to track details of routine performances (D'Adderio, 2008, 2011). Taken as a whole, this research highlights both the centrality of individual improvisation or "effortful accomplishments" in the performance of routines (Pentland & Rueter, 1994; Feldman, 2000; Howard-Grenville, 2005) and the importance of understanding the interaction between the different aspects of routines (Feldman & Pentland, 2003; D'Adderio, 2011; Salvato & Rerup, 2011).

When an organization uses a complex algorithm to automate a decision-making component of a routine, it embeds both ostensive and performative aspects of the routine into an object or artifact (D'Adderio, 2008; Cacciatori, 2012). For example, in the trucking industry, companies schedule customer orders and dispatch trucks to deliver products to customers.
Individuals enact this scheduling routine by using a heuristic-based decision-making process to dispatch particular trucks to particular jobs, where the abstract, generalized heuristic functions as a part of the ostensive aspect of the routine and a particular dispatching decision about where to send a truck functions as a part of the performative aspect of the routine. Alternatively, organizations can use a mathematical algorithm to enact this scheduling and dispatching routine. An algorithm thus functions as an object or artifact that provides an automated alternative to a human decision-making heuristic. The algorithm enables more powerful organizational actors to design and control the decision-making routine, narrowing the discrepancy between the ostensive and performative aspects of the routine throughout the organization (D'Adderio, 2008) and providing these actors with a means to instantiate principles from an espoused organizational schema inside routine performances (Rerup & Feldman, 2011).

Organizations now use many different types of complex algorithms in their decision-making processes (Nisbet et al., 2009). These algorithms operate according to different problem-solving principles, often grouped into non-exclusive categories such as backtracking, divide and conquer, dynamic programming, optimization or greedy, branch and bound, brute force, and randomized algorithms. Deploying algorithms embeds a high degree of mathematical expertise that reflects an abstract, formalized body of knowledge that differs significantly from the traditional forms of expertise used within an organization (Cacciatori, 2012). Using complex algorithms presents organizations with two challenges that scholars have yet to explain. First, organizations need to draw on mathematical sources of expertise to utilize complex statistical algorithms to automate routines. Traditionally, domain experts make decisions using heuristics (Bingham & Eisenhardt, 2011).
Using a complex algorithm, however, replaces these heuristic judgments with a statistical model by embedding a calculative process rooted in a formal knowledge system into an artifact (Cacciatori, 2012). Often, these statistical models require specialized training and expertise to understand and interpret. Consequently, when organizations use complex algorithms, they transfer some decision-making power and responsibility from domain experts to mathematical experts. This transfer of expertise aligns with the findings of prior technology studies that suggest that the implementation of technology leads to changes in role relations (Barley, 1986; Zuboff, 1988). These changes in role relations lead to the increasing value of intellective skills as organizations implement new technology (Zuboff, 1988). The implementation of complex statistical algorithms serves as a somewhat different case compared with prior accounts of technological development, however. In previous cases, technology replaced the physical functions performed via organizational roles (e.g., new medical equipment or plant automation equipment performed activities previously done by people) (Barley, 1986; Zuboff, 1988). With statistical algorithms, however, technology replaces decision-making processes and the responsibility for judgment moves from domain experts to abstract mathematical formulas that lie outside traditional organizational roles and functions. This process reflects a challenge to traditional sources of professional expertise, as experts from a different profession such as mathematics or statistics seek to expand the scope of their professional tools (Abbott, 1988). Different experts resolve disputes over jurisdiction through compromises in routines that reflect more or less temporary truces (Nelson & Winter, 1982; Zbaracki & Bergen, 2010).
Theory that might explain how domain experts integrate the mathematical expertise embedded in the algorithm, then, offers us the ability to understand both how technology that changes judgment and decision-making processes becomes instantiated in organizational routines (D'Adderio, 2011; Leonardi & Barley, 2010) and how experts with conflicting sources of expertise resolve that conflict (Zbaracki & Bergen, 2010).

Second, embedding heuristic decision-making processes into algorithms transfers control of the performance of a central component of a routine from individuals in the organization to an algorithm embedded in software, creating a tight link between the abstract, ostensive understanding of the decision-making routine and the particular performance of the routine (D'Adderio, 2008). When organizations embed algorithms into software to automate decision-making, the durability of this algorithmic decision-making process increases since the algorithm becomes embedded in a thick web of organizational relationships and becomes part of the habitual background within which individuals enact routines (D'Adderio, 2011). As the algorithm controls the performance of the routine, individuals in the organization lose the flexibility to modify the performance of the routine to adjust to contextual cues (D'Adderio, 2008, 2011). This loss of control may hamper the quality and effectiveness of the routine performance, as individual tacit knowledge often plays an integral role in the enactment of routines (Feldman, 2000; Howard-Grenville, 2005; Pentland & Rueter, 1994). Although artifacts help organizations balance the need for consistency with the need to adapt to change, connections are still needed to overcome challenges to stability and to maintain a consistent routine in the face of change (Turner & Rindova, 2012).
Scholars have a limited theoretical understanding of the processes by which organizations negotiate the inherent tension between the pursuit of efficiency in algorithmic automation and the maintenance of flexible human performances, as organizational design efforts frequently result in an array of unintended consequences that diverge from the routine designer's intent (Pentland & Feldman, 2008). Scholars suggest that the design process significantly influences the long-term performance of the routine, where routines can be "live" and inspire generative, responsive patterns of action or be "dead" and lead to mindless conformity or disuse (Levinthal & Rerup, 2006; Cohen, 2007; Pentland & Feldman, 2008). Explaining how organizations attempt to use algorithms to streamline routine performances without undermining the benefits derived from individuals flexibly performing a routine thus offers promise to address this theoretical puzzle (Salvato & Rerup, 2011).

To address these two theoretical challenges, I conduct an inductive study of how organizations use algorithms to automate routines. I respond to calls to better understand the role of artifacts and design in routines (Pentland & Feldman, 2008; D'Adderio, 2008, 2011; Cacciatori, 2012), and more broadly to calls asking scholars to investigate how technology influences social relations (Leonardi & Barley, 2010). In doing so, I also provide a micro-level perspective on how organizations develop routines and capabilities (Parmigiani & Howard-Grenville, 2011; Felin et al., 2012; Eggers & Kaplan, 2013).

Methods

Since I sought to develop new theory about how organizations use algorithms to automate decision-making routines, I draw on an in-depth case study from a single organizational setting (Pentland & Rueter, 1994; Adler et al., 1999; Feldman, 2000; Howard-Grenville, 2005; Rerup & Feldman, 2011) using an inductive, grounded theory approach (Gioia et al., 2012; B. Glaser & Strauss, 1967).
For my field site, I observed a security organization (Metropol) and an expert service provider (Gaming Expert) using an algorithm to automate decisions at the center of a frequent and significant routine: the scheduling of patrol officers. I now elaborate on my research design by describing the empirical context of the security industry, my organizational field sites, data collection, and data analysis.

Empirical Context

I use the empirical context of the security industry: public law enforcement organizations and private security companies that provide security services that protect citizens. This industry provides an appropriate setting in which to develop novel theory about how organizations use algorithms to automate routines for several reasons. First, security organizations traditionally allocate and deploy security resources with routines that utilize human decision-makers on a daily basis. Second, due to technological advances related to "Big Data," security organizations increasingly rely on large quantities of data from crime databases, video recordings, and sensor data to inform the decisions that shape the performance of the routines allocating security resources. Finally, the security industry serves as an attractive context due to its substantive importance: in the United States, for example, public spending on activities related to law enforcement reached almost $300 billion in 2013 (Chantrill, 2013) and private security industry spending was projected to grow to $45 billion in 2013 (IBISWorld, 2012; revenue numbers include NAICS codes 56161, security services, and 56162, security alarm services).

Metropol. Within the security industry, I study Metropol (a pseudonym), a law enforcement organization located in a large metropolitan area. Metropol employed sworn officers and professional staff members and operated with a very large annual budget. I observed their department responsible for protecting transit operations such as bus and rail. Structurally,
Metropol personnel performed executive management functions (e.g., a chief or commander), senior management functions (e.g., a lieutenant), line supervision functions (e.g., a sergeant or a team leader), and various front-line functions (e.g., sworn deputies and non-sworn security assistants). Metropol used a decision-making routine to schedule and deploy patrol officers in order to protect transit operations; during the time of my study, they worked with Gaming Expert, a university-based research organization, to automate their scheduling decision-making routine with a game-theoretic algorithm.

Gaming Expert. Gaming Expert, a research organization based in a large metropolitan university, specialized in conducting academic research related to the security industry. Founded approximately ten years ago, the organization developed research applicable to real-world problems in the security industry. Structurally, Gaming Expert consisted of a director, business development staff members, faculty, administrative support, and various post-doctoral researchers and graduate students. During my project, I worked with a Gaming Expert department that used game-theoretic algorithms to automate scheduling decisions. Gaming Expert's technology could apply to broader components of the routine for scheduling and deploying patrol officers. Prior to the time of my study, the group had deployed their game-theoretic algorithms within several other public law enforcement agencies.

Data Collection

I spent seven months (from February 2013 to September 2013) conducting participant observation of a project in which Gaming Expert professionals worked with Metropol staff to design and develop an algorithm to automate routines for scheduling patrol officers. During data collection, I paid attention to methodological recommendations about how to study routines.
Specifically, to study the processes organizations use to design algorithms to automate routines, I examined the different aspects of the routine (Pentland & Feldman, 2005). To study the performative aspect of the routine, I used participant observation to help me understand how organizational actors coordinate activities (Pentland, 1992). Additionally, I reviewed artifacts such as reports or logs of performance to provide additional insight into the performative aspect of the routine (Pentland & Feldman, 2005). I used interview data to identify the ostensive aspect of the routine. I also reviewed artifacts such as standard operating procedures or checklists, since these artifacts often function as proxies or indicators of the ostensive aspect of the routine (Pentland & Feldman, 2005, 2008). During my fieldwork, I observed how organizational actors integrated the algorithm into the decision-making routines for scheduling patrol officers. My research design thus consisted of an ethnographic case study incorporating data from participant observation, interviews, and archival materials.

Participant Observation and Interviews. I entered the field as an observer studying Algo-Security, the commercial spin-off from Gaming Expert. I attended Algo-Security meetings and was invited to accompany Gaming Expert personnel on behalf of Algo-Security during preliminary site visits to Metropol. As my participation in the project unfolded, I played a more active role by serving as a minor evaluator for different exercises conducted to test the algorithm. As an associate of Algo-Security, I attended most meetings relevant to the implementation of the project and received relevant e-mail exchanges. Between February 2013 and September 2013, I spent an average of approximately ten hours per week working on the project with Gaming Expert, Algo-Security, and Metropol personnel.
During this time, I participated in various types of activities such as meetings, on-site project updates, conference calls, and field trials of the algorithm. During these activities, natural opportunities emerged for me to interview participants of both Metropol and Gaming Expert about the existing scheduling routines and the design of the algorithmic routine. While I conducted these interviews, I took extensive notes to capture as much detail as possible. After each day in the field, I also recorded detailed field notes about my general observations.

Archival Information. I gathered supplemental archival data. I obtained and catalogued publicly available information about the Metropol and Gaming Expert organizations. This general information enabled me both to familiarize myself with industry terminology and to understand the cultural background of each organization. During participant observation, I collected artifacts used for the scheduling routine (e.g., standard operating procedures and checklists). When possible, I captured information about the performative aspect of the routine by obtaining copies of daily and monthly performance reports. I also observed Gaming Expert lectures that introduced and explained the algorithm to academic audiences. Additionally, I accumulated archival materials relevant to the design process such as pictures of the mobile software and dashboards developed to represent various outputs from the algorithmic scheduling routine.

Data Analysis

To analyze my data, I compared and contrasted my emerging observations with existing theoretical literature. I utilized an approach for inductive data analysis that built on principles from grounded theory (B. Glaser & Strauss, 1967; Gioia et al., 2012) since pre-existing theory did not directly address my research question. I coded my data using the constant comparative method (Strauss & Corbin, 1998).
I followed methodological recommendations for coding and data analysis by analyzing my data with distinct rounds of coding to identify first-order concepts, second-order themes, and aggregate dimensions (Gioia et al., 2012).

In the first phase of data analysis, I used open coding to identify concepts of interest within my data (Strauss & Corbin, 1998; Van Maanen, 1979). During this process, I sought to develop a clear understanding of each organization's approach to the scheduling routine. For Metropol, I analyzed their existing routine, independent of the algorithm. For Algo-Security, I analyzed their routine as deployed in previous implementations. Additionally, I sought to understand how the organizations used the algorithm to automate the scheduling routine. To do so, I analyzed how organizational actors interacted as they designed the algorithm, specifically coding for the strategies the organizations used to address the discrepancies between their different approaches to the scheduling routine. During the second phase of data analysis, I sought to uncover the deep structure of my data, following the format and practices recommended by Gioia et al. (2012). At the beginning of this phase of data analysis, I re-coded my data for first-order concepts that explained the phenomenon of interest connected to my field site (Van Maanen, 1979). I then took a step back to examine the broader theoretical themes that emerged from my data in order to develop second-order themes and aggregate dimensions. I depict the data structure that emerged from this analytical process in Figure 5. In a final round of coding, I looked for relationships between concepts to develop an understanding of the dynamics that explain my phenomenon of interest (Gioia et al., 2012).
In this phase, I paid particular attention to relevant theoretical literature, and looked to identify the surprises in my context that could not be explained by existing theory (Eliasoph & Lichterman, 1999). My goal was to develop a theoretical explanation of the processes organizations use to apply algorithms to automate routines that was grounded in data from my empirical field site, yet general enough to explain how organizations use algorithms to automate decision-making routines more broadly.

Figure 5 – Data Structure: The Use of Statistical Expertise to Automate Decision-Making Routines

Findings

I present my findings in two sections. In the first section, I introduce the different approaches to security scheduling taken by Metropol and Gaming Expert. To do so, I provide a narrative description of each organization's approach to the security scheduling routine. I then analyze this narrative to summarize critical distinctions between these two different approaches to the routine. In the second section, I describe how Metropol and Gaming Expert use the algorithm to automate the security scheduling routine. To aid narrative flow, I present first-order concepts in conjunction with and structured by second-order theoretical themes and aggregate dimensions (Eisenhardt & Graebner, 2007). I find that Metropol and Gaming Expert bridged three distinct gaps in their approaches in order to use an algorithm to automate the scheduling routine. These gaps specifically reflected the challenges of streamlining routine performance without undermining flexibility and of integrating algorithmic expertise into the routine. The first discrepancy relates to the organizations' different understandings of the ostensive nature of the routine.
Whereas Gaming Expert's approach conceptualized scheduling decisions in terms of optimizing a mathematical causal model, Metropol's approach conceptualized scheduling decisions in terms of a human scheduler acting on learned pattern recognition grounded in domain experience. The second gap developed from contrasting conceptions of the nature of the performative aspect of the routine. In the Gaming Expert approach, the algorithm directly caused and controlled performance, but in the Metropol approach patrol officers flexibly modified performances based on tacit knowledge and pattern recognition. The final gap emerged from different conceptions of success: whereas Gaming Expert's game-theoretic approach defined success in terms of a mathematical, theoretically guaranteed solution, Metropol's pragmatic approach understood success in terms of the organization's ability to satisfy diverse demands from multiple stakeholders. I show that Gaming Expert and Metropol reconciled these gaps using three different theoretical bridging practices: modeling and mapping, establishing algorithmic jurisdiction, and constructing validation. I summarize this theoretical model below in Figure 6. I also provide additional representative data excerpted from my field notes to support this model in Table 10 below.

Figure 6 – Theoretical Model: How Organizations Use Statistical Expertise to Automate Decision-Making Routines

Table 10 – Representative Data Supporting Interpretations

Theme: Representative Quotations from Field Notes, Interviews, and Archival Data

Modeling and Mapping

Structuring an Algorithmic Model
"Robert describes the program as follows: he models patrol schedules in an adversarial environment. He incorporates spatio-temporal constraints into a Bayesian-Stackelberg game.
This enables users to avoid predictability and allows for competition between the security resource and the adversary." (Field Notes, 4/19/2013)
"Developing the payoff matrix is pretty challenging. The calculation of risk so you can assign different values to different targets based on real-world understanding of threats, preferences of the adaptive adversary…then we provide the solution of mathematics." (Interview, Algo-Security Director, 2/28/2013)
"I like to start an implementation by asking the client, what are the worst things that can happen to your organization." (Interview, Algo-Security Post-Doctoral Researcher, 3/8/2013)

Constructing Numerical Parameters
"We assign values to a bunch of different targets; we assign costs to different resources; then in theory it becomes a very simple set of math equations." (Interview, Algo-Security Director, 2/28/2013)
"Currently each of these parameters has been instantiated using the information provided by Metropol. We plan to refine these parameters as part of our future discussions with Metropol. At the moment, we pre-specified each of them." (Project Memo, 3/27/2013)
"We decided to simplify the matrix for purposes of the exercise. Rather than have a full matrix created, we will defer work on the full matrix and create a stripped down version of the matrix for the exercise." (Interview, Algo-Security Director, 3/28/2013)
"Sergeant Dale said that to construct the weighting, they looked at the infrastructure around them. They decided that during certain times one target would be more important than another. He mentioned that he worked with Algo-Security to use a simple scale with variations for the weightings of the inputs." (Field Notes, 4/9/2013)

Building Robustness
"Teams of officers will patrol each of these six stations. Each team will have different capabilities which might help him to prevent or deter a potential terrorist attack. Each team will move between the different stations.
The officers can either ride a train or use a car. The specific type of vehicle that is chosen will affect the time to move from one station to the other. Defining a good estimate of this number is a key question for future work." (Project Memo, 3/27/2013)
"The inputs to the model are formalized, but the mathematics are quite robust. When we refine the numbers, it only improves the quality of the results but the approach is quite consistent. The schedules have a respect for the values we assign to different locations…" (Field Notes, Algo-Security Director Commentary, 4/14/2013)
"In other words, because we weight the important stations more, they will get more resources, which means that they will get more of the K-9s and the VIPR units. This happens by a simple weighting; the effect could be magnified with a unit effectiveness parameter." (Field Notes, Algo-Security project implementation meeting, 5/14/2013)

Establishing Algorithmic Jurisdiction

Negotiating Algorithmic Control
"I ask Sergeant Aponte what his guys thought about the algorithmic schedule relative to the manual schedule. He says, 'They like yours a lot better. It's easier. They don't have to think about the paper. They can think about locating problems.'" (Field Notes, 5/16/2013)
"Dr. Olson says, 'In the real world, there are so many factors to weight, that computing the optimal solutions becomes very difficult. That's what our algorithms do.'" (Field Notes, 4/25/2013)
"David says, 'What we are doing isn't anything people can't do on their own. We take the knowledge in your brain to save time. We simply automate, using math, what you would do by hand.'" (Field Notes, 4/25/2013)
"What we want to say to Metropol: we're still interested in your experience. All we want to do is take over the scheduling.
We can maintain flexibility in how they interpret the schedule." (Interview, Algo-Security Post-Doctoral Researcher, 6/13/2013)
"The Metropol officers reinforce the need to trust the local judgment of the team on the ground, you don't want to provide too detailed direction." (Field Notes, 3/27/2013)
"Steven said, 'The culture of law enforcement thrives on self-initiated actions. There is no substitute for the correct judgment of the officer whose intuition has been validated. The officers don't want to feel constrained and have their decision-making ability taken away. With one Algo-Security client, the officers were made to understand that if the algorithm weren't in place, they would not have been able to exercise their intuition to stop the car that had drugs.'" (Field Notes, Algo-Security Project Implementation Meeting, 5/13/2013)

Algorithm as Creative Stimulus
"What are patrol configurations that generate the best results? For example, there are uniform officers, plainclothes officers, and combinations of the two. The idea is that these resources may be combined in different ways better or worse to achieve different goals (i.e., to combat crime or terrorism)…there is also the issue of coordination, this is an issue with the wander around philosophy. We had a situation where all of our officers converged on one guy who had two knives out." (Field Notes from listening to a Metropol Lieutenant, 5/16/2013)
"Another parameter Robert has is optimal time for a sortie. He and Metropol used fifteen minutes for the sortie, but they probably should have randomized this. For example, a VIPR team might have a different time than another team.
Should a VIPR team move around or stay in one spot?" (Field Notes, 5/16/2013)
"They talked about how the algorithm needs to be deployed in such a way where the humans have the flexibility to act on instinct, and the algorithm updates to reflect those actions." (Field Notes, Algo-Security Project Implementation Meeting, 5/13/2013)
"Some of the movements in this event were superb, but some featured police that were too bunched together. This didn't have anything to do with the schedule, instead it was the training of people on small unit tactics." (Field Notes, 4/25/2013)

Scoping Algorithmic Influence
"Sergeant Dale then distinguishes between intelligence and information. Information refers to facts: this is what's happening here, this is what I need to look for. Intelligence refers to a picture where all of the information gets put together. When the information is put together it becomes intelligence…the intelligence picture requires a continuous feed of information…During a normal day, randomization happens all the time. This is where intelligence comes in: during an event, we may need to redirect the randomization." (Field Notes, 4/25/2013)
"The scheduling matrix is based on petty crime and ILP (intelligence led policing). For example, there were a few robberies at one station, so we sent patrol officers to that station to have some fixed post time. The problem is that the criminals know the schedule…but the idea is that the scheduling matrix is based on ILP." (Interview, Metropol patrol officer, 7/15/2013)
"The female officer makes an observation: randomization is great, but it's just one part of the issue: the other part of the issue is, where do you put your resources?" (Field Notes, 4/25/2013)

Constructing Validation

Formulating Comparisons
"We are sometimes asked, how do we optimize? The response is that optimization is something that occurs in mathematics.
But in the real world, what we are doing is ‘optimizing on the status quo.’” (Interview, Algo-Security Doctoral Student, 3/7/2013) “Sergeant Aponte did not want to do the old schedule. He said the old way they did the schedule was to identify the number of locations to be covered at the desired coverage rate and then just divide it by resources: he’d have his high school daughter use her algebra to do it.” (Field Notes, 3/27/2013) “David says that we want to set the stage for evaluation. We need to not only identify how effective the sortie is, but also identify how does the random way of scheduling compare with the other method – the manual method – of scheduling. David then asks the group to think about how they would suggest evaluators compare the two systems.” (Field Notes, 4/25/2013) “Another one of our clients uses a ‘Red Team.’ They have undercover teams that play terrorist. Before our algorithm was deployed, the Red Team always won. After our algorithm was deployed, the Red Team was stopped four out of five times.” (Interview, Algo-Security Post-Doctoral Researcher, 3/7/2013) Creating New Measurements “Marcia suggested that there are two audiences that evaluate the Algo-Security algorithm: one is the user of the algorithm (i.e., the officer), the other is the evaluator of the algorithm (i.e., management). The intent is to develop a survey to capture what users think about the product.” (Field Notes, 4/24/2013) “For our algorithm, the defender expected utility is calculated directly by the algorithm. The idea is that the algorithm was deployed over several weeks, and the attacker conducted surveillance and then attacked. This is what gives us DEU [defender expected utility].” (Algo-Security e-mail, 5/11/2013) “For petty crime we model defender expected utility.
I like the idea of also measuring passengers influenced though, so that we show that we can go to more places and cover more people.” (Interview with Algo-Security Post-Doctoral Researcher, 6/24/2013) “Steven says that he thinks that Metropol should be describing the process in terms of incident reduction. Here’s why incident reduction works: Metropol is not just looking for a petty criminal, Metropol is looking to improve the safety and security of the system.” (Field Notes, Algo-Security Meeting, 7/26/2013) “Why emphasize coverage? Because adversaries see your presence. The key is demonstrate an increased presence. For example, the K-9s. The dogs seemed to be everywhere. If they see presence, there will be a decreased likelihood of crime because they will encounter a protective measure.” (Field Notes, Algo-Security Meeting, 6/24/2013) Scheduling Security Resources The Metropol Scheduling Routine To protect the critical infrastructure under their jurisdiction, Metropol deployed security resources (i.e., sergeants, deputies, and security assistants, hereafter often generically described as patrol officers) every day. Patrol officers performed different tasks depending on their role and the circumstances of the day. The approach Metropol used for this decision-making routine could be described in terms of four dimensions: setting organizational resource deployment objectives, assigning and scheduling patrol officers to particular tasks and geographies, performing these tasks in the field, and measuring results. Different organizational roles held responsibility for distinct dimensions of the routine. The executives set the organizational objectives; a scheduler put the schedule together; and patrol officers executed the schedule in the field. Organizational Resource Deployment Objectives. Metropol attempted to deter criminal and terrorist activity through the physical presence of patrol officers.
The physical presence of a patrol officer created a powerful deterrent for a criminal or a terrorist. As a law enforcement officer commented, “…some street criminals aren’t rational, but this is rare. For example, a police officer has very few experiences where they arrest someone who is dumb enough to rob someone in front of a police officer. It happens – every officer has an experience or two like this – but it is memorable, and rare.” (Metropol Lieutenant quoted in Field Notes, 3/27/2013) Metropol also sought to prevent – or interdict – criminal or terrorist activity before it occurred. For example, their patrol officers responded to leads or tips to prevent pre-planned criminal activities. Similarly, Metropol patrol officers attempted to identify and apprehend criminals who had committed a crime. Metropol thus worked to prevent all types of criminals from successfully initiating and completing their criminal objectives. Depending on circumstances, Metropol patrol officers prioritized efforts to thwart the types of criminals that posed the most severe threat to the safety and security of the critical infrastructure at any given time. For example, Metropol patrol officers sought to minimize petty crimes such as pickpocketing, more serious crimes such as robbery or assault, and terrorism. The relative priority placed on neutralizing these different types of adversaries shifted with circumstances; for example, Metropol officers would focus on anti-terrorism efforts in response to requests for cooperation with government agencies coordinated by the Department of Homeland Security (DHS). Scheduling Resources. To achieve these goals, Metropol faced a fundamental challenge analogous to that of any business: how could they maximize their security coverage with the resources they had? With an unlimited budget, for example, Metropol might station patrol officers at every public location every minute of the day.
With limited resources (and a limited public appetite for draconian security measures), however, Metropol had to decide which potential venues to send their limited quantity of patrol officers to protect. Consequently, they had to make decisions about resource allocation and deployment such as the daily scheduling of routine patrols (i.e., which trains, and train stations, should patrol officers check for fare evaders?) or the sporadic assigning of locations for specialized teams in response to a bomb threat. Traditionally, Metropol made such decisions using a combination of available information (which they called “intelligence”) and intuition. To schedule resources effectively, Metropol sought to deploy security resources randomly. Since criminals presumably attempted to avoid detection, they also attempted to avoid the patrols of security organizations. By increasing the randomness of security patrol deployment, Metropol made it more difficult for a criminal to execute a crime successfully. This enabled Metropol to better achieve goals of criminal deterrence and preventive interdiction. A Metropol sergeant described this objective of randomness in this way: “My goal is to keep criminals on their toes. We have a lot of resources – we want to make it impossible for criminals to know what teams and what tactics they might face.” (Metropol Sergeant quoted in Field Notes, 3/27/2013) As a counterpoint, however, complete randomization did not in and of itself necessarily achieve security goals. Metropol had to prioritize some targets over others due to inherent importance (i.e., a large building or central train station hub with a lot of passengers should be protected more than a strip mall or low volume peripheral station) or the likelihood of the occurrence of criminal activity (i.e., fewer security resources might be needed to patrol the area around an upscale neighborhood relative to an area with gang activity).
Additionally, Metropol often had to change the relative priority for preventing different types of crime (i.e., petty crime, crime, or terrorism). On any given day, then, Metropol decision-makers made resource deployment decisions taking all of these factors into account. They did so using a combination of information and officer intuition. Metropol performed this dimension of the routine via a scheduler who assigned patrol officers to perform particular functions. Schedulers in Metropol assigned specific patrol officers to particular geographies and tasks. For example, in one performance of the scheduling routine that I observed, team leader Rivera assigned patrol officers Zeman and Nickerson to attend a morning briefing at 6:00 am, to patrol the financial district from 7:00 am to 10:00 am (a “fixed post”), and to patrol the rail system by riding a specified train from 10:00 am to the end of their shift at 3:00 pm. In another performance of the scheduling routine, Sergeant Dale assigned a group of sixteen patrol officers including K-9 units (officers accompanied by specially trained dogs), plainclothes officers, mounted patrols (officers on horseback), and armed patrol officers (officers with machine guns) to perform a sweep (a “counter-swarming exercise”) through eight stations on the Southeast train line. The scheduler split the patrol officers into four groups and assigned each group to sweep through different stations. Task Performance. Once patrol officers received their scheduled task assignments, they traveled to their assigned geographic location and performed their assignments. Patrol officer responsibilities featured both rigid and flexible characteristics. Patrol officers for Metropol (as for any law enforcement organization) abided by law enforcement agency-specific rules of engagement that governed and constrained their actions. Patrol officers confronted diverse types of situations, which required them to exercise judgment and practical wisdom.
For example, during a morning briefing that I attended, the sergeant informed the patrol officers about a new drug that significantly enhanced the physical strength of users. The sergeant encouraged the patrol officers facing such a situation to avoid “trying to be a hero” and instead work to counter criminals by enlisting support from other patrol officers, even at the risk of letting the criminal escape. His clear message to patrol officers: exercise good judgment by taking into account all of the relevant features of the situation. This reliance on the judgment and discretion of individual patrol officers meant that patrol officers enacted a flexible, dynamic interpretation of the scheduling routine every day. Management also provided patrol officers with some quantifiable performance targets to deter the petty crime of fare evasion. Security assistants checked fares for at least 150 passengers per day while deputies checked fares for at least 75 passengers per day. When the patrol officers checked for fares, they used a mobile phone that scanned the fare card of a passenger. This provided the patrol officer with a rapid visual report of the detailed payment history of the passenger for the day. The patrol officers had a high degree of latitude in terms of how they responded to fare violations (they did not, for example, have a quota for the number of citations given). This latitude encouraged patrol officers to enact their schedules with flexible, idiosyncratic flair. For example, when I rode with patrol officers Zeman and Nickerson, I observed their activities at a train station. Each officer performed their job quite differently. Patrol officer Zeman selected an exit from the train station and asked each exiting passenger to provide proof of fare. Patrol officer Nickerson, however, oscillated between comprehensively covering one exit and standing back and looking for passengers who behaved suspiciously.
After riding with the patrol officers for the day, I recorded the following reflection in my field notes: So what seems to happen is that the individual patrol officers seem to create their own way of doing the performance of the routine on a regular basis. So Zeman, for example, does her fare checks during the morning rush hour. Then she moves on to focus on citations later in the day. She tells me that other patrol officers do things differently… “I just worry about myself, everybody does it differently.” (Field Notes, 7/10/2013) The schedule directed the general focus of the patrol officers, but the officers flexibly interpreted the schedule as they performed their jobs. As a result of this, I noted in my field notes that patrol officers believed that “everything changes so much in a given day that the schedule is only a starting point and a guideline.” The performance of the counter-swarming exercise also exhibited flexibility, but the greater sense of mission in this scheduled activity circumscribed the individual performances of the routine. During the performance of this routine, I accompanied the officers by following at a distance. We entered the Chappell station, and the officers split up and searched the various levels of the station for unattended bags and other signs of suspicious activities. The officers attempted to act unpredictably yet thoroughly. Each officer acted according to his or her own discretion, but the clear mission of the exercise minimized variation in the performance of the routine. Measuring Results. I noticed a surprising absence of talk about tangible results among patrol officers. At first this surprised me, as it would seem that statistics and measurements of petty crime, crime, and terrorism would be highlighted throughout all levels of the organization.
Over time I realized, though, that Metropol experienced a wide diversity of criminal activity on any given day, and patrol officers focused their attention on the concrete, embedded security actions they embarked on during any given day. They paid more attention to the process than the results, because their organization, Metropol, succeeded precisely when there were no results to report (i.e., no crimes). Additionally, Metropol officers did not have a clear causal understanding of the relationship between the different aspects of the scheduling routine and quantifiable results. For example, if a crime occurred in a train station, could it have been prevented by a better schedule? Or, was the crime simply a random occurrence? Metropol staff therefore only imperfectly associated the scheduling decisions with the task performance of patrol officers and tangible results. Problematizing the Metropol Scheduling Routine. Metropol struggled with several perennial challenges. The first struggle revolved around the question of how to use limited resources most efficiently. For example, when Sergeant Dale noted that it would take 100 patrol officers to maintain a full-time presence for a group of buildings, he said, “No agency has that many people – I would have to either borrow people or use my resources as effectively as possible” (Field Notes, 4/25/2013). Historically, the officers responsible for scheduling used their judgment about where to best assign security resources. Metropol also struggled to deploy resources randomly. As discussed earlier, the officers believed that randomly distributing patrol officers increased the effectiveness of their security coverage. They struggled, however, to create a random schedule. For example, when I asked Sergeant Aponte to describe his scheduling process, he said, “I send some officers to particular locations, and other officers to random locations.
I do this by looking at a map and trying to get into the mind of a criminal or terrorist. But the idea is to just go random” (Field Notes, 5/16/2013). As a tangible example of this difficulty, when I rode with the security assistants, they commented that they had been at a particular fixed post station for several days in response to a flurry of criminal activity. When they went to their posts, though, “the crime just moved to other locations” (Field Notes, 7/10/2013). Metropol wanted to schedule patrol officers randomly, but creating randomness challenged the scheduler. This difficulty was compounded by logistical constraints related to the coordination of organizational activities such as shift starting times. Finally, Metropol struggled to manage task execution by the patrol officers. By nature, officers performed their tasks with a high degree of discretion and flexibility. This presented a management challenge, however. Some of this difficulty emerged from natural human tendencies. Lieutenant Emrich, for example, commented somewhat cynically, “Where are [the cops]? The cops are where the pretty girls are. If you don’t give specific direction, then they end up doing things that are more pleasurable or easy to do.” (Lieutenant quoted in Field Notes, 5/16/2013) Similarly, another source of difficulty related to the interpretation of management directives. When I talked with Deputy Rivera, for example, she mentioned that management struggled to teach security assistants to differentiate between “the spirit and the letter” of the law. This tension was reflected in the diverse ways security assistants responded to the petty crime of fare evasion (i.e., some security assistants did not generate many citations, while others generated as many as 75 citations a day). This variation in routine performances arose from how security assistants interpreted violations differently.
These three major problems – how to allocate limited resources, how to be random, and how to manage individual performances of routines – set the stage for Metropol to engage Algo-Security to help automate their scheduling routine. The Gaming Expert Game-Theoretic Scheduling Algorithm Gaming Expert used game theory to help security organizations optimally respond to criminal or terrorist activities. Game theory studies strategic decision-making by employing formal mathematical models of intelligent, rational decision-makers. Specifically, Gaming Expert used game theory to model security forces defending targets from the attacks of adversaries (i.e., criminals or terrorists). In such competitive situations, the defender and the adversary attempt to outthink each other. Game theory formally models these competitive interactions by defining strategies and payoffs for each player. Such game theoretic approaches have attracted security organizations because they provide a simplified framework that analyzes the competitive interactions between security providers and adversaries such as criminals or terrorists. I illustrate this game theoretic approach to security scheduling with a simplified game (see Figure 7). In this simplified game, there are two targets of interest (Target 1 and Target 2). The defender can choose to send a security resource to defend either Target 1 or Target 2. Similarly, the adversary can choose to attack either Target 1 or Target 2. The payoff matrix reflects the costs and benefits realized by the defender and the adversary based on the outcomes of different strategic choices. For example, in this simplified game matrix, if the defender chooses to protect Target 1, and the adversary chooses to attack Target 2, the attack succeeds: the defender’s payoff is -2, and the adversary’s payoff is 2.
Conversely, if the defender chooses to protect Target 1 and the adversary chooses to attack Target 1, the attack is unsuccessful: the defender’s payoff is 1 and the adversary’s payoff is -1.

Figure 7 – Simple Example of a Game Matrix for Security Scheduling*

                                Adversary
                         Target 1        Target 2
Defender    Target 1      1, -1          -2,  2
            Target 2     -1,  1           2,  0

* The payoff values reflect the cost or benefit to each player of a successful or unsuccessful attack. For example, in the table above, if the defender chooses to assign resources to protect Target 1, and the adversary attacks Target 1, then the defender realizes a payoff of 1, and the adversary realizes a payoff of -1. If the defender chooses to defend Target 1, but the adversary attacks Target 2, then the defender realizes a payoff of -2, and the adversary realizes a payoff of 2. In this simple example, we see that the defender and attacker both place a greater value on Target 2 than Target 1.

Gaming Expert’s game theoretic modeling added additional factors which increased the complexity of the model. For example, they used a variant of game theoretic modeling known as Stackelberg models. In a Stackelberg game, the defender moves first by scheduling resources to protect particular targets. While preparing for an attack, an adversary monitors the defender’s actions and seeks to exploit predictable defense patterns. In terms of the simplified example above, if the defender always protected the most important target (Target 2), the adversary would observe this and always attack the less important target (Target 1). Consequently, in a Stackelberg game, defenders employ mixed strategies by choosing to defend different targets periodically. In the simplified example, the defender might deploy resources to protect Target 1 half of the time and Target 2 the other half of the time.
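The mechanics of such a mixed strategy can be sketched in code. The following Python fragment is purely illustrative and is not Gaming Expert’s proprietary algorithm: the payoff dictionary encodes the simplified game above, the tie-breaking rule and the grid search over commitment probabilities are my own expository assumptions, and the final line simply checks the scale of the real-world combinatorics discussed in this section (ten resources covering 100 targets).

```python
import math

# Payoff matrix for the simplified game in Figure 7.
# payoffs[(d, a)] = (defender payoff, adversary payoff) when the
# defender protects target d and the adversary attacks target a.
payoffs = {
    (1, 1): (1, -1), (1, 2): (-2, 2),
    (2, 1): (-1, 1), (2, 2): (2, 0),
}

def defender_expected_utility(p_defend_t1):
    """Defender's expected payoff when committing to a mixed strategy
    (the Stackelberg leader) against an adversary who observes the
    strategy and best-responds to it."""
    p = p_defend_t1
    # Adversary's expected payoff for attacking each target.
    attack_utils = {
        a: p * payoffs[(1, a)][1] + (1 - p) * payoffs[(2, a)][1]
        for a in (1, 2)
    }
    best = max(attack_utils.values())
    # Expository assumption (the standard "strong Stackelberg" convention):
    # when the adversary is indifferent, ties break in the defender's favor.
    candidates = [a for a, u in attack_utils.items() if math.isclose(u, best)]
    return max(
        p * payoffs[(1, a)][0] + (1 - p) * payoffs[(2, a)][0]
        for a in candidates
    )

# Coarse grid search over the defender's commitment probability.
best_p = max((i / 100 for i in range(101)), key=defender_expected_utility)

# Combinatorial explosion at real-world scale: ways to choose which
# 10 of 100 candidate targets the defender's resources cover.
n_strategies = math.comb(100, 10)  # 17,310,309,456,440, about 1.73 x 10^13
```

For this particular matrix the grid search settles on protecting Target 1 about a quarter of the time rather than half; the point of the sketch is only that the defender’s optimal mix depends on both players’ payoffs, not on intuition about “being random.”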
Mathematically, the defender can calculate their optimum mixed strategy, and when the attacker observes the defender’s patterns of resource deployment over an extended period of time and responds optimally, the players achieve game theoretic equilibrium (i.e., Nash Equilibrium). Other factors of complexity modeled include the ability to incorporate assumptions such as uncertain payoffs (i.e., a defender may not really know how much an attacker values different targets) or multiple adversary types (i.e., a common criminal might have different payoffs than a terrorist). Gaming Expert used game theoretic models that combine these forms of uncertainty with adversary surveillance; game theoretic experts call such models Bayesian-Stackelberg games. For simple exercises such as Figure 7, a security provider could mathematically calculate the optimal defender mixed strategy easily. In real-world security situations, however, these calculations become quite complex. For example, if a security organization has ten resources they can use to protect 100 targets, they must choose a strategy from a possible 1.73 x 10^13 potential strategies (Field Notes, Archival Materials). Sifting through this large quantity of potential actions to identify the optimal defender strategy creates a massive computation challenge for security providers. Gaming Expert developed proprietary, patented algorithms to overcome this computational challenge so that security organizations could identify and deploy game-theoretically optimal patrol resource schedules. Scheduling Objectives. Gaming Expert’s game-theoretic scheduling algorithm thus modeled a scheduling decision as a mathematical optimization problem for the defender. Theoretically, a defender could choose a variety of strategies within the structure of the game matrix.
For example, a defender might choose to use part of their limited resources to protect a particularly high value target like a central hub station all of the time. For their scheduling algorithm, however, Gaming Expert generated recommended strategies that optimized a metric called expected defender utility. Expected defender utility represented the expected value of the payoffs given the defender’s strategy and the potential responses of the adversary. By calculating and optimizing expected defender utility, Gaming Expert assumed that both the defender and the adversary would act in their self-interest by maximizing their expected potential gain. This mathematical construct of expected defender utility thus functioned as the scheduling objective in Gaming Expert’s approach to the decision-making routine. Scheduling Resources. Gaming Expert’s game-theoretic scheduling algorithm produced a recommended schedule that suggested which targets each security resource should protect. This output offered the security organization a detailed recommendation of how to schedule resources. The algorithm would generate random recommendations that would more frequently select the targets that the defender and the attacker valued more. The game-theoretic algorithm thus mathematically generated a detailed patrol officer deployment schedule that reflected the weighting parameters defined in the game matrix. Task Performance. In the game theoretic algorithm, the recommended schedule causally controlled task performance. Gaming Expert could, however, mathematically model task performance in various ways. For example, Gaming Expert could assign security resources different effectiveness parameters to account for variability in task performance for different patrol officers. They could also adjust parameters to reflect the possibility of systematically imperfect task performance.
The game-theoretic algorithm, however, always controlled detailed task performance via the mathematical assumptions built into the model. Measuring Results. The results of the decision-making routine differed from the objective of the routine. Theoretically, the algorithm optimized expected defender utility, but because of randomness, every individual performance of the scheduling routine would yield a different result. For example, the game theoretic model would periodically recommend that patrol officers cover a lower value target. If the adversary chose to attack the high value target while the defender randomly chose to patrol the low value target, the scheduling algorithm might provide idiosyncratic results that showed a negative outcome for the defender. Gaming Expert measured results, therefore, in terms of the consolidated mathematical expectation labeled expected defender utility. Optimal performance could therefore inherently only be measured over time. Interestingly, the algorithm also measured the results for the attacker. Gaming Expert cited research on terrorism events in particular that showed that adversaries spent a considerable amount of time and money conducting surveillance on their target of interest. By deploying their random algorithm, Gaming Expert theoretically increased the planning cost for the adversary to carry out an attack: “the adversary needs to collect more data to learn about what the police do, and even with that, the adversary wouldn’t be able to predict where the police will be on any exact date and time” (Field Notes, e-mail exchange with Gaming Expert post-doctoral researcher, 8/31/2013). Thus the results measured by the Gaming Expert algorithm incorporated both defender and attacker payoffs. Two Distinct Approaches to Security Scheduling I summarize the above discussion by highlighting that these approaches to the routine differ significantly. The differences between these two approaches are summarized in Table 11 below.
Table 11 – Comparison of Metropol and Gaming Expert Approaches to Scheduling Routines

Scheduling Objective. Metropol: deter, interdict, and apprehend petty criminals, criminals, and terrorists. Gaming Expert: maximize defender expected utility.
Scheduling Resources. Metropol: scheduler intuition about how to be random, efficiently. Gaming Expert: set up as a game characterized by mathematical parameters.
Task Execution. Metropol: patrol officers given latitude about how best to achieve organizational objectives. Gaming Expert: ostensive aspect of routine assumed to guide performative aspect of routine perfectly.
Measuring Results. Metropol: imperfect link between scheduling activities and performance. Gaming Expert: theoretical, mathematical optimum.
Overall Approach. Metropol: grounded, embedded, pragmatic. Gaming Expert: abstract, disembedded, mathematical.

The two organizations used substantively different approaches for the ostensive and performative aspects of the scheduling routine. Metropol’s routine reflected a grounded, embedded, pragmatic approach to the scheduling process. The scheduling decision functioned as one part of the routine; the patrol officer’s performance of the routine in the field featured much more prominently and somewhat overwhelmed the scheduling decision. The Gaming Expert routine, however, reflected an abstract, disembedded mathematical approach to the routine. The scheduling decision featured prominently as the central process of the routine. In order to implement a new, automated algorithm-driven routine, Metropol and Gaming Expert actors needed to bridge these distinct approaches to the routine for scheduling and deploying resources. These different approaches highlighted the theoretical challenge of understanding how the organizations would integrate these divergent forms of expertise. Mathematical expertise undergirded Gaming Expert’s approach to scheduling, and law enforcement personnel struggled to respect this form of expertise. In an informal conversation, Dr.
Olson described the problem in this way: Dr. Olson said that it is very difficult for [Gaming Expert] to gain trust. He listed off each of the major Gaming Expert clients, and said that initially for each client there was no trust or interest at all in using mathematical algorithms, but over time the potential client became more and more comfortable. (Field Notes, 4/17/2013) At the same time, the Gaming Expert professionals emphasized the importance of respecting the domain, experience-based expertise of Metropol law enforcement personnel. Specifically, many of the Gaming Expert personnel commented about the need for “immersion,” the process they used to learn about the domain-based expertise of their client organizations. These different approaches also highlighted the theoretical challenge of understanding how organizations use algorithms to automate routines without undermining the flexibility offered by human performances of the routines. The Metropol routine featured flexible, human performances in the scheduling of the patrol officers and the ways in which the patrol officers implemented that schedule. The Gaming Expert routine, however, featured performances dictated by a structured mathematical model that simplified a complex environment into a game matrix. Optimizing this mathematical model produced clear, objective scheduling recommendations that in and of themselves offered little flexibility: exercising flexibility would eliminate the optimality of the model’s schedule recommendations. Bridging Metropol and Gaming Expert Approaches to Decision-Making Routines Modeling and Mapping To use the algorithm to augment the routine, the organizational actors had to reconcile different conceptualizations of the ostensive nature of the routine. The Metropol approach treated the routine holistically and viewed the scheduling decision as a subcomponent controlled by expert heuristic decision-making (Bingham & Eisenhardt, 2011).
The Gaming Expert approach, however, treated the entire routine as a function of a scheduling decision that directly determined ultimate outcomes. These approaches would treat the same performance of the scheduling routine differently. For example, if Metropol experienced a flurry of crime at a train station, in the Metropol routine the scheduler would incorporate this new information into her decision-making heuristic (i.e., deploy available officers to that train station to respond to the increase in crime). In the Gaming Expert routine, however, the experience of crime would be an input to the algorithmic model that would have been created prior to the occurrence of the event. The Metropol approach thus assumed that schedulers exercised human judgment that observed and reacted to patterns based on experience. In contrast, the Gaming Expert model assumed that organizational actors had constructed a predictive causal model. These divergent understandings of the ostensive nature of the scheduling routine illustrate both of the theoretical issues of concern addressed earlier. First, in the Gaming Expert approach to the routine, the algorithm as an artifact represents a device used to impose a particular understanding of the ostensive aspect of the decision-making component of the routine. In the Metropol approach, however, each patrol officer has a different understanding of the ostensive aspect of the routine, and the ostensive aspect of the routine has minimal influence over the actual performance of the routine. Second, to use the Gaming Expert algorithm, the algorithmic causal model and its underlying mathematical expertise to some extent had to replace the pattern recognition heuristics of the domain experts as the source of scheduling decisions. To overcome these gaps, Metropol and Gaming Expert engaged in modeling and mapping practices. The first modeling and mapping practice was structuring an algorithmic model.
To use an algorithm to automate the routine, Metropol selected an algorithm that featured assumptions reflecting their organizational context. Hypothetically, Metropol could have selected a different algorithm to automate their routine, such as a simple randomization algorithm. In this instance, however, Metropol leadership and Gaming Expert believed that the game theoretic algorithm accounted for the characteristics of the Metropol security environment particularly well. Specifically, game theory assumed that criminals and terrorists behaved as rational adversaries who would opportunistically observe the patrol patterns of a security organization in order to exploit any observed weaknesses. Metropol personnel viewed criminals and terrorists as rational adversaries who focused on maximizing their interests (i.e., by placing different values on targets and observing patrol patterns to avoid being caught). Lieutenant Emrich described the rationality of criminals by noting "we have a tendency to think that people who are tattooed from the ghetto are stupid, but they're not stupid – they just have a different preference for making a living" (Field Notes, 5/16/2013). Consequently, Metropol and Gaming Expert personnel selected the Bayesian-Stackelberg game theoretic model due to their belief that the assumptions underlying this model fit the law enforcement context particularly well. After they selected game theory as the disciplinary model for the algorithm, the organizational actors defined the structural properties of the game by constructing a game matrix. Larry, a Gaming Expert post-doctoral researcher, described this process:

If you are a new customer, I ask you the following questions: Can you identify a set of targets you need to protect? What resources are available to you? What are the capabilities of those resources? Can you provide some metric of how important each target is? … [W]e construct a payoff matrix.
Basically we create scenarios where the adversary wins dollars, and we lose dollars. As we create the game matrix, we work with users to think about what their attackers will observe. We spend time understanding what is going on in the real world so we can take real world constraints and build them into the model; we take the information about the real world and optimize the decision-making process. (Interview, Gaming Expert post-doctoral researcher, 3/7/2013)

This initial setup thus created a framework that could be used to map characteristics of the security environment into the mathematical algorithm.

The second modeling and mapping practice was constructing numerical parameters. The mathematical game theoretic model required that organizational actors translate properties of the security domain into numerical values that could populate the model. For example, if Metropol personnel needed to protect twelve different locations, they needed to assign a number reflecting the relative importance of each location for both the defender and the adversary. They could have assigned this number as a quantitative input from Big Data (such as a real-time input of how many people frequented that location), or as a qualitative score based on the intuitive judgment of an officer. Metropol and Gaming Expert personnel worked together to construct other relevant numbers such as travel time, security resources available, and the operational effectiveness of security resources. Since the algorithm processed scheduling decision-making in terms of mathematical cause and effect, Metropol and Gaming Expert personnel had to take the assumptions underlying existing, human-driven tacit decision-making processes and make them explicit in order to construct precise numerical parameters.

The third modeling and mapping practice was building robustness.
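The modeling and mapping practices described above can be illustrated with a brief sketch. All target names, payoff values, and the scanner-cost adjustment below are hypothetical illustrations of the kind of structure involved, not parameters from the actual Gaming Expert model.

```python
import random

# Step 1: structure the game -- identify targets and resources
# (all names and numbers here are hypothetical illustrations).
targets = ["Station A", "Station B", "Station C"]
num_resources = 1  # e.g., one patrol unit available per shift

# Step 2: construct numerical parameters -- translate judgments about
# each target's importance into payoffs for defender and adversary:
# (defender if covered, defender if uncovered,
#  attacker if covered, attacker if uncovered).
payoffs = {
    "Station A": (2, -5, -3, 5),  # high-value target
    "Station B": (1, -2, -1, 2),
    "Station C": (1, -1, -1, 1),
}

# Step 3: build robustness -- e.g., "charge" the adversary for defeating
# a metal scanner at Station A by lowering the attacker's payoffs there.
scanner_cost = 2
dc, du, ac, au = payoffs["Station A"]
payoffs["Station A"] = (dc, du, ac - scanner_cost, au - scanner_cost)

# A naive uniformly random schedule for eight time blocks; the
# Bayesian-Stackelberg optimizer would instead weight coverage
# probabilities by the payoff matrix rather than choosing uniformly.
schedule = [random.choice(targets) for _ in range(8)]
```

The sketch captures only the translation step the practices describe: domain judgments become a payoff matrix, and robustness concerns become manipulations of that matrix's parameters.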
During the process of modeling the security organization context into the game theoretic model, the inherent complexity of the real world challenged the validity of the ostensive nature of the algorithmic model. To respond to this challenge, Metropol and Gaming Expert personnel sought to bridge the different understandings of the ostensive aspect of the routine by designing the algorithmic model to adapt to idiosyncratic circumstances. A Gaming Expert post-doctoral researcher described this process as making the algorithm "less brittle." Larry, the Gaming Expert post-doctoral researcher, described how their model accounted for this uncertainty:

There are many types of uncertainty that our model can handle. One is uncertainty about the payoffs. The next form of uncertainty relates to the officer's execution. The next form of uncertainty relates to the observer. And the final form of uncertainty relates to improper modeling...We could have errors relating to all these types of uncertainty; the model can still handle it as long as we identify the errors. (Interview, Gaming Expert post-doctoral researcher, 3/7/2013)

In addition to making the mathematics robust, Gaming Expert worked with clients such as Metropol to manipulate the parameters of the model to capture complexities in the security environment. Larry described the process in detail:

"…we do not necessarily know how a terrorist will attack. But the way we deal with this uncertainty is that we basically 'charge them' by having the model say that the terrorist has to 'make an investment' if we put in a metal scanner. The more of these types of resources we deploy, the higher the cost to the terrorist (in the model)." (Interview, 3/7/2013)

The creative manipulation of the numerical parameters of the model enabled the organizational actors to design the algorithm to model the real world more robustly.

The Metropol and Gaming Expert approaches to the scheduling routine defined the ostensive aspect of the routine differently.
To bridge these different approaches, the organizational actors structured and populated the algorithmic model by constructing numerical parameters. This process constructed an infrastructure through which Metropol could apply the mathematical expertise of game theory to their security context. As they did so, Metropol and Gaming Expert maintained flexibility by building robustness into the model that accounted for the uncertainties of the security domain and the ways in which the mathematical world of the game theoretic algorithm did not resemble Metropol's security world.

Establishing Algorithmic Jurisdiction

The different approaches to scheduling also featured fundamentally different approaches to the performative aspect of the routine. In Metropol, officers performed the routine by drawing on the tacit knowledge that they learned from on-the-job experience. In the Gaming Expert approach, the algorithmic decision controlled the performance of the routine as the mathematical model centrally drove the performance of the routine. This gap between the different conceptions of the performative aspect of the routine directly relates to the theoretical issues of flexibility and agency. For example, the algorithm might improve the Metropol routine by enabling leadership to control performances of the routine more effectively (i.e., the cops should not necessarily go to where the pretty girls are). At the same time, the algorithm could also potentially undermine the officer intuition that Metropol leadership valued and embraced. To reconcile this tension, Metropol and Gaming Expert personnel engaged in the bridging practice of establishing algorithmic jurisdiction, which included negotiating algorithmic control, using the algorithm as a creative stimulus to improve the underlying scheduling routine, and scoping the influence of the algorithm.

The first practice of establishing algorithmic jurisdiction was negotiating algorithmic control.
In the Gaming Expert approach to scheduling, the algorithm controlled the performative aspect of the scheduling routine by providing directives for patrol officer actions. Neither party believed, however, that the algorithm could replace the judgment of the patrol officers in the field. Consequently, Metropol and Gaming Expert worked together to negotiate the activities that the algorithm would and would not control. This process began as organizational actors developed an understanding of the comparative advantages of the algorithm and the human in decision-making. Rather than view the automation of the algorithm as a replacement for human judgment, organizational actors attempted to understand what the algorithm would do better than the patrol officers, and vice versa. For example, they recognized that the algorithm efficiently performed the grunt work of mathematical computation. Dr. Olson described the power of the algorithm this way: "In the real world, there are so many factors to weigh that computing optimum solutions becomes very difficult. That's what our algorithms do" (Interview, 4/25/2013). The Metropol personnel also agreed with this assessment. When I asked Sergeant Aponte, for example, to compare the algorithmic scheduling routine with the prior Metropol routine, he responded by highlighting the efficiency of the algorithm.

He thinks the difference [between the randomized algorithm and the existing Metropol system] is that the randomized algorithm will be more efficient. It takes time for the officers to figure out what they're doing. But the efficiency of not having to pull out the piece of paper is part of why there will be extra coverage. (Field Notes, Informal Interview with Metropol Sergeant, 5/16/2013)

In addition to these efficiency benefits, the algorithm also generated more random scheduling recommendations. Being random challenged the schedulers, and Metropol personnel liked the idea of automating this difficult task.
In addition, Metropol leadership believed that using the algorithm removed bias from the decision-making process. The human element still remained important in the decision-making process, however. Metropol and Gaming Expert personnel believed that patrol officers and schedulers recognized patterns of importance – patterns that the algorithm by nature would struggle to take into account. Patrol officers had the ability to exercise judgment about how to handle the idiosyncratic information related to real-time intelligence updates. At the same time, Metropol and Gaming Expert personnel questioned whether patrol officer intuition would override the algorithm inappropriately. For example, while I was riding around with the security assistants, one of them noted that he felt there was a particular station where he combatted a high frequency of petty crime, and he believed that the algorithm did not reflect this. Metropol and Gaming Expert leadership could interpret this observation in two different ways: they could reason that they had modeled the algorithm incorrectly, or they could reason that the security assistant might have had an incorrect opinion. This ambiguity highlighted a fundamental theoretical question: when should the patrol officer take over control from the algorithm? Metropol answered this question by allowing the patrol officer to control the algorithm. For example, Sergeant Dale commented at one point, "the officers on the ground can always override the randomized schedule" (Field Notes, 4/25/2013). In other words, although the algorithm performed the computational grunt work that the humans did not want to do, it did not infringe on the ability of the patrol officers to perform their routines flexibly as necessary. This maintenance of patrol officer control, however, was not a necessary outcome. Metropol could have used the algorithm as a device to increase control of patrol officer task performance.
During several of the trials, for example, Gaming Expert post-doctoral researchers expressed frustration with the tendency of patrol officers to disregard the algorithmic schedule when it did not match their intuition. They highlighted instances where following the algorithm, even if the recommendations contradicted the intuitions of the patrol officer, improved the performance of the routine.

The second practice related to establishing algorithmic jurisdiction was using the algorithm as a creative stimulus. The introduction of the algorithm highlighted frictions in existing organizational processes. For example, during tactical tests of the algorithm, observers questioned the activities performed by the patrol officers (the "boots on the ground") at various locations. Observers noticed that the officers tended to bunch up together when they patrolled particular locations. This observation did not directly relate to the algorithm, but the algorithm highlighted the issue and encouraged observers to reconsider and refine fundamental operational processes. The algorithm also highlighted new opportunities for process improvement. During an anti-terrorist counter-swarming exercise, for example, Metropol followed an operational practice that featured teams of all types sweeping through a large building for a fifteen-minute period. Introducing the algorithm, however, stimulated Metropol personnel to consider how the algorithm might be used to randomize the time of a sortie, both at different times of the day and for different units. Similarly, discussions unfolded in which Metropol personnel attempted to understand the interaction effects that occurred from matching teams together. Would a VIPR team (a specialty anti-terrorist unit featuring different types of units with different pieces of equipment) obtain synergies if it worked with a plainclothes unit? The algorithm did not directly inform these types of issues.
Rather, the observation of the task performance using the algorithmic schedule stimulated these discussions about the underlying operational processes. Similar to a boundary object (Carlile, 2002), the algorithm thus became a tool that organizational actors used to change and update the routine: the algorithm served as a device that stimulated Metropol personnel to rethink their overall routine and corresponding operational processes.

The third practice related to establishing algorithmic jurisdiction was scoping algorithmic influence. The algorithmic decision about how to deploy patrol officers functioned as one component of the routine. Organizational actors had to define how this decision-making component of the routine would influence the broader characteristics of the scheduling routine. In the Gaming Expert approach to the scheduling routine, the scheduling decision centrally influenced and controlled the performances of the broader routine. In the Metropol approach, however, the scheduling decision played a less prominent role. The organizational actors had to reconcile these divergent approaches in order to use the algorithm within the routine. This process can be illustrated by the application of algorithms in the context of fraud detection. When an algorithm detects fraudulent activity on a credit card, the algorithm can set a series of actions in motion, or it can bring a human actor into the decision-making picture. This process of scoping how much the algorithm influences the performative aspect of the routine is a central component of how algorithms automate routines. I saw an example of the algorithm's application expanding in scope into other areas of the performative aspect of the routine when Metropol broadened the reach of the algorithm's scheduling decisions. Initially, Metropol used the algorithm to generate criminal patrol schedules daily.
As the project unfolded, Metropol decided also to try to use the algorithm to deploy patrol officers in response to terrorist threats. This increase in scope was not an obvious step. With a sporadic event such as a bomb threat, the adversary has not had the opportunity to observe the actions of the security organization, so some of the algorithm's assumptions became less relevant and appropriate. Metropol still used the algorithm, however, as a tool or device to handle this type of situation. Metropol personnel believed that the algorithm crafted better, more randomized scheduling decisions than the prior, manual scheduling decision-making process. Consequently, Metropol extended the jurisdiction of the algorithm from influencing the task performance of a simple patrol situation to influencing the task performance of a more complex terrorism threat.

The divergent ways in which Gaming Expert and Metropol approached the routine featured distinct conceptualizations of the performative aspect of the routine, particularly with regard to the centrality of the scheduling deployment decision in the context of the overall scheduling routine. Organizational actors reconciled these divergent approaches by establishing the jurisdiction of the algorithm through negotiating the degree of algorithmic control over human performance, using the algorithm as a creative stimulus to change the broader routine, and scoping the influence of the algorithm throughout the broader routine and organizational context.

Constructing Validation

The final gap between the two different approaches to the routine related to differing conceptions of success. Metropol measured success in terms of their ability to meet the objectives and demands of multiple stakeholders. Understanding success was challenging for Metropol, particularly because their perception of success could best be described in terms of the absence of an outcome (i.e., no crime).
The causal connections between Metropol's scheduling and patrol officer activities were messy and somewhat ignored. For Gaming Expert, however, success could be clearly defined and understood. By definition, the theoretical model they established always succeeded (i.e., via mathematical proof). These divergent conceptions of success exacerbated the gap between the different forms of expertise. For the Metropol domain expert, practical wisdom justified their actions. Mathematical expertise, however, trusted in hard numbers rather than the judgment of the human performers. These differing conceptions of success thus required a bridging of these diverse forms of expertise. Metropol and Gaming Expert reconciled these different conceptions of success by engaging in the bridging practice of constructing validation.

The first practice of constructing validation was formulating comparisons. Gaming Expert, from their prior project experiences, had created comparisons between their algorithm and alternative ways of scheduling security resources to validate their algorithmic approach to scheduling. These historical comparisons served as a building block to create project-specific comparisons. Gaming Expert emphasized that no one could measure the success of a random scheduling algorithm perfectly, because in law enforcement, organizations define success in terms of the absence of crime. In a document Gaming Expert used to justify the effectiveness of their approach, they noted "since we cannot rely on adversaries to cooperate in evaluating Gaming Expert's models and results, and there is (thankfully) very little data available about the deterrence of real-world terrorist attacks, these measures are the best available evidence of Gaming Expert's effectiveness" (Internal Company Document on Gaming Expert Algorithm Effectiveness). They concurrently highlighted six different ways of demonstrating that their algorithm outperformed other methods of scheduling.
The effectiveness of the Gaming Expert system in providing optimum security risk at minimum cost has been measured and demonstrated in several ways, including (1) computer simulations of checkpoints and canine patrols, (2) tests against human subjects, including students, an Israeli intelligence unit, and on the Internet Amazon Turk site, (3) comparative analysis of predictability of schedules and methodologies before and after Algo-Security implementation, (4) red team / "adversary" team, (5) capture rates of guns, drugs, outstanding arrest warrants and fare evaders, and (6) user testimonials and congressional testimony. (Internal Company Document on Gaming Expert Algorithm Effectiveness)

By formulating these comparisons, Gaming Expert supplemented their theoretical validation with other types of validation to justify the effectiveness of their algorithm. Metropol and Gaming Expert reconciled the different conceptions of success in the project implementation by constructing simplified project-specific comparison cases. For example, Metropol and Gaming Expert personnel conducted an analysis of algorithmic scheduling for fare evasion. The organizations used fare evasion as a laboratory case "to test…the level of coverage and randomness that [the algorithm] can achieve" (e-mail from Gaming Expert post-doctoral researcher to Metropol patrol officers, 7/9/2013). In this comparison, Metropol and Gaming Expert first generated manual schedules for a week and then generated algorithmic schedules for a week to compare the results of the two schedules. Gaming Expert sponsored students to observe the patrol officers implementing both schedules so that they could capture performance statistics relevant for comparison. In another example, Metropol tested the effectiveness of the algorithm for scheduling counter-terrorism patrols.
In this test, Metropol and Gaming Expert compared four-hour manually-scheduled and algorithmically-scheduled sorties that simulated a response to a terror threat. This test was much less of a scientific comparison than the fare evasion test because the counter-terrorism exercise involved only one schedule. To evaluate this exercise, observers compared the resources required to compute the schedule, catalogued observer perceptions about the relative coverage delivered by the different scheduling methods, and provided a qualitative assessment of the exercise as a whole. For fare evasion and counter-terrorism alike, these comparison cases enabled Metropol and Gaming Expert personnel to evaluate the algorithm and develop an understanding of the merits of both the algorithmic decision and the broader scheduling routine.

The second practice of constructing validation was creating new measurements. Because of the lack of causal clarity in the existing Metropol routine, organizational actors struggled to obtain a direct measurement of the effectiveness of the algorithmic scheduling process. Theoretically, expected defender utility would provide a measurement of success since the algorithm inherently optimized this metric. Metropol personnel, however, struggled to understand this abstract metric. Consequently, Gaming Expert did not use expected defender utility as a number to validate the routine. Rather, they used expected defender utility as an abstract notion to provide philosophical support for the benefits of randomization.

Metropol and Gaming Expert also created a survey instrument to measure coverage, or patrol officer presence as observed by people stationed in the targets they defended.
Metropol and Gaming Expert believed that observers would be able to detect greater presence with the algorithmic schedule, and they labeled this metric "coverage." During the project, Metropol and Gaming Expert personnel measured coverage in other ways as well, extending the survey instruments by estimating coverage via simulation based on the estimated number of people passing through specified locations during any given period. This measurement established a concrete metric that helped justify the effectiveness of the randomized schedule. Interestingly, however, this new measurement did not correlate directly with the method the algorithm used to schedule and deploy the security resources. The game theoretic model optimized expected defender utility, derived from the values that the defender and the adversaries placed on the various targets. This value did not necessarily correlate directly with coverage. The coverage measurement, however, proved to be more tangible and measurable than expected defender utility. Coverage usually improved with the implementation of the algorithmic scheduling routine, because the algorithm forced more movement of the patrol officers. With the creation of new measurements, Metropol and Gaming Expert thus dealt with the differing conceptions of success from two directions. First, they continued to develop the notion of expected defender utility as a concept to justify the philosophy of randomness. Expected defender utility thus became a shorthand term used to describe the philosophy of randomness embedded in the algorithm. Second, they developed more tangible, concrete measurements to justify the algorithm. These measurements did not necessarily emerge directly from the model. Rather, they served as approximations used to justify the algorithmic routine and build credibility with diverse stakeholders.
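The gap between the optimized metric and the constructed coverage metric can be sketched with a minimal two-target example. The payoff numbers are hypothetical, and this simple grid search merely illustrates the kind of expected defender utility a Stackelberg security model optimizes; it is not Gaming Expert's actual algorithm.

```python
# Hypothetical two-target Stackelberg security game with one patrol resource.
# Payoff tuples: (defender if covered, defender if uncovered,
#                 attacker if covered, attacker if uncovered).
T1 = (0, -10, -5, 10)  # high-value target
T2 = (0, -2, -1, 2)

def defender_utility(p):
    """Expected defender utility when T1 is covered with probability p
    (and T2 with probability 1 - p), assuming the attacker observes the
    coverage probabilities and attacks whichever target maximizes the
    attacker's own expected payoff."""
    att1 = p * T1[2] + (1 - p) * T1[3]   # attacker's value of hitting T1
    att2 = (1 - p) * T2[2] + p * T2[3]   # attacker's value of hitting T2
    if att1 >= att2:                     # attacker best-responds
        return p * T1[0] + (1 - p) * T1[1]
    return (1 - p) * T2[0] + p * T2[1]

# The optimizer searches for the coverage probability that maximizes
# expected defender utility; a coarse grid search suffices here.
best_p = max((i / 100 for i in range(101)), key=defender_utility)
# With these numbers, best_p lands just past the point where the attacker
# switches targets (p = 0.62), not at any intuitively "even" coverage split.
```

Observed coverage, by contrast, is a simple count of officer presence; nothing in the model guarantees that such a count tracks this optimized quantity, which is why the constructed coverage metric could diverge from the algorithm's internal objective.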
Discussion

The organizational use of algorithms in routines provides two challenges to our existing understanding of routines. First, algorithms require sophisticated mathematical expertise that lies outside the expertise held by members of an organization, whose expertise comes from extended experience in their domain (Bingham & Eisenhardt, 2011). Second, when organizations use algorithms to streamline the performances of routines in the pursuit of efficiency, they inherently undermine the flexibility that humans bring to routines when they engage in the performances of routines (D'Adderio, 2011; Feldman, 2000; Pentland & Feldman, 2008). These two theoretical puzzles led me to ask the research question, how do organizations use algorithms to automate routines? I sought to answer this question by conducting an in-depth case study in the security industry, where a law enforcement organization and an expert service provider used a game theoretic algorithm to optimize a routine for scheduling and deploying patrol officers.

I found that to create a new routine that used the algorithm to automate the scheduling decision, organizational actors reconciled two divergent approaches to the routine. The domain experts approached the routine from a grounded, embedded, and pragmatic perspective. The expert service providers approached the routine from an abstract, disembedded, and mathematical perspective. These two approaches featured gaps that challenged organizational actors.

First, organizational actors reconciled different understandings of the ostensive aspect of the routine. For the domain expert, the ostensive aspect of the routine resided in experts recognizing and responding to patterns. For the expert service provider, the ostensive aspect of the routine resided in a mathematical model that featured variables with clear causal relationships. To reconcile this gap, organizational actors engaged in the process of modeling and mapping.
In modeling and mapping, organizational actors structured an algorithmic model and worked together to construct numerical parameters that could be used within the model. They pursued the goal of building robustness by designing the model to take various types of uncertainty into account.

Second, organizational actors reconciled different understandings of the performative aspect of the routine. In the domain-expert understanding, individuals enacted the routine by drawing on tacit knowledge learned from experience. This resulted in flexible, autonomous performances of the routine. In the algorithmic routine of the expert service provider, however, the causal connections made explicit in the model controlled the performance of the routine. To reconcile this gap, organizational actors engaged in the process of establishing algorithmic jurisdiction. To establish algorithmic jurisdiction, organizational actors negotiated what processes the algorithm would control, used the algorithm as a creative stimulus to influence other areas of the routine, and scoped the influence of the algorithm.

Finally, organizational actors reconciled distinct conceptions of success. For the domain expert, success had been a somewhat flexible construct in which organizational actors sought to meet various demands that emerged from daily interactions. The expert service provider, however, defined success with a simple number: a mathematical optimum. To reconcile this gap, organizational actors constructed validation by formulating comparisons and creating new measurements.

This study offers several contributions to management research. First, I show that organizational design of an algorithm can more appropriately be conceptualized as a bridge between different communities of practice than as a resolution of political conflict.
Whereas prior literature suggests that routines function as truces that resolve conflict between communities with divergent political interests (Nelson & Winter, 1982; Zbaracki & Bergen, 2010), I show that domain experts use the expertise embedded in the mathematical algorithm as a device to create space for a new, automated routine to emerge. This automated routine facilitated the discovery of process improvements that accompanied the joint analysis of activities and results. In essence, the algorithm serves as an enchanted, magical device that organizational actors use to construct a new routine.

Second, I develop a theoretical model that explains how organizational actors overcome the challenges of maintaining flexibility with the algorithm and integrating external mathematical expertise into a routine. While prior literature has shown how experts inscribe their worldview into technology and routines (D'Adderio, 2011; Leonardi & Barley, 2010; Zuboff, 1988), I show how domain experts and mathematical experts use the algorithm to reconfigure many different, non-algorithm-related aspects of the routine. As they do so, the algorithm functions as a device that is imperfectly understood by organizational actors, and it becomes enchanted with meaning that propels a transformation of the decision-making routine.

Third, my study challenges the notion that algorithms tightly circumscribe the performance of decision-making routines (D'Adderio, 2008, 2011). Prior research emphasizes both that algorithms create a tight link between ostensive and performative aspects of the routine (D'Adderio, 2008, 2011) and that organizational actors engage in flexible, innovative performances of routines (Feldman, 2000; Howard-Grenville, 2005; Pentland, 1992). I provide an account that explains how organizational actors seek to manage this tension, showing that organizational actors use the algorithm to shift the locus of agency.
My account suggests that scholars should further investigate the ways in which designers seek to shape and direct agency by constructing the background within which organizational actors perform routines.

Limitations and Opportunities for Further Study

To develop a process model that explains how organizations use algorithms to automate routines, I used an inductive case study of one organization. Although a single case study is an appropriate research design for developing new theory (Pettigrew, 1990), further research can illuminate how idiosyncratic features of this case study's empirical context might moderate or attenuate the theoretical model. For example, I study how organizations use game theory to automate routines. Further research could examine how organizations use other types of algorithms to automate routines. Additionally, I study how an organization and an expert service provider work together to automate the routine. Further research could examine the implications of how organizations use algorithms when they bring mathematical expertise inside their organization.

In this study, although I have developed a process model explaining how organizations use algorithms to automate routines, I did not develop an explanation of the relationship between the different components of this process and firm performance. This study might have several implications for understanding the relationship between organizational use of algorithms and performance that further research could investigate. For example, how important is the translation process to the performance of the algorithm? If, during the implementation of the algorithm, organizational actors reconcile the gaps between the different approaches to the routine imperfectly, how does that influence performance? Similarly, how significant is the selection of a particular algorithm to the performance of the routine?
Based on the results of this model, one might argue, for example, that an imperfect but enchanted algorithm might serve an organization more effectively than a perfect algorithm for which the reconciliation processes remain undone. Developing a deeper understanding of the relationship between the use of complex algorithms and firm performance is a topic of significant interest to scholars and practitioners alike.

Conclusion

With technological progress, organizations increasingly try to automate their routines (Haeckel & Nolan, 1993). Although Big Data has made this process increasingly popular (McAfee & Brynjolfsson, 2012), little research has investigated how such automation works in practice. Although one might assume that this trend leads to increasing rationalization (Weber, 1958; Porter, 1996), this study suggests that the trend features a greater degree of nuance. In my study, the abstract game-theoretic rationality drawing on mathematics and statistics did increase efficiency, but it did so in an enchanted manner. The algorithm instantiated mathematical expertise and functioned as an enchanted device that the domain experts used to rethink and reform their routine.

CHAPTER 5 - CONCLUSION

In the modern era, organizations have used quantification to great effect. The efforts of Taylor (i.e., Scientific Management) increased labor productivity; the efforts of Deming (i.e., Total Quality Management) increased the quality of production. Often, changes in information technology have facilitated this rationalization of business processes (Zuboff, 1988). Like the physical sciences, many organizations have represented their environment mathematically. As in the physical sciences, the simplification of an organizational environment into mathematical parameters has led to tangible, positive improvements in productivity and quality.
Recently, however, organizations have begun quantifying phenomena that may not have such a clear referent in the environment. Organizations, for example, seek to bring decision-making processes such as employee talent evaluation or brand advertising strategies under the purview of quantification. Specifically, quantification becomes a social mechanism that people from different environments and backgrounds use to communicate with each other. As quantification shifts to becoming a mechanism that facilitates communication in a complex, global world, numbers play a different and more powerful role in social interactions. Numbers appear objective and thereby help actors overcome local “subjectivities” that may prevent compromise. This use of numbers often fails (perhaps because of impossibility) to represent the environment completely; numbers strip away meaning. Still, as research by scholars such as Marion Fourcade (2011) shows, quantification as instantiated by methodologies such as economics provides a means by which social actors can solve politically challenging problems that require comparing the values of different “orders of worth.” In these situations, quantification does not help people control their environment; rather, quantification helps people avoid conflict.

The fundamental difference between these perspectives relates to their differing ontologies of numbers. In the rationalization perspective, a number accurately reflects an object in the real world. For example, batching time might indicate the physical time required for a machine to complete a batch. In the sociology of quantification perspective, however, a number both reflects and defines the world. For example, a ranking might consolidate a variety of values (i.e., teaching quality, pay after graduation, etc.) through some sort of political process that in turn also influences subsequent rankings.
This distinction is increasingly critical in the world of Big Data, where organizations use quantification to scrape data from newspaper articles, Twitter, and Facebook to construct measures of “brand reputation,” or use quantification embedded in a variety of instruments to evaluate the “talent potential” of employees. As organizations begin to quantify such complex decision-making processes, the role of quantification is unclear. Is quantification a tool organizations use to control their environment? Or does quantification reflect compromises and trade-offs that organizations use to balance competing interests? This shift in the ontology of numbers related to organizational decision-making motivates the need for additional research to describe the processes organizations use to quantify such processes.

In my dissertation, I attempt to address this theoretical puzzle by asking the research question, how do organizations quantify their decision-making processes? I utilize an inductive case study research design that relies on participant observation of two professional organizations that seek to apply game theory to decision-making in the security industry. This is an ideal empirical context in which to study this research question. Security scheduling decisions reflect the kind of complexity that the sociology of quantification perspective highlights. For example, do law enforcement patrol officers seek to prevent petty crime? Assaults and robberies? Or terrorism? Similarly, officers have no real causal understanding of the behavior of their adversaries. Observing how quantification works in this type of environment provided me with an “extreme case” (Pettigrew, 1990) of the quantification of decision-making that facilitates the discovery of new concepts and grounded theory.

Contributions

My findings suggest that the quantification of complex phenomena diverges from the accounts offered by both the rationalization and sociology of quantification perspectives.
Rather than rationalizing business processes or resolving political conflict, I find that quantification functions as a pragmatic cultural tool that organizations use to enhance their ability to resolve the perennial issues they struggle with in their organizational environment.

In Chapter 3, I study how a professional organization seeks to inscribe its expertise in quantification into a strategy tool. Prior theoretical accounts would describe this in one of two ways. First, according to the rationalization perspective, the professional organization would focus its energy on achieving alignment between the environment and the quantified decision model. Second, according to the sociology of quantification perspective, the professional organization would use quantification to reflect political compromises in its organizational environment. My findings reflect neither of these accounts; instead, I observe quantification happening in a different way. When the professional organization engages in prototyping, it does seek to rationalize a decision-making process by representing its client organization’s environment in a quantitative decision-making model. Because the professional organization builds a generic tool that needs to connect with multiple audiences, however, it engages in the process of pinging. In pinging, conversations with different audiences force the professional organization to expand the scope of its thinking beyond the individual decisions being quantified, and instead to situate its expertise in quantification in a broader story that emphasizes the ability of the strategy tool to help solve perennial problems that the organization and/or the broader industry struggle to resolve.
Put another way, when the professional organization quantifies complex decision-making, domain experts use the tool of quantification to resolve a variety of “pains” in the industry, not just the pain specifically addressed by the quantification methodology.

In Chapter 4, I drill into the process by which a professional organization and its client seek to implement a quantified decision-making process. Existing theory would suggest that this type of implementation could happen in one of two ways. In the rationalization perspective, the focus would be on how organizations can use quantification to automate existing decision-making processes. For example, a scheduler might spend a certain amount of time constructing a schedule that could be replaced by the quantified schedule. In the sociology of quantification perspective, the focus would be on how quantification could be used to help balance competing interests. Although I find hints of both of these themes, the overall story seems to be quite different. My empirical data suggest that the gap between the real world and the mathematical world is great enough that, even in the prototyping phase, the quantified decision model faces resistance from real-world exigencies. Organizations must reconcile these gaps and resistances; they must use bridging strategies to reconcile the gaps and create a foundation for a culture that integrates quantification rather than eschewing existing domain-based solutions.

This research makes several contributions to management scholarship. First, by characterizing research on quantification along two dimensions (locus of agency and ontology of numbers), I highlight the importance of approaching the study of the quantification of decision-making with accurate assumptions.
These assumptions matter given current trends in business: the rationalization perspective, for example, assumes an ontology of numbers that does not allow for the study of quantifying complex concepts such as brand management or human performance; the sociology of quantification perspective fails to prioritize the active agency of organizational actors.

Second, prior academic and theoretical accounts of quantification highlight the nature of the conflict between different forms of expertise. This conflict may exist when quantification directly replaces a person. When numbers become increasingly complex, however, quantification cannot function as a direct replacement for humans. Instead, the process of quantification creates an educational canon used by domain experts. The domain experts use quantification – without completely understanding it – to attempt to accomplish their objectives. The statistical expertise becomes absorbed into the domain-based expertise.

Third, prior literature on practice adoption highlights the ways in which administrative innovations are “made to fit” (Ansari et al., 2010) during adoption. I find, however, that the process of applying the innovation actually functions much more like an analogical inspiration than a template to be fit (V. L. Glaser, Kennedy, & Fiss, 2014). In other words, the emphasis of the administrative innovation that promotes the quantification of complex decision-making processes may have to be on transformation rather than adaptation.

Fourth, scholars have suggested that more research is needed on the development of strategy tools. By studying the processes professional organizations use to develop a strategy tool, I find a surprising result: the creation of the strategy tool requires the professional organization to think more broadly, focusing on the pains removed by the tool more than on the expertise of the organization.
The expertise in quantification provides legitimacy and core functionality that organizations can use; once the professional organization has to export the tool to multiple locations, however, the focus shifts to more pragmatic, embedded knowledge.

Finally, prior research on technology and routines highlights the durable, static nature of quantification that becomes embedded in algorithms. I find, however, that the nature of the quantification of complex phenomena challenges this presumption. Because complex phenomena feature this mutually constitutive ontology, they require organizational actors to engage in active shaping of future performances of organizational processes. In other words, when organizations use quantified algorithms to automate routines, they shift the locus of agency rather than eliminate agency.

Conclusion

In Moneyball (2004), Michael Lewis tells the story of the statisticians overcoming the scouts. Preliminary observations might have validated this story. I suggest, however, that the current state of baseball provides hints of the importance of my theoretical argument that quantification becomes subsumed into the culture of particular domains. Consider the following excerpt from an article in the New York Times earlier this year:

For more than 100 years, baseball looked pretty much the same from the grandstands. There were three players spread in the outfield, a pitcher on the mound, a catcher behind the plate, and four infielders neatly aligned, two on each side of second base. But a radical reworking of defensive principles is reshaping the way the old game is played, and even the way it looks…Some baseball positions as they have long been known are changing before our eyes.
The cause is the infield shift, a phenomenon exploding this year as more teams are using statistical analysis and embracing a dynamic approach to previously static defenses…Teams that shift regularly are lowering opposing teams’ batting averages by 30 to 40 points on grounders and low line drives. (Waldstein, 2014)

The “shift” represents a fundamentally new approach to a perennial problem in baseball: how do you defend against really good hitters? Interestingly, the quantification by the statisticians here gave data to domain experts, and the domain experts have used the information from quantification to change the way the game is played. The quantification of decision-making processes does not rationalize; it generates complex numbers that both reflect and define reality. This quantification functions as a tool whereby domain actors magnify their power over their social world. The numbers present organizational actors with an ability to use some form of theoretical knowledge – which may or may not directly apply to their work – to re-imagine the possibilities in their domain. This transformation may have the potential to do great good, or harm – but regardless, scholars need to study the phenomenon of quantification from the perspective of the actors who use it, since much of the value comes from the “art” performed by the data scientist rather than the “science.”

REFERENCES

Abbott, A. (1988). The System of Professions: An Essay on the Division of Expert Labor (1st ed.). University of Chicago Press.

Adler, P. S., Goldoftas, B., & Levine, D. I. (1999). Flexibility versus Efficiency? A Case Study of Model Changeovers in the Toyota Production System. Organization Science, 10(1), 43–68. doi:10.2307/2640387

Albin, P., & Foley, D. K. (1998). Barriers and Bounds to Rationality: Essays on Economic Complexity and Dynamics in Interactive Systems. Princeton University Press.

Ansari, S., Fiss, P. C., & Zajac, E. J. (2010).
Made to Fit: How Practices Vary As They Diffuse. Academy of Management Review, 35(1), 67–92.

Barley, S. R. (1986). Technology as an Occasion for Structuring: Evidence from Observations of CT Scanners and the Social Order of Radiology Departments. Administrative Science Quarterly, 31(1), 78–108.

Beniger, J. (1989). The Control Revolution: Technological and Economic Origins of the Information Society. Harvard University Press.

Bingham, C. B., & Eisenhardt, K. M. (2011). Rational heuristics: the “simple rules” that strategists learn from process experience. Strategic Management Journal, 32(13), 1437–1464. doi:10.1002/smj.965

Boudreau, J. W., & Jesuthasan, R. (2011). Transformative HR: How Great Companies Use Evidence-Based Change for Sustainable Advantage (1st ed.). Jossey-Bass.

Burawoy, M. (1998). The Extended Case Method. Sociological Theory, 16(1), 4–33. doi:10.1111/0735-2751.00040

Cabantous, L., & Gond, J.-P. (2011). Rational Decision Making as Performative Praxis: Explaining Rationality’s Éternel Retour. Organization Science, 22(3), 573–586. doi:10.1287/orsc.1100.0534

Cacciatori, E. (2012). Resolving Conflict in Problem-Solving: Systems of Artefacts in the Development of New Routines. Journal of Management Studies, 49(8), 1559–1585. doi:10.1111/j.1467-6486.2012.01065.x

Carlile, P. (2002). A Pragmatic View of Knowledge and Boundaries: Boundary Objects in New Product Development. Organization Science, 13(4), 442.

Chantrill, C. (2013). US Government Spending. USGovernmentSpending.com. Retrieved from www.usgovernmentspending.com

Cohen, M. D. (2007). Reading Dewey: Reflections on the Study of Routine. Organization Studies, 28(5), 773–786. doi:10.1177/0170840606077620

Cohen, M. D., & Bacdayan, P. (1994). Organizational Routines Are Stored As Procedural Memory: Evidence from a Laboratory Study. Organization Science, 5(4), 554–568. doi:10.2307/2635182

D’Adderio, L. (2008).
The performativity of routines: Theorising the influence of artefacts and distributed agencies on routines dynamics. Research Policy, 37(5), 769–789. doi:10.1016/j.respol.2007.12.012

D’Adderio, L. (2011). Artifacts at the centre of routines: performing the material turn in routines theory. Journal of Institutional Economics, 7(2), 197–230. doi:10.1017/S174413741000024X

Davenport, T. H. (2014). Big Data at Work: Dispelling the Myths, Uncovering the Opportunities. Harvard Business Review Press.

Davenport, T. H., & Harris, J. G. (2007). Competing on Analytics: The New Science of Winning (1st ed.). Harvard Business School Press.

Davenport, T. H., Harris, J. G., & Morison, R. (2010). Analytics at Work: Smarter Decisions, Better Results. Harvard Business Review Press.

Davenport, T. H., & Patil, D. J. (2012, October). Data Scientist: The Sexiest Job of the 21st Century. Harvard Business Review. Retrieved April 24, 2014, from http://hbr.org/2012/10/data-scientist-the-sexiest-job-of-the-21st-century/ar/1

Deming, W. E. (1982). Out of the Crisis. Cambridge, MA: MIT Press.

Desrosieres, A. (2001). How Real Are Statistics? Four Possible Attitudes. Social Research, 68(2), 339–355.

Dosi, G., Nelson, R. R., & Winter, S. G. (2000). The Nature and Dynamics of Organizational Capabilities. Oxford: Oxford University Press.

Eggers, J. P., & Kaplan, S. (2013). Cognition and Capabilities. The Academy of Management Annals, 7(1), 293–338. doi:10.1080/19416520.2013.769318

Eisenhardt, K. M., & Graebner, M. E. (2007). Theory Building from Cases: Opportunities and Challenges. Academy of Management Journal, 50(1), 25–32. doi:10.2307/20159839

Eliasoph, N., & Lichterman, P. (1999). “We Begin with Our Favorite Theory …”: Reconstructing the Extended Case Method. Sociological Theory, 17(2), 228–234. doi:10.1111/0735-2751.00076

Emirbayer, M., & Mische, A. (1998). What Is Agency? American Journal of Sociology, 103(4), 962–1023. doi:10.1086/231294

Espeland, W.
N., & Sauder, M. (2007). Rankings and Reactivity: How Public Measures Recreate Social Worlds. American Journal of Sociology, 113(1), 1–40. doi:10.1086/517890

Espeland, W. N., & Stevens, M. L. (1998). Commensuration as a Social Process. Annual Review of Sociology, 24, 313–343.

Espeland, W. N., & Stevens, M. L. (2008). A Sociology of Quantification. European Journal of Sociology / Archives Européennes de Sociologie, 49(3), 401–436. doi:10.1017/S0003975609000150

Espeland, W. N., & Vannebo, B. I. (2007). Accountability, Quantification, and the Law. Annual Review of Law and Social Science, 3, 21–43. doi:10.1146/annurev.lawsocsci.2.081805.105908

Feldman, M. S. (2000). Organizational Routines as a Source of Continuous Change. Organization Science, 11(6), 611–629. doi:10.2307/2640373

Feldman, M. S., & Pentland, B. T. (2003). Reconceptualizing Organizational Routines as a Source of Flexibility and Change. Administrative Science Quarterly, 48(1), 94–118. doi:10.2307/3556620

Felin, T., Foss, N. J., Heimeriks, K. H., & Madsen, T. L. (2012). Microfoundations of Routines and Capabilities: Individuals, Processes, and Structure. Journal of Management Studies, 49(8), 1351–1374. doi:10.1111/j.1467-6486.2012.01052.x

Foucault, M. (1995). Discipline & Punish: The Birth of the Prison. Vintage.

Fourcade, M. (2011). Cents and Sensibility: Economic Valuation and the Nature of “Nature.” American Journal of Sociology, 116(6), 1721–1777.

Gibbons, R. (2006). What the Folk Theorem doesn’t tell us. Industrial and Corporate Change, 15(2), 381–386. doi:10.1093/icc/dtl002

Gioia, D. A., Corley, K. G., & Hamilton, A. L. (2012). Seeking Qualitative Rigor in Inductive Research: Notes on the Gioia Methodology. Organizational Research Methods. doi:10.1177/1094428112452151

Gioia, D. A., Price, K. N., Hamilton, A. L., & Thomas, J. B. (2010). Forging an Identity: An Insider-outsider Study of Processes Involved in the Formation of Organizational Identity. Administrative Science Quarterly, 55(1), 1–46.
doi:10.2189/asqu.2010.55.1.1

Glaser, B., & Strauss, A. C. (1967). The Discovery of Grounded Theory: Strategies for Qualitative Research. Aldine Transaction.

Glaser, V. L., Kennedy, M. T., & Fiss, P. C. (2014). Making Analogies Work: The Extension of Financial Market Concepts to Online Advertising. Working Paper.

Greenwood, R., Suddaby, R., & Hinings, C. R. (2002). Theorizing Change: The Role of Professional Associations in the Transformation of Institutionalized Fields. Academy of Management Journal, 45(1), 58–80. doi:10.2307/3069285

Haeckel, S. H., & Nolan, R. L. (1993). Managing by Wire. Harvard Business Review, 71(5), 122–132.

Harris, J. G., Craig, E., & Egan, H. (2009). How to Create a Talent-Powered Analytical Organization (Accenture Research Report). Retrieved from https://www.yumpu.com/en/document/view/8936608/how-to-create-a-talent-powered-analytical-organization-accenture

Healy, P. (2013, January 8). LAPD Chief Wants to Expand “Predictive Policing.” NBC Southern California. Retrieved June 9, 2014, from http://www.nbclosangeles.com/news/local/LAPD-Chief-Charlie-Beck-Predictive-Policing-Forecasts-Crime-185970452.html

Howard-Grenville, J. A. (2005). The Persistence of Flexible Organizational Routines: The Role of Agency and Organizational Context. Organization Science, 16(6), 618–636. doi:10.2307/25146000

Jarzabkowski, P., & Kaplan, S. (2014). Strategy Tools-in-Use: A Framework for Understanding “Technologies of Rationality” in Practice. Strategic Management Journal. doi:10.1002/smj.2270

Kennedy, M. T., & Fiss, P. C. (2009). Institutionalization, Framing, and Diffusion: The Logic of TQM Adoption and Implementation Decisions among U.S. Hospitals. Academy of Management Journal, 52(5), 897–918.

Kerr, S. (1975). On the Folly of Rewarding A, While Hoping for B. Academy of Management Journal, 18(4), 769–783.

Knights, D., & Morgan, G. (1991). Corporate Strategy, Organizations, and Subjectivity: A Critique.
Organization Studies, 12(2), 251–273. doi:10.1177/017084069101200205

Kvale, S., & Brinkmann, S. (2008). InterViews: Learning the Craft of Qualitative Research Interviewing (2nd ed.). Sage Publications.

Langley, A. (1999). Strategies for Theorizing from Process Data. Academy of Management Review, 24(4), 691–710. doi:10.2307/259349

Langley, A., Smallman, C., Tsoukas, H., & Van de Ven, A. H. (2013). Process Studies of Change in Organization and Management: Unveiling Temporality, Activity, and Flow. Academy of Management Journal, 56(1), 1–13. doi:10.5465/amj.2013.4001

Lave, J. (1996). The Practice of Learning. In Understanding Practice: Perspectives on Activity and Context (pp. 3–35). New York: Cambridge University Press.

Lencioni, P. M. (2010). The Four Obsessions of an Extraordinary Executive: A Leadership Fable (1st ed.). Jossey-Bass.

Leonardi, P. M., & Barley, S. R. (2010). What’s Under Construction Here? Social Action, Materiality, and Power in Constructivist Studies of Technology and Organizing. Academy of Management Annals, 4(1), 1–51.

Levinthal, D., & Rerup, C. (2006). Crossing an Apparent Chasm: Bridging Mindful and Less-Mindful Perspectives on Organizational Learning. Organization Science, 17(4), 502–513. doi:10.1287/orsc.1060.0197

Lewis, M. (2004). Moneyball. New York: W. W. Norton.

Lofland, J., Snow, D. A., Anderson, L., & Lofland, L. H. (2005). Analyzing Social Settings: A Guide to Qualitative Observation and Analysis (4th ed.). Wadsworth Publishing.

March, J. G. (2006). Rationality, foolishness, and adaptive intelligence. Strategic Management Journal, 27(3), 201–214. doi:10.1002/smj.515

March, J. G., & Simon, H. A. (1958). Organizations (2nd ed.). Cambridge, MA: Wiley-Blackwell.

McAfee, A., & Brynjolfsson, E. (2012). Big Data: The Management Revolution. Harvard Business Review. Retrieved from http://blogs.hbr.org/cs/2012/09/big_datas_management_revolutio.html

Miller, C. C. (2013, April 11).
Universities Offer Courses in a Hot New Field: Data Science. The New York Times. Retrieved from http://www.nytimes.com/2013/04/14/education/edlife/universities-offer-courses-in-a-hot-new-field-data-science.html

Nelson, R. R., & Winter, S. G. (1982). An Evolutionary Theory of Economic Change. Belknap Press of Harvard University Press.

Nisbet, R., Elder IV, J., & Miner, G. (2009). Handbook of Statistical Analysis and Data Mining Applications (1st ed.). Academic Press.

Orlikowski, W. J., & Scott, S. V. (2008). Sociomateriality: Challenging the Separation of Technology, Work and Organization. The Academy of Management Annals, 2(1), 433–474. doi:10.1080/19416520802211644

Parmigiani, A., & Howard-Grenville, J. (2011). Routines Revisited: Exploring the Capabilities and Practice Perspectives. The Academy of Management Annals, 5(1), 413–453. doi:10.1080/19416520.2011.589143

Pentland, B. T. (1992). Organizing Moves in Software Support Hot Lines. Administrative Science Quarterly, 37(4), 527–548. doi:10.2307/2393471

Pentland, B. T., & Feldman, M. S. (2005). Organizational routines as a unit of analysis. Industrial and Corporate Change, 14(5), 793–815. doi:10.1093/icc/dth070

Pentland, B. T., & Feldman, M. S. (2008). Designing routines: On the folly of designing artifacts, while hoping for patterns of action. Information and Organization, 18(4), 235–250. doi:10.1016/j.infoandorg.2008.08.001

Pentland, B. T., & Rueter, H. H. (1994). Organizational Routines as Grammars of Action. Administrative Science Quarterly, 39(3), 484–510. doi:10.2307/2393300

Pettigrew, A. M. (1990). Longitudinal Field Research on Change: Theory and Practice. Organization Science, 1(3), 267–292.

Porter, T. M. (1996). Trust in Numbers. Princeton University Press.

Rerup, C., & Feldman, M. S. (2011). Routines as a Source of Change in Organizational Schemata: The Role of Trial-and-Error Learning. Academy of Management Journal, 54(3), 577–610. doi:10.5465/AMJ.2011.61968107

Salvato, C. (2009).
Capabilities Unveiled: The Role of Ordinary Activities in the Evolution of Product Development Processes. Organization Science, 20(2), 384–409. doi:10.1287/orsc.1080.0408

Salvato, C., & Rerup, C. (2011). Beyond Collective Entities: Multilevel Research on Organizational Routines and Capabilities. Journal of Management, 37(2), 468–490. doi:10.1177/0149206310371691

Sauder, M., & Espeland, W. N. (2009). The Discipline of Rankings: Tight Coupling and Organizational Change. American Sociological Review, 74(1), 63–82. doi:10.1177/000312240907400104

Steiner, C. (2012). Automate This: How Algorithms Came to Rule Our World. Portfolio.

Steuerman, E. (2003). The Bounds of Reason: Habermas, Lyotard and Melanie Klein on Rationality. Routledge.

Strauss, A. C., & Corbin, J. M. (1998). Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory (2nd ed.). Sage Publications.

Taleb, N. N. (2010). The Black Swan: The Impact of the Highly Improbable (2nd ed.). Random House Trade Paperbacks.

Taylor, F. W. (1911). The Principles of Scientific Management. Harper.

Turner, S. F., & Rindova, V. (2012). A Balancing Act: How Organizations Pursue Consistency in Routine Functioning in the Face of Ongoing Change. Organization Science, 23(1), 24–46. doi:10.1287/orsc.1110.0653
Van Maanen, J. (1979). The Fact of Fiction in Organizational Ethnography. Administrative Science Quarterly, 24(4), 539–550. doi:10.2307/2392360

Waldstein, D. (2014, May 12). Who’s on Third? In Baseball’s Shifting Defenses, Maybe Nobody. The New York Times. Retrieved from http://www.nytimes.com/2014/05/13/sports/baseball/whos-on-third-in-baseballs-shifting-defenses-maybe-nobody.html

Weber, M. (1958). From Max Weber: Essays in Sociology. Oxford University Press.

Welch, J., & Byrne, J. A. (2003). Jack: Straight from the Gut. New York: Business Plus.

Whittington, R. (2006). Completing the Practice Turn in Strategy Research. Organization Studies, 27(5), 613.

Zammuto, R. F., Griffith, T. L., Majchrzak, A., Dougherty, D. J., & Faraj, S. (2007). Information Technology and the Changing Fabric of Organization. Organization Science, 18(5), 749–762. doi:10.1287/orsc.1070.0307

Zbaracki, M. J., & Bergen, M. (2010). When Truces Collapse: A Longitudinal Study of Price-Adjustment Routines. Organization Science, 21(5), 955–972. doi:10.1287/orsc.1090.0513

Zuboff, S. (1988). In the Age of the Smart Machine: The Future of Work and Power. New York: Basic Books.
Asset Metadata
Creator: Glaser, Vern Lee (author)
Core Title: Enchanted algorithms: the quantification of organizational decision-making
School: Marshall School of Business
Degree: Doctor of Philosophy
Degree Program: Business Administration
Publication Date: 09/22/2014
Defense Date: 07/11/2014
Publisher: University of Southern California (original), University of Southern California. Libraries (digital)
Tag: algorithms, capabilities, decision-making, OAI-PMH Harvest, quantification, rationality, routines, technology implementation
Format: application/pdf (imt)
Language: English
Contributor: Electronically uploaded by the author (provenance)
Advisor: Fiss, Peer C. (committee chair), El Sawy, Omar A. (committee member), Eliasoph, Nina (committee member), Mayer, Kyle J. (committee member)
Creator Email: vglaser@ualberta.ca
Permanent Link (DOI): https://doi.org/10.25549/usctheses-c3-520569
Unique identifier: UC11298686
Identifier: etd-GlaserVern-2975rev.pdf (filename), usctheses-c3-520569 (legacy record id)
Legacy Identifier: etd-GlaserVern-2975rev.pdf
Dmrecord: 520569
Document Type: Dissertation
Rights: Glaser, Vern Lee
Type: texts
Source: University of Southern California (contributing entity), University of Southern California Dissertations and Theses (collection)
Access Conditions: The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the a...
Repository Name: University of Southern California Digital Library
Repository Location: USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA