How Can Metrics Matter: Performance Management
Reforms in the City of Los Angeles
by
Robert W. Jackman
A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
PUBLIC POLICY AND MANAGEMENT
August 2019
Contents
CONTENTS .......................................................................................................................................... 1
LIST OF FIGURES AND TABLES ..................................................................................................... 4
INTRODUCTION ................................................................................................................................. 5
CHAPTER ONE: EXAMINING THE LITERATURE IN PURSUIT OF PERFORMANCE......... 13
1.1 INTRODUCTION ........................................................................................................................... 13
1.2 ORGANIZATIONAL CONSTRAINTS .............................................................................................. 16
1.2.1 IN PURSUIT OF SUCCESS ............................................................................................................ 17
1.2.2 INHERENT CHALLENGES OF PUBLIC MANAGEMENT ................................................................... 20
1.3 FACILITATING IMPLEMENTATION .............................................................................................. 24
1.3.1 CHOICE OF INSTITUTIONAL DESIGN ........................................................................................... 24
1.3.2 LEADERSHIP .............................................................................................................................. 28
1.3.3 ORGANIZATIONAL FEATURES .................................................................................................... 30
1.3.4 METRICS AND DATA .................................................................................................................. 33
1.3.5 TRAINING .................................................................................................................................. 34
1.3.6 INTEGRATION OF DATA INTO MANAGEMENT ............................................................................. 35
1.4 IMPLEMENTATION PROCESSES ................................................................................................... 37
1.4.1 AVOIDING PITFALLS .................................................................................................................. 39
CHAPTER TWO: METHODOLOGY .............................................................................................. 41
2.1 INTRODUCTION ........................................................................................................................... 41
2.1.1 RESEARCH QUESTIONS .............................................................................................................. 41
2.2 OVERVIEW OF RESEARCH APPROACH ....................................................................................... 42
2.2.1 RESEARCH CONTEXT ................................................................................................................. 43
2.3 RESEARCH SETTING AND DATA SOURCES .................................................................................. 44
2.4 DATA COLLECTION METHODS ................................................................................................... 47
2.4.1 SURVEY PROCEDURES ............................................................................................................... 47
2.4.2 INTERVIEW AND MEETING OBSERVATION PROCEDURES ............................................................ 49
2.4.3 DOCUMENT AND METRIC COLLECTION PROCEDURES ................................................................ 49
2.5 DATA ANALYSIS METHODS......................................................................................................... 50
2.5.1 CASE-STUDY ANALYSIS ............................................................................................................ 51
2.5.2 FUZZY SET QUALITATIVE COMPARATIVE ANALYSIS ................................................................. 52
2.5.3 MULTIVARIATE REGRESSION ANALYSIS .................................................................................... 54
CHAPTER THREE: A QUALITATIVE ANALYSIS OF BACK TO BASICS ............................... 56
3.1 THE CONTEXT OF IMPLEMENTING PERFORMANCE REFORMS IN LOS ANGELES ...................... 56
3.2 QUALITATIVE CASE STUDY ANALYSIS ....................................................................................... 62
3.2.1 DESCRIPTIONS OF DEPARTMENTS SELECTED ............................................................................. 63
3.2.2 DESCRIPTION OF THEMES EXAMINED WITHIN CASE-STUDY DEPARTMENTS .............................. 64
3.2.3 DEPARTMENT A ........................................................................................................................ 66
3.2.4 DEPARTMENT B ......................................................................................................................... 70
3.2.5 DEPARTMENT C ......................................................................................................................... 76
3.2.6 DEPARTMENT D ........................................................................................................ 82
3.2.7 DEPARTMENT E ......................................................................................................... 88
3.2.8 DEPARTMENT F ......................................................................................................... 93
3.3 SUMMARY AND DISCUSSION ..................................................................................................... 102
CHAPTER FOUR: EXAMINING SUCCESS THROUGH A DEPARTMENTAL LENS USING
QUALITATIVE COMPARATIVE ANALYSIS .............................................................................. 111
4.1 LINKING THE QCA APPROACH TO PERFORMANCE MANAGEMENT REFORMS ....................... 113
4.1.1 LEADERSHIP ............................................................................................................................ 114
4.1.2 ANALYTIC CAPACITY .............................................................................................................. 114
4.1.3 GOOD METRICS ....................................................................................................................... 115
4.1.4 INNOVATIVE CULTURE ............................................................................................................ 115
4.1.5 STRATEGIC PLANNING ............................................................................................................. 116
4.1.6 POLITICAL ATTENTION ............................................................................................................ 116
4.1.7 ORGANIZATIONAL SIZE AND RESOURCES ................................................................................ 116
4.1.8 IMPLEMENTATION SUCCESS..................................................................................................... 117
4.2 FUZZY SET QUALITATIVE COMPARATIVE APPROACH............................................................. 118
4.2.1 CALIBRATION OF MEASURES ................................................................................................... 121
4.3 RESULTS .................................................................................................................................... 131
4.3.1 CONFIGURATIONS FOR 2015 IPMU SUCCESS MEASURE ........................................................... 133
4.3.2 CONFIGURATIONS FOR 2015 SURVEY SUCCESS MEASURE........................................................ 137
4.3.3 CONFIGURATIONS FOR 2016 IPMU SUCCESS MEASURE ........................................................... 140
4.3.4 CONFIGURATIONS FOR 2016 SURVEY SUCCESS MEASURE........................................................ 143
4.3.5 CONFIGURATIONS FOR UNSUCCESSFUL PERFORMANCE MANAGEMENT IMPLEMENTATION ...... 146
4.3.6 EVALUATING THE INTERSECTION OF OUTCOMES ACROSS SOLUTIONS ..................................... 147
4.4 DISCUSSION ............................................................................................................................... 149
CHAPTER FIVE: MULTIVARIATE REGRESSION ANALYSIS................................................ 154
5.1 DATA AND VARIABLES .............................................................................................................. 156
5.1.1 DATA ..................................................................................................................................... 156
5.1.2 DATA TESTING AND MODEL CREATION .................................................................................. 163
5.1.3 REGRESSION MODELS EMPLOYED........................................................................................... 174
5.2 RESULTS AND ANALYSIS ........................................................................................................... 175
5.2.1 AWARENESS OF PERFORMANCE MANAGEMENT REFORMS ....................................................... 175
5.2.2 ATTITUDES TOWARD PERFORMANCE MANAGEMENT REFORMS ............................................... 179
5.2.3 INDIVIDUAL USAGE OF PERFORMANCE INFORMATION ............................................................. 182
5.2.4 ORGANIZATIONAL USAGE OF PERFORMANCE INFORMATION ................................................... 186
5.2.5 PROFICIENCY IN USING PERFORMANCE INFORMATION............................................................. 190
5.3 DISCUSSION ............................................................................................................................... 193
CHAPTER SIX: CONCLUSION ..................................................................................................... 198
6.1 INTRODUCTION ......................................................................................................................... 198
6.1.1 HOW IS SUCCESS DEFINED WHEN IT COMES TO PERFORMANCE MANAGEMENT REFORMS? ......... 199
6.1.2 WHAT FACTORS ARE CENTRAL TO SUCCESSFUL PERFORMANCE MANAGEMENT SYSTEMS AND IN
WHAT COMBINATION? ......................................................................................................................... 200
6.1.3 HOW DO ORGANIZATIONS OVERCOME OBSTACLES THAT ARISE WHILE IMPLEMENTING
PERFORMANCE MANAGEMENT REFORMS? ............................................................................................ 202
6.2 IMPLICATIONS FOR THEORY .................................................................................................... 202
6.3 IMPLICATIONS FOR PRACTICE .................................................................................................. 206
6.4 LIMITATIONS OF THE STUDY .................................................................................................... 207
6.5 SUGGESTIONS FOR FUTURE RESEARCH.................................................................................... 209
APPENDIX A: LOS ANGELES PERFORMANCE MANAGEMENT SURVEY ......................... 211
APPENDIX B: INTERVIEW AND MEETING OBSERVATION PROTOCOLS ........................ 223
B.1 INTERVIEW PROTOCOL FOR DEPARTMENTAL OFFICIALS WORKING ON COMPSTAT .......... 223
B.2 OBSERVATION PROTOCOL FOR COMPSTAT MEETINGS........................................................ 224
APPENDIX C: SELECTED FSQCA VARIABLES CONSTRUCTED WITH SURVEY
QUESTIONS ..................................................................................................................................... 225
C.1 OUTCOME VARIABLES ............................................................................................................ 225
C.2 INDEPENDENT VARIABLES ...................................................................................................... 225
APPENDIX D: SELECTED REGRESSION MODEL VARIABLES CONSTRUCTED WITH
SURVEY QUESTIONS .................................................................................................................... 229
D.1 DEPENDENT VARIABLES ......................................................................................................... 229
D.2 INDEPENDENT VARIABLES ...................................................................................................... 230
D.3 CONTROL VARIABLES ............................................................................................................. 231
REFERENCES .................................................................................................................................. 235
List of Figures and Tables
Figure 1.1 The Implementation Cycle ................................................................................................ 37
Figure 3.1 City Revenue and Staffing ................................................................................................. 58
Table 3.2 Qualitative Ratings of Case-Study Departments .............................................................. 101
Table 4.1 Descriptive Statistics for 2015 Measures .......................................................................... 128
Table 4.2 Descriptive Statistics for 2016 Measures .......................................................................... 129
Table 4.3 Correlations of 2015 Measures ......................................................................................... 131
Table 4.4 Correlations of 2016 Measures ......................................................................................... 131
Table 4.5 Configurations for Achieving Implementation Success (IPMU Measure 2015) .............. 136
Table 4.6 Configurations for Achieving Implementation Success (Survey Measure 2015)............. 139
Table 4.7 Configurations for Achieving Implementation Success (IPMU Measure 2016) .............. 142
Table 4.8 Configurations for Achieving Implementation Success (Survey Measure 2016)............. 145
Table 5.1 Skewness and Kurtosis of Pooled Data Variables ............................................................ 164
Table 5.2 Skewness and Kurtosis of Panel Data Variables .............................................................. 164
Table 5.3 Multicollinearity Statistics for Independent and Control Variables (Pooled Data) ........ 166
Table 5.4 Multicollinearity Statistics for Independent and Control Variables (Panel Data) .......... 166
Table 5.5 Descriptive Statistics for 2015 Pooled Data ...................................................................... 169
Table 5.6 Descriptive Statistics for 2016 Pooled Data ...................................................................... 169
Table 5.7 Descriptive Statistics for 2015 Panel Data ........................................................................ 170
Table 5.8 Descriptive Statistics for 2016 Panel Data ........................................................................ 170
Table 5.9 Difference of Mean Scores Between 2015 and 2016 (Pooled Data) .................................. 171
Table 5.10 Difference of Mean Scores Between 2015 and 2016 (Panel Data) .................................. 171
Table 5.11 Correlations for Independent and Control Variables (Pooled Data) ........................... 172
Table 5.12 Correlations for Independent and Control Variables (Panel Data)............................... 173
Table 5.13 OLS Regression Models for Awareness of Performance Management Reforms (Pooled
Data) .................................................................................................................................................. 177
Table 5.14: Random-Effects Regression Models for Awareness of Performance Management
Reforms (Panel Data) ........................................................................................................................ 178
Table 5.15 OLS Regression Models for Attitudes Towards Performance Management Reforms
(Pooled Data) ..................................................................................................................................... 180
Table 5.16 Random-Effects Regression Models for Attitudes Towards Performance Management
Reforms (Panel Data) ........................................................................................................................ 181
Table 5.17 OLS Regression Models for Individual Usage of Performance Information (Pooled Data)
........................................................................................................................................................... 184
Table 5.18 Random-Effects Regression Models for Individual Usage of Performance Information
(Panel Data) ....................................................................................................................................... 185
Table 5.19 OLS Regression Models for Organizational Usage of Performance Information (Pooled
Data) .................................................................................................................................................. 188
Table 5.20 Random-Effects Regression Models for Organizational Usage of Performance
Information (Panel Data) .................................................................................................................. 189
Table 5.21 OLS Regression Models for Proficiency in Using Performance Information (Pooled
Data) .................................................................................................................................................. 191
Table 5.22 Random-Effects Regression Models for Proficiency in Using Performance Information
(Panel Data) ....................................................................................................................................... 192
Introduction
Performance management systems have been widely adopted across all levels of
government within the United States and internationally over the last three decades (Gao, 2015;
Gerrish, 2016; Moynihan, 2006; 2008; Nielsen, 2013; Radin, 2006). Governments adopt
performance management reforms as data-driven attempts to improve organizational processes,
accountability, outcomes, and ultimately the services delivered to the citizens they serve (Behn,
2003; 2014; Moynihan, 2013). Despite the tantalizing results promised by successful
performance management reforms, only a handful of governments have achieved such results
(Behn, 2003; Bratton & Malinowski, 2008; Smith & Bratton, 2001). More commonly,
performance management reforms have ended up only partially adopted, treated as compliance
exercises, or abandoned as outright failures amid numerous impediments (Diefenbach, 2009; Gerrish,
2016; Moynihan, 2006; Radin, 2006). This dissertation explores what factors contribute to
successful reforms, which factors are central to success, how different combinations of factors
contribute to this success, and how well-functioning performance management systems are able
to overcome impediments. It approaches these research questions through a holistic three-year
study of performance management reforms within the City of Los Angeles.
The impetus for this research arose when Eric Garcetti was elected Mayor of Los
Angeles in mid-2013 and began enacting his Back to Basics agenda. Back to Basics
revolved around implementing a citywide performance management system to improve the
delivery of City services. A grant from the Haynes Foundation allowed for the opportunity to
research the evolution of performance management reforms from within a public organization
over a three-year period.
The extensive period of observation revealed gaps in the overall understanding of how
performance management systems develop:
• First, how is success defined when it comes to performance management reforms? Is it
simply the adoption of performance measures or a deeper shift in how a public
organization functions?
• Second, what factors are central to successful performance management systems and in
what combination?
• Third, how do organizations overcome obstacles that arise while implementing
performance management reforms?
This dissertation attempts to answer these questions, fill some of the gaps in the literature, and
provide a practical approach for how governments can undertake performance reforms.
Much of the published research on performance management is either definitional
(Ammons, 2008; Moynihan, 2008), prescriptive (Hatry, 2006; Hood, 2012; Moynihan, 2006),
focused on challenges to performance management reforms broadly (Diefenbach, 2009; Radin,
2006), or on specific challenges to reforms (Lipsky, 2010; May & Winter, 2007; Sanger, 2008).
Much of this literature has taken a critical tone and compared the promise of success through
performance management reform with the often-disappointing results found in practice
(Moynihan, 2013). While skepticism is often warranted given the fantastical expectations that
come with undertaking performance management reforms, such an approach leaves little space
for measured optimism when studying performance management. This dissertation also does not
embrace the unbridled optimism often associated with performance reforms but does try to move
beyond much of the cynicism found in the existing literature in order to examine potential
pathways to success in performance management with a measured approach within an applied
context.
A few branches of literature have endeavored to examine specific factors that can help
promote success during reforms and partially define what success means for performance
management systems (Behn, 2014; Hatry, 2006; Hood, 2012). Research has theorized and
empirically studied a handful of success factors such as leadership (Behn, 2014; May & Winter,
2007; Moynihan & Pandey, 2010), managerial attention (Moynihan, 2006; 2008), and political
attention (Bourdeaux & Chikoto, 2008; Moynihan & Ingraham, 2004) in reasonable depth.
Other potential success factors such as data systems (Lu, 2008), training (Kroll & Moynihan,
2015), and organizational culture (Hood, 2012) have been explored with less depth. What these
existing studies have in common is their focus on a single factor or a handful of factors that
might contribute to successful performance reforms.
Early empirical research on performance management systems focused on qualitative
observation mostly through case studies (Moynihan, 2013). Some of the foundational
understanding of performance management comes from the work of Wildavsky (1968; 1973) and
Schick (1966) in this case study style. As the study of performance management grew, research
approaches moved away from qualitative observation and towards quantitative evaluation
(Moynihan, 2013). While these quantitative studies utilized a variety of data, snapshot surveys of
public workers were one of the most common data gathering approaches (Melkers &
Willoughby, 1998; 2001; 2005). More recently, a handful of mixed-method studies have probed
facets of performance management systems, but this method of study remains rare (Heinrich,
2009; Soss, Fording, & Schram, 2011).
By considering the rhetorical tone, research topics, and empirical approaches of previous
research on performance management, this dissertation seeks to advance the literature on
performance reforms. What distinguishes this study from previous work on reforms is its
dynamic research approach to performance management implementation and the way it
investigates combinations of factors that promote success. This multi-faceted research style
combines a measured approach to understanding recipes for performance management success
with multi-modal data collection and a synthesis of mixed methods. The ability to investigate
the evolution of performance management systems within the City of Los Angeles over a three-
year period allowed for a detailed investigation of success factors. The different methodologies
employed aided in the study of reforms at the operational, managerial, and institutional levels of
the City. While a comprehensive investigation of performance reforms in Los Angeles cannot
provide definitive answers about what leads to successful performance management systems, it
begins to answer this question in a more integrated manner and points to further avenues of
research. This dissertation pushes the study of performance management towards a holistic and
practical approach.
Chapter One examines the existing literature on performance management reforms and
connects this previous research to the efforts within the City of Los Angeles. It begins by taking
a broad view of the foundational literature available on performance management, as rooted in
the New Public Management. It then further considers how the promise of performance reforms
has played out with a handful of studied successes balanced against a much longer record of
failures and mixed-results. The chapter then examines three interconnected strands of
performance management literature that set up the research in the rest of the dissertation. It first
considers the literature that attempts to define what successful implementation of performance
management reforms should be. It then turns to the two complementary branches of literature
that examine both success factors and impediments to performance management reforms as
previously studied. The comprehensive set of factors used for empirical investigation in
subsequent chapters was developed from these three areas of literature.
Chapter Two lays out the mixed methodology employed in both the data collection and
empirical analysis conducted in this dissertation. Data for this dissertation was collected over a
three-year period from a number of different sources at the City of Los Angeles. Much of the
research draws upon data from two surveys of over 1,500 public managers at the City who
worked with performance information. Additional quantitative data was collected from budget
documents, performance metrics, strategic plans, and other related documents between mid-2014
and mid-2017. A second set of qualitative data was gathered through interviews with members of
City leadership and with staff in six case-study departments. Observation of performance
management meetings was also conducted as the final form of data collection.
Three different methodologies are employed in a complementary fashion to assess paths
to success and potential pitfalls in the Los Angeles Back to Basics initiative. Semi-structured
case-studies organized around different performance management themes draw upon the
interviews and meeting observations. Multivariate regression analysis via several time-varying
models incorporates the survey data and other quantitative data. Finally, qualitative comparative
analysis (QCA) using Boolean logic is employed as a methodological bridge between the other
two methodological approaches. Chapter Two provides the rationale for utilizing each of these
methodologies and lays out their theoretical underpinnings. Many of the technical
details of each methodology are expanded upon in the subsequent analytic chapter in which they
are employed. A review of the literature suggests this is the first study to combine these three methodologies to
examine performance management and the first to employ QCA in any capacity in this area of
study.
Chapter Three is broken into two complementary sections that explore the Back to Basics
reforms in Los Angeles qualitatively. The first portion of the chapter is devoted to providing an
overview of the performance management process in Los Angeles and setting the research
context for the remainder of the dissertation. This section of Chapter Three also provides a
narrative overview of the progression of the Back to Basics initiative throughout the research
period.
The second section of Chapter Three examines potential paths to successful performance
management reforms in Los Angeles at the departmental level through six case-studies of City
departments. These departments were selected in consultation with City leadership as both key
departments to the Back to Basics agenda and as departments with potential for success in
implementing performance reforms. Departments were analyzed across six key themes identified
in the literature and during the course of research as potential aspects of reforms that can drive
successful outcomes. These thematic areas were: (1) Strategic Planning, (2) Metrics, (3) Data
Systems and Information Technology, (4) Integrating Performance Metrics into Decision
Making and Leadership, (5) Employee Buy-In and Training, and (6) Resources. As expected,
departments that performed well in all or most of these areas were more successful in
implementing performance management systems during the course of the Back to Basics
reforms. Certain factors appeared to be essential to implementing reforms, with a handful of
departments being able to overcome the absence of other factors to achieve relative success.
Strong leadership, large organizational size, good metrics and integrating performance measures
into decision making were identified as factors that could lead to success during the case-study
process, often in combination. In contrast, departments were able to overcome a lack of
dedicated resources for reforms and mediocre data systems and still be successful, at least in the
medium-term.
Chapter Four employs fuzzy set qualitative comparative analysis (fsQCA) to consider
what combination of factors might lead to the successful implementation of performance
management systems. Fuzzy set QCA uses Boolean algebra to identify necessary and/or
sufficient causal factors linked to specific outcomes, in this case, successful performance
management implementation. Because performance management systems are configurational
and complex with many possible factors playing a role in a successful outcome, fsQCA is well
suited to bringing greater understanding to causal recipes for success during reforms.
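As an illustration of the mechanics behind this approach, the sketch below implements Ragin's standard direct calibration method and the fuzzy-set sufficiency-consistency measure in Python. The anchor values, department scores, and outcome memberships are hypothetical examples, not figures from this study.

```python
import math

def calibrate(value, full_non, crossover, full_mem):
    """Ragin's direct method: map a raw score onto a fuzzy membership
    score in [0, 1] using three qualitative anchors. Scores at the
    anchors map to log-odds of -3, 0, and +3 respectively."""
    if value >= crossover:
        log_odds = 3.0 * (value - crossover) / (full_mem - crossover)
    else:
        log_odds = -3.0 * (crossover - value) / (crossover - full_non)
    return 1.0 / (1.0 + math.exp(-log_odds))

def sufficiency_consistency(condition, outcome):
    """Degree to which the condition set is a subset of the outcome set:
    sum(min(x_i, y_i)) / sum(x_i)."""
    overlap = sum(min(x, y) for x, y in zip(condition, outcome))
    return overlap / sum(condition)

# Hypothetical 5-point leadership ratings for four departments,
# calibrated with anchors 1 (fully out), 3 (crossover), 5 (fully in).
leadership = [calibrate(v, 1, 3, 5) for v in (4.5, 2.0, 3.8, 1.5)]
success = [0.9, 0.3, 0.7, 0.2]  # hypothetical outcome memberships
print(sufficiency_consistency(leadership, success))
```

A consistency score near 1.0 would indicate that strong leadership is (nearly always) accompanied by implementation success in these invented cases; fsQCA applies the same logic across Boolean combinations of conditions.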
Using data from 35 City departments as cases to be inputted into the fsQCA framework,
this chapter uncovers both core conditions and causal configurations that lead to successful
performance management reforms. Leadership, large organizational size, and good metrics are
classified as core conditions for successful performance management systems. The presence of
these three conditions in various configurations forms one path to successful performance
management reforms. The fsQCA also identifies other ways departments can overcome obstacles
to reach a successful performance management outcome.
The final analytic chapter, Chapter Five, is framed by two multivariate regression models
that examine which factors lead individual public managers, and by extension the institution they
work within, to performance management success. The survey data of Los Angeles public
managers from 2015 and 2016 is employed in the models. The first approach is an ordinary least
squares (OLS) model with a time effect included using pooled data from all City managers
surveyed. The second approach uses a random-effects model over time and panel data from City
managers who completed both surveys. Each model utilizes the same variables in order to
better understand how various factors impact different populations' attitudes, actions, and skills
over the course of performance reforms.
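As a minimal sketch of the first approach, a pooled OLS with a survey-year dummy can be estimated by least squares. The data here are simulated and the variable names are illustrative, not the survey's actual measures:

```python
import numpy as np

# Simulated pooled survey data: a predictor (e.g., a motivation score),
# a 2016 survey-wave dummy capturing the time effect, and an outcome
# (e.g., an index of performance management actions).
rng = np.random.default_rng(42)
n = 400
motivation = rng.uniform(0.0, 1.0, n)
wave_2016 = rng.integers(0, 2, n).astype(float)
outcome = 1.0 + 2.0 * motivation + 0.5 * wave_2016 + rng.normal(0.0, 0.1, n)

# Pooled OLS with a time effect: design matrix of intercept, predictor, dummy.
X = np.column_stack([np.ones(n), motivation, wave_2016])
beta, _, _, _ = np.linalg.lstsq(X, outcome, rcond=None)
print(beta)  # estimates should be near the true values [1.0, 2.0, 0.5]
```

The random-effects panel specification differs by additionally modeling a manager-specific error component across the two waves; in practice both models would be fit with a statistics package rather than raw least squares.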
The two models produced reasonably similar results, with a handful of slight variations.
Public service motivation is identified as the most prominent factor that
positively impacts how public managers in Los Angeles undertake performance management
actions. Training in performance information use shows a weaker relationship over a smaller
range of performance measurement related outcomes. The models also pinpoint organizational
factors that can both positively and negatively impact reforms.
The concluding chapter, Chapter Six, discusses how the results from each of the three
analytical chapters overlap. This discussion brings a holistic perspective to the findings
in this dissertation and lays out credible but measured ways governments can approach
performance management reforms that result in successful outcomes. This advice for
practitioners is complemented by suggestions for how researchers can advance research
on performance management success and develop new theoretical understanding.
Taken together, these chapters create a rich and detailed picture of how public
organizations can undertake performance management reforms successfully, while also raising
important new questions that have the potential to move research and practice forward.
Chapter One: Examining the Literature in Pursuit of
Performance
1.1 Introduction
The performance management movement that emerged in the 1980s and early 1990s is
rooted in the philosophy of New Public Management and the notion of “reinventing
government” to make it more responsive and effective for its constituents (Osborne, 2006;
Osborne & Gaebler, 1992). Public organizations at all levels of government adopt performance
management systems as data-driven attempts to improve organizational processes, outcomes, and
accountability (Behn, 2003; 2014; Moynihan, 2013). This approach establishes explicit
performance standards against which government outcomes are to be measured with the goal of
improving government service delivery (Ingraham, 1993; Moynihan, 2006). The benefits of a
well-functioning performance management system are many: improved organizational
efficiency, better customer satisfaction, clearer organizational mission, and greater capacity to
achieve strategic objectives to name just a few. Success, however, is far from assured. Much of
the empirical literature on performance management reforms has emphasized the difficulty of
implementing and sustaining data-driven reforms (Frederickson, 2003; Moynihan, 2006; 2008;
Radin, 2006). The prominent success of a handful of PerformanceStat style reforms in places
such as Baltimore and New York City (Behn, 2003; Bratton & Malinowski, 2008; Smith &
Bratton, 2001) has overshadowed the fact that the large majority of performance management
reform attempts have led to incomplete implementation or mere compliance responses (Diefenbach, 2009;
Gerrish, 2016; Moynihan, 2006; Radin, 2006). For example, in a comprehensive “report card”
grading of state-level performance systems, Moynihan and Ingraham (2003) give states the
equivalent of a weighted C+. Moynihan's (2006) follow-up examination of state-level reforms
noted the disappointment that has resulted from partial adoption. Ammons (2008) critiques
performance systems at all levels of government for failing to systematize the use of analysis in
operational and managerial decision making. Moreover, at the federal level, the evolving
GPRA/PART/GPRMA regime has received mixed reviews (Moynihan, 2013; Mullen, 2006;
Radin, 2006; 2008). The large body of research examining efforts to implement performance
management regimes documents a broad range of challenges public organizations must
overcome to derive public value from these management reforms (Ammons, 2008; Moynihan,
2008; Radin, 2006; Sanger, 2008). While most public managers, when asked in surveys or
interviews, claim to use performance data to inform decisions, research on actual management
practices casts doubt on these claims (Melkers & Willoughby, 1998; 2001; 2005; Moynihan,
2013). For example, Sanger (2013) reviewed 190 US cities that collected a broad range of
metrics. She found that only 27 had mature systems with metrics capable of driving
organizational outcomes, and among these 27, only a handful were effectively
employing metrics in management decision-making (Sanger, 2013).
This spotty record is not surprising considering the inherent mismatch between incentive
management and the politically contested nature of services provided by public agencies
(DiMaggio & Powell, 1983; Moynihan, 2006). Indeed, the very democratic character of public
organizations has been argued to be anathema to innovation, as hierarchy, functional organization,
and political constraints can induce organizational paralysis (Scott, 1998; Wilson, 1989).
Administrators cannot act upon performance metrics under conditions where discretion and risk-
taking ability are constrained by regulatory rigidity, and under such circumstances, public
managers may perceive performance management as a time-wasting compliance exercise (Dull,
2009; Moynihan, 2006; 2013; Sanger, 2008). In many cases, the perception of performance
management reforms as being successful is far greater than the actual results, which are often
underwhelming (Frazier & Swiss, 2008).
The large and growing body of research on performance management has divided along
three lines. First, a number of scholars have conceptualized performance as a more or less
objective concept that can be measured and incorporated into public management systems and
public organizations (Ammons, 1995; Osborne & Gaebler, 1992; Osborne & Plastrik, 1997;
Wholey & Hatry, 1992). In contrast, a second body of literature constructs performance related
reforms as contextual, contested, and subject to a number of impediments (Behn, 2003;
Frederickson, 2006; Moynihan, 2006; 2008). A third characterization has argued that the
mechanistic focus of performance management is fundamentally at odds with the broader
representative goals of public organizations and the citizens they are meant to serve (Radin,
1998; 2006). Much of the work done to understand the design, implementation, operation, and
evaluation of performance management regimes is grounded in one or more of these research
canons.
While the first wave of research on performance systems in the 1990s was reasonably
optimistic about the potential of reforms, more recent waves of literature have shown skepticism
about the possibilities for performance management. Though in many respects this skepticism
may be warranted, this developing cynicism about performance management has created an
opening for new research that takes a balanced approach to studying what might lead to
measured success for reforms. Building upon previous research that is definitional (Ammons,
2008; Moynihan, 2008), prescriptive (Hatry, 2006; Hood, 2012; Moynihan, 2006), and
focused on the challenges of implementing performance management systems (Diefenbach, 2009;
Radin, 2006), this dissertation attempts to fill that gap.
The rest of this chapter builds the foundation for studying performance management
success factors in the City of Los Angeles by examining several strands of literature. The
difficulty in defining success in the context of performance reforms is first explored. The chapter
then considers impediments and organizational constraints to performance
management reforms. Finally, it reviews previous research on success factors and on how
public organizations can overcome challenges during performance management implementation.
This literature sets up the success factors that are explored empirically in the
subsequent chapters of this dissertation.
1.2 Organizational Constraints
As noted earlier, performance management systems and related data-driven reforms in
the public sector are alluring to both political leaders and public managers because of the
possibility of reinventing government to make it more responsive and effective for their
constituents. For instance, many public sector leaders perceived performance management
reforms as a way of bringing results-based accountability to public agencies, which in turn
potentially reduced the need to undertake difficult actions such as reducing services or levying
higher taxes (Durant, 1999). With this tantalizing version of success viewed as a possibility,
performance management reforms became popular with organizational leaders at all levels of
government (Behn, 2014). Yet, on the flip side, failure also presented potentially drastic
consequences, an outcome often less considered by public leaders (Radin, 2006). For example,
Diefenbach (2009) considers a range of negative consequences associated with performance
reforms, including: devaluation of public goods, centralization of power in organizations,
primacy of managers over other interest groups, and lowering of public sector staff morale, all of
which he argues lead to an endless cycle of reforms, each attempting to ameliorate the
problems created by the previous round.
1.2.1 In Pursuit of Success
One of the challenges that arises when defining success in public organizations as it
relates to performance management systems is that the efficiency and performance standards of
the reforms are incorporated from private sector management practices (Ingraham, 1993;
Moynihan, 2006). The doctrine of New Public Management, which undergirds performance
management, emerged in the 1980s as a synthesis of two literatures that study private sector
organizations: economic theories of the firm (Moe, 1984; Niskanen, 1971; Tullock, 1965;
Williamson, 1981) and managerialism (Aucoin, 1990; Peters & Waterman, 1982). Although
performance management reforms originated in the private sector, they have endured in
public organizations (Kettl & Kelman, 2007). Performance management incorporates many
ideas from the private sector, including: total quality management, business process engineering,
accelerated technological change, public choice theory, and the human potential movement
(Kamensky, 1996). Despite these shared intellectual roots, performance management
success in the private sector appears less transferable to public organizations.
An important consideration for defining performance management success in public
organizations is the "public" character of those organizations (Denhardt, 1981).
Public sector organizations have unique obligations and responsibilities to the citizens they serve
that must be captured separately from how private organizations operate in order to conduct a
proper analysis. Though the idea of “public” in public sector organizations is often contextual, it
is helpful to draw upon the five perspectives of public put forth by Frederickson (1991) in
considering all the facets of public organizations' interactions with the public: (1) the pluralist
perspective, (2) the public as a consumer, (3) the public as represented, (4) the public as a client,
and (5) the public as citizen. The fact that many of Frederickson's (1991) perspectives on
"public" appear to have contrasting goals is possibly one significant reason why public
organizations have struggled to define success and improve their performance. While the
public-as-consumer and public-as-client perspectives comport with a goal of improved service
performance, the pluralist public, functioning through the grinding process of democracy, is
inherently inefficient and undermines the consumer perspective. Another way to view the multiple public perspectives put forth by
Frederickson (1991) is that public organizations face a series of conflicting goals that make
straightforward process improvement and simple outputs hard to implement. Spicer (2004)
furthers this idea by noting that purposive organizations with a single mission can operate much
more effectively than those which must uphold a variety of missions.
A clear definition of success for a performance management system in the public sector
has been elusive, due in part to the varied purposes of performance systems, which result from
organizational heterogeneity. Behn (2003) identifies eight different purposes that may be
attributed to a public sector performance management system: (1) evaluate, (2) control, (3)
budget, (4) motivate, (5) promote, (6) celebrate, (7) learn, and (8) improve. Clearly, success will
be more attainable for public organizations that seek to celebrate rather than to evaluate, control,
and improve. Organizational culture also arguably influences the appropriate measure of success.
Hood (2012) argues that defining success for performance management systems is fluid and will
depend on the structure and culture of the bureaucracy implementing the system. He identifies
three types of bureaucracies: hierarchical, individualist, and egalitarian. For hierarchical structures,
he argues, reaching aspirational targets would denote success. In contrast, success in egalitarian
systems would involve the provision of intelligence to bureaucratic actors, while benchmarking
will be critical to success in individualist organizations. As Hood (2012) notes, however, most
bureaucracies are hybrid organizations, which makes goals necessarily contested and
assessment of implementation difficult.
Although much of the scholarship focuses primarily on whether an organization has
adopted some form of performance metrics, simple adoption is a fairly low bar (Behn, 2003;
2014; Hatry, 2006; Moynihan & Ingraham, 2004; Moynihan & Lavertu, 2012). Bourdeaux and
Chikoto (2008) argue that going through the motions of a performance management system may
be as detrimental as not adopting a performance system at all. A more robust adoption of
performance management systems would require that performance metrics systemically alter
how a government organization operates. This might be considered the distinction between going
through the motions and actually changing departmental operations, such as by integrating
performance management practices and metrics into managerial routines (Behn, 2014; Dull,
2009). Another example of robust adoption is the active utilization of performance metrics for
problem-solving and the development of forward-looking objectives (Behn, 2005; Hatry, 2006;
Sanger, 2008).
Heinrich’s (2002) scholarly work on performance management reforms identifies four
interlocking principles that define success in planning a performance management system. First,
in the planning stage performance metrics should be closely aligned with the hoped-for goals
(Baker, 1992). Second, the metrics put in place should mirror as closely as possible actual
performance within an organization (Baker, 1992). Third, the metrics should be inexpensive to
implement and relatively simple to manage (McAfee and McMillan, 1988). And, finally, it
should not be possible for managers to game performance management systems to reflect
performance gains that did not actually take place (Hart, 1988). These factors might define a
proper and successful planning regime and implementation plan for a performance management
system.
1.2.2 Inherent Challenges of Public Management
Public organizational change in general, and performance management reforms
specifically, are made difficult by the structural features of democratic institutions and the public
organizations that represent them. Both Wilson (1989) and Scott (1998) have identified three
distinct levels of public organizations: the institutional level, the managerial level, and the
operational level. Through the lens of these organizational levels, evidence demonstrates that the
legal and political foundations of public organizations can impede data-driven reforms. The
institutional level of a public organization is primarily focused on establishing an organizational
mission internally and promoting legitimacy to the external environment (Wilson, 1989; Scott,
1998). This stability and institutional legitimacy often is pursued via mimetic isomorphism, the
process of imitating structures and routines as practiced by peer organizations (DiMaggio &
Powell, 1983; Frumkin & Galaskiewicz, 2004). Often public leaders seek symbolic reforms that
will be perceived favorably by peers, but that do not focus on the details of implementation. In
addition, data-driven reforms can be viewed negatively if they reveal underperformance in a
manner that can undercut legitimacy (Diefenbach, 2009).
Research linked to performance management reforms at the institutional level of public
organizations confirms these issues. Sanger (2008) uncovered that directives from the highest-
level managers within public organizations (institutional level) on performance management
reforms did not comport with operational realities. Further, research found that
perceptions of performance management reforms as successful were far greater at the top levels
of public organizations than perceptions of middle managers or ground-level workers (Frazier &
Swiss, 2008). Additionally, institutions designed for the public sector are themselves
products of the bounded human rationality they were created to overcome. As such, these
organizations struggle to change endogenously in ways that improve performance, and that can
dramatically overcome bounded rationality (Greif & Laitin, 2004). The work of Wilson (1989)
encapsulates this idea in the public sector bureaucracy, showing that institutions designed by
public leaders have structures inherently at odds with improving core organizational
performance and mission. Bureaucrats are further hindered by their bounded rationality and the
inherent lack of incentives (Wilson, 1989).
At the managerial level, constraints tend to involve the extremely restricted nature of the
environment that public managers face. Public sector organizations are rife with bureaucratic
restrictions and do not promote robust incentive systems for managers (Moynihan, 2006).
Without the discretion to overcome these structural hurdles, managers are likely to respond to data-driven
reforms as mere compliance exercises (Radin, 2006; Moynihan, 2013). The lack of data-driven
metrics integrated into management routines and organizational decision-making allows
weak incentive systems in public organizations to persist (Dull, 2009). Hence, performance
management success requires reforms to managerial practices and organizational culture. Both
Nielsen (2013) and Gerrish (2016) put forth best practices that can define success for
performance management systems based on a meta-analysis of various studies on performance
management. Nielsen (2013) argues that the critical goal is to empower managers, so they will
utilize metrics to improve departmental actions and make decisions. Gerrish (2016) situates what
he sees as success within the actions of particular portions of bureaucratic departments. The use
of performance benchmarking, evidence of active and elected official support, and the utilization
of outcomes from performance metrics are all actions that define successful performance
management systems. A careful reading of both Hatry (2006) and Yang and Hsieh (2007) would
lead to similar conclusions on the integral role that managerial reform plays in performance
management success.
At the operational level, public sector employees’ primary focus is on the technical and
operational nature of their responsibilities, along with the need to successfully complete their
primary tasks. This goal frame would seem to comport with the objectives of performance
management reforms, yet the ambiguous nature of many public organizations’ activities forces
operational employees to focus on their immediate tasks as opposed to the broader incentives
established by the organization (Wilson, 1989). This type of immediate task pressure can cause
frontline workers to utilize rote decision-making templates or default to routine structures instead
of using decision tools appropriate to any specific task at hand (May & Winter, 2009; Lipsky,
2010). Under such circumstances, workers at the operational level may perceive that data-driven
metrics imposed by senior managers are not commensurate with their day-to-day work and
view them as an additional drain on their time (Sanger, 2008).
Furthermore, there is frequently a tension between the values of public organizations and
those of data-driven reforms themselves, leading to a lack of results from these reforms (Radin,
2006). Research in this vein demonstrates that even when performance values can be aligned
with the values of public organizations, this process is poorly undertaken, leading to
disappointing results (Frederickson, 2003; Frederickson, 2006). A related issue that has been
observed in poorly functioning performance management reforms is goal displacement. The
argument behind goal displacement is that performance management reforms improve measured
performance while negatively influencing the actual performance of public organizations (Courty
& Marschke, 2004; 2008; Heinrich & Marschke, 2010). A number of empirical studies confirm
that performance management reforms do not significantly improve performance (Gerrish, 2016;
Heckman, Heinrich, & Smith, 1997; Hvidman & Andersen, 2014; Moynihan, 2006; 2008;
Rosenfeld, Fornango, & Baumer, 2005).
Considering these constraints, many attempts to systematize performance management
are burdened by unrealistic expectations, pinning unlikely hopes on the ability of management
reform to solve many intractable government issues (O'Hare, 1995). Being burdened with
unachievable goals from the outset can foster active resistance, particularly if unrealistic goals
are perceived to be politically motivated (Moynihan, 2008; Lavertu et al., 2013). In turn, such
perceptions hinder the diffusion of performance management to lower-level employees (Frazier
& Swiss, 2008). Goal displacement and the utilization of irrelevant measures are also systemic
impediments to successful performance management systems. The difficulty of measuring
agency outputs can lead to measurement of indicators that have limited relation to organizational
goals, or to goal displacement when agency staff members concentrate on metric attainment at
any cost (Heinrich & Marschke, 2010; Perrin, 1998).
Previous research also argues that time can be the enemy of successful performance
reforms. These arguments about decay in performance management reforms emphasize the
detrimental effects of declining political attention. For example, Bourdeaux and Chikoto (2008)
demonstrate that short time horizons for political leaders, due to attention spans and electoral
concerns, can inhibit performance management reforms. Further work by Bourdeaux (2008) on
performance based budgeting attempts in Georgia showed that gubernatorial attention and
agency leadership attention to reforms is constrained by time. At the level of staff and
managerial interaction, Dull (2009) contends that there is a brief window of time to demonstrate
the value of performance management reforms to public managers and other staff before these
bureaucrats no longer find reform attempts credible. Another line of thinking argues that for
performance management reforms to be successful a long period of time is needed to work out
growing pains and to institutionalize changes. This argument is often raised when performance
management reforms appear to be struggling and public leaders call for patience
(Moynihan, 2006). The manner in which performance regimes evolve over time remains
somewhat unclear because much of the research on performance management is retrospective or
utilizes cross-sectional survey data (Melkers & Willoughby, 1998; 2001; 2005).
1.3 Facilitating Implementation
While a number of structural and organizational issues within public organizations inhibit
performance management reforms, research has also identified several factors important to
advancing performance management. Although there is no single formula for success, systems
must be tailored to the particulars of an organization’s activities, information technology
systems, personnel, and culture (Behn, 2014). Nevertheless, the literature has identified a number
of key success factors:
1. Choice of institutional design
2. Leadership
3. Specific organizational features
4. Informative metrics supported by timely and high-quality data
5. Management innovations that integrate data into decision-making processes, including
training of line managers
1.3.1 Choice of Institutional Design
A key consideration is the institutional design of the performance system. Performance
management reforms in essence use metrics to measure outputs and outcomes while utilizing a
variety of incentives to reach these performance goals. Performance management systems can be
broadly broken into four categories, each with progressively more stringent incentives and
metrics: (1) Managing for Results, (2) PerformanceStat, (3) Performance Contracting, and (4)
Performance Based Budgeting (Behn, 2014). A brief summary of the ramifications of different
levels of performance management systems helps to inform how picking a particular type of
system impacts the possibility of success.
One would expect that stronger incentives and more detailed metrics would be more
successful in improving performance, in that incentives can provide focus for managers and
garner more effort from operational level staff (Heckman et al., 2011; Behn, 2003). Both
Prendergast (1999) and Moynihan (2008) argue that robust metrics and incentives in public
organizations can promote organizational learning and attract more motivated employees. It
appears, however, that a “Goldilocks mean” is appropriate, in that systems with particularly weak
or strong incentives and data systems face significant implementation challenges.
Managing for Results type systems of performance management are usually considered
“soft-incentive” approaches (Behn, 2003). Managing for Results methods tend to lead to
compliance responses, as discussed earlier (Dull, 2009; Moynihan, 2013). Examinations of specific
Managing for Results efforts, such as the Government Performance and Results Act (GPRA),
have shown that the metrics collected are rarely utilized to hold managers responsible (Moynihan,
2013). Moynihan's (2006) research on state-level Managing for Results efforts has pointed to
the lack of managerial flexibility and operational authority as key structural impediments that
have led to a lack of implementation success. As these requisites may not always be in place,
soft-incentive reforms may devolve into compliance exercises in which data are reported up but
do not inform line or mid-level management choices (Moynihan 2013).
There is widespread acknowledgment of political and cognitive barriers to rationality in
the performance based budgeting process (Wildavsky & Caiden, 2004). Performance based
budgeting approaches tend to suffer from implementation failure when high-level financial
incentives are attached to public organizations' strategy and operations (Wildavsky & Caiden, 2004).
Typical public organizational features appear to be incommensurate with the diametrically
opposed reward/punishment style of performance based budgeting reforms (Hou et al., 2011).
The political vagaries of the budgeting process, specifically wielding line-item control of budgets
instead of focusing on programmatic outcomes, are an impediment to performance based
budgeting (Hou et al., 2011). Aligning cost accounting with program-level management requires
a sizeable managerial cost and dedicated leadership in order to make performance based
budgeting systems successful (Melkers & Willoughby, 2001; Barzelay & Thompson, 2006). The
threat of budgetary cuts in the presence of poor performance is possible in theory, although, as a
number of critics point out, the appropriate action to take in the presence of poor performance is
not evident, as budgetary reductions to low-performing agencies can have perverse impacts on
performance (Barzelay & Thompson, 2006; Hou et al., 2011).
PerformanceStat reforms and performance contracting appear to be the performance
systems with the greatest potential for success. PerformanceStat reforms tend to utilize firm
metrics to manage organizational outputs while balancing less rigid incentive structures. In
combination with organizational success factors laid out at the start of this section,
PerformanceStat reforms have been successful at the local level in places like Baltimore and
New York (Smith & Bratton, 2001; Behn, 2005; Bratton & Malinowski, 2008). In places where
PerformanceStat reforms have proven successful, managers running the programs have been held
accountable primarily through public meetings, improved career trajectories based on meeting
benchmarks, and organizational peer pressure (Behn, 2003; 2014). In its original formulation
PerformanceStat comprised four main tenets for organizational decision making: (1) accurate
and timely intelligence, (2) rapid deployment, (3) effective tactics, and (4) relentless follow-up
and assessment (Behn, 2003). In PerformanceStat systems, failure is not defined by poor results,
but by not having a strategy to deal with these poor results (Heskett, 2012). The governance costs
of PerformanceStat systems are not insubstantial and require investments in data gathering and
analysis, though Behn (2013) reports that several successful systems rely on just a handful of
capable analysts. This is not to say that PerformanceStat systems are a panacea, as there have
been many unsuccessful implementation attempts, but the durability of reforms observed in
successful cases does point to possible long-lasting benefits.
The structure of performance contracting usually establishes benchmark metrics and sets
a percentage of contract payments based on the outcomes of these metrics. Performance
contracting in a number of cases has led to increases in performance and cost savings for public
organizations that contract with private organizations to provide certain services (Heinrich &
Choi, 2007; Dee & Jacob, 2010). The intense incentives and firm metric approach tend to
comport better with the organizational structure of private organizations (Heinrich & Choi,
2007). However, performance contracting is open to potential manipulation of data through the
timing of reporting or outright cheating on contracts (Heckman et al., 2011).
In the case of Los Angeles, the police department was the one unit within the City that
had extensive experience designing a performance management system. The Los Angeles Police
Department (LAPD) was one of the first local government agencies in the United States to adopt
a PerformanceStat style system (Bratton & Malinowski, 2008). Former New York City police
commissioner William Bratton brought the system with him when he became chief of police in Los
Angeles in the early 2000s (Bratton & Malinowski, 2008; Smith & Bratton, 2001). As with its
implementation in New York City, the PerformanceStat system implemented by the LAPD was
generally considered successful in helping reduce crime through data-driven approaches (Bratton
& Malinowski, 2008; Smith & Bratton, 2001).
When the Garcetti Administration took office in mid-2013, it looked to replicate the
success of the LAPD PerformanceStat system that had been in use for over a decade. The
Mayor’s Office mandated that all thirty-five City departments come up with a PerformanceStat
style system. However, this mandate seemed to allow each City department to create a “soft”
PerformanceStat approach with less structured incentives and systems. In fact, it is more likely
that the performance management reforms attempted in Los Angeles ended up closer to a
Managing for Results style system. These efforts and their implications for success are explored
further in Chapter Three.
1.3.2 Leadership
The critical role of strong and consistent organizational leadership cannot be
overemphasized. Leaders set the tone for changing the managerial habits within public
organizations and promoting the cultural shift to create data-oriented managerial styles (Behn,
2003; 2014). Leaders provide mission clarification that guides the direction of performance
management systems and push for the resources to implement that vision (Sanger, 2013).
Leaders drive the sense of accountability linked to metrics and establish a tone that promotes
innovative thinking (Behn, 2014). Without consistent and robust support from organizational
leaders, operational staff and middle managers are unlikely to be diligent in pursuit of the
complex goals set forth by performance management reforms (Moynihan & Ingraham, 2004;
Bourdeaux & Chikoto, 2008; Dull, 2009; Moynihan & Lavertu, 2012).
Leadership at the highest institutional level has been shown to have the strongest trickle-
down effect on other levels of public organizations when it comes to imbuing performance
management goals (May & Winter, 2007). Further, high-level leaders who build linkages across
other levels of public organizations contribute to the successful implementation of data-driven
reforms (Lynn et al., 2000). Moynihan and Pandey (2010) note that an essential function of
leaders is to lend coherence to the organizational mission in relation to performance management
reforms.
Leadership during organizational change demonstrates that factors beyond simple
performance incentives might affect how public sector employees work and motivate
themselves (Perry, 2000). However, the mechanism by which leadership positively impacts
performance management reforms is still not entirely clear. For instance, Fernandez and
Moldogaziev (2013) argue that leadership works by empowering employees through increased
knowledge and skills. On the other hand, Perry and colleagues (2006) posit that goal setting is
the central function of leadership that leads to improved organizational outcomes. Yet, Behn
(2014) argues that leadership is a tacit skill that is hard to formalize during the course of reforms.
In any case, leadership in various forms appears to be a central success factor in the
implementation of performance management systems.
The leadership approach to performance management reforms in Los Angeles started
with the Mayor calling for his Back to Basics agenda and filtered down to other senior leadership
in the City. The Mayor hired a number of new senior leaders to drive Back to Basics and asked
each of the general managers of City departments to re-interview for their jobs with a leadership
plan for performance reforms. These senior leaders were expected to drive the early
implementation of reforms and the adoption of performance measures. As performance reforms
moved along, the City lost some of these senior leaders and faced challenges with mid-level
public managers taking up leadership roles. The City's approach to leadership during
performance management reforms is explored further in Chapter Three.
1.3.3 Organizational Features
A number of organizational and related staff factors have been cited as important to the
success of performance regimes. A lack of organizational resources and staff expertise has been
noted in the literature as key deficiencies when a performance management reform does not get
off the ground (Melkers & Willoughby, 1998; 2001; 2005). Boyne (2003) notes that one of the
most straightforward paths to improving performance is through additional resources in the form
of human capital and financial resources. In planning for performance management systems,
setting aside adequate resources is thought to lead to better adoption of reforms (Boyne &
Gould-Williams, 2003). Proper resource allocation was cited in a meta-study of reforms as one
important factor for success (Gerrish, 2016). On top of resources, organizational size and the
spare capacity that comes with greater size have been considered positive factors for reforms
(Nielsen, 2013).
In the same vein, the analytic capacity of organizational staff members is a central
element in well-run data-driven systems (Lu, 2008). Public organizations must be able to
integrate this analytic capacity into performance routines in order to gain this benefit (Ammons
& Rivenbark, 2008). Sufficient analytic capacity among the key managers tasked with driving
performance reforms is also a predictor of the level of innovation in performance management
systems (Hou et al., 2011).
The endogenous motivation of operational level staff at public organizations is also
important. As Perry (2000) and colleagues (2010) demonstrate, traditional motivation theory
does not adequately explain public and nonprofit employee behavior. Public service motivation
has been shown to be positively related to the individual performance of public workers (Perry et
al., 2010). A general theorization of how public service motivation might work under
performance management reforms argues that specific goals linked to data-driven metrics can
augment public servant performance (Durant et al., 2006). Performance management systems are
more successful when a distinct mission within the organization matches with a high-level of
public service motivation from staff (Moynihan & Pandey, 2007; 2010).
The mission of the governmental organization and its relation to constituents influence
whether data-driven reforms are likely to succeed. Radin (2006) evokes Wilson’s (1989) work on
organizational function in noting the difficulty that “coping agencies” have in implementing
performance approaches. Coping agencies are those in which neither service outputs nor
organizational outcomes are readily observable (Wilson, 1989). In such agencies, not only can it
be difficult to measure outputs and outcomes, but the programmatic logic of production is
frequently unknown or politically contended such that the collection and application of evidence
may not reveal management innovations that will improve performance. In contrast, "production
agencies" where both outputs and outcomes are readily observable may provide an environment
conducive to successful performance reforms (Wilson, 1989). Indeed, local government agencies
arguably have a superior track record to their state and federal counterparts because they supply
a greater proportion of production-type goods and services, such as fire and police (Bratton &
Malinowski, 2008; Smith & Bratton, 2001; Behn, 2005). In a study of local agencies in England,
Walker, Damanpour, and Devece (2010) focused on the importance of micro-organizational
features to organizational performance, using an outcome measure that positively associates
innovation, performance management, and organizational performance. They identify a number
of organizational features as associated with performance, including goals, targets, indicators,
systems of control, and delegation of authority. Similarly, it has been argued that organizational
reform should precede performance management system implementation to reduce hierarchical
features of the organization and empower staff to take risks (Sanderson, 2001; Sanger, 2008).
At the level of individual public managers, a number of political factors have been
identified as promoting the utilization and operationalization of performance information.
Although there is no overarching theory as to the factors that promote performance information
usage, political attention is noted as one possible cause (Moynihan & Pandey, 2010). The role of
political stakeholders can drive the use of performance information in positive and negative
directions (Bourdeaux & Chikoto, 2008). Further political research demonstrates that more
liberal political settings promote greater utilization of performance information (Askim, Johnsen,
& Christophersen, 2008; Moynihan & Ingraham, 2004).
Within Los Angeles, there is great variability in the organizational features present,
largely because of the variability in the thirty-five departments tasked with implementing
performance management systems. First, each department had a number of built-in
organizational features prior to the Back to Basics initiative. Second, because the Mayor's Office
allowed each department a significant amount of leeway in developing its own performance
management system, further differentiation resulted. Finally, some departments were singled
out as "core departments" by the administration, adding another layer of differentiation. The
organizational features studied during the Back to Basics research conducted for this dissertation
are further discussed in Chapter Three.
1.3.4 Metrics and Data
Performance management systems can only be as strong as the data on which they are
constructed. As discussed by Behn (2014), the best data has several features:
1. Timely and Updated Frequently: Managers require information on recent
organizational performance to identify ongoing issues and address them. Regular
performance updates that occur either weekly, monthly or quarterly are needed to support
rapid analysis of trends and evaluation of management strategies.
2. Comparability: Metrics that can be compared over time, across sub-jurisdictions, or to
readily available benchmarks provide much clearer indicators of organizational
performance and promote analytic thinking.
3. Trustworthy: Data that is widely accepted as reliable helps organizations build a
consensus on organizational performance and issues, support actions based on that data,
and avoid the conflict that can arise when metrics are manipulated.
4. Low-Cost: Data that is too time-consuming and costly to collect, collate, and analyze
tends to become disused as organizations focus on their core processes. Data that is
automatically collected in the normal process of doing business (e.g., budget numbers,
case processing times) has strong advantages.
5. Integrated: Novel insights arise when data from previously stand-alone data sets are
integrated allowing for new comparisons and analyses.
Building on this data, successful performance management systems develop metrics that are
measurable, meaningful, and manageable (Hatry, 2006). They are measurable when they are
constructed based on high-quality, reasonably available data. They are meaningful when they are
linked to the outcomes the organization is seeking to achieve or are linked to the processes and
outputs that lead to those desired outcomes. Finally, metrics are manageable when they indicate
performance deficits that can be addressed through managerial action (Hatry, 2006). A
considerable variation in metrics was observed in Los Angeles, and these differences are
examined further in Chapter Three.
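The timeliness criterion described above can be operationalized as a simple automated check. A minimal sketch follows; the metric names, dates, and monthly cadence are hypothetical, intended only to illustrate how a department might flag stale data rather than to represent any system used in Los Angeles:

```python
from datetime import date, timedelta

def stale_metrics(last_updated, today, max_age_days=31):
    """Return, sorted, the metrics whose most recent update exceeds the cadence."""
    cutoff = timedelta(days=max_age_days)
    return sorted(name for name, updated in last_updated.items()
                  if today - updated > cutoff)

# Hypothetical department metrics and their last refresh dates.
last_updated = {
    "pothole_repair_days": date(2016, 8, 30),
    "permit_backlog": date(2016, 5, 2),
    "response_time_minutes": date(2016, 9, 1),
}

print(stale_metrics(last_updated, today=date(2016, 9, 15)))
# ['permit_backlog'] -> the only metric not refreshed within the monthly cadence
```

A check of this kind keeps the "timely and updated frequently" criterion from depending on ad hoc manual review.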
1.3.5 Training
The role of training in the implementation of performance management systems has been
studied through a handful of research paradigms. Preparation through training for performance
reforms is associated with greater perceived effectiveness in using performance information
(Gerrish, 2016). A study of federal government workers who had received training on
performance information indicated that despite reservations about reforms, training increased
their effectiveness in engaging with data-driven processes (Cavalluzzo & Ittner, 2003). A survey
of government financial officers indicated that performance management training was something
they felt they needed in order to be effective in utilizing performance information (Julnes &
Holzer, 2001). Kroll and Moynihan (2015) produced the most detailed research on the impact of
training on performance management reforms. Training is positively associated with reform
implementation, but the exact mechanism for why this occurs could not be identified in the
course of the study (Kroll & Moynihan, 2015). As such, Kroll and Moynihan (2015) propose that
training be designed specifically in tandem with the goals of reforms. Surveys of state and local
governments found that personnel lacked skills in the use of data-driven information technology,
which in turn hampered the development of performance reforms (Melkers & Willoughby,
2001; 2005). Similarly, Lu (2008) found that the analytic capacity of public sector workers was
underdeveloped due to a lack of training in performance measures. Building communication
linkages between managers and their staff through development programs has also been shown to be
valuable in performance management reforms (Lynn et al., 2000).
In support of its performance system, the City of Los Angeles embarked on two
complementary training programs that ramped up over the summer of 2016. The Innovation and
Performance Management Unit (IPMU) within the Mayor's Office was the lead department
within the City for promoting training programs. For the first training program, the City promoted
and encouraged senior managers to attend a week-long "Six Sigma" style program. Six Sigma is
drawn from the "Quality Management" movement in the private sector and focuses on highly
detailed data-driven training. The second program, which involved middle-level managers, was a
one-day seminar on the basics of using performance information. By the end of 2016, both
programs were discontinued. It was unclear whether the programs were discontinued because of
a perceived lack of effectiveness or due to cost or administrative concerns.
1.3.6 Integration of Data into Management
While many public organizations experiment with performance measurement, they often
have difficulties making the transition to performance management. The inability to make this
critical leap can be attributed to a number of implementation failures, similar to those discussed
earlier in this chapter. Organizations may provide inadequate resources to support investments in
information technology and analytic capabilities (Lu, 2008). Also, bureaucratic and political
constraints may impede organizations from reallocating resources in ways that could address
problems that the data reveals (Behn, 2003). Organizations may underinvest in the managerial
time and commitment to review data, identify strategies to address performance gaps, and
follow-up (Behn, 2014). Behn (2014) places particular emphasis on the importance of regular
meetings for driving organizational change. At these meetings, he recommends that managers
consistently review trends in metrics, identify gaps in performance, set out strategic action plans,
and review previous action plans (Behn, 2014). Indeed, there is considerable attention in the
performance literature to the importance of embedding performance systems in a coherent
strategic planning effort that clearly identifies desired organizational outcomes and the activities
necessary to achieve them (Hatry, 2006; Sole & Schiuma, 2010).
Another important set of impediments to transitioning into performance management is
the tacit nature of the necessary changes (Behn, 2014; Moynihan, 2008). Data and metrics are
essential inputs into the transformation, but they do not themselves provide faultless guidance.
Managers must understand what questions to ask of the data and understand the multiple ways
the data may be interpreted (Behn, 2014). Moynihan recommends that managers promote an
interactive dialogue over the meaning of data within organizations (Moynihan, 2006; 2008).
Such skills concerning inquiry and interpretation are not easily taught, but an absence of these
skills can prevent an organization from effectively employing even the best of data (Gerrish,
2016).
The City of Los Angeles struggled to move performance reforms into a place where
performance metrics were actively integrated into decision-making processes. Many departments
began the performance management reform process through simple data collection and counting
exercises. As the Back to Basics initiative progressed, the trajectories of different departments
and their use of performance management to make decisions diverged. Some departments
learned to embrace data as a key decision-making and problem-solving tool, while others
remained at a compliance level. In Chapter Three, the decision-making processes of six case-
study departments are further explored.
1.4 Implementation Processes
Integrating the aforementioned factors into a working and productive performance management
system requires a prolonged and intensive development process that can spread over multiple
years (Hatry, 2006). As depicted in Figure 1.1, the process of implementation in its most logical
form begins with organizational strategic planning that clarifies the
organization's mission and objectives. Metrics are developed based on those objectives, and the
data required for these metrics are identified. An implementation process needs to pilot test
analytic strategies, methods of reporting data, and organizational processes for incorporating
analysis into management improvement (Hatry, 2006). Once the main steps are in place, the
system requires consistent follow up that will become an important input to future strategic
planning exercises (Behn, 2003; 2014).
These steps do not necessarily have to be taken in the standard order described above.
Some argue that for many public organizations efforts to develop a consensus on the
[Figure 1.1, The Implementation Cycle (adapted from Hatry, 2006): Strategic Planning →
Establish Mission & Objectives → Metrics & Measures → Identify Data Sources → Develop
Analytic Strategies → Collect Data, Report, Manage → Follow Up]
organization’s mission and objectives create unnecessary confusion and conflict (Frederickson,
2003; Frederickson, 2006). It can be more productive to begin with metrics and allow the
objectives to emerge over time (Ukeles, 1982). Alternatively, others have begun by identifying
what data is available and moving forward from there (Sanger, 2013). Regardless of order, it is
desirable to cover all of the steps in the development cycle (Hatry, 2006).
To undertake these implementation steps, Hatry (2006) recommends a working group of
8-12 members per department. The members should include people with knowledge of the
administration of the program under study, people with knowledge of related programs,
measurement experts, information technology representatives, and representatives of the
budgeting process. This group would be expected to meet frequently for the first year as the
process progresses through the implementation steps. In addition, it is recommended that the
group reach out to stakeholders such as program workers and customers to seek their input on the
program objectives that should be examined (Sanger, 2013). After an initial system is developed,
it is recommended that organizations undertake extensive pilot testing, which can take over a
year to complete (Hatry, 2006).
In Los Angeles, departments began the implementation process by effectively skipping
the first two steps suggested by Hatry (2006) and moved directly to developing measures and
collecting data. Most departments did swing back towards strategic planning and developing
PerformanceStat plans after a period of time. A number of other suggested implementation
processes were overlooked or only partially engaged. The discussion of the six case-study
departments in Chapter Three further explores the implementation process in Los Angeles.
1.4.1 Avoiding Pitfalls
A final key to success is anticipating and avoiding pitfalls in the course of implementing
performance management systems. Performance management initiatives are fragile and are often
derailed by unanticipated factors (Behn, 2014, Sanger, 2008). Political crises and distractions are
to be expected, and managers must be prepared to continue their support of management system
implementation even when their attention is being drawn elsewhere (Moynihan, 2008). Another
potential impediment is organizational instability. Rapid changes in personnel and organizational
mission challenge performance metrics by sapping organizations of leadership and analytic
capabilities and undermining the organizational learning that the performance metrics are
intended to achieve (Diefenbach, 2009).
In sum, empirical studies have found that performance management is difficult to do
well, but also revealed a wide array of factors that appear related to success. Performance
management appears to be less controversial in organizations that are not coping types, such that
either outputs or outcomes can be observed and measured. An array of capacity related features
contribute to success, including organizational resources, analytic capacity, and information
management systems that support the effective collection of well-defined metrics. Leadership
commitment and communication have been argued to be particularly critical, as is the presence
of strategic planning that identifies a goal-based approach to organizational management. And
finally, performance management appears to be promoted by an innovative organizational
culture that supports risk-taking and strives to improve organizational performance through
training and committed public managers.
While this body of work has detailed many important relationships in the performance
management implementation process, there remains a need to integrate these insights to identify
a formula, or more likely formulas, for success. The investigation conducted in this dissertation
of performance management implementation in Los Angeles works to provide a set of results
that start to answer what these formulas for success might look like. The dissertation extends the
literature by integrating detailed case study findings with survey data to support both
multivariate analysis of influential factors as well as rigorous cross-case comparisons through the
application of fuzzy set QCA. The next chapter builds the methodological framework used to
approach the study of successful implementation of performance management reforms.
Chapter Two: Methodology
2.1 Introduction
The purpose of this chapter is to discuss the research methodology used to conduct a
mixed-methods study of the implementation of performance management reforms within the
City of Los Angeles. Examining performance management over time, using multi-modal data in
combination with mixed methods, allows for a new understanding of what impedes the success of
data-driven reforms, and what combination of factors may overcome these obstacles and
promote success. The chapter begins by restating the key research questions posited by the study
and then ties them to the purpose of the mixed-methods research approach employed. The
chapter then briefly touches on the context of performance reforms in Los Angeles (discussed
further in Chapter Three) and ties this to the framework of data sources utilized within the City.
A discussion of both the data collection methods and an overview of the subsequent analysis
techniques follow. The specific technical details and a full description of each analysis technique
are covered in the analytical chapter in which these techniques are employed. The limitations of
the research approach and study as a whole are discussed in the concluding chapter, Chapter Six.
2.1.1 Research Questions
This study looks to build upon the literature discussed in Chapter One, using the research
approach discussed in this chapter to address three interconnected research questions. These
research questions, which were first discussed in the introduction of this dissertation, are restated
below:
1. How is success defined when it comes to performance management reforms? Is it simply
the adoption of performance measures or a deeper shift in how a public organization
functions?
2. What factors are central to successful performance management systems and in what
combination?
3. How do organizations overcome obstacles that arise while implementing performance
management reforms?
2.2 Overview of Research Approach
This study employed a mixed methodology approach, balancing multivariate analysis and
qualitative comparative analysis (QCA) of survey data and metrics with a qualitative assessment
of the implementation of the Back to Basics reforms. This mix of methods was necessary to
analyze the various layers of reform and the interactions between them in order to search for
recipes for successful performance management systems. The qualitative research undertaken in
the course of this study served three purposes: it provided an overview of the reform process,
assessed reforms thematically, and set the stage for additional empirical analysis. First, the
semi-structured interviews and meeting observations
that were conducted provided an overview of the Back to Basics process, which grounded the
rest of the research agenda. Second, these observations also led to a thematically structured
analysis of success factors and challenges in six case-study departments. Finally, the qualitative
analysis provided valuable case knowledge for the QCA analysis. This substantive knowledge
was vital in understanding the resulting causal configurations produced by the fuzzy set QCA.
Fuzzy set QCA (fsQCA) was employed because it is uniquely suited to the complex and
configurational nature of performance management systems. Grounded in the qualitative
knowledge gained in Chapter Three, this set-theoretic approach employed the thirty-five City
departments as cases to be assessed by the fsQCA framework. In applying fsQCA, this
dissertation can provide a unique causal perspective on the combination of factors that can lead
to successful performance management reforms. These configurations can be linked to the
qualitative observations in the case-studies to provide a richer understanding of how cases are
linked to real-world examples.
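The set-theoretic logic underlying fsQCA can be illustrated with the standard consistency and coverage measures (Ragin, 2006). The sketch below is a minimal implementation of those two formulas; the membership scores are hypothetical values for three illustrative departments, not data from the study:

```python
def consistency(x, y):
    """Consistency of 'X is sufficient for Y': sum(min(x_i, y_i)) / sum(x_i)."""
    overlap = sum(min(a, b) for a, b in zip(x, y))
    return overlap / sum(x) if sum(x) else 0.0

def coverage(x, y):
    """Coverage: the share of outcome Y that condition X accounts for."""
    overlap = sum(min(a, b) for a, b in zip(x, y))
    return overlap / sum(y) if sum(y) else 0.0

# Hypothetical fuzzy membership in a condition set (e.g., "strong leadership
# AND analytic capacity") and in the outcome set ("successful implementation").
x = [0.8, 0.6, 0.2]
y = [0.9, 0.7, 0.4]

print(consistency(x, y))  # 1.0 -> every case's x_i is contained in its y_i
print(coverage(x, y))     # 0.8 -> the condition accounts for 80% of the outcome
```

High consistency indicates that the configuration behaves as a sufficient condition; coverage then gauges how much of the observed success that configuration explains.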
Finally, the multivariate regression models employed in this dissertation provide another
layer of understanding related to what combination of factors can lead to successful performance
reforms. These models are employed to assess both the operational level of reforms by
examining individual public managers and reforms as a whole by studying a large and diverse
cross-section of these same managers. Both ordinary least squares (OLS) and random-effects
models are used to study the evolution of reforms over several periods of time.
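The pooled OLS estimation step can be sketched as follows. The data here are synthetic and the predictor names (training exposure, perceived leadership support) are hypothetical stand-ins for the survey measures; the point is only to show the least-squares machinery, under those stated assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500

# Hypothetical predictors standing in for survey measures.
training = rng.random(n)      # e.g., exposure to performance training
leadership = rng.random(n)    # e.g., perceived leadership support

# Simulated outcome (performance-information use) with known coefficients.
y = 1.0 + 2.0 * training + 0.5 * leadership + rng.normal(0.0, 0.1, n)

# Pooled OLS: choose beta to minimize ||y - X @ beta||^2.
X = np.column_stack([np.ones(n), training, leadership])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

print(beta)  # approximately [1.0, 2.0, 0.5]
```

A random-effects specification would additionally model department-level intercepts rather than pooling all managers; that extension is omitted from this sketch.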
By employing three interconnected analytic methods, this dissertation can bring a more
robust understanding of what leads to successful performance management implementation. The
results of each analytic technique are discussed in the chapters where they are employed.
Subsequently, in the concluding chapter of this dissertation, the results from each chapter are
cross-referenced to examine how the results produce overlapping findings of success factors and
combinations of these factors. By employing this cross-validation, this dissertation produces
rigorous findings from a holistic approach to studying how performance management systems
can be successful.
2.2.1 Research Context
Soon after his election in June of 2013, Mayor Eric Garcetti announced his Back to
Basics agenda which sought to bring COMPSTAT style performance management reforms to
every department within the City of Los Angeles. COMPSTAT was first utilized by the New
York Police Department and subsequently by the Los Angeles Police Department as a data-
driven crime prevention system (Bratton & Malinowski, 2008; Smith & Bratton, 2001). The new
administration in Los Angeles was looking to bring the performance management system that
had been successfully employed by the LAPD and the NYPD to the City as a whole in order to
improve service delivery to constituents (Bratton & Malinowski, 2008; Smith & Bratton, 2001).
The previous administration of Mayor Antonio Villaraigosa had certain project-based initiatives
such as the Green Streetlight replacement plan, which were data-driven. However, with the
exception of the LAPD, the majority of other City departments had generally lacked any
systematic policy on the use of evidence in management prior to Garcetti’s election.
The performance management reforms initiated under Back to Basics asked City
departments to develop a series of performance metrics to be utilized within each department and
to be observed by the Mayor’s Office. This type of performance management reform is usually
considered a “soft” approach to data-driven management, in the vein of PerformanceStat, and in
contrast to higher stake incentive-based approaches (Behn, 2003). PerformanceStat style reforms
typically call for regular meetings and data metrics to challenge managerial efforts and to bring
data-driven planning to problem-solving. As discussed in the previous chapter, this
PerformanceStat approach tended to evolve into a Managing for Results style performance
management system during the course of time Back to Basics was studied. Greater narrative
detail on the Back to Basics agenda is laid out in the next chapter, Chapter Three.
2.3 Research Setting and Data Sources
When the new administration in Los Angeles set out to put a performance management
system in place starting in late 2013, an ideal setting to study the progression of such reforms
was provided. The research included in this dissertation is part of a larger “action research”
project that investigated the Back to Basics reforms within the City. “Action research” refers to
an approach to studying public management that contributes both to the practice of public
administration and strengthens the academic literature in the field. The Haynes Foundation is
committed to this style of "action research," and a major research grant from the foundation
made conducting this study possible.
The cooperation of the City of Los Angeles made this study possible, and the City
acknowledged the opportunity to advance both the practice of and literature about performance-
based reforms. With Mayor Garcetti heavily invested in the Back to Basics agenda, both the
Mayor’s Office and the City’s Innovation and Performance Management Unit (IPMU) had a
vested interest in the holistic research approach employed to investigate the progress of the
City’s reform process. Ultimately, this dissertation is derived from a partnership between the
City of Los Angeles, the Haynes Foundation, and the University of Southern California’s Sol
Price School of Public Policy.
Beyond being invited by the City to study Back to Basics, the performance management
reforms in Los Angeles provided several additional compelling reasons to undertake a holistic
study. First, the invitation to study performance management reforms from the “ground floor”
over the course of their evolution was a unique opportunity. While the goal of this approach was
to study reforms from their inception, the actual research process began about eight months after
the administration took office. Second, Los Angeles is a large city with a diverse set of
departments, which allowed for reform progress to be compared across many different public
management settings. Finally, with the ability to study the entire City, this project allowed for
research viewpoints to touch the institutional, managerial, and operational levels of the
organization simultaneously and holistically (Wilson, 1989; Scott, 1998).
While the overall research approach in this dissertation considers reforms across the
entire City of Los Angeles, six departments were selected for additional in-depth case study
analysis. Four of the case study departments were selected in consultation with the Mayor’s staff
and the IPMU, as departments that were considered essential to the overall success of the
performance management initiative. The fifth and sixth departments were selected due to the
researcher’s familiarity with their functions, and because these departments added additional
diversity to the research.
The City of Los Angeles employs over 32,000 staff, which made it impractical to survey
every City worker with respect to the Back to Basics performance management reforms. Instead,
the two surveys that make up the core of the quantitative research in this study were derived
from a sample of over 1,500 mid- and upper-level managers whose central responsibilities
touched on data-driven metrics and the implementation of Back to Basics. This group of public
managers was selected because they represented a population that was central to the success of
performance management reforms. The mix of City departments and individual public managers
examined during this project allowed for the research to cover the institutional, managerial, and
operational aspects of the Back to Basics reforms (Wilson, 1989; Scott, 1998).
Since the bulk of data collection conducted as part of this dissertation involved human
subjects research, the research protocol was submitted to the University of Southern California
Institutional Review Board (IRB). Survey procedures, interview protocols, and meeting
observation criteria were all reviewed and approved by the IRB. All forms of data collection
were conducted confidentially and monitored by the IRB, in addition to being governed by a data
collection agreement with the City of Los Angeles. Collected data were anonymized and kept on
an air-gapped, password-protected data storage system.
2.4 Data Collection Methods
2.4.1 Survey Procedures
Both panel data and pooled data were collected through two surveys administered to mid-
and upper-level managers at the City of Los Angeles in the spring of 2015 and the fall of 2016.
The survey instrument combined items drawn from previous research on performance
management, adapted to the unique circumstances of Los Angeles, with new items developed
explicitly for the project in collaboration with advisors at the University of Southern
California. Some of the items included in the survey came from previous surveys
conducted by the United States Government Accountability Office, the National Administrative
Studies Project, and the Government Performance Project. Additional survey items on
performance management were adapted from Melkers and Willoughby (1998, 2001, 2005) and
survey items on public service motivations from Perry (1996, 2001) and Moynihan and Pandey
(2010).
The survey covered relevant items across all three levels of Wilson (1989) and Scott’s
(1998) organizational framework. At the individual level, the survey included items measuring
managers’ public service motivations, their analytic skills, and their perception of City leaders’
commitment to performance management reforms. Organizational level questions focused on
organizational culture, utilization of data-driven metrics, and political, organizational, and
technical limitations that impede the use of performance data. At the institutional level, the
survey considered goal conflict within the City, as well as political pressure exerted on program
stakeholders. A complete copy of the survey instrument is included in Appendix A.
Distribution of the survey to the relevant City employees was accomplished in
conjunction with the IPMU and with the assistance of each of the individual City departments.
Through the IPMU, departments were asked to submit lists of all City employees whose jobs
directly involved the use of performance metrics. These lists were cross-referenced against
department organizational charts to ensure proper coverage of the relevant City workers.
Department general managers were asked to review the final lists for accuracy before
distribution. Through the City's personnel department, email addresses of the identified staff
were collected, and City staff were invited to complete the survey via an initial email from the
researchers and prompted with three follow-up emails over the course of a six-week period. Data
were collected using a secure Qualtrics portal and anonymized, with a unique coding
number inserted to allow panel data to be matched across surveys.
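The anonymization and panel-matching procedure described above can be sketched as follows. This is a minimal illustration only: the respondent emails, answer values, and helper names are invented, and the actual coding scheme used in the study is not documented here.

```python
import secrets

def build_code_map(emails):
    """Assign each respondent a stable, non-identifying random code."""
    return {email: secrets.token_hex(4) for email in emails}

def anonymize_wave(responses, code_map):
    """Replace the identifying email with its code and drop the email field."""
    return [{"code": code_map[r["email"]], "answers": r["answers"]}
            for r in responses if r["email"] in code_map]

def match_panel(wave1, wave2):
    """Return the codes of respondents who appear in both survey waves."""
    return {r["code"] for r in wave1} & {r["code"] for r in wave2}

# Hypothetical respondents (invented for illustration):
code_map = build_code_map(["a@lacity.org", "b@lacity.org"])
wave1 = anonymize_wave([{"email": "a@lacity.org", "answers": [4, 5]}], code_map)
wave2 = anonymize_wave([{"email": "a@lacity.org", "answers": [3, 5]},
                        {"email": "b@lacity.org", "answers": [2, 2]}], code_map)
assert match_panel(wave1, wave2) == {code_map["a@lacity.org"]}
```

Because the code, not the email, travels with the responses, analysts can link a respondent's two waves without ever handling identifying information.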
The first survey was conducted over a six-week period during the spring of 2015. The
initial survey included 1,659 mid- and upper-level managers. A response rate of 55.5% was
obtained during this round, for a total of 920 responses. Of these 920 responses, only 787
surveys were fully completed for an adjusted response rate of 47.4%. Three departments did not
have any responses submitted and were excluded from the survey analysis. For the remaining
departments, response rates varied from 25% to 100%. The second survey panel included 1,585
City managers during a six-week period in the fall of 2016. The second survey received 763
responses for a response rate of 48.1%. Of these responses, 667 were fully completed for an
adjusted rate of 42.1%. During this second survey, two small departments opted out of the
survey, and another two did not submit any responses. These four departments were removed
from survey analysis during this round. Again, the remaining departmental response rates varied
between 20% and 100%. Across both surveys 394 unique responses were cross-validated,
resulting in individual-level panel data from this group.
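The reported response rates follow directly from the counts above; a quick arithmetic check:

```python
def rate(responses, invited):
    """Response rate as a percentage, rounded to one decimal place."""
    return round(100 * responses / invited, 1)

# Spring 2015 wave: 1,659 managers invited
assert rate(920, 1659) == 55.5   # all responses
assert rate(787, 1659) == 47.4   # fully completed surveys only

# Fall 2016 wave: 1,585 managers invited
assert rate(763, 1585) == 48.1
assert rate(667, 1585) == 42.1
```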
2.4.2 Interview and Meeting Observation Procedures
Each of the six case-study departments, along with additional City leadership figures,
were included in a series of qualitative interviews and performance management meeting
observations over a two and a half-year period between mid-2014 and the end of 2016. The
purpose of these interviews was to further flesh out the themes from the survey instrument and to
gain an additional avenue of assessment on the Back to Basics implementation process and the
management routines developed in each department. The interviews followed a semi-structured
format that allowed for the exploration of key research themes, while also letting interview
subjects expand on specific issues, successes, and challenges they might have experienced over
the course of their work with performance metrics. As with the survey data, all identifying
information was stripped from interview transcripts and subjects were guaranteed confidentiality.
A copy of the interview protocol and questions can be found in Appendix B.
Meetings where performance metrics are evaluated and strategic plans or problem-
solving strategies are developed from those metrics are a vital component of successful
performance management reforms (Behn, 2005). As such, meeting observations were considered
an essential part of understanding the progress of Back to Basics reforms in Los Angeles. A
standard protocol was used during the observation of these meetings to cover a set of key themes
that could be compared across the case study departments. During the study, 87 interviews were
conducted and 20 COMPSTAT style meetings were observed and recorded.
2.4.3 Document and Metric Collection Procedures
Attempts at performance management reforms produce a regular stream of documents,
reports, and metric dashboards (Behn, 2005). To understand the development of Back to Basics, a
number of different types of documents were collected during the course of the study. While
particular attention to document collection was given to the six case-study departments and the
Mayor’s Office, additional documents were also collected from a range of departments across the
City.
Four core types of documents were collected during the course of the research with
cooperation from the IPMU, the Mayor’s Office, and from individual departments. Each
department across the City was asked to develop a plan for implementing a performance
management system when Mayor Garcetti first took office. Written copies of these plans were
collected from each of the six case-study departments and from thirteen additional departments.
Strategic plans were also gathered from each of the case-study departments and eleven additional
departments. These two types of planning documents were used as a baseline to understand
where City departments began their performance management reform process.
Two additional types of operational documents were collected during the course of the
research, primarily from the case-study departments. Minutes and planning documents from
COMPSTAT meetings were gathered to understand how departments were conducting ongoing
performance management activities. Finally, metrics dashboards, data systems dashboards, and
metrics presentations were assembled during the study. Performance metrics were collected from
a combination of these two types of operational documents and through additional consultations
with the IPMU and the case-study departments. Metrics were gathered both qualitatively, to assess
their typology, and quantitatively, to measure the progress in the use of performance metrics.
2.5 Data Analysis Methods
Analysis of the project data was conducted using three complementary methods in order
to cross-validate findings holistically over the course of the entire dissertation project. Interviews
and meeting observations were assessed using a thematic coding approach. Two multivariate
regression models were developed: one using panel data to assess individual managers and
another using pooled data. Finally, fuzzy-set QCA was employed to examine recipes for success
across the different types of departments. An overview and the framework for each of the
analytic approaches utilized in this dissertation are discussed below. The specific technical
details of each analytic approach and an in-depth discussion about the variables/constructs used
are discussed in each chapter where the analytic approach is employed.
2.5.1 Case-Study Analysis
The case-study analysis in this study derives its core themes from the three overarching
research questions laid out earlier in this chapter. The case-study analysis also draws from the
themes in the survey used during the project. As such, the analysis does not follow traditional
iterative qualitative approaches, such as grounded theory (Strauss & Corbin, 1994) or
phenomenology (Schutz, 1967). Instead, the qualitative analysis follows a structured approach
where interviews were built to delve into particular themes. As discussed earlier in this chapter,
semi-structured interviews (see Appendix B) and consistent meeting observation protocols were
used to assess subjects on a set of themes related to the success of performance management
reforms.
Interview transcripts and meeting observation transcripts were coded using six core
themes related to performance management reforms identified in the literature in Chapter One:
1. Strategic Planning
2. Metrics
3. Data Systems and Information Technology
4. Integrating Performance Metrics into Decision Making and Leadership
5. Employee Buy-In and Training
6. Resources
These themes were further coded directionally in order to indicate whether statements from
interviews and meeting observations were positive, negative, or neutral. Additionally, coding
was structured chronologically, to help understand the evolution of performance reforms over
time.
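The coding structure described above, six core themes, each statement coded for direction and tagged chronologically, can be sketched as a simple data structure. The transcript excerpts and dates below are invented for illustration; the actual coding was performed on the study's confidential transcripts.

```python
from collections import Counter
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class CodedSegment:
    theme: str      # one of the six core themes
    direction: str  # "positive", "negative", or "neutral"
    when: date      # supports the chronological structuring of codes

def tally(segments):
    """Count coded segments by (theme, direction) pair."""
    return Counter((s.theme, s.direction) for s in segments)

# Hypothetical coded excerpts (invented for illustration):
segments = [
    CodedSegment("Metrics", "positive", date(2014, 9, 1)),
    CodedSegment("Metrics", "negative", date(2015, 3, 1)),
    CodedSegment("Resources", "negative", date(2015, 3, 1)),
]
counts = tally(segments)
assert counts[("Metrics", "positive")] == 1
assert counts[("Resources", "negative")] == 1
```

Tallying by (theme, direction) supports comparisons across departments, while the date field allows the same segments to be re-grouped by period to trace the evolution of reforms.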
Meeting minutes, strategic plans, and other documents were also coded using the
framework discussed above. The author conducted primary coding of all interviews, meeting
observations, and other documents. Two additional researchers independently reviewed the
coding to ensure inter-coder validity and reliability. Finally, the departmental case-study analysis
was conducted to compare the development of performance management systems. This analysis
is documented in Chapter Three.
2.5.2 Fuzzy Set Qualitative Comparative Analysis
The fuzzy set Qualitative Comparative Analysis (fsQCA) acts as a unifying force
between the case-study analysis and the multivariate regression models employed in this
dissertation. The purpose of employing the set-theoretic approach of fsQCA is to study the
departments within the City of Los Angeles as cases to discover what causal configurations lead
to successful performance management implementation. The primary reason to use fsQCA is for
the ability to study causal complexity when looking at performance management reform
outcomes. Fuzzy set QCA methodology has successfully been employed within other academic
disciplines to study performance through organizational configurations (Andrews et al., 2015;
Fiss, 2011; Frambach et al., 2016). This dissertation is the first attempt to apply this approach to
the study of public organizations in examining performance and data-driven reforms.
With data on performance reforms in Los Angeles from two time periods, cases from
both time periods are studied through the set-theoretic framework. Additionally, two outcome
measures of performance management success are employed, reflecting the varying definitions of
successful implementation discussed in the literature. This setup leads to four sets
of causal configurations that examine successful performance management system outcomes.
These four configurations are compared against each other to discuss the paths laid out by the
configurations of input measures. Additionally, configurations are grounded in substantive case
knowledge and discussed in relation to how they fit the observations from the case-study
departments in Chapter Three. As a final step, Boolean intersection is employed to look at the
common configurations across the different outcome measures (Ragin, 1987). The input and
outcome measures utilized in the fsQCA analysis are listed below.
2.5.2A Outcome Measures
• Performance Management Implementation Success from IPMU
• Performance Management Implementation Success from Survey Data
2.5.2B Input Measures
• Large Budget
• Large Organization
• Strategic Plan
• Mayoral Attention
• Analytic Capacity
• Innovative Culture
• Strong Leadership
• Good Metrics
The definition, construction, and calibration of each of these measures are discussed in detail in
Chapter Four where the fsQCA analysis is conducted. Additional details on some of the
measures can be found in Appendix C. The full technical details of the fsQCA model and
subsequent analysis are also discussed in Chapter Four.
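The set-theoretic operations underlying fsQCA can be illustrated with a minimal sketch. The condition names, membership scores, and solution configurations below are invented stand-ins, not the calibrated data analyzed in Chapter Four, and the Boolean intersection step (Ragin, 1987) is simplified here as the set of configurations common to both outcome measures' solutions.

```python
def fuzzy_and(*memberships):
    """Fuzzy-set intersection: the minimum membership score."""
    return min(memberships)

def consistency(cases, condition_keys, outcome_key):
    """Ragin's consistency measure: sum(min(X, Y)) / sum(X), where X is a
    case's membership in the causal configuration and Y its membership in
    the outcome."""
    num = sum(min(fuzzy_and(*(c[k] for k in condition_keys)), c[outcome_key])
              for c in cases)
    den = sum(fuzzy_and(*(c[k] for k in condition_keys)) for c in cases)
    return num / den

# Hypothetical calibrated memberships for two departments (invented values):
cases = [
    {"strategic_plan": 0.8, "strong_leadership": 0.9, "success": 0.7},
    {"strategic_plan": 0.4, "strong_leadership": 0.6, "success": 0.3},
]
score = consistency(cases, ["strategic_plan", "strong_leadership"], "success")
assert 0.8 < score < 0.9  # (0.7 + 0.3) / (0.8 + 0.4) is roughly 0.83

# Configurations supported by both outcome measures, as a set intersection
# of the two solutions:
ipmu_solutions = {frozenset({"strategic_plan", "strong_leadership"}),
                  frozenset({"mayoral_attention", "good_metrics"})}
survey_solutions = {frozenset({"strategic_plan", "strong_leadership"})}
common = ipmu_solutions & survey_solutions
assert common == survey_solutions
```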
2.5.3 Multivariate Regression Analysis
This dissertation utilizes two complementary multivariate regression models to analyze
the survey data and the other quantitative metrics collected. The purpose of these models is to
study success factors within Los Angeles as they relate to both individual public managers and
the City as a whole. Both models employ similar data and assess dependent variables related to
individual and organizational attitudes, actions, and skills, as impacted by a number of
independent and control variables. These variables are drawn from the literature in
Chapter One.
The first model employs pooled data covering all City workers surveyed in both 2015 and
2016. This model is an ordinary least squares (OLS) model over time, which can also be called a
pooled OLS model. A time-effect is included in this model to account for the fact that the data
spans two time periods. This model is meant to analyze both individual public managers and the
City as a whole by covering all staff working with performance information. The second model
uses panel data from City staff who participated in both rounds of the survey. This model is a
random-effects model over time. The one difference between the two models is how time is
incorporated into the model. The time-effect is built into each of the variables in the random-
effects model, as compared to a specific time dummy variable present in the first OLS models.
Each of the dependent, independent, and control variables in the models is listed below.
2.5.3A Dependent Variables
• Awareness of Performance Management Reforms
• Attitudes Towards Performance Management Reforms
• Individual Usage of Performance Information
• Organizational Usage of Performance Information
• Proficiency in Using Performance Information
2.5.3B Independent Variables
• Public Service Motivation
• Training in Performance Information Use
• Wilson Typology – Production Organization
2.5.3C Control Variables
• Organizational Communication 1
• Organizational Communication 2
• Organizational Impediments to Performance Management Reforms
• Length of Use
• Number of Employees Supervised
• Years Worked at the City of Los Angeles
• Highest Level of Education
The specific design of each of these variables and the technical details of both regression models
are described in detail in Chapter Five, where the analytic models are employed. Further details
about some of the variables are also provided in Appendix D.
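The pooled OLS setup with a time dummy can be illustrated with a minimal sketch. The variable names (training, usage) are invented stand-ins for the survey constructs, the data are synthetic and noise-free, and a random-effects estimator would in practice come from a specialized econometrics package rather than this simple least-squares fit.

```python
import numpy as np

def pooled_ols(X, y):
    """Pooled OLS via least squares; X should include an intercept column."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Synthetic pooled data: four respondents observed in two waves (2015, 2016).
# usage = 2.0 + 0.5 * training + 1.0 * wave2  (noise-free for illustration)
training = np.array([1.0, 2.0, 3.0, 4.0, 1.0, 2.0, 3.0, 4.0])
wave2    = np.array([0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0])  # time dummy
usage    = 2.0 + 0.5 * training + 1.0 * wave2

# Design matrix: intercept, independent variable, and the time dummy.
X = np.column_stack([np.ones_like(training), training, wave2])
intercept, b_training, b_time = pooled_ols(X, usage)
assert np.allclose([intercept, b_training, b_time], [2.0, 0.5, 1.0])
```

The wave2 coefficient captures the average shift between the two survey periods, which is exactly the role the time dummy plays in the first model described above.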
Chapter Three: A Qualitative Analysis of Back to Basics
This chapter focuses on a qualitative discussion of the Back to Basics performance
management reforms in the City of Los Angeles and lays the groundwork for a deeper
understanding of the performance system in Los Angeles. This evaluation also brings contextual
perspective to the quantitative and fsQCA results in the following chapters. This chapter is
broken into two complementary parts: the first section provides an overarching qualitative
narrative that examines the implementation of Back to Basics and includes insight into the
political, budgetary, and organizational situations that existed within the City. This section also
discusses the progression of performance management system implementation citywide. The
second section takes a structured qualitative approach to examining performance management
reform themes in six case study departments.
3.1 The Context of Implementing Performance Reforms in Los
Angeles
Mayor Garcetti’s overarching policy platform designed to reform the operations of the
City of Los Angeles was known as Back to Basics. This platform was featured in his 2013
election campaign as part of a promise to restore the quality of life in Los Angeles through
improved economic health, greater public safety, and the restoration of City services via
improved City management. At its core Back to Basics was an attempt to change the City’s
management practices through a performance management system. Though not extensively
covered in this study, Back to Basics also aspired to bring performance-based budgeting to Los
Angeles in the longer-term.
Note: The first seven pages of this chapter were partially adapted from a formative evaluation
report submitted to the City of Los Angeles. This report was co-authored with Chris Weare.
The Garcetti administration’s vision for a Back to Basics performance management
system was derived from COMPSTAT (short for Comparative Statistics), which was developed
by the New York Police Department (NYPD) in the 1990s as a crime tracking system (Bratton &
Malinowski, 2008; Smith & Bratton, 2001). The COMPSTAT system used comparative data-
driven metrics on crime to strategically target resources and enforcement activities with the goal
of reducing crime. This COMPSTAT style performance management system first appeared in
Los Angeles in the early 2000s when William Bratton, the former chief of the NYPD, became
the chief of the Los Angeles Police Department (LAPD). Although COMPSTAT originated as a
crime-tracking tool, it was eventually implemented by a number of state and local governments
to manage public policy programs using performance metrics (Behn, 2003; 2014; Moynihan,
2008; 2013). Most notably, the former Mayor of Baltimore and Governor of Maryland, Martin
O’Malley, created the CitiStat and StateStat systems to manage government services (Behn,
2003; 2005).
When the Garcetti administration took office in mid-2013, the impetus for systemic
reform of City management was driven by widespread resident dissatisfaction with City services
(Leavey Center, 2014). Similar concerns about the efficacy and delivery of City services were
the impetus behind the reform of the City’s charter in 1999 that led to the creation of
neighborhood councils and strengthened the mayor’s administrative powers (Sonenshein, 2013).
Subsequent polling of Los Angeles residents in the mid-2000s supported the argument that these
charter reforms did not improve resident satisfaction with Los Angeles City services (Leavey
Center, 2007). Recent polling of Los Angeles residents demonstrated continued dissatisfaction
with City services, especially compared to residents served by other Los Angeles County
municipalities (Leavey Center, 2014).
The Garcetti administration faced two significant resource constraints in its efforts to
effect performance reforms. The main challenge for the City stemmed from the 2008 financial
crisis and the ensuing recession, events that placed the City in a financial bind from which it had
not yet fully recovered. As seen in Figure 3.1, real per capita general fund revenues (in 2014
dollars) grew 3% per year, from $4.78 billion to $5.07 billion, between 2005-06 and 2007-08.
Following the financial crisis, revenues dropped 9%, hitting their nadir in 2011-12 at $4.63
billion. While revenues eventually recovered, they would not approach pre-recession levels until
2016.
Prior to the Garcetti administration, City staffing figures peaked in 2007-08 at 37,173 (as
seen in Figure 3.1). Post-financial crisis, City staffing dropped to 35,864 in 2009-10, and then
sharply declined to 32,965 the following year. Over the next couple of years, City staffing
declined roughly another 1,000 employees before it leveled out at the start of the Garcetti
administration in mid-2013. As one senior member of the Garcetti Administration put it during
the interview process, "I expected that there would be resistance to data-driven management
due to cultural factors, but the real constraint is capacity. They do not have the bandwidth
to take on additional analytic tasks and run their organization at the same time."
[Figure 3.1: City Revenue and Staffing. Revenue data drawn from the City of Los Angeles
Adopted Budgets, 2005-06 through 2014-15; staffing data from the 2014-15 Adopted Budget.
The author used historical CPI and Census data to conduct calculations.]
The resource constraints that the Garcetti administration faced while attempting to
execute management reforms were exacerbated by the complicated bureaucratic structure of the
Los Angeles government. The structure of Los Angeles’s government was established during the
progressive era of governance in the United States, which divided political control in an attempt
to thwart corruption (Sonenshein, 2013). The 1999 reform of the City’s charter attempted to
strengthen bureaucratic responsiveness and provide the mayor with more flexible powers
(Sonenshein, 2013). As a result of these structural changes, City bureaucracy was isolated from
resident demand, which in turn made it difficult for the bureaucracy to be responsive
(Sonenshein, 2013). Despite the attempt to restructure and improve city management through
charter reform, bureaucratic habits remained entrenched in Los Angeles, and the City lacked a
model of government that centered on a strong mayor akin to those in New York or Chicago
(Sonenshein, 2013).
Performance management style decision-making was not completely new to the City of
Los Angeles prior to the Garcetti Administration. As discussed earlier in this section, the Los
Angeles Police Department had utilized a COMPSTAT system credited with crime reduction
since the early 2000s. Other performance management efforts within the City had been limited
and episodic. In 2001, the Hahn administration sought to develop a program called LAStat. The
initiative attempted to consolidate City service delivery to seven service districts, where the
completion of service requests would be tracked to improve neighborhood-level services.
However, this program was abandoned when Hahn failed to win re-election to a second term. The
incoming Villaraigosa administration started a performance management system based around
key policy initiatives Villaraigosa had championed during the campaign, such as solar power
expansion and the One Million Trees program. The performance reforms put in place by
Villaraigosa were run primarily out of the Mayor’s office and did not focus on changing
department level management. In the first survey run as part of this study, there was evidence
that five departments collected some level of performance data prior to Back to Basics, despite
the episodic and haphazard nature of previous performance reform attempts in Los Angeles.
Upon taking office, Mayor Garcetti launched his performance management plan with a
PerformanceStat style system. The central goal of these reforms was to put in place a functioning
system of performance metrics in each of the City’s thirty-five departments. As part of an initial
flurry of activity, the Mayor asked all thirty-five department heads to reapply for their jobs. Ten
general managers either retired or resigned during this process, though these departures were not
necessarily due to opposition to performance reforms. Subsequently, in the fall of 2013, the
administration asked all departments to submit COMPSTAT plans detailing how each
department would implement performance management practices. Concurrently, the
administration brought in some seasoned senior leaders and created Chief Innovation
Technology Officer and Chief Data Officer positions to help guide the use of data in the City.
The development of performance management systems occurred at three overlapping
levels within the City. First, each department, following the Mayor’s mandate, independently
developed its own COMPSTAT system with occasional assistance from the Innovation and
Performance Management Unit within the Mayor’s office. Second, the City’s four deputy
mayors were tasked with setting up a higher-level review process that brought together multiple
departments to discuss metrics. Finally, the Mayor’s Office worked with outside consultants to
create a third set of performance metrics that focused on generating broad outcome metrics that
could be disseminated to the public. Based on interviews and observations conducted during the
course of the study, most of the performance management systems that were developed followed
a “learning-by-doing” style. Because the Mayor's Office and the Innovation and Performance
Management Unit (IPMU) neither set official standards for the envisioned PerformanceStat nor
held individual departments accountable to such standards, departmental performance management
reforms tended to turn into “soft” PerformanceStat systems in practice.
based on observations in the course of this research, the systems in Los Angeles tended to
operate more like Managing for Results type reforms. Each City office or unit worked
independently to identify valid and consequential metrics during this process. While some
departments settled on metrics fairly early on in the process and started to build innovation
strategies around them, a large number of departments repeatedly returned to the drawing board
to find meaningful metrics.
The experiential and decentralized development model adopted by Los Angeles was in
contrast with more centralized models applied by other cities and states. In observing the City’s
experience with performance management, it became apparent that the City’s approach to
reforms diverged from best practices outlined in the literature in a number of specific ways. First,
the traditional development model posited by Hatry (2006) begins with the use of strategic
planning and logic models to identify goals and objectives. This step was not emphasized in the
City’s approach; few of the COMPSTAT plans submitted by departments cited strategic plans or
logic models. Based on research observations, this omission appeared to be the result of several
factors including time constraints, lack of experience with data-driven management practices,
and the goal ambiguity faced by most City departments (Courty & Marschke, 2004; 2008;
Heinrich & Marschke, 2010; Radin, 2006).
Hatry (2006) and Behn (2003; 2014) also recommend that the process include broad
outreach to internal and external stakeholders early in performance management reforms and that
a diverse group of stakeholders be co-opted to help build a performance management system
(Sanger, 2013). In contrast, the case study departments were observed relying on small teams or
single-person operations that lacked the full range of expertise related to performance
reforms. Survey data, adjusted for response rates, suggested that fewer than eight individuals
were directly involved in performance reforms in over half of the City's departments.
The overall resources (discussed further in the second part of this chapter) devoted to
Back to Basics were constrained due to the City’s budget position. For example, the Innovation
and Performance Management Unit, the central hub for performance reform efforts, had only 3.5
full-time-equivalent staff. In contrast, the City of New York has eleven staff members on its
performance management team.
This overarching narrative of Back to Basics examined a number of contextual factors
surrounding the implementation of a performance management system. These political,
organizational, and financial factors provided an understanding of the circumstances that
surrounded the performance management reforms. This discussion sets the stage for the next
section of this chapter and the subsequent quantitative and fsQCA analysis. The next section
builds upon the narrative discussion and adds further structured qualitative analysis via case
studies.
3.2 Qualitative Case Study Analysis
This section structurally examines six City departments across a number of qualitative
themes unearthed during the course of interviews, meeting observations, and document
collection. These departments represent a fairly diverse sample of the various types of agencies
across the City and the different approaches taken to implement performance management
reforms. Departments were anonymized as per University of Southern California Institutional
Review Board procedures and as part of the data sharing agreement with the City of Los
Angeles. Non-identifying features of the selected case-study departments are provided in order to
offer relevant context about how departmental characteristics impacted the implementation of
performance management reforms.
3.2.1 Descriptions of Departments Selected
Department A: A small department with fewer than 200 full-time-equivalent staff and a budget
smaller than $30 million annually. This department was community facing and dealt with
abstract services and goals related to the community.
Department B: A large department with more than 1,000 full-time-equivalent staff and a budget
larger than $150 million annually. This department dealt with infrastructure in Los Angeles and
was not particularly community facing.
Department C: A medium-sized department with between 200 and 1,000 full-time-equivalent
staff and a budget between $30 million and $150 million annually. This department was partially
community facing and dealt with concrete services to a select portion of the community.
Department D: A large department with more than 1,000 full-time-equivalent staff and a budget
over $150 million annually. This department was very community facing and dealt with both
infrastructure and concrete services to community members.
Department E: A large department with more than 1,000 full-time-equivalent staff and a budget
over $150 million annually. This department was community facing and dealt with visible,
concrete services provided across the City.
Department F: A large department with more than 1,000 full-time-equivalent staff and a budget
over $150 million annually. This department was community facing and dealt directly with
citizens on a day-to-day basis.
The case-study departments examined in this study were generally larger departments,
with greater staff and financial resources as compared to all City departments on average. In
addition, most of the departments covered delivered visible and/or concrete services to the
citizens of Los Angeles. Based on the earlier discussion of success factors for well-executed
performance management reforms, most of the case-study departments were arguably in a good
position to execute reforms. These departments generally fell closer to the production side of the
Wilson (1989) typology, which would, in theory, have made it easier to measure their work. The
literature also noted that additional financial and employee resources were useful attributes when
putting a performance management system in place (Boyne, 2003; Boyne & Gould-Williams,
2003). Yet, as the closer examination conducted as part of this case research revealed, just
because these departments appeared to have had more resources and functions that were easier to
measure did not mean performance management reforms were ultimately successful for each
department. Discussion of these case studies also demonstrated that a number of additional
factors played a role in how performance management reforms unfolded.
3.2.2 Description of Themes Examined Within Case-Study Departments
The themes used to examine the case-study departments were developed from a review of
the literature and from discussions with Los Angeles City staff. Each of the themes laid
out below is linked to the discussion of potential success factors laid out in the
literature review in Chapter One. Additionally, City staff indicated that many of the
themes discussed in the next sections were prominent in the course of implementing Back to
Basics.
Strategic Planning: Examined themes related to general strategic planning and strategic plans
by a department. Also considered planning for COMPSTAT systems within each City
department.
Metrics: Considered themes around how and why a particular department developed metrics.
Looked at what metrics departments were using.
Data Systems and Information Technology: Examined what data and information systems
departments were using or developing, related to performance management reforms.
Integrating Performance Metrics into Decision Making and Leadership: Looked at how
departments were utilizing their performance metrics to make proactive strategic decisions. Also
looked at the department’s use of data to solve problems and make institutional changes.
Considered how leadership impacted performance reforms.
Employee Buy-In and Training: Themes related to how City employees felt about performance
management reforms and how they reacted when working with data. In addition, this theme
covered efforts to train City staff on using data-driven approaches to reform and how staff
responded to the training.
Resources: Considered what resources departments had available for performance management
systems. And, how departments utilized resources to move performance management systems
forward.
3.2.3 Department A
3.2.3A Strategic Planning
Department A’s strategic plan was not particularly developed at the beginning of the
Back to Basics process. Subsequent strategic plans and the department’s COMPSTAT plan
similarly lacked specific detail or goal setting related to performance metrics. Instead, the
department focused on higher-level abstract planning that did not go into specific detail about
how performance measures might be used to achieve departmental goals. Department A’s
COMPSTAT plan essentially reviewed, with minimal detail, what systems the department
currently used to track its work. Throughout the plan, it was noted that the department did not
know whether the current data it collected was accurate or timely. The several forward-looking
pages of the COMPSTAT plan provided only broad aspirations to establish performance metrics
and regularly review data.
In discussions with Department A’s staff and the general manager, it was argued that the
department’s work was intangible and thus not conducive to either strategic planning or
metrics. It was noted that department staff found it “mind-numbing” to develop a logic model of
what the department did. In contrast, the department did attempt to encourage its community
partners to strategically plan in order to spend money properly across the fiscal year. Overall, it
appeared that Department A had not actively used the strategic planning process to iterate and set
the table for their attempt to implement a performance management system. In reviewing a
strategic plan that the department released three years into the Back to Basics policy, Department
A was still laying out broad, non-quantifiable goals and stating that performance metrics would
be linked to these goals.
3.2.3B Metrics
The process of developing metrics at Department A took on two distinct forms during the
course of studying the department. The first iteration of metric development at Department A
was described as a “top-down” attempt at conceiving data for the department to track. The
general manager, along with a handful of senior staff within the department, composed a small
number of high-level input and process metrics that the department would track over the first
year of Back to Basics. A year into reforms at the City, the department attempted a second
“bottom-up” process to create another set of metrics. This process was led by a small number of
vocal middle managers in the department. These staff noted that Department A “had not really
tracked things in the past,” but wanted to track “outputs.” The second effort ultimately resulted
in two new output metrics that the department started to track.
Most of the metrics that Department A tracked required the cooperation of community
partners to provide data. According to several staff members and the general manager, these
partners did not want to be held accountable via data. Department A did not consistently
receive data from its community partners over the course of the research period, which
hampered its ability to use data effectively. The Mayor's Office gave Department A a few
metrics that were budget related, but the general manager and department staff did not think
these metrics accurately reflected what the department did. Overall, it appeared that the
performance management system that Department A put in place did not lead to manageable
metrics that could improve problem-solving and decision-making at the department. This
concern was echoed over a period of time through interviews with a number of department staff.
3.2.3C Data Systems and Information Technology
Department A’s data systems were rudimentary compared to most other departments
studied. The department primarily used Google Docs as a vehicle for data collection and process
management. This reliance on Google Docs limited the department’s ability to customize its data
system for its unique needs. At one point Department A considered the IT system that the City
Council used, but after much consideration decided that it could not be adapted to its needs.
Staff throughout the department noted that there was no funding available for Department A to
create its own data management system and the staff did not expect to receive any such funding
in the future. Towards the end of the study period, in late 2016, Department A started using a
low-cost commercial case management system that offered some customizability. It was unclear
whether changing over to this system improved the previously poor data management capacity
the department faced during the majority of observation.
3.2.3D Integrating Performance Metrics into Decision Making and Leadership
Department A employed a “soft” process when it used performance metrics to make
decisions and solve problems. Through meeting observations and staff interviews, it appeared
that utilizing performance data was merely suggested within the department. The result was that
the execution of the Mayor’s Back to Basics principles seemed to be a “compliance exercise”
rather than a meaningful change. Department A held meetings infrequently, roughly three times
a year, and the meetings that were held did not follow a set schedule during the period of
research conducted on Back to Basics. When meetings happened, performance data was
reported, but more in-depth analysis or use of data to make decisions or changes to departmental
procedures was absent. The culture of the department did not appear to evolve to embrace data-
driven processes over the course of the research.
3.2.3E Employee Buy-In and Training
As discussed in the preceding section, the culture around the implementation and
execution of performance management at Department A lacked buy-in on a department-wide
level. While interviews with the general manager indicated a topical interest in embracing
performance management reforms, the general manager did not perceive interest from the rest of
the department’s staff. The general manager believed that many staff members were not honestly
reporting metrics and were simply inputting data that made them look successful. This did not
necessarily reflect on the staff’s commitment to their jobs but rather the earlier notion that
what they did was intangible and could not be measured. In discussions with Department A staff,
it was also communicated that staff did not have enough time or bandwidth to add performance
information tasks to their already full work schedules.
3.2.3F Resources
One reason why Department A staff may have lacked the bandwidth to execute a full
performance management system was a dearth of resources, both for performance management
systems and the department overall. In 2009, after the financial crisis, the department’s budget
was cut more than 50% in anticipation of a merger with another City department. However, this
merger never materialized, and resources were not restored to the department. In addition, the
department’s staff was cut more than 60% in the six-year period preceding performance
management reform efforts. In discussions with department leaders, they indicated that
Department A was essentially “treading water” when it came to resources during the period they
were studied for this project.
3.2.4 Department B
3.2.4A Strategic Planning
At the start of the Back to Basics process, Department B did not have a strategic plan and
had not had one for many years prior. Under the direction of the general manager and other
senior leaders, Department B released a strategic plan in October of 2014, about one year into the
City’s COMPSTAT process. The strategic plan was the result of meetings and discussions within
the department about setting goals that could be linked to metrics and data. In interviews with
leaders of sub-units within Department B, it was highlighted that each of these units did its best
to pull any historical data it had collected under previous administrations in order to help develop
a strategic plan. The strategic plan clearly linked the department’s goals to methods of measuring
these goals with data. Many staff members across Department B indicated that the impetus for
developing the department’s strategic plan was clearly linked to Back to Basics reforms in the
City.
Along with the strategic plan, Department B’s COMPSTAT plan laid the foundation for
the performance management system instituted within the department. The department’s
COMPSTAT plan was developed about six months prior to its strategic plan and provided
detailed ways in which the department would link the use of data to goals. Department B’s work
with City infrastructure may have allowed for a smoother process when it linked goals to
metrics. From observing a range of the department's COMPSTAT meetings, it was evident that
there were constant attempts to align the meetings back to the strategic plans and COMPSTAT.
These planning documents were used as a foundation that the department continued to build
upon over the course of the study period.
While Department B was fairly thorough in following through on an executable strategic
plan and COMPSTAT plan, the department did not use logic models in a substantive capacity over
the course of the research period. Only the department’s information technology group
developed and routinely used logic models. The technical nature of this group’s work allowed
for logic models to be deployed as part of their regular work processes. Overall, Department B
appeared to follow best practices for performance management systems when it came to the
practice of strategic planning, with the exception of not deploying logic models widely.
3.2.4B Metrics
Department B had mixed results when it came to defining and employing metrics over
the course of performance reforms in Los Angeles. While senior leaders and managers stated that
they wanted to move past workload indicators (those focused on inputs, processes, and outputs),
in practice this did not happen consistently within the department. Although the department did
end up collecting a range of data over the course of the three years it was studied, the inability to
move that data into outcome metrics and to develop certain other core metrics hampered the
department’s ability to move beyond moderate effectiveness with performance management
reforms. Department B did have success with matching process and output metrics to
departmental goals. However, without a robust set of outcome metrics, the department was not
clearly able to demonstrate how its outputs impacted Los Angeles more broadly.
Department B faced two distinct hurdles when it came to developing impactful metrics.
The first was a simple lack of available or collected data that related to what certain internal units
wanted to measure. In some cases, the lack of relevant data was a result of inadequate systems,
while in other cases it was the result of departmental needs not filtering down to frontline
operational staff that would be needed to collect the data. Over the course of the study, the
department was able to mitigate this problem slowly, through persistent follow-up by department
leaders or through changes in the system. Yet, as a result of the size of the department,
inconsistency in this area persisted in research observations.
The other challenge that Department B consistently faced in defining meaningful metrics
was the complexity of much of the work the department conducted. Staff argued that many of the
tasks conducted were complex and esoteric, which did not make them easily quantifiable. As a
partial solution the department did try to categorize some of these tasks over the course of the
reforms, but still faced instances where work was not definable within the system. Another
challenge for the department was the fact that a large portion of departmental activity was
conducted in conjunction with outside contractors. The complexity of incorporating a diverse set
of contractor data into the central COMPSTAT system led to a number of instances of
misalignment. “Quality control” was an oft-repeated issue with contractors for Department B. In
cases where contractors did not conform to departmental reporting requirements, this ended up
creating additional work for department staff.
3.2.4C Data Systems and Information Technology
Prior to Back to Basics, Department B had attempted to collect metrics sporadically, but
these efforts generally ended poorly when the computer systems could not support management
functions well. Long-tenured staff estimated that the last attempt to implement a new data system
cost the department over $10 million and was shelved before completion due to lack of
functionality. Over the course of the three years studied, Department B showed progress in
building and using department-wide data systems but did not reach the full actualization it had
envisioned. Department B succeeded in implementing a department-wide workflow system,
moving many processes from paper to digital, and certain units became well integrated with
outside data vendors. There was mixed success with the rollout of a public facing request system
for infrastructure, which functioned well on the public side, but was not effectively linked to
internal department systems. Finally, at the tail end of the research, the department still had not
achieved the goal of creating a department-wide project management system.
Department B's most significant achievement during the course of Back to Basics was the
connection of all of its offices under one work tracking system. Though this system still had
some technical limitations, it allowed central tracking of workflow across the entire department.
The department aspired to turn this central data system into a true project management system
but did not have the resources to complete this transition. Several units within Department B also
had success with subject-specific data systems. One unit’s assistant general manager
implemented a computer system that utilized Geographic Information Systems to track
infrastructure hotspots in the City. Another unit had built a fruitful relationship with an outside
data vendor, which had helped the unit analyze data for over 25 years. This unit had the most
robust and stable data collection within Department B. The outside vendor provided insight that
helped to push forward performance management within this unit.
The public facing data collection system run by Department B had multi-functionality
that allowed the public to contact the department through a range of mediums. Internally,
however, the system required manual entry of certain types of requests submitted through certain
modes so that the department could process them. Department B aspired to improve the functionality of this
system as well. Overall, the information technology and data systems at Department B were a
mixed bag, with significant accomplishments and room for further improvement.
3.2.4D Integrating Performance Metrics Into Decision Making and Leadership
Over the first year of the administration’s Back to Basics program, Department B
struggled to incorporate data-driven decision making into its work. Many staff members within
the department were concerned that the political priorities of the Mayor’s Office and the City
Council would lead to inefficient permutations of performance management focused on
pet projects. Initial COMPSTAT meetings were held monthly within the department but were
essentially review sessions of metrics collected and did not feature follow-up. A number of
senior leaders also wanted to establish advance-planning groups within the department and were
concerned that metrics were being linked to inefficient processes. The department appeared to be
fairly demoralized during the initial stages of performance management reforms.
Eighteen months into performance management reforms the approach and culture within
Department B started to shift more positively towards reform. The department had begun to shift
some of its procedures and resources in response to what the metrics it was collecting were
illuminating. For instance, the department started reassigning staff to units that were identified as
needing help based on data backlogs. Another example of shifting procedures was the use of
department interns to manually check in work to make sure requests from the public were
legitimate before assigning more senior staff to work on these requests. During monthly
COMPSTAT meetings, department leaders and staff members started to embrace the use of data
to connect with departmental stakeholders. In several meetings that were observed two years into
performance reforms the general manager pointed out how all staff members had incorporated
data into their decision-making and the difference it was making within the department.
As Back to Basics moved into its third year, the general manager and assistant general
managers at Department B had become proficient in using COMPSTAT meetings to answer
targeted questions about the department’s work and to follow-up to get subsequent answers. The
department started to move beyond simple process improvements and started to use performance
management to work on strategic objectives. For example, the department created a roving strike
force to help guide critical departmental projects to completion. Overall Department B showed a
significant evolution in how it used performance metrics over the course of the three years it was
studied.
3.2.4E Employee Buy-In and Training
As with employing data to make decisions, the culture in Department B started with
reservations about performance management reforms. Cultural resistance within the department
to a data-driven approach was initially fairly prevalent. The general manager and other senior
leaders knew that a comprehensive cultural shift would be required to ensure the long-term use
of performance management at Department B. Initially, staff members were worried that
political concerns would override effective programming at the department. There was also
trepidation about the possibility of performance data being used to punish staff.
A cultural shift started to occur at Department B twelve to eighteen months into
performance management reforms. The two most significant drivers of cultural change were the
realization that performance metrics could actually help staff do their jobs more effectively and
the successful use of data to justify important departmental decisions. In the course of putting
together new data systems, the department got buy-in from staff as it took their suggestions and
made the system relevant to their work. The general manager noted that the culture truly started
to change within the department when the general manager no longer had to drive the
COMPSTAT meetings, and other staff started to take the lead on reviewing and incorporating
data into decisions.
During the course of the research, there were no formal training programs instituted by
Department B although a handful of ad hoc and project specific training initiatives did take
place. For example, one unit within the department trained its interns to focus on one specific
task related to performance management to ensure consistency and efficiency in their work.
Another unit ran informal training sessions on Geographic Information Systems that linked to
tracking metrics. The general manager hoped to have the Innovation and Performance
Management Unit train some of its staff, but the central training program was shuttered before
this could occur.
3.2.4F Resources
Though Department B was a large department with a sizable budget and workforce
compared to many other City departments, a lack of dedicated resources for performance
management was a constant challenge. Interviews with department leaders and staff indicated
that the Mayor's Office routinely denied budget requests for additional resources. In many
instances, it was apparent that Department B could have pushed forward reforms more
effectively with an infusion of money and additional staff. A refrain often heard from the general
manager and the assistant general managers was that they “would try to figure out how to make
additional changes with existing internal resources.”
3.2.5 Department C
3.2.5A Strategic Planning
Department C’s record was mixed when it came to foundational planning for the Back to
Basics initiative. The department had last produced a short formal strategic plan in 2007, and as
a result it was fairly disconnected from present conditions in the City of Los Angeles. While the
department had produced a handful of “planning” documents in the first year of the Back to
Basics program, these documents were specific to individual policies and did not provide a
cohesive set of goals for the department. Across the three years observed, Department C did not
make an effort to pull together these documents and create a strategic plan connected to
overarching goals and metrics for the department.
In contrast, Department C did produce a thorough and detailed COMPSTAT plan in
response to performance reforms in the City. This document provided three layers that linked
the department's goals with its performance management system. The first section
of the COMPSTAT plan outlined how Department C created an internal performance
management unit to help define metrics, link metrics to goals, and conduct ongoing data
analysis. The second section of the document laid out a handful of key goals that Department C
hoped to push forward through performance management reforms. Department C rounded out its
COMPSTAT plan with a series of well-constructed metrics, which linked directly to the goals of
the department. In a functional sense, it is possible that the department’s COMPSTAT plan
doubled as a strategic plan during the Back to Basics process.
An additional consideration for Department C as it entered performance management
reforms was a concurrent structural re-organization of the department. The department planned
to fundamentally change how it dealt with its work, by moving from a task-specific
organizational chart to a new geographically based organizational structure. This departmental
reorganization took eighteen months to complete, and many leaders in the department hoped to
standardize the use of metrics across each of the new geographic units in conjunction with
performance management reforms.
3.2.5B Metrics
Department C attempted to build upon its COMPSTAT plan by instituting performance
metrics across the department over the three years it was studied. The department separated the
metrics it wanted to employ into two different categories based on achievement timelines. The
department created policy metrics it wanted to track over a long period of time that linked to
goals within the department. Department C staff wanted to use these metrics to track whether it
was achieving its long-term goals related to Back to Basics. The second group of metrics
developed within Department C focused mostly on efficiency and operating procedures. The
department planned to track these shorter-term metrics on a monthly basis while reviewing the
long-term metrics on a quarterly basis.
As with other departments within Los Angeles, Department C had mixed results in how it
was able to actually develop and utilize the metrics it proposed in its COMPSTAT plan. While
the department proposed to track ten different efficiency and process metrics in its COMPSTAT
plan, one year into performance reforms Department C was only tracking four of these metrics at
monthly COMPSTAT meetings. By the end of the three-year study period, the department had
settled into tracking six short-term performance metrics. The biggest hurdle that Department C
faced in trying to measure all ten efficiency metrics was getting the right data collected across
the whole department. Lack of staff bandwidth due to the massive departmental reorganization
was also a factor that hampered the full deployment of efficiency and process metrics.
It was more difficult in the course of this study to accurately observe and assess the long-
term metrics that Department C tracked over the three-year study period. While these
performance metrics were regularly documented by department staff and brought up in quarterly
meetings, it was not clear whether the department was actively using them to achieve
departmental goals. The fact that these long-term metrics were not at the forefront of
staff members’ minds on an everyday basis may have inhibited how impactful they could be in
performance reforms. Overall, Department C had moderate success with developing and tracking
metrics over the course of Back to Basics.
3.2.5C Data Systems and Information Technology
Data systems at Department C appeared to be a major roadblock when it came to
actualizing performance management practices. The outdated data collection and project
management system within the department was the biggest complaint that both senior leaders
and frontline operational staff voiced in the course of interviews. Department C's data system
was created in 2001, and despite several attempted updates and a rebranding in 2015, it was still
very antiquated and had a non-intuitive interface. Depending on which staff members in the
department were asked, data prior to either 2005 or 2011 was unusable. Though the system
improved over the course of the three years studied, most of the staff members were not
convinced that the data in the system was accurate. The staff was concerned that data was
counted twice if touched by two different departmental units and that some departmental cases
were disappearing once entered into the system. This lack of trust in the accuracy of the
department’s data system may have undermined some of the progress Department C was trying
to make with performance management reforms.
Internal information technology staff members at Department C were constantly required
to triage the department’s data system. This work hampered staff members’ ability to use their
time to analyze metrics and data, a role that was essential as the system lacked the ability to
use data for forecasting and trend analysis, thus requiring staff members to attempt this task
manually. Department C’s leaders delayed making any upgrades to their data system over the
three years studied due to a promise from the Mayor’s Office that a new data system for the
department would be funded fully in the coming years. In discussions with department leaders,
they staked much of Department C’s capability to analyze and collect data in the future on this
new system. The anticipated arrival of this new system may have been
holding back near-term efforts to improve the computer systems at the department. In
addition, early planning for the new data system appeared to indicate it lacked certain key
functions that the department would need going forward. There was little discussion of linking
departmental goals and functions to this new system.
3.2.5D Integrating Performance Metrics Into Decision Making and Leadership
Over the first year of Back to Basics, Department C struggled to incorporate performance
metrics into its decision-making process. The department did not hold its first COMPSTAT
meeting until roughly six months into the Back to Basics initiative. Though meetings were then
held every four to six weeks, these early meetings acted as simple review sessions of the metrics
produced. Follow-up and probing of staff members by Department C leaders were mostly
nonexistent across the first eighteen months of meetings observed. The definition of problems
uncovered by data and discussion of possible solutions to these problems was not a feature of
meetings. Most COMPSTAT meetings instead featured discussions about resource shortages and
other impediments that were hampering the improvement of the metrics being tracked. Most
discussions that occurred at meetings during the first half of the study were not guided by goals
or mission orientation.
Over the second eighteen months of research observation, there was a noticeable shift in
how Department C utilized performance metrics; specifically, the department took a more
problem-solving-oriented approach. Discussions with department staff members revealed two clear structural
reasons for this shift. First, the department instituted a monthly four-week cycle that led into
each month’s COMPSTAT meeting. In week one the department’s performance unit built data
dashboards and distributed them to senior management. Senior staff met with the performance
unit during week two to prepare questions for staff in their sub-units and then distributed the
resulting questions. Over week three, senior staff and their teams would review these questions.
And in the fourth week, each team would come to the COMPSTAT meeting prepared to answer
questions and solve problems. This process accelerated Department C's usage of performance
metrics to make meaningful changes in department actions and processes. In addition, when the
geographic reorganization at Department C was completed, it helped to eliminate silos around
work processes in the department and led to more collaborative problem-solving.
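The four-week cadence described above can be restated schematically. The sketch below is purely illustrative; the step wording paraphrases the observations above and is not drawn from any departmental artifact.

```python
# Illustrative sketch of Department C's monthly four-week COMPSTAT cycle,
# as described in the text (step wording paraphrased, not an official document).
COMPSTAT_CYCLE = {
    1: "Performance unit builds data dashboards and distributes them to senior management.",
    2: "Senior staff meet with the performance unit, prepare questions, and send them to sub-units.",
    3: "Senior staff and their teams review the distributed questions.",
    4: "Teams come to the COMPSTAT meeting prepared to answer questions and solve problems.",
}

def activity_for_week(week_of_month):
    """Return the scheduled COMPSTAT activity for a given week of the month (1-4)."""
    if week_of_month not in COMPSTAT_CYCLE:
        raise ValueError("week_of_month must be between 1 and 4")
    return COMPSTAT_CYCLE[week_of_month]

print(activity_for_week(4))
```

Encoding the cycle this way highlights its key design feature: each meeting is preceded by three weeks of structured preparation, so the meeting itself can focus on answers rather than first readings of the data.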
3.2.4E Employee Buy-In and Training
Early buy-in as it related to the City’s performance management initiative was tentative
within Department C. This attitude of resistance towards performance management went hand-
in-hand with the department's early COMPSTAT meetings, which amounted to little more than
review sessions of metrics. In the first eighteen months of Back to Basics, discussions with
Department C staff revealed that most employees treated COMPSTAT as a compliance exercise. Most Department C
staff came to use data, but an analytic mindset was not prevalent within the department. Many
staff members within Department C also did not believe that performance management captured
the qualitative nature of much of their work.
At the research’s eighteen-month mark, the culture within Department C began to shift
towards greater buy-in to performance management. However, in discussions with senior staff
and frontline employees, the cultural shift was not as pronounced as the shift towards the usage
of data for decision-making discussed in the previous section. Over the course of the three years
that Department C was studied, no official training programs for performance measures were
observed.
3.2.4F Resources
During the course of the research period Department C appeared to lack additional
resources dedicated to performance management reforms. However, leaders within the
department and other staff did not seem to consider this dearth of resources problematic, at least
during the interviews conducted. Without dedicated resources from the City for performance
management, Department C was able to reapportion some fees that it collected to fund an analyst
position for its performance management system. Through the City’s general budget,
Department C was in line to hire a number of new workers as part of its geographic
reorganization. This additional analytic capacity appeared to be beneficial in the course of Back
to Basics.
3.2.5 Department D
3.2.5A Strategic Planning
For the first six months of Back to Basics, each of the sub-units within Department D
created their own COMPSTAT plans independent of one another. When a new general manager
took over the department at the six-month mark, the most senior leaders, the assistant general
managers, came together and created a department-wide COMPSTAT plan. While Department
D's COMPSTAT plan was extensive (by far the longest of the case study departments), it did not
include any explicit mapping of logic models or formally documented strategic planning, even
though senior leaders were forced into some amount of strategic thinking to produce the plan.
The leaders of Department D acknowledged they were approaching the planning process
“backward” with a COMPSTAT plan first and an implicit strategic plan as part of that process.
The leaders expected to formalize goals and create a true strategic plan farther down the road of
Back to Basics. They were hopeful that performance management would help Department D set
its goals in the future. During the course of the three years that Department D was studied, no
strategic plan was produced.⁴ As such, Department D ended up with a COMPSTAT plan that was
detailed in terms of defining metrics, building data systems, and establishing methods for
collecting data, but the plan did not explicitly link these processes to the department's goals.
3.2.5B Metrics
In the absence of articulated strategic goals, the performance metrics developed and
utilized by Department D were primarily input, process, and output metrics. Without
considered objectives, the department was not able to define outcome measures in the course of
performance reforms. A consistent challenge for Department D when it came to metric usage
was getting the proper data collected in a usable fashion. Over the course of the research period,
Department D typically entered data in quarterly batches, which often contained gaps. These
gaps made the reliability of metrics within Department D suspect at times during the
performance management process. While the department did not define goals on its own, the
leaders of the department did attempt to connect metrics to broader goals laid out by the Mayor’s
Office through the Back to Basics agenda.
One sub-unit of Department D collected five different process and output metrics. These
metrics seemed to be set at arbitrary levels and were not connected to any particular goal at the
department. Another set of geographically dispersed sub-units collected unrelated metrics at the
beginning of the Back to Basics initiative. The general manager of Department D worked over
⁴ In 2018 Department D produced a formal strategic plan with its goals mapped out. Because this
plan was produced after the conclusion of the research process, this study cannot comment on how
the plan connects to current performance management practices within Department D.
the first year of performance management reforms to harmonize the metrics gathered by these
sub-units. However, the synchronized metric was not grounded in any useful empirical structure.
These sub-units collected data on the "bottom ten" programs within each sub-unit's territory.
Because "bottom ten" was a category that would always contain ten programs, the department
could do little with this metric other than review it. The general manager of Department D
wanted to capture more public-facing metrics, but collection methods to do so were not
developed over the three years investigated. Generally speaking, Department D suffered from
having metrics that could only be used for counting purposes.
3.2.5C Data Systems and Information Technology
Much of Department D’s efforts surrounding performance management reforms centered
on creating a centralized information technology system to capture the department’s data. This
central digital system brought together a raft of paper-based data that the department had
collected prior to Back to Basics. The general manager noted that it took hundreds of staff hours
to digitize the previously hard-copy data and make it usable. Before this digitization, data was
often lost when staff members changed jobs, and Department D staff spent substantial amounts of
time responding to manual data requests from City Council offices. Over the course of the
research period, there were conflicting reports from Department D leaders and front-line staff on
the effectiveness of the newly constituted data collection system. Senior leaders such as the
general manager and assistant general manager often stated that with the new system they were
“constantly capturing data.” In contrast, many front-line staff members argued that the data
system was a “work in progress.” This discrepancy persisted over the three-year period of
research.
Discussions with internal information technology staff indicated that they wanted to
attempt to link Department D’s data system with a geographic information system. They hoped
to use “hotspots” to track trends in the department’s metrics via this geographic information
system. Department D worked around the typical procurement and budget process within the
City of Los Angeles to fund much of its data system update. The department applied for and
received a $250,000 grant from an outside organization as the critical funding element of the new
data collection system. Department D also appeared to be using technology to automate some of
the peripheral functions that department staff might typically do in order to save money.
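Although the passage above describes only an aspiration, the basic "hotspot" mechanic that the IT staff hoped to build (snapping records to grid cells and ranking cells by activity) can be sketched as follows. The grid size, coordinates, and record layout are hypothetical, not drawn from Department D's actual system.

```python
from collections import Counter

# Illustrative hotspot sketch: bucket point records into grid cells and
# rank the busiest cells. Cell size and coordinates are hypothetical.
CELL_SIZE = 0.01  # roughly one kilometer in degrees of latitude (illustrative)

def grid_cell(lat, lon, cell_size=CELL_SIZE):
    """Snap a coordinate to the index of its containing grid cell."""
    return (int(lat // cell_size), int(lon // cell_size))

def hotspots(records, top_n=3):
    """Return the top_n grid cells by record count."""
    counts = Counter(grid_cell(lat, lon) for lat, lon in records)
    return counts.most_common(top_n)

# Hypothetical service-record coordinates around downtown Los Angeles.
records = [(34.052, -118.243), (34.051, -118.244), (34.059, -118.247),
           (34.052, -118.241), (34.101, -118.331)]
print(hotspots(records, top_n=2))
```

The point of the design is that trends become geographic: the same metric that was previously a department-wide count can be ranked by cell, directing attention to specific places.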
3.2.5D Integrating Performance Metrics Into Decision Making and Leadership
Department D stood out among the case study departments as it was the one department
studied that succeeded in using performance management systems to make decisions from the
beginning. The department had a strong start but then struggled to maintain momentum, before
effectively abandoning the approach altogether. In contrast, other case study departments either
struggled consistently or improved with time; Department D was the only department to regress.
Over the first year of the COMPSTAT process, the general manager and other senior leaders
were quite forceful in COMPSTAT meetings, pushing staff to dig deeper into the metrics. The
general manager thought performance management reforms could be useful to the department,
beyond the goals set forth by the Mayor’s Back to Basics agenda. During the first year of
reforms Department D held COMPSTAT meetings every six weeks, and the department’s
leaders repeatedly attempted to move their staff towards analyzing metrics for problem-solving.
However, it did not appear that front-line staff within Department D adopted this style of
thinking over this period of time.
Eighteen months into performance management reforms Department D started to face
both internal and external challenges that stalled and effectively killed its use of performance
metrics for decision-making. The department got into a protracted debate with the Mayor’s
Office and the Innovation and Performance Management Unit as to how effective its
performance management system was. This conflict led to Department D halting its regular
COMPSTAT meetings and considering how to re-tool them. Once the COMPSTAT meetings
stopped, however, they did not resume during the last eighteen months that the department was
observed. In discussions, senior leaders noted that once the department got out of the
habit of meeting regularly to analyze and attempt to utilize data, inertia made it effectively
impossible to re-start the COMPSTAT meetings. Concurrently, many of the senior
leaders in Department D who had spearheaded early COMPSTAT meetings either retired or
moved on to other jobs, taking with them much of the built-in institutional memory.
Over the last eighteen months that Department D was studied, the broad performance
management activities that occurred within the department effectively became a counting
exercise. While Department D continued to collect data for metrics, the use of these metrics in
any substantive way ceased. While observation of the department indicated a pause in the active
use of performance management, it is possible that some behind-the-scenes efforts to continue
implementing performance management reforms may have gone unobserved.
3.2.5E Employee Buy-In and Training
At the outset of the Back to Basics initiative, none of the staff within Department D had
received any formal training on performance measurement practices. However, a number of
senior staff at the department did make an informal trip to the Los Angeles Police Department
early on in Back to Basics to study how the police department had used COMPSTAT for roughly
the previous fifteen years. Leaders from Department D found these interactions with the Los
Angeles Police Department helpful and at times overwhelming. Because the police department
had been using performance metrics for an extended period, many of the lessons it attempted
to impart to Department D were more advanced than what the department could absorb
until it built up its own performance management system. The general manager at Department D
hoped to engage in formal training for data-driven practices, but over the course of three years,
this hope was never realized.
As discussed in previous sections, buy-in by Department D staff was bifurcated across
levels of seniority during the first year of reforms. Senior department leaders were very engaged
in pushing performance management reforms. In contrast, middle-managers and front-line
operations staff demonstrated resistance to embracing performance metrics. Tensions between
the different levels of Department D staff played out as senior leaders tried to encourage and
direct other staff in the department to think more critically about the data they were collecting.
Though the relationship was collegial, front-line staff never embraced comprehensive
performance management principles.
3.2.5F Resources
Department D was generally strapped for resources related to performance management
reforms during the Back to Basics initiative. Yet the department did not dwell on this lack of
resources: it was not a recurring theme during interviews and
meetings with department staff. Department leaders were fairly inventive in finding ways to
circumvent traditional resource procurement procedures. Other challenges discussed earlier
appeared to have a more significant impact on the limitation of performance reforms within
Department D.
3.2.6 Department E
3.2.6A Strategic Planning
Department E started working towards performance management reforms considerably
later than most other departments in the City of Los Angeles
participating in the Back to Basics initiative. Most departments submitted initial COMPSTAT
plans in late 2013 or early 2014; Department E did not finalize a COMPSTAT plan until April
2015. At first glance, this delay in planning might be interpreted as the result of the department
struggling with performance management. However, in practice, this delay was the result of an
extensive preparation period within Department E to ensure it was prepared to execute a
performance management system. In early 2014, Department E was singled out by the Mayor’s
Office as a key department in the Back to Basics agenda. For the majority of 2014 Department E
was studied by outside experts, brought in by the Mayor’s Office to examine the issues the
department faced moving forward. These outside experts produced a report in early 2015 that
outlined some of the key obstacles Department E faced and suggested several central goals the
department should link to performance initiatives.
The reports produced by outside consultants became the foundation of Department E’s
COMPSTAT plan that was submitted in April 2015. The department had extensive experience
producing strategic plans, and it had been doing so every two years prior to the Garcetti
Administration. During the research period, Department E accelerated this process and started
producing a strategic plan annually. With a long planning period Department E was able to
produce two complementary plans: a strategic plan and a COMPSTAT plan that connected key
department outcomes to metrics. Specifically, Department E was able to define four key output
metrics that in turn linked to four outcome metrics. Because Department E was considered
central to the Back to Basics agenda, staff in the Mayor’s Office and the Innovation and
Performance Management Unit (IPMU) provided considerable support for a large amount of the
strategic planning done by Department E. Public support of the department’s plans was also a
prominent feature of the Back to Basics public relations strategy.
3.2.6B Metrics
With a long period of time to conceptualize and define metrics, Department E was able to
produce a core set of performance measures related to its central work as a City department.
Because Department E’s functions were fairly straightforward and directly connected to
particularly visible outcomes in Los Angeles communities, the department patterned its metrics
on this connection. Department E created four output metrics, which then linked to four outcome
metrics, with both sets of performance measures tracked in tandem. The metrics were connected
to an overarching scoring system for measuring the overall effectiveness of Department E. The
department used raw data for its metrics, and then input these metrics into the scoring system,
which in turn ranked success on a three-point color-coded scale. This system allowed the
department staff to intuitively track the progress of individual metrics and the overall impact of
work on Department E’s performance as a whole.
Prior to Back to Basics, Department E had collected data sporadically. Some of the data
the department already had on hand could be connected to its newly defined metrics, while other
data was not ultimately useful for the newly conceptualized performance measurement approach.
Once Department E defined its core metrics it allowed for a period of time to collect data before
actively analyzing and employing these metrics. The department initiated a full scoping of data
across the purview of its metrics and created special teams to collect this data. Once the initial
data collection and mapping was completed, about eighteen months into Back to Basics, the
department started to deploy its metrics actively. Subsequently, Department E updated its data
with these specialized data teams quarterly over the final eighteen months studied. The data-
collection process was refined several times over this period using techniques based on best
practices observed in other North American cities.
3.2.6C Data Systems and Information Technology
Department E extensively rebuilt its information technology and data collection systems
as part of performance management reforms. The department conducted this multi-system
overhaul in several ways. First, Department E was able to both adopt and connect itself to two
pieces of existing data infrastructure within the City of Los Angeles. In one instance Department
E used a citywide public reporting tool as a critical data collection mechanism and data
repository. In another case, the department borrowed an established mapping grid system from
another department within the City. In collaboration with the Mayor’s Office, Department E used
this existing grid format to build a geographic information system and link this geographic
interface to the larger data collection apparatus within Department E.
The second avenue that Department E took when it built its data collection systems was
working with outside vendors to bring more technology to its front-line workers. For example,
the department partnered with a vendor to design an application (app) that front-line staff could
use in the field to update department data using staff observations. Many department leaders and
front-line staff agreed that the data system upgrades within Department E had made the
department more proactive and less reactive when it came to executing its core functions. A data-
dashboard was available to all staff within the department which allowed staff to work from a
common data framework. The assistant general manager responsible for performance
management within Department E argued that the department had one of the most sophisticated
data systems of any comparable department in cities across North America. As Department E
moved toward the conclusion of the study period, it released its dashboard to the public to allow
access to open data from the department’s operations.
3.2.6D Integrating Performance Measures Into Decision Making and Leadership
With the long lead time for planning and metric development, Department E did not start
incorporating performance measures into its decision making until more than eighteen months
into performance management reforms. In order to encourage department staff to start
incorporating performance management practices into their workflows, Department E held four
simulated COMPSTAT meetings. These simulated meetings allowed staff a chance to practice
engaging with data, without any direct consequences. In discussions with Department E staff, it
was clear that they appreciated the opportunity to preview how COMPSTAT meetings would
function and to give feedback before the system went “live.”
When Department E started to hold regular COMPSTAT meetings, it was holding up to
sixteen separate meetings per quarter because it was such a large department. The large number
of meetings that occurred across different units within the department led to a high degree of
variability in how effectively Department E staff members were incorporating data into problem-
solving and forward-looking decision making. The department also had very different roles for
different units, which made coordination across units a constant challenge for
Department E.
In early COMPSTAT meetings, most of the data analysis came from senior leaders who
were pushing middle-managers and front-line staff to dig deeper into the data. Most middle-
managers were conducting some level of data-driven analysis, but over the first six months of
COMPSTAT meetings, the actions that were taken as a result of this analysis were fairly
simplistic. In most observations staff members were simply arguing for more attention to be paid
to functions that were struggling based upon measured metrics. Eventually, a handful of units
within Department E started to engage more consistently with higher-level problem solving as a
result of analyzing and applying relevant data. At the conclusion of the research period, the
department tried to replicate this success across a larger range of units.
3.2.6E Employee Buy-In and Training
Over the first eighteen months of the research period, there was little qualitative data
about how Department E employees were reacting to performance management reforms. This
was due to limited activity related to performance management within the department, outside of
a handful of senior leaders. These senior leaders appeared to be very enthusiastic and full of
insightful questions during the initial stages of performance management system development.
The enthusiasm that senior leaders demonstrated early on in the process eventually seemed to
spread to other Department E staff. This broader culture change within Department E appeared
to occur due to a pair of complementary factors. First, senior leaders within the department had
mostly come up through the ranks of the department. As a result, senior leaders understood their
subordinates’ positions well and had developed a strong rapport with their staff. In COMPSTAT
meetings the atmosphere was always collegial and appeared to put most Department E staff
members at ease during the transition to performance management practices. Second, the
significant resources (see next section) that were allocated to build the performance management
system within Department E made the staff working towards performance management reforms
feel valued and supported.
Department E did not conduct any internal training programs for staff on
performance management practices. The aforementioned simulated COMPSTAT meetings were
the one effort within the department to expose staff to performance management practices in a
structured manner. Department E was one of two departments observed that did send a number
of its staff to a central training program run out of the Innovation and Performance Management
Unit. It was unclear what impact, if any, this training for a handful of staff had on performance
reforms in Department E.
3.2.6F Resources
It was clear that Department E had access to the greatest financial and analytic
resources of all the case study departments. Because it was a priority Back to Basics department,
the Mayor’s Office directed several million additional dollars to the department’s performance
management system. Part of this infusion of resources was in response to early data that showed
Department E had previously provided disparate service to certain neighborhoods stratified by
socio-economic status. Because it was such a large department, Department E was also able to
leverage a number of staff members into analytic positions that could support the Back to Basics
agenda. There were also ongoing resource infusions into certain functions of the department
during the second eighteen months of observations based on identified needs. This was in
contrast to other departments that identified needs but did not receive financial support from the
City to address them.
3.2.7 Department F
3.2.7A Strategic Planning
When the Back to Basics initiative began, Department F was dealing with questions
regarding the validity and integrity of data that was collected by the department during the
previous City administration. The Mayor’s Office decided that an outside consultant should be
brought in to review the past data and help the department move forward with performance
management reforms. As a result, Department F’s complete COMPSTAT plan was drafted by
this outside consultant. The plan examined best practices in a range of comparable
departments in cities across North America and laid out a fairly detailed set
of metrics and goals based on how those departments had
improved services using performance management systems. It was a roadmap for how
Department F could utilize performance management to achieve its goals. However, the early
focus of Department F was instead on ensuring the validity of the data that the outside consultant
had been brought on to examine. As such, for the first year of Back to Basics, Department F
adopted a fairly defensive posture when it came to strategic planning.
Starting in early 2015, more than a year into Back to Basics, Department F had moved
beyond the need to restore integrity to its previous data and started to push into preparing for a
more comprehensive COMPSTAT plan. At this time senior leaders in Department F went back
to the outside consultant’s COMPSTAT plan and used it as a base for building a detailed
strategic plan for the department. Department F produced a COMPSTAT plan that laid out the
department’s goals and described how these could be achieved through the use of related
performance metrics and data-driven practices. Despite getting off to a slow start and spending
the first year of Back to Basics combing through previous work to ensure its reliability,
Department F did end up with a solid foundation as a result of the outside consultant’s proposals
and subsequently the efforts of the department’s senior leadership.
3.2.7B Metrics
Department F ended up with a fairly well laid out COMPSTAT plan and strategic plan,
but holding itself to the benchmark metrics it had established proved to be challenging during the
course of research. After validating previous data within the department and establishing
benchmarks based on best practices, Department F consistently fell short when attempting to
reach these metrics. As discussed previously, the first year of Back to Basics was spent
standardizing data and ensuring its reliability. Several of the core metrics that Department F had
previously tracked and wanted to continue to track into the future were defined differently in
various units across the department. The outside consultant helped the department standardize
these existing metrics; the result was a set of core output and outcome metrics that matched
those used by similar departments in other cities.
Department F ended up with three levels of metrics. The first level was public data that
showed averages for the entire department. The second level was broken out by units within
Department F and also available to the public. The final level of data was for internal use only
and was used in Department F’s COMPSTAT meetings. Many of the department’s goals were
included in this third level and not shared with the public for fear that it might make the
department look bad. In general, Department F was only reaching the benchmark for best
practices on most of its metrics about 25% of the time. Department leaders were worried that the
public would not accept this low standard. While department leaders believed the data was a
valuable tool to help improve the department’s work, this fear of the public response was
pervasive at times. Some of the reasons why Department F struggled to achieve its benchmarks
are discussed in the performance measures and decision-making section.
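The three reporting levels can be illustrated with a small sketch. The unit names, values, and benchmark figure below are hypothetical; only the tiered structure (public department-wide averages, public per-unit figures, and internal figures paired with goals) comes from the case study.

```python
# Illustrative sketch of Department F's three reporting levels:
# (1) public department-wide average, (2) public per-unit figures,
# (3) internal per-unit figures paired with benchmark goals.
# Unit names, values, and the benchmark are hypothetical.

unit_results = {"Unit 1": 88.0, "Unit 2": 64.0, "Unit 3": 73.0}  # e.g., % on time
internal_goal = 85.0  # benchmark drawn from best practices elsewhere

def level_one(results):
    """Public view: a single department-wide average."""
    return round(sum(results.values()) / len(results), 1)

def level_two(results):
    """Public view: the same data broken out by unit."""
    return dict(results)

def level_three(results, goal):
    """Internal view: per-unit figures with benchmark attainment flags."""
    return {unit: {"value": v, "goal": goal, "met": v >= goal}
            for unit, v in results.items()}

print(level_one(unit_results))  # 75.0
print(level_three(unit_results, internal_goal)["Unit 2"]["met"])  # False
```

Note how the tiers embody the department's disclosure choice: the benchmark comparison, the part most likely to "make the department look bad," exists only at the internal third level.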
It is important to note that despite the first year spent validating and
extensively defining metrics, front-line staff and people in the field reported that there
were still subtle differences in how certain units interpreted the existing data. This allowed for a
potential situation where one unit would input data based on one definition and a second unit
would input the same data using a slightly different set of criteria. Trying to filter down standard
data definitions and clean up small differences in the data was an ongoing challenge Department
F faced during the three years it was studied. Department F also worked on designing additional
performance improvement metrics unique to Los Angeles that were not a part of nationally
recognized benchmarks. There was considerable debate about whether to compare across or within
departmental units during this planning. By the end of the research period, Department F decided
to compare within units and set this standard moving forward, though it was unclear how
effective this structure ultimately turned out to be. Toward the end of the research period,
Department F developed a technical screening tool that ensured only data that had been
validated properly in three overlapping ways was counted.
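The screening tool is described only as requiring three overlapping validations before data is counted. The sketch below illustrates that general pattern; the three specific checks, field names, and unit list are hypothetical stand-ins for whatever Department F actually implemented.

```python
# Illustrative sketch of a screening tool that counts a record only after it
# passes three overlapping validation checks. The specific checks are
# hypothetical; the source states only that three overlapping validations
# were required before data was counted.

def has_required_fields(record):
    return all(record.get(f) is not None for f in ("id", "unit", "value"))

def value_in_plausible_range(record):
    return isinstance(record.get("value"), (int, float)) and 0 <= record["value"] <= 1000

def unit_is_known(record, known_units=("Unit 1", "Unit 2", "Unit 3")):
    return record.get("unit") in known_units

def screen(records):
    """Return only records that pass all three overlapping checks."""
    checks = (has_required_fields, value_in_plausible_range, unit_is_known)
    return [r for r in records if all(check(r) for check in checks)]

records = [
    {"id": 1, "unit": "Unit 1", "value": 40},   # passes all three checks
    {"id": 2, "unit": "Unit 9", "value": 12},   # unknown unit
    {"id": 3, "unit": "Unit 2", "value": None}, # missing value
]
print(len(screen(records)))  # 1
```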
3.2.7C Data Systems and Information Technology
Adapting and improving Department F’s existing information technology infrastructure
was a substantial challenge during the course of Back to Basics. The core computer system that
much of Department F ran on was built in the 1980s. There was an ongoing effort within
Department F to upgrade this information system to allow better tracking of core functions.
Though this core computer system did not directly facilitate performance metric collection, it
was required for Department F’s communications, and the accuracy of the department’s metrics
depended on the system’s proper performance. For example, when the core computer system for Department F
malfunctioned at one point during the research period, one unit within the department started
getting double the work it expected. Work functions within the department that should have been
flowing fairly equally to two different units were instead flowing to one. This error led to metrics
for both units being inaccurate.
On top of attempting to update its core computer system, Department F also worked to
develop a separate data collection system for performance metrics. For the first eighteen months
of performance management reforms, this new data collection system was a work in progress.
After eighteen months Department F had a reasonably robust data collection system that
included a dashboard that all department staff could access. However, department leaders aspired
to have this data collection system and associated dashboard update in real-time. Department F
worked toward this real-time update capability during the second eighteen months of the
research period. At the end of the study timely updates of data were available for some, but not
all, metrics across Department F. During the second half of the research period, Department F
also initiated a program to automate much of its data collection. This program was a result of
senior leaders within the department realizing that manual collection by certain groups of staff
led to inaccurate data input. An automated system was designed to make sure as soon as a unit
began work, the metrics related to this work would be automatically tracked. This system was
implemented late in the research period and appeared to fix some, but not all, of Department F’s
data validity problems.
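The automation described above, in which metrics begin accruing the moment a unit starts work rather than through manual entry, can be sketched as an event hook. All class, method, and field names below are hypothetical.

```python
import datetime

# Illustrative sketch of event-triggered metric capture: starting and
# finishing a job records the metric automatically, removing the manual
# entry step the text identifies as a source of inaccurate data.
# All names and fields are hypothetical.

class MetricLog:
    def __init__(self):
        self.entries = []

    def record(self, unit, job_id, event, when):
        self.entries.append({"unit": unit, "job": job_id,
                             "event": event, "time": when})

class Job:
    def __init__(self, unit, job_id, log):
        self.unit, self.job_id, self.log = unit, job_id, log

    def start(self, when=None):
        # Metric is captured as a side effect of starting the work itself.
        self.log.record(self.unit, self.job_id, "started",
                        when or datetime.datetime.now())

    def finish(self, when=None):
        self.log.record(self.unit, self.job_id, "finished",
                        when or datetime.datetime.now())

log = MetricLog()
job = Job("Unit 1", "J-1001", log)
job.start()
job.finish()
print([e["event"] for e in log.entries])  # ['started', 'finished']
```

The design choice illustrated here is that no separate data-entry step exists to be skipped or mis-keyed: the operational action and the measurement are the same event.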
3.2.7D Integrating Performance Measures Into Decision Making and Leadership
In discussions with Department F front-line staff and leaders, staff at both levels
repeatedly indicated their belief that the department had the right leadership
team in place to execute data-driven reforms. Once Department F worked through its existing
data validation issues it brought in a new leader to fill a senior position that was created to
oversee the department’s COMPSTAT plan. This chief COMPSTAT officer was highly regarded
within the department and had gained credibility based on previous work with other well-known
COMPSTAT leaders within the Los Angeles Police Department. In discussions with Department
F’s chief COMPSTAT officer, they referenced Behn’s (2003; 2014) four principles of
COMPSTAT and other existing research on the subject. Department F’s COMPSTAT leader was
well versed in the principles of performance management systems. The creation of a new senior
level position to oversee their performance management system was unique to Department F
when compared to other case study departments. This position provided a layer of institutional
legitimacy that other departments did not possess.
Earlier in this case study, it was noted that Department F struggled to reach the metric-
based goals it had outlined in its COMPSTAT plan. This lack of improvement in metrics across
the board did not appear to be the result of failing to fully integrate performance management
systems into the department. In fact, through intensive analysis, Department F gathered valuable
insight into why it struggled to achieve its benchmarks. The performance management
team within Department F was the first group within the department to start working
thoughtfully and intensely with data. This unit regularly went into the field to investigate why
Department F struggled to meet certain benchmarks. It discovered structural issues that were
hard for the department to control and other factors that the department could work to mitigate.
The department’s COMPSTAT unit proposed and ultimately spearheaded many changes to
existing procedures and to the way the department’s core work was approached. The luxury of
having a dedicated performance management group and the resulting capacity for analysis was
another unique feature of Department F as it related to other case study departments.
Integrating performance measures into problem-solving did not happen as quickly in the
majority of Department F's units. The transition to this integrated approach became more
apparent eighteen months into the process when the department began holding monthly
COMPSTAT meetings. During the first wave of meetings, most middle-managers and front-line
staff were hesitant to engage with data. In general, most of the early COMPSTAT meetings were
counting exercises for the department. However, after about six months of COMPSTAT
meetings and consistent prodding and encouragement from senior leaders, more staff in
Department F started to engage the data in their work. There was increased discussion about
using data to achieve the core mission of the department, and unit leaders started to reorient
some of their operations based on data analysis. Many members of the department talked about
moving beyond the narrow focus of just “hitting the numbers” and instead pushing to achieve
overall improvements for Department F.
3.2.7E Employee Buy-In and Training
Getting staff to buy into the Back to Basics agenda started slowly because Department F
did not begin using performance management until more than a year into Back to Basics. The
head of Department F’s COMPSTAT unit believed that COMPSTAT initially carried negative
connotations for many staff and wanted to adapt performance management to fit the culture of
Department F. In early COMPSTAT meetings many staff members displayed
opposition to using performance management practices. In one instance a middle-manager was
asked what he was going to do about the poor metrics in his unit and he replied that it was not his
problem. The COMPSTAT chief at Department F had this manager transferred to a less desirable
unit to demonstrate that the department was serious about employee buy-in. Such resistant
attitudes remained fairly prevalent during the first six months of performance management reforms in
the department.
The two-year mark for Back to Basics coincided with one year of active performance
management in Department F and a noticeable shift in the organizational culture. Interviews and
meeting observation conducted during this time revealed two types of attitudes toward
performance management within the department. The majority of middle-managers and front-
line staff had adopted a grudging acceptance of performance management reforms and believed
it would be a part of Department F for the long-term. This shift in attitudes was confirmed by
discussions with the Department F chief of COMPSTAT who noted that there was little
pushback from staff after the two-year mark. A smaller number of Department F staff fully
embraced performance management. One unit leader, for example, started to produce reports
about the data within their unit without any prompting and found that bringing these
reports made the monthly COMPSTAT meetings more productive.
3.2.7F Resources
Compared to the other case study departments, acquiring the necessary resources for
performance management reforms within Department F did not appear to be a significant
challenge. When Department F started to execute its COMPSTAT plan a year into Back to
Basics, the unit responsible for COMPSTAT had only one analyst and the designated
COMPSTAT manager. Six months later this unit had a total of six designated staff and was
hoping to fill two additional positions. Department F seemed to have a wealth of staff analytical
resources to help execute its performance management systems. Department F received a large
infusion of money to help build information technology infrastructure during the course of Back
to Basics. The department was also allocated resources to hire an outside consulting firm to
review its past data and to help develop a strategic information technology plan later in
the Back to Basics process. Throughout the research period, there were few comments from
members of the department about a lack of resources related to performance management
reforms.
Table 3.2 Qualitative Ratings of Case-Study Departments

                                                   Dept. A    Dept. B     Dept. C     Dept. D     Dept. E    Dept. F
Strategic Planning                                 Poor       Good        Acceptable  Good        Very Good  Good
Metrics                                            Very Poor  Acceptable  Good        Acceptable  Good       Good
Data Systems & IT                                  Very Poor  Acceptable  Poor        Good        Very Good  Acceptable
Integrating PM Into Decision Making & Leadership   Very Poor  Good        Good        Very Poor   Good       Good
Employee Buy-In & Training                         Very Poor  Good        Good        Very Poor   Good       Acceptable
Resources                                          Very Poor  Poor        Poor        Acceptable  Very Good  Very Good
3.3 Summary and Discussion
This section summarizes the findings from each of the six case-study departments with
respect to the themes covered in the preceding sections. Table 3.2 provides a qualitative
description characterizing the overall results for each case-study department for the themes
covered in this chapter. The table is based upon a five-point Likert scale that corresponds to the
following levels of implementation success for each of the themes: Very Poor; Poor; Acceptable;
Good; Very Good. The next paragraphs consider the overall success of each department’s
reforms based on the factors in the table.
The first point to note in interpreting Table 3.2, and the analysis of the case-study
departments as a whole, is that a high rating in most thematic categories did not automatically
translate into a better functioning performance management system. In general, however,
departments with a greater number of high qualitative ratings did tend to have more successful
performance management systems. This section now considers each department in turn,
examining how it fared on the themes observed and on performance reforms overall.
Based upon the factors studied and as a whole, Department A clearly struggled in
implementing a performance management system. Department A did not score higher than poor
in any one thematic area and was judged to be very poor in five of the six areas considered. It
was clear that because Department A was a small department with few employees and few
resources its ability to get any reasonable momentum connected to reforms was hampered.
Additionally, Department A was a coping organization under Wilson’s typology, with
hard-to-quantify responsibilities, which appeared to be another large impediment. The other very
poor results for Department A seemed to flow from these factors. It is safe to characterize the
reform attempts in Department A as a failure during the three years the department was studied.
The other department that clearly ended up struggling with the implementation of
performance management reforms over the course of Back to Basics was Department D.
However, Department D reached this point far differently than Department A. Department D’s
strategic planning and performance metric development were rated good and acceptable, respectively. As such, in
the early stages of reform, the department appeared to set itself up for a reasonable level of
implementation success. The resources and data systems at Department D's disposal also pointed
to the possibility of moderate success with performance management implementation. What
ultimately hampered Department D was leadership struggles, lack of employee buy-in, and
unfavorable political attention from the Mayor's Office. These factors halted Department D's
performance reforms in their tracks after the department had displayed promise during the first
half of the research period.
Outside of Departments A and D, each of the other four case-study departments displayed
some level of overall success with performance management reforms. In considering these four
departments in relation to one another, Department E was clearly the most successful department
within Los Angeles when it came to implementing a performance management system. It is also
the department that appeared to most closely follow the steps for planning a performance
management system as set out by Hatry (2006). Department E had a long period of time (over a
year) to structure its performance management system prior to it being put in place in order to
follow these planning principles. Once the system was in place, the department’s size and ample
resources were leveraged effectively. And with relative employee buy-in, strong leadership, and
mayoral attention, Department E was able to keep momentum. Finally, without going into
specific details about the department’s core function, Department E was one of the most clearly
defined production organizations under Wilson’s typology during the course of the research. It
seems likely this ease of measuring outputs and outcomes was also a substantial contributing
success factor for Department E.
Department F was the department, after Department E, that had the most going for it
based upon the ratings in Table 3.2 when it came to factors contributing to performance reforms.
However, as compared to Department E, Department F only achieved moderate success when it
came to performance management implementation overall. In the course of the research, it was
hard to place an exact reason for why Department F was not as successful as many of the factors
studied indicated it should be. It is possible that Department F set the benchmarks for its
performance management system too high, and by extension, its success seemed modest with
respect to these benchmarks. On the other hand, there may have been missed factors in the
course of research that impaired the department’s progress. In any case, Department F’s
performance management system was a moderate success, but never quite achieved what it
seemed to be capable of.
Finally, we turn to Departments B and C, which in many respects are the most interesting
case-studies from this research. While Department A had an obvious set of factors linked to
failure and Department E had an obvious set of factors linked to success, both Department B and
C had a mix of good and bad factors in the themes studied, and still achieved a fair amount of
success during reform implementation. Thus, it is from these departments that we are likely to
learn the most about overcoming impediments and key factors to successful performance
management systems.
Based upon the overall budget size, both Departments B and C appeared to have ample
financial resources, but over the course of the study it became clear that performance
management specific financial resources in each department were limited. Yet, each of these
departments appeared to mostly overcome this dearth of financial resources. In the same vein,
the data and information technology systems in each department were on the lower end of the six
case-study departments, another obstacle that was generally worked around. In the course of
studying the thematic areas covered in this chapter related to Departments B and C, there were a
handful of key factors that helped each department overcome obstacles and build reasonably
well-functioning performance management systems. First, despite having limited financial
resources earmarked for reforms, both departments were large based upon the number of staff.
This large pool of human resources and extra capacity seemed to be a significant factor in
pushing reforms forward. The other two critical factors for Departments B and C that seemed to
flow together were leadership and buy-in from a large number of operational level staff.
Leadership was both critical in finding creative ways around the lack of financial resources in
each department, and for encouraging employees to embrace reforms. Once staff in Departments
B and C started to take ownership of many of the performance management functions, the
reforms within the departments became more cemented. Overall, Departments B and C provided
a window into several key ways organizations could be successful when it comes to performance
management reforms.
In considering each of the six case-study departments, there are answers to the
overarching research questions asked in this dissertation. These questions are considered in the
context of these case-studies in the final sections of this chapter.
3.3.1A How is success defined when it comes to performance management
reforms?
The qualitative, longitudinal approach to studying the departments in this chapter offers
several insights into how success should be defined for performance management
reforms. The fact that the case-studies were carried out over a three-year period suggests that
success for performance management systems should not be judged as a snapshot but as an
evolution. Performance management systems evolve over time and can do so in good and bad
directions. Judging Department D over the first eighteen months of Back to Basics might have
resulted in the assumption that it would become a reasonably successful department. Yet,
eighteen months later Department D's performance management system had essentially ground
to a halt and clearly was not successful. In contrast, many of the other case study departments
displayed improvement over the course of implementing performance reforms. This finding
lends credence to previous literature that preaches patience for performance management
reforms (Moynihan, 2006). However, given the contrasts in how departments fared over time, this
dissertation continues the trend in the literature of mixed findings on how time impacts reforms
(Bourdeaux, 2008; Dull, 2009; Moynihan, 2006).
Another dichotomy in the literature is whether success is defined by implementing
process-based performance reforms or by actively incorporating performance information into
decision-making (Behn, 2003; 2005; Hatry, 2006; Hood, 2012). It is possible that this is in fact a
false dichotomy as findings from this chapter seem to indicate each definition of success is
required for the other. Without proper performance management processes, it is not possible to
make informed decisions, and making data supported decisions strengthens the ongoing
durability and proper reinvention of processes. Ultimately, actively using performance
information may be the top-level definition of success, but processes are also a vital measure
undergirding reforms.
3.3.1B What factors are central to successful performance management systems
and in what combination?
This chapter provides important differentiation among factors that lead to successful
performance management systems and helps to understand in what combination these factors
create a recipe for success. If only Department E had been observed in the course of the research,
we might conclude that all the themes covered in this chapter are required for successful
performance management reforms. And, perhaps ideally each of the factors present for
Department E should be strived for in the course of reforms. Having good strategic planning,
performance metrics, data systems, leadership, employee culture, and organizational resources is
a good thing for any organization in the course of implementing reforms. However, things like
organizational size, or being a production type organization like Department E are not
necessarily under the control of organizations that undertake performance reforms.
As such, this chapter can lend insight into which factors stand out in the course of success
and in what combination. The evidence from the departments studied shows that in considering
organizational capacity, a department’s size is a more important success factor than its financial
resources. This both expands the literature, as organizational size has been understudied with
respect to performance reforms, and contrasts with some of the previous work that lends
importance to financial resources (Boyne, 2003; Boyne & Gould-Williams, 2003; Gerrish,
2016).
In terms of other factors to consider based upon the case-study departments, good metrics
and, to a lesser extent, strategic planning also seem to be core factors that assist in performance
management success. Between the two, good metrics seems to be more essential, as Department
D still struggled with reforms even after having a solid strategic planning process. Choosing the
correct metrics has been heavily emphasized in the literature as a central component of well-
functioning performance management systems (Behn, 2014; Hatry, 2006).
In a qualitative context, leadership and employee culture in the form of embracing
performance management reforms would seem to be the most important factors that influence
success. All the case-study departments that exhibited a level of overall success during Back to
Basics had both strong leaders and over time a core group of employees that bought-into the
reform process, either enthusiastically or grudgingly. These two factors also seem to feed off one
another, with high-level leaders pushing reforms at first, leading to an eventual change in culture
among a broader group of staff, some of whom eventually took up leadership roles themselves.
These findings build upon two aspects of the literature, the general importance of leadership
during performance reforms (Behn, 2014; Dull, 2009; Moynihan & Ingraham, 2004; Moynihan
& Lavertu, 2012; Sanger, 2013), and that leadership can drive organizational culture in favor of
reforms (May & Winter, 2007; Moynihan & Pandey, 2010).
Finally, with respect to factors that may have less of an impact, data systems and training
appeared to be less important to the success of performance management implementation.
While neither of these findings is definitive, departments
were able to have relative success with one or both of these factors being less than ideal during
the course of Back to Basics. In addition, the Innovation and Performance Management Unit
(IPMU) closed down the two training programs it was running for City staff after only a year. It
seems likely that in the long run having well-run data systems and some level of training is
still beneficial for a performance management system. But the findings from this
chapter weaken the evidence that these are crucial factors for success (Cavalluzzo & Ittner, 2003;
Julnes & Holzer, 2001; Kroll & Moynihan, 2015; Melkers & Willoughby, 2001; 2005).
The previous paragraphs have supported and extended the literature on success factors for
performance management reforms. By providing answers on a combination of factors for
success, this chapter can move thinking about performance reforms into new areas. The observed
combination of factors that led to success across the four case-study departments (Departments
B, C, E, and F) deemed to have a moderate level of success or better, were: strategic planning,
good metrics, leadership, and employee buy-in. Across Departments B, C, E, and F of these four
factors, each department was observed with at least three of them at good or better, with the
fourth being at least acceptable. While any organization may want to have all the factors that
Department E had in the course of this research, the recipe of themes discussed above seems to
be a reasonable path to consider for organizations that seek performance management
implementation success. The specific combination of strategic planning, good metrics,
leadership, and employee buy-in as a combination for successful performance management
implementation pushes the literature on performance management reforms forward.
3.3.1C How do organizations overcome obstacles that arise while implementing
performance management reforms?
Some of the themes discussed in the previous section about success factors also relate to
how organizations can overcome impediments to performance management reforms. The two
relevant findings from this case-study chapter consider the types of impediments that are easier
to overcome, and the success factors needed to overcome impediments.
When considering overcoming obstacles to performance management reforms, it is
important to examine what impediments can be realistically overcome. Much of the literature on
obstacles to performance management systems identifies specific impediments to reforms but
struggles to explain how to overcome these impediments (Diefenbach, 2009; Moynihan, 2006;
Radin, 2006). This research contributes to the literature by illuminating a handful of obstacles
that can be overcome. Two of the case-study departments (Departments B and C) were able to
overcome poor data systems and a lack of financial resources. While this is not a definitive
finding, it does indicate that over a fairly long period of time organizations can work around
these impediments and still reach a reasonable level of success. In turn, we can see that a dearth
of leadership and lack of employee buy-in toward reforms are impediments that may be harder to
overcome as Departments A and D tended to lack these factors, leading each to suffer from a
challenging implementation process. Each of these findings brings additional nuanced
understanding to the research on performance management.
In turn, it is also apparent that strong leadership and good employee buy-in can help to
overcome impediments. In the cases observed, Departments B and C both overcame negative
factors largely because of these two positive factors. This finding continues to support the strong
case in the literature for leadership for successful performance management reforms (Behn,
2014; Sanger, 2013), and the impact of leadership on organizational culture (Moynihan &
Ingraham, 2004; Moynihan & Lavertu, 2012). In addition, these findings add specifics to the
literature on what kinds of impediments leadership and organizational culture can help to
overcome.
Chapter Four: Examining Success Through a Departmental
Lens Using Qualitative Comparative Analysis
A series of case studies provide a window to examine performance management reforms
at the departmental level. While case studies can delve into individual departments and provide
granular insight into performance management reforms, a different methodological approach is
required to study implementation at the departmental level systematically. Traditionally,
multivariate regression models have been employed to study performance management reforms
across the institutional level, departmental level, and operational level (e.g., James, 2011;
Moynihan & Lavertu, 2012; Moynihan, 2013). While these regression models have explanatory
power through the linkage of a handful of independent variables with dependent variables,
these models are ill-suited to identifying the causal conditions for success across a range of distinct
City departments. Performance management reforms are complex systems to understand, and
previous research has not applied a configurational lens to the study of these reforms. To better
understand the conditions for success in implementing performance reforms, a qualitative
comparative analysis (QCA) approach was adopted to study the set of departments within the
City of Los Angeles.
The QCA methodology acts as a bridge between the primarily qualitative case study
analysis conducted in the previous chapter and the cross-sectional regression approach employed
in the subsequent chapter. Since its initial development 30 years ago, QCA has been widely
applied in sociological work, but this method has only recently begun to be applied to questions
of public management (Andrews, Beynon, & McDermott, 2015; Schlager & Heikkila, 2009).
QCA is developed from a distinct set of epistemological and analytic premises. In contrast to
[Footnote 5: An early draft of this chapter was co-authored with Juliet Musso and Chris Weare.]
traditional case study approaches that seek complete explanations of a small number of highly
complex cases, QCA strives to compare a larger number of cases in a rigorous
manner that reveals causal explanations. It provides analytic rigor not through the variable-based
regression approach of cross-sectional statistical analysis, but by using Boolean logic to identify
necessary and/or sufficient causal factors linked to specific outcomes.
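The Boolean set-relational tests described above can be illustrated with a minimal crisp-set sketch. The six cases and their binary condition/outcome codings below are hypothetical, invented purely for illustration, and are not the ratings of the departments studied in this dissertation:

```python
# Minimal crisp-set illustration of QCA-style necessity/sufficiency tests.
# The cases and their condition/outcome codings are hypothetical.

# Each case: (leadership, buy_in, resources) -> successful reform (1 = yes)
cases = [
    ((1, 1, 0), 1),
    ((1, 1, 1), 1),
    ((1, 0, 1), 0),
    ((0, 1, 0), 0),
    ((0, 0, 1), 0),
    ((1, 1, 0), 1),
]

def consistency_sufficient(cases, idx):
    """Share of cases with condition idx present that also show the outcome.
    A value of 1.0 means the condition is (perfectly) sufficient."""
    with_cond = [(c, y) for c, y in cases if c[idx] == 1]
    return sum(y for _, y in with_cond) / len(with_cond)

def consistency_necessary(cases, idx):
    """Share of successful cases in which condition idx is present.
    A value of 1.0 means the condition is (perfectly) necessary."""
    successes = [(c, y) for c, y in cases if y == 1]
    return sum(c[idx] for c, _ in successes) / len(successes)

# In this toy data, leadership (index 0) is necessary but not sufficient.
print(consistency_necessary(cases, 0))   # 1.0
print(consistency_sufficient(cases, 0))  # 0.75
```

In full QCA applications these consistency scores are computed over truth-table rows and combinations of conditions, and specialized software (such as the fs/QCA program or the R QCA package) performs the Boolean minimization; the sketch above shows only the underlying set logic for single conditions.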
Within the regression framework, differences in organizational outcomes (or other
outcomes of interest) are seen to arise from a common functional relationship that relates
outcomes to variation in explanatory variables and some random error. The regression
framework emphasizes parsimony by focusing in on a small number of influential variables and
generalizability by estimating a common functional form. Local variation in cases is treated as
noise rather than involving particularities that may require explanation. Most importantly,
regression coefficients are understood to represent an underlying and common continuous
function wherein changes in explanatory variables change outcomes on the margin.
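In symbols, the regression framework described above assumes that a single functional form governs all cases (a generic illustration, not a model estimated in this dissertation):

```latex
Y_i = \beta_0 + \beta_1 X_{1i} + \beta_2 X_{2i} + \cdots + \beta_k X_{ki} + \varepsilon_i
```

where \(Y_i\) is the outcome for case \(i\), the \(X\)'s are explanatory variables, \(\varepsilon_i\) is random error, and each coefficient \(\beta_j\) represents a common marginal effect assumed to hold across every case.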
The central feature of QCA is that it permits the analysis of causal complexity. While the
regression equation forces all cases to be described by a single functional form, QCA can
identify multiple, different combinations of causal factors that can lead to the organizational
outcome of interest (Ragin, 2000; Schneider & Wagemann, 2010).
In fact, depending on the
combinations, individual factors can have opposing effects under different circumstances. These
features of QCA make it a particularly appropriate avenue for public management research.
Public management scholars are often interested in complex cases that involve a range of
particular features not easily reducible to a parsimonious set of variables, yet strive to identify
[Footnote 6: In theory, it is possible within the regression framework to develop more flexible models that include case-specific dummy variables and interaction terms. Data limitations, however, typically preclude the estimation of such complex models.]
important commonalities between cases. Both the complexity and configurational nature of
potential success factors for reforms are central reasons to apply QCA. In studying the Back to
Basics performance management reforms within Los Angeles, QCA is particularly useful in
understanding how departments across the City have successfully approached reforms. The set of
departments provides a number of cases for the QCA framework, enabling a better understanding
of the input factors that can lead to "recipes for success" when it comes to performance
reforms.
The rest of this chapter lays out the set-theoretic case approach to studying departmental
level reforms within Los Angeles. First, the chapter returns to the literature, supplemented by
insight from the case study analysis to discuss which input factors were included in the QCA
model, that is, the “recipes” for successful performance management implementation. The
chapter then discusses the logic of QCA, the data, variable construction, and the specific model
developed in this study in greater depth. Finally, the chapter turns to the outcomes of the model
and discussion of the findings.
4.1 Linking the QCA Approach to Performance Management
Reforms
Chapter One of this dissertation laid out a detailed discussion of factors for successful
performance management system implementation. This section briefly returns to that
examination of the literature as the foundation of the input variables used to study departmental
cases within Los Angeles. This discussion also draws on themes developed in the preceding case
study analysis which reinforce the knowledge from the literature. Using this knowledge and
drawing from the survey instrument that undergirds this research, a number of input variables are
scoped. The set-theoretic approach of QCA relies on the definition of these input variables based
upon substantive researcher knowledge of the cases at hand, making the intersection of case
study analysis and literature review especially well-suited to this research approach.
4.1.1 Leadership
The first causal factor considered is the impact of leadership when it comes to the
successful implementation of performance reforms. Leaders set the mission for performance
management reforms and link the needed resources to this mission (Sanger, 2008; 2013). This
organizational mission to move forward data-driven reforms is further strengthened when leaders
at various levels of an organization push for simultaneous mission coherence (Moynihan &
Pandey, 2010). Operational staff in public organizations struggle to execute the goals connected
to performance management reforms without consistent and reliable support from leaders
(Moynihan & Ingraham, 2004; Bourdeaux & Chikoto, 2008; Dull, 2009; Moynihan & Lavertu,
2012). Finally, the impact of leadership spills over to other subsequent causal factors discussed
in this chapter, such as promoting accountability and innovation (Behn, 2003). These findings
from the literature are supported by the case study analysis conducted in the previous chapter,
where persistent leadership was a feature of departments that pushed forward their performance
management systems successfully.
4.1.2 Analytic Capacity
Staff expertise and the latent organizational capacity related to this function are a key
factor cited in the literature for successful performance management systems (Melkers &
Willoughby, 1998; 2001; 2005). The analytic capacity of a set of staff members within an
organization is a core component of a well-run data-driven system (Lu, 2008). Deficiencies in
analytic resources demonstrate the opposite impact, with organizations that lack this capacity
struggling to implement performance reforms (Melkers & Willoughby, 2005). In the course of
the case study analysis in the last chapter, departments with dedicated analytic units or staff
appeared to be relatively better positioned when it came to the functioning of data-driven
systems.
4.1.3 Good Metrics
Performance management systems are built on the foundation of well-designed metrics.
Behn (2014) has identified five crucial features for useful data and metrics: (1) Timely and
Updated Frequently, (2) Comparability, (3) Trustworthiness, (4) Low-Cost, and (5) Integrated
into Decision Frameworks. Hatry (2006) also classified metrics that lead to successful
performance management systems as measurable, meaningful, and manageable. The integration
of these two frameworks leads to best practices in metrics development and the collection of
data. Case study departments within the City that invested in well-crafted metrics and diligently
collected data demonstrated improved usage of performance metrics to drive their departmental
agendas.
4.1.4 Innovative Culture
The transition from simple performance measurement to the full practice of performance
management is often predicated on an organization adopting an innovative culture. Public
managers who know what questions to ask of the metrics and understand that data can have
multiple answers are essential to creating this innovative culture (Moynihan, 2008). An
interactive dialogue that constantly frames what the data means to an organization is another
predictor of an innovative performance management system (Moynihan, 2008). Beyond this, it
has been reasoned that performance management reforms should empower staff to take risks
(Sanderson, 2001; Sanger, 2008). An innovative culture proved broadly difficult for case-study
departments to develop over the course of the study.
4.1.5 Strategic Planning
As part of the foundation of data-driven systems, strategic planning is described as a
critical pillar of ensuring the success of performance management reforms. Prior research
demonstrates that strategic planning during the initial phase of performance management reforms
led to higher uptake in performance measurement utilization in subsequent phases of reforms
(Sole & Schiuma, 2010). Hatry (2006) showed that skills gained during the strategic planning
process are valuable to organizations further along in their reforms. Case study departments
that engaged in strategic planning and the development of a COMPSTAT plan early in the Back to
Basics process exhibited more advanced performance management systems later on.
4.1.6 Political Attention
Though systematic research has not coalesced around one theory of political attention
connected to data-driven reforms, it has still been identified as a relevant factor in promoting
performance management systems. At the level of individual departments, research has
considered political attention a driver of performance information utilization (Moynihan &
Pandey, 2010). Political stakeholders are capable of driving performance management reforms in
positive directions (Bourdeaux & Chikoto, 2008). A handful of departments in Los Angeles were
singled out for attention from the Mayor's Office during the Back to Basics initiative.
4.1.7 Organizational Size and Resources
Capacity related factors are another aspect of performance management that can have an
impact on development. Additional financial resources for performance reforms have been
emphasized as a clear factor in increasing the chances of success (Boyne, 2003). In allocating
these resources, a clear dedicated stream of funding is thought to improve how resources are
utilized to push forward performance management reforms (Boyne & Gould-Williams, 2003). In
addition to financial resources, the size of an organization can impact a performance
management system. Large organizations that have spare capacity (Nielsen, 2013) and can
allocate this capacity towards reforms (Gerrish, 2016) appear to be able to execute performance
reforms better. In the assessment of case-study departments in the previous chapter, department
size had a clear positive impact on Back to Basics implementation. The evidence for additional
financial resources was mixed, with some departments leveraging extra resources and others
succeeding despite scarce resources.
4.1.8 Implementation Success
The above factors have each been cited as potential causal conditions leading toward
the successful implementation of performance management reforms. Each of them is considered
(and defined subsequently) in this set-theoretic approach to studying departmental level cases of
data-driven reforms. In turn, defining what constitutes an outcome of successful implementation
of performance management reforms is a complicated proposition. For instance, Behn (2003)
defines eight different possible purposes for data-driven systems. Figuring out which of these
eight purposes (or combination of them) defines success is an evolving challenge for public
organizations. Other research maintains that success for performance management systems can
be fluid and will depend on the culture of the department implementing the system (Hood, 2012).
There is also a divide between literature that counts the simple adoption of performance
reforms as success and literature that requires a data-driven reform process to alter how an organization operates. An
example of this second type of successful adoption is the ongoing utilization of data-driven
systems for problem-solving and forward-looking planning (Behn, 2005; Hatry, 2006; Sanger,
2008).
4.2 Fuzzy Set Qualitative Comparative Approach
The approach in this chapter engages a set-theoretic approach grounded in fuzzy set
qualitative comparative analysis (fsQCA). This fsQCA approach allows for a thorough
examination of the causal conditions that contribute to the outcome of a successful performance
management system. FsQCA can describe a significant amount of causal complexity and is
uniquely suited to describing causes for departmental level success factors across a range of
performance management cases within Los Angeles (Fiss, 2007; Ragin, 2000; 2008). The fsQCA
approach is patterned on configurations of features connecting an overall typology, and by
examining a set of cases, the approach can remove features that do not contribute to the studied
outcome. A set-theoretic approach analyzes sets of causes and outcomes to help recognize causal
patterns. FsQCA looks at these causal configurations via a set and sub-set relationship. In the
case of performance management reforms in Los Angeles, fsQCA looks at the set of departments
with successful reforms and the causal factors that contributed to this outcome. Boolean algebra
is employed to logically reduce the set of causal conditions to those that truly lead to the studied
outcome.
The detailed empirical logic used to identify causal processes unfolds across four
interconnected stages. The first stage is the calibration of independent and dependent variables
into the aforementioned sets. In fsQCA, these variables are calibrated by membership level
across the range of the measure. The specific independent and outcome measures used in this
research and their calibration are described in detail in the next section of this chapter. Once
these measures are placed in a set, the second stage of fsQCA results in the development of a
truth table: a matrix with all possible configurations of the independent causal
conditions as rows. Each row of the truth table represents a set of causal conditions, and across
the full truth table, all possible causal combinations are listed. The truth table is arranged by the
number of cases present for each combination, with there being three possibilities: (1) multiple
cases, (2) a single case, and (3) no cases.
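The truth-table construction described above can be sketched in a few lines of code. The sketch below is illustrative only: the condition names and fuzzy membership scores are hypothetical, and the full analysis in this chapter uses eight conditions (yielding 2^8 = 256 possible rows) rather than three.

```python
from itertools import product

# Hypothetical condition names; the study itself uses eight conditions.
conditions = ["leadership", "capacity", "metrics"]

# Each truth-table row is one combination of presence (1) / absence (0).
rows = list(product([0, 1], repeat=len(conditions)))

# Illustrative fuzzy membership scores for three hypothetical cases.
cases = {
    "Dept A": (0.9, 0.8, 0.6),
    "Dept B": (0.9, 0.7, 0.2),
    "Dept C": (0.1, 0.3, 0.4),
}

def row_of(memberships):
    # A case belongs to the row marking 1 exactly where its
    # fuzzy membership exceeds the 0.5 crossover point.
    return tuple(1 if m > 0.5 else 0 for m in memberships)

counts = {row: 0 for row in rows}
for scores in cases.values():
    counts[row_of(scores)] += 1

# Arrange rows by number of empirical cases: multiple, single, none.
ordered = sorted(rows, key=lambda r: counts[r], reverse=True)
print(len(rows))          # 8 possible configurations for 3 conditions
print(counts[(1, 1, 1)])  # 1 (Dept A falls in this row)
```

Sorting by case count reproduces the three possibilities noted above: rows backed by multiple cases, by a single case, or by none.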
The third stage of the fsQCA approach parses the number of rows in the truth table based
on two threshold conditions. The first condition is the minimum number of cases necessary for a
causal solution to be considered. The second condition is the minimum consistency score of a
solution. Under fsQCA, consistency denotes the degree to which cases correspond to the set-
theoretic expression in a given solution. Consistency in fsQCA can be measured as the
number of cases that exhibit both the given set of causal attributes and the outcome, divided by
the total number of cases that exhibit the same set of causal attributes, whether or not they
demonstrate the outcome. The fsQCA employed in this research utilizes a measure of consistency
that imposes a small penalty for minimal inconsistencies and a more substantial penalty for major
inconsistencies (Ragin, 2006). The minimum threshold for consistency was set at greater than 0.83, which is
above the prescribed minimum threshold of 0.80 (Greckhamer et al., 2018; Ragin, 2008). In
addition, the minimum PRI consistency threshold was set at 0.73, with one score being between
0.73 and 0.77 and the rest being above 0.77. These PRI consistency scores reflect results above
the acceptable threshold of 0.70 and the preferred threshold of 0.75 (Greckhamer et al., 2018).
Because the fsQCA in this study looks at cases over two time periods within the City and utilizes
two complementary outcome measures, there are four unique fsQCA case models run in the
analysis. The particulars of this approach are discussed in the next section. The two analyses run
using the 2015 cases had an N of 34, while the two analyses run using 2016 cases had an N of
31. With the small N across all variations of the fsQCA, the threshold for the minimum number
of cases in each approach was set at one. Across the four models run, 11, 18, 8, and 27 cases
exceeded the minimum consistency threshold of 0.83 for the outcome of successful performance
management implementation.
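One common fuzzy-set formulation of this consistency measure (following Ragin, 2006) sums, case by case, the minimum of membership in the causal configuration and membership in the outcome, divided by total membership in the configuration; near-misses cost the score only slightly, while large inconsistencies sharply reduce the numerator. A minimal sketch with purely illustrative membership scores:

```python
def consistency(x, y):
    """Fuzzy subset consistency of configuration X in outcome Y:
    sum of min(x_i, y_i) divided by sum of x_i (after Ragin, 2006).
    A case with x slightly above y loses only the small excess,
    whereas a case with high x and low y loses most of its weight."""
    return sum(min(xi, yi) for xi, yi in zip(x, y)) / sum(x)

# Illustrative membership scores for a causal configuration (x)
# and the outcome of implementation success (y) across five cases.
x = [0.9, 0.8, 0.7, 0.6, 0.2]
y = [0.95, 0.9, 0.6, 0.7, 0.15]

score = consistency(x, y)
print(round(score, 3))
```

A score of 1.0 would indicate a perfect subset relationship; the 0.83 threshold used in this research discards configurations that fall meaningfully short of that ideal.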
The fourth stage involves using Boolean algebra to logically reduce the original truth table to
simplified causal recipes. The reduction conducted for this research is based upon an algebraic
formula devised by Ragin (2005; 2008). This formula sorts causal conditions into core and
peripheral conditions, utilizing counterfactual analysis. Counterfactual analysis is helpful in the
reduction of the truth table because even a handful of causal conditions can lead to an enormous
number of rows in the truth table. In addition, many configurations will not result in any cases, a
problem of “limited diversity” (Ragin, 2000). Counterfactual analysis allows for an approach that
can work around the empirical challenge of this limited diversity issue (Ragin, 2005).
The truth table formula mitigates the challenge of limited diversity via counterfactual
analysis by sorting between parsimonious and intermediate solutions through the characterization
of "easy" and "difficult" counterfactuals (Ragin, 2008). Counterfactuals that are easy refer to
combinations where a redundant causal condition is added to a set of causal conditions that
already produced the studied outcome. To illustrate this theory, consider that the causal
conditions X and Y, but not Z lead to the studied outcome. Data does not exist to show that the
combination of X and Y and Z would cause the studied outcome, but researcher knowledge
indicates that the addition of Z could lead to the studied outcome. In this scenario, an easy
counterfactual formula would show that both (X and Y and Z) and (X and Y, but not Z) will
result in the studied outcome. In this situation, the expression can be reduced to just X and Y,
because the presence or absence of Z has no bearing on the studied outcome. Easy counterfactual
analysis allows the researcher to use the simplified formula if the addition of another causal
condition would not make a difference.
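The X, Y, Z illustration above can also be verified mechanically: if the expression that includes Z and the simplified expression without Z agree under every possible assignment, then Z has no bearing on the outcome and the reduced recipe is safe. A minimal sketch of this check:

```python
from itertools import product

def full(x, y, z):
    # (X and Y and Z) or (X and Y and not Z)
    return (x and y and z) or (x and y and not z)

def reduced(x, y, z):
    # The simplified recipe after the easy counterfactual: X and Y
    return x and y

# Exhaustively check all 2**3 truth assignments: if the two
# expressions agree everywhere, Z can be dropped.
equivalent = all(
    bool(full(x, y, z)) == bool(reduced(x, y, z))
    for x, y, z in product([False, True], repeat=3)
)
print(equivalent)  # True
```

This exhaustive check is exactly the absorption step Boolean minimization performs at scale across the full truth table.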
Difficult counterfactual analysis, on the other hand, is the opposite of the situation
described in the previous paragraph. In this situation, a condition is removed from a set of causal
conditions with the assumption being that the removed condition is redundant. In such a situation
there may be data showing that X and Y and Z results in the studied outcome, but no data that
the combination X and Y, but not Z also results in the studied outcome. The question that is
tougher to answer in difficult counterfactual analysis is whether removing a causal condition will
make a difference. If the previous substantive knowledge indicates the presence of Z is part of
the studied outcome it is more difficult to decide whether Z is redundant and can be discarded.
Two kinds of causal solutions emerge from easy and difficult counterfactual analysis. The first,
a parsimonious solution, includes all simplifying assumptions, whether they come from easy
or difficult counterfactuals. The second, an intermediate solution, incorporates only easy
counterfactuals. In combination, the parsimonious and intermediate solutions
lead to the concept of core and peripheral conditions for the studied outcome. Core conditions
are those that appear in both parsimonious and intermediate solutions. Peripheral conditions are
those that only appear in the intermediate solution, having been eliminated in the parsimonious
solution. The strength of core conditions under this approach is defined by the strength of the
evidence in the data, not by their linkage to other configurational conditions.
4.2.1 Calibration of Measures
4.2.1A Outcome Variables
This research is concerned with whether the implementation of performance management
reforms was successful in Los Angeles. In order to study the successful implementation of
performance management reforms, two complementary outcome variables were used. The use of
two outcome variables was warranted for three reasons. The first reason is the differing
definitions of success for the implementation of performance management reforms discussed in
Chapter One and reiterated at the beginning of this chapter. While some literature considers the
implementation of a functioning performance management system as a success (Hatry, 2006),
other strands of literature argue that for data-driven reforms to be considered successful they
must change organizational behavior and lead to active use of metrics in decision making (Behn,
2003; 2014).
The second and third reasons for using two outcome variables are related to the validity
and reliability of the outcome measures. Because one of the outcome measures came from the
same source as many of the input measures, there was some concern about common source bias
(discussed later in this chapter). The second outcome measure is entirely independent of all the
input measures and thus alleviated much of this concern. Finally, two outcome variables seemed
appropriate to better ensure the validity of measuring successful performance management
reform implementation. With access to survey data from spring 2015 and fall 2016, along with
other data from both of these periods, causal case outcomes were measured in both periods and
analyzed together during the discussion section of this chapter. While time-period based
difference analysis is an emerging trend in QCA, the presence of only two periods of data and
the still-experimental nature of the process made it unsuited for this analysis (Greckhamer et al.,
2018).
Outcome Measure One - Performance Management Implementation Success from
Innovation and Performance Management Unit: The outcome of significance in this research
is the successful implementation of a performance management system at the departmental level
in the City of Los Angeles. The first outcome measure comes from the Innovation and
Performance Management Unit (IPMU) within the Mayor’s Office. As the central unit for
coordinating Back to Basics, the IPMU was also tasked with evaluating each of the City's 35
departments’ progress in implementing data-driven reforms. This systematic evaluation by the
IPMU examined the level at which each City department's performance management system was
functioning. As such, this measure leans more towards the level of successful process
implementation of a performance management system (as discussed in the previous section). In
its evaluation the IPMU rated departments on the following four-point scale: (1) Not yet started,
(2) Testing/development stage, (3) Partial COMPSTAT, and (4) Full COMPSTAT. The IPMU
rated departments on a roughly annual basis during Back to Basics. To match the survey data
from spring 2015 and fall 2016, the ratings from these two years were utilized for the outcome
measure. The mean IPMU rating for 2015 was 2.41, while the mean rating for 2016 was 2.71.
Fuzzy set QCA requires that variables be transformed into sets calibrated at three
significant thresholds: full membership, full non-membership, and the crossover point. The
crossover point refers to the "fuzziest" point in the data where the measure is neither in nor out of
the particular condition (Ragin, 2008). This midpoint of the fuzzy set is a qualitative bridge
between full membership and full non-membership (Ragin, 2000; 2008). Fuzzy set measures
generally use a scale of 0 to 1, with 0.95 indicating the threshold of full membership, 0.5
indicating the midpoint, and 0.05 indicating the threshold for full non-membership (Ragin, 2005,
2008). Using this calibration procedure, the IPMU performance management implementation
success measure was transformed into a fuzzy set. The threshold for full membership was set at
4, corresponding to the IPMU's rating of "Full COMPSTAT." The midpoint was set at 2.5,
between the IPMU's middle two characterizations of performance management progress. Full
non-membership was set at 1, the designation for "not yet started" developed by the IPMU.
Outcome Measure Two - Performance Management Implementation Success from Survey
Data: The second complementary outcome measure that measures performance management
implementation success is based upon the survey data collected as part of this research. Survey
questions used in the creation of this outcome measure are partially adapted from the work of
Meier and O'Toole (1999; 2001). These survey questions deal with departments using
performance management to do such things as setting priorities, allocating resources, creating
new programs, and refining performance measures, among others. A full set of questions is
available in Appendix C. Questions followed a five-point Likert scale, considering the extent to
which performance management has impacted the factors mentioned above. This outcome
measure is thus meant to consider how successful departments have been in actively adopting
performance management measures into departmental operations. This measure complements
the previously discussed IPMU outcome measure, which is more process-based in its view of
successful implementation.
Survey scores for each City department were averaged and then aggregated into an index.
For the 2015 index, the scale showed strong reliability, with a Cronbach's alpha score of 0.92. A
similarly strong fit was recorded using the 2016 index as well, with an alpha score of 0.91. The
full membership threshold for this measure was equated with "to a great extent" on the Likert
scale, while the crossover point was equated with "to a moderate extent," and the full non-
membership point at "to a small extent."
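The reliability scores reported for these survey indices follow the standard Cronbach's alpha formula: alpha = (k/(k-1))(1 - sum of item variances / variance of respondent totals). A minimal sketch with hypothetical five-point Likert responses (the actual survey data is not reproduced here):

```python
def cronbach_alpha(items):
    """items: one inner list of scores per survey item,
    aligned across respondents."""
    k = len(items)       # number of items in the index
    n = len(items[0])    # number of respondents

    def var(xs):  # population variance; any convention works if consistent
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Each respondent's total score across all items.
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum(var(it) for it in items) / var(totals))

# Hypothetical responses: three items, four respondents.
items = [
    [4, 5, 3, 4],
    [4, 4, 3, 5],
    [5, 4, 2, 4],
]
print(round(cronbach_alpha(items), 2))  # → 0.8
```

Scores of 0.91-0.92, as reported for the implementation success index, indicate that the items move together closely enough to be treated as a single scale.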
4.2.1B Independent Variables
The independent measures included as causal conditions within the sets are each related
to the literature discussed in Chapter One and earlier in this chapter. The data for these variables
also comes from the 2015 and 2016 surveys, along with City budget documentation, and IPMU
documentation. Survey questions used are partially adapted from Meier and O'Toole (1999;
2001), Melkers & Willoughby, (1998; 2001; 2005), and Perry (1996; 2000). Survey questions
used in the creation of independent measures are included in Appendix C. Eight independent
measures identified in the literature and during the case studies are included in the fuzzy set
analysis. During the research process, 35 departments were surveyed. As discussed earlier, due
to missing data from a handful of departments there was an N of 34 cases for 2015 and an N of
31 cases for 2016. Descriptive statistics for all original and fuzzy set independent measures are
displayed in Tables 4.1 and 4.2.
Large Budget: This measure is pulled from City budget documents for the 2014-15 and 2016-17
fiscal years and is the amount of money in each City department’s budget. The threshold for what constituted
a large budget was arrived at through the case study process and additional discussions with City
staff. The full membership threshold was set at $150 million, the crossover point at $75 million,
and the full non-membership threshold at $10 million. Slight adjustments were made between the
two years covered to account for a small amount of inflation.
Large Organization: The second measure is also drawn from City 2014-15 and 2016-17 budget
documents. In the course of the case study process, the fuzzy set calibration thresholds were set
using substantive knowledge about how many employees a department would need to be
considered a large organization. The full membership threshold was set at 400 employees, the
crossover point was set at 200 employees, and the full non-membership threshold was set at 50
employees.
Strategic Plan: The third measure indicates whether a department had a strategic plan or not
during the 2014-15 and 2016-17 fiscal years. Data was compiled from public records for each
department and then cross-referenced through the IPMU. This was a binary measure, so full
membership was set at 1, and full non-membership was set at 0.
Mayoral Attention: In discussions with the Mayor's Office and the IPMU it was noted that the
Mayor's Office had planned to give special attention to five City departments that the Mayor
considered especially critical to his Back to Basics agenda. Over the course of interacting with
the Mayor's Office, the IPMU, and some of the selected departments this was confirmed. This
variable is also binary and sets full membership for mayoral attention at 1 and full non-
membership at 0.
Analytic Capacity: The fifth measure is based upon a single survey question that focused on the
analytic personnel available to a department. The scores for each department were
averaged and then included in the fuzzy sets after calibration. The question used a five-point
Likert scale. The threshold for full membership was set equivalent to "to a great extent," the
crossover point at "to a moderate extent," and the threshold for full non-membership at "to a
small extent."
Innovative Culture: The sixth measure was based upon five survey questions that considered
different traits within a department related to taking risks, entrepreneurship, and commitment to
innovation, among others. These questions were placed in an index that demonstrated a high
degree of reliability, with an alpha score of 0.87 for 2015 data and an alpha score of 0.89 for
2016 data. The scores for each department were averaged and then calibrated into a fuzzy set.
Based upon the equivalent of the five-point Likert scale from the questions the threshold for full
membership was set equivalent to "to a great extent," the crossover point at "to a moderate
extent," and the threshold for full non-membership at "to a small extent."
Strong Leadership: The seventh independent measure was created using eight survey questions.
These questions dealt with multiple aspects and approaches to leadership and used a five-point
Likert scale. These questions were aggregated into an index, with the scores of one question
being reversed as the question represented traits relating to poor leadership. The index
demonstrated good reliability with both 2015 and 2016 data demonstrating an alpha score of
0.84. The scores for each department were averaged and then calibrated into a fuzzy set. Based
upon the equivalent of the five-point Likert scale from the questions the threshold for full
membership was equated with "to a great extent," the crossover point at "to a moderate extent,"
and the threshold for full non-membership at "to a small extent."
Good Metrics: The final independent measure is made up of twelve survey questions that
represent different ways departments practice good metrics. In this case, half of the questions'
scales were reversed, as those six questions dealt with poor approaches to metrics. The questions
were then aggregated into an index that had an alpha score of 0.88 for 2015 data and an alpha
score of 0.86 for 2016 data signifying high reliability. Average scores were taken for each of the
City departments and then calibrated into fuzzy sets. Based upon the equivalent of the five-point
Likert scale from the questions the threshold for full membership was set at "to a great extent,"
the crossover point at "to a moderate extent," and the threshold for full non-membership at "to a
small extent."
Table 4.1 Descriptive Statistics for 2015 Measures

Variable                          No. of Items  Min-Max                  Mean         Std. Deviation  Alpha
Implementation Success (IPMU)     1             1-4                      2.41         1.18
Implementation Success (Survey)   6             6-30                     20.27        2.40            0.92
Large Budget                      1             1,720,732-4,807,391,834  500,743,470  1,145,668,918
Large Organization                1             10-13,707                1,335        2,613
Strategic Plan                    1             0-1                      0.79         0.41
Mayoral Attention                 1             0-1                      0.12         0.33
Analytic Capacity                 1             1-5                      2.83         0.58
Innovative Culture                5             5-25                     15.80        2.56            0.87
Strong Leadership                 8             8-40                     28.27        3.68            0.84
Good Metrics                      12            12-60                    39.75        7.21            0.88
Table 4.2 Descriptive Statistics for 2016 Measures

Variable                          No. of Items  Min-Max                  Mean         Std. Deviation  Alpha
Implementation Success (IPMU)     1             1-4                      2.71         0.97
Implementation Success (Survey)   6             6-30                     21.00        2.46            0.91
Large Budget                      1             1,591,167-5,326,225,000  624,115,220  1,333,848,833
Large Organization                1             10-13,875                1,497        2,747
Strategic Plan                    1             0-1                      0.84         0.37
Mayoral Attention                 1             0-1                      0.16         0.37
Analytic Capacity                 1             1-5                      3.00         0.73
Innovative Culture                5             5-25                     16.30        3.01            0.89
Strong Leadership                 8             8-40                     29.37        3.69            0.84
Good Metrics                      12            12-60                    40.91        5.19            0.86
4.2.1C Calibration and Directionality of Causal Conditions
As described in the previous two sections about the outcome and independent measures,
converting variables into sets requires calibration to make these measures appropriately fuzzy.
Conversion to fuzzy sets requires calibration of three critical points: full membership, full non-
membership, and the crossover point of maximum ambiguity. Ragin (2008) describes using these
three points to convert raw variables into set measures using a direct method. Full membership
and full non-membership are treated as the upper and lower bounds of the set, with calibrated
scores anchored between these bounds and the crossover point. The fuzzy measures are rescaled
between 0 and 1, with 0.95 as the threshold of full membership, 0.05 as the threshold of full non-
membership, and 0.5 the crossover point (Ragin, 2008). For the current research, this
transformation was conducted using fs/QCA software (version 3.0).
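The direct method can be sketched as a log-odds transformation: raw values are rescaled so that the full-membership anchor maps to a log-odds of roughly +3 (membership ≈ 0.95) and the full-non-membership anchor to roughly -3 (≈ 0.05), with the crossover point at 0 (= 0.5). The sketch below uses the large-budget anchors described earlier ($10M, $75M, $150M) purely for illustration; the actual calibration in this research was performed by the fs/QCA software (version 3.0), whose implementation may differ in detail.

```python
import math

def calibrate(value, full_non, crossover, full_mem):
    """Direct-method calibration (after Ragin, 2008): map a raw value
    to fuzzy membership via log-odds, anchoring the full-membership
    threshold at log-odds +3 (~0.95) and the full-non-membership
    threshold at log-odds -3 (~0.05)."""
    if value >= crossover:
        log_odds = 3.0 * (value - crossover) / (full_mem - crossover)
    else:
        log_odds = -3.0 * (crossover - value) / (crossover - full_non)
    return 1.0 / (1.0 + math.exp(-log_odds))

# Illustrative anchors: $10M non-membership, $75M crossover,
# $150M full membership.
for budget in (10e6, 75e6, 150e6, 200e6):
    print(round(calibrate(budget, 10e6, 75e6, 150e6), 3))
```

Note that values beyond the anchors simply push the log-odds past ±3, so memberships approach (but never reach) 0 and 1.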
In addition, based upon substantive case knowledge, the expected directionality of causal
conditions must be specified in the course of using the fs/QCA software. This is done to
differentiate the complex and intermediate solutions, and to provide a better understanding of the
configurations. Based upon the discussion of the literature above and in Chapter One, and the
substantive knowledge of the cases from Chapter Three, the directionality of all causal
conditions was specified as positive. This process was also accomplished using fs/QCA software
(version 3.0).
4.3 Results
Table 4.3 Correlations of 2015 Measures
Variable 1 2 3 4 5 6 7 8 9
1 Large Budget
2 Large Organization 0.56*
3 Strategic Plan 0.21 0.21
4 Mayoral Attention -0.09 0.04 0.19
5 Analytic Capacity 0.02 -0.02 -0.16 -0.26
6 Innovative Culture -0.04 -0.04 -0.20 -0.29 0.69*
7 Strong Leadership -0.12 -0.01 -0.10 -0.26 0.73* 0.73*
8 Good Metrics 0.02 0.12 0.02 -0.32 0.60* 0.43* 0.68*
9 Implementation Success (IPMU) -0.17 0.19 -0.01 0.34* -0.16 -0.13 0.11 0.24
10 Implementation Success (Survey) 0.08 0.21 -0.22 -0.31 0.52* 0.34 0.33 0.35 0.41*
*Significant at 5%
Table 4.4 Correlations of 2016 Measures
Variable 1 2 3 4 5 6 7 8 9
1 Large Budget
2 Large Organization 0.54*
3 Strategic Plan 0.19 0.18
4 Mayoral Attention -0.12 0.07 0.19
5 Analytic Capacity -0.14 -0.16 -0.31 -0.23
6 Innovative Culture -0.15 -0.21 -0.37* -0.19 0.79*
7 Strong Leadership -0.13 -0.24 -0.19 -0.26 0.62* 0.71*
8 Good Metrics 0.07 0.04 0.26 0.07 0.22 0.17 0.38*
9 Implementation Success (IPMU) -0.25 0.19 -0.04 0.32 0.05 -0.14 -0.13 0.04
10 Implementation Success (Survey) -0.21 -0.09 -0.13 -0.04 0.41* 0.32 0.45 0.25 0.44*
*Significant at 5%
Tables 4.3 and 4.4 present the correlations for all the measures used in the fsQCA for 2015
and 2016 data. As would be expected, the tables show a large positive correlation in both years
between the size of a department’s budget and the number of employees within a department.
For 2016 there is a bit of an unexpected negative correlation between strategic planning and
innovative culture. This negative correlation is also displayed in the 2015 data but does not rise
to statistical significance. As expected, innovative culture, good metrics, and strong leadership
are all strongly positively correlated across both years of data. That these measures each come
from departmental surveys lends credence to this pattern. Finally, it is a welcome sign that
the two implementation success measures are moderately to strongly positively correlated. In
general, the distribution of the correlations across the data is as expected.
The following sections first discuss the causal configurations individually for each of the
two years covered, further delineated by the two different outcome measures utilized. After
the individual analysis of these four fuzzy sets, intersectional analysis based on Ragin (1987) is
utilized to compare across outcome measures using Boolean logic. Results are presented in four
tables using notation developed by Ragin and Fiss (2008). Under this framework, large circles in
the table indicate core conditions, while small circles represent peripheral conditions that
contribute to the outcome. Solid circles indicate that a condition must be present, whereas blank
circles with a line across them indicate the condition must be absent. A blank space in the table
indicates a "do not care" state, where the condition may either be present or absent. The
significance of differentiating between core and peripheral conditions in fuzzy set analysis
was argued by Fiss (2011) to be central to understanding organizational configurations, as core
conditions are those for which the evidence displays a robust causal relationship to the outcome
measure. A weaker relationship is theorized with regard to peripheral conditions.
This differentiation is a valuable lens through which to view performance management systems,
where some conditions might have a more significant impact than others on reforms. This
follows previous public management literature that examines the core and peripheral traits of
public organizations (Hannan & Freeman, 1984; Kelly & Amburgey, 1991).
In addition to core and peripheral conditions, the configuration tables display consistency
(previously discussed in this chapter) and coverage. Coverage scores convey how much a
configuration accounts for instances of an outcome, indicating the empirical
importance of the causal configuration (Young & Park, 2013). Unique coverage denotes the
coverage solely attributed to that causal configuration. On the other hand, raw coverage displays
coverage across cases and allows for overlap. Finally, the solution coverage measure indicates all
the solutions related to a particular outcome (Ragin, 2008).
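Raw coverage is typically computed analogously to consistency, but normalized by membership in the outcome rather than in the configuration: the sum of the case-wise minima of configuration and outcome membership, divided by total outcome membership (Ragin, 2008). A minimal sketch with purely illustrative membership scores:

```python
def raw_coverage(x, y):
    """Raw coverage of configuration X for outcome Y:
    sum of min(x_i, y_i) divided by sum of y_i (after Ragin, 2008).
    High coverage means the configuration accounts for a large
    share of total membership in the outcome."""
    return sum(min(xi, yi) for xi, yi in zip(x, y)) / sum(y)

# Illustrative memberships for one configuration (x) and the
# outcome of implementation success (y) across five cases.
x = [0.9, 0.8, 0.2, 0.6, 0.1]
y = [0.95, 0.9, 0.7, 0.7, 0.3]

print(round(raw_coverage(x, y), 3))
```

Unique coverage would subtract the portion of this total also explained by the other configurations in the solution, isolating each recipe's distinct contribution.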
4.3.1 Configurations for 2015 IPMU Success Measure
The solution table (Table 4.5) displays the fuzzy set results for the three configurations
for the 2015 IPMU outcome measure for performance management implementation success. The
three intermediate configurations presented exhibit good consistency, with each having a
consistency score above 0.90. Also indicated in the solutions table is the presence of both core
and peripheral conditions. Configuration 2 also acts as a neutral variation of configurations 1 and
3. As such, in the three configurations presented there is both first-order equifinality and second-
order equifinality related to the outcome measure of successful performance management
implementation.
In looking at the three configurations presented, we can see several core conditions across
all three configurations, which are also necessary conditions for successful performance
management implementation. Both being a large organization and having strong leadership are
present in each of the three configurations as core conditions and necessary conditions. This
points to a consistent recipe for performance management implementation success that includes
these two factors. Leadership has been theorized as an essential condition, and one that
organizations can improve, whereas becoming a large organization is not within a department's
control. Other
core conditions, the absence of a large budget in configuration 1 and the absence of a strategic
plan in configuration 3 are discussed in relation to those specific configurations in the next
paragraphs.
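The necessity of a condition can be checked set-theoretically. The sketch below (my own illustration with hypothetical scores, not the dissertation's procedure) applies the conventional test that outcome membership should be a consistent subset of condition membership:

```python
# Sketch of a fuzzy-set necessity test (hypothetical scores). A condition X
# is treated as necessary for outcome Y when y_i <= x_i holds consistently
# across cases, conventionally requiring
#   sum(min(x_i, y_i)) / sum(y_i) >= 0.90

def necessity_consistency(condition, outcome):
    overlap = sum(min(x, y) for x, y in zip(condition, outcome))
    return overlap / sum(outcome)

large_org = [0.9, 1.0, 0.8, 0.7, 0.9]  # hypothetical membership in "large organization"
success = [0.8, 0.9, 0.6, 0.7, 0.4]    # hypothetical membership in "successful implementation"

print(necessity_consistency(large_org, success) >= 0.90)  # prints True
```

In these illustrative scores, every case's outcome membership sits at or below its condition membership, so the condition passes the necessity test.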
In configuration 1, the absence of a large budget is a core condition. In conjunction with
large organization being a core and necessary condition across all three configurations, this
suggests that greater human resource capacity, rather than greater financial capacity, is the
dimension of organizational size related to performance management implementation success. An
important aspect of employing fsQCA methodology is connecting configurations to the
substantive knowledge of cases to understand the results better. The presence of strong
leadership is cited extensively in the literature and was present in some form in four of the case-
study departments (Departments B, C, E, and F). The presence of strong leadership as a core
condition is clearly meaningful in configuration 1 in the context of cross-validation.
The more important consideration when interpreting configuration 1, as it relates to the
cases, is why being a large organization contributes to success while the absence of a large
budget does as well. Large organizational size is supported both by the literature
and the handful of large case-study departments that displayed implementation success.
However, the literature calls for large financial resources as an important success factor, which is
not demonstrated in configuration 1. Department C within Los Angeles is a good example of
configuration 1 and helps to explain why the absence of a large budget is meaningful in a way
not explained by the literature. In the case of Department C, which had a fairly small budget, the
lack of financial resources forced the department to get creative in how it approached reforms,
and a culture of doing more with less developed as a result of the resource crunch in the
department. The presence of strong leadership and the extra staff capacity in
Department C as a result of its relatively large size accentuated the department's ability to take
this approach to performance management implementation. As such, configuration 1
demonstrates that a large organization with strong leaders and good metrics can achieve a
successful outcome even without substantial financial resources to support performance
management reforms.
Configuration 2 presents the simplest configuration, with large organization and strong
leadership as core conditions, and a strategic plan and mayoral attention as peripheral conditions.
In this configuration, large organization and strong leadership are necessary conditions, with
strategic planning and mayoral attention effectively acting as "nice to have" conditions. In the
parsimonious solution, these two factors become "do not care" conditions. This, once again,
demonstrates the value of the two core conditions: a large organization and strong leadership.
Department E within Los Angeles is a good example of this straightforward configuration. Its
large size and strong leadership were obvious over the course of study as positive factors, while
the addition of good metrics and political attention functioned as the cherry on top. It was clear
during the research that Department E would have had a fair amount of success with
implementation even without these peripheral factors.
Configuration 3 is the most complex causal path to successful performance management
implementation, and also the most difficult to link to the cases in this dissertation. When a
strategic plan is removed from the configuration, we still observe a successful outcome. This
additional core condition of the absence of a strategic plan, in combination with the presence of
the other causal factors in configuration 3, does not match very well with any of the case-study
departments researched for this dissertation. Departments did succeed with all the present
conditions shown in configuration 3, but not with a lack of strategic planning. Because six
departments were observed closely for case-studies and a handful of other departments were
observed less closely for this study, it seems possible that configuration 3 represents a
department within the City that was not observed during the qualitative research process.
One can only theorize, then, that several peripheral conditions that appear in
configuration 3 may offset this lack of planning. In configuration 3, analytic capacity,
innovative culture, and good metrics are possible causal factors that can help lead to
performance management implementation success. This makes logical sense, as each of these
peripheral factors can be cited as a derivative of strategic planning and as a possible
replacement for planning in this configuration. Across all three configurations, the parsimony of
large size and strong leadership remains primary in explaining paths to performance
management implementation success. The overall coverage score indicates that the combined
models explain 50 percent of the membership in the outcome. This coverage is relatively
extensive, but still leaves a substantial portion of the outcome unexplained.
Table 4.5 Configurations for Achieving Implementation Success (IPMU Measure 2015)

                                        Solution
Configuration                   1        2        3
Large Budget                    ⦸
Large Organization              ⚈        ⚈        ⚈
Strategic Plan                           ⚈        ⦸
Mayoral Attention                        ⚈
Analytic Capacity                                 ⚈
Innovative Culture                                ⚈
Strong Leadership               ⚈        ⚈        ⚈
Good Metrics                    ⚈                 ⚈
Consistency                     0.91     1        1
Raw Coverage                    0.34     0.16     0.11
Unique Coverage                 0.23     0.14     0.02
Overall Solution Consistency    0.94
Overall Solution Coverage       0.50

(⚈ = condition present; ⦸ = condition absent; blank = "do not care")
4.3.2 Configurations for 2015 Survey Success Measure
The second solutions table (Table 4.6) demonstrates that there are twice as many
intermediate configurations for the 2015 survey-based outcome measure of performance
management implementation success. Comparisons to the solution from the 2015 IPMU outcome
measure are considered later in the chapter but discussed in general terms in this section. Five of
the configurations in the table demonstrate a high consistency of over 0.90, while one
configuration has an acceptable consistency of 0.83. The minimum and maximum numbers of
core conditions remain at two and three respectively, demonstrating a similar range of causal
options to the previous configurations in achieving the outcome. As compared to the previous set, there are no
clear necessary conditions, with strong leadership again coming close and featuring as a core
condition in five of the six configurations presented.
Configuration 5 in the table is identical to the most parsimonious configuration in the
previous section, relying on the core conditions of large organization and strong leadership. The
peripheral conditions of a strategic plan and mayoral attention are also present. Large
organization and strong leadership feature as core conditions once again across several
configurations. Good metrics is now a core condition in three configurations, shifting from its
peripheral position in the previous table. Configuration 2 is a neutral permutation of
configuration 5, demonstrating second-order equifinality across this set of configurations. In
configurations where large organization is present as a core condition, either good metrics or
strong leadership are required to reach the outcome. This demonstrates that without other
supporting core conditions, large organizations alone cannot achieve success. In the same vein,
in any configuration where good metrics is a core condition, either strong leadership or large
organization are required conditions for the outcome. Once again Department E within Los
Angeles is an excellent representation of configurations 1, 2, and 5. This department is a clear
example of strong leadership, large size, and good metrics working in some combination leading
to success as discussed through a similar set of conditions in the above section. In addition,
Department F is a good representation of substantive knowledge matching configuration 2.
Unlike Department E, Department F struggled for a long time to achieve good metrics that
eventually allowed it to be more successful. This narrative comports with configuration 2 where
good metrics become a core condition.
In contrast, there are configurations where the presence of strong leadership alone as a
core condition can lead to the outcome, overcoming absent core conditions. Configurations 3 and
4 are examples of this situation, with strong leadership as a core factor overcoming the core
conditions of the absence of analytic capacity and the absence of a large budget, respectively. Once
again Department C is a good representative of the factors found in configuration 4, overcoming
the lack of a large budget through strong leadership that led to new ways to utilize the limited
resources of the department. Though configuration 6 appears to be a neutral permutation of
configuration 4 at first glance, it adds good metrics as a core condition. This permutation still fits
the example of Department C fairly well and is similar to a configuration discussed in the
previous sections, with good metrics only shifting from peripheral to core.
A final note about configurations considers the contradictions of configuration 3. The
absence of analytic capacity as a core condition, even with the presence of strong leadership,
would appear to fall outside the realm of substantive knowledge about the outcome. Several
departments within Los Angeles were observed qualitatively and appeared to reach middling
outcomes with respect to performance reforms, with the causal combination in configuration 3.
However, none of these departments seemed to meet a clear level of observable success. As
such, configuration 3 is the second example of a configuration that may have occurred in a
department not closely observed in the course of this research. However, with a strong overall
solution coverage score of 0.86, we can see that more of the outcome can be explained in this set
of causal configurations.
Table 4.6 Configurations for Achieving Implementation Success (Survey Measure 2015)

                                        Solution
Configuration                   1      2      3      4      5      6
Large Budget                                         ⦸             ⦸
Large Organization              ⚈      ⚈                    ⚈
Strategic Plan                  ⚈             ⚈      ⚈      ⚈
Mayoral Attention                                           ⚈
Analytic Capacity                             ⦸                    ⚈
Innovative Culture                            ⚈      ⚈             ⚈
Strong Leadership                      ⚈      ⚈      ⚈      ⚈      ⚈
Good Metrics                    ⚈      ⚈                           ⚈
Consistency                     0.92   0.99   0.98   0.95   0.83   0.96
Raw Coverage                    0.51   0.62   0.41   0.29   0.09   0.26
Unique Coverage                 0.02   0.07   0.04   0.07   0.02   0.04
Overall Solution Consistency    0.92
Overall Solution Coverage       0.86

(⚈ = condition present; ⦸ = condition absent; blank = "do not care")
4.3.3 Configurations for 2016 IPMU Success Measure
Moving to the 2016 sets, the solutions table (Table 4.7) displays configurations for the
IPMU performance management implementation success outcome measure. In comparing across
years and across outcome measures only general comparisons between the sets are discussed.
The consistency scores of the three configurations presented in Table 4.7 are all above the
acceptable level of 0.80. It is purely coincidental that the 2016 sets for the IPMU outcome
measure have the same number of configurations as the 2015 sets.
Unlike the first two sets of configurations, this set does not demonstrate any second-order
equifinality, only first-order equifinality. The intermediate solution (and by extension the
parsimonious solution) does not demonstrate any necessary conditions in the configurations. The
minimum number of core conditions required to reach the outcome is reduced to one, and the
maximum number is reduced to two as compared to the previous two sets of configurations. This
indicates less constraint on the possible causal configurations departments have to reach a
successful outcome. This may indicate that as performance management systems develop and
mature, fewer causal conditions may be required in order to reach a successful implementation
outcome. For example, in configuration 3 only being a large organization is a required core
condition for success with a handful of peripheral conditions being present. This could indicate
that as performance management systems mature, simply having the size to execute the system
becomes central, with some peripheral conditions being nice to have. While not an exact match
for any department within Los Angeles, Department B may come closest to approximating a real
case of configuration 3. For Department B its large size and thus the sheer number of different
units implementing reforms allowed for a level of success, with variations among departmental
sub-units in the peripheral factors observed in configuration 3.
As with the previous two sets of configurations the main core conditions that are present
that lead to a successful outcome are large organization and strong leadership. Configuration 2 in
the table once again combines large organization with strong leadership to reach the successful
outcome measure. The parsimonious version is the same as the previous two sets of
configurations, with the intermediate solution replacing only mayoral attention with good metrics
as a peripheral condition. In Configuration 2 we see a consistent recipe for implementation
success demonstrated across three different groups of cases. Large organization and strong
leadership appear to be enduring conditions for success across departments in Los Angeles. As
discussed in the previous two sections, there are several departments (Department E specifically)
within Los Angeles that fit this configuration, a version of which shows up across three solution
tables as a consistent causal recipe for success.
Configuration 1 is an outlier configuration that has its only core condition as the absence
of strong leadership. The intermediate solution also includes the presence of the peripheral
condition strategic planning. On its face and in relation to the substantive knowledge from Los
Angeles, configuration 1 appears implausible. For instance, the two departments in the City that
lacked strong leadership, Department A and Department D, also had the least success as
observed during the case studies. However, these departments also lack other causal factors not
captured in configuration 1, which means configuration 1 cannot be completely ruled out. Going back to the original
truth table we find that configuration 1 is linked to a department in Los Angeles that received no
substantive qualitative research during the course of this study. It is possible this department
responsible for configuration 1 is unique in a way not yet studied in the literature or
substantively, especially with strong leadership showing up as a core condition in many other
configurations. As such, further qualitative research is needed in the future to explain
configuration 1. With an overall coverage score of 0.70, this set of configurations accounts for
more than two-thirds of the membership in the outcome.
Table 4.7 Configurations for Achieving Implementation Success (IPMU Measure 2016)

                                        Solution
Configuration                   1        2        3
Large Budget
Large Organization                       ⚈        ⚈
Strategic Plan                  ⚈        ⚈
Mayoral Attention
Analytic Capacity                                 ⚈
Innovative Culture                                ⚈
Strong Leadership               ⦸        ⚈        ⚈
Good Metrics                             ⚈
Consistency                     0.85     1        0.92
Raw Coverage                    0.24     0.13     0.46
Unique Coverage                 0.05     0.02     0.03
Overall Solution Consistency    0.86
Overall Solution Coverage       0.70

(⚈ = condition present; ⦸ = condition absent; blank = "do not care")
4.3.4 Configurations for 2016 Survey Success Measure
The final solutions table (Table 4.8) examines six configurations that utilize the 2016
survey data outcome measure for performance management implementation success. All six
configurations demonstrate high consistency scores, all above 0.90. As discussed previously, it is
coincidental that six configurations are present in both the 2015 and 2016 survey-based outcome
data. The six configurations do not present any necessary conditions, only sufficient conditions
for reaching the outcome. This fourth set of configurations has one as both the minimum and the
maximum number of core conditions required to reach the outcome. Once again, this indicates
fewer constraints on the causal complexity required to reach the outcome and, like the previous
set of configurations, suggests that solutions simplify as performance management systems
mature. The six configurations in the table demonstrate both first-order
and second-order equifinality, with configurations 4, 5, and 6 being different permutations of
each other.
Table 4.8 displays the first set of configurations that do not count either the presence of
strong leadership or being a large organization as a core condition. Instead, the only present core
condition found in the solutions table is good metrics across three of the present configurations.
The absence of analytic capacity, innovative culture, and strategic planning also feature in at
least one configuration as core conditions. Strong leadership still features in four configurations,
as it did in the previous three sets, but it has been downgraded to a peripheral condition in this set
of configurations. The presence of large organization only appears as a peripheral condition in
one configuration.
As stated earlier, configurations 4, 5, and 6 are permutations of one another, with good
metrics as the unifying core condition. The parsimonious solution across these three
interconnected configurations is simply the presence of good metrics. In each case, the presence
of strong leadership is a peripheral supporting condition. Strategic planning, large organization,
and innovative culture also feature in this group of configurations. Configuration 4 comes closest
to replicating the combination of large organization, strong leadership, and good metrics that was
prevalent in the previous three sets of configurations. Configuration 4 demonstrates that over
time these three key factors are still a recipe for successful performance management
implementation, though the importance of strong leadership and being a large organization are
diminished. This causal recipe still relates well to Department E and Department F as it did in the
previous sets discussed. Though perhaps this recipe fits Department F the best, as it truly needed
good metrics before it could reach a higher level of implementation success. As such
configurations 5 and 6 also can be fairly well applied to Department F, where good metrics are
the central core condition. Configuration 4, and to a lesser extent configuration 5, and 6 represent
consistency with the three previous sets discussed.
The left side of the solutions tables, with configurations 1, 2, and 3 are more complicated
configurations to evaluate. These three configurations feature one absent core condition and no
other core conditions as leading to the outcome. The parsimonious solution in each of these cases
relies on either the absence of a strategic plan, the absence of an innovative culture, or the
absence of analytic capacity. This replicates the conundrum of configuration 1 from the previous
2016 solution table, where the absence of one core condition is expected to lead to the successful
outcome. However, these configurations are more plausible both logically and based upon
substantive knowledge as compared to configuration 1 from the previous 2016 table, which had
the absence of strong leadership as its only core condition. Logically strong leadership has
featured as a present core condition in many other causal recipes whereas strategic planning,
innovative culture, and analytic capacity have not. As such configurations 1, 2, and 3 in this
solution table (Table 4.8) are potential cases where a department overcame an obstacle to
achieve reform. While none of the case-study departments fit these configurations, several
departments peripherally observed during the course of research could fit these configurations.
These departments were definitively missing one condition as noted in configurations 1, 2, and 3,
but effectively had "do not care" situations for the rest of the causal factors. A handful of
departments in Los Angeles appeared to achieve reasonable success under these configurations.
In a sense, we see "average" departments that overcome an obstacle to reach moderate
performance management success. The overall solution coverage for Table 4.8 is a very robust
0.96, meaning the solution accounts for nearly all of the membership in the outcome.
Table 4.8 Configurations for Achieving Implementation Success (Survey Measure 2016)

                                        Solution
Configuration                   1      2      3      4      5      6
Large Budget
Large Organization                                   ⚈
Strategic Plan                  ⚈      ⚈      ⦸             ⚈
Mayoral Attention
Analytic Capacity               ⦸
Innovative Culture                     ⦸      ⚈                    ⚈
Strong Leadership                             ⚈      ⚈      ⚈      ⚈
Good Metrics                                         ⚈      ⚈      ⚈
Consistency                     0.91   0.96   0.98   0.94   0.93   0.95
Raw Coverage                    0.55   0.42   0.16   0.63   0.72   0.63
Unique Coverage                 0.02   0.02   0.07   0.02   0.02   0.01
Overall Solution Consistency    0.90
Overall Solution Coverage       0.96

(⚈ = condition present; ⦸ = condition absent; blank = "do not care")
4.3.5 Configurations for Unsuccessful Performance Management
Implementation
The four solutions and their resulting configurations discussed in the above sections help
provide a richer understanding of causal configurations that lead to successful performance
management outcomes. To better understand this subject, the opposite outcome, unsuccessful
performance management implementation, was also modeled. This additional fsQCA analysis
was conducted using the two previously discussed outcome measures but reversing the
membership scores so that the full membership threshold represents unsuccessful performance
management implementation. While this symmetry of modeling is standard in regression
analysis, it is important also to include this approach when conducting a causal analysis with
fsQCA (Greckhamer et al., 2018). This is because the causal conditions that lead to the presence
of an outcome are likely to be different from the causal conditions that lead to the absence of the
outcome, in this case, unsuccessful performance management implementation. In doing so, we
treat causality with regard to performance management success as asymmetric.
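In practice, this reversal is the standard fuzzy-set negation, where membership in the absent outcome is one minus membership in the present outcome. A minimal sketch (department names and scores are hypothetical):

```python
# Fuzzy-set negation: membership in "unsuccessful implementation" is
# 1 - membership in "successful implementation". Department names and
# scores below are hypothetical illustrations.

success_scores = {"Dept X": 0.2, "Dept Y": 0.7, "Dept Z": 0.9}

failure_scores = {dept: round(1 - s, 2) for dept, s in success_scores.items()}

print(failure_scores)  # {'Dept X': 0.8, 'Dept Y': 0.3, 'Dept Z': 0.1}
```

A department that is mostly in the "successful" set (0.9) is thus mostly out of the "unsuccessful" set (0.1), and the fsQCA is rerun against the negated scores.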
In conducting a fuzzy set analysis of the outcome of unsuccessful performance
management implementation across both years and both outcome measures, there were no
solutions consistently identified with the fsQCA. As such, consistency scores for all
configurations were far below the acceptable threshold level of 0.80. In these results, we see that
there is not a set-theoretic relationship for the outcome of unsuccessful performance management
implementation. This means that as discussed above there are particular paths available to
organizations that want to achieve successful performance management systems, but no clear
paths to explain why organizations may fail to achieve success. This additional finding is both
helpful, in clearly illuminating success factors and their combinations for public organizations,
and frustrating, as it does not provide clear answers for how public organizations can avoid
failure during the implementation of reforms. In conclusion, there are many ways to fail at
performance management system implementation, but specific causal factors are required in
combination to succeed.
4.3.6 Evaluating the Intersection of Outcomes Across Solutions
Boolean logic, as suggested by Ragin (1987), was used to more formally evaluate the
complementary outcome measures used in the multiple sets of fsQCA in this chapter. By
notating mathematically the causal configurations that lead to each of the outcome measures,
and then simplifying the resulting equation, the causal solutions that encompass both outcome
measures are revealed. This process generates more substantive causal power in explaining
successful performance management implementation overall. This Boolean comparison logically
combines the configurations for the two 2015 outcome measures into one equation and the
configurations for the two 2016 outcome measures into a separate equation. Boolean notation
uses the "*" sign as the logical "and," the "+" sign as the logical "or," and the "→" sign as
logical implication. Uppercase letters denote the presence of a condition, while lowercase letters
denote its absence. The specific conditions for this research are notated as follows:
• XA15: 2015 IPMU Outcome Measure
• XB15: 2015 Survey Outcome Measure
• XA16: 2016 IPMU Outcome Measure
• XB16: 2016 Survey Outcome Measure
• A: Large Budget
• B: Large Organization
• C: Strategic Plan
• D: Mayoral Attention
• E: Analytic Capacity
• F: Innovative Culture
• G: Strong Leadership
• H: Good Metrics
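The intersection and simplification steps can also be sketched programmatically. The following is my own illustrative implementation, not software used in the dissertation: each term is a set of signed literals, products containing a contradiction (both X and x) are dropped, and redundant terms are removed by Boolean absorption.

```python
# Sketch (illustrative, not the dissertation's software): intersecting two
# Boolean sum-of-products solutions. A term is a frozenset of
# (condition, present?) literals; uppercase letters in the text correspond
# to present=True, lowercase to present=False.

def parse(term):
    """'aBGH' -> frozenset of (letter, presence) literals."""
    return frozenset((ch.upper(), ch.isupper()) for ch in term)

def intersect(sum1, sum2):
    terms = set()
    for t1 in sum1:
        for t2 in sum2:
            merged = t1 | t2
            letters = [cond for cond, _ in merged]
            if len(letters) == len(set(letters)):  # drop contradictions (X and x)
                terms.add(merged)
    # absorption: drop any term that contains a simpler term as a subset
    return {t for t in terms if not any(other < t for other in terms)}

# The two 2015 solutions from this chapter.
R1 = {parse(t) for t in ["aBGH", "BCDH", "BcEFGH"]}
R2 = {parse(t) for t in ["BCH", "BGH", "CeFG", "aCFG", "BCDG", "aEFGH"]}

print(intersect(R1, R2) == R1)  # prints True
```

Running this on the 2015 solutions reproduces the result derived next: the intersection simplifies back to the terms of (R1).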
The 2015 data can be spelled out in the following two equations:
(R1) = (aBGH + BCDH + BcEFGH) → XA15
(R2) = (BCH + BGH + CeFG + aCFG + BCDG + aEFGH) → XB15
If we intersect the two equations we get the following:
(R1)(R2) = (aBGH + BCDH + BcEFGH)(BCH + BGH + CeFG + aCFG + BCDG + aEFGH)
         = (aBGH + BCDH + BcEFGH) → X15
Through the Boolean intersection of the first set of results for the 2015 IPMU outcome measure
(R1) and the second set of results for the 2015 survey outcome measure (R2), we can see that the
resulting intersection leads to the exact same set of configurations seen in (R1). From this result,
we can see that all of the configurations in (R2) are subsets of the configurations in (R1). This
intersection finding is very substantive and helps to confirm two important findings for the
research. First, that both outcome measures used in this chapter (at least for 2015 data) measure a
similar outcome construct of performance management success. Second, the intersection
reinforces that there are specific causal configurations that lead to the outcome of successful
performance management. This result strengthens the validity and reliability of the causal recipes
that have been discovered in the course of the fsQCA.
The 2016 data can be spelled out in the following two equations:
(R1) = (Cg + BCGH + BEFG) → XA16
(R2) = (Ce + Cf + cFG + BGH + CGH + FGH) → XB16
If we intersect the two equations we get the following:
(R1)(R2) = (Cg + BCGH + BEFG)(Ce + Cf + cFG + BGH + CGH + FGH)
         = (Cge + Cgf + BCGH + BcEFG + BEFGH) → X16
While the findings for the intersection of the configurations for both of the 2016 outcome
measures are not as clear-cut as those for 2015, the results are helpful for understanding the
causal recipes discovered in this chapter. We see that BCGH as a configuration is present in (R1)
and in the resulting intersection of (R1) and (R2). The rest of the configurations present in the
intersection equation are mixed subsets of the configurations in (R1) and (R2), with most of the
configurations from the intersection being subsets that differ by one condition from
configurations in (R1) or (R2). This slight difference in configurations between (R1) and (R2)
still strengthens the notion that both outcome measures are measuring a similar construct of
performance management success. This finding also strengthens the idea that the causal
configurations discovered by the fsQCA in this chapter are valid in the way they suggest
recipes for successful performance management implementation.
4.4 Discussion
The previous sections discussed causal configurations that lead to successful performance
management implementation using fsQCA analysis and in tandem with substantive knowledge
drawn from the case-studies in Chapter Three and other background research on Back to Basics.
Each of the findings from the analytic chapters in this dissertation is discussed further and
holistically in the concluding chapter, Chapter Six. As such, this discussion section summarizes
the key findings from the analysis in this chapter through the lens of the three central research
questions of this dissertation. As an inductive analytic technique, fsQCA does not provide
definitive evidence for causal relationships. Nevertheless, the causal configurations found in the
sets examined in this chapter provide insight into the factors that contribute to successful
performance management system implementation and how departments can overcome
impediments.
4.4.1A How is success defined when it comes to performance management
reforms?
This chapter utilized two complementary outcome measures that represented
performance management implementation success. The first measure was drawn from the
Innovation and Performance Management Unit within Los Angeles, while the second measure
was constructed using survey data. While the IPMU outcome measure was more process-based
and the survey outcome measure captured the more active use of performance management in
decision making, both were fundamentally meant to capture successful implementation. The
IPMU measure looks at whether processes are successfully in place, and the
survey measure looks at whether a department is successfully using these processes. This mirrors
the dichotomy seen in the literature on how to define successful reforms (Behn, 2003; 2005;
Hood, 2012; Sanger, 2008).
While the main purpose of this chapter was not to parse definitions of performance
management success, there are still several things the results can add to the understanding of this
subject. First, the results demonstrate that there can be several valid measures of success for
performance management. Rather than arguing about a single definition of success, the focus
should perhaps be on defining multiple measures to evaluate whether reforms are successful, as
this chapter does. Multiple measures help gauge success during reforms and clarify the factors,
and combinations of factors, behind that success. Second, this chapter shows that even though
there may be multiple measures of
success, many of the paths to success use similar causal combinations of factors. The intersection
of the configurations for both outcome measures provides strong evidence for this second claim.
The intersection also provides possible evidence that both outcome measures represent a very
similar construct of performance management success.
4.4.1B What factors are central to successful performance management systems
and in what combination?
This chapter sheds light on the benefits of strong leadership, good metrics, and large
organizational size in developing successful performance management reforms at the
departmental level. Key findings include:
• Strong leadership and being a large organization, usually in tandem, are consistently part
of successful performance management implementation configurations.
• Good metrics also figure prominently in successful outcomes, at times as the sole core
condition for success.
• The intersection of the configurations across both outcome measures reinforces that the
conditions of strong leadership, large organizational size, and good metrics show up in a
large number of causal recipes that lead to performance management success.
• The absence of a large budget is the one somewhat contradictory condition that exists in
several configurations across periods examined but can be explained by substantive
knowledge.
• Strategic planning, mayoral attention, analytic capacity, and innovative culture are
supporting factors in many configurations, but are not necessary or sufficient for a
successful outcome.
• Several configurations that feature one absent core condition provide evidence that
organizations can overcome certain obstacles to enact successful performance
management systems.
The results from this chapter suggest that strong leadership and being a large organization are
fairly universal factors that can positively affect organizations implementing data-driven
management reforms. Because both are core conditions in the configurations in which they
appear, it is not possible to say definitively whether one condition drives the other in
performance management success, or whether they merely work in tandem. These findings
support the literature by confirming that both are important factors for performance management
success and extend the literature by showing their impact in tandem (Behn, 2014; Nielsen, 2013;
Sanger, 2013).
Good metrics, whether in combination with other factors such as strong leadership and
large organizational size or as a single factor, also feature prominently in successful reforms.
This confirms previous research showing that successful performance management systems are
built upon measurable, meaningful, and manageable metrics (Hatry, 2006), and that good
metrics are linked to outcomes within successful departments (Behn, 2014). At the same time,
other factors theorized as important in data-driven reforms, such as innovative culture, analytic
capacity, and strategic planning, are not as central as is often argued. Further research will
hopefully clarify how much these peripheral factors matter in data-driven reforms.
4.4.1C How do organizations overcome obstacles that arise while implementing
performance management reforms?
While this chapter did not explicitly examine impediments to successful performance
management reforms, its findings provide some evidence toward answering this research
question. Several configurations developed during the fsQCA analysis showed a successful
outcome with only one absent core condition in the parsimonious solution. The absent core
conditions in these configurations include a strategic plan, innovative culture, or analytic
capacity. These configurations potentially demonstrate how organizations can overcome
obstacles to implement performance reforms successfully, a reading supported by substantive
knowledge gained through peripheral observation of a handful of departments during the
research process. The findings may also point to factors that are not critical for successful
performance management implementation. Further research is needed to explore these
configurations and to identify what other factors, if any, allow these departments to overcome
impediments to success.
Chapter Five: Multivariate Regression Analysis
The final analytic chapter of this dissertation builds upon the case-study analysis and set-
theoretic examination through two complementary multivariate regression models. The two
models are employed to study the Back to Basics reforms both at the individual operational level
and across the City as a whole (Scott, 1998; Wilson, 1989). In taking this approach, these
regression models bracket the in-depth case-study research and the configurational departmental
case analysis. The first regression model employs true panel data from individual public
managers surveyed within Los Angeles in 2015 and 2016. The second multivariate analysis uses
pooled data from all City staff surveyed in both years. While the first model might be considered
the “individual level” analysis and the second the “institutional level” analysis, their
complementary nature allows results to overlap across both levels.
While the previous two chapters assessed broader definitions of performance
management implementation success, this chapter focuses on how specific performance
measures are used and on attitudes toward performance reforms at both the individual and
institutional levels. The outcomes considered in this chapter cover awareness of and attitudes
toward performance management reforms, along with usage of and proficiency with
performance measures. This group of dependent variables traces the progression through which
performance management systems are adopted: from first awareness of reforms and the
perceptions that form, to when City staff begin to utilize metrics in their work, to the proficiency
that develops from this usage. This set of dependent variables helps to assess the important role
that public managers are expected to play in successful reforms, as theorized in previous
literature (Hatry, 2006; Moynihan, 2006; 2008; Yang & Hsieh, 2007). Examining these factors
over two time periods adds understanding of how performance management reforms may
evolve.
Some of the independent variables identified as impacting performance management
systems from the previous two analytic chapters are also present in this regression approach.
Factored index variables include previously considered themes such as strategic planning (Sole
& Schiuma, 2010), innovation (Hou et al., 2011), and analytic resources (Lu, 2008). Resources
(Boyne, 2003), bureaucratic structures (Behn, 2014), and information technology systems
(Melkers & Willoughby, 2001; 2005) are also considered. These factors are included in broader
indexes in this chapter and represent organizational impediments and supporting organizational
factors that may inhibit or assist performance reforms.
With individuals (and, by extension, the Los Angeles City government they collectively
make up) the focus of this chapter, new independent variables are introduced that connect more
directly to individual public workers. Both public service motivation (Moynihan & Pandey,
2007; 2010) and training in data-driven practices (Kroll & Moynihan, 2015) have been theorized
as positive factors for performance management reforms. The role of an organization’s Wilson
typology is also considered in this chapter, as it has been understudied at the individual public
worker level (Radin, 2006; Wilson, 1989). These variables are considered alongside other
individual control factors such as education level, public manager experience, and the number of
employees supervised. Because all of the aforementioned factors were examined in depth in the
Chapter One literature review and the previous two analytic chapters, the literature is discussed
only briefly in this chapter.
The two models created for this chapter utilize the same variables to help standardize
how performance reforms are understood, but were designed in response to the different sets of
data available. For the true panel data, a random effects model over time was developed. For the
larger set of pooled data, a pooled ordinary least squares (OLS) model with a time-effect dummy
variable was built. The remaining sections of this chapter lay out the creation of these models,
their results, and the subsequent analysis. The chapter begins by discussing the two sets of data
used for the models and then turns to the definition and creation of the variables. From there, it
discusses the models employed, presents their results, and concludes with analysis and
discussion of the findings.
5.1 Data and Variables
5.1.1 Data
The data for the models in this chapter comes almost exclusively from the survey of
middle and upper-level public managers who are involved with performance measures. The
survey and related data collection were discussed thoroughly in the previous chapter on
methodology. This section reviews the data from the survey that was utilized in the two
regression models discussed in this chapter. The first set of data is true panel data drawn from
respondents who answered the survey in both the 2015 and 2016 iterations; this group of City
managers comprises 394 individual respondents. Incorporated into a random effects model over
time, this data represents the impact of performance management reforms primarily at the
individual level and secondarily at the institutional level. The second set of data, used in the OLS
model, is made up of all survey respondents across both iterations: 920 respondents from the
2015 survey and 763 from the 2016 survey, for a total of 1,683. This second set primarily
represents the institutional level, with individual change within Los Angeles at a secondary
level.
5.1.1A Dependent Variables
Both of the models created in this chapter include the same five dependent variables. In
fact, except for one time-related variable that was required to be changed due to the difference
between models (discussed in the next section), all of the variables across models were held
constant. This consistent approach allows for a more reliable understanding of the actual change
that happened within the City of Los Angeles, as the models can partially cross-validate each
other. Four of the dependent variables examined are defined as "individual" variables, as they are
meant to represent the characteristics of individual public managers within the City. When taken
in aggregate across the large pool of data collected for this research, these individual-level
variables can also partially represent the institution of the Los Angeles City government as a
whole. The four individual dependent variables studied in the models are: (1) Awareness of
Performance Management Reforms, (2) Attitudes Towards Performance Management Reforms,
(3) Individual Usage of Performance Information, and (4) Proficiency in Using Performance
Information. These four individual dependent variables are discussed in-depth below.
The fifth dependent variable considered in each of the regression models is a group or
organizational level variable. Though this variable is drawn from individual answers to a set of
survey questions, the answers were intended to represent the characteristics of the group or sub-
unit that the individual worked within. The fifth dependent variable examined in this chapter is
Organizational Usage of Performance Information.
Three of the five dependent variables are made up of indexes created from a series of
survey questions meant to represent the latent variable. Typically, exploratory factor analysis is
used in the creation of such indexes. However, in the case of the questions developed for this
research, questions were intentionally grouped in advance to cover specific subjects of interest to
the research. Additionally, many of the survey constructs were derived from previous research
(Meier & O'Toole, 2001; Melkers & Willoughby, 1998; 2001; 2005; Moynihan & Pandey, 2010;
Perry, 1996; 2001). As such, no exploratory factor analysis was conducted in the course of
generating the indexes. Confirmatory factor analysis was used to define the final composition of
the index variables and to fine-tune the complete model, as discussed in a subsequent section of
this chapter. All of the indexed dependent variables had excellent Cronbach’s Alpha scores
(greater than 0.80), indicating strong internal consistency among their items. The specific survey
questions used in the creation of index variables are included in Appendix D. The five dependent
variables are described below.
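As a point of reference, the Cronbach's Alpha statistic reported above can be computed directly from item responses. The sketch below uses simulated Likert items, not the survey data, to illustrate the calculation:

```python
# Sketch of the Cronbach's Alpha calculation used to check the indexes
# (simulated Likert responses; rows = respondents, columns = items in one index).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, k_items) array. Returns Cronbach's Alpha."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each individual item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(3, 1, size=(500, 1))  # a shared latent trait drives all six items
items = np.clip(np.round(latent + rng.normal(0, 0.6, size=(500, 6))), 1, 5)
print(round(cronbach_alpha(items), 2))    # strongly inter-correlated items push alpha above 0.80
```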
Awareness of Performance Management Reforms: This variable is derived from a single
survey question that asked public managers about their awareness of the requirements of the
Back to Basics initiative. This question was scored on a four-point Likert scale that rated the
managers from “low” to “extensive and involved in designing requirements.”
Attitudes Towards Performance Management Reforms: This variable is derived from a
single survey question that gauged City staff’s feelings about how much potential performance
management had to improve their department’s goals. This question was scored on a five-point
Likert scale from “disagree strongly” to “agree strongly.”
Individual Usage of Performance Information: Six questions were used to create the index for
this variable, which was then factored. The questions consider a number of activities that City
managers engaged in with regard to using performance measures in their work, from setting
program priorities to allocating resources. A different five-point Likert scale was employed,
ranging from “to no extent” to “to a very great extent.”
Organizational Usage of Performance Information: This indexed variable was created using
five factored questions that represent organizational group usage of performance measures
within City departments. The questions covered topics ranging from whether groups used
metrics to understand how they operate to rating the quality of work conducted by teams. The
same five-point Likert scale was employed as for the previous variable on individual
performance information usage.
Proficiency in Using Performance Information: The final dependent variable is an index
comprised of four factored questions. City managers were posed a set of questions that asked
them to rate their abilities in areas such as interpreting performance measures and using such
metrics to create reports. A five-point Likert scale pegged to proficiency was employed, starting
at a proficiency level of “none” and going all the way up to “expert.”
5.1.1B Independent Variables
The two models discussed in this chapter each employ three independent variables that
were predicted to impact the range of dependent variables. In addition, a number of control
variables that are also considered to impact performance management reforms are discussed in
the next section. Of the three independent variables considered, one involves individual traits of
public managers, one covers organizational actions, and the third variable examines
organizational structure. The three independent variables are: (1) Public Service Motivation, (2)
Training in Performance Information Use, and (3) Wilson Typology – Production Organization.
Two of the variables are composed of factored indexes, while the third is a coded dummy
variable. As with the dependent variables, exploratory factor analysis was not utilized, because
the survey questions were specifically designed to capture the latent variables they represent.
However, as discussed previously, confirmatory factor analysis was utilized in the final creation
of the variables and the loading of factors. The indexed variables once again displayed excellent
Cronbach’s Alpha scores, each greater than 0.80. The full set of questions used for the
independent variables can be viewed in Appendix D.
Public Service Motivation: This indexed variable is built upon seven factored questions,
adapted from Perry (1996; 2000), that cover the public service motivations of public sector
workers. The questions range from asking managers whether public service is important to them
to whether they are willing to make sacrifices for the good of society. Answers were given on a
five-point Likert scale from “disagree strongly” to “agree strongly.”
Training in Performance Information Use: This variable is composed of a factored index of
five questions, which gauge how much training in performance management managers have
received from the City of Los Angeles. Each question provided a binary option of answering
either “yes” or “no” to whether training was received. Questions covered training in subjects
such as strategic planning, designing performance measures, and using performance information.
Wilson Typology – Production Organization: This independent variable codes whether the
department the individual public manager works within is classified as a production organization
under Wilson typology. The coding of departments was conducted independently by the author
and two other researchers. Differences in coding were resolved through a majority vote and
further discussion. Uncertainty remained about whether the Los Angeles Fire Department and
the Los Angeles Police Department were production organizations; as a result, a sensitivity
analysis was conducted with one model that included these two departments and one that
excluded them.
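The majority-vote reconciliation step can be expressed as a small procedure. The department names below are invented for illustration; the category labels follow Wilson's four types (production, procedural, craft, and coping organizations).

```python
# Hypothetical sketch of resolving three coders' Wilson-typology codes by majority vote.
from collections import Counter

codes = {
    "Dept A": ["production", "production", "craft"],
    "Dept B": ["procedural", "production", "procedural"],
    "Dept C": ["coping", "craft", "production"],  # no majority -> flagged for discussion
}

def resolve(votes):
    """Return the majority label, or 'discuss' when no label wins a strict majority."""
    label, count = Counter(votes).most_common(1)[0]
    return label if count > len(votes) / 2 else "discuss"

resolved = {dept: resolve(v) for dept, v in codes.items()}
print(resolved)  # Depts A and B get majority labels; Dept C goes to further discussion
```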
5.1.1C Control Variables
Additional control variables were added to the models to cover other factors that may
impact how public managers approach performance management reforms. Some of these latent
variables incorporate factors covered in the previous two analytic chapters, such as strategic
planning, leadership, departmental culture, information technology, and resources. Other
variables control for a public manager’s level of advancement within the City bureaucracy, time
within the organization, and level of education. Some control variables are survey questions
indexed using confirmatory factor analysis, like the variables in the previous sections, while
others are represented by a single item. Once again, all factored index variables displayed
excellent Cronbach’s Alpha scores (above 0.80), indicating good internal consistency. A full
description of the survey questions used for each of the control variables can be found in
Appendix D.
Organizational Communication 1: This index variable is composed of four factored questions
that incorporate cultural and structural factors into how units within a department communicate
about performance management reforms. Questions incorporate aspects of strategic planning,
developing performance measures, and innovative culture. Questions were based upon a five-
point Likert scale ranging from “to no extent” to “to a very great extent.”
Organizational Communication 2: The second organizational communication variable is based
on one survey question that considers the amount of communication a public manager has had
with other City staff within their department and across City departments. This variable is coded
numerically based on the amount of communication conducted.
Organizational Impediments to Performance Management Reforms: This large factored
index variable incorporates a number of factors that have been previously discussed as
potentially impeding performance reforms. Questions covered subjects such as poor data
systems, lack of resources, lack of analytic capacity, and dearth of innovative culture, among
others. These factors are rated on a five-point Likert scale ranging from “to no extent” to “to a
very great extent.”
Length of Use: This variable uses one survey question that captures how long a manager’s
department had been utilizing performance information in its work.
Number of Employees Supervised: This variable counts the number of employees the surveyed
individual manages. It partially serves as a latent measure of the manager’s level of
responsibility within the City bureaucracy.
Years Worked at the City of Los Angeles: This variable measures the amount of time an
employee has worked at the City of Los Angeles.
Highest Level of Education: This variable measures the highest degree of education the public
manager has achieved. It is coded as a categorical variable from “less than a high school
diploma” up to “Ph.D.”
5.1.1D Time Effect Variables
With survey data covering two time periods, the effect of time comes into play when
considering the results. As discussed previously, the effects of time on performance management
reforms are contested in the literature (Bourdeaux & Chikoto, 2008; Dull, 2009), making time a
variable worth exploring in this analysis. Because different models are employed for the panel
data and the pooled data, slightly different time effects were needed to represent the change in
time. The panel data is indexed by each unique individual and by the change in year, so the time
effect is built directly into the model. For the pooled data, a dummy variable for the change in
year was created, making the time effect visible as a variable in the model.
5.1.2 Data Testing and Model Creation
This section of the chapter lays out the various steps undertaken to make sure the data
was suitable for analysis, while also detailing the reasons for the two econometric regression
models chosen.
5.1.2A Test for Normality
The skewness and kurtosis statistics for all of the variables in both models were checked
to evaluate the normality of the variables. Skewness measures the degree and direction of
asymmetry of a variable, with zero representing a symmetric (normal) distribution. Kurtosis
represents how heavy the tails of a variable’s distribution are; a normal distribution has a
kurtosis of 3, with heavier- or lighter-tailed distributions moving significantly above or below
this number. Table 5.1 and Table 5.2 display the skewness and kurtosis statistics for the pooled
data and the panel data. The skewness statistics for both sets of data are generally close to a
normal distribution, except in the case of Years Worked at the City of Los Angeles. This makes
sense and is built into the expectations of the model, as the survey covered middle and upper-
level managers, who are more likely to have worked at the City for a longer period of time. The
kurtosis statistics show that some variables have a moderate amount of kurtosis in both data sets.
However, with no kurtosis statistic over 6 or under 1, the distributions are still reasonably normal
and do not present a concern for the analysis.
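For reference, the two statistics can be computed as sketched below on a simulated variable (not the survey data). Note the convention: this chapter reports kurtosis on the scale where a normal distribution is 3, which corresponds to scipy's fisher=False option (the default reports excess kurtosis, centered at 0).

```python
# Sketch of the skewness/kurtosis normality check (simulated variable).
import numpy as np
from scipy.stats import kurtosis, skew

rng = np.random.default_rng(42)
score = rng.normal(loc=3.4, scale=0.75, size=100_000)  # e.g. a roughly normal index score

print(round(skew(score), 2))                     # near 0 for a symmetric distribution
print(round(kurtosis(score, fisher=False), 2))   # near 3 for normal tails (Pearson convention)
```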
Table 5.1 Skewness and Kurtosis of Pooled Data Variables
N    Skewness    Kurtosis
Awareness of PM Reforms 1,035 0.14 1.77
Attitudes Towards PM Reforms 1,262 -0.63 3.17
Ind. Usage of Performance Info. 1,150 -0.59 3.04
Org. Usage of Performance Info. 1,156 -0.37 2.90
Proficiency W/ Performance Info. 1,439 -0.18 2.84
Public Service Motivation 1,433 -0.52 4.61
Training in Performance Info. 1,392 0.28 1.37
Wilson Type - Production Org. 1,560 0.01 1.00
Org. Communication 1 1,427 -0.50 2.64
Org. Communication 2 587 -0.09 1.34
Org. Impediments to PM 1,259 -0.12 3.04
Length of Use 1,137 -0.71 1.87
# of Employees Supervised 1,507 0.22 1.35
Years at the City of LA 1,506 -1.79 5.27
Highest Level of Education 1,505 -0.84 4.16
Table 5.2 Skewness and Kurtosis of Panel Data Variables
N    Skewness    Kurtosis
Awareness of PM Reforms 566 0.11 1.72
Attitudes Towards PM Reforms 662 -0.59 3.08
Ind. Usage of Performance Info. 598 -0.55 3.03
Org. Usage of Performance Info. 599 -0.28 2.65
Proficiency W/ Performance Info. 740 -0.19 2.93
Public Service Motivation 739 -0.28 3.07
Training in Performance Info. 725 0.20 1.34
Wilson Type - Production Org. 788 0.12 1.01
Org. Communication 1 734 -0.54 2.67
Org. Communication 2 320 -0.05 1.27
Org. Impediments to PM 666 -0.01 2.80
Length of Use 594 -0.74 1.93
# of Employees Supervised 767 0.25 1.36
Years at the City of LA 768 -1.91 5.81
Highest Level of Education 768 -0.85 4.37
5.1.2B Multicollinearity Analysis
To assess potential collinearity issues between the independent and control variables, a
variance inflation factor (VIF) test and collinearity tolerance test were conducted. Table 5.3 and
Table 5.4 show the VIF test and collinearity tolerance results for both the pooled data and panel
data, respectively. With all VIF statistics near 1 for both data sets, there are no multicollinearity
concerns for either the independent or control variables. The collinearity tolerance statistics
support this conclusion. Confirmatory factor analysis was used as a second check; its overall
model fit indices were excellent (RMSEA = 0.041, CFI = 0.98, SRMR = 0.057), further
indicating little multicollinearity concern.
Table 5.3 Multicollinearity Statistics for Independent and
Control Variables (Pooled Data)
VIF    Collinearity Tolerance
Public Service Motivation 1.07 0.93
Training in Performance Info. 1.20 0.83
Wilson Type - Production Org. 1.11 0.90
Org. Communication 1 1.29 0.77
Org. Communication 2 1.06 0.95
Org. Impediments to PM 1.13 0.88
Length of Use 1.12 0.89
# of Employees Supervised 1.16 0.86
Years at the City of LA 1.13 0.89
Highest Level of Education 1.08 0.92
Table 5.4 Multicollinearity Statistics for Independent and
Control Variables (Panel Data)
VIF    Collinearity Tolerance
Public Service Motivation 1.18 0.85
Training in Performance Info. 1.24 0.81
Wilson Type - Production Org. 1.12 0.89
Org. Communication 1 1.38 0.73
Org. Communication 2 1.08 0.93
Org. Impediments to PM 1.14 0.88
Length of Use 1.10 0.91
# of Employees Supervised 1.20 0.84
Years at the City of LA 1.12 0.89
Highest Level of Education 1.06 0.94
5.1.2C Descriptive Statistics and Correlation
Table 5.5 through Table 5.8 present the descriptive statistics for the pooled data and the
panel data, broken down by the year collected. Further, Table 5.9 and Table 5.10 present the
differences in mean scores for each variable between years. These between-year differences
provide additional insight into the progression of performance management reforms in Los
Angeles. For many of the variables in the regression models, the change between years is tiny
and not statistically significant for either the pooled data or the panel data. However, in both
data sets there is a statistically significant increase in organizational usage of performance
information, training in performance information use, and the second organizational
communication variable. In addition, the pooled data shows a statistically significant increase in
individual usage of performance information. The IPMU instituted a citywide training program
in late 2015 (later canceled), which may account for the increased training, while the maturation
of Back to Basics could account for the higher usage of performance measures at both the
individual and organizational levels.
The more surprising result from the descriptive data is that even though communication
between individuals increased between 2015 and 2016, there was no corresponding increase in
the awareness or attitudes of City managers toward performance management. In the same vein,
it is surprising that even though training scores increased, there was no statistically significant
improvement in City managers’ proficiency in using performance information. The descriptive
data thus points to a performance management system in LA that grew in process terms but did
not necessarily mature in terms of active data-driven analysis.
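The significance stars in Tables 5.9 and 5.10 reflect tests of the year-over-year mean differences. A minimal sketch of the two test forms on synthetic data follows: a paired t-test suits the panel respondents (the same managers in both years), while the pooled comparison treats the two waves as independent samples. The variable and its values are hypothetical.

```python
# Sketch of the between-year difference tests (synthetic data).
import numpy as np
from scipy.stats import ttest_ind, ttest_rel

rng = np.random.default_rng(3)
n = 300
usage_2015 = rng.normal(3.3, 1.0, n)                 # e.g. org. usage of performance info.
usage_2016 = usage_2015 + rng.normal(0.15, 0.5, n)   # same managers, small average gain

t_panel, p_panel = ttest_rel(usage_2015, usage_2016)   # paired: exploits within-person correlation
t_pooled, p_pooled = ttest_ind(usage_2015, usage_2016) # unpaired: treats waves as separate samples
print(f"paired p = {p_panel:.4f}, unpaired p = {p_pooled:.4f}")
```

Because the paired test removes the stable between-manager variation, it can detect the same small shift at a much lower p-value, which is one advantage of the true panel data over the pooled comparison.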
Examining the relative and absolute descriptive indicators displayed in Tables 5.9 and
5.10 reveals a couple of other notable characteristics. All of the dependent variables tend to fall
close to 3 on a five-point Likert scale, indicating only moderate involvement in performance
reforms by City managers generally. However, both attitudes towards performance management
reforms and public service motivation are significantly higher for City managers as compared to
other variables. This may indicate that City managers are high-minded in how they approach
both their overall work and performance management reforms, compared to what the data says
about their actions. This finding comports with the analysis of the case-study departments,
where many of those interviewed had excellent intentions towards reforms but struggled to
execute performance management in practice. This struggle is potentially due to reasons
discussed in the two previous chapters, including deficient data systems, resource challenges,
cultural issues, and a lack of meaningful metrics, among other causes.
Table 5.11 and Table 5.12 below provide the correlation coefficients between the
independent and control variables for both the pooled data and panel data. The large majority of
coefficients show a weak positive relationship between the control variables, with a handful
showing a non-existent or weak negative relationship. No combination of independent or control
variables rises to a level worth further discussion here.
Table 5.5 Descriptive Statistics for 2015 Pooled Data
N    Mean    Standard Deviation    Min    Max
Awareness of PM Reforms 630 2.54 1.06 1 4
Attitudes Towards PM Reforms 772 3.80 0.94 1 5
Ind. Usage of Performance Info. 688 3.18 1.17 1 5
Org. Usage of Performance Info. 693 3.22 1.09 1 5
Proficiency W/ Performance Info. 846 3.44 0.75 1 5
Public Service Motivation 843 4.25 0.46 1 5
Training in Performance Info. 815 0.40 0.41 0 1
Wilson Type - Production Org. 905 0.50 0.50 0 1
Org. Communication 1 839 3.41 1.15 1 5
Org. Communication 2 360 6.89 3.59 1 11
Org. Impediments to PM 753 2.54 0.92 1 5
Length of Use 681 4.49 1.96 1 6
# of Employees Supervised 878 24.64 20.01 0 51
Years at the City of LA 878 18.20 5.07 0.5 21
Highest Level of Education 877 4.26 0.90 1 6
Table 5.6 Descriptive Statistics for 2016 Pooled Data
N    Mean    Standard Deviation    Min    Max
Awareness of PM Reforms 405 2.53 1.05 1 4
Attitudes Towards PM Reforms 490 3.88 0.92 1 5
Ind. Usage of Performance Info. 462 3.30 1.06 1 5
Org. Usage of Performance Info. 463 3.35 1.03 1 5
Proficiency W/ Performance Info. 593 3.46 0.74 1 5
Public Service Motivation 590 4.22 0.45 1 5
Training in Performance Info. 577 0.46 0.43 0 1
Wilson Type - Production Org. 655 0.49 0.50 0 1
Org. Communication 1 588 3.50 1.11 1 5
Org. Communication 2 227 7.35 3.49 1 11
Org. Impediments to PM 506 2.50 0.85 1 5
Length of Use 456 4.50 1.79 1 6
# of Employees Supervised 629 25.85 20.08 0 51
Years at the City of LA 628 17.97 5.15 0.5 21
Highest Level of Education 628 4.36 0.88 1 6
Table 5.7 Descriptive Statistics for 2015 Panel Data
N    Mean    Standard Deviation    Min    Max
Awareness of PM Reforms 294 2.64 1.05 1 4
Attitudes Towards PM Reforms 349 3.85 0.90 1 5
Ind. Usage of Performance Info. 308 3.26 1.10 1 5
Org. Usage of Performance Info. 308 3.30 1.07 1 5
Proficiency W/ Performance Info. 375 3.48 0.74 1 5
Public Service Motivation 374 4.27 0.43 1 5
Training in Performance Info. 365 0.40 0.41 0 1
Wilson Type - Production Org. 394 0.47 0.50 0 1
Org. Communication 1 370 3.50 1.10 1 5
Org. Communication 2 179 6.76 3.61 1 11
Org. Impediments to PM 344 2.49 0.84 1 5
Length of Use 305 4.48 1.96 1 6
# of Employees Supervised 385 24.29 19.58 0 51
Years at the City of LA 386 18.21 4.96 0.5 21
Highest Level of Education 386 4.34 0.85 1 6
Table 5.8 Descriptive Statistics for 2016 Panel Data
N    Mean    Standard Deviation    Min    Max
Awareness of PM Reforms 272 2.57 1.06 1 4
Attitudes Towards PM Reforms 313 3.86 0.90 1 5
Ind. Usage of Performance Info. 290 3.36 1.01 1 5
Org. Usage of Performance Info. 291 3.42 0.98 1 5
Proficiency W/ Performance Info. 365 3.49 0.74 1 5
Public Service Motivation 365 4.25 0.44 1 5
Training in Performance Info. 360 0.49 0.43 0 1
Wilson Type - Production Org. 394 0.47 0.50 0 1
Org. Communication 1 364 3.52 1.13 1 5
Org. Communication 2 141 7.49 3.42 1 11
Org. Impediments to PM 322 2.48 0.83 1 5
Length of Use 289 4.63 1.68 1 6
# of Employees Supervised 382 25.72 20.02 0 51
Years at the City of LA 382 18.64 4.43 0.5 21
Highest Level of Education 382 4.37 0.86 1 6
Table 5.10 Difference of Mean Scores Between 2015 and 2016 (Panel Data)
N Mean 2015 Mean 2016 Difference
Awareness of PM Reforms 566 2.64 2.57 -0.07
Attitudes Towards PM Reforms 662 3.85 3.86 0.01
Ind. Usage of Performance Info. 598 3.26 3.36 0.10
Org. Usage of Performance Info. 599 3.30 3.42 0.12*
Proficiency W/ Performance Info. 740 3.48 3.49 0.01
Public Service Motivation 739 4.27 4.25 -0.02
Training in Performance Info. 725 0.40 0.49 0.09***
Wilson Type - Production Org. 788 0.47 0.47 0.00
Org. Communication 1 734 3.50 3.52 0.02
Org. Communication 2 329 6.76 7.49 0.73*
Org. Impediments to PM 666 2.49 2.48 -0.01
Length of Use 594 4.48 4.63 0.15
# of Employees Supervised 767 24.29 25.72 1.43
Years at the City of LA 768 18.21 18.64 0.43
Highest Level of Education 768 4.34 4.37 0.03
Significant at: *10%, **5%, ***1%
Table 5.9 Difference of Mean Scores Between 2015 and 2016 (Pooled Data)
N Mean 2015 Mean 2016 Difference
Awareness of PM Reforms 1,035 2.54 2.53 -0.01
Attitudes Towards PM Reforms 1,262 3.80 3.88 0.08
Ind. Usage of Performance Info. 1,150 3.18 3.30 0.12*
Org. Usage of Performance Info. 1,156 3.22 3.35 0.13**
Proficiency W/ Performance Info. 1,439 3.44 3.46 0.02
Public Service Motivation 1,433 4.25 4.22 -0.03
Training in Performance Info. 1,392 0.40 0.46 0.06***
Wilson Type - Production Org. 1,560 0.50 0.49 -0.01
Org. Communication 1 1,427 3.41 3.50 0.09
Org. Communication 2 587 6.89 7.35 0.46*
Org. Impediments to PM 1,259 2.54 2.50 -0.04
Length of Use 1,137 4.49 4.50 0.01
# of Employees Supervised 1,507 24.64 25.85 1.21
Years at the City of LA 1,506 18.20 17.97 -0.23
Highest Level of Education 1,505 4.26 4.36 0.10**
Significant at: *10%, **5%, ***1%
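The starred differences in the panel comparison above come from testing whether each mean change between 2015 and 2016 differs from zero; for matched respondents this is a paired comparison. A minimal sketch with simulated, illustrative data (the variable names and values below are not the dissertation's actual survey data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative panel: the same 300 managers rated "organizational usage"
# (1-5 scale) in 2015 and again in 2016, with a small upward shift.
usage_2015 = rng.normal(3.30, 1.07, 300).clip(1, 5)
usage_2016 = (usage_2015 + rng.normal(0.12, 0.8, 300)).clip(1, 5)

# Paired t-test: does the mean within-person change differ from zero?
t_stat, p_value = stats.ttest_rel(usage_2016, usage_2015)
diff = usage_2016.mean() - usage_2015.mean()
print(f"mean difference = {diff:.2f}, t = {t_stat:.2f}, p = {p_value:.3f}")
```

A paired test of this kind uses each respondent's own change, which is why the panel table can find significance at smaller mean differences than an unpaired comparison of the pooled samples would.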
Table 5.11 Correlations for Independent and Control Variables (Pooled Data)
1 2 3 4 5 6 7 8 9 10
1 Public Service Motivation 1.00
2 Training in Performance Info. 0.09*** 1.00
3 Wilson Type - Production Org. 0.01 -0.02 1.00
4 Org. Communication 1 0.18*** 0.33*** 0.03 1.00
5 Org. Communication 2 0.08** 0.10** 0.01 0.16*** 1.00
6 Org. Impediments to PM 0.11*** -0.07*** 0.05* -0.22*** -0.05 1.00
7 Length of Use 0.07** 0.16*** 0.06** 0.26*** 0.11*** -0.08** 1.00
8 # of Employees Supervised 0.20*** 0.18*** 0.18*** 0.21*** 0.19*** 0.01 0.12*** 1.00
9 Years at the City of LA -0.04* 0.21*** 0.09*** -0.02 -0.02 -0.01 0.08*** 0.08*** 1.00
10 Highest Level of Education 0.10*** 0.03 -0.07*** -0.01 0.01 0.05* 0.01 -0.11*** -0.18*** 1.00
Significant at: *10%, **5%, ***1%
Table 5.12 Correlations for Independent and Control Variables (Panel Data)
1 2 3 4 5 6 7 8 9 10
1 Public Service Motivation 1.00
2 Training in Performance Info. 0.11*** 1.00
3 Wilson Type - Production Org. -0.03 -0.08** 1.00
4 Org. Communication 1 0.25*** 0.33*** 0.01 1.00
5 Org. Communication 2 0.10* 0.14** -0.04 0.18*** 1.00
6 Org. Impediments to PM 0.07* -0.08** 0.00 -0.25*** -0.06 1.00
7 Length of Use 0.09** 0.14*** 0.11*** 0.17*** 0.10* -0.05 1.00
8 # of Employees Supervised 0.23*** 0.19*** 0.14*** 0.21*** 0.20*** -0.05 0.09** 1.00
9 Years at the City of LA -0.08** 0.20*** 0.07** -0.08** -0.05 0.04 0.03 -0.01 1.00
10 Highest Level of Education 0.12*** 0.01 -0.05 0.01 0.07 0.01 0.05 -0.01 -0.13*** 1.00
Significant at: *10%, **5%, ***1%
5.1.3 Regression Models Employed
As discussed at the beginning of the chapter, two regression models were employed in the course of the research. The first was an ordinary least squares (OLS) model with a time effect added to account for the change in years. This model was applied to the pooled data and is sometimes called a pooled OLS model in the literature (Dielman, 1983; Stritch, 2017). Pooled OLS has been argued to be appropriate for the analysis of pooled time series data (Dielman, 1983; Stritch, 2017), and the model also draws partially on the work of Meier and O'Toole (1999; 2001).
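A pooled OLS specification of this kind, stacking both survey waves and adding a year indicator, can be sketched as follows (the variable names and data are illustrative, not the dissertation's):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# Illustrative pooled data: two survey waves stacked into one frame.
n = 400
df = pd.DataFrame({
    "year": np.repeat([2015, 2016], n // 2),
    "psm": rng.normal(4.25, 0.44, n),
    "org_comm": rng.normal(3.4, 1.1, n),
})
df["awareness"] = (0.3 * df["psm"] + 0.14 * df["org_comm"]
                   + rng.normal(0, 1, n))

# Pooled OLS with a time effect: C(year) adds a 2016 dummy variable.
model = smf.ols("awareness ~ psm + org_comm + C(year)", data=df).fit()
print(model.params)
```

The coefficient on the year dummy plays the role of the "Time Effect" rows in the OLS tables below: it absorbs any common shift in the outcome between the two waves.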
The second regression model employed in this chapter is a random-effects model applied to the panel data. While there is some skepticism about random-effects models in economic policy research, these models have been used successfully in both political science and public management research (Bell & Jones, 2015; Wooldridge, Forthcoming). Early versions of this research considered a fixed-effects model, but three key factors led to the final random-effects specification. First, a series of Hausman tests run during iterative refinement of the model all indicated that a random-effects model was appropriate over a fixed-effects model, in the majority of cases by a wide margin. Second, VIF testing showed an almost complete absence of multicollinearity among the independent and control variables, easing the concern that a fixed-effects model would be needed to remedy endogeneity. Finally, a random-effects model is most appropriate when any potential omitted variables are time variant (Bell & Jones, 2015); with performance management system data collected over time, any omitted variable would likely shift over time along with the other variables in the model.
As noted earlier, confirmatory factor analysis showed that the choice of variables in the model displayed an excellent fit (RMSEA = 0.041, CFI = 0.98, SRMR = 0.057). All factor loadings were above 0.65, and in many cases much higher. Finally, both models were run twice: once with the police and fire departments classified as production organizations under the Wilson typology, and once with them excluded from that classification. This allowed a sensitivity analysis around the disputed classification of these two City departments. The results in the next section display each model under both classifications.
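The reclassification check just described can be automated by refitting the same specification under each coding of the disputed departments. A sketch with simulated respondents and illustrative department labels (only LAPD and LAFD are taken from the text; the other names and all values are invented for the example):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)

# Illustrative respondent-level data with a department label.
depts = rng.choice(
    ["Sanitation", "Transportation", "LAPD", "LAFD", "Planning"], 600)
df = pd.DataFrame({
    "dept": depts,
    "psm": rng.normal(4.25, 0.44, 600),
})
df["awareness"] = 0.3 * df["psm"] + rng.normal(0, 1, 600)

# Two codings of the Wilson production-organization dummy: one treating
# LAPD/LAFD as production organizations, one excluding them.
production_core = {"Sanitation", "Transportation"}
codings = {
    "including LAPD & LAFD": production_core | {"LAPD", "LAFD"},
    "excluding LAPD & LAFD": production_core,
}
for label, members in codings.items():
    df["production"] = df["dept"].isin(members).astype(int)
    fit = smf.ols("awareness ~ psm + production", data=df).fit()
    print(label, round(fit.params["production"], 3))
```

Comparing the two fitted coefficients on the dummy is exactly the side-by-side reading the paired columns in Tables 5.13-5.22 support.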
5.2 Results and Analysis
The regression results are presented below, organized by the five dependent variables, with the two regression models for each dependent variable examined together. A discussion connecting the results to the research questions of this dissertation follows.
5.2.1 Awareness of Performance Management Reforms
Of the three independent variables included in both regression models, public service
motivation is the only one that shows a statistically significant impact on public managers’
awareness of performance management reforms. This impact is magnified in the smaller group
of City managers included in the panel data as compared to the pooled data. The effect is also
slightly more prominent when the LAPD and LAFD are removed as production organizations
from the Wilson typology variable. Intuitively, this result makes sense, as public service motivation could drive public managers to be more attuned to their work.
The three organizational variables have a weak but statistically significant effect on public managers' awareness of performance management reforms, each in the expected direction: both organizational communication variables have a positive impact, while organizational impediments have a negative impact. Surprisingly, two experience-related control variables, length of use of performance information and years worked at the City of Los Angeles, have a negative, albeit weak, impact on awareness. This finding is counterintuitive, as longer use of performance management systems would most likely mean better awareness.
Finally, in examining differences across and within the models, there are slight differences between the OLS model and the random-effects model. In general, the models' findings mirror each other, though the relationships in the random-effects model are marginally stronger. This could result from the implicit time effect in the random-effects model (though the OLS time effect has no impact), but is more likely due to the more consistent panel data sample. The same stronger association in the random-effects model is observed in the four other sets of models discussed in the next sections. Varying the models between inclusion and exclusion of the LAPD and LAFD as production organizations does not produce a consistent trend in either direction.
Table 5.13 OLS Regression Models for Awareness of Performance Management Reforms (Pooled Data)
                                  Including LAPD & LAFD      Excluding LAPD & LAFD
                                  as Production Orgs.        as Production Orgs.
                                  b          Std. Error      b          Std. Error
Public Service Motivation         0.28***    0.09            0.30***    0.10
Training in Performance Info.     0.13       0.11            0.13       0.11
Wilson Type - Production Org.     -0.12      0.09            0.13*      0.09
Org. Communication 1              0.14***    0.05            0.13***    0.05
Org. Communication 2              0.05***    0.01            0.06***    0.01
Org. Impediments to PM            -0.14**    0.06            -0.12**    0.06
Length of Use                     -0.08***   0.02            -0.07***   0.02
# of Employees Supervised         0.01***    0.01            0.01**     0.01
Years at the City of LA           -0.16**    0.01            -0.16**    0.01
Highest Level of Education        0.02       0.05            0.05       0.05
Time Effect                       -0.05      0.09            -0.05      0.09
Constant                          1.46***    0.53            1.31***    0.53
R-Square                          0.21                       0.22
N                                 478                        478
Significant at: *10%, **5%, ***1%
Table 5.14 Random-Effects Regression Models for Awareness of Performance Management Reforms (Panel Data)
                                  Including LAPD & LAFD      Excluding LAPD & LAFD
                                  as Production Orgs.        as Production Orgs.
                                  b          Std. Error      b          Std. Error
Public Service Motivation         0.38***    0.13            0.40***    0.13
Training in Performance Info.     0.01       0.13            0.01       0.13
Wilson Type - Production Org.     -0.10      0.13            0.03       0.14
Org. Communication 1              0.08       0.06            0.07       0.06
Org. Communication 2              0.05***    0.01            0.05***    0.01
Org. Impediments to PM            -0.17**    0.08            -0.16**    0.08
Length of Use                     -0.06**    0.03            -0.06**    0.03
# of Employees Supervised         0.01**     0.01            0.01*      0.01
Years at the City of LA           -0.01      0.01            -0.01      0.01
Highest Level of Education        0.04       0.07            0.04       0.07
Constant                          1.22**     0.74            1.12**     0.74
R-Square                          0.20                       0.24
N                                 274                        274
Significant at: *10%, **5%, ***1%
5.2.2 Attitudes Toward Performance Management Reforms
Once again, public service motivation is the only independent variable with a statistically significant impact on public managers' attitudes towards performance management reforms. The positive impact is marginally stronger than in the previous model of awareness. Again, the results are slightly more pronounced for the panel data in the random-effects model than for the pooled data in the OLS model. Neither of the other two independent variables had any discernible impact on public managers' attitudes towards performance management reforms.
The only control variable to reach statistical significance is organizational communication 2. However, the relationship is so slight that it does not seem relevant to the attitudes public managers hold towards performance reforms. Across the two models and data sets, there are no significant differences in results, and this finding also holds when the production-organization classification is modified.
Table 5.15 OLS Regression Models for Attitudes Towards Performance Management Reforms (Pooled Data)
                                  Including LAPD & LAFD      Excluding LAPD & LAFD
                                  as Production Orgs.        as Production Orgs.
                                  b          Std. Error      b          Std. Error
Public Service Motivation         0.38***    0.08            0.38***    0.09
Training in Performance Info.     -0.11      0.09            -0.11      0.09
Wilson Type - Production Org.     0.06       0.07            0.05       0.08
Org. Communication 1              0.07*      0.04            0.07*      0.04
Org. Communication 2              0.03***    0.01            0.03***    0.01
Org. Impediments to PM            -0.06      0.05            -0.06      0.05
Length of Use                     0.01       0.02            -0.01      0.02
# of Employees Supervised         0.01       0.01            0.01       0.01
Years at the City of LA           0.01       0.01            0.01       0.01
Highest Level of Education        -0.02      0.04            -0.03      0.04
Time Effect                       0.05       0.08            0.05       0.08
Constant                          2.01***    0.46            2.00***    0.46
R-Square                          0.16                       0.16
N                                 503                        503
Significant at: *10%, **5%, ***1%
Table 5.16 Random-Effects Regression Models for Attitudes Towards Performance Management Reforms (Panel Data)
                                  Including LAPD & LAFD      Excluding LAPD & LAFD
                                  as Production Orgs.        as Production Orgs.
                                  b          Std. Error      b          Std. Error
Public Service Motivation         0.45***    0.12            0.45***    0.12
Training in Performance Info.     0.01       0.12            0.01       0.12
Wilson Type - Production Org.     0.04       0.11            0.03       0.11
Org. Communication 1              0.07       0.05            0.07       0.05
Org. Communication 2              0.02*      0.01            0.02*      0.01
Org. Impediments to PM            -0.09      0.07            -0.09      0.07
Length of Use                     -0.02      0.02            -0.02      0.02
# of Employees Supervised         0.01       0.01            0.01       0.01
Years at the City of LA           0.01       0.01            0.01       0.01
Highest Level of Education        -0.05      0.01            -0.05      0.01
Constant                          1.87***    0.64            1.88***    0.65
R-Square                          0.19                       0.21
N                                 284                        284
Significant at: *10%, **5%, ***1%
5.2.3 Individual Usage of Performance Information
The third set of regression models, which examines individual usage of performance information by Los Angeles public managers, marks the first divergence in results between the pooled data in the OLS model and the panel data in the random-effects model. In the OLS model, only public service motivation is moderately and positively associated with individual usage of performance information. In the random-effects model, all three independent variables have a statistically significant impact, and the coefficient on public service motivation is roughly twice as large as in the OLS model. This is also the first model in which training and working in a production organization show a moderate, positive impact. At least in the random-effects model, all three independent variables appear important to performance management reforms.
Of the organizational control variables, two of the three are statistically significant in the OLS model with pooled data, while all three are statistically significant in the random-effects model. Organizational communication 1 is most prominent in the OLS model and shows a positive impact on individual usage of performance information, while organizational impediments have a small negative impact, as expected. In the random-effects model, each of the three organizational control variables has a small but statistically significant impact in the expected direction. Overall, the random-effects model with the panel data connects the largest number of independent and control variables to the dependent variable of individual usage of performance information.
The only difference observed when the composition of production organizations is modified appears in the random-effects model, where the positive effect of being a production organization becomes slightly more pronounced. The difference in the statistical significance of all three independent variables between the random-effects model and the OLS model is likely due to an unobserved population difference or another unobserved effect between the models.
Table 5.17 OLS Regression Models for Individual Usage of Performance Information (Pooled Data)
                                  Including LAPD & LAFD      Excluding LAPD & LAFD
                                  as Production Orgs.        as Production Orgs.
                                  b          Std. Error      b          Std. Error
Public Service Motivation         0.21***    0.08            0.22***    0.09
Training in Performance Info.     0.10       0.09            0.10       0.09
Wilson Type - Production Org.     0.01       0.07            0.08       0.08
Org. Communication 1              0.30***    0.04            0.30***    0.04
Org. Communication 2              0.01       0.01            0.02       0.01
Org. Impediments to PM            -0.10**    0.05            -0.10**    0.05
Length of Use                     0.05**     0.02            0.04**     0.02
# of Employees Supervised         0.01***    0.01            0.01***    0.01
Years at the City of LA           0.02*      0.01            0.01*      0.01
Highest Level of Education        0.01       0.04            0.01       0.04
Time Effect                       0.10       0.07            0.10       0.07
Constant                          0.83***    0.45            0.79***    0.46
R-Square                          0.32                       0.33
N                                 502                        502
Significant at: *10%, **5%, ***1%
Table 5.18 Random-Effects Regression Models for Individual Usage of Performance Information (Panel Data)
                                  Including LAPD & LAFD      Excluding LAPD & LAFD
                                  as Production Orgs.        as Production Orgs.
                                  b          Std. Error      b          Std. Error
Public Service Motivation         0.45***    0.12            0.44***    0.12
Training in Performance Info.     0.21***    0.12            0.19*      0.12
Wilson Type - Production Org.     0.27***    0.11            0.31***    0.11
Org. Communication 1              0.16***    0.05            0.16***    0.05
Org. Communication 2              0.04***    0.01            0.04***    0.01
Org. Impediments to PM            -0.13*     0.07            -0.12*     0.07
Length of Use                     0.04       0.02            0.04       0.02
# of Employees Supervised         0.01       0.01            0.01       0.01
Years at the City of LA           0.01       0.01            0.01       0.01
Highest Level of Education        -0.02      0.06            -0.03      0.06
Constant                          0.65**     0.61            0.66**     0.63
R-Square                          0.36                       0.37
N                                 283                        283
Significant at: *10%, **5%, ***1%
5.2.4 Organizational Usage of Performance Information
The results for organizational usage of performance information constitute the first model in which public service motivation is not significant in any iteration of the two regressions. Instead, training in performance information usage is the independent variable with a statistically significant positive impact across both models and both iterations based on the Wilson typology. The relationship of training to organizational usage of performance information is marginally stronger in both versions of the random-effects model with panel data than in the OLS model with pooled data.
These models also show the first clear differentiation between including and excluding police and fire as production organizations. In the models that include police and fire as production organizations, no statistical effect is observed. However, when these two departments are excluded from the typology, a weak but statistically significant positive effect on organizational usage of performance information appears. The effect is more pronounced in the panel data with random effects than in the pooled data with OLS. As such, the classification of the LAPD and the LAFD appears to be a significant factor in how organizational-level performance management occurs.
Both the organizational communication 1 variable and the organizational impediments variable are statistically significant across all versions of the models in relation to organizational usage of performance information. The positive effect of organizational communication 1 is slightly more pronounced in the OLS model than in the random-effects model. In contrast, the expected negative effect of organizational impediments is more pronounced in the random-effects model with panel data than in the OLS model with pooled data. For both variables, the differences across the models are slight.
Table 5.19 OLS Regression Models for Organizational Usage of Performance Information (Pooled Data)
                                  Including LAPD & LAFD      Excluding LAPD & LAFD
                                  as Production Orgs.        as Production Orgs.
                                  b          Std. Error      b          Std. Error
Public Service Motivation         -0.03      0.07            -0.02      0.08
Training in Performance Info.     0.24***    0.08            0.24***    0.08
Wilson Type - Production Org.     -0.07      0.07            0.10*      0.07
Org. Communication 1              0.31***    0.03            0.31***    0.04
Org. Communication 2              0.01       0.01            0.01       0.01
Org. Impediments to PM            -0.33***   0.05            -0.33***   0.05
Length of Use                     0.06***    0.02            0.06***    0.02
# of Employees Supervised         0.01***    0.01            0.01***    0.01
Years at the City of LA           0.01**     0.01            0.01**     0.01
Highest Level of Education        0.01       0.04            0.01       0.04
Time Effect                       0.04       0.07            0.04       0.07
Constant                          2.46***    0.41            2.35***    0.41
R-Square                          0.44                       0.46
N                                 504                        504
Significant at: *10%, **5%, ***1%
Table 5.20 Random-Effects Regression Models for Organizational Usage of Performance Information (Panel Data)
                                  Including LAPD & LAFD      Excluding LAPD & LAFD
                                  as Production Orgs.        as Production Orgs.
                                  b          Std. Error      b          Std. Error
Public Service Motivation         0.06       0.11            0.06       0.11
Training in Performance Info.     0.29***    0.10            0.29***    0.10
Wilson Type - Production Org.     0.05       0.10            0.18**     0.09
Org. Communication 1              0.26***    0.05            0.26***    0.05
Org. Communication 2              0.01       0.01            0.01       0.01
Org. Impediments to PM            -0.38***   0.06            -0.37***   0.06
Length of Use                     0.04       0.02            0.04*      0.02
# of Employees Supervised         0.01**     0.01            0.01**     0.01
Years at the City of LA           0.01       0.01            0.01       0.01
Highest Level of Education        -0.03      0.05            -0.03      0.05
Constant                          2.75***    0.58            2.66***    0.58
R-Square                          0.51                       0.53
N                                 284                        284
Significant at: *10%, **5%, ***1%
5.2.5 Proficiency in Using Performance Information
For the final dependent variable, proficiency in using performance information, only a handful of independent or control variables have any statistical effect across the observed models. Once again, public service motivation has a positive effect on the dependent variable. This impact is moderate in the OLS model and fairly weak in the random-effects model, with no significant difference when production organizations are reclassified. However, in the OLS model with pooled data, excluding the LAPD and LAFD as production organizations results in production organizations having a small, statistically significant positive effect.
None of the organizational control variables show any relevant impact on proficiency in using performance information for public managers in Los Angeles. Of the individual control variables, highest level of education is statistically significant in the observed models: a higher level of education has a positive effect on proficiency in using performance information, an intuitive feature of the models. The effect is slightly more pronounced in the pooled data within the OLS model than in the panel data within the random-effects model. Overall, the models in this section have less explanatory power than the previous four sets of models.
Table 5.21 OLS Regression Models for Proficiency in Using Performance Information (Pooled Data)
                                  Including LAPD & LAFD      Excluding LAPD & LAFD
                                  as Production Orgs.        as Production Orgs.
                                  b          Std. Error      b          Std. Error
Public Service Motivation         0.26***    0.07            0.27***    0.07
Training in Performance Info.     0.05       0.08            0.05       0.08
Wilson Type - Production Org.     0.06       0.06            0.12*      0.06
Org. Communication 1              0.05       0.03            0.05       0.03
Org. Communication 2              0.02*      0.01            0.02**     0.01
Org. Impediments to PM            -0.03      0.04            -0.03      0.04
Length of Use                     0.03       0.01            0.02       0.02
# of Employees Supervised         0.01       0.01            0.01       0.01
Years at the City of LA           0.01       0.01            0.01       0.01
Highest Level of Education        0.17***    0.03            0.17***    0.03
Time Effect                       -0.02      0.06            -0.02      0.06
Constant                          1.46***    0.38            1.40***    0.39
R-Square                          0.14                       0.15
N                                 505                        505
Significant at: *10%, **5%, ***1%
Table 5.22 Random-Effects Regression Models for Proficiency in Using Performance Information (Panel Data)
                                  Including LAPD & LAFD      Excluding LAPD & LAFD
                                  as Production Orgs.        as Production Orgs.
                                  b          Std. Error      b          Std. Error
Public Service Motivation         0.17*      0.10            0.17*      0.10
Training in Performance Info.     -0.04      0.09            -0.04      0.09
Wilson Type - Production Org.     0.02       0.10            0.07       0.10
Org. Communication 1              0.01       0.04            0.01       0.04
Org. Communication 2              0.04***    0.01            0.04***    0.01
Org. Impediments to PM            -0.11**    0.05            -0.11**    0.05
Length of Use                     0.02       0.02            0.02       0.02
# of Employees Supervised         0.01       0.01            0.01       0.01
Years at the City of LA           0.01       0.01            0.01       0.01
Highest Level of Education        0.10**     0.05            0.10**     0.05
Constant                          1.99***    0.55            1.96***    0.55
R-Square                          0.17                       0.17
N                                 285                        285
Significant at: *10%, **5%, ***1%
5.3 Discussion
In the previous sections, the impact of a number of independent and control variables on the attitudes and actions of public managers in Los Angeles was discussed. These findings are connected to those of the other analytic chapters in the concluding chapter of this dissertation. This discussion section considers the findings of this chapter under the rubric of the three overarching research questions of the dissertation. The findings offer several valuable insights into successful performance management reforms.
5.3.1A How is success defined when it comes to performance management
reforms?
This chapter considered five variables related to how public workers interact within the
context of performance management systems. These variables ranged from initial awareness of
reforms all the way to the proficiency of public managers in using performance measures. Each
of these five variables represents a different aspect of reforms, and in turn, may also represent
different facets of success for a performance management system. Is a manager’s attitude
towards reforms more important than her actions or proficiency? Is there a combination of these
factors that better define success? These are important questions that cannot necessarily be
answered by this research but are important to investigate moving forward. For now, it is
important for researchers and practitioners to continue to consider a range of success measures.
This chapter can, however, provide answers about which one of these five outcomes for
public managers related to reforms can be influenced to improve them. In the previously
discussed results, we can see that individual usage of performance information is the outcome
that is most influenced by a range of factors. This gives organizations who are striving for
success that is defined by actively using performance measures good news in that they can train,
motivate, and design structures to increase the adoption of performance measures in the active
194
work of public workers. On the flip side, the challenge appears to be that improving the
proficiency of performance information used by public managers is more challenging, and
requires further investigation. Finally, many forms of success are reliant on the inherent public
service character of workers, and this is something that needs to be considered when defining
success for performance management systems.
5.3.1B What factors are central to successful performance management systems
and in what combination?
This chapter examined slightly different factors contributing to performance management reforms than the previous two analytic chapters. In addition, because of the format of regression analysis, it is not possible to discuss explicitly how combinations of independent and control variables lead to successful outcomes for performance management reforms. However, some possible combinations can be considered implicitly, based on the literature and the logic of the dependent variables observed. Key findings on factors for success include:
• Public service motivation is clearly an important factor for how individual public workers
fare when it comes to performance management systems.
• Training and Wilson typology have a much smaller overall impact on success and are only somewhat relevant for a handful of the ways public managers engage with reforms.
• Organizational factors, both negative and positive, continue to play a role in how individual public managers, and by extension institutions as a whole, function in the course of performance management reforms.
Based on the models in this chapter, the impact of public service motivation on public managers during performance management reforms is substantial. Public service motivation has a positive connection to every aspect of how individual managers interact with performance management systems: awareness, attitudes, usage, and proficiency are all positively correlated with it. These results both add to and extend the existing literature on public service motivation and performance management. They build upon work by Moynihan and Pandey (2007; 2010) indicating that public service motivation affects performance information use, and they extend the literature with the new finding that public service motivation also shapes managerial attitudes towards reforms and is positively related to proficiency in using performance metrics.
The results for the impact of training are both expected and surprising. Training has a positive impact on both individual and organizational usage of performance information, which is expected (Kroll & Moynihan, 2015), though the exact mechanism remains unclear. However, the findings here dispute previous work suggesting that training would improve public managers' efficiency and proficiency with performance information (Cavalluzzo & Ittner, 2003; Julnes & Holzer, 2001). The measures employed in this study may be more accurate, as they ask about specific performance measurement tasks rather than the perceptions of proficiency used in previous work (Cavalluzzo & Ittner, 2003; Julnes & Holzer, 2001). The fact that training has no connection to awareness of or attitudes towards reforms is a puzzling finding, and one that should concern organizations investing in training in the hope of better connecting managers to performance management reforms.
The Wilson typology of the department in which an individual public manager works has a limited impact on their connection to performance reforms. It makes logical sense that, of all the dependent variables covered in this analysis, being a production organization generally has a discernible impact only on individual and organizational usage of performance information. It is perhaps surprising, though, that these organizations do not also show greater awareness of reforms, as their metrics should be visible to the staff who work there. Not surprisingly, organizational features such as a lack of data systems, poor analytic capacity, and rigid bureaucratic structures inhibit reforms at the level of individual public managers. This comports with much of the literature on reforms (Ammons & Rivenbark, 2008; Behn, 2005; Moynihan, 2006), while also extending it by adding additional impediments to consider, such as the high time cost of gathering data and a lack of consistency in performance measures.
Finally, in considering combinations of factors that lead to success, this chapter cannot make definitive statements due to the nature of the methodology. It is safe to say, however, that getting public managers to use performance information is the area of reform with the largest number of factors that can contribute to success, as the results in this chapter show; usage of performance information is the area best covered by the models here. This is similar to past research in some ways, and future work should further examine the dependent variables laid out here, in some cases for the first time, to better understand how they can be influenced for successful reform implementation.
5.3.1C How do organizations overcome obstacles that arise while implementing
performance management reforms?
This chapter established that impediments to success are a clearly measurable factor in performance reforms. As to how to overcome these impediments, this chapter has less to add than the previous analytic chapters. It is clear, however, that public service motivation can implicitly help in overcoming obstacles to reforms, based on the findings of this chapter, while the potential for using training to overcome obstacles appears less promising. This is supported by earlier qualitative evidence from Back to Basics in Los Angeles, where training programs were canceled after only a short time in place. While this evidence is far from definitive, the use of training for performance management systems should be questioned and explored further to ensure it can actually play a positive role in reforms.
Chapter Six: Conclusion
6.1 Introduction
Across the three analytic chapters of this dissertation, a number of findings were developed using complementary research approaches to study performance management reforms. Specifically, this dissertation examined definitions of success for performance management implementation, success factors and how they might combine to push reforms forward, and paths for overcoming obstacles to performance management success. In this final chapter, the findings of the analytic chapters are examined together to draw broader conclusions from this three-year examination of performance management reforms in the City of Los Angeles. The implications of these findings for both research and practice of performance management systems are also discussed. Finally, the limitations of this study are considered, before a closing section on where future research should move forward from this dissertation.
However, before jumping into each of those topics, it seems appropriate to consider the
Back to Basics reform within the City of Los Angeles as a whole. The purpose of this
dissertation was not to judge whether or not the Back to Basics agenda for Los Angeles was an
overall success or failure. Rather, the purpose of this dissertation was to study performance
management success factors and obstacles to success through the lens of the performance
management system being implemented within Los Angeles. Yet, after three years of research
into performance management within the City, a comment or two on the overall Back to Basics
effort seems warranted.
Over these three years of observing, interviewing, and collecting data on performance
management reforms in Los Angeles, it is possible to conclude that Los Angeles made
significant progress in implementing reforms. This is not to say that there were not enormous
challenges or frequent setbacks for the City as a whole and for departments in particular, but as
observed through the case-study research, four of the six departments studied demonstrated at
least moderate success with performance management. A number of the descriptive metrics
collected during the research also pointed to a range of successes. While Los Angeles’ possible
success cannot be confirmed in a sweeping manner through the empirical research conducted in
this dissertation, many indicators point to pockets of success. And, as a final comment, it should
be noted that in 2018, about a year after research for this dissertation concluded, the City of Los
Angeles was awarded the first gold-level certification from Bloomberg Philanthropies' What
Works Cities program, which was designed to be "the national standard of excellence for
well-managed, data-driven local government" (Bloomberg, 2018).
The next section provides a final consideration of the research questions examined by this
dissertation. Findings related to each of the three overarching research questions in this
dissertation were explored in-depth in the three analytic chapters. Here the findings are briefly
summarized, and then connections are drawn between the findings explored in the preceding
chapters. This approach demonstrates the strength of the mixed-methods design through
cross-validated findings.
6.1.1 How is success defined when it comes to performance management
reforms?
In Chapter Three, through the examination of the case-study departments, there were two
findings about how to define performance management success. First, that success can be
defined differently over time as performance reforms evolve, and that patience is warranted in
the course of this evolution, allowing for gradual or long-term improvement. Second, that
performance management systems should be judged on the dual standard of both process
improvements and active decision-making with data. Ultimately these two measures of success
reinforce each other.
Chapter Four also considered multiple definitions of success. One measure was more
process-based, while the second measure was more concerned with active decision-making
utilizing data. However, with many causal recipes for success overlapping for both measures the
chapter demonstrated once again that both process and decision-making standards of success
should be employed.
Chapter Five examined five connected measures that mostly focused on how individuals
can be successful during performance management reforms. These measures ranged from
awareness of reform efforts to usage of and proficiency with performance information. While the
way success is defined in Chapter Five is not directly applicable to Chapter Three and Chapter
Four, there is still a consideration of multiple measures in defining success.
Across all three analytic chapters, a key finding is the value of employing multiple measures to
define success for performance management systems. Between Chapter Three and Chapter Four
specifically, there is consensus that both process-based measures of success and measures of
data-driven decision-making should be employed when considering the success of a performance
management system. Both chapters found multiple measures of this sort as valid in examining
success and generally that both types of measures relate to similar paths to success. This cross-
validation of success measures is an important finding that will allow more effective study of
performance management reforms moving forward.
6.1.2 What factors are central to successful performance management
systems and in what combination?
In Chapter Three, a number of factors from the literature were confirmed to have some
value in promoting successful performance management reforms. Of these factors, leadership
and organizational culture were identified as being central in successful reforms, along with
organizational size. In contrast, financial resources and data systems were found to be less
critical to the success of performance reforms. As for combinations of factors that lead to
success, the combination of strategic planning, good metrics, strong leadership, and employee
buy-in were displayed in the four case-study departments that had varying levels of success
during Back to Basics.
Using fuzzy set qualitative comparative analysis (fsQCA), Chapter Four looked at causal
configurations for successful performance management implementation. From this analysis,
strong leadership and large organizational size were identified as core conditions in the most
common configuration that led to success. In many cases, the addition of good metrics as a core
condition was also found in combination with strong leadership and large organizations. At times
good metrics alone featured as a core condition that led to success. Other factors such as strategic
planning, political attention, and innovative culture were identified as peripheral factors
contributing to success.
Chapter Five once again considered a different aspect of success factors for performance
management reforms, by looking at individual public managers. Public service motivation was
found to impact a range of behaviors for public managers in the course of performance
management reforms. In addition, organizational factors, both negative and positive, have a
strong connection to how public workers act during reforms.
Success factors found across several chapters in this dissertation include leadership, good
metrics, organizational size, strategic planning, and organizational culture. This cross-validation
lends confidence that success factors in Los Angeles, and more broadly for performance
management reforms, have been correctly identified. Additionally, the combination of strong
leadership, organizational size, and good metrics was found in both Chapter Three and Chapter
Four as a recipe for success. Additional peripheral factors were also found in both chapters
connected to this combination. Finally, sizable financial resources were not found to be a critical
success factor in either Chapter Three or Chapter Four.
6.1.3 How do organizations overcome obstacles that arise while implementing
performance management reforms?
Of the three research questions in this dissertation, the question of how to overcome obstacles
to successful performance management reforms might have been the most difficult to answer.
Nevertheless, a handful of findings relate to this question, especially from Chapter Three and
Chapter Four. Chapter Three provided the critical finding that one must consider which
impediments to performance management systems can realistically be overcome. Lack of data
systems and lack of financial resources were both identified as obstacles overcome in the course
of the Back to Basics reforms. In turn, it was demonstrated that strong leadership and employee
culture appear to help overcome obstacles to reforms. In Chapter Four, a lack of financial
resources was also found to be an impediment that could be overcome, with strong leadership as
a factor that could potentially help to overcome this obstacle. Chapter Five did not contribute to
the cross-validation of efforts to overcome obstacles to performance reforms. Still, it is an
interesting and useful finding that strong leadership can overcome financial obstacles and
potentially other impediments to success. Strong leadership features across the research in this
dissertation and points to the potential for overcoming other obstacles not explicitly captured in
the findings.
6.2 Implications for Theory
In the analytic chapters of this dissertation, many of the findings have been discussed in
relation to the previous literature on performance management systems, and also how certain
findings push forward the study of performance management reforms. This section does not
repeat all these findings and their connection to the study of performance reforms. Only the key
findings, including those that were cross-validated in the previous sections of this chapter are
discussed here.
The first way this dissertation advances the study of performance management is
methodological. Both the particular mixed-methods approach of this dissertation and the use of
fsQCA are new to the study of performance management and related concepts of success.
While there have been a handful of mixed-methods approaches
to studying reforms (Moynihan, 2013; Soss, Fording, & Schram, 2001), none have employed
three different methodologies to attempt to thoroughly cross-validate results. By utilizing mixed-
methods, this dissertation brings a holistic perspective to studying performance management
reforms.
With this being the first study on performance management to use fsQCA, this
dissertation is the first to bring a causal configurational analysis to the study of performance
management success. fsQCA has been utilized to study organizational performance in other
contexts (Andrews et al., 2015; Fiss, 2011; Frambach et al., 2016) and is well suited to studying
the complex and configurational nature of performance reforms. Through this methodology, this
dissertation pushes the field to consider causal combinations of factors that lead to
well-functioning performance management systems. The results from the fsQCA analysis are the first
step in hopefully spreading the use of this methodology to more public organizational questions
related to performance management.
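To make the configurational logic concrete, the sketch below computes Ragin's standard consistency and coverage scores for a hypothetical causal recipe (strong leadership AND large organizational size) against a fuzzy-set outcome. The membership scores are invented purely for illustration and are not drawn from the dissertation's data.

```python
# Illustrative sketch of the core fsQCA consistency/coverage calculation
# (Ragin's standard formulas). All membership scores below are
# hypothetical, not data from this dissertation.

# Fuzzy membership of six hypothetical departments in two conditions
# and in the outcome (successful performance management).
leadership = [0.9, 0.8, 0.3, 0.6, 0.2, 0.7]
large_size = [0.8, 0.9, 0.4, 0.7, 0.1, 0.6]
success    = [0.9, 0.8, 0.2, 0.7, 0.3, 0.6]

# Membership in the configuration "leadership AND large_size" is the
# minimum of the two condition memberships (fuzzy intersection).
config = [min(l, s) for l, s in zip(leadership, large_size)]

# Consistency: how reliably configuration membership implies outcome
# membership. Coverage: how much of the outcome the recipe accounts for.
overlap = sum(min(c, y) for c, y in zip(config, success))
consistency = overlap / sum(config)
coverage = overlap / sum(success)

print(f"consistency = {consistency:.2f}, coverage = {coverage:.2f}")
# prints: consistency = 0.97, coverage = 0.89
```

In applied fsQCA work, a configuration with consistency near or above 0.9 would typically be read as a reliable recipe for the outcome, while coverage indicates how many of the successful cases the recipe accounts for; actual fsQCA software adds calibration and truth-table minimization on top of this basic arithmetic.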
At the theoretical level, two things differentiate this dissertation from previous literature
in the field. First, this is the first study to attempt to examine performance management reforms
across all three organizational levels laid out by Wilson (1989) and Scott (1998). While the
results in this study were challenging to link across the institutional, managerial, and operational
levels of Los Angeles, the attempt to do so pushes the thinking of how to study performance
management reforms. As with the reason to employ fsQCA, trying to study multiple
organizational levels of reform attempts to work through the complex nature of performance
management reforms.
Second, the holistic, measured, and balanced approach to studying performance
management reforms sets this dissertation apart from much of the previous literature. While not
wholly successful, this dissertation did work to cover most of the success factors and
impediments to reform empirically across the three chapters. Additionally, the theoretical setup
for the empirical analysis balanced skepticism of performance management reforms with a
measured attempt to identify genuine success with respect to performance reforms. The push to
use multiple measures to define successful reforms across the same study is also novel. This
study encourages multiple and overlapping conceptions of performance management success for
the field. While ambitious, these theoretical approaches to studying performance management
seem necessary to gain a greater understanding of the multiple factors that impact reforms.
Finally, the results in this dissertation validate knowledge in the field of performance
management, while also pushing it in new directions. Leadership has been heavily theorized as
an important factor for successful performance management reforms (Behn, 2003; 2014; May &
Winter, 2007; Sanger, 2008) and empirically studied just as much (Moynihan & Ingraham, 2004;
Bourdeaux & Chikoto, 2008; Dull, 2009; Moynihan & Lavertu, 2012). This study adds to the
literature by again confirming the value of leadership to performance reforms. It pushes
knowledge forward by demonstrating the importance of leadership through multiple methods
in the same research setting. It also breaks ground by demonstrating specific ways that leadership
can overcome certain impediments to reforms.
The importance of organizational size to the potential success of performance
management reforms has generally been understudied (Nielsen, 2013). This dissertation
demonstrates through multiple results in Chapter Three and Chapter Four that being a large
organization is a critical success factor for reforms. Organizational size should potentially be
considered as a more central factor to the success of reforms than previously thought, possibly as
a result of the additional analytical staff resources this large size can help provide to a
performance management system.
A handful of researchers have looked at the impact of public service motivation on
performance management reforms (Moynihan & Pandey, 2007; 2010). This dissertation confirms
the results in these studies and adds additional context not previously discovered. The results
from Chapter Five confirm previous work on the positive impact of public service motivation on
public managers’ usage of performance information (Moynihan & Pandey, 2007; 2010). The
research in Chapter Five also concludes that other characteristics of public managers in the
course of performance reforms are impacted by public service motivation, such as their
awareness of reforms, their attitudes towards reforms, and their proficiency with performance
measures.
Lastly, and possibly most consequentially, this dissertation breaks ground by being the
first to discover causal combinations of factors that can lead to successful performance
management reforms. Through the findings in Chapter Three and Chapter Four, it becomes
apparent that strong leadership, good metrics, and organizational size are a causal combination of
factors that can lead to successful reforms. Strategic planning and organizational culture can also
feature as peripheral factors in this combination. This finding encourages the field to look at
other possible combinations that lead to performance management success. The findings
discussed here along with others scattered throughout the chapters in this dissertation broaden
and deepen the study of performance management systems.
6.3 Implications for Practice
Part of the purpose of approaching this dissertation from a holistic, multi-method study of
performance management reforms was to provide public organizations and their managers with
better ways of approaching performance management reforms that are more likely to lead to
successful outcomes. While it is unwise to generalize new practices from one study, key findings
that public organizations and public managers may want to consider when implementing
performance management systems are summarized here:
• Putting in place strong leaders in key positions within public organizations undertaking
performance management reforms can have a robust impact on potential success and in
overcoming obstacles.
• Leaders and other public workers who are more inclined to demonstrate public service
traits may thrive within performance management systems and assist in pushing them
forward.
• Understanding which obstacles can be potentially overcome (poor data systems, lack of
financial resources) as compared to those that are harder to ameliorate (lack of leadership,
small organizational size) during performance reforms is important.
• It takes time to change the organizational culture so that staff is more attuned to
performance management principles. Patience is critical during this process.
• The use of multiple definitions of success will help public organizations to understand the
progress of performance management reforms better.
6.4 Limitations of the Study
While the strengths of this study (holistic theoretical approach, mixed-methods, etc.) have
been discussed in multiple sections of this dissertation, it is also essential to consider the
limitations of this dissertation. First, while this dissertation uncovers a number of significant
findings about performance management systems and uses multiple methods, the results come
from only one local government organization. While the findings
from this dissertation may generally be revealing for the field of performance management, they
are not necessarily explicitly applicable to other public organizations undertaking performance
management reforms. Replication of these findings within other local governments and at
additional levels of government is an important next step for the field.
With respect to the data collected for this dissertation, there are several potential
concerns. The survey data may suffer from response bias and positive perception bias.
Response bias is potentially baked into the survey in
two ways. First, because the sample was selected in conjunction with the City of Los Angeles,
it may be biased, as it leaves out public workers who were not picked by the
City. Second, those City workers who chose to respond to the survey after it was sent may have
bias related to performance management reforms that caused them to respond. The large sample
for the survey offsets these biases to some degree but does not entirely eliminate them. Positive
perception bias (Brest & Krieger, 2010) refers to the tendency of survey respondents to
represent their own actions, or those of their organization, more positively. As such, the
findings in this dissertation may be more positive than reality.
With respect to the interviews and meeting observations, positive perception bias is also a
possibility. Even though those interviewed were guaranteed anonymity, this may not completely
eliminate subjects' desire to come across more positively. In addition, by observing meetings,
the researchers may have altered normal behavior in those meetings through their mere
presence.
With respect to the methodologies utilized in this dissertation, there are also several
limitations to consider. Across all three methodologies, there is no pre-treatment observation of
performance management reforms within Los Angeles. Though the original plan for this study
was to begin research prior to the start of Back to Basics implementation, this did not happen in
practice. As such, there may have been important unobserved effects prior to the start of Back
to Basics and in the early months of reforms. With respect to the case-studies, the
departments were mostly picked in conjunction with the leadership of the City. A different set of
case-study departments may have yielded different outcomes. The regression models employed
in this dissertation may also be subject to directionality (reverse causality) bias. For example,
the study posits that public service motivation leads to greater uptake of performance
management reforms, but it is also conceivable that a higher-performing workplace makes
public managers feel better about participating in a public service career.
While both the fsQCA and multivariate regression analysis cover two time periods, this
may not be the best representation of time for the study. The initial plan for the research was to
conduct three or four survey waves to better account for change over time. Still, two periods
may be superior to a single cross-sectional survey. Even with a number of statistical tests that
generally validate the survey data, the regression analysis may still suffer from common source
bias, as only survey data is used in that chapter. Despite these possible limitations, this study
makes a valuable contribution to the literature and moves the field of performance management
in new directions.
6.5 Suggestions for Future Research
Outside of taking different approaches to the research that could ameliorate some of the
limitations laid out in the previous sections, there are other areas of future research suggested by
this dissertation.
With the research in this dissertation putting together the first set of causal recipes for
performance management reform success, future research should explore the use of fsQCA
across a number of other performance management systems to confirm the recipes from this
dissertation and potentially discover new ones. Thinking about performance management in
more explicitly configurational terms would continue to push the field forward. As this was an
exploratory study with fsQCA, future studies could build hypotheses about causal
configurations and then test them using fsQCA.
Initially, this research endeavored to conduct a controlled experiment on the training
programs utilized by the City during Back to Basics. While controlled experiments are
generally challenging to undertake in public administration and public management research,
using one for a small performance management training program seemed feasible. Due to
concerns from the City and the short duration of the training programs in Los Angeles before
they were canceled, this part of the research never took place. With the findings about training
in this dissertation being fairly negligible, a controlled study of performance management
training would be a valuable addition to the field.
Finally, this research illuminated the difficulty of studying how to overcome obstacles to
performance management implementation success. Relatively speaking, it was more
straightforward to study success factors in the course of this dissertation research. This is
reflected in the findings which are more heavily tilted towards revealing success factors as
compared to showing how to overcome impediments. Building research that can study this
complicated area more thoroughly would be a valuable addition to the research started in this
dissertation.
Appendix A: Los Angeles Performance Management Survey
4/22/2019 Qualtrics Survey Software
https://usc.ca1.qualtrics.com/Q/EditSection/Blocks/Ajax/GetSurveyPrintPreview 1/12
Los Angeles Performance Management Survey
Block 1
.
This is a follow-up survey to one conducted in the Spring of 2015. You may or may not have participated in the
original survey. Either way, you are invited to participate in this current survey. The research will only include
people who voluntarily choose to take part.
As part of Mayor Garcetti’s “Back to Basics” program, he has requested that all departments plan for and
implement a performance management system similar to the COMPSTAT system employed by the LAPD. These
systems need to demonstrate clear usage of data and analysis to produce significant results in alignment with
priority outcomes. The Innovation and Performance Management Unit (iPMU) at the Mayor’s Office is working
with a team of researchers from the Price School of Public Policy at the University of Southern California to
evaluate the progress towards these goals.
If you agree to take part, you can begin by clicking on the next button (>>) at the bottom right-hand corner of this
page. The survey takes about 20 minutes to complete. You do not have to answer any questions you do not
want to. Simply click “Continue without Answering” when the survey reminds you that a question has not been
answered.
CONFIDENTIALITY: All respondents to this survey are assured COMPLETE CONFIDENTIALITY. The survey
is intended to assess system level improvements associated with performance management and will not be used
to evaluate individual employees. The survey results will be collected and analyzed by the USC team. They will
strip all identifying information from survey responses and maintain the data on a secure, password protected
server. Only aggregate statistics from the survey will be shared with the City and published in reports.
INVESTIGATOR CONTACT INFORMATION: If you have any questions concerning issues of confidentiality or
about the research project, feel free to contact the project's Co-Principal Investigator, Robert W. Jackman,
Research Fellow, at rjackman@usc.edu or 650-776-9664. You may also contact the USC Institutional Review
Board that monitors all collection of data by USC researchers. They can be reached at 213-821-5272 or
upirb@usc.edu.
By clicking on the next button (>>), you are providing your consent to participate in this study.
USC UPIRB #UP1400120
Block 2
. We will begin with a few questions about you and your work with the City of Los Angeles.
Q1.
How many years have you worked at your current position?
Q2. How many years have you worked for the City of Los Angeles in total?
Male
Female
Less than a high school diploma
High school diploma
Associate degree
Bachelor's degree
Master's degree
Ph.D.
Biological / life sciences (agriculture, biology, biochemistry, botany, etc.)
Business (accounting, business administration, marketing, management, etc.)
Communication (speech, journalism, television/radio, etc.)
Computer and information sciences
Education
Engineering
Health-related fields (nursing, physical therapy, health technology)
Humanities (English, literature, philosophy, religion, foreign languages)
Liberal / General studies
Mathematics
Parks, recreation, leisure studies, sports management
Physical sciences
Pre-professional (pre-dental, pre-medical, pre-veterinary)
Public administration (city management, urban planning, public policy, law enforcement, etc.)
Social sciences (anthropology, economics, interdisciplinary studies, international relations, political sciences, psychology,
sociology)
Visual and performing arts
Other:
Q3. In your role, approximately how many employees and outside contractors do you supervise?
Q4. What is your gender?
Q5.
What is the highest level of education that you completed?
Q5b. Which of these fields best describes your major or field of study?
Q6. Please indicate the degree to which you agree or disagree with the following statements.
Disagree Strongly / Disagree / Neutral / Agree / Agree Strongly
Meaningful public service is
very important to me.
I am often reminded by daily
events about how dependent
we are on one another.
Making a difference in society
means more to me than
personal achievements.
I am willing to take risks on the
job to get things done.
I am not afraid to go to bat for
the rights of others even if it
means I will be ridiculed.
I am prepared to make
sacrifices for the good of
society.
Closely following organizational
procedures is the best way to
get things done for my job.
Q7. In the last year, how often have you personally performed the following tasks at work?
Almost every day / A few times a week / A few times a month / A few times a year / Never / Don't Know
Read and interpreted a report
or memo containing
quantitative information such as
budget numbers, survey
results, or performance
measures.
Interpreted and analyzed raw
data such as a survey or
budget numbers.
Created a graphical display
(e.g. pie chart, bar chart, line
graph, etc.) to summarize data.
Performed a statistical analysis
to test a hypothesis about
some data.
Q8. How would you rate your current level of proficiency in each of the following skills?
None / Basic / Intermediate / Advanced / Expert
Reading and understanding
quantitative information
provided to you by others.
Analyzing raw data such as a
survey results or budget
numbers.
Creating graphics to
summarize data.
Performing statistical analyses.
Q9. Please answer questions A and B for each of the following tasks.
Yes
No
Do not know
Question A: Has the City of Los Angeles ever provided you with training to accomplish the following tasks? (Yes / No)
Question B: In your opinion to what extent, if at all, would you need training (or additional training) in order to help you accomplish the following tasks? (To no extent / To a small extent / To a moderate extent / To a great extent / To a very great extent / No basis to judge / Not applicable)
Conduct strategic planning
Set program performance goals
Develop program performance measures
Use program performance information to make a decision
Link the performance of your departmental activities to the achievement of your department's strategic goals
Block 3
.
Now we turn to questions concerning the group within your department in which you work.
Q10. To what extent, if at all, has your group done the following?
To no extent / To a small extent / To a moderate extent / To a great extent / To a very great extent / No basis to judge / Not applicable
Communicated its mission to you in a clear, understandable way.
Defined its strategic goals.
Communicated to you how your
everyday job responsibilities
relate to the attainment of the
agency’s strategic goals.
Developed meaningful ways to
measure whether the agency is
achieving its strategic goals.
Q11.
Does your group within your department currently collect and analyze performance measures related to the work
for which you are responsible?
1 year
2-3 years
4-5 years
More than 5 years
Q12.
For how many years has your group been collecting and analyzing performance measures?
Q13. For the work for which you are responsible, to what extent, if at all, do you use the information obtained from
performance measurement when participating in each of the following activities?
To no extent / To a small extent / To a moderate extent / To a great extent / To a very great extent / No basis to judge / Not applicable
Setting program priorities
Allocating resources
Adopting new program
approaches or changing work
processes
Refining program performance
measures
Setting individual job
expectations for the government
employees I manage or
supervise
Rewarding government
employees I manage or
supervise
Q14.1. There are a number of ways to measure the results of local government operations and programs. The
ways that are listed in the left-hand column of the table below are among those that are used most often. To what
extent, if at all, do you agree with each of the following statements as they relate to performance measures
related to the work for which you are responsible?
To no extent / To a small extent / To a moderate extent / To a great extent / To a very great extent / No basis to judge / Not applicable
We have performance
measures that tell us how
many things we produce or
services we provide.
We have performance
measures that tell us if we are
operating effectively.
We have performance
measures that tell us whether
or not we are satisfying our
customers.
We have performance
measures that tell us about the
quality of the products or
services we provide.
We have performance
measures that would
demonstrate to someone
outside our agency whether
or not we are achieving our
intended results.
Q14.2. Please provide a brief example where performance measures help you identify the effectiveness of
your operations, customer satisfaction, or the quality of your products or services.
Q15.1. Based on your experience in your department, to what extent, if at all, are the following factors
impediments to the measurement of performance or the use of performance information? (Part 1/2)
To no extent
To a small
extent
To a moderate
extent
To a great
extent
To a very great
extent
No basis to
judge / Not
applicable
Difficulty determining
meaningful measures
Different parties are using
different definitions to measure
performance
Difficulty obtaining valid or
reliable data
Difficulty obtaining data in time
to be useful
Difficulty in linking performance
to costs and budget allocations
High cost of collecting data
Lack of incentive (e.g. rewards,
positive recognition)
Difficulty resolving conflicting
interests of stakeholders, either
internal or external
Results of our work occurring
too far in the future to be
measured
Q15.2. Based on your experience in your department, to what extent, if at all, are the following factors
impediments to the measurement of performance or the use of performance information? (Part 2/2)
To no extent
To a small
extent
To a moderate
extent
To a great
extent
To a very great
extent
No basis to
judge / Not
applicable
Difficulty distinguishing
between results produced by
the program and results caused
by other factors
Existing information technology
and/or systems incapable of
providing needed data
Concern that performance
information could be used
against your program or
agency
Lack of staff who are
knowledgeable about gathering
and/or analyzing performance
information
Lack of ongoing top executive
commitment or support for
using performance information
to make program and/or
funding decisions
Lack of ongoing political
support for using performance
information from the City’s
political leaders to make
program and/or funding decisions
Difficulty determining how to
use performance information to
improve the program
Difficulty determining how to
use performance information to
set new or revise existing
performance goals
Q16.1. Based on your experience in your department, to what extent, if at all, do you agree with the following
statements? (Part 1/3)
To no extent
To a small
extent
To a
moderate
extent
To a great
extent
To a very
great extent
No basis to
judge / Not
applicable
Department managers/supervisors at my
level have the decision-making authority
they need to help the agency accomplish
its strategic goals.
Department managers/supervisors at my
level are held accountable for the results of
the departmental work they are responsible
for.
Employees in my agency receive positive
recognition for helping the agency
accomplish its strategic goals.
My department is able to shift resources
within its budget to accomplish its mission.
The individual that I report to periodically
reviews performance measures with me to
assess the results or outcomes of the
department work for which I am
responsible.
Results or outcome-oriented performance
information are used by the department to
identify problems and develop solutions to
those problems.
Results or outcome-oriented performance
information are used by the department to
help develop its budget.
Q16.2. Based on your experience in your department, to what extent, if at all, do you agree with the following
statements? (Part 2/3)
To no extent
To a small
extent
To a
moderate
extent
To a great
extent
To a very
great extent
No basis to
judge / Not
applicable
Funding decisions for the department
activities for which I am responsible are
based on results or outcome-oriented
performance information.
My agency’s top leadership demonstrates a
strong commitment to achieving results.
My direct supervisor demonstrates a strong
commitment to achieving results.
Managers in my department meet regularly
to review performance information.
Managers in my department regularly form
action plans to address issues that have
been identified through the review of
performance information.
Management above my level has made
changes to the department activities for
which I am responsible based on results or
outcome-oriented performance information.
Q16.3. Based on your experience in your department, to what extent, if at all, do you agree with the following
statements? (Part 3/3)
To no extent
To a small
extent
To a
moderate
extent
To a great
extent
To a very
great extent
No basis to
judge / Not
applicable
My department provides me with access to
knowledgeable people (either from within
or outside of the agency) that can provide
assistance with developing performance
measures, collecting the measures, and
interpreting the data.
My department is a very dynamic and
entrepreneurial place.
People in my department are willing to stick
their necks out and take risks.
The glue that holds my department
together is a commitment to innovation and
development.
There is an emphasis in my department on
being the best.
In my department, readiness to meet new
challenges is important.
There is broad consensus among political
leaders, departmental employees, major
stakeholder groups, and citizens
concerning what your department’s main
priorities should be.
Los Angeles citizens and stakeholder
groups expect your department to be able
to provide measures demonstrating
performance.
Q17.1. In the last six months, have you discussed issues related to performance metrics or performance
management in your department with anyone in the Mayor’s Office or someone who works in another Los
Angeles City Department?
Yes
No
Q17.2. Please indicate the department or Mayor's Office where the people with whom you discussed
performance management work. Check all that apply.
Mayor's Office
Aging
Airports
Animal Services
Building and Safety
City Administrative Officer
City Clerk
City Employees Retirement System
City Ethics Commission
City Planning and Development
Cultural Affairs
Disability
DWP Joint
DWP Power
DWP Water
Economic and Workforce Development
El Pueblo
Emergency Management
Employee Relations Board
Fire
General Services
Harbor
Housing and Community Investment
Information Technology Agency
Library
Los Angeles Convention Center
Neighborhood Empowerment
Office of Finance
Pension
Personnel
Police
Public Works Board of Public Works
Public Works Contract Administration
Public Works Engineering
Public Works Sanitation
Public Works Street Lighting
Public Works Street Services
Recreation and Parks
Transportation
Zoo
Q17.3. Approximately, what is the number of total conversations that you have had with these people concerning
performance management?
1
2-5
6-10
More than 10
Q18. Based on your experience with working for the City of Los Angeles, how much do you agree or disagree
with the following statements?
Disagree Strongly Disagree Neutral Agree Agree Strongly
Performance information is
more useful for high level
administrators than managers
in my position.
Implementing performance
management systems has the
potential to dramatically
improve our department’s ability
to achieve its strategic goals.
The realities of work within my
department make it unlikely that
performance management will
have a large impact on my
department’s ability to achieve
its strategic goals.
Q19.1. Prior to receiving this questionnaire, had you heard of Mayor Garcetti’s Back to Basics Initiative
requiring all City Departments to develop performance management systems?
Yes
No
Q19.2. How would you describe your level of awareness of the requirements of the Initiative?
Low
Moderate
Extensive
Extensive, and I have been involved in my department’s efforts to develop its performance management system.
Block 4. Finally, we turn to questions related to performance management and open data.
Q20.
Are you familiar with the Los Angeles Open Data platform that was instituted by the Garcetti Administration?
Yes
No
Do not know
Q21.1. Based on your experience in your department, to what extent, if at all, do you agree with the following
statements? (Part 1/2)
To no extent
To a small
extent
To a
moderate
extent
To a great
extent
To a very
great extent
No basis to
judge / Not
applicable
Over the last year work within your
department has been impacted or
influenced by the Los Angeles Open Data
platform.
Over the last year you have observed
overlap between performance
management initiatives in your department
and the Los Angeles Open Data platform.
For your department as a whole,
performance data from your department
appears on the Los Angeles Open Data
platform.
For the work for which you are personally
responsible, performance measures you
personally utilize appear on the Los
Angeles Open Data platform.
The data posted on the Los Angeles Open
Data platform accurately reflects the work
your department conducts.
Q21.2. Based on your experience in your department, to what extent, if at all, do you agree with the following
statements? (Part 2/2)
To no extent
To a small
extent
To a
moderate
extent
To a great
extent
To a very
great extent
No basis to
judge / Not
applicable
Your department works with outside groups
via performance data or the Los Angeles
Open Data platform in order to achieve its
goals/objectives.
My department receives feedback from
citizens or citizens groups based upon
information disseminated through the Los
Angeles Open Data platform.
My department adjusts its performance
metrics or objectives due to feedback from
citizens or citizens groups.
Over the last year my department has
shared its performance metrics or data with
another department within the City.
Over the last year you have personally
utilized the Los Angeles Open Data
platform to access data from another City
department.
Over the last year you have personally
utilized the Los Angeles Open Data
platform to gain access to data about your
own department that you were not able to
access internally.
Q22.
Does public sharing of your department's data through the Los Angeles Open Data platform, on the whole, lead to
positive feedback or negative feedback for your department?
Q23. We would like to give you the opportunity to share any other perspectives on your experiences with
performance management in the course of your job at the City of Los Angeles. Please use the text box below to
share any other information you think is relevant to understanding how performance management has been
implemented or impacted you.
Appendix B: Interview and Meeting Observation Protocols
B.1 Interview Protocol for Departmental officials working on COMPSTAT
1. How did you come to be involved in your department’s COMPSTAT initiative?
2. How has your department organized the team that will be working on its COMPSTAT
system?
3. What in your background in terms of work experience and/or education do you think has
been most important for enabling you to contribute to the COMPSTAT process?
a. What areas do think you and your team will need added training to get up to
speed?
4. What has been the role of the iPMU from your perspective?
5. Does your department have a strategic plan?
6. What steps has your department been taking to date to respond to the Mayor’s directive
on creating COMPSTAT procedures?
a. Have you made your March 2014 deadline for metrics?
b. What metrics are you defining as your key performance indicators? Why?
i. Who has been involved in developing objectives and metrics?
ii. How have you addressed the multiplicity of goals of NCs?
c. Have you developed logic models of the department’s work processes?
d. One challenge in COMPSTAT systems is getting departments to distinguish
between inputs, outputs, and outcomes. How far along is your department in
defining the critical outcomes you seek to achieve and in designing metrics to
map those outcomes?
7. What have been the most important successes to date in the process?
a. A challenge for all performance management systems is to find areas in which
there are performance issues AND in which there are feasible corrective actions.
What do you see as the areas in your work that fit these criteria?
8. What are the main challenges that you have been facing?
9. What are the main sources of data that are available to your department?
a. How easy would it be to generate a report that compared metrics over time and
between subunits?
10. Can I have copies of written memos from staff meetings that review metrics and form
action items?
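Item 9a above asks how readily a department could compare metrics over time and between subunits. As an illustrative sketch only (the records, subunit names, and field names are hypothetical, not drawn from any department's actual data systems), such a comparison report amounts to pivoting a long-format metrics table into a subunit-by-year view:

```python
# Hypothetical sketch of the comparison report in interview item 9a:
# pivot long-format (subunit, year, value) records into a table that makes
# time trends and cross-unit gaps visible. All names and data are illustrative.
from collections import defaultdict

records = [
    {"subunit": "North", "year": 2013, "value": 120},
    {"subunit": "North", "year": 2014, "value": 135},
    {"subunit": "South", "year": 2013, "value": 98},
    {"subunit": "South", "year": 2014, "value": 110},
]

def pivot(rows):
    """Return {subunit: {year: value}} from long-format metric records."""
    table = defaultdict(dict)
    for row in rows:
        table[row["subunit"]][row["year"]] = row["value"]
    return dict(table)

report = pivot(records)
for unit, series in sorted(report.items()):
    trend = series[2014] - series[2013]  # year-over-year change
    print(f"{unit}: {series} (change: {trend:+d})")
```

The ease of producing such a pivot in practice depends on whether the department's data already live in a consistent long format, which is precisely what item 9a probes.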
B.2 Observation Protocol for COMPSTAT meetings
Ask for copies of any memos and other handouts that are presented at the meeting. If there is a
PowerPoint presentation ask for the slides.
1. What data is presented during the meeting?
2. Is it presented in a manner that accentuates comparison (e.g. time trends, comparison
between departmental subunits, benchmarks from other cities)?
3. Does the meeting identify any issues or problems related to the analysis of data?
4. Do the participants review any action items that were set in previous meetings?
5. Are there discussions/concerns about the integrity of the data?
6. Do they review any action items from previous meetings?
7. How often are these meetings held?
Appendix C: Selected fsQCA Variables Constructed with Survey
Questions
C.1 Outcome Variables
Q1. For the work for which you are responsible, to what extent, if at all, do you use the
information obtained from performance measurement when participating in each of the
following activities? Setting program priorities.
Q2. For the work for which you are responsible, to what extent, if at all, do you use the
information obtained from performance measurement when participating in each of the
following activities? Allocating resources.
Q3. For the work for which you are responsible, to what extent, if at all, do you use the
information obtained from performance measurement when participating in each of the
following activities? Adopting new program approaches or changing work processes.
Q4. For the work for which you are responsible, to what extent, if at all, do you use the
information obtained from performance measurement when participating in each of the
following activities? Refining program performance measures.
Q5. For the work for which you are responsible, to what extent, if at all, do you use the
information obtained from performance measurement when participating in each of the
following activities? Setting individual job expectations for the government employees I
manage or supervise.
Q6. For the work for which you are responsible, to what extent, if at all, do you use the
information obtained from performance measurement when participating in each of the
following activities? Rewarding government employees I manage or supervise.
C.2 Independent Variables
Analytic Capacity
Q1. Based on your experience in your department, to what extent, if at all, do you agree with
the following statements? My department provides me with access to knowledgeable
people (either from within or outside of the agency) that can provide assistance with
developing performance measures, collecting the measures, and interpreting the data.
Innovative Culture
Q1. Based on your experience in your department, to what extent, if at all, do you agree with
the following statements? My department is a very dynamic and entrepreneurial place.
Q2. Based on your experience in your department, to what extent, if at all, do you agree with
the following statements? People in my department are willing to stick their necks out
and take risks.
Q3. Based on your experience in your department, to what extent, if at all, do you agree with
the following statements? The glue that holds my department together is a commitment
to innovation and development.
Q4. Based on your experience in your department, to what extent, if at all, do you agree with
the following statements? There is an emphasis in my department on being the best.
Q5. Based on your experience in your department, to what extent, if at all, do you agree
with the following statements? In my department, readiness to meet new challenges is
important.
Strong Leadership
Q1. To what extent, if at all, has your group done the following? Communicated its mission
to you in a clear, understandable way.
Q2. To what extent, if at all, has your group done the following? Defined its strategic goals.
Q3. To what extent, if at all, has your group done the following? Communicated to you how
your everyday job responsibilities relate to the attainment of the agency’s strategic
goals.
Q4. Based on your experience in your department, to what extent, if at all, are the following
factors impediments to the measurement of performance or the use of performance
information? Lack of ongoing top executive commitment or support for using
performance information to make program and/or funding decisions. [reversed]
Q5. Based on your experience in your department, to what extent, if at all, do you agree with the
following statements? Employees in my agency receive positive recognition for helping
the agency accomplish its strategic goals.
Q6. Based on your experience in your department, to what extent, if at all, do you agree with the
following statements? The individual that I report to periodically reviews performance
measures with me to assess the results or outcomes of the department work for which I
am responsible.
Q7. Based on your experience in your department, to what extent, if at all, do you agree with the
following statements? My agency’s top leadership demonstrates a strong commitment to
achieving results.
Q8. Based on your experience in your department, to what extent, if at all, do you agree with the
following statements? There is broad consensus among political leaders, departmental
employees, major stakeholder groups, and citizens concerning what your department’s
main priorities should be.
Good Metrics
Q1. To what extent, if at all, do you agree with each of the following statements as they
relate to performance measures related to the work for which you are responsible? We
have performance measures that tell us how many things we produce or services we
provide.
Q2. To what extent, if at all, do you agree with each of the following statements as they
relate to performance measures related to the work for which you are responsible? We
have performance measures that tell us if we are operating effectively.
Q3. To what extent, if at all, do you agree with each of the following statements as they
relate to performance measures related to the work for which you are responsible? We
have performance measures that tell us whether or not we are satisfying our customers.
Q4. To what extent, if at all, do you agree with each of the following statements as they
relate to performance measures related to the work for which you are responsible? We
have performance measures that tell us about the quality of the products or services we
provide.
Q5. To what extent, if at all, do you agree with each of the following statements as they
relate to performance measures related to the work for which you are responsible? We
have performance measures that would demonstrate to someone outside our agency
whether or not we are achieving our intended results.
Q6. Based on your experience in your department, to what extent, if at all, are the following
factors impediments to the measurement of performance or the use of performance
information? Difficulty determining meaningful measures. [reversed]
Q7. Based on your experience in your department, to what extent, if at all, are the following
factors impediments to the measurement of performance or the use of performance
information? Different parties are using different definitions to measure performance.
[reversed]
Q8. Based on your experience in your department, to what extent, if at all, are the following
factors impediments to the measurement of performance or the use of performance
information? Difficulty obtaining valid or reliable data. [reversed]
Q9. Based on your experience in your department, to what extent, if at all, are the following
factors impediments to the measurement of performance or the use of performance
information? Difficulty obtaining data in time to be useful. [reversed]
Q10. Based on your experience in your department, to what extent, if at all, are the following
factors impediments to the measurement of performance or the use of performance
information? High cost of collecting data. [reversed]
Q11. Based on your experience in your department, to what extent, if at all, are the following
factors impediments to the measurement of performance or the use of performance
information? Existing information technology and/or systems incapable of providing
needed data. [reversed]
Q12. To what extent, if at all, has your group done the following? Developed meaningful
ways to measure whether the agency is achieving its strategic goals.
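The composite conditions above mix positively worded items with impediment items marked [reversed]. As a minimal sketch of that construction (assuming, purely for illustration, 1-5 Likert scoring and simple-mean aggregation after reflecting the reversed items; the dissertation's actual fsQCA calibration may differ, and all item names here are hypothetical), the reflect-then-average step looks like:

```python
# Hypothetical sketch: build a composite score from 1-5 Likert responses,
# reflecting [reversed] impediment items before averaging. Item names and
# the simple-mean aggregation are illustrative assumptions, not the study's code.

def reflect(score, low=1, high=5):
    """Reverse-code a response so impediment items point the same direction
    as the positively worded items (e.g. on a 1-5 scale, 2 becomes 4)."""
    return low + high - score

def composite(responses, reversed_items):
    """Average a dict of item scores, reflecting the reversed ones."""
    values = [reflect(v) if item in reversed_items else v
              for item, v in responses.items()]
    return sum(values) / len(values)

respondent = {
    "outputs_measured": 4,        # "how many things we produce"
    "operating_effectively": 3,
    "difficulty_meaningful": 2,   # impediment item, [reversed]
    "data_validity": 1,           # impediment item, [reversed]
}
score = composite(respondent,
                  reversed_items={"difficulty_meaningful", "data_validity"})
# (4 + 3 + reflect(2) + reflect(1)) / 4 = (4 + 3 + 4 + 5) / 4 = 4.0
```

A higher composite then indicates both stronger reported measures and weaker reported impediments, which is the intended direction of the "Good Metrics" condition.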
Appendix D: Selected Regression Model Variables Constructed with
Survey Questions
D.1 Dependent Variables
Awareness of Performance Management Reforms
Q1. How would you describe your level of awareness of the requirements of the Initiative?
Attitudes Towards Performance Management Reforms
Q1. Based on your experience with working for the City of Los Angeles, how much do you
agree or disagree with the following statements? Implementing performance
management systems has the potential to dramatically improve our department’s ability
to achieve its strategic goals.
Individual Usage of Performance Information
Q1. For the work for which you are responsible, to what extent, if at all, do you use the
information obtained from performance measurement when participating in each of the
following activities? Setting program priorities.
Q2. For the work for which you are responsible, to what extent, if at all, do you use the
information obtained from performance measurement when participating in each of the
following activities? Allocating resources.
Q3. For the work for which you are responsible, to what extent, if at all, do you use the
information obtained from performance measurement when participating in each of the
following activities? Adopting new program approaches or changing work processes.
Q4. For the work for which you are responsible, to what extent, if at all, do you use the
information obtained from performance measurement when participating in each of the
following activities? Refining program performance measures.
Q5. For the work for which you are responsible, to what extent, if at all, do you use the
information obtained from performance measurement when participating in each of the
following activities? Setting individual job expectations for the government employees I
manage or supervise.
Q6. For the work for which you are responsible, to what extent, if at all, do you use the
information obtained from performance measurement when participating in each of the
following activities? Rewarding government employees I manage or supervise.
Organizational Usage of Performance Information
Q1. To what extent, if at all, do you agree with each of the following statements as they
relate to performance measures related to the work for which you are responsible? We
have performance measures that tell us how many things we produce or services we
provide.
Q2. To what extent, if at all, do you agree with each of the following statements as they
relate to performance measures related to the work for which you are responsible? We
have performance measures that tell us if we are operating effectively.
Q3. To what extent, if at all, do you agree with each of the following statements as they
relate to performance measures related to the work for which you are responsible? We
have performance measures that tell us whether or not we are satisfying our customers.
Q4. To what extent, if at all, do you agree with each of the following statements as they
relate to performance measures related to the work for which you are responsible? We
have performance measures that tell us about the quality of the products or services we
provide.
Q5. To what extent, if at all, do you agree with each of the following statements as they
relate to performance measures related to the work for which you are responsible? We
have performance measures that would demonstrate to someone outside our agency
whether or not we are achieving our intended results.
Proficiency in Using Performance Information
Q1. How would you rate your current level of proficiency in each of the following skills?
Reading and understanding quantitative information provided to you by others.
Q2. How would you rate your current level of proficiency in each of the following skills?
Analyzing raw data such as a survey results or budget numbers.
Q3. How would you rate your current level of proficiency in each of the following skills?
Creating graphics to summarize data.
Q4. How would you rate your current level of proficiency in each of the following skills?
Performing statistical analyses.
D.2 Independent Variables
Public Service Motivation
Q1. Please indicate the degree to which you agree or disagree with the following statements.
Meaningful public service is very important to me.
Q2. Please indicate the degree to which you agree or disagree with the following statements.
I am often reminded by daily events about how dependent we are on one another.
Q3. Please indicate the degree to which you agree or disagree with the following statements.
Making a difference in society means more to me than personal achievements.
Q4. Please indicate the degree to which you agree or disagree with the following statements.
I am willing to take risks on the job to get things done.
Q5. Please indicate the degree to which you agree or disagree with the following statements.
I am not afraid to go to bat for the rights of others even if it means I will be ridiculed.
Q6. Please indicate the degree to which you agree or disagree with the following statements.
I am prepared to make sacrifices for the good of society.
Training in Performance Information Use
Q1. Has the City of Los Angeles ever provided you with training to accomplish the
following tasks? Conduct strategic planning.
Q2. Has the City of Los Angeles ever provided you with training to accomplish the
following tasks? Develop program performance measures.
Q3. Has the City of Los Angeles ever provided you with training to accomplish the
following tasks? Use program performance information to make a decision.
Q4. Has the City of Los Angeles ever provided you with training to accomplish the
following tasks? Link the performance of your departmental activities to the
achievement of your department’s strategic goals.
D.3 Control Variables
Organizational Communication 1
Q1. To what extent, if at all, has your group done the following? Communicated its mission
to you in a clear, understandable way.
Q2. To what extent, if at all, has your group done the following? Defined its strategic goals.
Q3. To what extent, if at all, has your group done the following? Communicated to you how
your everyday job responsibilities relate to the attainment of the agency’s strategic
goals.
Q4. To what extent, if at all, has your group done the following? Developed meaningful
ways to measure whether the agency is achieving its strategic goals.
Organizational Communication 2
Q1. Approximately, what is the number of total conversations that you have had with these
people concerning performance management?
Organizational Impediments to Performance Management Reforms
Q1. To what extent, if at all, are the following factors impediments to the measurement of
performance or the use of performance information? Difficulty determining meaningful
measures.
Q2. To what extent, if at all, are the following factors impediments to the measurement of
performance or the use of performance information? Different parties are using
different definitions to measure performance.
Q3. To what extent, if at all, are the following factors impediments to the measurement of
performance or the use of performance information? Difficulty obtaining valid or
reliable data.
Q4. To what extent, if at all, are the following factors impediments to the measurement of
performance or the use of performance information? Difficulty obtaining data in time to
be useful.
Q5. To what extent, if at all, are the following factors impediments to the measurement of
performance or the use of performance information? Difficulty in linking performance
to costs and budget allocations.
Q6. To what extent, if at all, are the following factors impediments to the measurement of
performance or the use of performance information? High cost of collecting data.
Q7. To what extent, if at all, are the following factors impediments to the measurement of
performance or the use of performance information? Lack of incentive.
Q8. To what extent, if at all, are the following factors impediments to the measurement of
performance or the use of performance information? Difficulty resolving conflicting
interests of stakeholders, either internal or external.
Q9. To what extent, if at all, are the following factors impediments to the measurement of
performance or the use of performance information? Results of our work occurring too
far in the future to be measured.
Q10. To what extent, if at all, are the following factors impediments to the measurement of
performance or the use of performance information? Difficulty distinguishing between
results produced by the program and results caused by other factors.
Q11. To what extent, if at all, are the following factors impediments to the measurement of
performance or the use of performance information? Existing information technology
and/or systems incapable of providing needed data.
Q12. To what extent, if at all, are the following factors impediments to the measurement of
performance or the use of performance information? Concern that performance
information could be used against your program or agency.
Q13. To what extent, if at all, are the following factors impediments to the measurement of
performance or the use of performance information? Lack of staff who are
knowledgeable about gathering and/or analyzing performance information.
Q14. To what extent, if at all, are the following factors impediments to the measurement of
performance or the use of performance information? Lack of ongoing top executive
commitment or support for using performance information to make program and/or
funding decisions.
Q15. To what extent, if at all, are the following factors impediments to the measurement of
performance or the use of performance information? Lack of ongoing political support
for using performance information from the City’s political leaders to make program
and/or funding decisions.
Q16. To what extent, if at all, are the following factors impediments to the measurement of
performance or the use of performance information? Difficulty determining how to use
performance information to improve the program.
Q17. To what extent, if at all, are the following factors impediments to the measurement of
performance or the use of performance information? Difficulty determining how to use
performance information to set new or revise existing performance goals.
Length of Use
Q1. For how many years has your group been collecting and analyzing performance
measures?
Number of Employees Supervised
Q1. In your role, approximately how many employees and outside contractors do you
supervise?
Years Worked at the City of Los Angeles
Q1. How many years have you worked for the City of Los Angeles in total?
Highest Level of Education
Q1. What is the highest level of education that you completed?
References
Ammons, D. N. (1995). “Overcoming the Inadequacies of Performance Measurement in Local
Government: The Case of Libraries and Leisure Services.” Public Administration Review,
55(1), 37-47.
Ammons, D. N. (Ed.). (2008). Leading Performance Management in Local Government. ICMA
Press.
Andrews, R., Beynon, M. J., & McDermott, A. M. (2015). “Organizational Capability in the
Public Sector: A Configurational Approach.” Journal of Public Administration Research
and Theory, 26(2), 239-258.
Ammons, D. N., & Rivenbark, W. C. (2008). “Factors Influencing the Use of Performance Data
to Improve Municipal Services: Evidence from the North Carolina Benchmarking
Project.” Public Administration Review, 68(2), 304-318.
Askim, J., Johnsen, Å. & Christophersen, K. A. (2008). “Factors Behind Organizational
Learning from Benchmarking: Experiences from Norwegian Municipal Benchmarking
Networks.” Journal of Public Administration Research and Theory, 18(2), 297-320.
Aucoin, P. (1990). “Administrative Reform in Public Management: Paradigms, Principles,
Paradoxes and Pendulums.” Governance, 3(2), 115-137.
Baker, G. (1992). “Incentive Contracts and Performance Measurement.” Journal of Political
Economy, 100(3), 598-614.
Barzelay, M., & Thompson, F. (2006). “Responsibility Budgeting at the Air Force Materiel
Command.” Public Administration Review, 66(1), 127-138.
Behn, R. D. (2003). “Why Measure Performance? Different Purposes Require Different
Measures.” Public Administration Review, 63(5), 586-606.
Behn, R. D. (2005). “The Core Drivers of CitiStat: It's Not Just About the Meetings and the
Maps.” International Public Management Journal, 8(3), 295-319.
Behn, R. D. (2014). The PerformanceStat Potential: A Leadership Strategy for Producing
Results. Brookings Institution Press.
Bell, A., & Jones, K. (2015). “Explaining Fixed Effects: Random Effects Modeling of Time-
Series Cross-Sectional and Panel Data.” Political Science Research and Methods, 3(1),
133-153.
Bourdeaux, C. (2008). “Integrating Performance Information into Legislative Budget
Processes.” Public Performance & Management Review, 31(4), 547-569.
Bourdeaux, C., & Chikoto, G. (2008). “Legislative Influences on Performance Management
Reform.” Public Administration Review, 68(2), 253-265.
Boyne, G. A. (2003). “Sources of Public Service Improvement: A Critical Review and Research
Agenda.” Journal of Public Administration Research and Theory, 13(3), 367-394.
Boyne, G. A., & Gould-Williams, J. (2003). “Planning and Performance in Public Organizations
an Empirical Analysis.” Public Management Review, 5(1), 115-132.
Bratton, W. J., & Malinowski, S. W. (2008). “Police Performance Management in Practice:
Taking COMPSTAT to the Next Level.” Policing: A Journal of Policy and Practice, 2(3),
259-265.
Brest, P., & Krieger, L. H. (2010). Problem Solving, Decision Making, and Professional
Judgment: A Guide for Lawyers and Policymakers. Oxford University Press.
Cavalluzzo, K. S., & Ittner, C. D. (2004). “Implementing Performance Measurement
Innovations: Evidence from Government.” Accounting, Organizations and Society, 29(3-
4), 243-267.
Courty, P., & Marschke, G. (2004). “An Empirical Investigation of Gaming Responses to
Explicit Performance Incentives.” Journal of Labor Economics, 22(1), 23-56.
Courty, P., & Marschke, G. (2008). “A General Test for Distortions in Performance
Measures.” The Review of Economics and Statistics, 90(3), 428-441.
Dee, T., & Jacob, B. (2010). “Evaluating NCLB: Accountability Has Produced Substantial Gains
in Math Skills but Not in Reading.” Education Next, 10(3), 54-62.
Denhardt, R. B. (1981). “Toward a Critical Theory of Public Organization.” Public
Administration Review, 41(6), 628-635.
Diefenbach, T. (2009). “New Public Management in Public Sector Organizations: The Dark
Sides of Managerialistic ‘Enlightenment’.” Public Administration, 87(4), 892-909.
Dielman, T. E. (1983). “Pooled Cross-Sectional and Time Series Data: A Survey of Current
Statistical Methodology.” The American Statistician, 37(2), 111-122.
DiMaggio, P., & Powell, W.W. (1983). “The Iron Cage Revisited: Institutional Isomorphism and
Collective Rationality in Organizational Fields.” American Sociological Review, 48, 147-
160.
Dull, M. (2009). “Results-Model Reform Leadership: Questions of Credible Commitment.”
Journal of Public Administration Research and Theory 19(2), 255-284.
Durant, R.F. (1999). “The Political Economy of Results-Oriented Management in the
“Neoadministrative State” Lessons from the MCDHHS Experience.” The American
Review of Public Administration, 29(4), 307-331.
Durant, R. F., Kramer, R., Perry, J. L., Mesch, D., & Paarlberg, L. (2006). “Motivating
Employees in a New Governance Era: The Performance Paradigm Revisited.” Public
Administration Review, 66(4), 505-514.
Fernandez, S., & Moldogaziev, T. (2013). “Employee Empowerment, Employee Attitudes, and
Performance: Testing a Causal Model.” Public Administration Review, 73(3), 490-506.
Fiss, P. C. (2007). “A Set-Theoretic Approach to Organizational Configurations.” Academy of
management review, 32(4), 1180-1198.
Fiss, P. C. (2011). “Building Better Causal Theories: A Fuzzy Set Approach to Typologies in
Organization Research.” Academy of Management Journal, 54(2), 393-420.
Frambach, R. T., Fiss, P. C., & Ingenbleek, P. T. (2016). “How Important is Customer
Orientation for Firm Performance? A Fuzzy Set Analysis of Orientations, Strategies, and
Environments.” Journal of Business Research, 69(4), 1428-1436.
Frazier, M. A., & Swiss, J. E. (2008). “Contrasting Views of Results-Based Management Tools
from Different Organizational Levels.” International Public Management Journal 11(2),
214-234.
Frederickson, H. G. (1991). “Toward a Theory of the Public for Public
Administration.” Administration & Society, 22(4), 395-417.
Frederickson, D. G. (2003). Performance Measurement and Third-Party Government: A Study of
the Implementation of the Government Performance and Results Act in Five Health
Agencies (Doctoral Dissertation, Indiana University).
Frederickson, D. G. (2006). Measuring the Performance of the Hollow State. Georgetown
University Press.
Frumkin, P., & Galaskiewicz, J. (2004). “Institutional Isomorphism and Public Sector
Organizations.” Journal of Public Administration Research and Theory, 14(3), 283-307.
Gao, J. (2015). “Performance Measurement and Management in the Public Sector: Some
Lessons from Research Evidence.” Public Administration and Development, 35(2), 86-
96.
Gerrish, E. (2016). “The Impact of Performance Management on Performance in Public
Organizations: A Meta-Analysis.” Public Administration Review, 76(1), 48-66.
Greckhamer, T., Furnari, S., Fiss, P. C., & Aguilera, R. V. (2018). “Studying Configurations
with Qualitative Comparative Analysis: Best Practices in Strategy and Organization
Research.” Strategic Organization, 16(4), 482-495.
Greif, A., & Laitin, D. D. (2004). “A Theory of Endogenous Institutional Change.” American
Political Science Review, 98(4), 633-652.
Hannan, M. T., & Freeman, J. (1984). “Structural Inertia and Organizational Change.” American
Sociological Review, 149-164.
Hart, O. (1988). “Incomplete Contracts and the Theory of the Firm.” Journal of Law, Economics,
and Organizations, 4(1), 119-139.
Hatry, H. P. (2006). Performance Measurement: Getting Results. The Urban Institute.
Heinrich, C. J. (2002). “Outcomes–Based Performance Management in the Public Sector:
Implications for Government Accountability and Effectiveness.” Public Administration
Review, 62(6), 712-725.
Heinrich, C. J. (2009). “Third-Party Governance Under No Child Left Behind: Accountability
and Performance Management Challenges.” Journal of Public Administration Research
and Theory, 20(S1), i59-i80.
Heinrich, C. J., & Choi, Y. (2007). “Performance-Based Contracting in Social Welfare
Programs.” The American Review of Public Administration, 37(4), 409-435.
Heckman, J., Heinrich, C., Courty, P., Marschke, G., & J. Smith. (2011). The Performance of
Performance Standards. WE Upjohn Institute.
Heckman, J., Heinrich, C., & Smith, J. (1997). “Assessing the Performance of Performance
Standards in Public Bureaucracies.” The American Economic Review, 87(2), 389-395.
Heinrich, C. J., & Marschke, G. (2010). “Incentives and Their Dynamics in Public Sector
Performance Management Systems.” Journal of Policy Analysis and Management, 29(1),
183-208.
Heskett, J. (2012). The Culture Cycle: How to Shape the Unseen Force That Transforms
Performance. FT Press
Hood, C. (2012). “Public Management by Numbers as a Performance-Enhancing Drug: Two
Hypotheses.” Public Administration Review, 72(s1), S85-S92.
Hou, Y., Lunsford, R. S., Sides, K. C., & Jones, K. A. (2011). “State Performance-Based
Budgeting in Boom and Bust Years: An Analytical Framework and Survey of the
States.” Public Administration Review, 71(3), 370-388.
Hvidman, U., & Andersen, S. C. (2014). “Impact of Performance Management in Public and
Private Organizations.” Journal of Public Administration Research and Theory, 24(1),
35-58.
Ingraham, P. W. (1993). “Of Pigs in Pokes and Policy Diffusion: Another Look at Pay-for-
Performance.” Public Administration Review 53(4), 348-356.
James, O. (2011). “Managing Citizens' Expectations of Public Service Performance: Evidence
from Observation and Experimentation from Local Government.” Public
Administration, 89(4), 1419-1435.
Julnes, P. D. L., & Holzer, M. (2001). “Promoting the Utilization of Performance Measures in
Public Organizations: An Empirical Study of Factors Affecting Adoption and
Implementation.” Public Administration Review, 61(6), 693-708.
Kamensky, J. M. (1996). “Role of the "Reinventing Government" Movement in Federal
Management Reform.” Public Administration Review, 56(3), 247-255.
Kelly, D., & Amburgey, T. L. (1991). “Organizational Inertia and Momentum: A Dynamic
Model of Strategic Change.” Academy of Management Journal, 34(3), 591-612.
Kettl, D. F., & Kelman, S. (2007). Reflections on 21st Century Government Management. IBM
Center for the Business of Government.
Kroll, A., & Moynihan, D. P. (2015). “Does Training Matter? Evidence from Performance
Management Reforms.” Public Administration Review, 75(3), 411-420.
Lavertu, S., Lewis, D. E., & Moynihan, D. P. (2013). “Government Reform, Political Ideology,
and Administrative Burden: The Case of Performance Management in the Bush
Administration.” Public Administration Review, 73(6), 845-857.
Leavey Center for the Study of Los Angeles. (2007). Loyola Marymount University Public
Opinion Survey. Loyola Marymount University.
Leavey Center for the Study of Los Angeles. (2014). Loyola Marymount University Public
Opinion Survey. Loyola Marymount University.
Lipsky, M. (2010). Street-Level Bureaucracy: Dilemmas of the Individual in Public Service.
Russell Sage Foundation.
Lu, Y. (2008). “Managing the Design of Performance Measures: The Role of Agencies.” Public
Performance & Management Review, 32(1), 7-24.
Lynn, L. E., Heinrich, C. J., & Hill, C. J. (2000). “Studying Governance and Public
Management: Challenges and Prospects.” Journal of Public Administration Research and
Theory, 10(2), 233-262.
May, P. J., & Winter, S. C. (2007). “Politicians, Managers, and Street-Level Bureaucrats:
Influences on Policy Implementation.” Journal of Public Administration Research and
Theory, 19(3), 453-476.
McAfee, P. R., & McMillan, J. (1988). Incentives in Government Contracting. University of
Toronto Press.
Meier, K. J., & O'Toole Jr, L. J. (2001). “Managerial Strategies and Behavior in Networks: A
Model with Evidence from US Public Education.” Journal of Public Administration
Research and Theory, 11(3), 271-294.
Melkers, J. E., & Willoughby, K. G. (1998). “The State of the States: Performance-Based
Budgeting Requirements in 47 out of 50.” Public Administration Review, 58(1), 66-73.
Melkers, J. E., & Willoughby, K. G. (2001). “Budgeters' Views of State Performance-Budgeting
Systems: Distinctions Across Branches.” Public Administration Review, 61(1), 54-64.
Melkers, J. E., & Willoughby, K. G. (2005). “Models of Performance-Measurement Use in Local
Governments: Understanding Budgeting, Communication, and Lasting Effects.” Public
Administration Review, 65(2), 180-190.
Moe, T. M. (1984). “The New Economics of Organization.” American Journal of Political
Science, 28(4), 739-777.
Moynihan, D. P. (2006). “Managing for Results in State Government: Evaluating a Decade of
Reform.” Public Administration Review, 66(1), 77-89.
Moynihan, D. P. (2008). The Dynamics of Performance Management: Constructing Information
and Reform. Georgetown University Press.
Moynihan, D. P. (2013). “Advancing the Empirical Study of Performance Management: What
We Learned from the Program Assessment Rating Tool.” The American Review of Public
Administration, 43(5), 499-517.
Moynihan, D. P., & Ingraham, P. W. (2004). “Integrative Leadership in the Public Sector: A
Model of Performance-Information Use.” Administration & Society, 36(4), 427-453.
Moynihan, D. P., & Lavertu, S. (2012). “Does Involvement in Performance Management
Routines Encourage Performance Information Use? Evaluating GPRA and
PART.” Public Administration Review, 72(4), 592-602.
Moynihan, D. P., & Pandey, S. K. (2007). “The Role of Organizations in Fostering Public
Service Motivation.” Public Administration Review, 67(1), 40-53.
Moynihan, D. P., & Pandey, S. K. (2010). “The Big Question for Performance Management:
Why do Managers Use Performance Information?.” Journal of Public Administration
Research and Theory, 20(4), 849-866.
Mullen, P. R. (2006). “Performance-Based Budgeting: The Contribution of the Program
Assessment Rating Tool.” Public Budgeting & Finance, 26(4), 79-88.
Nielsen, P. A. (2013). “Performance Management, Managerial Authority, and Public Service
Performance.” Journal of Public Administration Research and Theory, 24(2), 431-458.
Niskanen, W.A. (1971). Bureaucracy and Representative Government. Transaction Publishers.
O’Hare, M. (1995). “Immaculate Correction.” (Paper presented at Association of Public Policy
and Management Annual Research Conference).
O'Toole Jr, L. J., & Meier, K. J. (1999). “Modeling the Impact of Public Management:
Implications of Structural Context.” Journal of Public Administration Research and
Theory, 9(4), 505-526.
Osborne, S. P. (2006). “The New Public Governance?.” Public Management Review, 8(3), 377-
387.
Osborne, D. & Gaebler, T. (1992). Reinventing Government: How the Entrepreneurial Spirit Is
Transforming the Public Sector. Addison-Wesley.
Osborne, D., & Plastrik, P. (1997). Banishing Bureaucracy: The Five Strategies for Reinventing
Government. Addison-Wesley.
Perrin, B. (1998). “Effective Use and Misuse of Performance Measurement.” American Journal
of Evaluation, 19(3), 367-379.
Perry, J. L. (1996). “Measuring Public Service Motivation: An Assessment of Construct
Reliability and Validity.” Journal of Public Administration Research and Theory, 6(1), 5-
22.
Perry, J. L. (2000). “Bringing Society in: Toward a Theory of Public-Service
Motivation.” Journal of Public Administration Research and Theory, 10(2), 471-488.
Perry, J. L., Mesch, D., & Paarlberg, L. (2006). “Motivating Employees in the New Governance
Era: The Performance Paradigm Revisited.” Public Administration Review, 66(4), 505-
514.
Perry, J. L., Hondeghem, A., & Wise, L. R. (2010). “Revisiting the Motivational Bases of Public
Service: Twenty Years of Research and an Agenda for the Future.” Public Administration
Review, 70(5), 681-690.
Peters, T.J., & Waterman, R.H. (1982). In Search of Excellence: Lessons from America's Best-
Run Companies. Harper and Row.
Prendergast, C. (1999). “The Provision of Incentives in Firms.” Journal of Economic
Literature, 37(1), 7-63.
Radin, B. A. (1998). “The Government Performance and Results Act (GPRA): Hydra-Headed
Monster or Flexible Management Tool?.” Public Administration Review 58(4), 307-316.
Radin, B. A. (2006). Challenging the Performance Movement: Accountability, Complexity, and
Democratic Values. Georgetown University Press.
Radin, B. A. (2008). “The Legacy of Federal Management Change: PART Repeats Familiar
Problems.” in Performance Management and Budgeting: How Governments Can Learn
from Experience, 114-134.
Ragin, C. C. (1987). The Comparative Method: Moving Beyond Qualitative and Quantitative
Strategies. University of California Press.
Ragin, C. C. (2000). Fuzzy-Set Social Science. University of Chicago Press.
Ragin, C. C. (2005). “From Fuzzy Sets to Crisp Truth Tables.” COMPASSS Working Paper.
Ragin, C. C. (2008). “Qualitative Comparative Analysis Using Fuzzy Sets (fsQCA).” in
Configurational Comparative Methods: Qualitative Comparative Analysis (QCA) and
Related Techniques, 87-121.
Rosenfeld, R., Fornango, R., & Baumer, E. (2005). “Did Ceasefire, COMPSTAT, and Exile
Reduce Homicide?.” Criminology & Public Policy, 4(3), 419-449.
Sanderson, I. (2001). “Performance Management, Evaluation and Learning in ‘Modern’ Local
Government.” Public Administration, 79(2), 297-313.
Sanger, M. B. (2008). “From Measurement to Management: Breaking Through the Barriers to
State and Local Performance.” Public Administration Review, 68, S70-S85.
Sanger, M. B. (2013). “Does Measuring Performance Lead to Better Performance?.” Journal of
Policy Analysis and Management, 32(1), 185-203.
Schutz, A. (1967). The Phenomenology of the Social World. Northwestern University Press.
Schick, A. (1966). “The Road to PBB: The Stages of Budget Reform.” Public Administration
Review, 26(4), 243-258.
Schlager, E., & Heikkila, T. (2009). “Resolving Water Conflicts: A Comparative Analysis of
Interstate River Compacts.” Policy Studies Journal, 37(3), 367-392.
Schneider, C. Q., & Wagemann, C. (2010). “Qualitative Comparative Analysis (QCA) and
Fuzzy-Sets: Agenda for a Research Approach and a Data Analysis
Technique.” Comparative Sociology, 9(3), 376-396.
Scott, W. R. (1998). Organization: Rational, Natural and Open Systems (5th ed.). Prentice Hall.
Smith, D. C., & Bratton, W. J. (2001). “Performance Management in New York City:
COMPSTAT and the Revolution in Police Management.” in Quicker, Better, Cheaper,
453-82.
Sole, F., & Schiuma, G. (2010). “Using Performance Measures in Public Organisations:
Challenges of Italian Public Administrations.” Measuring Business Excellence, 14(3), 70-
84.
Sonenshein, R. J. (2013). The City at Stake: Secession, Reform, and the Battle for Los Angeles.
Princeton University Press.
Soss, J., Fording, R., & Schram, S. F. (2011). “The Organization of Discipline: From
Performance Management to Perversity and Punishment.” Journal of Public
Administration Research and Theory, 21(S2), i203-i232.
Spicer, M. (2004). “Public Administration, the History of Ideas, and the Reinventing Government
Movement.” Public Administration Review, 64(3), 353-362.
Strauss, A., & Corbin, J. (1994). “Grounded Theory Methodology.” In the Handbook of
Qualitative Research, 273-285.
Stritch, J. M. (2017). “Minding the Time: A Critical Look at Longitudinal Design and Data
Analysis in Quantitative Public Management Research.” Review of Public Personnel
Administration, 37(2), 219-244.
Tullock, G. (1965). The Politics of Bureaucracy. Public Affairs Press.
Walker, R. M., Damanpour, F., & Devece, C. A. (2010). “Management Innovation and
Organisational Performance: Mediating Role of Planning and Control.” Journal of Public
Administration Research and Theory, 21(2), 367-386.
Wholey, J. S., & Hatry, H. P. (1992). “The Case for Performance Monitoring.” Public
Administration Review, 52(6), 604-610.
Wildavsky, A. A. (1966). “The Political Economy of Efficiency: Cost-Benefit Analysis, Systems
Analysis, and Program Budgeting.” Public Administration Review, 26(4), 292-310.
Wildavsky, A. A. (1973). “If Planning Is Everything, Maybe It’s Nothing.” Policy Sciences,
4(2), 127-153.
Wildavsky, A. A., & Caiden, C. (2004). The New Politics of the Budgetary Process (5th Ed.).
Longman.
Wooldridge, J. M. (Forthcoming). “Correlated Random Effects Models with Unbalanced
Panels.” Journal of Econometrics.
Ukeles, J. B. (1982). Doing More With Less: Turning Public Management Around. Amacon.
Williamson, O.E. (1981). “The Economics of Organization: The Transaction Cost
Approach.” American Journal of Sociology, 87(3), 548-577.
Wilson, J. Q. (1989). Bureaucracy: What Government Agencies Do and Why They Do It. Basic
Books.
Yang, K., & Hsieh, J. Y. (2007). “Managerial Effectiveness of Government Performance
Measurement: Testing a Middle-Range Model.” Public Administration Review, 67(5),
861-879.
Young, K. L., & Park, S. H. (2013). “Regulatory Opportunism: Cross-National Patterns in
National Banking Regulatory Responses Following the Global Financial Crisis.” Public
Administration, 91(3), 561-581.
Abstract
Performance management reforms have become widespread across all levels of government within the United States and internationally. The purpose of these performance management systems is to use data to improve organizational processes, accountability, and, ultimately, services delivered to citizens. Yet beyond a handful of prominent success stories, most attempts end up partially adopted, treated as a compliance exercise, or failing outright. This dissertation explores performance management systems through a holistic three-year study of reforms within the City of Los Angeles.

Three interlocking research questions are explored. First, how is success defined when it comes to performance management reforms? Is it simply the adoption of performance measures, or a deeper shift in how a public organization functions? Second, what factors are central to successful performance management systems, and in what combination? Third, how do organizations overcome obstacles that arise while implementing performance management reforms?

These questions and the evolution of reforms in Los Angeles are examined through a mixed methods research approach, layered upon a theoretical underpinning that considers the operational, managerial, and institutional levels of the City. Three methodologies are employed in a complementary fashion: semi-structured case studies organized around different performance management themes; multivariate regression analysis via several time-varied models that incorporate survey data and other quantitative data; and qualitative comparative analysis (QCA) using Boolean logic, which serves as a methodological bridge between the other two methods.

This dissertation finds that both process-based measures of success and measures of data-driven decision-making should be employed when assessing a performance management system. Success factors identified through multiple methodological approaches include leadership, good metrics, organizational size, strategic planning, and organizational culture. Additionally, the combination of strong leadership, organizational size, and good metrics was found to be a causal configuration leading to good outcomes for performance management reforms. Finally, the research demonstrates that inadequate data systems and a lack of financial resources are obstacles that can be overcome on the path to implementing successful reforms.
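The configurational logic described in the abstract (a combination of conditions, rather than any single factor, sufficing for a good outcome) can be illustrated with a toy crisp-set QCA truth table. This is only a minimal sketch with invented data, not the dissertation's actual analysis; the condition names mirror the factors named above purely for illustration.

```python
# Toy crisp-set QCA sketch. Each case records binary membership in three
# conditions (leadership, good_metrics, large_size) and the outcome
# (successful_reform). All data here are hypothetical.
cases = [
    # (leadership, good_metrics, large_size) -> successful_reform
    ((1, 1, 1), 1),
    ((1, 1, 1), 1),
    ((1, 0, 1), 0),
    ((0, 1, 1), 0),
    ((1, 1, 0), 0),
    ((0, 0, 0), 0),
]

def truth_table(cases):
    """Group cases by configuration and compute each row's consistency:
    the share of cases in that configuration exhibiting the outcome."""
    rows = {}
    for config, outcome in cases:
        n, pos = rows.get(config, (0, 0))
        rows[config] = (n + 1, pos + outcome)
    return {cfg: pos / n for cfg, (n, pos) in rows.items()}

table = truth_table(cases)

# Configurations fully consistent with the outcome are treated as
# sufficient paths to success (here, leadership AND metrics AND size).
sufficient = [cfg for cfg, consistency in table.items() if consistency == 1.0]
print(sufficient)  # → [(1, 1, 1)]
```

A fuzzy-set analysis (fsQCA) generalizes this by replacing binary membership with degrees of membership and applying a consistency threshold rather than requiring exactly 1.0.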
Asset Metadata
Creator: Jackman, Robert William (author)
Core Title: How can metrics matter: performance management reforms in the City of Los Angeles
School: School of Policy, Planning and Development
Degree: Doctor of Philosophy
Degree Program: Public Policy and Management
Publication Date: 08/05/2019
Defense Date: 05/09/2019
Publisher: University of Southern California (original), University of Southern California. Libraries (digital)
Tag: case studies, data-driven reforms, Local government, Los Angeles, mixed-methods, multivariate regression analysis, OAI-PMH Harvest, organizational reform, performance management, performance metrics, Public Administration, Public Management, qualitative comparative analysis (QCA)
Format: application/pdf (imt)
Language: English
Contributor: Electronically uploaded by the author (provenance)
Advisor: Musso, Juliet Ann (committee chair), Fiss, Peer C. (committee member), Resh, William G. (committee member)
Creator Email: rjackman@usc.edu, robertwjackman@alumni.stanford.edu
Permanent Link (DOI): https://doi.org/10.25549/usctheses-c89-204810
Unique identifier: UC11663173
Identifier: etd-JackmanRob-7735.pdf (filename), usctheses-c89-204810 (legacy record id)
Legacy Identifier: etd-JackmanRob-7735.pdf
Dmrecord: 204810
Document Type: Dissertation
Rights: Jackman, Robert William
Type: texts
Source: University of Southern California (contributing entity), University of Southern California Dissertations and Theses (collection)
Access Conditions: The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the a...
Repository Name: University of Southern California Digital Library
Repository Location: USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA