Running head: PERFORMANCE MEASUREMENT IN NONPROFITS
PERFORMANCE MEASUREMENT IN NONPROFITS: AN EVALUATION STUDY
by
Fredrica Piphus Singletary
A Dissertation Presented to the
FACULTY OF THE USC ROSSIER SCHOOL OF EDUCATION
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF EDUCATION
December 2017
Copyright 2017 Fredrica Piphus Singletary
EPIGRAPH
…my whole life is dedicated to change (Bigwood & Williams, 1998).
DEDICATION
Bringing the gifts that my ancestors gave,
I am the dream and the hope of the slave (Angelou, 1978).
This dissertation is dedicated to those who came before me whose prayers, sacrifices,
perseverance, and unrelenting fortitude made it possible for me to fulfill both my and their
wildest hopes and dreams.
ACKNOWLEDGEMENTS
My sincerest thanks to everyone who made this arduous journey worthwhile.
• To this study’s organization of focus, thank you for opening your doors to me. This
milestone would be incomplete without your support.
• Trey, we did it! Thanks for being my editor, cheerleader, and more. You willingly traded
newlywed life to support my dream of becoming Dr. Diva and for that I am forever grateful.
• Mommy and Daddy, thanks for simply challenging me to be my best. No words could ever
express my appreciation for your unconditional love and support.
• Dr. Sherrae M. Hayes Mack and future Drs. Christopher L. Shropshire and Nicole Robinson
Hightower, thanks for inspiring me and encouraging me to the finish line.
• To my besties (Megan, Whitney, Evelyn, Sherrae, Nicole, Andrea, & Chelsea), thanks for
keeping me sane.
• OCL Cohort 1, we survived…now, it’s time to change the world.
o To my study group (Dr. Josephine Macharia-Lowe and Dr. Corey Flournoy)–thanks for
always looking out for me!
o To my sweet Sorors of Delta Sigma Theta Sorority, Inc. (future Drs. Ayanna Davis,
Kim Green, Tyline Hood, Maleta Wilson and Dr. Monique Logan), OO-OOP! Love
you all…it was a tremendous blessing to have sisters on this journey.
• To my dissertation committee, thank you for your willingness to serve on my committee. I
gained so much from your feedback and insight!
o Dr. Julie Slayton, thank you for leading me through the dissertation process. I am truly
grateful for your patience, encouragement, and unwavering support.
o Dr. Alan G. Green & Dr. Renée Smith-Maddox, thanks for challenging me to think
critically and find answers to the hard questions.
TABLE OF CONTENTS
EPIGRAPH
DEDICATION
ACKNOWLEDGEMENTS
LIST OF FIGURES
ABSTRACT
CHAPTER ONE: INTRODUCTION
    Organizational Context and Mission
    Related Literature
    Importance of the Study
    Stakeholder for the Study
    Purpose of the Project and Questions
    Methodological Framework
    Definitions
    Organization of the Study
CHAPTER TWO: REVIEW OF THE LITERATURE
    Accountability in Nonprofit Organizations (NPOs)
    Clark and Estes’ (2008) Gap Analytic Conceptual Framework
    Stakeholder Knowledge and Motivation and Organizational Influences at Exodus Learning
        Knowledge and Skills
        Motivation
        Organization
    Interactive Conceptual Framework
CHAPTER THREE: METHODOLOGY
    Participating Stakeholders
        Interview Sampling Criteria and Rationale
        Interview Sampling (Recruitment) Strategy and Rationale
    Data Collection and Instrumentation
    Data Analysis
    Credibility and Trustworthiness
    Ethics
    Limitations and Delimitations
    Conclusion
CHAPTER FOUR: RESULTS AND FINDINGS
    Participating Stakeholders
    Findings
        Finding 1: Alignment with Aspirational Mission of EL through Recruitment, Faith, and Service-Orientation
        Finding 2: Varied Positions Resulting in Competing Agendas and Needs
        Finding 3: EL Staff at Cross-purposes due to Competing Agendas and Needs
    Conclusion
CHAPTER FIVE: IMPLICATIONS AND RECOMMENDATIONS FOR PRACTICE, POLICY, AND RESEARCH
    Summary of Findings
    Implications and Recommendations for Practice, Policy, and Research
        Implications and Recommendations for Practice
        Implications and Recommendations for Policy
        Implications and Recommendations for Research
    Conclusion
References
APPENDIX A: Interview Protocol
APPENDIX B: Informed Consent/Information Sheet
APPENDIX C: Recruitment Letter
LIST OF FIGURES
Figure 1. EL Staff KMO Conceptual Framework prior to Data Collection
Figure 2. EL Staff KMO Conceptual Framework after Data Collection
Figure 3. EL Organizational Chart
ABSTRACT
Using Clark and Estes’ (2008) Gap Analysis Framework, this study explored the way
staff members at a nonprofit afterschool program understood and enacted performance
measurement activities in relation to the organization’s mission. The study took place within a
three-site afterschool program serving about 80 underprivileged elementary-aged youth in a
large urban city in the southeastern region of the United States. This study sought to understand
the staff members’ roles and job-related duties in measuring the organization’s mission-related objectives. Five 60-minute, one-to-one semi-structured interviews were conducted with the Executive Director, Program Director, two Program Coordinators, and Site Assistant. Analysis revealed that the five staff members interviewed were selectively hired based on their Christian value systems and service orientation, resulting in alignment with the organization’s mission; however, their differing roles produced competing needs that placed them at cross-purposes and left their views of performance measurement unaligned. The three findings that
emerged were as follows: (1) Staff members were recruited through their professional and
personal networks, and they were subsequently hired based upon their anticipated alignment with
the mission of the organization, (2) While staff members aligned with the organization’s aspirational
mission, staff members’ performance measurement agendas and needs varied based upon their
role and respective job-related duties, and (3) Given their differing roles, the agendas and needs
identified by the staff members placed them at cross-purposes because their views of
performance measurement were not aligned. Overall, the study’s findings revealed that
alignment with the mission of the organization amongst staff members did not outweigh the
variation in staff members’ roles, resulting in divergent perspectives on performance
measurement.
CHAPTER ONE: INTRODUCTION
Over the past 20 years, nonprofit organizations’ (NPOs) growth, both in significance and
size, has led to increased interest in how NPOs allocate funding towards the achievement of
mission objectives (Francisco & Alves, 2012). According to Giving USA (2015), $358.38 billion
was contributed to the nonprofit sector in 2014, which was the highest recorded total in the
report’s 60-year history. As of 2006, 119 NPOs were being added to the sector daily
(Hasenwinkel, 2006). By definition, NPOs are tax-exempt organizations that seek to serve in the
public interest (i.e., charitable, educational, scientific, religious, etc.) and are designated as
501(c)(3) organizations with the Internal Revenue Service (DeMartinis, 2002).
Due to NPOs’ emphasis on mission-related objectives that would be otherwise financially
unsustainable using for-profit business models, NPOs have become increasingly pressured to
exhibit performance excellence (Cairns, Harris, Hutchinson, & Tricker, 2005). Funding for
NPOs reflects a broad pool of contributors; the federal government, NPO employees, donors,
board members, and funding agencies have all called for mission-based performance
measurement systems (Carman, 2008). However, the ability to track mission-related outcomes
and generate a social return on investment (SROI) has eluded the vast majority of NPOs (Sawhill
& Williamson, 2001). Key stakeholders such as employees, donors, board members, and funding
agencies, have not agreed on the best method of performance measurement. For Exodus
Learning (EL), this same trend holds true. (To protect the identity of the nonprofit evaluated
in this study, the pseudonym Exodus Learning is used.) Stakeholders at EL have identified the
need to establish a performance measurement system to evaluate the achievement of its
mission-related objectives. In this study, I sought to understand the way that one stakeholder
group within EL
responded to the call to measure its mission-related objectives. In the remainder of Chapter One,
I focus on EL and its challenge to assess the extent to which staff members engaged in
performance measurement activities to evaluate programming at EL. I offer the organizational
context and mission, followed by a review of related literature on performance and outcome
measurement, and conclude with an introduction of the study’s key stakeholder group and the
organization of the study.
Organizational Context and Mission
Nestled in a large urban city in the southeastern region of the United States (U.S.), EL
was a multi-site afterschool program serving underprivileged elementary-aged youth. The
mission of EL was to be a Christ-centered service providing distinctive personalized academic
and life skills that developed tomorrow’s leaders (Exodus Foundation, 2015). Since its founding
in 2001, EL had facilitated programming focused on academic enrichment, character
development, parent engagement, service learning, and fitness. During the school year, the
program was structured around the 5-day school week with hours from 3:30pm to 6:45pm, and a
5-week arts and academic camp was offered during summer break. At the time of the study, EL
had three sites that targeted students attending inner-city schools in neighborhoods with high
crime and poverty, schools that had been identified by the United Way and the Department of
Education as high risk for academic failure (Exodus Foundation, 2015).
The primary directive of EL was to equip students to become great students and
community leaders (Exodus Foundation, 2015). Through three primary Performance Targets
(PT’s) focused on enrichment in English/Language Arts, Mathematics, and Life Skills, EL
evaluated each student’s programmatic progress. The students, whose families voluntarily chose
to participate in the program, received individualized attention from EL staff members alongside
45 to 50 other students. With a 15:1 student-to-staff ratio, EL staff members employed an
individualized learning program through best-practice educational techniques and training on
character, etiquette, and life skills (Exodus Foundation, 2015).
Related Literature
The imperative for nonprofits to employ performance measurement systems has come
from a myriad of sources. NPOs, donors, business sponsors, and governmental contributors want
to be assured that their financial investments are, indeed, supporting the mission (Cairns et al.,
2005). Specifically, stakeholders are seeking to determine if NPOs have the operational
infrastructure and capacity to provide services with efficiency (Cairns et al., 2005). Moreover,
stakeholders are working to ensure that NPOs are ethically worthy of their tax-exempt status and
subsequent financial contributions (Cairns et al., 2005). As NPOs increase in popularity,
concerns related to measuring mission-based performance from both the executive leadership
teams of NPOs and stakeholders have increased (Kaplan, 2001; Rojas, 2004).
Yet while pressure to measure performance has increased, key stakeholders such as
employees, donors, board members, and funding agencies, have not agreed on the best method of
performance measurement. Given the complex nature of quantifying subjective outcomes, many
stakeholders divert to analyzing accounting records to measure mission success. However,
overemphasis on accounting records for performance measurement is flawed, as nonprofits tend
to pursue nonfinancial mission-items for their main objectives (Baber, Daniel, & Roberts, 2002).
NPOs do not subscribe to a conventional business model; therefore, instituting financial measures as
the primary marker for success would be inappropriate and make comparisons across NPOs
problematic (Ruebottom, 2011). Herman and Renz (1998) found that stakeholder groups have
extremely divergent views in defining organizational effectiveness. From a sample of 64 health
and welfare charities receiving funds from United Way, Herman and Renz (1998) discovered
that stakeholders defined “effective on the basis of the criteria and impressions they deem most
relevant” (p. 31). Essentially, stakeholders’ independent prioritization of components related to
organizational effectiveness has prevented many NPOs from implementing performance
measurement systems (Rojas, 2004; Sowa, Selden, & Sandfort, 2004).
The industry-wide call for consistent performance measurement in NPOs has generated
research on how watchdog and rating agencies currently evaluate nonprofits. Watchdog and
rating agencies, like the Better Business Bureau (BBB) Wise Giving Alliance, have been able to
objectively measure NPOs’ operational effectiveness by assessing key areas defined by their
respective metrics (Chen, 2009). The BBB Wise Giving Alliance Standards for Charitable
Accountability were created to evaluate NPOs’ compliance patterns and financial efficiencies to
assist donors in making informed giving decisions (Chen, 2009). Nevertheless, the bulk of
performance measurement systems have not been sophisticated enough to assess mission-based
outcomes (Ebrahim & Rangan, 2014; Kaplan, 2001).
NPOs that participate in third-party assessments with watchdog groups and rating
agencies are likely to achieve greater public support (Chen, 2009). Chen (2009) discovered that
NPOs that participated and met all the standards in programs facilitated by watchdog groups, like
the BBB Charity Review, were more likely to gain a 30% increase in public support in
comparison to nonprofits that did not meet all the standards. In a study led by the New York
Philanthropic Advisory Service of the Education and Research Foundation of the BBB of
Metropolitan New York, NPOs that met one extra standard were associated with an increase of
more than 7% in public support (Chen, 2009). Chen (2009) stated that these discoveries served
as reasons for nonprofits that did not work with a watchdog group to consider participating. He
also suggested that NPOs had to exercise caution in working with such agencies as donations
were the primary source of revenue for most NPOs. Misguided performance metrics that did not
consider the NPO’s operational infrastructure (i.e., size of organization, board size, fundraising
expenses, nonprofit mission, government funding) could yield misleading results and adversely
impact public support (Chen, 2009).
Overall, measuring mission success has been identified as difficult in nonprofit
management literature for decades (Ebrahim & Rangan, 2014; Sawhill & Williamson, 2001). As
NPOs increase in popularity, concerns related to measuring mission-based performance from the
executive leadership teams of NPOs and stakeholders, like the U.S. government, philanthropic
foundations, and individual donors have increased (Kaplan, 2001). At the federal level, the
passage of the Government Performance and Results Act (GPRA), which mandated federal
agencies to create 5-year strategic plans with performance indicators and goals, and the
implementation of the Office of Management and Budget’s Program Assessment Rating Tool
(PART) reflect the increasing demand for nonprofit organizations to consistently assess and
evaluate their programming (Carman, 2008). Within state and local governments, Balanced
Scorecards and similar organizational report cards have emerged in an effort to hold NPOs
accountable. Philanthropic funders, like the United Way, have employed outcome measurement
requirements to better measure the effectiveness of NPOs that are receiving United Way grants
(Carman, 2008; Ebrahim & Rangan, 2010).
While it is evident that stakeholders at every level are requiring NPOs to assess the
performance of their programs, the larger question is: What is performance measurement and
outcome measurement? Performance measurement is defined as a systematic way to quantify the
efficiency and effectiveness of an action (Neely, Gregory, & Platts, 1995). According to
Reisman and Clegg (1999), outcome measurement is a methodical approach to examine the
extent to which a program has accomplished its intended results. There is much debate within
nonprofit literature as to whether performance measurement and outcome measurement are
synonymous (Ebrahim & Rangan, 2014; Reisman & Clegg, 1999). The heart of the debate lies in
the definition of outcomes and how they are derived. Performance measurement is often
perceived as a business term with a strong tie to fiscal accountability and limited connection to
the amorphous mission-based programs facilitated by NPOs (Beamon & Balcik, 2008). In
contrast, outcome measurement is closely associated with the construct of “mission success” and
the paradigm that NPOs must be able to quantify the achievements of mission-based
programming. Regardless of which side of the debate researchers land, there is consensus that:
(1) measurement within the nonprofit sector is challenging, and (2) measurement within the
nonprofit sector is necessary (Beamon & Balcik, 2008).
Sawhill and Williamson (2001) provide a powerful narrative on the difficulty of
measuring performance within NPOs: “Imagine an organization whose mission is to alleviate
human suffering. How can you measure such an abstract notion? How can an organization
meaningfully assess its direct contribution to such a broadly stated mission?” (p. 371).
Alternatively, another organization may have a mission of feeding all the children living in
poverty within a 2-mile radius. Each of these NPOs has a stated mission, yet the ability to
determine the extent to which each has achieved its intended results, as defined by outcome
measurement, is only attainable by the NPO striving to feed children within the 2-mile radius.
However, both NPOs can implement systems to quantify the efficiency and effectiveness of their
programs, as defined by performance measurement.
Within Ebrahim and Rangan’s (2014) performance framework, the distinction between
performance measurement and outcome measurement is further illuminated through the
application of key components of the logic model–inputs, activities, outputs, outcomes, and
impacts. Logic models are widely accepted and commonly used within frameworks for
measuring performance in NPOs; logic models were developed for the United States Agency for
International Development (USAID) for the evaluation of programs in the late 1960s (Ebrahim
& Rangan, 2014). What does this distinction mean? It may not be feasible for many NPOs to
measure performance beyond outputs and outcomes (Ebrahim & Rangan, 2014). Predicated on
contingency theory, Ebrahim and Rangan’s (2014) performance framework asserts that there is
no one best way to measure performance in NPOs. Due to the broad range of NPOs, experts
agree that a set of universal standards is more applicable than attempting to employ an industry-
wide performance measurement system, which could not capture the complexity of every NPO;
presently, there is no formal best practice for performance management in NPOs (Ebrahim &
Rangan, 2010; Molnár, 2008).
Importance of the Study
Stakeholders’ expectations for NPOs to adopt and implement quality performance
measurement systems to track mission-based outcomes have increased. In the wake of fiscal
misconduct within industries across America, stakeholders desire to know how their financial
contributions are correcting the societal ills that NPOs aspire to address; stakeholders also wish to
hold NPOs accountable for fulfilling their mission-related objectives (Cairns et al., 2005;
Ebrahim & Rangan, 2014; Kaplan, 2001; Sawhill & Williamson, 2001). Donors, business
sponsors, and governmental contributors want to be assured that their financial investments are,
indeed, supporting the mission (Cairns et al., 2005; Carman, 2008). Specifically, stakeholders are
attempting to ensure that NPOs are ethically worthy of their tax-exempt status and subsequent
financial contributions, and have the operational infrastructure and capacity to provide their
stated services with efficiency (Cairns et al., 2005).
Stakeholder for the Study
Stakeholder groups play a pivotal role in an organization’s success, as they have a direct
impact on an organization’s ability to achieve its performance goals (Wheeler & Sillanpää,
NPOs do not generate profits for shareholders, but the value created by treating
stakeholders well translates into additional funds that can be used to continue advancing the
organization’s mission (Hull & Lio, 2006). For NPOs, like EL, each stakeholder plays an integral
part in advancing the organization’s goals and strategic objectives. At EL, the stakeholders
included students, teachers, parents, staff, volunteers, board members, leadership, donors, and
community supporters. Although all stakeholders contributed to the achievement of the overall
mission of EL, it was important to evaluate how EL staff members understood and enacted
performance measurement activities in relation to the organization’s mission. The inability to
measure mission success for the Exodus Foundation could result in reduced funding, as the
organization might be impeded from sharing the impact of donations on students’ academic growth and
character development.
Purpose of the Project and Questions
The purpose of this project was to conduct a modified gap analysis (Clark & Estes, 2008)
to examine how staff members at EL understood and enacted performance measurement
activities in relation to the organization’s mission. Data collection included the areas of
stakeholders’ anticipated knowledge and skill, motivation, and organizational issues for the
framing of the study’s interview protocol. While a complete gap analysis would focus on all
stakeholders, for practical purposes the stakeholders of focus in this analysis were the
Executive Director, Program Director, two Program Coordinators, and Site Assistant. The
stakeholders selected represent the EL staff members who were responsible for completing
performance measurement activities, and their collective perspective was instrumental in
answering the study’s guiding questions. At the time of the study, EL had no formal performance
measurement system in place.
As such, the question that guided this study was the following:
1. What is it that supports or impedes the enactment of performance measurement at
Exodus Learning?
Methodological Framework
Qualitative data gathering and analysis were conducted to evaluate EL staff members’
roles in performance measurement activities in relation to the organization’s mission. EL staff
members’ current performance was examined using interviews, literature review, and content
analysis. Research based solutions were presented and recommended in a comprehensive
manner.
Definitions
Nonprofit Organization: Tax-exempt organizations that seek to serve the public interest (i.e.,
charitable, educational, scientific, religious, etc.) and have a designation as a 501(c)(3)
organizations with the Internal Revenue Service (DeMartinis, 2002).
Mission: The purpose and reason for which a nonprofit organization exists (Knauft, Berger, &
Gray, 1991).
Mission Success: A construct related to a nonprofit organization achieving its stated mission-
related goals (Niven, 2011; Sawhill & Williamson, 2001).
Performance Measurement: A systematic way to quantify the efficiency and effectiveness of an
action (Neely, Gregory, & Platts, 1995).
Outcome Measurement: A methodical approach to examine the extent to which a program has
accomplished its intended results (Reisman & Clegg, 1999).
Organization of the Study
This dissertation is organized into five chapters. Chapter One provides the reader with the
key concepts and terminology commonly found in a discussion about performance measurement
and mission success in NPOs. EL’s mission, goals and stakeholders, and the framework for the
project were introduced. Chapter Two provides a review of relevant literature surrounding the
scope of the study. Chapter Three details the study’s methodology, including approach to the
selection of participants, data collection and analysis. Chapter Four provides the study’s three
findings. In closing, Chapter Five provides a summary of the study in its entirety, implications
and recommendations for practice, policy, and future research.
CHAPTER TWO: REVIEW OF THE LITERATURE
Chapter Two provides a review of literature related to the key factors that were used to
examine how Exodus Learning (EL) staff members engaged in performance measurement
activities to evaluate EL’s programming. This chapter is divided into three major sections. The
first section briefly defines and discusses accountability within nonprofit organizations. The
second section identifies how Clark and Estes’ (2008) gap analysis framework was used within
this study to evaluate EL staff members’ ability to engage in performance measurement
activities. The second section also includes a discussion on learning and motivation literature as
it relates to nonprofit employees. The chapter concludes with a presentation of the conceptual
framework that informed my approach to my research design, data collection, and analysis.
Accountability in Nonprofit Organizations (NPOs)
The roles of external stakeholders, such as boards, funders, and watchdog agencies, are
paramount in ensuring that NPOs are allocating their resources, financially and
programmatically, appropriately and responsibly. External stakeholders’ calls for performance
measurement systems that evaluate mission-based programming in addition to fiscal audits
reflects the sector shift to donor control over the past 25 years (Ostrander, 2007). As NPOs have
become more accountable than ever before, two trends have emerged within the sector: (1)
greater emphasis on measurable outcomes, and (2) increased attention to capacity building
(Carman, 2008; Wing, 2004). The establishment of performance measurement systems and
capacity building within NPOs go hand in hand. In order to create such systems, the staff
members who are responsible for the daily operations of the organization must be trained and
skilled to assess and evaluate the identified programmatic activities and outputs, calling for
internal and reciprocal accountability.
Accountability, in general, involves the way NPOs and their employees handle the many
expectations generated within and outside the organization (Romzek & Dubnick, 1987). The
need for staff members to take responsibility for their actions and the potential impact
of their relationship with the organization reflects reciprocal accountability (Brinkerhoff, 2002;
Elmore, 2004). Reciprocal accountability requires that the capacity of staff members be built to
accomplish the organization’s mission and/or goals. Elmore (2000) states, “…there can be no
demand without attention to the capacity that exists to deliver them” (p. 32). Consistent with
Elmore’s (2000) explanation of reciprocal accountability, it is the responsibility of NPOs to
create opportunities for staff to learn how to complete new tasks, like performance measurement
related job duties. Reciprocal accountability creates an environment that fosters and perpetuates
transparency and shared accountability (Brinkerhoff, 2002; Elmore, 2004). Reciprocal
accountability also provides balance to the seesaw that is performance measurement. In an ideal
setting, external stakeholders on one end provide their expectations for the assessment of
programming; on the other end, NPO employees are trained and capable of performing the tasks
required to evaluate that programming; and between the two rests an internal accountability
system that creates balance and order.
Clark and Estes' (2008) Gap Analytic Conceptual Framework
According to Clark and Estes (2008), analyzing knowledge and skill, motivation, and
organizational issues determines which components need to be addressed to achieve
stakeholders’ stated organizational goals. Knowledge, motivation, and organizational (KMO)
issues have been identified by Clark and Estes (2008) as causes of performance gaps that impact
stakeholders’ ability to achieve organizational goals. This study used a modified gap analysis
framework to explore EL staff members’ perspectives regarding performance measurement.
Stakeholder Knowledge and Motivation and Organizational Influences at Exodus Learning
This study examined the staff’s knowledge and motivation and the organizational
elements that were connected to EL’s goal of implementing a performance measurement system
in order to evaluate EL’s mission success in the future. Within this chapter, a review of relevant
learning and motivation theory is presented, along with the way it applied in the particular
context of this study.
Knowledge and Skills
Stakeholder specific factors. To identify what might have shaped staff members’
motivation to engage in performance measurement related activities, it was important to know
what knowledge influences might have been connected to their ability to engage in these
activities. The purpose of this section is to review relevant literature on knowledge and
motivation influences for employees overseeing supplemental educational programs (e.g.,
afterschool programs like EL). Identifying knowledge influences of EL staff was important in
understanding how EL staff could successfully complete their stakeholder and organizational
goals.
Knowledge Assessment. The knowledge component of the gap analysis identifies
whether individuals have an awareness of the skills and knowledge base necessary to fulfill their
performance goals (Clark & Estes, 2008). Specifically, the knowledge component assesses
whether individuals have a clear comprehension of the innumerable variables needed to achieve
performance goals such as how, why, what, when, where and who needs to be involved (Clark &
Estes, 2008).
Stakeholder knowledge influences. The knowledge component of the gap analysis can
be divided into three primary types–declarative, procedural, and metacognitive (Rueda, 2011).
This section examined relevant literature related to the three types of knowledge influences
impacting EL staff.
Declarative knowledge influences. According to Rueda (2011), declarative knowledge
influences can be categorized as factual or conceptual. Factual knowledge refers to terminology
or details that must be known in order for one to effectively operate in a specific discipline or
context (Rueda, 2011). Conversely, conceptual knowledge encompasses one’s knowledge of
theories, generalizations, or principles within a specific discipline (Rueda, 2011). Factual and
conceptual knowledge align to create a solid foundation necessary for individuals to comprehend
and fulfill related performance goals.
Procedural knowledge influences. Procedural knowledge refers to the finite skills,
techniques, or methodologies necessary for accomplishing a task (Rueda, 2011). Ebrahim and
Rangan (2010) present four propositions to assist NPOs in navigating their logic models and
implementing performance measurement systems:
1. Performance in emergency and relief work can be measured in terms of inputs,
activities, and outputs.
2. Performance in service delivery work can be measured in terms of activities and
outputs.
3. Performance in service delivery work, when of large scale and scope, can be
measured in terms of outcomes and sometimes impacts.
4. Performance in advocacy and rights-based work can be measured in terms of outputs
and “influence,” an intermediary outcome. (p. 22)
In addition to the four propositions, Ebrahim and Rangan (2014) note that NPOs operate under
both aspirational and operational missions. An aspirational mission reflects what the world
should look like while an operational mission can be observed through the work that a NPO does
on a daily basis; the operational mission is not always openly stated but it provides a means for
measuring the organization’s progress towards achieving the mission (Ebrahim & Rangan,
2014).
Within performance measurement, clarifying the operational mission from the
aspirational mission is essential for NPOs. After defining each individually, NPOs are better
positioned to define their scale (the reach of operations) and scope (the range of activities created to
address the need stated within the operational mission); defining the operational mission, scale,
and scope is critical for each NPO to know what to measure (Ebrahim & Rangan, 2014). For a
NPO, the operational mission identifies the societal problem being addressed, the concept of
scale identifies the desired size of the population, and scope reflects the activities or
programming needed to address the problem (Ebrahim & Rangan, 2014).
By defining the operational mission, scope, and scale coupled with the propositions,
Ebrahim and Rangan (2014) note that NPOs will be able to create personalized, internal
performance measurement systems that address the increasing demand for accountability from
external stakeholders. Yet, Ebrahim and Rangan (2014) caution that while every NPO should
measure its activities and outputs, not all NPOs will have the ability to venture into measuring
outcomes and impacts, because outcomes and impacts are influenced by events beyond the
confines of the organization itself.
Metacognitive knowledge influences. Staff members must evaluate their strengths and
challenges related to achieving their stakeholder goals. The acknowledgement of one’s own
cognitive process is considered metacognitive knowledge (Rueda, 2011). Metacognitive
knowledge fuels one’s ability to solve problems and address contextual aspects of an activity or
problem (Rueda, 2011). Within ASPs, like EL, metacognitive knowledge aids employees in
making relevant programmatic adjustments and improvements. Self-assessments and evaluations
provide a means for ASP staff to examine their performance. Pintrich (2002) explains that
metacognitive knowledge can also be used through the modeling of strategies while training others.
For example, an ASP employee may talk aloud about her own cognitive process as she is
training staff members, which provides a helpful model for other staff members.
Motivation
General theory. Schunk, Pintrich, and Meece (2009) define motivation as “the process
whereby goal-directed activity is instigated and sustained” (p. 4). The instigation and
sustainment of motivation are influenced by internal (cognitive) and external (cultural, social,
etc.) factors (Rueda, 2011). Motivation is multi-dimensional; motivational beliefs span from
developmental to cultural to cognitive and are reflected through several different theories
(Rueda, 2011).
To understand EL staff members’ willingness to engage in performance measurement
related activities, it was important to understand how motivation might have influenced their
ability to identify the appropriate tasks and activities to initiate, persist in completing said tasks
over a period of time, and generate the mental effort needed to learn new knowledge related to
the tasks and activities. The previous indicators reflect what Schunk et al. (2009) refer to as
“motivational indices”—active choice, persistence, and effort. This section examines relevant
literature related to expectancy-value theory and self-efficacy theory, as intrinsic value, utility
value, and self-efficacy directly influence EL staff members’ ability to complete performance
measurement activities in relation to the organization’s mission.
Expectancy-value theory. According to Wigfield, Tonks, and Klauda (2009),
expectancy-value theory focuses on three broad issues, which “include how expectancies and
values develop, how they are influenced by different kinds of educational contexts, and how
culture impacts the development of expectancies and values” (p. 55). Eccles (2006) states that
the perceived value of a task is defined by four constructs: (1) the fulfillment one expects while
completing a task–intrinsic interest; (2) the degree to which completing a task is in line with
one’s self-image–attainment value; (3) the value of the task for helping one reach an immediate
or long-term external reward–utility value; and (4) the perceived cost of participating in the
activity.
Staff expectations and values. Cruz, Pérez, and Cantero (2009) found that intrinsic
motivation has a positive impact on non-profit employee performance because it is generated
internally by the individual. Subsequently, NPOs find intrinsic motivation causes employees to
efficiently fulfill the organizational goals and mission (Cruz et al., 2009). Benz (2005) notes that
employees within NPOs often have more task variety, independence, and influence on the job
than for-profit employees, which further fosters intrinsic and utility values amongst NPO
workers. The intersection between the culture of the workplace and one’s internal connection to
the organization’s mission amplifies the significance of expectancy-value theory in NPOs.
While intrinsic value is associated with improved employee performance, NPOs cannot
solely focus on recruiting intrinsically motivated employees (Benz, 2005). It is also important to
nurture intrinsic motivation in employees who may not join the organization for intrinsic value-
related reasons. Benz (2005) encourages NPOs to consistently create ways to train and assist
with motivating staff through procedures and practices that enhance intrinsic motivation and
value commitment towards the organization’s goals. Cheverton (2007) suggests that the
productivity and performance of NPO staff, leaders, and boards is increased by value
commitment, which helps to drive effectiveness and direction. Unity of purpose amongst employees through
utility and intrinsic values aid NPOs in fulfilling their mission-related objectives. Moreover,
organizations are able to make a tangible connection between staff members' tasks and their
stakeholder goals.
Self-efficacy theory. The intrinsic value associated with mission-focused work along
with the utility value one derives from fulfilling mission-related tasks attracts employees to
NPOs (Benz, 2005; Cruz et al., 2009). However, their belief that they are capable of organizing
and executing tasks at the desired level of performance is what motivates employees to stay and
learn more job-related tasks; this belief is called self-efficacy (Rueda, 2011). Self-efficacy refers
to how well one anticipates engaging in a task or activity (Rueda, 2011). For EL staff, self-
efficacy played an integral role in their motivation toward fulfilling their job-related duties.
EL staff self-efficacy. Within NPOs, a myriad of variables impact employee self-efficacy,
including performance feedback, procedural constraints, job-goal difficulty, and job-goal
specificity (Wright, 2007). Pajares (2006) found that individuals with high self-efficacy
employed more efficient self-regulatory strategies at varying levels of ability, while individuals
with low self-efficacy might have perceived tasks as more difficult than they were,
causing stress, depression, anxiety, and a limited perspective of how to resolve problems.
Self-efficacy prompts individuals to ask the question, "Can I do this task?" (Rueda, 2011).
According to Rueda (2011), individuals with higher self-efficacy have greater belief in their own
capabilities, which results in increased motivation to initiate, persist, and exert mental effort in
tasks and activities. To improve self-efficacy, Margolis and McCabe (2006) offer the following
interventions: (1) helping individuals recognize the degree to which they succeed at a task–
enactive mastery; (2) directing guidance on how to perform a task–vicarious experiences; and (3)
providing information related to the task for individuals to interpret and evaluate–verbal
persuasion.
Organization
General theory. Schein (1985) notes that an organization’s culture defines what the
organization pays attention to, what actions it takes, how it responds, and how it makes
meaning. Further, culture can be assessed through the cultural setting and cultural models within
the organization (Gallimore & Goldenberg, 2001). According to Gallimore and Goldenberg
(2001), cultural settings reflect tangible organizational aspects, like staff members and their
respective job duties. Conversely, cultural models are often invisible as they reflect how
employees perceive the environment is and should be, including cultural practices (Gallimore &
Goldenberg, 2001).
Accountability. Over the past 20 years, NPOs' growth in both significance and size has
led to increased interest in how NPOs allocate funding towards the achievement of mission
objectives (Francisco & Alves, 2012). More specifically, stakeholder expectations for NPOs, like
EL, to adopt quality systems and implement performance measurement systems to track mission-
based outcomes have increased (Cairns et al., 2005). This call informally stipulates that NPOs
adopt more businesslike processes and evaluate their financial and programmatic performance
indicators consistently.
Among NPOs, the concept of accountability is defined by the following four
components: mission fulfillment, leadership on behalf of public interest, stewardship, and quality
of services and evaluation of outcomes (Association of Fundraising Professionals, 1995). While
the notion of accountability is not new to NPOs, the sense of duty to employ a performance
measurement system that reflects all four components has escaped many NPOs, as they have a
tendency to focus solely on stewardship (e.g., fiscal accountability). NPOs struggle with
“developing surrogate quantitative measures of organizational performance . . . because [they]
frequently have goals that are amorphous and offer services that are intangible” (Forbes, 1998, p.
184). Yet, as Romzek and Dubnick (1987) point out, accountability necessitates that public
agencies and their employees manage the varied expectations created both within and outside the
organization.
A NPO’s mission statement explains what the organization does, and it has legal
implications for both staff members and the board of directors; should a NPO not perform its
mission, the Internal Revenue Service (IRS) can remove its tax-exempt status (Brinckerhoff,
2009). Subsequently, professional accountability related to NPO staff members' ability to
effectively fulfill its mission is essential for the organization to succeed (Romzek & Dubnick,
1987).
Within NPOs, Kaplan (2001) explains that “strategy and performance measurement
should focus on what output and outcomes the organization intends to achieve, not what
programs and initiatives are being implemented” (p. 358). For the cultural settings of outputs and
outcomes to be evaluated, staff members must feel safe and comfortable being held
accountable through the tracking of programmatic outcomes. Therefore, there is a nuanced
relationship between trust and professional accountability that organizations must address.
Romzek and Dubnick (1987) assert that, “professional accountability is characterized by the
placement of control over organizational activities in the hands of the employee with the
expertise or special skills to get the job done” (p. 229).
Interactive Conceptual Framework
Maxwell (2013) explains that a conceptual framework informs a study through a system
of expectations, concepts, and beliefs. The conceptual framework provides a visual
representation of the research questions being studied and will serve as a guide throughout the
study (Maxwell, 2013). Within this study, the conceptual framework has the same function. The
focus of this study was to explore how EL staff members engaged in performance measurement
activities to evaluate EL’s program objectives and investigate their understanding of
performance measurement. To evaluate how EL staff members engage in such behaviors, the
three components of the gap analytic framework were applied along with insights gained from
the proliferating body of literature on performance measurement within NPOs.
Figure 1 presents the conceptual framework utilized for this study prior to entering the field.
Figure 1. EL Staff KMO Conceptual Framework prior to Data Collection.
The initial conceptual framework was developed from general literature that explored the
knowledge, motivation, and organizational issues likely to impede EL employees’ ability to
engage in performance measurement activities. Based upon this initial conceptual framework, for
EL staff members to engage in performance measurement activities, they had to first possess
declarative and procedural skills on how to capture data related to program objectives. Further,
the staff members had to be able to evaluate their strengths and challenges related to evaluating
program objectives by way of metacognitive skills. Declarative, procedural, and metacognitive
skills would ensure that EL employees had the knowledge to complete the required tasks.
Beyond knowledge influences, EL staff members’ ability to find value in tracking the objectives
along with the belief that they were even capable of tracking program objectives was important
because self-efficacy, utility value, and intrinsic value are major motivation influences for NPO
employees. Lastly, organizational influences also played a role within this framework. In an
environment where EL employees were faced with competing responsibilities, the aforementioned
knowledge and motivation influences helped the staff members to prioritize their tasks.
Additionally, the organizational influences identified highlight the role of
accountability within EL. Overall, the cultural model alludes to EL being open to performance
measurement, yet the cultural settings indicated that EL staff members were expected to need
more knowledge and skills on how to integrate the tracking of data related to program objectives
with their existing teaching responsibilities.
Figure 2. EL Staff KMO Conceptual Framework after Data Collection
Figure 2 reflects the revised conceptual framework created at the conclusion of the study.
Using the study’s three findings, the revised conceptual framework provides a more realistic
view of performance measurement at EL when compared to the initial conceptual framework.
The initial conceptual framework cited the knowledge, motivation, and organizational issues that
were expected to impede EL employees’ ability to engage in performance measurement
activities. The revised conceptual framework draws from the study’s findings to provide a visual
guide of this study.
The study’s findings and subsequent conceptual framework aligned with nonprofit
literature. This conceptual framework begins with the organizational culture. As noted by Schein
(1984), an organization’s culture dictates how and what it prioritizes. The study’s analysis
revealed that the organizational culture within EL lent itself to an informal strategy of
recruiting and hiring from the personal and/or professional networks of EL stakeholders. This
recruitment practice was consistent with that of other NPOs (Linscott, 2011). For EL, this
practice resulted in the hiring of staff members who shared a Christian value system and service
orientation, resulting in a common interest in advancing the organization’s mission.
Despite alignment around the mission, analysis revealed that there was variation among
the three levels of EL’s organizational structure (e.g., senior level, middle level, and frontline
employees) and staff members’ roles. This variation resulted in divergent performance
measurement needs and agendas because senior level, middle level, and frontline employees
desired different data to be collected to evaluate their respective roles' efforts to advance EL's
mission. As previously stated, Rojas (2004) and Sowa, Selden, and Sandfort (2004) noted that
the independent prioritization of elements of organizational effectiveness has prevented many
NPOs from establishing performance measurement systems. Their findings proved to be fitting
for EL. At the time of the study, EL did not have a formal performance measurement system.
Therefore, the study’s findings and revised conceptual framework could be used to inform the
implementation of a system at EL in the future.
CHAPTER THREE: METHODOLOGY
The purpose of this project was to examine the way employees at EL understood and
enacted performance measurement activities in relation to the organization’s mission. The
analysis focused on the roles of EL staff members and how the variation of their roles
supported and/or impeded performance measurement activities at EL. For practical purposes,
the stakeholders focused on in this analysis were staff members within EL.
Participating Stakeholders
The study's population of focus was EL's employees: specifically, the Executive
Director, Program Director, two Program Coordinators, and Site Assistant. Because EL was a
three-site afterschool program overseen by the Executive Director, it was determined that the inclusion of
all EL employees would provide in-depth insight into the way staff understood and enacted
performance measurement within EL. These five staff members were directly responsible for
ensuring that students met their Performance Targets (PTs). As such, these five staff members
were also responsible for engaging in performance measurement activities, making them the
appropriate EL employees to interview to assess the extent to which staff members were engaging
performance measurement activities to evaluate programming at EL.
Interview Sampling Criteria and Rationale
Criterion 1. Senior level and middle level leadership of EL (Executive Director and
Program Director) responsible for overseeing the daily operations of the afterschool program and
ensuring EL’s PTs were being achieved.
Criterion 2. Frontline employees (two Program Coordinators) responsible for leading
their respective sites and ensuring EL’s PTs were being achieved.
Criterion 3. Frontline employee (one Site Assistant) supporting Program Coordinators at
their respective sites who contributed to the achievement of EL’s PTs.
Interview Sampling (Recruitment) Strategy and Rationale
Merriam and Tisdell (2016) note that purposeful sampling allows researchers to gain in-
depth understanding of a desired population. LeCompte and Schensul (2010) use the term
criterion-based selection, as purposeful sampling begins with the identification of criteria for
selecting a study's participants. Using the interview sampling criteria identified in the previous
section, a purposeful sampling strategy was employed for this study. At the time of the study, EL
had five individuals on staff, all of whom met the study's interview sampling criteria. To ensure
that the study was inclusive of all EL staff members, I sent e-mails to all five staff members
using the recruitment letter (see Appendix C) and invited them to
participate. All five EL employees responded stating their interest in participating, and interview
dates and times were determined; interviews were conducted with the EL’s Executive Director,
Program Director, two Program Coordinators, and Site Assistant.
Data Collection and Instrumentation
Qualitative research is utilized to help gain insight into how people view their
experiences, how they create their worldviews, and how they make meaning related to their
experiences (Merriam & Tisdell, 2016). Within qualitative research, the researcher is the primary
instrument of data collection and analysis, and the focus is on the process (Merriam & Tisdell,
2016). The method selected for this project was interviews. An interview “is a conversation that
has a structure and a purpose” (Brinkmann & Kvale, 2015, p. 5). The systematic activity of
interviewing provided insight into how EL staff members viewed evaluation within the
afterschool program, how they formed their perception of evaluation, and how they made
meaning of evaluation within EL.
Interviews
EL’s Executive Director, Program Director, two Program Coordinators, and Site
Assistant were asked to participate in individual, semi-structured interviews. Semi-structured
interviews were conducted because they call for specific information from all participants while
allowing for flexibility in the questions being posed (Merriam & Tisdell, 2016). Semi-structured
interviews also allowed me to respond to emergent themes or new ideas as they arose (Merriam
& Tisdell, 2016). The questions posed were informed by the study's initial conceptual
framework and drew from the knowledge, motivation, and organizational issues identified before
data collection. For example, to explore EL staff members’ perception of the organizational
influences, I posed the following question: Suppose it were my first day at EL; how would you
prioritize performance measurement in relation to the other duties that I am responsible for
completing? To gain insight on EL staff members’ motivation (e.g., self-efficacy) for completing
performance measurement activities, the following question was asked: What are your thoughts
about your ability to evaluate mission-related objectives like EL’s Performance Targets? To
further assess the organizational influences, participants were asked: What are your thoughts on
evaluating EL’s programming against its mission?
To accurately document the interviews and generate honest responses, the face-to-face
interviews were recorded and conducted in offices/rooms identified by the participants, at a time
specified by the participants. On average, the interviews lasted 1 hour. I followed a script and
facilitated the semi-structured interviews with open-ended questions in an effort to elicit the
employees’ thoughts, opinions, and comprehension of performance evaluation within EL. Each
question was open-ended and single-barreled to ensure that the questions addressed one topic at a
time. The interview questions posed were approved in advance by the Institutional Review Board
at the University of Southern California. For a complete list of the questions, refer to
Appendix A for the interview protocol.
Data Analysis
For the interviews, data analysis began during data collection. I wrote analytic memos
after each interview. I documented my thoughts, concerns, and initial conclusions about the data
in relation to the conceptual framework and research questions. Upon leaving the field, the
interviews were transcribed by an online transcription service. As I read all five transcripts, I
engaged in note-taking, questioning, coding, and interpretation. In an effort to make meaning of
the participants' responses, I listened to the interview recordings and read the transcripts multiple
times. Consistent with Corbin and Strauss’s (2008) constant comparative analysis approach, I
compared each participant to the preceding participant to identify similarities and differences
amongst the five participants.
In the first phase of analysis, I used open coding, applying a priori codes from the
conceptual framework. Some of the codes I used were K – Knowledge, M – Motivation, SE –
Self-Efficacy, V – Value, and CR – Competing Responsibility. Employing the analytic tools
described by Corbin and Strauss (2008), I asked myself questions, examined the language and
emotions expressed by the participants, and made comparisons. I did so by typing comments in
the margins of the transcripts. Making comparisons was the most useful tool during this phase, as
it helped me to identify the hierarchy within EL’s organizational structure along with the duties
associated with the participant’s roles. This helped to inform my second phase of analysis. For
example, the second participant whose transcript I analyzed was a Program Coordinator who
expressed the data he desired to evaluate his effectiveness as a teacher. Alternatively, the third
participant whose transcript I analyzed was the Program Director, who shared that his role was to
identify what data EL should collect and to interpret said data. By employing the constant
comparative analysis approach, it became evident that participants' roles created conflicting data
collection requests, which was consistent with the cultural setting influence component of my
initial conceptual framework.
As such, a second phase of analysis was conducted where empirical and a priori codes
were aggregated into analytic/axial codes. Some of the codes I used were SH–Selecting for Hire,
FB–Christian/Faith-based, SO–Service Orientation, VR–View of Role, SP–Track Student
Progress, and P–Previous Experience with NPOs/ASPs. In the third phase of data analysis, I
identified pattern codes and themes that emerged in relation to the conceptual framework and
research questions. Upon coding all five transcripts through the three phases described, I drafted the
findings for the study. During this final stage, I wrote analytic memos for each transcript that
included my research question and data sandwiches supporting my findings.
Credibility and Trustworthiness
Wolcott (2005) stated that credibility increases "the correspondence between research and
the real world" (p. 160). In an effort to increase the credibility and trustworthiness of this
study, the strategies of direct quotation examination and reflexivity were employed (Creswell,
2014). At the close of the study, I provided my findings to the five purposefully selected EL
employees who participated in the one-on-one interviews so that they could assess the accuracy
of the findings. While raw transcripts were not presented, the study's findings were provided to
give the Executive Director, Program Director, Program Coordinators, and Site Assistant an
opportunity to comment on the findings. The use
of direct quotations to communicate the study’s findings assisted in making the results more
tangible and real, adding to the study’s credibility (Creswell, 2014). I also turned to nonprofit
and learning literature to identify what aligned with theory and the empirical work of this study,
allowing for the triangulation of data sources (Maxwell, 2013).
Lastly, reflexivity builds a transparent narrative within a study (Creswell, 2014). During
the interviews, I openly disclosed the bias that I brought to the study, related to my educational
background, professional experience, gender, ethnicity, and previous history with EL as a
volunteer and donor, and pre-existing interactions with three of its employees. I strove to conduct
the semi-structured interviews as ‘‘conversations with a purpose’’ (Burgess, 1984, p. 102). I
found that the backgrounds of the EL staff members and my own were very similar. Therefore,
when applicable, I shared that: 1) I was a graduate of the public, historically black university that
neighbored EL's main site (four of the five participants attended this university), 2) I was a
graduate of a private, Christian university in the same city as the study's main site (one of the
five participants attended this university), 3) I volunteered at the main site during my
undergraduate matriculation, 4) I was a financial supporter, 5) I had pre-existing connections
with three of the study's participants, 6) I was also a nonprofit professional, and 7) as an African-American
woman, I could relate to many of the challenges faced by EL’s African-American participants. In
an effort to identify my biases and prevent unintentionally influencing the study’s findings, I
wrote reflective notes related to my interpretation of the data and asked myself probing
questions. Some of the questions I asked myself included: Am I staying objective, or am I leaning
into my pre-existing knowledge of EL? As a fellow nonprofit professional, are my own
expectations affecting my view of the participant’s response? Despite our previous interactions,
how does this participant’s experience help me understand their view? To answer these
questions, I reviewed my reflective notes against my comments on each participant's transcript
and revised any comments that reflected bias. I also conferred with my Faculty Advisor, Dr.
Julie Slayton, during each phase of data analysis. In addition to advising on the data analysis
process, Dr. Slayton asked probing questions to further mitigate bias so the study's findings
would not be unintentionally compromised.
Ethics
Prior to collecting data from Exodus Learning (EL) staff members, the required
documentation (e.g., recruitment materials, informed consent, and information sheet) was
submitted to the University of Southern California's Institutional Review Board (IRB). The
university's IRB team determined the present study was exempt from further review and
provided final approval. The study's approach created minimal risk for the participants, and any
risks it did create were offset by benefits to the participants or the general public.
The participation of EL staff was completely voluntary. There was no consequence for
declining to participate, and the researcher did not disclose who did and did not participate in an
effort to protect EL employees from any harm.
Each participant was provided with a written informed consent form disclosing the potential
risks and benefits of the study.
The study information sheet also indicated that any identifiable information obtained in
connection with this study would remain confidential. The audio-tapes were destroyed once they
were transcribed. Participants had the right to review and edit the transcripts. Additionally, the
researcher, third-party transcription company, and the University of Southern California’s
Human Subjects Protection Program (HSPP) had permission to access the data. The HSPP
reviews and monitors research studies to protect the rights and welfare of research subjects.
I facilitated the one-on-one interviews as a doctoral candidate who had not worked with
EL staff members at any point during their employment. Participants could have chosen to end
their participation in the study at any time without penalty. As noted above, any identifiable
information obtained in connection with this study remained confidential: participant responses
were coded with pseudonyms, maintained separately on an audio recorder, and then transcribed,
and the audio recordings were destroyed once transcribed.
While I never worked directly with the EL staff members who participated in the study, I
had direct connections with EL and three of its staff members. During my undergraduate
matriculation, I volunteered at EL to meet a university service-learning requirement. Later, while
pursuing my Master’s degree, I interviewed the past Executive Director on EL’s fundraising
activities to fulfill my graduate coursework. Prior to conducting the study, I identified that I was
previously acquainted with the current Executive Director, Program Director, and one Program
Coordinator as we were members of the same academic program during our undergraduate
matriculation, and I disclosed this relationship to those who participated.
As a former EL volunteer and current financial supporter, I was interested in assisting EL
in advancing its stated mission. Recognizing that I had a working knowledge of how the
afterschool program operated, I committed to relying on the programmatic materials provided by
EL and the responses gained from staff members during data collection, analysis, and reporting
activities, and to not allowing my existing knowledge of the program and my biases to influence
my approach to data collection and analysis.
Limitations and Delimitations
The limitations of the present study were as follows: (1) participants were selected from a
multi-site afterschool program serving underprivileged elementary-aged youth in a large city in
the southeastern region of the United States, (2) participants were selected based upon their roles
and predetermined criteria, (3) the participant pool was small, (4) the study occurred at a single
point in time, and (5) the findings depended on the extent to which participants were honest and
transparent in their interview responses. As such, the study was limited to the knowledge and
experiences expressed by the participants; the findings and conclusions drawn from the study
were influenced by the participants’ understanding and perceptions of performance measurement
of programming within their organization. As a qualitative study designed to gain the perceptions
of a specific group of NPO workers, the delimitations were necessary. Generalizability was not
an intended outcome for this study due to its qualitative design. However, according to Patton
(2002), the information gained from a qualitative study could yield opportunities for learning
despite its limited generalizability.
Lastly, the impetus for engaging in performance measurement activities as dictated by
EL's funders (e.g., local foundations, United Way, individual donors) presented limitations to
the processes and perceptions held by the employees who were interviewed. Performance
measurement expectations set by varying funders provided divergent points of interest and
focus, subsequently impacting the participants' knowledge and perceptions of performance
measurement of programming within EL.
Conclusion
The impetus for nonprofit organizations (NPOs) creating performance measurement
systems is rooted in stakeholders’ desire to hold NPOs accountable for allocating resources, both
financially and non-financially, in a manner that advances the organization’s stated mission
(Francisco & Alves, 2012). Subsequently, NPOs, like EL, are working to ensure that they
have the infrastructure to support systems of performance measurement to track their
programmatic outcomes. Identifying the way EL staff members understood and enacted
performance measurement activities in relation to the organization’s mission was pivotal to
inform how EL can develop a formal performance measurement system in the future.
CHAPTER FOUR: RESULTS AND FINDINGS
The purpose of this study was to explore EL staff members' roles in performance
measurement activities using a modified gap analysis framework (Clark & Estes, 2008). Five
semi-structured 1-hour interviews were conducted at EL’s main site. A complete gap analysis
would have focused on all stakeholders; for practical purposes, the stakeholders of focus within
this project were staff members within EL.
Participating Stakeholders
Applying the interview sampling criteria identified in Chapter Three, I used a purposeful
sampling strategy for this study. At the time of the study, EL employed a total of five
staff members. All five staff members were identified as being directly responsible for advancing
the organization’s mission and engaging in performance measurement activities to evaluate
programming at EL. Therefore, the five qualitative interviews were conducted with EL’s five
staff members, which included the Executive Director, Program Director, two Program
Coordinators, and Site Assistant (see Figure 3, for organizational chart).
Figure 3. EL Organizational Chart
Findings
EL’s five-member team demonstrated unity in their goal of advancing the organization’s
mission. The data revealed that this faith-based afterschool program employed mission-driven
staff who were committed to supporting the educational, character, and social development of its
at-risk elementary-aged participants so they could ultimately become successful, contributing
members of society. EL had a lean staffing model and staff members were responsible for a
number of job-related duties, ranging from lesson-planning to fundraising. As such, the
prioritization of EL staff members’ duties varied based on their roles. This variation resulted in
EL staff members having divergent needs and perspectives on what data should be collected to
measure EL’s performance. It was evident that each respective staff member’s role in the
organization influenced his/her perspective on how the measurement of the program’s
performance should be enacted; these conflicting perspectives created misalignment within EL.
The data (both quantitative and qualitative in form) collected were often narrowly targeted at
showcasing the advancement of EL’s mission with the primary purpose of securing funds from
its stakeholders. While the collection of the aforementioned data was perceived by some EL staff
members as a necessity to highlight its missional advancement, teaching staff members
expressed a need for data to be collected that assessed their effectiveness along with their
students’ progress (e.g., academic, behavioral, character).
Therefore, the following three findings emerged: (1) Staff members were recruited and
hired based upon their anticipated alignment with the mission of EL, (2) While staff members
aligned with EL’s aspirational mission, staff members’ performance measurement agendas and
needs varied based upon their role and respective job-related duties, and (3) Given their differing
roles, the agendas and needs identified by the staff members placed them at cross-purposes
because their views of performance measurement at EL were not aligned.
Finding 1: Alignment with Aspirational Mission of EL through Recruitment, Faith, and
Service-Orientation
The five EL staff members interviewed within this study were aligned with the
aspirational mission of the organization, which was to be a Christ-centered service providing
distinctive personalized academic and life skills that develop tomorrow's leaders. Throughout
their interviews, two clear themes emerged related to EL staff's alignment with its mission: their
alignment resulted from being recruited through their personal and professional networks,
coupled with their shared service orientation and Christian value systems.
Theme 1: Recruitment. Lacking the formal recruitment strategies of for-profit
organizations, the bulk of NPOs default to recruiting and subsequently hiring from within their
professional and personal networks (Linscott, 2011). This tendency was reflected within the
recruitment narrative of EL’s five-member team, all of whom were selectively recruited by one
another or individuals with direct ties to EL. For example, Will, an EL Program Coordinator,
recounted his recruitment experience as follows:
Well, I’ve been in the nonprofit sector for going on 5 years now. I was at another faith-
based nonprofit before Exodus Learning; Philip was one of my mentors. So, after going
through a high school program with him for a couple of years, I knew that he was going
to be leaving. He was like, “Hey! I’m over here at Exodus Learning. This is what we do. I
think you’ll be a great fit.”
Here Will indicated that Philip had an established connection with him after knowing Will for
years and acting as his mentor. Will recognized that he was recruited by Philip because of their
personal and professional relationship. Given their pre-existing relationship and Philip’s
awareness of Will’s participation in NPO programming, Will shared that Philip informed him
that he believed he would “be a great fit.” On a macro level, Will’s experience reflected NPO
staff members’ tendency to recruit and hire from their personal and professional networks.
Similarly, Ashley, the EL Site Assistant, shared her recruitment experience as follows:
So, I became friends with Hilary about like 2 years ago, and she worked here and she
would tell me about it all the time. I used to volunteer at an afterschool program, and I
knew it was something that I enjoyed as far as like being there for the kids and helping
them in areas that they need help with. I like…I understand math. I understand English so
I can help them with that. And she (Hilary) just kept telling me about it and when Philip
offered me a position here there wasn’t really anything stopping me from working here.
So yeah, I think my main motivation is just being there for the kids and helping them in
the areas that I did and was fortunate enough to have people help me.
Much like Will, Ashley was recruited from her personal network. Here Ashley demonstrated that
her friend Hilary saw that Ashley was someone who could contribute to EL based upon Hilary’s
awareness of Ashley’s experience at another ASP. Subsequently, Ashley explained that Hilary
was persistent in discussing EL with her. Ashley was also interested in working at EL because
she connected with its mission of assisting the ASPs’ youth in the areas of English and
mathematics. Ashley’s friendship with Hilary, previous experience with an ASP, and desire to
serve youth led to the onboarding of another EL staff member aligned with EL’s aspirational
mission.
The other three staff members' recruitment stories also included their respective EL
connections acknowledging that he/she would be a "fit" for the organization. On a
micro level, Will and Ashley's experiences illustrated that within the organizational culture of
EL there was a propensity to recruit and hire individuals who shared the same types of
experiences and/or values as those who recruited them, and those experiences and values aligned
with EL's mission.
Theme 2: Faith and Service Orientation. In addition to being recruited through
personal/professional relationships, the five-member team shared a belief in God, a desire to
impart Christian values to the program's participants, and a passion for serving their
local community. For example, EL Program Coordinator Hilary shared her philosophy on
integrating faith-based lessons with academic enrichment: “I enjoy teaching the kids like basic
principles of Christ because that’s what leads to success.” Here Hilary indicated that she gained
fulfillment from sharing biblically based principles with EL's participants, as she credited those
principles with helping students become accomplished. This finding was consistent with the
nonprofit literature that notes NPO employees are attracted to mission-focused work because
they are able to derive both intrinsic and utility value from their mission-related tasks (Benz,
2005; Cruz et al., 2009).
Much like Hilary, the other four EL staff members connected their personal value
systems to the aspirational mission of EL. For example, when Site Assistant Ashley was asked
what inspired her most about the work that EL does, she said:
I think I like that it’s Christ-centered. So, even though, there are a lot of afterschool
programs out there. Like…this one specifically focuses on the character of the kids as
well. Not just academics or having no place for them to be. Like…specifically provides a
place for them to learn about God and to learn about how to treat others and how to be a
leader. So, I really like that about Exodus Learning.
Here Ashley expressed that she appreciated the fact that EL was a faith-based ASP. Ashley
explained that its use of Christian principles allowed EL to also incorporate lessons on character
and leadership for its participants. Overall, EL staff members identified that the mission of EL
aligned with their personal value systems (e.g., Christian faith, service to their local
communities), resulting in a strong sense of buy-in amongst the team in their ability to advance
the organization’s aspirational mission. Furthermore, the triad of pre-existing relationships, faith,
and service-orientation led to a surface-level agreement amongst EL staff members as they were
aligned with EL’s mission.
Finding 2: Varied Positions Resulting in Competing Agendas and Needs
As mentioned in Chapter Two, Ebrahim and Rangan (2014) noted that an organization's
operational mission is often not overtly stated; yet, it enables an organization to
measure its progress in achieving its mission. Although EL staff members aligned with the
aspirational mission, variation in EL staff members’ roles and subsequent execution of the
operational mission became apparent as staff members described their views of their respective
roles and related job duties in relation to performance measurement activities. Differences
between the senior level, middle level, and frontline employees (see Figure 3 above, for
organizational chart) also emerged as each staff member described his/her role and/or view of
what data should be collected to measure EL’s performance. The data revealed that EL staff
members’ roles within the organization ultimately dictated how they worked to advance the
operational mission. For example, at the senior level, EL Executive Director Philip described his
role as follows:
And so, my priority immediately goes to donors and supporters financially. I would like
to think of students as EL’s number one customer, but as the Executive Director donors
become my number one customer. …and those are the people I have to answer to because
we are a grants-based organization for the most part. We have a couple of private donors,
but outside of that we rely on, you know, the support of others to fund our mission and to
keep our lights on. So, for that reason I’m always trying to stay abreast or keep them
abreast and stay abreast with what they have going on, and comments, suggestions
recommendations, scheduling site visits, knowing when they’re going to come out or
when they want to bring somebody. So, I am always calling. I’m always showing up at
their offices. I’m always out for coffee, and that’s every day. That’s even what I am
doing today.
Here Philip openly acknowledged that his position as EL Executive Director predetermined his
priorities. He candidly noted his personal desire to consider EL’s participants as EL’s number
one customer, but he had a keen awareness that funders had to be his top priority because the
monetary donations that EL funders provided made it possible for EL to serve its students. As
such, Philip’s interpretation of his role in relation to EL’s operational mission placed funders and
their needs at the top of his list in relation to performance measurement activities.
Conversely, when asked how he would frame performance measurement at EL to a new
hire, Program Director Carlton, who worked at the middle level, stated:
I would only be framing it from the mindset of how the students are, because we haven’t
really talked at all about like where is the organization providing the services that we say
we’re providing in a way that is meaningful. But then, even then it’s still tied. So, really
what are we doing? We really just want to grow kids in reading. That is how we do that
and how we identify and how to do that I guess is really how, what performance
measurement would be like—how we facilitate growth in kids. So, I would describe that
to someone new on staff.
In his role as Program Director, Carlton shared that he reported to the Executive Director Philip
and supervised the Program Coordinators Will and Hilary. Despite his direct connection to
Philip, the routes by which Carlton and Philip worked to advance EL’s aspirational mission were
drastically different. Most notably, Carlton indicated that EL staff members had not clarified its
operational mission in his statement that “we haven’t really talked at all about like where is the
organization providing the services that we say we’re providing in a way that is meaningful.”
Based on this lack of clarity, Carlton’s interpretation of his role translated to students’ progress
in literacy being at the top of his priority list for measuring EL’s performance.
As frontline employees with the most direct interactions with both students and parents,
the EL teaching staff (comprised of Program Coordinators Will and Hilary and Site Assistant
Ashley) had a shared interpretation of their roles and how to advance EL’s operational mission.
Yet, their perspective was not aligned with that of Philip or Carlton, who worked at the senior and middle
levels, respectively. For example, Program Coordinator Will shared his perspective on the
programmatic data requested and assessed by Philip and Carlton as follows:
I will say on the business side and organizational sense those numbers are important for
dollars of course. People want to know where their money is going to and if their money
is being used correctly. On a more practical side, for me, I just want to make sure that my
teaching isn’t in vain. In the sense that, I think that I’m doing something good by my
students if they are retaining. That’s what’s more important to me. I don’t want to waste
my time or their time. I’m telling you (students) that you should be here learning work.
I’m really just talking and throwing information at you, which is how some of the kids
perceive their teachers at school. That information is just thrown at them and is not put in
a more practical sense. So, I try to be intentional on making sure that I know they receive
information and knowing how they retain information and then to know what ways they
can best regurgitate that information.
Although Will acknowledged the significance of the data requested by Philip and Carlton, Will
emphasized a desire to have metrics that illustrated his effectiveness as a teacher along with
overall student progress. Given their teaching capacities at EL, Will, Hilary, and Ashley’s
advancement of the operational mission was primarily fueled by serving the program’s
participants through academic enrichment. Here Site Assistant Ashley shared her stance on EL
evaluating its mission success as follows:
I think we should just to make sure that we’re on track because I think when you get
down to the day to day things it’s easy to forget why you’re there when you’re trying to
get kids to learn how to work. It’s easy to forget the overall mission of why we’re doing
this. And the fact that we’re trying to build them up as leaders and build up the
community. So, I think it is important to just remember.
Much like the other teaching staff (Will and Hilary), Ashley asserted that she wanted to maintain
a connection to EL’s aspirational mission and not become lost in the mundane details of her role.
Given that Ashley, Will, and Hilary worked so closely with the population that EL served, their
roles led them to identify teacher effectiveness coupled with student progress at the top of their
list for evaluating performance at EL.
Each EL staff member derived his/her respective means for executing EL’s operational
mission based on his/her role. As such, three differing viewpoints emerged on the top priority
for evaluating performance at EL. The priorities were identified as
follows: 1) Executive Director Philip stated funders and their needs, 2) Program Director Carlton
identified students’ growth in literacy, and 3) EL teaching staff (comprised of Program
Coordinators Will and Hilary and Site Assistant Ashley) shared teacher effectiveness and overall
student progress.
Finding 3: EL Staff at Cross-purposes due to Competing Agendas and Needs
The divergent agendas and needs of EL’s senior level, middle level, and frontline
employees placed them at cross-purposes. As stated in the previous section, three competing
priorities were identified among the Executive Director, Program Director, and EL teaching
staff, reflecting a lack of alignment in performance measurement at EL. The three competing
priorities appeared to result in the collection of, or the desire to collect, disconnected data. While
all of the data identified as priorities by EL staff did function as an outgrowth of EL's mission, no
deliberate connections were made between the three priorities to intersect EL staff members'
roles and related needs.
When asked what she viewed as disadvantages to measuring EL’s performance against
mission-related objectives, Program Coordinator Hilary said, “I think the quantification aspect is
the struggle and that is difficult to do or in that you think if you had to quantify everything it
would limit what services you provide…limit service supervision.” Here Hilary also alluded to a
potential organizational fear of needing to “quantify everything” and subsequent concern with
having to limit services rendered to ensure EL had the capacity to measure everything. Hilary’s
response on the struggle of what to quantify coupled with the following statement from Program
Coordinator Will highlighted that staff members were at cross-purposes due to their differing roles
within EL.
I like to see the tangible improvement. I’m not big on like computer numbers because I’m
always thinking of the…for lack of better words…the interferences of like sitting at a
computer and you know in testing…you know, “I’ve (student) been at school all day;
I’ve been sitting there being quiet. Now, I have to sit at a computer, and I have to keep
clicking.” You know…all those things I try to consider those. Like going back to what I
was saying–I like to see the tangible. What I recently started doing in the last couple of
weeks when our younger kids come to sign themselves in and I ask you (student) a
question. And so, I know that Hilary has been doing sight words with you. Now, I want to
know if that is going from Miss Hilary just taught it and I can retain it for 2 minutes, or if
I retained it for the next day.
In the above statement, Will expressed that he was more interested in witnessing students’
demonstration of retention as opposed to relying upon “computer numbers,” which referred to
the i-Ready lessons and end-of-week assessments reviewed by Carlton. In an effort to get what he
described as a record of “tangible improvement,” Will shared one of his methods for measuring
the program’s performance was casual dialogue with students about the lesson from the previous
day. Will’s rationale for this method was his awareness of the attention span and socioemotional
needs of an elementary-aged student being expected to focus on academic enrichment materials
on a computer after completing a school day. From his capacity as a member of the EL teaching
staff, Will gave voice to his concerns related to leaning into the data analyzed by Carlton to make
programmatic adjustments and measure student progress.
When Program Director Carlton was asked his thoughts on evaluating EL’s programming
against its stated mission, he stated:
Because your mission is what you’re saying like you’re doing. And so, if you don’t have
a way of evaluating like, “Hey! Are we actually doing what we said were going to do?”
Then, you might be doing…yeah, you might be doing good work, but you’re not doing
actually what you said you were going to do. And so, like funds don’t usually come in for
you.
Carlton recognized the significance of evaluating EL against its mission as a function of
accountability–socially and fiscally. However, Carlton’s perspective on EL evaluating
performance against its mission rested in his understanding of his role as Program Director; he
shared:
Who analyzes the data is me. I…the Program Director analyzes the data and sets the
curriculum for the Coordinators along with like new goals for the week. So, I notice that
like they (students) didn’t get this lesson based off of either their end of week
assessments or how they’re doing as a group in their i-Ready lessons, and then that’s
what kind of dictates, “Oh…OK we need to teach this in 2 weeks, or we need to teach
this 2 weeks, or we need to like add this, or spiral this in, in some way somehow into
what we’re doing.” And all of it is connected. So, it is kind of easy to spiral in and then
do like a review or to figure out how to connect it, but Miss Hilary usually figures it out
really succinctly.
Here Carlton was very explicit about his role as Program Director and how it interconnected to
both students and Program Coordinators. He expressed that by making meaning of the data
collected from students he created the ASPs’ weekly goals and curriculum. Based on Carlton’s
summation of his role, it appeared that he and the teaching staff were aligned in performance
measurement at EL. Yet, Hilary and Will's responses made it clear that while they understood the
need for "computer numbers" and other metrics derived by Philip and Carlton, they desired
alternative data to be collected.
All five members of the EL team desired to advance EL's mission. Their routes for doing
so varied greatly and were dictated by their roles within the organization. Through the
aforementioned responses, it became evident that there was tension among the three layers of
EL's organizational chart. The tension first appeared in the second finding, as evidenced by the
three distinct priorities identified. Upon further exploring EL staff members' perspectives on
evaluating EL's programmatic performance against its mission, the misalignment between the
three layers of EL's organizational structure reflected that the staff were at cross-purposes.
Executive Director Phil focused on the collection of data to inform stakeholders (e.g., grantors,
donors, supporters) of mission advancement. Program Director Carlton made meaning of weekly
assessments and i-Ready lessons to inform the ASP’s curriculum. The teaching staff created
their own methods of data collection to assess their effectiveness and evaluate overall student
retention and progress. All in all, the three layers of employees did not have a formal means of
streamlining their data needs and unifying their efforts to advance the mission of EL.
Conclusion
As mentioned in chapter two, an organization’s culture determines what the organization
pays attention to, what actions it takes, how it responds, and how it makes meaning (Schein,
1985). Through the findings of this study, EL's organizational culture emerged as one with
general alignment among its employees because of its selective hiring practices. EL staff
members were selectively recruited by individuals working for or connected to EL for their
service orientation, as evidenced by previous experience with ASPs and/or NPOs, connection to
the Christian faith, and the perception that they would connect to EL's mission and programmatic
services. As individuals with shared value systems, EL staff members articulated both EL's
aspirational mission and the need to evaluate its performance against that mission, demonstrating
their alignment. Despite alignment with the mission, the study's analysis revealed that the
organizational values shared by EL staff did not perpetuate a data-driven culture. Upon closer
examination of each layer of EL's organizational structure, it became apparent that EL's
organizational culture was guided by staff members' respective roles and related job duties.
When further questioned about their perspectives on measuring EL's performance, the
means by which staff members believed programmatic performance should be assessed also
derived from their roles within the organization. As such, what was identified as a
priority for data collection differed with each layer of the organization: 1) Executive Director
Philip stated funders and their needs, 2) Program Director Carlton identified students' growth in
literacy, and 3) EL teaching staff (comprised of Program Coordinators Will and Hilary and Site
Assistant Ashley) shared teacher effectiveness and overall student progress. This lack of
congruence placed EL staff members at cross-purposes. While all three priorities noted were an
outflow of EL's mission, the desired data and methods of collection differed. Overt tension
regarding performance measurement practices among EL staff members was not identified
within this study. However, upon closer examination of EL staff members' responses, covert
tension emerged between the three layers of the organizational structure by virtue of their highly
compartmentalized roles and related evaluation worldviews. The implications and
recommendations for these findings will be explored in the following chapter.
CHAPTER FIVE: IMPLICATIONS AND RECOMMENDATIONS FOR PRACTICE,
POLICY, AND RESEARCH
The problem of practice addressed in this study was the growing expectation for NPOs to
establish performance measurement systems to evaluate the achievement of their mission-related
objectives. In an effort to ensure that services are being rendered effectively and funds are being
utilized appropriately, the call for NPOs to implement performance measurement systems has
increased; specifically, NPO literature has noted funders placing the establishment of
performance measurement processes as a condition of funding (Cairns et al., 2005; Carman,
2008; Carnochan, Samples, Myers, & Austin, 2014). This study sought to explore what
supported and/or impeded the enactment of performance measurement activities by nonprofit
professionals using a gap analysis framework (Clark & Estes, 2008).
The purpose of the study was to understand Exodus Learning’s (EL) staff members’ roles
in performance measurement activities. The study site was the headquarters of EL; EL was a
nonprofit, multi-site afterschool program in a large urban city in the southeastern region of the
U.S., servicing underprivileged elementary-aged youth. EL's three sites targeted students
attending inner-city schools. The students' schools resided in neighborhoods that were identified
by the United Way and the Department of Education as at high risk for academic failure due to
high crime and poverty (Exodus Foundation, 2015). EL's staff members led programming focused on
academic enrichment, character development, and parent engagement. While EL’s staff members
had a clear understanding of the organization’s purpose, EL did not have a formal performance
measurement system.
This qualitative study was based on semi-structured interviews with five EL staff
members, which included the Executive Director, Program Director, two Program Coordinators,
and Site Assistant. A purposeful sampling strategy was used because this study sought to gain
understanding from a specific stakeholder group. The participants had two to five years
of experience with NPOs and ASPs before coming to work for EL. The average length of the
five interviews conducted was one hour. At the close of data collection, inductive data analysis
was conducted, concluding with coding to identify themes. Through this analysis, three major
findings emerged and are discussed in the section below.
Summary of Findings
The three findings that emerged suggest that EL staff are unified in their desire to
advance the organization's mission; however, the distinct duties and responsibilities associated
with their respective roles put them at cross-purposes when prioritizing the data to be collected to
establish a performance measurement system. The first finding was that EL staff are aligned with
the aspirational mission through recruitment, faith, and service orientation. Two themes emerged
within this finding: (1) EL staff members were selectively recruited through their personal and
professional networks by individuals who were working for or had ties with EL, and (2) EL staff
members shared a service orientation and Christian value system. The EL staff members who
were being recruited were perceived as having value systems similar to those recruiting them,
which resulted in alignment with the advancement of EL's aspirational mission.
The second finding was that variation in EL staff members' roles resulted in the staff
members having divergent views on what data should be collected to evaluate EL's performance.
Furthermore, differing views amongst the senior level, middle level, and frontline employees
emerged. Ultimately, the second finding revealed that EL staff members' roles within the
organization dictated how they worked to advance the operational mission. More specifically,
each layer of EL’s organizational structure identified a distinctive set of priorities, which were:
1) the senior level employee noted funders and their needs, 2) the middle level employee
identified students’ growth in literacy, and 3) Frontline employees (e.g., teaching staff) shared
teacher effectiveness and overall student progress.
The third finding was that EL staff members are at cross-purposes due to their competing
agendas and needs, as noted by the three competing priorities identified within the second
finding. As such, EL staff members were not in alignment on performance measurement because
each layer of the organizational structure had a distinct perspective on what data needed to be
collected to advance EL's aspirational mission. Additionally, because their misalignment was
covert, no efforts had been made to reconcile EL staff members' roles and related needs with the
three priorities identified.
Overall, the findings suggest that EL staff members are guided chiefly by their roles and
related job duties. Although their selective recruitment led to alignment in the advancement of
EL's aspirational mission because of staff members' shared Christian values and service
orientation, the variation in their roles led to divergent views on what data should be collected to
measure EL's programmatic performance. The study's analysis revealed that EL staff members'
surface-level agreement with the organization's mission did not translate into the cultivation of
an organizational culture fueled by the desire to formally assess the advancement of its mission-
related programming. In the following section, implications and recommendations for practice,
policy, and further research related to these findings will be discussed.
Implications and Recommendations for Practice, Policy, and Research
Implications and Recommendations for Practice
The findings suggested that there was tension amongst the three layers of EL’s
organizational structure (e.g., senior level, middle level, and frontline employees) due to the
variation in their roles and job-related duties, resulting in competing needs and agendas for
performance measurement activities. At every level, the data desired by EL's staff members were
applicable to their roles in advancing EL's aspirational mission. However, the data each layer
desired to collect did not align with the others'. For example, at the senior level, the Executive
Director requested qualitative data and general, aggregate quantitative data to share with EL's
funders. At the middle level, the Program Director identified the use of aggregate student data on
growth in literacy to inform the ASP's curriculum and weekly lessons. EL's frontline employees
(e.g., teaching staff), meanwhile, shared that they desired data to assess their effectiveness as
teachers and overall student progress. EL staff members' inability to reconcile their divergent
needs impeded their ability to establish a formal performance measurement system. The
implications and recommendations for this issue included four main themes: (1) the need to
establish formal performance measurement practices, (2) the need to integrate the views and
experiences of all EL staff members into performance practices, (3) the need to cultivate a data-
driven culture, and (4) the need to establish a formalized training program.
Performance measurement practices. This study revealed that EL employees’
performance measurement worldviews were dictated by their roles. The data suggested that
tension arose amongst the three layers of EL’s organizational structure because of the desire for
varying data to be collected to evaluate EL’s performance. To address this tension, EL should
establish formal performance measurement practices to align staff members in their efforts to
evaluate EL’s programming. Industry experts agree that NPOs can employ effective
performance management systems using financial and nonfinancial markers (Barman, 2007;
Zimmermann & Stevens, 2006). To consistently track mission success, NPOs, like EL, must start
by creating tools to measure qualitative outcomes that are readily accessible and integrate
practical processes that support measurement of both financial and nonfinancial markers
(Salamon, Geller, & Mengel, 2010). As a means of initiating performance measurement
practices, EL could engage in benchmarking NPOs who have successfully established
performance measurement systems. Benchmarking is a process of comparison resulting in
assessment or innovation (Bender & Schuh, 2002). Engaging in the process of benchmarking
other nonprofit ASPs would provide EL staff members with a guide as they work to establish
their own performance measurement system.
Integrate views and experiences. Although the study found that EL staff members were
aligned in their goal of advancing EL’s mission because of their shared Christian value systems
and service orientation, the study revealed that the staff members were at cross-purposes due to
the variation in their roles and responsibilities. As stated in the previous theme, tension arose
amongst EL staff due to their divergent views of performance measurement at EL. In a
qualitative study of four NPOs' experiences engaging in performance measurement processes,
Carnochan et al. (2014) found that participants valued being included in the design of their
respective NPOs' performance measurement systems; participants reported a sense of ownership
and shared that fears of and resistance to learning a new system were mitigated. The integration of all
EL employees in the design phase would ensure that the views and experiences held by each role
within the organization are reflected within the newly formed performance measurement system.
Johnson and Crean (2008) assert that the NPOs who are most effective at engaging in
performance measurement are those who have integrated evaluative practices into both their
operations and programming through the inclusion of staff members and key stakeholders in
Communities of Learners (CoL). Through a CoL, all participating staff members engage in the
design and implementation of a performance measurement system, and the hierarchical structure
of an organization is no longer an impediment (Johnson & Crean, 2008). Using the CoL
approach created by the TCC Group, EL would be able to include all staff members as they
follow the seven-step evaluation process. The seven steps of the CoL approach are as follows: 1)
Identify and organize decision makers, 2) Determine participants and audience of evaluation, 3)
Develop logic model, 4) Identify evaluation questions and indicators, 5) Create evaluation
methods, tools, and tasks, 6) Collect, analyze, and interpret data, and 7) Apply evaluation results
(Johnson & Crean, 2008). Through the inclusion of staff members at every step of the CoL
approach, EL could have the opportunity to initiate a performance measurement system that is
reflective of all staff members' views and experiences. By integrating staff members' respective
views and experiences, the impediment of tension amongst the three layers of EL's hierarchical
structure could be alleviated, as their varied data collection needs could be re-prioritized
collectively.
Cultivate data-driven culture. The findings of this study identified that EL employees
are aligned with the organization's mission, which fostered a sense of unity amongst the staff.
However, mission alignment amongst staff members alone does not dictate how an
organization's values and practices will emerge, so it is the responsibility of the organization to
be intentional about the type of culture that it cultivates. For NPOs, like
EL, who are looking to establish formal performance measurement practices and/or systems to
satisfy the requests of financial stakeholders, it is imperative that they cultivate data-driven
cultures. After conducting 330 interviews with executives of North American companies about
what drives their management practices, along with a review of their annual reports, McAfee and
Brynjolfsson (2012) found that companies that defined themselves as data-driven performed
better on financial and operational measures. While the interviews conducted by McAfee and
Brynjolfsson (2012) were in the for-profit sector, the knowledge gained is still relevant; they also
learned that organizations with data-driven cultures asked themselves what they knew, not what
they thought.
On a practical level, what does it mean for an organization to cultivate a data-driven
culture? Noyce, Perda, and Traver (2000) found that in a data-driven culture "there is an
institutionalized willingness to use numbers systematically to reveal important patterns and to
answer focused questions about policy, methods, and outcomes" (p. 53). Therefore, an
organization with a data-driven culture has embedded practices, along with the receptivity of its
staff members, to engage in activities that will guide their actions beyond preconceived thoughts
and assumptions. Noyce, Perda, and Traver (2000) note that well-informed stakeholders are
necessary for the distribution of data collection tools, data retrieval, drafting reports, and decision
making. If a CoL were created at EL, this group could be utilized to execute such activities and
foster the cultivation of a culture led by data.
Training program. While the findings of this study suggest that the five EL employees
interviewed have a clear understanding of their roles and responsibilities, as evidenced by their
articulation of their job-related priorities, it is important to note that the NPO sector is faced with
a talent retention dilemma. In a study conducted by the Young Nonprofit Professionals Network,
45% of emerging NPO professionals reported that they plan to leave the nonprofit sector for their
next role (Stannard-Friel, 2007). Improved work-life balance, higher wages, professional
development opportunities, and potential for career advancement offered by positions in the
public and private sectors are variables that have appealed to exiting NPO professionals
(Linscott, 2011). Therefore, in addition to integrating present employees in the design and
implementation of a formalized performance measurement system, it is recommended that EL
also consider the establishment of a formalized training program to ensure that institutional
knowledge and organizational culture values and practices are not lost to staff turnover.
Clark and Estes (2008) note that knowledge and skill enhancements are necessary under
the following two conditions: (1) employees do not know how to accomplish their performance
goals, and (2) future challenges requiring problem solving are anticipated. With each new
EL hire, the first condition will be applicable. Clark and Estes (2008) indicate that the first
condition calls for information, job aids, or training. Training is essential because it provides
staff members with relevant information related to their roles along with the skills, knowledge,
and abilities required to effectively perform (Suazo, Martinez, & Sandoval, 2009). Further, job
aids provide employees who have completed training with reminders about how to implement
what they learned (Clark & Estes, 2008). A formalized training program and related job aids
would provide EL staff with the knowledge needed to ensure that EL will sustain its newly
established performance measurement system, even when faced with staff turnover.
Implications and Recommendations for Policy
The findings revealed that employees aligned with EL’s mission because it resonated
with their Christian value systems and service orientation. As such, there was a cultural model of
general openness to measuring EL’s outcomes. While EL staff were open to the establishment of
a performance measurement system that measured the organization’s impact, staff members’
primary focus remained on their job-related duties, ranging from fundraising to teaching and
academic enrichment responsibilities. As a result, it is recommended that better alignment
between EL staff members' goals and the organizational goal be created to promote
professional accountability within the organization. Professional accountability is inclusive of
the prioritization of goals; therefore, the time an organization sets aside for a task, or its failure to
set time aside, reflects the value the organization places on the task. For performance measurement
to be successful at EL, employees must have the ability to evaluate outcomes on a level that does
not derail them from their primary role of teaching their at-risk student population. More
specifically, under professional accountability, staff members are able to make pivotal decisions
using their expertise (Romzek & Dubnick, 1987). Ideally, the inclusion of performance
measurement would aid EL staff members in consistently making data-driven decisions and
further enhance their expertise.
Implications and Recommendations for Research
The findings from this study generated a primary recommendation for future research on
performance measurement in NPOs. The study’s data revealed impediments to EL staff
members’ ability to implement a performance measurement system. The findings suggested that
EL staff had difficulty identifying what to measure to evaluate what EL frontline employees
conceptualized as student progress, which resulted in tension between the desire for metrics on
specific student information and the need for general, aggregate data by middle and senior level
employees. Given that this study purposefully selected stakeholders within one NPO, its
generalizability is limited. Therefore, the recommendation is to conduct a comprehensive,
empirical study of NPOs, ranging in size and scope, that have successfully established
performance measurement practices to provide models and best practices to aid NPOs seeking to
create their own performance measurement systems. While there has been an increased demand
for NPOs to employ performance measurement practices, empirical research on this subject
matter is minimal (Carnochan et al., 2014; Lynch-Cerullo & Cooney, 2011).
Conclusion
This study examined how the varied roles of employees within a nonprofit ASP
supported and/or interfered with their ability to evaluate the program’s performance in relation to
the organization's mission. The implications and recommendations for practice, policy, and
research were developed using the data analysis and findings that emerged from this study. The
recommendations for practice suggest the following:
1) Establish formal performance measurement practices,
2) Integrate the views and experiences of all EL staff members into performance practices
using the Community of Learners approach,
3) Cultivate a data-driven culture, and
4) Establish a formalized training program.
The recommendation for policy suggests the following:
Create alignment between EL staff members' goals and the organizational goal to
promote professional accountability within the organization.
The recommendation for research suggests the following:
Conduct a comprehensive, empirical study of NPOs, ranging in size and scope, that
have successfully established performance measurement practices to provide models
and best practices to aid NPOs seeking to create their own performance measurement
systems.
The findings, implications, and recommendations of this evaluation study are provided to further
understanding of performance measurement within NPOs, in hopes of advancing the
limited body of empirical research on performance measurement practices in NPOs.
References
Angelou, M. (1978). Still I Rise. New York: Penguin Books Inc.
Association of Fundraising Professionals. (1995). The accountable nonprofit organization.
Retrieved from http://www.afpnet.org/Ethics/EnforcementDetail.cfm?ItemNumber=3262
Baber, W. R., Daniel, P. L., & Roberts, A. A. (2002). Compensation to managers of charitable
organizations: An empirical study of the role of accounting measures of program activities.
The Accounting Review, 77(3), 679-693.
Barman, E. (2007). What is the bottom line for nonprofit organizations? A history of
measurement in the British voluntary sector. Voluntas, 18, 101–115.
Beamon, B. M., & Balcik, B. (2008). Performance measurement in humanitarian relief chains.
International Journal of Public Sector Management, 21(1), 4-25.
Bender, B. E., & Schuh, J. H. (2002). Using benchmarking to inform practice in higher
education. Jossey-Bass.
Benz, M. (2005). Not for the profit, but for the satisfaction? Evidence on worker well-being in
nonprofit firms. Kyklos, 58(2), 155-176.
Bigwood, J. (Producer), & Williams, H. (Director). (1998). Belly [Motion picture]. United States:
Artisan Entertainment.
Brinckerhoff, P. C. (2009). Mission-based management: Leading your not-for-profit in the 21st
century (2nd ed.). New York, NY: John Wiley & Sons.
Brinkman, S., & Kvale, S. (2015). Interviews: Learning the craft of qualitative research
interviewing. Aalborg. Accessed January 24, 2017.
Burgess, R. (1984). In the field: An introduction to field research. London: George Allen &
Unwin.
Cairns, B., Harris, M., Hutchison, R., & Tricker, M. (2005). Improving performance? The
adoption and implementation of quality systems in UK nonprofits. Nonprofit
Management and Leadership, 16(2), 135-151.
Carman, J. G. (2008). Nonprofits, funders, and evaluation: Accountability in action. The
American Review of Public Administration.
Carnochan, S., Samples, M., Myers, M., & Austin, M. J. (2014). Performance measurement
challenges in nonprofit human service organizations. Nonprofit and Voluntary Sector
Quarterly, 43(6), 1014-1032.
Chen, G. (2009). Does meeting standards affect charitable giving? An empirical study of New
York metropolitan area charities. Nonprofit Management and Leadership, 19(3), 349-
365.
Cheverton, J. (2007). Holding our own: Value and performance in nonprofit organizations.
Australian Journal of Social Issues, 42(3), 427-436.
Clark, R. E., & Estes, F. (2008). Turning research into results: A guide to selecting the right
performance solutions. IAP.
Corbin, J., & Strauss, A. (2008). Chapter 4: Strategies for qualitative data analysis. Techniques
and procedures for developing grounded theory (3rd ed.) (pp. 65-86). Los Angeles:
SAGE.
Creswell, J. W. (2014). Research design: Qualitative, quantitative, and mixed methods
approaches. Sage publications.
Cruz, N. M., Pérez, V. M., & Cantero, C.T. (2009). The influence of employee motivation on
knowledge transfer. Journal of Knowledge Management, 13(6), 478-490.
DeMartinis, R. (2002). What is a Nonprofit Organization? Nonprofit and Fundraising
Resources. Retrieved from http://www.nonprofit.pro/nonprofit_organization.htm
Ebrahim, A. S., & Rangan, V. K. (2010). The limits of nonprofit impact: A contingency
framework for measuring social performance. Harvard Business School General
Management Unit Working Paper No. 10-099.
Ebrahim, A., & Rangan, V. K. (2014). What impact? California Management Review, 56(3),
118-141.
Eccles, J. (2006). Expectancy value motivational theory. Retrieved from
http://www.education.com/reference/article/expectancy-value-motivational-theory
Elmore, R. F. (2000). Building a new structure for school leadership. Albert Shanker Institute.
Elmore, R. F. (2004). School reform from the inside out: Policy, practice, and performance.
Harvard Educational Pub Group.
Exodus Foundation. (2015). Programs. Retrieved from
http://givingmatters.guidestar.org/profile/2307/exodus-foundation.aspx
Forbes, D. P. (1998). Measuring the unmeasurable: Empirical studies of nonprofit organization
effectiveness from 1977 to 1997. Nonprofit and Voluntary Sector Quarterly, 27(2), 183-
202.
Francisco, L., & Alves, M. (2012). Accounting information and performance measurement in a
nonprofit organization. Studies in Managerial and Financial Accounting, 25, 465-487.
Gallimore, R., & Goldenberg, C. (2001). Analyzing cultural models and settings to connect
minority achievement and school improvement research. Educational Psychologist,
36(1), 45-56.
Hasenwinkel, P. (2006). Identifying Enablers of Nonprofit High Performance. Executive Issues
in Nonprofits (Accenture).
Herman, R. D., & Renz, D. O. (1998). Nonprofit organizational effectiveness: Contrasts between
especially effective and less effective organizations. Nonprofit Management and
Leadership, 9(1), 23-38.
Hull, C. E., & Lio, B. H. (2006). Innovation in non-profit and for-profit organizations: Visionary,
strategic, and financial considerations. Journal of Change Management, 6(1), 53-65.
Johnson, C., & Crean, A. (2008). Effective nonprofit evaluation: Through a community of
learners. Retrieved from http://www.tccgrp.com/pdfs/per_brief_col.pdf.
Kaplan, R. S. (2001). Strategic performance measurement and management in nonprofit
organizations. Nonprofit management and Leadership, 11(3), 353-370.
Knauft, E. B., Berger, R. A., & Gray, S. T. (1991). Profiles of excellence: Achieving success in
the nonprofit sector. Jossey-Bass.
LeCompte, M. D., & Schensul, J. J. (2010). Designing and conducting ethnographic research: An
introduction (2nd ed.). Lanham and New York: AltaMira Press.
Linscott, K. G. (2011). Filling the leadership pipeline: driving forces and their effect on the next
generation of nonprofit leaders. SPNHA Review, 7(1), 4.
Lynch-Cerullo, K., & Cooney, K. (2011). Moving from outputs to outcomes: A review of the
evolution of performance measurement in the human service nonprofit
sector. Administration in Social Work, 35(4), 364-388.
Maxwell, J. A. (2013). Qualitative Research Design: An Interactive Approach. Los Angeles:
SAGE.
Margolis, H., & McCabe, P. P. (2006). Improving self-efficacy and motivation what to do, what
to say. Intervention in school and clinic, 41(4), 218-227.
McAfee, A., & Brynjolfsson, E. (2012). Big data: The Management Revolution. Harvard
Business Review, 90(10), 60-68.
Merriam, S. B., & Tisdell, E.J. (2016). Qualitative Research: A Guide to Design and
Implementation. San Francisco, CA: Jossey-Bass.
Molnár, M. (2008). The accountability paradigm: Standards of excellence: Theory and research
evidence from Hungary. Public Management Review, 10(1), 127-137.
Neely, A., Gregory, M., & Platts, K. (1995). Performance measurement system design: a
literature review and research agenda. International journal of operations & production
management, 15(4), 80-116.
Niven, P. R. (2011). Balanced scorecard: Step-by-step for government and nonprofit agencies.
John Wiley & Sons.
Noyce, P., Perda, D., & Traver, R. (2000). Creating Data-Driven Schools. Educational
Leadership, 57(5), 52-56.
Ostrander, S. A. (2007). The growth of donor control: Revisiting the social relations of
philanthropy. Nonprofit and Voluntary Sector Quarterly, 36(2), 356-372.
Patton, M. Q. (2002). Qualitative research & evaluation methods (3rd ed.). Thousand Oaks,
CA: Sage.
Pajares, F. (2006). Self-efficacy theory. Retrieved from
http://www.education.com/reference/article/self-efficacy-theory.
Pintrich, P. R. (2002). The role of metacognitive knowledge in learning, teaching, and assessing.
Theory into Practice, 41(4), 219-225.
Reisman, J., & Clegg, J. (1999). Outcomes for success 2000. In The Evaluation Forum, Seattle,
WA (p. 41).
Rojas, R. R. (2000). A review of models for measuring organizational effectiveness among for-
profit and nonprofit organizations. Nonprofit Management and Leadership, 11(1), 97-104.
Romzek, B. S., & Dubnick, M. J. (1987). Accountability in the public sector: Lessons from the
Challenger tragedy. Public Administration Review, 227-238.
Ruebottom, T. (2011). Counting social change: Outcome measures for social enterprise. Social
Enterprise Journal, 7(2), 173-182.
Rueda, R. (2011). The 3 dimensions of improving student performance: Finding the right
solutions to the right problems. New York: Teachers College Press.
Salamon, L. M., Geller, S. L., & Mengel, K. L. (2010). Nonprofits, innovation, and performance
measurement: Separating fact from fiction. Listening Post Project, 17, 1-25.
Sawhill, J. C., & Williamson, D. (2001). Mission impossible? Measuring success in nonprofit
organizations. Nonprofit Management and Leadership, 11(3), 371-386.
Schein, E. H. (1985). Defining organizational culture. Classics of Organization Theory, 3, 490-
502.
Schunk, D. H., Pintrich, P. R., & Meece, J. L. (2009). Motivation in education: Theory,
research, and applications. Upper Saddle River, NJ: Pearson/Merrill Prentice Hall.
Sowa, J. E., Selden, S. C., & Sandfort, J. R. (2004). No longer unmeasurable? A
multidimensional integrated model of nonprofit organizational effectiveness. Nonprofit
and Voluntary Sector Quarterly, 33(4), 711-728.
Stannard-Friel, J. (2007). What's driving young professionals from the nonprofit
sector? Retrieved from http://www.onphilanthropy.com.
Suazo, M. M., Martínez, P. G., & Sandoval, R. (2009). Creating psychological and legal
contracts through human resource practices: A signaling theory perspective. Human
Resource Management Review, 19(2), 154-166.
Wheeler, D., & Sillanpää, M. (1998). Including the stakeholders: the business case. Long Range
Planning, 31(2), 201-210.
Wigfield, A., Tonks, S., & Klauda, S. L. (2009). Expectancy-value theory. In Handbook of
motivation at school (pp. 55-75).
Wing, K. T. (2004). Assessing the effectiveness of capacity-building initiatives: Seven issues for
the field. Nonprofit and Voluntary Sector Quarterly, 33(1), 153-160.
Wolcott, H. F. (2005). The art of fieldwork (2nd ed.). Walnut Creek, CA: AltaMira Press.
Wright, B. E. (2007). Public service and motivation: Does mission matter? Public Administration
Review, 67(1), 54-64.
Zimmermann, J., & Stevens, B. (2006). The use of performance measurement in South Carolina
nonprofits. Nonprofit Management & Leadership, 16(3), 315–327.
APPENDIX A: Interview Protocol
Greetings!
As a doctoral student within the Rossier School of Education at the University of Southern
California, I invite you to participate in gathering data about your role in performance
measurement of programming at Exodus Learning (EL) to date. Your participation consists
of taking part in an interview. The purpose of the study is to understand your role in
performance measurement at EL to evaluate how EL is advancing its mission. You are eligible to
participate in the study as a staff member of EL. As a direct benefit of participating in the study,
you will be provided with a final copy of the interview findings. Others may benefit from the
information we learn from this study, which may help to advance the understanding
of performance measurement within the nonprofit sector.
Your participation is voluntary, and the alternative is to not participate. If you volunteer to
participate in this study, you will be asked to participate in a 1-hour audio-taped interview.
1. What motivated you to work at a nonprofit after school program like EL?
a. What is your definition of a nonprofit organization?
b. How long have you been working at EL?
c. Would you describe yourself as a mission-driven person? If so, tell me how this
prompted you to work at EL.
d. What inspired you most about the work that EL does?
2. Think about a recent day that you would describe as typical at EL. Walk me through it.
a. Describe your job-related duties.
b. How do you feel about your tasks?
c. As you reflect on this day, what is your emotional reaction?
d. If you took time to reflect on your own actions, how and when did you go about
doing so?
i. What questions did you ask yourself?
ii. What action steps did you put in place for yourself or others?
Now I want to focus specifically on the way you collect and use data.
3. The collection of relevant data/information helps many nonprofits make informed
decisions and successfully fulfill job functions, thereby advancing their missions.
a. As an EL employee, how do you define performance measurement? Data?
b. Using your definition(s), what is being collected and why?
c. Using your definition(s), who is collecting, managing, and organizing it, and why?
d. From your perspective, why, if at all, is the collection of this information important?
i. What information do you believe is most helpful for you to collect to
successfully fulfill your job-related duties?
e. How, if at all, is this information used?
i. From your perspective, how, if at all, is this information being provided?
ii. Who is responsible for providing this information?
iii. How often are you directly involved in providing such information?
iv. How does this make you feel?
f. What information do you believe is most helpful for your supervisors to have
collected to successfully fulfill their job-related duties?
i. From your perspective, how, if at all, is this information being provided?
ii. How often are you directly involved in providing such information?
iii. How does this make you feel?
g. What information do you believe is most helpful for EL's board of directors to have
collected to successfully fulfill their duties?
i. From your perspective, how, if at all, is this information being provided?
ii. Who is responsible for providing this information?
iii. How often are you directly involved in providing such information?
iv. How does this make you feel?
h. What information do you believe is most helpful for donors to have access to in order
to feel motivated to financially support EL?
i. From your perspective, how, if at all, is this information being provided?
ii. How often are you directly involved in providing such information?
iii. How does this make you feel?
Many nonprofits view performance measurement as a way to evaluate their mission-related
objectives. I want to focus specifically on performance measurement at EL.
4. Suppose it were my first day at EL; how would you describe the purpose of performance
measurement at EL?
a. How would you prioritize performance measurement in relation to the other
duties that I am responsible for completing?
b. What tasks would you teach me to complete to measure EL's mission-related
objectives?
c. How would you explain and/or train me to complete performance measurement
activities?
d. How important do you feel it is for you to complete “said tasks”?
e. How do you feel about the way performance measurement is approached at EL?
5. How does your direct supervisor support you in completing your job duties related to
performance measurement? Walk me through two examples.
a. How would you rate your supervisor’s prioritization of performance measurement
in your job-related duties?
6. What are your thoughts on evaluating EL’s programming against its mission?
a. When thinking about mission-related objectives at EL, what comes to your mind?
b. What do you view as benefits to evaluating mission-related objectives at EL?
c. What do you view as disadvantages to evaluating mission-related objectives?
d. What are your thoughts on whether or not EL evaluates its mission-related
objectives?
7. What are your thoughts about your ability to evaluate mission-related objectives like
EL’s Performance Targets?
a. To what degree do you believe you are able to evaluate mission-related
objectives like EL's Performance Targets?
8. How do you utilize the information gained from EL’s Performance Targets?
a. Walk me through two recent examples that you think demonstrate how you apply
information gained from EL’s performance targets.
b. To what degree do you believe information gained from EL's Performance
Targets is utilized …
i. by the board of directors?
ii. by the teaching staff?
iii. to receive funding?
9. What items and/or information do you think are/is important to collect to give insight
into…
a. the role of parents?
b. the significance of parents?
c. the purpose of EL?
10. How, if at all, is parent participation tracked and utilized to support students’ progress?
a. Walk me through two recent examples that you believe demonstrate how the
tracking of parent participation is utilized to support students’ progress.
b. To what degree do you believe the tracking of parent participation is utilized to support
students' progress?
11. If you wanted to know (or learn) more about measuring the performance of EL’s
programming, what would you do?
a. Describe what trainings or resources, if any, have been made available to you
on measuring EL's mission-related objectives.
b. To what degree, if at all, do you believe additional training or resources on
performance measurement would further assist you in completing your job-related
duties?
12. Think about the last time a policy or procedural change was introduced at EL.
a. How was the change communicated (e.g., e-mail, staff meeting, flyer, etc.)?
b. Who communicated the change?
c. Who was expected to put the new policy/procedure into effect?
d. How did this make you feel?
e. How, if at all, did this reflect the typical process for communicating policy or
procedural changes at EL?
APPENDIX B: Informed Consent/Information Sheet
University of Southern California
Rossier School of Education
3470 Trousdale Parkway
Los Angeles, CA 90089
INFORMATION/FACTS SHEET FOR EXEMPT NON-MEDICAL RESEARCH
Performance Measurement of Programming in Exodus Learning (EL)
You are invited to participate in a research study conducted by Fredrica Piphus Singletary,
Doctoral Candidate, under the supervision of Faculty Advisor, Julie Slayton, JD, PhD, at the
Rossier School of Education at the University of Southern California because you are a staff
member at EL. Your participation is voluntary. This document explains information about this
study. Please ask questions about anything that is unclear to you.
PURPOSE OF THE STUDY
The purpose of this study is to understand your role in performance measurement at EL so we can
evaluate how EL is advancing its mission.
PARTICIPANT INVOLVEMENT
If you agree to take part in this study, you will be asked to participate in a 1-hour audio-taped
interview. You do not have to answer any questions you do not want to; if you do not want to be
taped, handwritten notes will be taken.
PAYMENT/COMPENSATION FOR PARTICIPATION
You will receive a $5 Starbucks gift card for your time. You do not have to answer all of the
questions in order to receive the card. The card will be given to you at the close of the interview.
ALTERNATIVES TO PARTICIPATION
Your alternative is to not participate. Your relationship with Exodus Learning will
not be affected whether or not you participate in this study.
CONFIDENTIALITY
One-to-one interviews will be facilitated by a doctoral candidate who has not worked with you at
any point in your employment. Any identifiable information obtained in connection with this study
will remain confidential. Your responses will be coded with a false name (pseudonym) and
maintained separately. Recordings of the interviews will be transcribed by a professional
transcription service. The data will not be maintained by that service; it will be transcribed then
returned to me. The audio-tapes will be destroyed once they have been transcribed. You will have
the right to review and edit the transcripts.
The data, including identifiers, audio recordings, and transcriptions, will be stored on a password-
protected computer and in a shared Dropbox folder for each respondent. This is called "raw data."
The researcher and the faculty advisor will have access to the raw data in the Dropbox folder, and
if you would like to see your raw data, you will be given access to it. The raw data, including
identifiers, will be retained for future research. If you do not want your data or identifiers retained
for future studies, you should not participate.
The members of the research team and the University of Southern California’s Human Subjects
Protection Program (HSPP) may access the data. The HSPP reviews and monitors research studies
to protect the rights and welfare of research subjects.
When the results of the research are published, or discussed in conferences, no identifiable
information will be used.
INVESTIGATOR CONTACT INFORMATION
If you have any questions or concerns about the study, please contact:
Fredrica Piphus Singletary
Rossier School of Education
3470 Trousdale Parkway
Los Angeles, CA 90089
Tel: 513.435.3860
Email: piphussi@usc.edu
IRB CONTACT INFORMATION
University Park Institutional Review Board (UPIRB), 3720 South Flower Street #301, Los
Angeles, CA 90089-0702, (213) 821-5272 or upirb@usc.edu
SIGNATURE OF RESEARCH PARTICIPANT
I have read the information provided above. I have been given a chance to ask questions. My
questions have been answered to my satisfaction, and I agree to participate in this study. I have
been given a copy of this form.
Name of Participant
Signature of Participant Date
SIGNATURE OF INVESTIGATOR
I have explained the research to the participant and answered all of his/her questions. I believe
that he/she understands the information described in this document and freely consents to
participate.
Name of Person Obtaining Consent
Signature of Person Obtaining Consent Date
APPENDIX C: Recruitment Letter
Greetings!
As a doctoral student within the Rossier School of Education at the University of Southern
California, I invite you to participate in gathering data about your role in performance
measurement of programming at Exodus Learning (EL) to date. Your participation consists of
participating in an interview. The purpose of our study is to understand your role in performance
measurement at EL so we can evaluate how EL is advancing its mission. You are eligible to
participate in the study as a staff member of EL. As a direct benefit of participating in the study,
you will be provided with a final copy of the interview findings. Others may benefit from the
information we learn from this study, which may help to advance the understanding of
performance measurement within the nonprofit sector.
Your participation is voluntary, and the alternative is to not participate. If you volunteer to
participate in this study, you will be asked to participate in a 1-hour audio-taped interview. You
will receive a $5 Starbucks gift card for your time. You do not have to answer all of the questions
in order to receive the card. The card will be given to you at the close of the interview.
Abstract
Using Clarke and Estes’ (2008) Gap Analysis Framework, this study explored the way staff members at a nonprofit afterschool program understood and enacted performance measurement activities in relation to the organization’s mission. The study took place within a three-site afterschool program serving about 80 underprivileged elementary-aged youth in a large city in the southeastern region of the United States. This study sought to understand the staff members’ roles and job-related duties related to measuring the organization’s mission-related objectives. Five 60-minute one-to-one semi-structured interviews were conducted with the Executive Director, Program Director, two Program Coordinators, and Site Assistant. Analysis revealed that the five staff members interviewed were selectively hired based on their Christian value systems and service orientation, resulting in alignment with the organization’s mission.
Asset Metadata
Creator: Piphus Singletary, Fredrica (author)
Core Title: Performance measurement in nonprofits: an evaluation study
School: Rossier School of Education
Degree: Doctor of Education
Degree Program: Organizational Change and Leadership (On Line)
Publication Date: 09/21/2017
Defense Date: 09/08/2017
Publisher: University of Southern California (original), University of Southern California. Libraries (digital)
Tag: mission success, nonprofits, OAI-PMH Harvest, performance measurement
Language: English
Contributor: Electronically uploaded by the author (provenance)
Advisor: Slayton, Julie (committee chair), Green, Alan G. (committee member), Smith-Maddox, Renée (committee member)
Creator Email: piphussi@usc.edu
Permanent Link (DOI): https://doi.org/10.25549/usctheses-c40-430228
Unique identifier: UC11265280
Identifier: etd-PiphusSing-5738.pdf (filename), usctheses-c40-430228 (legacy record id)
Legacy Identifier: etd-PiphusSing-5738-0.pdf
Dmrecord: 430228
Document Type: Dissertation
Rights: Piphus Singletary, Fredrica
Type: texts
Source: University of Southern California (contributing entity), University of Southern California Dissertations and Theses (collection)
Access Conditions: The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the a...
Repository Name: University of Southern California Digital Library
Repository Location: USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA