Copyright 2024 Hilary Victoria Olson
OUTCOMES-BASED CONTRACTING THROUGH IMPACT BONDS:
TIES TO SOCIAL INNOVATION, SYSTEMS CHANGE, AND INTERNATIONAL
DEVELOPMENT
by
Hilary Victoria Olson
A Dissertation Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF PHILOSOPHY
(PUBLIC POLICY AND MANAGEMENT)
May 2024
ACKNOWLEDGEMENTS
This endeavor would not have been possible without my PhD advisor and dissertation
chair Gary Painter. Throughout my PhD program, Gary provided me with invaluable advice on
papers, presentations, and my career, on top of always having a friendly word to share. I’m also
extremely grateful to my dissertation committee member Chris Fox for allowing me to
collaborate on UK-based research, connecting me to his colleagues, and introducing me to new
methodologies. I would like to express my appreciation for the feedback and expertise provided
by my dissertation committee member Christine Beckman, as well. I’d further like to recognize
Kevin O’Leary and Chris Albertson who were co-authors alongside Gary Painter and Chris Fox
for my first article.
In addition, I would like to extend my sincere thanks to Andrew Levitt and Mila Lucik
for their help in recruiting interview participants and sharing program data for my second article.
Further, many thanks to the interview participants who generously made time to speak with me
about their experiences and insights; these conversations were truly some of the highlights of my
PhD program. I’d also like to mention the whole team at the Price Center for Social Innovation,
who were a joy to work with and who created a wonderful environment in which to grow and
learn.
Finally, I’d like to acknowledge my cohort members and PhD colleagues, with special
thanks to Alison Holt whose weekly check-ins kept me sane. I would also be remiss in not
mentioning my friends and family – especially my mother Victoria, dad Christopher, and sister
Sabrina – for their encouragement over the years. Lastly, I could not have undertaken this
journey without the love and support from my wonderful husband, Antoine.
TABLE OF CONTENTS
ACKNOWLEDGEMENTS
LIST OF TABLES
LIST OF FIGURES
ABBREVIATIONS
ABSTRACT
INTRODUCTION
    SOCIAL IMPACT BONDS
    ESSAY 1: IMPACT BONDS AND SOCIAL INNOVATION
    ESSAY 2: IMPACT BONDS AND SYSTEMS CHANGE
    ESSAY 3: IMPACT BONDS AND INTERNATIONAL DEVELOPMENT
    DISSERTATION CONTRIBUTIONS
CHAPTER 1 – ARE SOCIAL IMPACT BONDS INNOVATIVE FINANCE TOOLS OR DO THEY HELP SUPPORT SOCIAL INNOVATION PROCESSES?
    ABSTRACT
    INTRODUCTION
    THEORY
        Social Innovation
    RESEARCH QUESTIONS
    DATA AND METHODS
        Investor Classifications
        Program Classifications
    RESULTS
        Private Sector Social Financing
        Piloting and Scaling Social Innovation
    DISCUSSION
    CONCLUSION
CHAPTER 2 – CAN SOCIAL OUTCOMES CONTRACTS SPUR SYSTEMS CHANGE? EXPLORING ASSET-BASED WORKING, INNOVATION, AND COLLABORATION
    ABSTRACT
    INTRODUCTION
    CONCEPTUAL FRAMEWORK
        Asset-Based Working
        Innovation
        Collaboration
    RESEARCH QUESTIONS
    DATA AND METHODS
        GM Homes
        Data
    RESULTS
        Systems-Level Effects
        Asset-Based Working
        Innovation
        Collaboration
    DISCUSSION
        SOC Effects
    CONCLUSION
CHAPTER 3 – RESULTS-BASED FUNDING VIA DEVELOPMENT IMPACT BONDS: STAKEHOLDER PERCEPTIONS ON BENEFITS AND COSTS
    ABSTRACT
    INTRODUCTION
    DEVELOPMENT IMPACT BONDS
        Motivations and Expected Benefits
        Potential Costs and Challenges
    CONCEPTUAL FRAMING OF SIBS
        SIB Stakeholder Expectations
        SIB Development Barriers
    RESEARCH QUESTIONS
    DATA AND METHODS
        Sample
        Program Contexts
    RESULTS
        Motivations and Model Benefits
        Costs and Challenges
    DISCUSSION
        Broader Social Change Objectives
        DIB Market Development Barriers
    CONCLUSION
CONCLUSION
    MAIN TAKE-AWAYS
        Key Findings
        Implications For Practice
        Opportunities For Research
    CROSS-CUTTING THEMES
        Data-Driven Delivery
        Scaled Impact and Systems Change
        Market Growth
        Social Innovation
    FINAL THOUGHTS
REFERENCES
APPENDICES
    APPENDIX 1.1: FURTHER BACKGROUND ON PILOT VERSUS SCALED CLASSIFICATION PROCESS
    APPENDIX 1.2: UNIQUE INVESTORS FOR UK AND US SIBS
    APPENDIX 2.1: GM HOMES CASE TIMELINE
    APPENDIX 2.2: INTERVIEW PROTOCOLS
    APPENDIX 2.3: INTERVIEW SUMMARY TABLE
    APPENDIX 2.4: SUMMARY OF PROCESS TRACING TESTS AND EVIDENCE
    APPENDIX 3.1: DIB INTERVIEW PROTOCOL
    APPENDIX 3.2: EXAMPLES OF UNIQUE DIB CONTEXTS
    APPENDIX 3.3: POSSIBLE DIB THEORY OF CHANGE
LIST OF TABLES
Table 1.1: Summary Statistics for Key Program Criteria
Table 1.2: Summary Statistics for Program Stakeholders
Table 1.3: Trends in Potential Program Criteria for UK and US Programs
Table 1.4: Unique UK Investors
Table 1.5: Unique US Investors
Table 2.1: Bank Robbery Analogy for Process Tracing Tests
Table 2.2: Process Tracing Test Approach
Table 2.3: Asset-Based Working Hypothesis Tests
Table 2.4: Innovation Hypothesis Tests
Table 2.5: Collaboration Hypothesis Tests
Table 2.6: Summary of Interviews
Table 2.7: Summary of Asset-Based Working Hypothesis Tests and Evidence
Table 2.8: Summary of Innovation Hypothesis Tests and Evidence
Table 2.9: Summary of Collaboration Hypothesis Tests and Evidence
Table 3.1: Existing Themes from the DIB Literature
Table 3.2: Interview Sample
Table 3.3: Motivating Contextual Factors and Model Inputs
Table 3.4: Beneficial Model Outputs, Outcomes, and Impacts
Table 3.5: Motivations and Benefits Categorizations
Table 3.6: Start-Up Costs and Management Challenges
Table 3.7: Costs and Challenges Categorizations
Table 3.7: Quotes on DIBs Being Applied in New Contexts
LIST OF FIGURES
Figure 0.1: Typical SIB Structure
Figure 1.1: Percentage of Programs by Issue Area
Figure 1.2: The Process of Social Innovation
Figure 1.3: Percentage of Programs with at Least One Investor from the Following Sectors
Figure 1.4: Percentage of Investors from the Following Sectors Across All Programs
Figure 1.5: Programs Classified as Pilot vs Scale
Figure 1.6: Programs Classified as Feasibility vs Effectiveness
Figure 2.1: Potential Asset-Based Working Mechanism
Figure 2.2: Potential Innovation Mechanism
Figure 2.3: Potential Collaboration Mechanism
Figure 3.1: Possible DIB Theory of Change
Figure 3.2: Expanded Logic Model
ABBREVIATIONS
CA Combined Authority
DIB Development Impact Bond
ETE Employment, Education, and Training
GM Greater Manchester
HICs High-Income Countries
LMICs Low- and Middle-Income Countries
NGO Non-Governmental Organization
NPG New Public Governance
NPM New Public Management
OBC Outcomes-Based Contract
PBR Payment-by-Results
PFS Pay-for-Success
RBA Results-Based Aid
RBF Results-Based Financing
RCT Randomized Control Trial
RSL Registered Social Landlord
SDG Sustainable Development Goal
SIB Social Impact Bond
SOC Social Outcomes Contract
UK United Kingdom
US United States
ABSTRACT
This dissertation offers an in-depth look at Social Impact Bonds (SIBs), a form of
outcome-based contracting (OBC), from a variety of perspectives. First established in the UK in
2010, SIBs have generated considerable interest among policymakers and scholars for their potential to harness greater amounts of capital for greater social impact and to foster innovation
through a focus on outcomes. While the body of literature on SIBs is growing quickly, it remains
a nascent field of study with many opportunities for development. This dissertation aims to
address some of the gaps in the literature to generate both more academic knowledge about these innovative tools and meaningful insights for practice. To do so, this dissertation is broken
into three separate articles which offer a more nuanced view of SIBs as they relate to social
innovation, systems change, and international development.
The first article, “Are Social Impact Bonds an Innovation in Finance or Do They Help
Finance Social Innovation?,” evaluates SIBs using social innovation theory. As the literature has
tended to situate SIBs within public management theories, this article offers novel evidence by
analyzing SIBs through the theory of social innovation. The study’s two main research questions
are: 1) Are SIBs social finance tools being used to bring in new sources of capital to fund service
delivery? And 2) Are SIBs being used to finance specific stages in other social innovation
processes? The research analyzes both qualitative and quantitative data on SIBs in the UK and
US – leaders in SIB implementation.
The second article, “Can Social Outcomes Contracts Contribute to Systems Change?
Exploring Asset-Based Working, Innovation, and Collaboration,” investigates the ability of Social Outcomes Contracts (SOCs), a broader term encompassing SIBs, to affect their surrounding service ecosystems. Although many practitioners and academics have lauded the potential benefits of SIBs for service delivery, including catalyzing more transformative change, others have criticized SIBs for focusing on individual rather than systemic causes of social challenges. To address this debate, the study evaluates the case of the Greater
Manchester Homes Partnership (GM Homes), asking: What impacts did GM Homes have on its
wider service delivery system? Through which mechanism(s) did it generate these ecosystem
effects: asset-based working, innovation, and/or collaboration? Taking a novel process tracing
approach, the analysis draws on evidence from interviews, primary documents, user data, and
secondary literature.
The third article, “Results-Based Funding Via Development Impact Bonds: Stakeholder
Perceptions on Benefits and Costs,” focuses on the application of impact bonds for international
development purposes. As most research to date has focused on SIBs in high-income countries
(HICs), this study contributes to the much newer and smaller set of literature on Development
Impact Bonds (DIBs) by examining stakeholder motivations, expectations, and experiences in
low- and middle-income countries (LMICs). The study’s two primary research questions are:
What are the contextual factors and anticipated model benefits that are motivating stakeholders
to use DIBs? What costs and challenges do stakeholders experience in implementing DIBs? The
study answers these questions through interviews with DIB stakeholders, drawing from a sample
of 7 recent projects.
INTRODUCTION
Following the global financial crisis in 2008, governments in many countries began
implementing austerity measures and making cuts to social services (Williams 2020).
Meanwhile, social and environmental problems have continued to rise (Fry 2019; Broccardo et
al. 2020). Reflecting the severity of these issues are the Sustainable Development Goals (SDGs)
adopted by the United Nations in 2015 (Halkos and Gkampoura 2021). The SDGs seek to “tackle
global social, economic and environmental problems and promote sustainable development in
both developed and developing nations” by 2030 (Halkos and Gkampoura 2021: 119).
Importantly, such efforts aim to improve the well-being of current generations without
compromising future generations (Halkos and Gkampoura 2021; Rizzello and Kabli 2020). To
meet these pressing social objectives while facing severe domestic budget and international aid
constraints, policymakers have been increasingly turning towards innovative financial models
(Fry 2019; Rizzello and Kabli 2020). One model which has gained considerable attention is
outcome-based contracting (OBC).
OBC is an arrangement in which some portion of service provider payment is made
contingent on the achievement of specific social outcomes (Farr 2016). These outcomes are an
intervention’s effects on the lives of services users as opposed to an intervention’s activities or
outputs (Bovaird and Davies 2011). With an increased focus on measurable outcomes,
proponents claim that OBC can enable cost savings, improve accountability, encourage
innovation, and transfer financial risk away from commissioners (Farr 2016). For these reasons,
OBC is often presented as supporting public service reform efforts (Farr 2016; Albertson et al.
2018; Fox and Morris 2021). To trigger payments, OBC inherently involves defining outcomes,
choosing metrics, setting success targets, and selecting appropriate evaluation methods (De Pieri et al. 2022). Thus, OBC also aligns with the broader evidence-based policy movement, as
assessments help demonstrate “the effectiveness of interventions on which public money is
spent” (De Pieri et al. 2022: 1).
Given the complexity of structuring and implementing such contracts, OBC can produce
a number of transaction costs as well, for instance due to administrative issues around collecting
evidence and methodological issues around attributing outcomes within complex service systems
(De Pieri et al. 2022; Farr 2016). Incurring these additional costs is often justified by the belief
that OBC produces better social outcomes and higher quality services (De Pieri et al. 2022).
However, the effectiveness of OBC has not yet been widely evidenced (Tomkinson 2016;
De Pieri et al. 2022). Fox and Morris (2021: 73) argue that “if we are to make better judgements
about whether and when outcome-based commissioning models are appropriate the evidence
base must be developed.” Farr (2016) similarly calls for greater research on OBC across different
service areas, policy domains, and models. To help build up the evidence on OBC, this
dissertation examines an increasingly popular form of OBC – Social Impact Bonds (SIBs) – as
applied within a variety of contexts.
SOCIAL IMPACT BONDS
SIBs are a type of OBC for “leveraging private investment to cover the upfront cost of
social interventions, and using government funds to pay for the social outcomes those
interventions generate” (Fitzgerald et al. 2020: 86). These contracts go by many different names
– Payment by Results in the UK, Pay for Success in the US, Social Benefit Bonds in Australia,
and Social Outcome Contracts (SOCs) in Europe (Albertson et al. 2018; Fitzgerald et al. 2020).
Contrary to their name, SIBs are not technically bonds as they are not tradeable instruments
(Rizzello and Kabli 2020). Instead, they mix elements of both debt and equity logics. While debt
is a safer form of investment which stipulates a fixed interest rate and repayment schedule,
equity is a riskier form of investment where interest rates and repayment depend on performance
(Gustafsson-Wright et al. 2015). SIBs combine these two logics, with variable investor returns
dependent on outcomes, using incentives to drive greater social impact (Andreu 2018; Alenda-Demoutiez 2020; Gustafsson-Wright et al. 2015). By making investor (re)payment conditional
upon program success, SIBs are often presented as a way to align stakeholder interests through
incentives which create win-win situations (Alenda-Demoutiez 2020; Ormiston et al. 2020). For
instance, by offering “better solutions for… citizens, less risk for the government and greater
involvement by the private sector in working with community organizations to better solve
public challenges” (Ormiston et al. 2020: 242).
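To make this hybrid payment logic concrete, the minimal sketch below (in Python, with entirely hypothetical figures and a simple linear schedule that is not drawn from any actual SIB contract) shows how an investor’s repayment might be computed as a function of outcome achievement, blending a debt-like repayment of principal with an equity-like, performance-dependent return.

```python
def sib_repayment(principal, max_return_rate, outcomes_achieved, outcomes_target):
    """Illustrative SIB payout: repayment scales with the share of contracted
    outcomes achieved, up to a capped maximum return (hypothetical schedule)."""
    if outcomes_target <= 0:
        return 0.0
    achievement = min(outcomes_achieved / outcomes_target, 1.0)
    # Debt-like element: the principal is repaid, but only in proportion to
    # verified success, so investors rather than commissioners bear the risk.
    repaid_principal = principal * achievement
    # Equity-like element: the return varies with performance instead of
    # being a fixed coupon.
    performance_return = principal * max_return_rate * achievement
    return repaid_principal + performance_return

# Hypothetical example: a $1m investment with a 5% maximum return where
# 80 of 100 contracted outcomes are verified.
print(sib_repayment(1_000_000, 0.05, 80, 100))  # 840000.0
```

Real contracts add caps, floors, interim payments, and evaluation-dependent triggers, but the core idea is the same: the investor’s return is a function of verified outcomes rather than a fixed coupon.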
There are four major categories of SIB stakeholders: investors, outcome funders, service
providers, and intermediaries. Private sector investors provide the up-front funding to service
providers, with varying degrees of initial investment and potential interest payments tied to
program success (Williams 2020). Outcome funders, typically government entities, then repay
investors based on the level of success (Williams 2020). Service providers, often third sector
organizations such as non-profits, deliver the intervention to the target population (Gallucci et al.
2019). Finally, intermediaries assist with contract development and program design, providing
support with raising capital, negotiating rates of return, selecting outcome targets, offering
technical assistance, and managing and evaluating performance (Gruyter et al. 2020; Williams
2020). Outside of these main actors, SIBs may also include evaluators, lawyers, technical
assistance providers, and guarantors (Gustafsson-Wright et al. 2015). Figure 0.1 from
Gustafsson-Wright et al. (2015) visualizes the typical structure of a SIB.
Figure 0.1: Typical SIB Structure
According to the Government Outcomes Lab’s Impact Bond Dataset, the first SIB was
established in the UK in 2010 (Indigo 2023). By 2012, the UK had established 13 more. By
2013, the adoption of SIBs had expanded to the US (3 SIBs), Australia (2 SIBs), and Germany,
and the Netherlands (1 SIB each). In 2015, an additional 7 countries adopted SIBs, including 2
low- and middle-income countries (LMICs): Peru and India. Notably, these 2 projects took the
form of Development Impact Bonds (DIBs), a type of SIB that is “implemented in low- and
middle-income countries, where not-for-profit organization is the outcome funder, as opposed to
government” (Gustafsson-Wright et al. 2015: 4). Since that time, the number of SIBs has
continued to grow, with 275 projects launched in 36 countries (Indigo 2023). Based on the
Government Outcome Lab’s data, these SIBs have raised close to $760 million in investments
and reached over 2 million beneficiaries (Indigo 2023). Notably, DIB implementation has
remained low, representing only 21 of the total projects. With 93 SIBs, the UK has remained the
leader in impact bond implementation. The countries with the next highest number are the US
(28) and Portugal (23). Regionally, the majority of SIBs have been in Europe (173), followed by
the Americas (42), Asia (31), Oceania (17), and Africa (12). In terms of sectors, most SIBs have
addressed Employment and Training (75); the rest have addressed Child/Family Welfare (48),
Health (46), Education (44), Homelessness (37), Criminal Justice (18), Agriculture/Environment
(4), and Poverty Reduction (3).
Despite their proliferation, there remain considerable differences in SIB design from
project to project (Ronicle et al. 2022; Carter et al. 2018). For instance, SIB contracts can be held
between the government and an intermediary (managed structure), an investor
(intermediated structure), or a service provider (direct structure) (Gustafsson-Wright et al. 2015).
SIBs can also have a variety of payment schedules, with outcome funders making interim
payments, payments based on outcomes surpassing certain thresholds, or payments based on
separate performance outcomes (Gustafsson-Wright et al. 2015). Projects further differ in the
amount of investor (re)payment tied to outcome achievement, the degree to which service
providers are exposed to financial risk, and the amount of performance management support
given to service providers (Ronicle et al. 2022).
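As a purely illustrative sketch of one such schedule (the tiers, prices, and thresholds below are invented for exposition and do not describe any specific project), a threshold-based contract might withhold payment below a minimum share of the outcome target and pay a higher price per outcome above a stretch threshold:

```python
# Hypothetical tiered schedule: no payment below a minimum threshold, then a
# higher price per verified outcome once a stretch target is surpassed.
PAYMENT_TIERS = [
    (0.00, 0),      # below 40% of the outcome target: no payment
    (0.40, 2_000),  # 40-79% of target: $2,000 per outcome
    (0.80, 2_500),  # 80% of target and above: $2,500 per outcome
]

def outcome_payment(outcomes_achieved, outcomes_target):
    """Total payment owed by the outcome funder under the tiered schedule."""
    achievement = outcomes_achieved / outcomes_target
    price_per_outcome = 0
    for threshold, price in PAYMENT_TIERS:
        if achievement >= threshold:
            price_per_outcome = price
    return outcomes_achieved * price_per_outcome

print(outcome_payment(50, 100))  # 100000: mid tier
print(outcome_payment(85, 100))  # 212500: top tier
```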
In addition, there is notable variation in how SIBs are evaluated. Early conceptions of
SIBs envisioned the use of rigorous impact evaluations to assess outcome achievement, for
instance by employing experimental and quasi-experimental methods (Albertson et al. 2018). In
particular, Randomized Control Trials (RCTs) were seen as well-suited for evaluating SIB
performance, as RCTs can attribute observable outcomes to a program’s interventions, as
opposed to other confounding factors, giving them high internal validity (Heinrich and Kabourek
2019; Williams 2020; Fox and Morris 2021; Hevenstone et al. 2023). In practice, however, RCTs
have been hard to conduct due to a myriad of factors. These challenges include “technical
difficulties… the non-alignment between the delivery structure of the intervention and the
experimental design… [and] ethical concerns” (De Pieri et al. 2022: 13). Instead, assessments
have more often tended to use non-experimental methods, such as comparisons of validated
administrative data against historical baselines (Gustafsson-Wright et al. 2020a).
Clearly, as SIBs have continued to be implemented in greater numbers and in a
diversity of contexts, the model itself has evolved (Ronicle et al. 2022). Consequently, many
questions remain over the SIB model’s core purposes and consequences (Ronicle et al. 2022).
Fitzgerald et al. (2020) identify five pressing SIB questions: 1) What are the administrative or
political problems to which SIBs respond? 2) Where and why do SIBs emerge in particular
contexts? 3) What is the role of SIBs in the evidence-based policy movement? 4) Is delivering an
intervention through a SIB more effective than other means? Are associated costs justifiable?
And 5) Do SIBs catalyze wider organizational, system, or institutional changes? This dissertation
presents three essays which help to address these questions by examining the SIB model from
the perspectives of social innovation, systems change, and international development.
The first essay, “Are Social Impact Bonds an Innovation in Finance or Do They Help
Finance Social Innovation?,” analyzes SIBs through the lens of social innovation. In doing so, it
provides insights into question 1, by exploring whether SIBs can attract private investment
capital and “spur innovative new services and delivery practices” (Fitzgerald et al. 2020: 88).
The second essay, “Can Social Outcomes Contracts Contribute to Systems Change? Exploring
Asset-Based Working, Innovation, and Collaboration,” responds to question 5. The third essay,
“Results-Based Funding Via Development Impact Bonds: Stakeholder Perceptions on Benefits
and Costs,” explores the role of DIBs in funding international development initiatives. It thus
relates to question 2 by analyzing “where and why SIBs emerge in particular contexts”
(Fitzgerald et al. 2020: 88). An overview of each essay’s research approach follows.
ESSAY 1: IMPACT BONDS AND SOCIAL INNOVATION
Although SIB literature has often analyzed SIBs using public administration theories,
particularly New Public Management (NPM), these appear insufficient to explain the many
forms that SIBs take in practice (Albertson et al. 2020). In response, the dissertation’s first essay
proposes social innovation theory as a more appropriate framework with which to analyze SIBs.
Social innovation is “a novel process or product that intends to generate more effective and just
solutions to address complex social problems for collective gain” (Beckman et al. 2023: 4). As a
process, social innovation typically involves co-creation, pilot testing, scaling, and diffusion
(Beckman et al. 2023); as a product, social innovation typically takes the shape of social
enterprises, social movements, or social finance tools (Phills et al. 2008). This study proposes
that SIBs could support the piloting and scaling of social innovation by encouraging
experimentation in service delivery through the provision of up-front risk capital (Edmiston and
Nicholls 2017; Martin 2015). Additionally, SIBs could behave as social finance tools by
attracting private capital through the alignment of stakeholder interests around specified social
outcomes (Social Finance 2009; Gruyter et al. 2020; Then and Schmidt 2020).
Focusing specifically on the UK and US as leaders in SIB implementation, the study’s
two research questions are: Are SIBs social finance tools being used to bring in new sources of
capital to fund service delivery? Are SIBs being used to finance specific stages in other social
innovation processes? To answer the first question, the study examines whether, and to what
extent, private finance was included in the financing of SIB interventions. To answer the second
question, it tests whether SIBs function to support the piloting and scaling stages of the social
innovation process. In addition, the study compares the two countries along these measures.
Drawing on data from the Social Finance UK SIB Database, the study uses content analysis and
descriptive statistics to classify investors by sector and to classify programs as supporting either
piloting (testing feasibility) or scaling (testing effectiveness).
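A minimal sketch of how such classifications could feed into descriptive statistics is given below; the records, column names, and sector labels are hypothetical placeholders rather than actual fields of the Social Finance UK SIB Database, and in the study itself the coding is carried out through manual content analysis.

```python
import pandas as pd

# Hypothetical program records; the real analysis codes investors and programs
# by hand from database entries and program documents.
programs = pd.DataFrame({
    "program": ["SIB A", "SIB B", "SIB C"],
    "country": ["UK", "US", "UK"],
    "investor_sectors": [["foundation", "bank"], ["foundation"], ["government", "bank"]],
    "stage": ["pilot", "scale", "scale"],  # pilot = testing feasibility, scale = testing effectiveness
})

# Share of programs in each country with at least one private-sector investor.
PRIVATE_SECTORS = {"bank", "corporation", "high-net-worth individual"}
programs["any_private"] = programs["investor_sectors"].apply(
    lambda sectors: any(s in PRIVATE_SECTORS for s in sectors)
)
print(programs.groupby("country")["any_private"].mean())

# Share of programs in each country classified as pilots versus scale-ups.
print(pd.crosstab(programs["country"], programs["stage"], normalize="index"))
```

The classifications themselves remain qualitative judgments; code of this kind only aggregates them into country-level percentages.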
ESSAY 2: IMPACT BONDS AND SYSTEMS CHANGE
While SIBs have been in use for over a decade now, many questions remain on whether
they can meet their full potential – including their ability to affect systems change (Williams 2020;
Fitzgerald et al. 2020). Systems change involves altering the status quo by challenging a
system’s norms, beliefs, values, policies, behaviors, resources, and decision-making processes
(Foster-Fishman et al. 2007; Zellner and Campbell 2015). In the context of public service
delivery, a system comprises all individuals, organizations, knowledge, and practices
involved in providing a service (Waddell 2016). Although proponents originally envisioned SIBs
as working to address intractable social problems by establishing new ways of working and new
partnerships, critics argue that SIBs have instead tended to focus on individual rather than
systemic causes of such problems (Social Finance 2009; Tse and Warner 2020). To contribute
to this debate, the dissertation’s second essay evaluates the case of the Greater Manchester
Homes Partnership (GM Homes). The GM Homes SIB funded a housing-first program which
provided wrap-around, asset-based support to individuals experiencing homelessness. In addition
to the program achieving its contractual outcomes, a 2021 evaluation found evidence suggesting that GM
Homes had helped catalyze wider systems change as well (GMCA 2021).
Building on these preliminary findings, this study’s two research questions are: What
impacts did GM Homes have on its wider service delivery system? Through which
mechanism(s) did it generate these ecosystem effects? To answer these questions, the study
employs the novel approach of process tracing, which theorizes how potential mechanisms link
causes and effects and then analyzes how well observable evidence from actual cases supports
this explanation (Beach 2017). Drawing on research by Carter et al. (2018) and Fox et al.
(2022b), the study hypothesizes the following potential causal mechanisms: asset-based working,
innovation, and collaboration. To test these hypotheses, the analysis evaluates data from
interviews, primary documents, a dataset on user experiences, and secondary literature
(Trampusch and Palier 2016).
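For readers unfamiliar with process tracing, the strength of an evidence test is conventionally summarized by whether the predicted evidence is necessary and/or sufficient for the hypothesized mechanism; the toy sketch below simply encodes that standard typology and is a conceptual illustration only, not part of the study’s analysis.

```python
def classify_evidence_test(necessary, sufficient):
    """Classify a process tracing evidence test by its inferential strength."""
    if necessary and sufficient:
        return "doubly decisive"    # passing confirms and failing eliminates
    if necessary:
        return "hoop test"          # failing eliminates the hypothesis
    if sufficient:
        return "smoking gun"        # passing strongly confirms the hypothesis
    return "straw-in-the-wind"      # passing only weakly raises confidence

# Example: evidence that must be observed if the mechanism operated, but that
# could also arise for other reasons, constitutes a hoop test.
print(classify_evidence_test(necessary=True, sufficient=False))  # hoop test
```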
ESSAY 3: IMPACT BONDS AND INTERNATIONAL DEVELOPMENT
Despite generating considerable interest since their launch in 2015, DIBs represent just
21 of the 275 impact bonds worldwide (Indigo 2023). Given the relatively slower growth of the
DIB market, the impact bond literature has tended to focus on SIBs in high-income countries
(Alenda-Demoutiez 2020; Ormiston et al. 2020). Consequently, little research has explored the
specific role of DIBs in funding development initiatives in low- and middle-income countries
(LMICs) (Alenda-Demoutiez 2020). The dissertation’s third essay addresses this gap by situating
DIBs within the move towards greater professionalism in the international aid sector over
the last few decades (Alenda-Demoutiez 2020). For instance, as a means of increasing the
effectiveness and accountability of aid, donors have increasingly begun evaluating impact as
opposed to monitoring activities, including by using results-based funding (Gallucci et al. 2019;
Belt et al. 2017; Oroxom et al. 2018; Alenda-Demoutiez 2020). Results-based funding is a
contract in which a principal pays an implementing agent only after a development program has
achieved predetermined results (Grittner 2013; Klingebiel 2012). DIBs represent a hybrid model
of results-based funding, as investors give service providers up-front funding, and outcome
funders (such as donor organizations) repay investors based on results.
To build up the evidence base on DIBs, this exploratory study analyzes stakeholder
motivations, expectations, and experiences for 7 recent projects in LMICs. The study asks two
primary research questions: 1) What are the contextual factors and anticipated model benefits
that are motivating stakeholders to use DIBs? 2) What costs and challenges do stakeholders
experience in implementing DIBs? To answer these questions, the study thematically analyzes
primary interview data as well as secondary documentary data. The 7 DIBs were drawn from a
sample of projects included in the Government Outcomes Lab’s Impact Bond Dataset. In total,
the study involved 16 interviews with 18 individuals from stakeholder organizations, including 8
outcome funders, 3 investors, 3 service providers, and 2 intermediaries.
DISSERTATION CONTRIBUTIONS
Together, these three essays aim to make important contributions to the growing SIB
literature. In applying social innovation as a new theory for analyzing SIBs, the first essay
addresses a lack of evidence on whether SIBs are social finance tools which attract private
capital and facilitate the piloting and scaling of social interventions. In conducting process
tracing as a novel method for evaluating SIBs, the second essay addresses a lack of evidence on
whether and how SIBs can affect their wider service delivery systems. Finally, in collecting new
primary evidence through interviews with DIB stakeholders, the third essay addresses a lack of
research on the specific role of impact bonds in supporting development in LMICs.
CHAPTER 1 – ARE SOCIAL IMPACT BONDS INNOVATIVE FINANCE TOOLS OR
DO THEY HELP SUPPORT SOCIAL INNOVATION PROCESSES?1
by Hilary Olson, Gary Painter, Kevin Albertson, Christopher Fox, and Christopher O’Leary
1 A version of this article has been published as: Olson, H., Painter, G., Albertson, K., Fox, C., & O’Leary, C. (2022). ‘Are Social Impact Bonds an innovation in finance or do they help finance social innovation?’. Journal of Social Policy, 1-25.
ABSTRACT
Outcomes Based Commissioning (OBC) – for example, Pay for Success (in the US) or
Payment by Results (in the UK) – has been suggested as a way to provide ‘more’ social services
for ‘less’ public resources. Such commissioning is often linked with an innovative financing tool
called a Social Impact Bond (SIB). Using data from the Social Finance UK Database and
focusing on SIBs in the US and UK, we evaluate whether the SIB approach aligns with the
theoretical predictions of social innovation. The results provide limited evidence that SIBs
facilitate capital injections from the private sector into the production of social goods as well as
facilitate parts of the process of social innovation – namely, piloting and scaling. We conclude
that there is significant variation, both between the US and UK and within the US, in social
innovation ecosystems and the role played by SIBs.
INTRODUCTION
Governments in some of the world’s richest nations are facing growing demands to
respond to increasing social needs while simultaneously facing fiscal demands which would
seem to emphasize the reduction of social budgets. In this context, Outcomes Based Commissioning (OBC), for example, Pay for Success in the US – Payment by Results in the UK
– has been suggested as one way in which ‘more’ social services can be provided for ‘less’
public resources. These forms of public sector contracting are often linked with a new financing
tool for social services referred to as a Social Impact Bond (SIB).
SIBs are not strictly speaking bonds (debt instruments) but are rather a class of OBC
contract where the up-front finance for social service delivery is made available by third-party
investors rather than service providers. This capital funds a program or intervention seeking to
improve the prospects of a target group in need of public services (Mulgan et al. 2011). To
attract investors, SIBs require commitments by a commissioner (usually national or local
government) to make payments linked to the achievement of specific social outcomes by the
target group (Mulgan et al. 2011). In theory, the SIB partners assess the extent to which the
program has achieved these outcomes – either at the conclusion of the program or at various
intermediate stages.2 Based on the value of these outcomes (if any), commissioners then make
payments to investors, affording them their return.
Early proponents of SIBs distinguished them from other forms of outcome-based
payment by emphasizing that the payment mechanism facilitates the alignment of social and
financial returns on investment and that service provider costs are covered by investors’ up-front
capital. Theoretically, SIBs have the potential to shift risk away from the public sector and to
bring together groups of social investors and portfolios of interventions that would not have been
connected without the new tool (Social Finance 2009). SIBs could thus help expand the ‘social
investment market’ as well (Cabinet Office 2011: paragraph 4.3). The long-term vision of SIBs
was ambitious:
Social Impact Bonds enable foundations, social sector organisations and government to
work in new ways and to form new partnerships. By aligning the interests of all parties
around common social outcomes, Social Impact Bonds have the potential to address
some of society’s most intractable problems (Social Finance 2009: 4).
SIBs typically address these social problems through preventive interventions, so
commissioners can repay investors from the hypothetical cost savings (Edmiston and Nicholls
2017; Fraser et al. 2018). For instance, in England a disproportionate amount of public spending
has historically gone towards expensive healthcare treatments as opposed to prevention
(Albertson et al. 2018). As of November 2018, SIBs in the UK and US focused on the following
policy areas: workforce development, housing and homelessness, health, child and family
welfare, criminal justice, education and early years, and poverty and the environment (Social
Finance 2018). In particular, SIBs in both countries commonly addressed housing and
homelessness or health; UK programs frequently addressed workforce development; and US
programs often targeted criminal justice (see figure 1.1).
Figure 1.1: Percentage of Programs by Issue Area
Overall, SIBs could facilitate innovation in four distinct ways: 1) unlocking an untapped
flow of social finance, 2) incentivizing the development of an evidence base for funded
interventions, 3) incentivizing experimentation, and 4) changing the role of government so that
its focus is on defining and costing social priorities rather than bringing resources and expertise
to bear (Social Finance 2009).
In this article, we will focus on important aspects of SIBs through the lens of social
innovation to contribute to the growing body of literature which considers their efficacy. Many
previous analyses of SIBs have been descriptive in nature or have focused on the early lessons of
implementation. For example, Ronicle et al. (2014) describe the state of the field in the UK in the
first few years of implementation. Gustafsson-Wright et al. (2015) also focus on policy lessons
based on the early SIBs that were launched. Albertson et al. (2018) provide an expansive
overview of all SIBs in the US and UK at the time of their writing, but do not directly test the
emergence of SIBs within theoretical constructs. Finally, Wooldridge et al. (2019) study the
challenges and benefits of the UK SIB commissioning process, and the potential for replication
and scaling, based on the 68 SIBs launched in the UK as of April 2019. While newer articles
have involved more empirical analyses of topics such as the effects of competing stakeholder
expectations on SIB ecosystem development (Gruyter et al. 2020; Ormiston et al. 2020; Williams
2020), the SIB field has yet to reach maturity, and there are many areas for further development
(Broccardo et al. 2020).
Using data from the Social Finance UK Database for the US and UK, this article first tests
the conjecture that SIBs unlocked an ‘untapped flow of social finance’ (as theorized by Social
Finance 2009). Specifically, we examine whether, and to what extent, private finance was
included in the financing of SIB interventions. Second, we test whether SIBs function to support
the piloting and scaling stages of the social innovation process. Third, we compare the US and
UK on these two measures. Overall, the results provide limited evidence that SIBs attracted
private capital for the production of social goods and facilitated the pilot testing and scale-up of
social interventions. Nonetheless, the findings are novel in empirically testing the claims that
SIBs are both innovative financial instruments themselves as well as policy tools which finance
social innovation. We conclude by noting that SIBs could be more effective at facilitating social
innovation by involving more mainstream private investors, funding more experimental pilot
programs, and including more co-productive processes.
THEORY
SIBs are a complex new policy tool which presents a range of both opportunities and
challenges to stakeholders. Initial efforts in the literature to analyze SIBs argued that they were
an example of the type of public sector reforms adopted under New Public Management (NPM)
theory (Hood 1991). NPM suggests that by adopting private sector practices, the public sector
can improve the efficiency and effectiveness of service delivery (Chandra et al. 2021). Edmiston
and Nicholls (2017) note that NPM could be an appropriate lens through which to examine SIBs,
with their explicit involvement of both markets and measurement. However, NPM appears
limited in its ability to account comparatively for SIB variations.
For example, in the UK, it seems reasonable to theorize that SIBs are a part of a ‘public
sector reform’ narrative that is an intrinsic part of NPM and a part of a move towards OBC in
public services more generally (Fraser et al. 2018; Lagarde et al. 2013; Painter et al. 2018;
Warner 2013). In contrast, many US SIBs originated from nonprofits seeking funding to expand
promising programs, closer to a ‘financial sector reform’ narrative and more aligned with New
Public Governance (NPG), social entrepreneurship and corporate social responsibility (Fraser et
al. 2018). NPG highlights the potential of collaboration to co-produce more innovative and
sustainable service delivery solutions when facing considerable budget constraints and public
management fragmentation (Osborne 2006; Chandra et al. 2021). From this perspective, SIBs
could help to grow the social finance sector as well as spur social and policy innovation through
new collaborative governance schemes (Ormiston et al. 2020; Fitzgerald et al. 2020).
Although most SIB literature to date has drawn on the public administration theories of
NPM and (to a lesser extent) NPG, neither appears sufficient to explain the many forms that SIBs
take in practice (Albertson et al. 2020). In recent work, Albertson et al. (2020) thus also propose
Open Innovation and social innovation as theoretical lenses by which to analyze the emergence
of SIBs. While Open Innovation offers theoretical insights into SIBs similar to those of social innovation, it has some notable limitations as well. Open Innovation (and Open Innovation 2.0) proposes that intersectoral collaboration, with knowledge and resources more widely shared
across organizational boundaries, generates more innovative, sustainable, and cost-effective
solutions (Chesbrough and Bogers 2014; Chesbrough 2006; Felin and Zenger 2014; Curley
2016). These outcomes are achieved by aligning the interests of a diverse set of actors along
with rapid experimentation and prototyping with users and citizens (Porter and Kramer 2011;
Curley 2016). However, Open Innovation primarily focuses on the role of firms in co-creating
economic and (secondarily) social value (Albertson et al. 2020); meanwhile, social innovation
highlights the potential for actors from a variety of sectors to (primarily) affect social change
(Chesbrough and Di Minin 2014). In this article, we will therefore advance social innovation
theory as a more appropriate framework with which to analyze SIBs.
Social Innovation
In contrast to business innovation, social innovation is focused on producing novel
solutions to address pressing social needs (Marques et al. 2018; Murray et al. 2010). However,
social innovation theory (e.g. Mulgan 2006; Sabato et al. 2017) is still emerging, and its
definition is contested. There are two competing paradigms within social innovation: a
technocratic or utilitarian paradigm (e.g. Phills et al. 2008) with ties to neoliberalism and NPM,
and a democratic or radical paradigm (e.g. Moulaert et al. 2005) centered on empowerment and
social justice (Montgomery 2016; Ayob et al. 2016; Chan et al. 2021). Indeed, these more radical
conceptions of social innovation include some elements of Open Innovation, such as co-production
(Rosen and Painter 2019).3 Despite these conceptual debates, common among social innovation
definitions is the importance of creating new social relationships to generate collaborative ideas
and deliver novel solutions which produce positive social impacts (Ayob et al. 2016). These
solutions can take the shape of products or processes (Ayob et al. 2016). We will evaluate SIBs
from both perspectives.
3 It is worth noting as Rosen and Painter (2019) discuss that the term co-production can have different meanings. Here we use the term co-production as Rosen and Painter (2019) describe and not as a government led process that does not include the customer and service provider in co-creation.
As a product, social innovations are “new ideas (products, services and models) that
simultaneously meet social needs and create new social relationships or collaborations” (Murray
et al. 2010: 3). Types of social innovation products include social enterprises, social movements,
or social finance tools (Phills et al. 2008). We focus here on the last of these and define social finance
as capital that generates primarily social and/or environmental returns, as well as potentially
financial returns (Nicholls and Emerson 2015).
Meanwhile, as a process, social innovation involves “inventing, securing support for, and
implementing novel solutions to social needs and problems” (Stanford Social Innovation Review
2003, as quoted in Phills et al. 2008: 36). While there is no singular model for the social
innovation process, a general framework typically involves problem identification and idea
generation; design and prototyping; launching, sustaining, and scaling; and learning and
diffusion (Eveleens 2010; Murray et al. 2010). For example, figure 1.2 below attempts to capture
the definition advanced by Beckman et al. (2020) that social innovation is an iterative, inclusive
process that generates more effective and just solutions to solve complex social problems.
Notably, these models emphasize a progression between stages, though not necessarily in a
linear fashion (Eveleens 2010).
Figure 1.2: The Process of Social Innovation
SIBs as an Innovative Financial Tool
SIB proponents have hailed SIBs as an innovative financial tool with the potential to
attract private capital to finance services for previously under-served populations that would
otherwise be too risky to deliver (Gruyter et al. 2020; Then and Schmidt 2020). As with impact
investing more broadly, SIB stakeholders seek private sector investors who value both social and
financial returns; this in turn helps service providers secure larger or more stable funding and
governments provide a higher quantity or quality of services (Martin 2015). Many claims that
SIBs can increase access to capital refer to private capital, as the private sector has historically
been less willing to accept the below-market returns and higher risks associated with social
policy financing (Gruyter et al. 2020; Ormiston et al. 2020). By tying repayment to program
success, this front-end investment shifts the financial risk of performance away from service
providers and commissioners to investors, encouraging investors to support performance
measurement and management (Edmiston and Nicholls 2017; Martin 2015).
Therefore, SIBs could theoretically attract an untapped flow of social finance by aligning
stakeholder interests around specified social outcomes (Social Finance 2009). However,
Williams (2020: 909) noted that thus far “SIB investments have come primarily from
foundations and a small group of “high net worth” individuals” rather than private investors.
Questions also remain regarding whether SIB programs still would have been funded through
other means or if investors would have invested in other social causes (Gruyter et al. 2020).
SIBs as Facilitators of the Social Innovation Process
SIBs could also theoretically contribute to the process of social innovation by supporting
one or more of its stages. However, most research to date indicates that the roles of service providers
and users in co-production are typically absent from SIB narratives (Ormiston et al. 2020;
Fitzgerald et al. 2020). Similarly, little data is available on the ability of SIBs to catalyze more
transformational change given that only a small number of SIBs have reached completion thus
far (Gruyter et al. 2020; Fitzgerald et al. 2020). The analysis will therefore examine the
contribution of SIBs to the piloting and scaling of social interventions, as this was the primary
focus of early wave UK and US SIBs.
Pilots are small-scale, localized projects which gather evidence about an intervention
through ‘lean experimentation’ (Murray and Ma 2015: paragraph 2), feedback loops, and
practical experience to make improvements before broader implementation (Mulgan 2006; Ettelt
et al. 2015). Scaling (Westley et al. 2014), in turn, involves growing, replicating, adapting, or
franchising a program to reach larger target populations or new locations (Mulgan 2006).
Arguments in favor of SIBs frequently claim that by supplying service providers with up-front risk capital, SIBs can encourage delivery partners to experiment and innovate in service
delivery (Edmiston and Nicholls 2017; Martin 2015). However, as with many innovation-driven
initiatives, what is meant by experimentation and innovation is often vague (Hammond et al.
2021). Most frequently, SIB proponents appear to use innovation to mean implementing proven
interventions in new contexts as opposed to piloting new services (Fitzgerald et al. 2020;
Albertson et al. 2018); in part, because investors often prefer investing in evidence-based
programs rather than novel interventions to reduce the risk of failure (Gruyter et al. 2020).
RESEARCH QUESTIONS
The framework of social innovation can capture many of the features of SIBs,
particularly the opportunity for stakeholders in different sectors (public, private, and social) to
work collaboratively to address pressing social challenges and deliver social outcomes in new
ways. This research will test some of the implications of social innovation theory by analyzing
SIBs along both their product and process dimensions. The first two research questions are: 1)
Are SIBs being used as social finance tools to bring in private capital to fund service delivery? 2)
Are SIBs being used to finance the piloting and scaling stages of the social innovation process?
Additionally, this study will have a particular analytic focus on SIBs in the US and UK –
the two leading countries in SIB development. The first SIB was established in the UK in 2010.
By 2012, the UK had established 13 more. By 2013, the adoption of SIBs had expanded to other
developed countries, including 3 in the US. Since that time, the number of SIBs continued to
grow rapidly – as of November 2018, 121 SIBs had been launched in 24 countries, with the
majority in the UK (47 SIBs) and the US (22 SIBs) (Social Finance 2018). Therefore, the third
research question is: 3) How do SIBs in the UK and US compare along these two dimensions?
DATA AND METHODS
To answer these research questions, we primarily use data from the SIB Database
managed by Social Finance, a not-for-profit organization based in the UK (Social Finance 2018).
This Database contains profiles of SIBs compiled using publicly available information, including
program location, launch date, target population, and stakeholders. We downloaded all available
data as of 20 November 2018, converted monetary values into US dollars (OECD.Stat n.d.), and
classified SIBs located in Wales or England within a UK grouping. The analysis took place in
two main stages: an ‘investor classifications’ stage and a ‘program classifications’ stage.
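As a minimal sketch of these preparation steps (not the code used in the study), the example below uses hypothetical column names, program entries, and exchange rates to illustrate the currency conversion and the grouping of Wales and England into a UK category.
```python
import pandas as pd

# Tiny hypothetical extract of the Social Finance SIB Database (column names assumed)
sibs = pd.DataFrame({
    "program":        ["SIB A", "SIB B", "SIB C"],
    "country":        ["England", "Wales", "US"],
    "currency":       ["GBP", "GBP", "USD"],
    "capital_raised": [1_500_000, 800_000, 9_000_000],
})

# Illustrative exchange rates to US dollars around the download date (placeholder values)
usd_rates = {"GBP": 1.30, "USD": 1.00}

# Convert monetary values into US dollars
sibs["capital_raised_usd"] = sibs["capital_raised"] * sibs["currency"].map(usd_rates)

# Classify SIBs located in Wales or England within a UK grouping
sibs["country_group"] = sibs["country"].replace({"England": "UK", "Wales": "UK"})

print(sibs[["program", "country_group", "capital_raised_usd"]])
```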
Investor Classifications
To determine if SIBs can attract private capital, we examined the proportion of up-front
investors who were for-profit. We categorized all UK and US SIB investors into six categories:
1) charities, trusts, and foundations (‘charities’), 2) for-profit limited companies, 3) public
bodies, 4) social enterprises, 5) registered social landlords (RSLs) or housing associations, and 6)
private individuals. Charities are those organizations which are designated as such by charities or
tax law due to their charitable activities. For-profit limited companies are organizations which
pursue traditional profit maximization. Public bodies are those entities which are branches of
local or national governments. Social enterprises can take a variety of legal structures, including
charities, limited companies, and community interest companies. However, their defining
characteristic is that they are primarily operating with social purposes. RSLs and housing
associations are a UK-specific category of organizations which often take the form of charities or
social enterprises, but generally have more assets and are larger in size. Finally, private
individuals are those charitable or philanthropic individuals investing their own money as
opposed to investing on behalf of larger organizations.
To sort investors into these six categories, we relied upon self-reported information from the
organizations, for instance as stated on their institutional websites or professional profiles such as
LinkedIn and Charity Navigator. If further information was required, we looked at textual
descriptions provided in press releases or other organizational documents. Even after careful
analysis, the classification of about 5 programs could be contested, especially those which
display characteristics of social enterprises. For example, the Goldman Sachs Urban Investment
Group, which invested in the US’s Green Infrastructure program, participates in impact investing
– an activity typically associated with social enterprises. Given the Group’s situation within the
broader Goldman Sachs’ institution, we labeled it as a for-profit limited company. The
conclusions of this study are not altered by how these few programs are classified.
Program Classifications
The program classification analysis focused on determining at what stage SIBs are
funding programs within the social innovation process: pilot testing or scaling. To extract
evidence from the Database’s text-based profiles to classify programs, we applied content
analysis to identify trends and patterns within the text using categories and coding (Stemler
2000). Using an inductive approach to allow concepts to emerge from the data (Elo and Kyngas
2008), we began by reviewing the types of data contained within the Database’s profile texts and
determining which stages of the social innovation process could be coded and categorized.
As expected, this analysis uncovered that, as of yet, little data is available regarding the
extent to which SIBs are engaging in co-production or are contributing to systems change. As is
further explained in the Discussion section, while many SIB profiles refer to some relevant
aspects of co-production – for instance, providing personalized support – no profiles offered
specific information about how such approaches were designed or implemented. Similarly, given
the recent implementation of SIBs, the analysis found little data within the Database regarding
the extent to which SIB programs and strategies are being diffused into broader systems. On the
other hand, the content analysis did reveal sufficient evidence to categorize programs as either
operating at the pilot or scaled stage.
Pilots versus Scaled
Based on the initial content analysis which categorized SIBs as either pilot or scaled
interventions (see Appendix 1.1), the first classification strategy defined programs as pilots or
scaled interventions quantitatively based on the size of a program’s target population as well as
the amount of capital invested in each program. We coded programs as pilots if they had target
populations below 300 and if they had a budget (‘capital raised’) of less than $2 million. If the
Database did not contain any information on these two variables, we did not categorize the
program as either pilot or scaled. There were seven such programs. If the Database was missing
information on one of the variables, we classified the program according to the variable that was
available.
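The decision rule described above can be expressed compactly. The sketch below is illustrative rather than the study's actual code; it treats a program as a pilot only when every available criterion points to a pilot, in line with the "below 300 and less than $2 million" wording.
```python
def classify_program(target_population, capital_raised_usd):
    """Code a program as 'pilot' or 'scaled' using the thresholds described above.

    target_population: int or None (None if missing from the Database)
    capital_raised_usd: float or None (None if missing from the Database)
    Returns 'pilot', 'scaled', or None (unclassified when both fields are missing).
    """
    if target_population is None and capital_raised_usd is None:
        return None  # such programs were left unclassified

    checks = []
    if target_population is not None:
        checks.append(target_population < 300)
    if capital_raised_usd is not None:
        checks.append(capital_raised_usd < 2_000_000)

    # A program is coded as a pilot only if every available criterion indicates a pilot
    return "pilot" if all(checks) else "scaled"


# Illustrative (hypothetical) examples
print(classify_program(150, 1_200_000))   # pilot
print(classify_program(4_000, None))      # scaled (classified on the available variable)
print(classify_program(None, None))       # None (unclassified)
```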
Feasibility versus Effectiveness
For a more robust analysis, and as a validity check, the analysis then utilized a second
typology using qualitative criteria such as each program’s objectives and methods for evaluating
outcomes. Following scholars (e.g. Ettelt et al. 2015) who have begun to suggest ways of
classifying different types of social innovation, we created and operationalized our own
typology: programs testing feasibility versus programs testing effectiveness. We conceptualized
programs testing feasibility to be comparable to pilot programs and programs testing
effectiveness to be comparable to scaled interventions. This feasibility/effectiveness analysis also
took an inductive approach to content analysis.
We classified programs as testing feasibility if their objectives appeared to align more
with learning from implementation processes or demonstrating a proof of concept. These
programs were characterized as not being based upon an already proven intervention and not
using rigorous evaluation methods. Programs testing feasibility were also more likely to measure
programmatic outcomes, reference “learning” or “adapting”, and claim they were the “first”
program to be implemented in some way. Conversely, we classified as testing effectiveness those
programs which aimed to demonstrate the replicability or scalability of an intervention. These
programs were more likely to be systematized or professionalized, for instance by being part of a
broader SIB funding initiative or by using rate cards to monetize outcomes. Programs testing
effectiveness also typically used more rigorous evaluation methods and built upon proven
interventions.
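As an illustrative aid to (not a replacement for) the inductive content analysis, textual cues like those named above could be flagged programmatically. The keyword lists and function below are assumptions for the sketch, not the study's coding scheme.
```python
import re

# Indicative cues drawn from the criteria described above (illustrative, not exhaustive)
FEASIBILITY_CUES = ["first", "learning", "adapting", "proof of concept", "pilot"]
EFFECTIVENESS_CUES = ["rate card", "randomised controlled trial", "rct",
                      "replicat", "scale", "evidence-based", "proven"]

def flag_cues(profile_text: str) -> dict:
    """Count indicative feasibility and effectiveness cues in a SIB profile."""
    text = profile_text.lower()
    count = lambda cues: sum(len(re.findall(re.escape(c), text)) for c in cues)
    return {"feasibility_cues": count(FEASIBILITY_CUES),
            "effectiveness_cues": count(EFFECTIVENESS_CUES)}

# Hypothetical example profile text
print(flag_cues("This is the first programme of its kind, with an emphasis on learning."))
```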
RESULTS
There are some notable differences between SIB program characteristics in the UK and
US (see table 1.1). The range for target population in the UK (14 to 11,000) is much larger than
in the US (135 to 4,458); though upon removing one outlier, the maximum in the UK is 4,000.4
UK and US programs also have comparable target population means (1,205.2 and 1,239.8) and
medians (416 and 562.5), respectively. US programs have more generous SIB budgets (‘capital
raised’) than UK programs, with a maximum value of $30 million, compared to $7.7 million in
the UK. The average ($2.0 million) and median ($1.5 million) capital raised for UK programs are
also much less than the average ($10.3 million) and median ($8.7 million) for US programs.
Similar patterns emerge in maximum outcome payments. UK program lengths range from 2 to
10 years, with a mean and median of 3.9 and 3.5 years, respectively. US program durations,
meanwhile, have a larger range (3 to 30 years), mean (6.2 years) and median (5.5 years).
Table 1.1: Summary Statistics for Key Program Criteria
UK US
Target population
Min 14 135
Max 11,000 4,458
Mean 1,205.2 1,239.8
Median 416 562.5
4 During our initial data cleaning, we also removed one US outlier for target population of 160,000.
Capital raised ($M)
Min $0.5 $0.5
Max $7.7 $30.0
Mean $2.0 $10.3
Median $1.5 $8.7
Max outcome payments ($M)
Min $0.1 $0.5
Max $13.3 $34.5
Mean $4.3 $11.8
Median $3.8 $8.0
Duration (years)
Min 2.0 3.0
Max 10.0 30.0
Mean 3.9 6.2
Median 3.5 5.5
In addition to this programmatic data, table 1.2 provides further descriptive data on the
number of program stakeholders for UK and US programs – specifically investors, outcome
funders, service providers, and intermediaries.5 UK programs tend to have fewer investors
overall, with a maximum of 13, average of 2.9, and median of 2; in the US the maximum number
of investors is 35, average is 5.8, and median is 4. For outcome funders, the maximum number
for UK programs is 12, compared to 3 in the US. Meanwhile, the average and median number of
outcome funders for UK programs (2.4 and 2) are larger than for US programs (1.3 and
1), respectively. While the maximum number of service providers for UK programs (16) is
higher than for US programs (6), the averages for US programs (1.7) and UK programs (2.1) are
5 Due to imprecise wording from the SIB Database, we estimated the number of stakeholders in at least one of the
four stakeholder categories for 16 programs. For example, one funder listed in the UK’s Children Social Care
program launched in 2017 was simply “schools.” Without knowing the exact number of schools, we estimated it at
the lower bound of two. Therefore, some uncertainty remains over the true number of stakeholders involved for
some SIBs.
much closer, and the median for both is 1. Meanwhile, for intermediaries of UK and US SIBs,
the maximums are 1 and 2, the averages are 0.6 and 1, and the medians are both 1, respectively.
Table 1.2: Summary Statistics for Program Stakeholders
UK US
Investors
Min 0 1
Max 13 35
Mean 2.9 5.8
Median 2 4
Outcome Funders
Min 1 1
Max 12 3
Mean 2.4 1.3
Median 2 1
Service Providers
Min 1 1
Max 16 6
Mean 2.1 1.7
Median 1 1
Intermediaries
Min 0 1
Max 1 2
Mean 0.6 1.0
Median 1 1
Private Sector Social Financing
In the US, a typical SIB has nearly double the number of investors per program compared
to the UK; the US also has a much higher percentage of programs with at least one for-profit
investor, at around two-thirds of all programs compared to one-sixth in the UK (see figure 1.3).
However, the UK and US have similar percentages of programs with at least one investor
classified as a public entity (around 12.5 percent) or a private individual (around 7.5 percent).
Conversely, while almost half of the UK programs have at least one charity as an investor, all of
the US programs do. In addition, many more UK programs (72 percent) have at least one
investor that is a social enterprise than US programs (5 percent). Notably, this social enterprise
participation in the UK SIBs was dominated by one major investor – Bridges Fund Management
(“Bridges”) – which invested in nearly half of the UK SIBs. Approximately 9 percent of the UK
programs have at least one RSL investor (a UK-specific category).
Figure 1.3: Percentage of Programs with at Least One Investor from the Following Sectors
In examining the total number of investors involved in all UK and US SIBs, the analysis
found a comparable percentage of investors that are for-profit in the UK (12 percent) and US (18
percent) (see figure 1.4). Further, for both UK and US programs around half of all investors
were charities and around 4 percent were public bodies. However, 35 percent of UK program
investors were social enterprises compared to only 1 percent for US programs. US programs also
have a higher percentage of investors who are private individuals (26 percent) than UK programs
(7 percent). In addition, about 5 percent of UK program investors are RSLs.
Figure 1.4: Percentage of Investors from the Following Sectors Across All Programs
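Figures 1.3 and 1.4 report two different metrics that are easy to conflate: the share of programs with at least one investor of a given type, and the share of all investor participations held by each type. The sketch below illustrates both computations on hypothetical data; the column names and values are assumptions, and counting participations rather than unique investors is a simplification.
```python
import pandas as pd

# Hypothetical long-format table: one row per (program, investor) participation
investors = pd.DataFrame({
    "program_id":    [1, 1, 2, 3, 3],
    "country_group": ["UK", "UK", "UK", "US", "US"],
    "investor_type": ["charity", "social enterprise", "charity",
                      "for-profit", "private individual"],
})

# Metric behind figure 1.3: share of programs with at least one investor of each type
programs_with_type = (investors
                      .groupby(["country_group", "investor_type"])["program_id"]
                      .nunique())
programs_total = investors.groupby("country_group")["program_id"].nunique()
share_of_programs = programs_with_type.div(programs_total, level="country_group")

# Metric behind figure 1.4: share of investor participations held by each type
share_of_investors = (investors
                      .groupby("country_group")["investor_type"]
                      .value_counts(normalize=True))

print(share_of_programs)
print(share_of_investors)
```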
The analysis also uncovered an interesting divergence in the number of repeat investors
between the two countries. In the UK, there were 48 unique investors in SIB programs. While
each investor contributed to an average of 2.2 SIBs, there were 13 investors (27 percent of all
UK investors) who contributed to more than one SIB. Notably, Bridges invested in 21 SIBs (44.7
percent of all UK SIBs) – a considerable number, as the next highest number of SIBs to which
one investor contributed was 8 (for Big Issue Invest). Meanwhile, in the US, there were 54
unique investors. Each investor contributed to an average of 1.6 programs, and 14 investors (26
percent of all US investors) contributed to more than one SIB. In the US, the investor who
contributed to the most SIBs was the Reinvestment Fund, at 6 SIBs (25 percent of all US SIBs).
See Appendix 1.2 for tables listing each unique investor and the number of SIBs in which they
invested for both UK and US programs.
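The repeat-investor tallies above amount to simple counting over investor participations; a minimal sketch follows, with a hypothetical investor list standing in for the Database entries.
```python
from collections import Counter

# Hypothetical list of investor names, one entry per (SIB, investor) participation
uk_participations = ["Bridges", "Bridges", "Big Issue Invest", "Charity A", "Bridges"]

counts = Counter(uk_participations)
unique_investors = len(counts)                     # e.g., 48 unique UK investors in the data
avg_sibs_per_investor = sum(counts.values()) / unique_investors
repeat_investors = sum(1 for n in counts.values() if n > 1)

print(unique_investors, round(avg_sibs_per_investor, 1), repeat_investors)
```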
Overall, the data provide mixed evidence of whether SIBs are behaving as innovative
financing tools by utilizing private capital to finance social programs, with considerable
differences between the two countries. In particular, the data suggest that participation in SIBs
from at least one private actor is happening at much higher rates in the US than in the UK,
indicating that US SIBs have been more effective at achieving the social finance goal of bringing
in private capital for social purposes. Conversely, the UK has much higher
rates of participation from social enterprises. While attracting social enterprise capital could also
be seen as an aim of social finance tools, given that historically social funding has come from
charitable trusts and foundations (Albertson et al. 2018), it falls short of the objective of
attracting more mainstream for-profit investors.
In addition, the data reveal further divergence in how UK and US SIBs utilize other
capital sources – for instance, from charities or private individuals – and in how they are able to
encourage repeat investments. However, the data do not provide information about the share of
total investment in each SIB that is coming from these groups relative to other sectors. If the
relative percentage is high, this would provide even stronger support for the claim that SIBs are
innovative in their ability to bring ‘new’ types of funding to service provision.
Piloting and Scaling Social Innovation
Within the social innovation process, the analysis focused specifically on classifying
programs according to the piloting or scaling stages. It then utilized a secondary approach which
equated piloting with testing feasibility and scaling with testing effectiveness. When comparing
the piloting/feasibility and scaling/effectiveness classifications, we find that these two typologies
matched 78 percent of the time. Further, these two classifications were very similar for the UK
programs, with 32 percent of programs classified as pilots and 32 percent as testing feasibility, and
with 64 percent classified as scaled interventions and 68 percent as testing effectiveness.
However, there were larger differences between the classifications for the US programs using the
two typologies. While only 9 percent of US programs were classified as pilots, 18 percent were
classified as testing feasibility. Additionally, while 91 percent of programs were classified as
scaled interventions, 82 percent were classified as testing effectiveness. The results from the first
classification process are summarized in figure 1.5, while the results from the second
classification process are provided in figure 1.6.
Figure 1.5: Programs Classified as Pilot vs Scale
Figure 1.6: Programs Classified as Feasibility vs Effectiveness
In sum, the evidence of how SIB use fits within the process of social innovation suggests
variation in how SIBs are used to fund the stages of piloting and scaling between the UK and
US, as well as in how SIBs are used within each country. Nonetheless, we find that both UK and
US SIBs tend to fund more programs operating at the scaled intervention level and testing
effectiveness than programs operating at the pilot level and testing feasibility. We also find that
under both approaches the US consistently funds even fewer programs operating at the pilot
stage and testing feasibility than the UK. As such, the evidence suggests that early UK and US
SIBs were only supporting the social innovation process in limited ways.
DISCUSSION
This study provides important evidence on the nature of SIBs as viewed through a social
innovation framework. First, as a tool that can bring in private capital for the production of
public goods, we found that a little over 60 percent of US SIBs included a private sector funder,
while a little over 70 percent of UK SIBs included a social enterprise investor. As such, it
appears that there is only moderate evidence that SIBs function as an innovative financial
mechanism, with stronger support among US SIBs. Moreover, it remains to be seen if a broad cross
section of private and social sector investors will use SIBs to increase their contribution to the
production of social goods, as the diversity of investors in the US is not replicated in the UK,
where one investor (Bridges) invested in almost half of the SIBs.
We also found that the use of SIBs in the UK is much more likely to be driven by public
sector investment, whereas in the US it is driven more by philanthropic investment. Out of the 47 UK SIBs, 24 were
purportedly launched as part of a broader SIB funding agenda at the UK national government
level. For example, “The UK Department for Work and Pensions (DWP) commissioned ten
Social Impact Bonds under the innovation fund, to pilot social investment and new delivery
models” (Social Finance 2018). Conversely, none of the US SIB profiles mention such
coordinated funding initiatives. However, in prior research, Fry (2019: 788) found that “federal
support of PFS [pay-for-success] was a major catalyst for PFS diffusion” in the US, for instance
through the Social Innovation Fund and the 2018 Social Impact Partnerships to Pay for Results
Act. Nonetheless, our findings conform with earlier research that the UK is taking more of a
centralized approach to implementing SIBs than the US (Heinrich and Kabourek 2019), with the
government providing higher amounts of support and subsidies (Williams 2020).
Second, this study also tested how SIBs can accelerate the process of social innovation
through piloting and scaling. We found that both the UK and the US are creating fewer SIBs at
the earlier stage of the process (i.e. during the pilot phase), where more policy experimentation is
likely to happen. Instead, they appear to be funding relatively more programs which scale up
previously proven interventions. In contrast to the UK, where about a third of SIBs are small-scale
pilots, testing the efficacy of promising programs at scale is the dominant
feature of the US SIB market. Thus, we also found minimal evidence that SIBs support the
entirety of the social innovation process, as this support primarily took place during the scaling
stage.
Additionally, SIBs in the US are generally employing more rigorous evaluation methods
than in the UK. In examining the language contained within the SIB profiles, only nine (19
percent) UK SIBs and seven (32 percent) US SIBs mentioned conducting an RCT or comparing
outcomes with a comparison group, control group, or historical baseline. Notably, the only three
programs which specifically mentioned the use of an RCT were from the US. In part, this is due
to the scale of the SIBs in the US, but many of the larger SIBs in the UK also do not employ
rigorous evaluation methodologies. Therefore, increasing the use of rigorous evaluations in both
countries, but especially the UK, is an area of potential emphasis in the future (Albertson et al.
2018). Additionally, in the US, nine out of the 22 SIBs state that they are the “first” to implement
a SIB within some unique context, suggesting that the US is still piloting the SIB tool as a social
innovation itself. Thus, it is likely that we will see further developments in how the US uses SIBs
in the future.
Third, as highlighted throughout our analysis, we found notable differences between the
use of SIBs in the UK and the US; overall, the analysis showed that early UK SIBs tend to be
much smaller than US SIBs and tend to rely more on social investors, in contrast to the private
investors prevalent in the US, conforming with earlier findings (Painter et al. 2018; Gustafsson-Wright et
al. 2015). These trends reflect the differing impetus behind SIB adoption in the two countries. As
explained by Albertson et al. (2018), the primary driver of SIBs in the UK was public sector
demand for the subcontracting of existing services; UK SIBs intended to encourage new entrants
into service delivery in order to increase competition and reduce costs, partly requiring
experimentation (Albertson et al. 2018). Meanwhile, the primary driver of SIBs in the US was
private sector supply of innovative social services, with US SIBs responding to the “large, and
unmet, demand for funding sources that can support transformation in social service delivery”,
especially in scaling previously proven interventions (Albertson et al. 2018: 104).
Fourth, as previously mentioned, there was very limited evidence in the database to
suggest that SIBs also work to include co-productive elements or have an explicit orientation for
diffusion. While language describing the SIBs did not mention co-production or co-creation
explicitly, some did include language suggesting the personalization of services. For example, 31
SIB profiles mentioned providing one-on-one, holistic, personalized, bespoke, tailored, intensive
and/or wrap-around support to respond to individual needs. The vast majority of these (25) were
in the UK, including six funded through the Innovation Fund and five funded through the Fair
Chance Fund. However, the Database language was vague and did not go further to discuss
specific co-production strategies. The most detail that a profile offered on potentially co-productive strategies came from the UK's end-of-life care program launched in September 2018, which
stated that it collected “feedback from carers and families on their experience of the service.”
Similarly, only seven profiles of UK SIBs financed through the Fair Chance Fund used
explicit diffusion-related language, stating that their “findings will inform policy direction.”
Additionally, a couple of larger-scale SIBs within the Database hinted at possible diffusion into
wider systems. The Green Infrastructure program in Washington D.C. has the longest duration
(30 years) of all programs as well as very large values of capital raised ($25 million) and
maximum outcome payments ($28.3 million). It is the only US SIB to address poverty and the
environment, and promotes wider systems change by reducing pollution to improve water
quality, as well as by reinvesting all SIB proceeds into additional green infrastructure projects.6
Finally, it is important to note that by focusing on the piloting and scaling stages as well
as on the role of private sector funding within SIBs, our analysis aligns more with the
technocratic or utilitarian paradigm of social innovation, omitting the more democratic or
radically-oriented elements such as co-production (Montgomery 2016; Ayob et al. 2016).
Further, although this paper found relatively little evidence that early-wave SIBs in the US and
UK displayed socially innovative characteristics, findings might also suggest “a flaw in the
design of SIBs” (Albertson et al. 2020: 7), which social innovation theory (including the radical
paradigm) could help address by guiding the continued evolution of SIBs.
CONCLUSION
In analyzing SIBs through the theoretical lens of social innovation, this article has found
limited evidence that SIBs are either an innovation in social finance or a tool which supports the
piloting and scaling stages of the social innovation process. The data revealed that around two-thirds of US SIBs, but only one-sixth of UK SIBs, received up-front funding from private
investors and that the majority of SIBs, particularly within the US, fund scaled programs which
6 Outside of the US and the UK, there was anecdotal evidence that Belgium’s Duo for a Job SIB did experience
rapid diffusion of innovative practices, in part, “because of deep stakeholder involvement” (Painter et al. 2018:
paragraph 17).
aim to test the effectiveness of social interventions. The analysis also identified several notable
differences between the US and UK in their use of SIBs. For instance, on average, the US has
much higher program budgets and potential outcome payments, longer program durations, and
nearly twice as many investors per SIB. In addition, while both countries are not systematically
utilizing rigorous evaluation methods, the US has been incorporating more RCTs and control
group/baseline comparisons than the UK. The analysis further uncovered interesting trends
within the SIB ecosystems of the two countries. Within the UK, the central government has
given considerable funding and support to SIBs, and Bridges has provided up-front capital to
nearly half of all programs. Meanwhile, in the US, stakeholders seem to be continuing to test
SIBs by applying them in novel ways.
This study also suggests fruitful opportunities for further research. First, studies could
more closely explore the source of investment capital from different sectors to determine the
amount of private funding SIBs have been able to leverage in comparison to other sources (e.g.
social, public and charitable investors). Research on SIB investment could also examine in more
depth the types of financial returns, capital protections, and levels of investment available to
investors. If considerable financial guarantees are necessary to attract more mainstream
investors, this could limit the amount of risk transfer from the public to the private sector
(Fitzgerald et al. 2020). As this analysis focused on US and UK SIBs as the leaders in SIB
implementation, it would also be highly valuable for researchers to compare SIB use around the
globe, including extending the analysis to Development Impact Bonds.
Additionally, given that this analysis examined the role of SIBs during the piloting and
scaling stages of the social innovation process, future studies could focus specifically on the co-production and diffusion stages, especially as growing numbers of SIB contracts will have
officially ended. Although previous work (e.g. Albertson et al. 2018 and Ronicle et al. 2014)
suggests that service users and their communities generally have little or no role in the
development of SIBs, emerging research has identified a number of promising ways in which
SIBs have begun to incorporate co-creation. For instance, Fox et al. (2021) found strengths-based
working and co-creation crucial to facilitating early-stage innovation in a case study of four UK
SIBs. Additional research could therefore continue to explore how SIB design can better
incorporate co-production and diffusion-oriented strategies, for instance as part of the push for
SIBs 2.0 which Baines et al. (2021: 7) argue should “put people at the heart of service design and
delivery; create the conditions for learning; and encourage wider systems change.”
CHAPTER 2 – CAN SOCIAL OUTCOMES CONTRACTS SPUR SYSTEMS CHANGE?
EXPLORING ASSET-BASED WORKING, INNOVATION, AND COLLABORATION
by Hilary Olson
ABSTRACT
Since their establishment over a decade ago, considerable research has been done on
Social Impact Bonds (SIBs). Nonetheless, many questions remain on their ability to meet their
high expectations, including their ability to impact broader service delivery systems. While a
number of scholars have posited that SIBs could spur more transformative change, others have
critiqued SIBs for focusing on individual rather than systemic causes to persistent social
challenges. To address this lack of evidence, this study evaluates the case of the Greater
Manchester Homes Partnership (GM Homes), asking: In what ways, and through which
mechanisms, did GM Homes produce systems-level effects? To answer these questions, the
analysis takes a novel process tracing approach and tests three main hypotheses related to asset-based working, innovation, and collaboration. Evidence for testing these hypotheses is drawn
from 21 interviews, primary documents, user data, and secondary literature. Overall, the analysis
uncovers compelling evidence that GM Homes helped generate systems-level effects,
particularly in the areas of housing provider policies and dual diagnosis services. Findings
further suggest that asset-based working was the most influential causal mechanism, but that
adaptive management (rather than innovation) and large-scale collaborative working were vital
to enabling this.
INTRODUCTION
As the world becomes more interconnected and complex, systemic problems, such as
homelessness, are on the rise (ODPM 2005). Such problems are frequently referred to as wicked
problems as they have elements that “are so interconnected that it is impossible to identify a
single cause or solution” (Grewatsch et al. 2021: 2). Nonetheless, traditional policy approaches
tend to focus on the effects of individual organizations or interventions on single aspects of
social problems as opposed to their root causes (Waddock et al. 2015; Haynes et al. 2020;
ODPM 2005; Abercrombie et al. 2015; Grewatsch et al. 2021). Such tendencies have prompted
frustration with the inability of conventional solutions to meet individuals’ needs, act
preventatively, and overcome operational silos (Abercrombie et al. 2015). As a result,
policymakers have increasingly been exploring the potential of innovative policy tools to address
entrenched social problems. One such approach which has generated increasing attention is the
Social Impact Bond (SIB).
The first SIB was established in the UK in 2010. Since that time, 275 SIBs have been
launched in 36 countries (Indigo 2023). As a financial model, SIBs differ from more traditional
social sector financing by obtaining up-front funding from third-party investors as opposed to
service providers (Albertson et al. 2018). To incentivize performance, investor repayment and
profit are made partially or wholly contingent upon the achievement of specified outcomes, with
payments typically coming from government commissioners (Albertson et al. 2018). By
encouraging intersectoral collaboration in pursuit of measurable outcomes, SIBs operate at the
intersection of impact investing, public-private partnerships, and evidence-based policymaking
(Ravi et al. 2019; Warner 2013). Its anticipated benefits include fostering innovation in
service delivery, stimulating collaboration, driving performance management, creating a culture
of monitoring and evaluation, and producing more sustainable impacts (Gustafsson-Wright et al.
2015).
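To make the outcomes-contingent repayment logic concrete, the sketch below works through a stylized contract; the rate card prices, outcome counts, investment amount, and payment cap are entirely hypothetical and are not drawn from any specific SIB.
```python
# Stylized outcomes-based contract with hypothetical figures
rate_card = {            # price paid per validated outcome (hypothetical)
    "accommodation_entered":   2_000,
    "accommodation_sustained": 3_500,
    "entered_employment":      1_500,
}
outcomes_achieved = {    # validated outcome counts (hypothetical)
    "accommodation_entered":   300,
    "accommodation_sustained": 220,
    "entered_employment":      60,
}
investment = 1_200_000   # up-front working capital from investors (hypothetical)
payment_cap = 1_600_000  # maximum the commissioner will pay (hypothetical)

# The commissioner pays only for validated outcomes, up to the contractual cap
outcome_payments = min(payment_cap,
                       sum(rate_card[k] * outcomes_achieved[k] for k in rate_card))

# Investors are repaid (and earn any return) solely from these outcome payments
investor_return = outcome_payments - investment
print(outcome_payments, investor_return)
```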
Although SIBs are now over a decade old, many questions remain on whether they can
truly live up to their high expectations (Williams 2020). For instance, while a number of scholars
have posited that SIBs could spur more transformative change (Fitzgerald et al. 2020), others
have critiqued SIBs for focusing on individual issues rather than the underlying systemic causes
that lead to vulnerability (Andreu 2018; Broccardo et al. 2020). On the one hand, Social Finance
(2009) originally envisioned SIBs as working to address intractable social problems by
establishing new ways of working and new partnerships. Similarly, Fraser et al. (2021: 5) state
that SIBs could serve as a disruptive innovation if they “challenge existing actors… to rethink
existing practices… through the introduction of new ideas.” On the other hand, Tse and Warner
(2020: 140) argue that SIBs rarely “focus their market logics on the behavior of the larger, more
powerful actors and institutions that contribute to social welfare concerns,” instead focusing on
program participant behaviors. Similarly, Sinclair et al. (2021) caution that SIBs are not wellsuited for addressing wicked problems, as these require transformational approaches which
empower service users. To address the lack of evidence on how SIBs affect their wider service
delivery ecosystems, this study evaluates the case of the Greater Manchester Homes Partnership
(GM Homes).
The GM Homes SIB funded a housing-first program designed to help individuals
experiencing rough sleeping enter into and sustain accommodation. The program,
which ran from 2018 to 2021, also aimed to improve outcomes related to well-being, mental
health, drug and alcohol misuse, and employment, education, and training. Working with three
local service providers, GM Homes offered wrap-around, asset-based support which focused on
individuals’ strengths and goals as opposed to their deficits and problems. Over its three-year
duration, GM Homes reached sufficient target outcomes to trigger maximum (re)payment for the
investors. In addition, a 2021 evaluation by the Greater Manchester Combined Authority
(GMCA) found evidence suggesting that GM Homes had helped catalyze wider systems change
as well – such as by creating new services, establishing new partnerships, and generating
programmatic insights for scaling.
Building on these preliminary findings, the present study seeks to rigorously investigate
the extent to which, and the mechanisms through which, GM Homes was able to generate
systems-level effects. To do so, the study employs a novel method for evaluating SIBs – process
tracing. Although early conceptions of SIBs envisioned the use of counterfactual evaluation
designs, such as Randomized Control Trials (RCTs), to attribute outcome achievement to a
program’s interventions, such approaches are unable to explain how or why such outcomes are
(or are not) achieved (Albertson et al. 2018; Heinrich and Kabourek 2019; Williams 2020; Fox
and Morris 2021; Hevenstone et al. 2023). Opening this black box, process tracing enables
researchers to make causal inferences by theorizing how potential mechanisms link causes and
effects within a case, then empirically supporting these claims using multiple types of evidence
(Beach 2016; Crasnow 2017; Beach 2017; Trampusch and Palier 2016).
Process tracing is often compared to conducting a criminal investigation (e.g. Van Evera
2016; Punton and Welle 2015; Collier 2011). When investigating a crime, a detective identifies
possible suspects and collects evidence to determine their guilt or innocence; when process
tracing, a researcher identifies possible mechanisms and evaluates evidence to confirm or
eliminate hypotheses. To assess whether evidence is sufficient and/or necessary to support
competing causal explanations, process tracing uses four tests: Straw-in-the-Wind, Hoop,
Smoking Gun, and Doubly Decisive (Van Evera 2016; Blatter and Haverland 2014; Bennett
2010). Like establishing a suspect’s motive, a Straw-in-the-Wind test is neither sufficient nor
necessary for confirming or eliminating a hypothesis; but this test helps to determine its
relevance. As with establishing a suspect’s (lack of) alibi, a Hoop test is necessary but not
sufficient for confirming a hypothesis; it can also eliminate it. Similar to uncovering significant
circumstantial evidence against a suspect, a Smoking Gun test is sufficient but not necessary for
confirming a hypothesis; however, failing this test cannot eliminate it. Akin to finding conclusive
proof of a suspect’s guilt, passing a Doubly Decisive test is both necessary and sufficient for
confirming a hypothesis; failing this test eliminates it. Table 2.1 expands upon this analogy
using the example of an investigation into the suspects of a bank robbery.
Table 2.1: Bank Robbery Analogy for Process Tracing Tests
Straw-in-the-Wind (neither necessary nor sufficient for guilt/causal inference): Passing affirms the relevance of a hypothesis; failing does not eliminate it. If the suspect was in significant debt, this motive is relevant but does not prove guilt. If no motive is found, this does not prove innocence.
Hoop (necessary but not sufficient): Passing supports a hypothesis; failing eliminates it. If the suspect was in town on the day of the robbery, this does not prove guilt. If the suspect was out of town, this alibi proves innocence.
Smoking Gun (sufficient but not necessary): Passing confirms a hypothesis; failing does not eliminate it. If the suspect was seen holding a bag of money at the scene of the crime, this circumstantial evidence strongly suggests guilt. If no such evidence is found, this does not prove innocence.
Doubly Decisive (both necessary and sufficient): Passing confirms a hypothesis and eliminates others; failing eliminates it. If security footage shows the robber’s face and it was the suspect, this conclusive proof establishes the suspect’s guilt and exonerates others. If not, it proves the suspect’s innocence.
In this study’s investigation into the GM Homes case, the analysis uses these process
tracing tests to examine three mechanisms suspected of enabling systems change: asset-based
working, innovation, and/or collaboration. The remainder of the paper proceeds as follows. First,
the paper provides a conceptual framework for how SIBs could generate system-level effects.
Second, the paper outlines the study’s research questions and hypotheses. Third, the paper
describes the analytical use of process tracing to evaluate evidence within the GM Homes case.
Fourth, the paper summarizes the main findings of the study along the dimensions of asset-based
working, innovation, and collaboration. Fifth, the paper discusses the implications of these
findings. Finally, the paper concludes and offers recommendations for practice and future
research.
CONCEPTUAL FRAMEWORK
To better understand how SIBs could affect their wider service ecosystems, this paper
begins with a brief introduction to systems change. Systems change is “an intentional process
designed to alter the status quo by shifting and realigning the form and function of a targeted
system with purposeful interventions” (Foster-Fishman et al. 2007: 197). In the context of public
service delivery, this system includes the individuals, organizations, knowledge, and practices
involved in providing a service (Waddell 2016). It is the interaction among these parts that forms
a functional system (Foster-Fishman et al. 2007). Such systems are not only complicated in the
number of parts they contain, but are complex in that these parts form dynamic and non-linear
relationships which produce emergent properties (Mansoor and Williams 2023). Due to this
complexity and interconnectedness, systems change initiatives take a holistic approach to
improving social outcomes by addressing the whole service delivery system rather than its
individual component parts (ODPM 2005; Morcol 2006; Foster-Fishman et al. 2007). To do so,
initiatives often work to challenge a system’s norms, beliefs, values, policies, behaviors,
resources, and decision-making processes (Foster-Fishman et al. 2007; Zellner and Campbell
2015).
Within the SIB field, as a growing number of contracts are coming to a close, several
research studies have begun to consider more seriously how SIBs might affect systems change.
Savell (2022: 8) offers one conceptual model, proposing that SIBs could contribute to systems
strengthening, or the “capability to deliver effective services to meet population needs,” through
the scaling of effective interventions and the promotion of outcomes-based working. As potential
causal pathways, Savell (2022) highlights the role of data-driven service design and delivery,
cross-sector partnerships, and outcomes-focused governance in facilitating improvements in
broader program design and implementation.
In addition, Gustafsson-Wright et al. (2020b) theorize that SIBs could affect broader
ecosystems through: 1) building a culture of monitoring and evaluation, 2) performance
management, 3) innovation in delivery, 4) private capital crowd-in, 5) reduced government risk,
6) collaboration, and 7) sustained impact. In reviewing the evidence to date, Gustafsson-Wright
et al. (2020b: 3) state that SIBs “seem to influence the systems where they are active by shifting
mindsets and building stakeholder capacity.” They further suggest that the model’s monitoring and
evaluation, performance management, and collaboration components appear to be the most prominent drivers
thus far (Gustafsson-Wright et al. 2020b).
Another framework from Carter et al. (2018) suggests that SIBs could facilitate public
sector reform by overcoming service fragmentation, risk-aversion, and reactive interventions. To
address fragmentation in service delivery, SIBs could spur collaboration among stakeholders
through “a shared focus on outcomes… [to] provide beneficiaries with more efficient and
effective joined-up care” (Carter et al. 2018: 10). To overcome risk aversion to trying new ways
of working, SIBs could encourage innovation by transferring financial risk away from
commissioners and service providers towards investors (Carter et al. 2018). Lastly, SIBs could
help reduce the reliance on expensive remedial services by funding more preventative
interventions which generate future savings by decreasing the need for services over time (Carter
et al. 2018).
This study proposes a similar conceptual model to Carter et al. (2018), but with a slight
reframing based on recent findings from Fox et al. (2022b). In studying four UK cases, Fox et al.
(2022b) found that asset-based working, and inherently co-creation, helped enable early-stage
innovation within the SIBs. Additionally, they discovered that “the more established SIBs
reported changes in local public service systems that they attributed to the strengths-based model
adopted by the SIBs.” (Fox et al. 2022b: 13). Therefore, this study views asset-based working,
innovation, and collaboration as potential causal mechanisms through which SIBs could affect
systems-level change. Each of these theoretical mechanisms are explained in greater detail
below, drawing not only on the conceptual models above, but broader literature as well.
Asset-Based Working
By focusing on outcome targets as opposed to input specifications, SIBs could give
service providers greater autonomy in designing and delivering services (Williams 2020;
Fitzgerald et al. 2020; Gustafsson-Wright et al. 2020a). This operational discretion could
encourage service providers to take a more personalized approach to service delivery – including
through asset-based working (Gustafsson-Wright and Osborne 2020b; Fox et al. 2022b). Unlike
deficit-based models which aim to address individual needs and problems, asset-based models
aim to utilize individual strengths and capabilities, such as self-efficacy, personal motivation,
skills, self-esteem, and sense of purpose (Wildman et al. 2019; Nel 2018; Blickem et al. 2018).
Taking an asset-based approach requires building trusting relationships between service users
and providers, as individuals have likely had negative experiences in accessing services
previously (Harrison et al. 2019). Developing trusting relationships is most effective if it is
accompanied by “changing environments, institutions, and relationships into ones that are
trustworthy” (Harrison et al. 2019: 8). Therefore, challenging the deficit-based approaches which
are entrenched within the existing service system becomes a vital goal of asset-based work
(Mathie and Cunningham 2003). Adopting asset-based working also requires organizational
change and shifts in mindsets as frontline staff and managers take on new roles and
responsibilities to give service users more choice and control (Friedli 2013; Nel 2018; Andrade
2016; Voorberg et al. 2017; Wildman et al. 2019). Thus, SIBs could catalyze system-level effects
by promoting more personalized, asset-based practices in order to spur cultural change
throughout the service system (see figure 2.1).
Figure 2.1: Potential Asset-Based Working Mechanism (Focus on Strengths and Capabilities → Build Trusting Relationships → Promote Wider Asset-Based Working → Change in Organizational Culture)
Innovation
Through the provision of up-front risk capital, SIBs could promote innovation and
experimentation in program design and delivery by shifting the financial risk of failure from
service providers to investors (Carter et al. 2018; Gustafsson-Wright et al. 2020b). To support
such innovation, SIBs could also help create a culture around monitoring and evaluation by
requiring the validation of outcomes to trigger payments (Gustafsson-Wright et al. 2020b).
Innovation can take the form of pilot testing experimental interventions or adapting interventions
during delivery (Carter et al. 2018; Gustafsson-Wright et al. 2020b). It can also entail providing
more personalized and localized services or promoting continuous improvement through
performance management (Carter et al. 2018). Developing innovative solutions to social
challenges requires testing and measuring the impacts of change efforts to generate actionable
knowledge (ODPM 2005; Abercrombie et al. 2015). Insights from measuring program data can
then help lead to the scaling of effective interventions, such as through evidence-based
policymaking, by demonstrating the potential of alternative approaches and generating learnings
on what works (Gustafsson-Wright et al. 2020b; Savell 2022; Mulgan 2013; Stead 2019). As
such, SIBs could produce ecosystem effects by evaluating innovative approaches in order to
scale up effective solutions throughout the system (see figure 2.2).
Figure 2.2: Potential Innovation Mechanism (Test or Adapt Interventions → Continually Learn and Improve → Evaluate and Manage Performance → Scale Effective Interventions)
Collaboration
SIBs could facilitate intersectoral collaboration by aligning interests and incentives
around common social goals (Gustafsson-Wright et al. 2020b; Broccardo et al. 2020).
Collaboration could not only involve SIB (contractual) partners, but wider networks of systems
actors as well (Carter et al. 2018). As different stakeholders have access to different resources
and networks, collaborative partnerships facilitate the transfer of knowledge, resources, and
norms across a system (Abercrombie et al. 2015; Foster-Fishman et al. 2007). Such partnerships
thus enable “an exchange of know-how that leads to increased efficiency and creative solutions”
(Gustafsson-Wright et al. 2015: 44). Collaborative working also helps improve service delivery
by better linking individuals with existing services and resources or even by creating new
services altogether (Wildman et al. 2019; Harrison et al. 2019). Further, as partners from
different sectors likely hold competing goals and values, intersectoral collaboration can represent
a learning process as partners challenge each other’s prior assumptions and practices (Smeets
2017). In turn, collaboration can leave behind new relationships which draw on new
combinations of knowledge and resources for social problem-solving (Smeets 2017). Therefore,
SIBs could contribute to systems change by building new relationships and partnerships among
system actors in order to join up services and resources (see figure 2.3).
Figure 2.3: Potential Collaboration Mechanism (Align on Goals and Values → Share Knowledge and Resources → Build New Relationships → Join Up Services)
RESEARCH QUESTIONS
Based on the conceptual framing above, this study’s research questions are: What
impacts did GM Homes have on its wider service delivery system? Through which
mechanism(s) did it generate these ecosystem effects: asset-based working, innovation, and/or
collaboration? The study’s three hypotheses are that GM Homes created systems-level effects
through:
1) promoting asset-based practices throughout the system to spur cultural change;
2) evaluating innovative approaches to scale up effective solutions throughout the system;
and/or
3) building new collaborative partnerships to join-up services throughout the system.
The study’s primary hypothesis is that asset-based working was most pivotal to
catalyzing systems change for two main reasons. First, asset-based working was one of the most
unique features of the GM Homes approach. Given that there is limited evidence to date on the ability
of SIBs to generate ecosystem effects, and that innovation and collaboration are more
common SIB features, these two mechanisms are not expected to have been as influential as
asset-based working. Second, Fox et al.’s (2022b: 9) study suggests that asset-based working
might itself spur more collaborative and innovative approaches, by “placing a greater emphasis
on front-line staff engaging with other services” (8) and giving front-line staff greater autonomy
to “co-produce solutions with individual clients in order to respond to unique contexts.” For
analytic clarity, the present study investigates all three mechanisms separately.
DATA AND METHODS
To evaluate the ways in which, and the mechanisms by which, GM Homes impacted its
service ecosystem, the analysis takes a process tracing approach. Process tracing seeks to explore
“the many and complex causes of a specific outcome” (Blatter and Haverland 2014: 59). To do
so, it uses generative causation to explore the mechanisms linking causes and effects (Stern et al.
2012). In contrast, more traditional counterfactual approaches to causation explore “the
outcomes resulting from manipulated causes” (White and Phillips 2012: 18). Counterfactual
evaluation designs, such as RCTs, are helpful in estimating the treatment effects of more
conventional programs which follow linear processes to incremental change (Svensson et al.
2018; Stern et al. 2012; White and Phillips 2012). However, these experimental approaches are
less appropriate for evaluating nonlinear social change processes which are often more dynamic
and disruptive (Svensson et al. 2018; Preskill and Beer 2012). Therefore, evaluations of complex
system change initiatives require different types of analytic approaches, such as realist and
theory-driven methods which explore “the roles of mechanisms and contextual factors in
producing policy impact” (Mansoor and Williams 2023: 11). Process tracing is thus well-suited
to analyzing the GM Homes case, as its mechanistic approach can account for the complexity of
the interdependent relationships driving systems-level effects (Mansoor and Williams 2023).
Using the four process tracing tests described previously, the analysis evaluates the
evidence in support of the study’s three hypothesized mechanisms: asset-based working,
innovation, and collaboration. For each mechanism, the analysis uses Straw-in-the-Wind tests to
assess if GM Homes was designed in a way which could plausibly generate system effects; Hoop
tests to assess if GM Homes was implemented in a way which could correlate to observed
system effects; Smoking Gun tests to assess if GM Homes contributed to observed system
effects; and Doubly Decisive tests to assess if GM Homes caused observed systems changes
(Lynn et al. 2022). This analytic approach is summarized in table 2.2 below.
It is important to note that in practice it is often difficult to find evidence that meets the
rigid criteria for undergoing a Doubly Decisive test (te Lintelo et al. 2020; Fox et al. 2022a;
Mahoney 2012). As such, many process tracing scholars suggest placing the most weight on
findings from the Hoop and Smoking Gun tests, which together can still provide high confidence
in a hypothesis (Fox et al. 2022a; Mahoney 2012; Wadeson et al. 2020; Punton and Welle 2015;
Bennett 2010). Moreover, as explained by Punton and Welle (2015: 7), categorizing evidence
into these different types of tests is “inevitably to some extent subjective, as one person’s straw
in the wind test might look more like a smoking gun to another.” Thus, these authors stress the
importance of transparency in detailing how evidence has been sorted.
Table 2.2: Process Tracing Test Approach
Straw-in-the-Wind (neither necessary nor sufficient for causal inference): Evidence that the design of GM Homes was plausibly relevant to system-level changes
Hoop (necessary but not sufficient): Evidence that the implementation of GM Homes was correlated with system-level changes
Smoking Gun (sufficient but not necessary): Evidence that GM Homes contributed to system-level changes
Doubly Decisive (both necessary and sufficient): Evidence that GM Homes caused system-level changes
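As a compact restatement of the test logic summarized in table 2.2, the sketch below encodes how passing or failing each test bears on a hypothesis; it simplifies the qualitative judgments involved, and the dictionary and function names are illustrative.
```python
# Implications of passing/failing each process tracing test for a hypothesis
TESTS = {
    # test name            (necessary, sufficient)
    "straw_in_the_wind": (False, False),
    "hoop":              (True,  False),
    "smoking_gun":       (False, True),
    "doubly_decisive":   (True,  True),
}

def update_support(test: str, passed: bool) -> str:
    """Return the qualitative implication of a test result for the hypothesis."""
    necessary, sufficient = TESTS[test]
    if passed:
        return "hypothesis confirmed" if sufficient else "hypothesis relevant/supported"
    return "hypothesis eliminated" if necessary else "hypothesis not eliminated"

# Example: evidence that GM Homes contributed to a system change passes a Smoking Gun test
print(update_support("smoking_gun", passed=True))    # hypothesis confirmed
print(update_support("hoop", passed=False))          # hypothesis eliminated
```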
Given that process tracing is a within-case method, appropriate case selection is vital to
validity (Trampusch and Palier 2016). Case selection for the study was theoretically-driven and
restricted to positive cases which contain each cause, outcome, and contextual condition that is
theoretically required to enable the mechanism (Trampusch and Palier 2016; Beach 2017). Using
“prior cross-case knowledge” about which cases were most likely to exhibit as many of these
relevant factors as possible (Beach 2017: 14), case selection was limited to the four cases
previously analyzed and purposively selected by Fox et al. (2022b). Out of these four cases, GM
Homes was ultimately selected, based on the relative amount and relevance of available data for
the case. A summary of the case follows.
GM Homes
The GM Homes Social Outcomes Contract (SOC)7 – a broader term for SIB – was
designed to provide permanent housing to individuals in the Greater Manchester area who were
experiencing rough sleeping, or “seen at least six times over the past two years” (GMCA 2021:
10). The main impetus behind the program was the need to address rising levels of rough
sleeping in Greater Manchester, compounded by “austerity and government cuts and a lack of
services” (Advisor). Funding to pay for the project’s target outcomes (if and only if they were
achieved) was offered by the Ministry of Housing, Communities, and Local Government
(MHCLG).8 Notably, this MHCLG funding was tied to “national initiatives to end rough
sleeping” (Commissioner), with seven other similar SOCs launched simultaneously (Johal and
Ng 2022). Ending rough sleeping was also one of the policy priorities for the incoming mayor,
Andy Burnham. Although the SOC had already been designed by the time Mayor Burnham
entered office, it received both considerable political support and pressure, becoming a high-profile program (e.g. Pidd 2017; 2018; 2021).
7 As noted by Ronicle et al. (2022), some researchers and practitioners prefer the term SOC to SIB. This preference was echoed by many of the GM Homes interviewees. As such, the remainder of this paper uses the term SOC when referring to the GM Homes case, but uses the term SIB interchangeably (for instance, when referencing quotes).
As a housing-led program, access to safe and stable accommodations was seen as key to
helping individuals in planning better futures. As one Delivery Partner described, the approach
was: “let’s get the accommodation sorted, from which they can rebuild their lives.” To support
sustainment of these accommodations, the program provided wrap-around support with physical
and mental health, wellbeing, and Education, Training, and Employment (ETE). The SOC’s
commissioner was GMCA. Its investors were One Manchester, Trafford Housing Trust, and a
group of social investors brought together by Bridges Outcome Partnerships (Bridges).9 Bridges
not only raised investment capital for the SOC but provided program management support as
well. The SOC’s service providers were Shelter, the Brick, and Great Places (GMCA 2021). In
total, the SOC contract specified that GMCA would pay investors for up to £2.6 million worth of
outcomes payments if targets were achieved over the three-year program.
8 Now called the Department for Levelling Up, Housing, and Communities (DLUHC).
9 This group included: Big Society Capital, European Investment Fund, Greater Manchester Pension Fund, Merseyside Pension Fund, Omidyar Network, Deutsche Bank Social Investments, Panahpur Charitable Trust, Pilotlight, Prince of Wales Charitable Foundation, and Trust for London (Indigo 2023).
After receiving over 500 potential referral names from the 10 participating local
authorities, 406 individuals signed up to start the program. Eight months ahead of schedule, the
SOC had successfully achieved and validated the full £2.6 million worth of outcomes payments.
Nonetheless, the partners chose to continue funding delivery for another eight months, ultimately
achieving £700,000 worth of additional outcomes at no cost to
the government. By the end of the program, the SOC had exceeded all of its initial internal
projections, aside from those related to ETE. A timeline of the case is provided in Appendix 2.1.
Data
As is common with process tracing research, this analysis relies on data collected from
interviews, primary documents, user data, and secondary literature (Trampusch and Palier 2016).
The SOC’s investment manager brokered initial access to partner organizations for interviews.
These nine organizations include the SOC’s housing providers, delivery partners, investors, and
managers. Volunteers and service users were not interviewed. A total of 21 interviews were
conducted in two phases. The first phase consisted of 6 individual and group interviews in 2020
while the program was still underway. This phase involved 11 individuals from 6 different
organizations. All interviews were conducted on-line and were not recorded; instead, detailed
notes were taken during the interviews, which were then written up formally and anonymized.
The second phase of interviews took place in 2022 after the program concluded. This phase
included 5 follow-up interviews and 10 first-time interviews, all done on an individual basis.
Interviews were again conducted on-line, but were also recorded and transcribed. See Appendix
2.2 for the interview protocols and Appendix 2.3 for a summary of these two interview phases.
Primary documents included internal reports (e.g., well-being assessment and exit review
summaries), program participant case studies, program newsletters, and evaluation reports.
Secondary literature consisted of news articles covering the program’s announcement,
implementation, progress, and final achievement. User data was provided in a dataset that
included variables on individual characteristics (e.g., age, gender), service needs at referral (e.g.,
learning disabilities, drug and alcohol misuse, offending behavior), engagement with various
program interventions, and progress on outcome metrics (e.g., sustained accommodation and
wellbeing scores).
All evidence was coded and analyzed according to the four process tracing tests.
Evidence was also partially weighted based on the source of the data (Wadeson et al. 2020). For
instance, when discussing on-the-ground implementation, more weight was given to responses
from delivery partners and housing providers; meanwhile, when discussing impacts on future
programming and funding decisions, more weight was given to responses from commissioners
and investors. It is also important to note that while this study intentionally involved post-program interviews to collect reflections and insights into what has happened since the SOC
ended, this nearly two-year time lapse did introduce the possibility of incomplete or inaccurate
responses on the part of interview participants. However, as this retrospective data has been
triangulated with interview data and program documents from when the program was still
running, the influence of any bias or errors is likely small.
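As a rough illustration of the source-based weighting described above, one possible encoding is sketched below. The weights, topic labels, and field names are hypothetical assumptions for exposition only; the study does not report a formal weighting scheme of this kind.

# Hypothetical illustration of source-based evidence weighting; the weights,
# topics, and data structure are assumptions, not taken from the study.
SOURCE_WEIGHTS = {
    "implementation": {"delivery partner": 1.0, "housing provider": 1.0,
                       "commissioner": 0.5, "investor": 0.5},
    "future funding": {"commissioner": 1.0, "investor": 1.0,
                       "delivery partner": 0.5, "housing provider": 0.5},
}

def weight_evidence(items):
    """Attach a weight to each coded evidence item based on its topic and source."""
    return [dict(item, weight=SOURCE_WEIGHTS[item["topic"]].get(item["source"], 0.5))
            for item in items]

evidence = [
    {"topic": "implementation", "source": "delivery partner", "test": "Hoop"},
    {"topic": "future funding", "source": "commissioner", "test": "Smoking Gun"},
]
print(weight_evidence(evidence))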
RESULTS
The analysis took place in two stages. First, it examined the types of system change
effects that were observed by GM Homes stakeholders. These changes primarily related to
housing provider policies and dual diagnosis services, as well as the design of a follow-on
project called the Housing First pilot. Second, the analysis took a deductive approach by testing
the three hypotheses related to asset-based working, innovation, and collaboration. Finally, this
section concludes with a brief reflection on the role that the SOC as a funding mechanism itself
played in supporting these causal processes.
Systems-Level Effects
Interviewees shared a range of views on the extent to which systems change was an
explicit goal of the SOC. For instance, one Delivery Partner believed that “It was always from
the very beginning, from the development of the bid… an expectation that we would identify
systemic issues. And we would deal with those by challenging them, seeking policy change,
starting new conversations with those people that were perpetuating, dealing with the structures,
the behaviors that sit behind all of that.” Meanwhile, a Housing Provider felt that, “we changed
things as we went along… rather than setting out on a practical level to change the system.” The
majority of interviewees agreed, however, that systems change was a goal from the beginning of
the program, mostly driven by the realization that the current system was not working and the
question of what could be done to improve it.
Additionally, interviewees expressed a variety of opinions on the levels of systems
change they thought the program achieved. Most agreed that the program was able to affect
‘little’ systems change by changing individual mindsets and organizational practices, as opposed
to ‘big’ systems change by changing government laws and policies. Similarly, while some
interviewees saw the program contributing to prevention efforts by addressing root causes, they
also noted that this work was more related to preventing people from returning to rough sleeping
rather than preventing people from entering into rough sleeping in the first place. Overall,
interviewees most often saw the program as contributing to systems change through revisions to
housing provider policies and the establishment of dual diagnosis services, particularly with
reference to their impacts on the Housing First pilot. Although interviewees also mentioned
progress towards systems change in the area of criminal justice, these effects did not appear to
be sustained after the SOC’s end.
Interviewees shared several ways in which they saw housing providers change their
policies to be more asset-based and trauma-informed in ways that continued even after the
SOC’s conclusion. These policies related to allocations, management, and evictions. In terms of
allocations policies, housing providers agreed to work with a cohort that had traditionally been
excluded from access to housing due to various reasons such as previous rent arrears, evictions,
or convictions. Housing providers also changed their management and evictions policies by
offering more supportive, person-centered services to help individuals maintain accommodation
rather than automatically pursuing enforcement. Interviewees largely saw these policy changes
as enduring after the end of the SOC. In particular, interviewees highlighted that housing
providers have continued to work in asset-based ways through the Housing First pilot, which
many saw as a type of follow-on program to the GM Homes SOC. However, even more broadly,
one Housing Provider explained that, “In terms of the housing providers… I think that has
continued, because a big bit of work in Manchester around trauma-informed ways of working
and things like that… everyone is sort of working in that way.”
Interviewees also frequently mentioned the system effects of the GM Homes SOC on
dual diagnosis services – again, particularly in the design of the Housing First pilot. During the
program, the GM Homes SOC pilot tested the use of a dual diagnosis nurse to help people access
both mental health and substance abuse services. As one Investor explained, working with this
nurse diverged from how people traditionally accessed mental health and substance abuse
services, because historically “the drug service was like, we can't work with them because it's
predominantly mental health issue. Mental health team said, well, we can't work with them…
until they're not using drugs... And there were some of these people stuck in the middle with not
being able to have access of any services.” Interviewees saw the effects of this pilot continuing
after the SOC’s end given that this approach was scaled by other programs. From one
Commissioner’s perspective, "the learning around the mental health kind of an activity, that
person-centered approach, was taken into Housing First. And we tried to bring it into other
projects. So I do think there’s a legacy left with the SIB.” One Manchester, one of the SOC’s
investors, has also since piloted having its own mental health worker on its staff.
Despite many positive examples of how the SOC was able to impact the broader system
even after its conclusion, it is important to note that interviewees also highlighted a number of
challenges that they faced in pursuing systems change, particularly related to funding. For
instance, interviewees reported that larger housing providers have been better able to continue
working in asset-based, trauma-informed ways as they had access to more funding and resources.
In contrast, smaller housing providers have often struggled to continue providing these more
intensive types of support once the SOC funding ended. Similarly, interviewees noted that while
there has been increased recognition of the importance of offering dual diagnosis services, the
ability of organizations to provide such services often depends on their capacity and funding. As
aptly summarized by a Commissioner, one of the difficulties of advocating
for the use of asset-based practices more broadly is that: “Other services want to be asset-based,
they want to be person-centered and flexible. And often that takes well-funded teams.”
Additionally, although many interviewees saw the Housing First pilot as building on the GM
Homes work, questions remained on the extent to which systems change efforts would continue
after the end of the pilot if it was not extended or mainstreamed. As one Housing Provider
shared, “If Housing First came to an end, that would be the end of this cohort being rehoused
with this level of support… the system will only remain changed for this sort of people with
these additional needs if the money comes with it.”
Asset-Based Working
In order to test the hypothesis that the SOC’s use of asset-based working led to systems
change, the analysis employed the following evidentiary tests. As a Straw-in-the-Wind test, the
analysis examined whether the program was designed to take an asset-based approach to service
delivery. As a Hoop test, the analysis examined if the program’s implementation of asset-based
working empowered service providers and program participants, as well as led to changes in
organizational culture and staff mindsets. As a Smoking Gun test, the analysis examined whether
program staff advocated for asset-based working and if system actors adopted asset-based
working. Finally, as a Doubly Decisive test, the analysis examined whether systems actors
adopted new organizational policies and mindsets because of the SOC’s advocacy for asset-based working. These tests are summarized in table 2.3 below.
Table 2.3: Asset-Based Working Hypothesis Tests
Straw-in-the-Wind (Relevance; neither necessary nor sufficient for causal inference): The program was designed to use an asset-based way of working.
Hoop (Correlation; necessary but not sufficient): Asset-based working empowered service users/providers and changed organizational culture/mindsets.
Smoking Gun (Contribution; sufficient but not necessary): Program staff advocated asset-based working; system actors began to work in asset-based ways.
Doubly-Decisive (Causation; necessary and sufficient): Systems actors changed their organizational culture/mindsets because the program promoted asset-based working.
First, in passing the Straw-in-the-Wind test, there were many examples of how the
program employed asset-based working. A key aspect of asset-based working was the building
of trusted relationships between frontline staff and program participants. This involved staff
getting to know individuals, including their interests and aspirations. Another key element of
asset-based working was supporting individuals with more practical tenancy-related tasks, such
as furnishing and decorating homes prior to individuals moving into accommodation. The staff
also used personalization funds, which were budgets that could be used based on each
individual’s preferences. In addition, all three of the delivery partners employed staff or
volunteers with lived experience, in this case previous personal experience with homelessness,
which they saw as important because “it meant that the engagement was more authentic”
(Delivery Partner). Importantly, interviewees saw working in an asset-based, person-centered,
trauma-informed way as being essentially one unified approach, rather than three approaches
which could be separated from each other.
Second, in passing the Hoop test, there was evidence that asset-based working was
empowering individuals, particularly by giving them more choice and agency. For instance, SOC
partners worked to give individuals a choice over where they lived – whether at the beginning of
their engagement or further down the line if they wanted to relocate through a managed move.
Relatedly, another principle was that all services be voluntary, as “there's no way of forcing that
on somebody; they either feel that's something they want to do or they're not there. All we can do
is really be a level of consistent background support” (Housing Provider). This meant not closing
people off from services, for instance due to lack of engagement or compliance, but instead
offering as many second, third, or fourth chances as individuals needed. Therefore, in many ways
asset-based working essentially was seen as: “you have to co-design a personalized intervention
with each individual person” (Investor). One example of such personalization was that of two
young men who were bored on the weekend and whose excessive drinking and anti-social
behavior were putting their tenancy at risk. Based on their interests, the delivery partner used the
personalization fund to buy them fishing equipment. By spending more time fishing on the
weekends, their behavior improved and they were able to successfully sustain their tenancy.
In addition to empowering individuals, there was evidence that asset-based working had
impacts on staff mindsets as well. In fact, many of the interviewees, in reflecting back on the
work they had done with GM Homes, saw it as some of the most meaningful work they had ever
done. As one Delivery Partner expressed, “I couldn't imagine going back to working in the other
ways that I've been working for so long. I felt very, very passionately about doing things
differently. Because the amount of systems change and the way that the innovation and the
influence to do things differently and advocate for people in a different way… there was never
the opportunity to do that type of innovation in other services.” Interviewees also described
applying learnings from their GM Homes experience into their future work. As another Delivery
Partner shared, “I constantly reflect back on to GM Homes… it’s not about offering somebody a
service, it’s about what we can do differently to give that person the services that they need in a
way they want it. So I personally use the learnings from GM Homes like in my every day.”
Third, in passing the Smoking Gun test, interviewees gave many examples of how the
SOC advocated for and worked to embed asset-based practices so that they would be sustained after the SOC’s
conclusion. As asset-based, trauma-informed working was new to many housing providers,
program staff had to spend considerable time and effort explaining the benefits. As one Delivery
Partner explained, “at the time, it felt like every interaction, we were training people to think in a
different way.” In addition, the SOC provided trauma-informed trainings to housing provider
staff, including some trainings delivered by the Greater Manchester Mental Health Trust. Training of
frontline staff was seen as an important step, because although the chief executives of housing
providers had agreed to taking a new approach, this did not necessarily mean that the
commitment or understanding trickled down to the front-line. Program partners also worked with
housing providers to draft and circulate seven principles outlining good practices for continuing
this new way of working after the program ended. These principles related to: 1) organizational
change, from leadership to frontline staff; 2) high levels of communication to facilitate effective
collaboration; 3) trauma-informed and asset-based practices; 4) flexible, person-centered
approaches; 5) minimum property standards (to help create a home, not just a house); 6) tenancy
preparations; and 7) linking up specialist services.
In addition to evidence on such advocacy efforts, housing providers themselves appeared
to demonstrate commitment to continuing with asset-based working beyond the program. For
instance, interviewees explained that some housing providers changed their staffing roles and
responsibilities following the SOC to further facilitate this new way of working. As one Delivery
Partner shared, “Some of the major housing associations who gave us properties for the SIB
recruited people afterwards to be support officers, rather than housing officers and who… tried
to maintain that culture of: don’t just go by what the book says, try to understand the person and
see if we could intervene, build a relationship with them.” To many interviewees, these changes
signified broader changes to organizational culture. As one Commissioner described, “from a
cultural point of view, I think a lot of the housing providers really did change the way that they
worked with people, for example, things that have been a bit more trauma informed.”
Fourth, although the analysis did not uncover explicit evidence to pass the Doubly-Decisive test, it is helpful to consider whether housing providers would have made such
organizational changes at that point in time had the SOC not been advocating for asset-based
working. Interviewee opinions on this point were mixed, as a number of interviewees noted that
the SOC was part of a more general push to make public services in Greater Manchester more
person-centered, asset-based, and trauma-informed. However, interviewees also highlighted that
this way of working was still something that had not previously been adopted by housing
providers. As one Delivery Partner explained, “So the education to housing providers about…
instead of being process-driven, are actually person-driven and psychologically informed and
trauma informed… we invited them into that as well, because that's something they weren't
getting in housing.” Therefore, while not conclusive proof, the overall evidence provides high
confidence that GM Homes in large part spurred the considerable cultural changes within housing
provider organizations towards more asset-based, trauma-informed working.
Innovation
In order to test the hypothesis that the SOC’s focus on innovation led to systems change,
the analysis employed the following evidentiary tests. As a Straw-in-the-Wind test, the analysis
investigated if the program was designed to enable innovation in service delivery. As a Hoop
test, the analysis investigated if the program tested new approaches and if the program collected
evidence through monitoring and evaluation activities. As a Smoking Gun test, the analysis
investigated if the program staff shared learnings and evidence on innovative approaches and if
system actors adopted new approaches. As a Doubly-Decisive test, the analysis investigated if
systems actors adopted or scaled new approaches because the SOC demonstrated evidence on the
effectiveness of such approaches. These tests are summarized in table 2.4 below.
Table 2.4: Innovation Hypothesis Tests
Straw-in-the-Wind (Relevance; neither necessary nor sufficient for causal inference): The program was designed to enable innovation in service delivery.
Hoop (Correlation; necessary but not sufficient): The program tested new approaches; the program collected evidence through monitoring and evaluation.
Smoking Gun (Contribution; sufficient but not necessary): Program partners shared evidence and learnings on effective new approaches; system actors adopted/scaled new approaches.
Doubly-Decisive (Causation; necessary and sufficient): Systems actors adopted/scaled new approaches because the program partners demonstrated evidence on their effectiveness.
First, for the Straw-in-the-Wind test, interviewees were divided on the extent to which
the program was designed to enable innovation. Some interviewees said explicitly that even from
the bid stage the program was designed to encourage innovation. For instance, one
Commissioner stated, “we wanted something that was different and innovative.” Meanwhile,
many interviewees, particularly delivery partners, did not see the program as necessarily setting
out to be innovative; instead, they viewed the SOC more generally as being flexibly designed to
enable ‘testing and learning’. When discussing this flexibility, interviewees most often referred
to the SOC’s funding structure and the ability to provide highly personalized services.
For instance, as one Delivery Partner explained, “that ability to learn from what's happening and
then change it is really important, because otherwise we're really trying to fit people into what
we want them to achieve.” Further, some interviewees saw the SOC at a higher level as testing
the effectiveness of different asset-based approaches, as the three delivery partners’ models each
differed slightly. Thus, as one Investor said, the SOC enabled partners to “experiment with
different approaches and learn as we went along, because I think we all recognized that there
wasn't much evidence about what works.”
Second, for the Hoop test, interviewees again were divided on whether the program did
ultimately test innovative approaches. Delivery partners often expressed that what others viewed
as innovative they viewed as a natural extension of asset-based working. For instance, in
reference to the use of the personalization fund in the ‘fishing rod’ story, one Investor stated,
“Some people needed a fishing rod, we got them a fishing rod… none of those things are
particularly innovative.” Thus, rather than viewing such approaches as arising from a focus on
innovation, interviewees saw them instead as necessary for upholding the ethos of asset-based
working. Nonetheless, the SOC was seen as enabling such personalized approaches by offering
more flexible funding, as opposed to more traditional funding which specifies inputs and
activities. As one Investor shared: “no commissioner would ever look at a whole load of data and
go: well, on average, what works is giving people a fishing rod… whereas what the SIB tried to
do was to design… from the person rather than design from the average.”
Meanwhile, other interviewees pointed to the specific pilots developed over the course of
the program as examples of innovations. These pilots included: managed moves for individuals
who needed to be relocated to new accommodations; a partnership with the Bond Board to
provide private accommodation; a dual diagnosis nurse for accessing both mental health and
substance misuse services; a partnership with the Growth Company to access direct referrals into
employment and training services; a partnership with probation officers and courts to help
individuals avoid short-term prison stays for minor offenses; and a biometric ID for accessing
personal documents using an individual’s thumbprint. Notably, as one Investor highlighted, most
of “the innovations came as we learned… when we found a barrier, we would bring in an
innovation to overcome that barrier.”
Despite disagreements over what constituted an innovation, interviewees agreed the
program did set out to collect data to facilitate learning, including through monitoring and
evaluation. Interviewees described monitoring as an opportunity to get a bigger picture on the
program overall and for partners to support each other in addressing barriers. Monitoring was
done through monthly or bimonthly meetings among SOC partners to “talk about successes and
challenges and what we could do differently” (Investor). Interestingly, while monitoring
involved measuring progress against targets, interviewees said the focus remained on what
program participants and staff needed to be successful. As one Delivery Partner described, “it
created a framework for us to try… whatever we wanted to try to help somebody achieve the
outcome.” However, interviewees did note that the GM Homes evaluation was done towards the
end of the program, so it was hard to capture emergent learnings such as which pilots were or
were not working.
Third, for the Smoking Gun test, interviewees gave examples of a few different ways in
which they saw the SOC’s learnings being shared more broadly. At a more formal level,
interviewees noted that the SOC’s evaluation was intended to share information about effective
pilots and ongoing challenges to help influence future programs and policies. As one Investor
expressed, “other people can come and look and say… how can we try and replicate the things
that worked well? And try and fix some of the things that didn’t? And so hopefully from one
project to another you get progress.” Evaluation was also seen as a tool for promoting
accountability and for securing future funding. For instance, as one Delivery Partner said,
funders and governments “want to know: what am I getting for my money?”
In addition, a few interviewees noted the role that Bridges played in communicating the
program’s findings within their network, for instance through learning events and conferences.
Interviewees also mentioned sharing program learnings as a part of their advocacy efforts in
trying to convince other systems actors to change their ways of working. For instance, one
Investor said that, “when we were talking to different organizations about the benefits of
working in this way, we were able to use tangible situations to evidence where it worked
working like this versus what had happened traditionally.”
Interviewees also often discussed ways in which learnings were shared more informally,
especially internally within their own organizations. As interviewees noted, Bridges has used
their learnings to feed insights back into other programs that they support. For example, although
the SOC faced many challenges in implementing the Growth Company employment pilot, this
learning experience prompted Bridges to create ETE specialist roles within delivery partner
organizations for their Kirklees Better Outcome Partnership. Further, interviewees provided
many examples of how such internal knowledge transfers contributed to the Housing First pilot
specifically, as it shared many program partners with the GM Homes SOC. For instance, one
Housing Provider shared that, “I think we’d taken our own personal learnings… through into the
Housing First program.” In particular, interviewees highlighted that while the GM Homes SOC
was only able to hire one dual diagnosis nurse to provide services in one borough due to Mental
Health Trust policies, the Housing First pilot was able to scale up this approach, hiring three
workers to provide services across all ten boroughs.
Fourth, for the Doubly-Decisive test, as before, there is partial evidence that the GM
Homes SOC’s evaluation of new approaches led to the scaling of effective practices. As noted
previously, interviewees disagreed on the extent to which the SOC implemented truly innovative
interventions. However, interviewees did agree that GM Homes’ learnings from its dual
diagnosis pilot did encourage others to adopt or scale this approach. For instance, Great Places
was a delivery partner for both programs, and one Great Places employee shared that, “I call ours
[GM Homes] the precursor to the Housing First program… I think it really directly influenced
how Housing First was delivered. Yeah, I think they took a lot of learning from our program.”
Further, although one Commissioner shared that, “the evaluation of the learnings from that [GM
Homes] has gone on to kind of influence Housing First,” most interviewees viewed the informal
and internal knowledge transfers as more influential. As another Commissioner expressed, “Do I
think that… evaluation will have informed Housing First? No, not directly… it’s informal policy
transfers… [it’s] the crossover of partners.” Overall, these findings suggest that the mechanism
as initially theorized needs revising, although there is still strong evidence that certain elements of this
process led to systems change. In particular, the mechanism appeared to center more on the role
of flexibility and the use of data to facilitate continuous program improvements.
Collaboration
In order to test the hypothesis that the SOC’s collaborative nature led to systems change,
the analysis employed the following evidentiary tests. As a Straw-in-the-Wind test, the analysis
assessed if the program was designed to encourage collaborative working. As a Hoop test, the
analysis assessed if the program brought together a range of organizations and established new
relationships. As a Smoking Gun test, the analysis assessed if program partners encouraged the
joining up of services and if systems actors shared knowledge and resources. As a DoublyDecisive test, the analysis assessed if systems actors created new relationships to join up services
because of the SOC’s collaborative approach. These tests are summarized in table 2.5 below.
Table 2.5: Collaboration Hypothesis Tests
Straw-in-the-Wind (Relevance; neither necessary nor sufficient for causal inference): The program was designed to encourage collaborative working.
Hoop (Correlation; necessary but not sufficient): The program brought together a diverse range of system actors and established new relationships.
Smoking Gun (Contribution; sufficient but not necessary): Program partners coordinated the joining-up of services; systems actors shared knowledge and resources.
Doubly-Decisive (Causation; necessary and sufficient): Systems actors created new relationships to join up services (share knowledge and resources) because of the program’s collaborative working.
First, for the Straw-in-the-Wind test, there is strong evidence that the program was
designed to be collaborative, even starting at the bid stage. The original bid was put together by
two of the SOC’s investors: One Manchester and Trafford Housing Trust, both housing
providers. Interestingly, at the same time another competing bid, also led by housing providers,
was being pursued. However, leadership from these housing providers decided that “as a sector,
this doesn’t make sense, we need to come together for one bid” (Housing Provider). Thus, the
bid ended up involving over twenty housing providers who agreed to take a collaborative, rather
than competitive approach. This also aligned well with the GMCA’s vision for the program
because, as one Commissioner shared, “We wanted like a consortium bid… [a] collaborative
approach.”
Second, for the Hoop test, the SOC involved a large number of organizations from
different sectors, including over twenty housing providers, three homelessness service providers,
three social investors, and one combined authority representing ten local authorities. As a part of
the bid’s collaborative focus, the investors specifically wanted to bring in delivery partners with
considerable local expertise to drive the program’s design. In addition, One Manchester and
Trafford Housing Trust wanted to bring on Bridges as the third investor, as Bridges had prior
experience as a SOC investor. Program referrals were also designed to go through the region’s
ten local authorities. Notably, bringing together a diverse group of organizations did pose some
challenges; as such, a number of interviewees also highlighted the importance of creating a
shared vision at the beginning of the program to align efforts.
In addition, interviewees provided many examples of how the SOC created new
partnerships. For instance, Shelter and Northwards Housing had never worked together before.
Through the SOC, the two organizations began to have monthly meetings to assess participant
progress and discuss any challenges. As one Northwards employee explained, Shelter “were very
keen to build that relationship… rather than having difficult conversations… you can work
together, you do joint visits, joint meetings, that sort of thing to resolve issues. So then the
participants themselves or tenants are able to see that you're working together.” Similarly,
gaining the buy-in of local authorities in order to collaborate effectively sometimes involved the
SOC’s program manager “going in with the relevant delivery partners for their areas, and sitting
down and explaining the project and explaining: we’re not here competing with you, we’re here
purely to support” (Investor).
Interestingly, one element of creating new partnerships that was not originally included in
the hypothesized mechanism but that was mentioned by many interviewees as being crucial to
the program’s success was establishing commitments. For instance, one Investor noted the
importance of “the commitment across the partnership that we all signed up to that learning way
of working.” In particular, many interviewees shared that gaining firm commitments from
housing providers at the beginning of the SOC helped hold them accountable to changing their
ways of working over the course of the program. Housing providers following through on their
agreement to change their allocations policies was crucial because, as explained by one Investor,
“[We] didn't know whether that was actually going to happen… we've run programs in the past
where housing associations commit to stuff at the beginning, and then it never really follows
through… when they meet the people that we're asking them to house sometimes they say: No,
we can't possibly do that.”
Third, for the Smoking Gun test, interviewees provided many examples of how the
program partners worked to join up services. Largely, this push for a joined-up approach was
spurred by the “recognition that current systems didn't work, current systems were… operating
in silos and potentially conflicting with each other… And that the only way that you're going to
overcome homelessness and rough sleeping is through that systemic collaboration” (Investor).
One example of the SOC’s efforts to join up services was in the Diversion from Custody pilot.
As a part of this pilot, the program manager worked with local courts and probation officers to
avoid short-term prison sentences for program participants who committed minor offenses so as
not to disrupt their accommodation. As one Commissioner explained, “If somebody missed their
probation appointment, for example, normally you would be recalled for one or two weeks for
that. And what that usually means is that any work that's been done, any kind of accommodation
that's been sort of gotten, that just all falls apart whilst that person is in prison.” While this pilot
was largely seen as a success among the interviewees, it was mostly seen as being restricted to
the program’s duration and not sustained after the SOC’s end – in part because it relied so
heavily on strong partnership coordination by the SOC’s program manager.
Another example of both joining up services as well as sharing knowledge and resources
more generally was seen in the SOC’s work with the Greater Manchester Mental Health Trust.
Not only did the SOC partners work with the Trust to create the dual diagnosis pilot, as
explained above, but the two also worked together to provide trauma-informed trainings. As one
Investor explained, “I think our collaboration with Greater Manchester Mental Health Trust was
really helpful as well, because it gave a very strong cross sector partnership viewpoint on the
benefits of this way of working. So when we spoke to housing providers, they knew it wasn't just
us that was saying this was a beneficial way of working because we had the research behind
trauma-informed practice and strength-based working to empower individuals.”
Interviewees also highlighted that sharing knowledge and resources among housing
providers and other services was crucial for the transition at the end of the SOC as well. Six
months prior to the end of the program, as a part of their transition plan to help facilitate a
continuity of services for participants, program staff assessed each participant’s risk of not
sustaining their tenancy. The team then spoke to local authorities and housing providers about
any ongoing support needs to facilitate the transition, as well as shared personal preferences such
as the best time to contact different individuals. SOC staff also introduced participants to housing
officers and support workers to start to build relationships and trust to continue this work.
Fourth, for the Doubly-Decisive test, there was some evidence that the SOC’s
collaborative nature encouraged the creation of new partnerships and the joining-up of services.
Perhaps the most convincing piece of evidence was that while there was an existing partnership
of around 25 housing providers who had been working together for several years before the
SOC, no partnership at this scale had ever been implemented across Greater Manchester
before. Interviewees by and large agreed that the program would not have been successful, or
even possible, without such committed collaboration from the housing providers. Many saw this
new, large-scale relationship working as enduring, as it continued into the Housing First pilot as
well. One Commissioner even stated that “I do think the SIB was a major turning point. It was
the first GM project, started off a bit rocky, a bit competitive, but it paved the way for a lot of
that [future] collaboration.” However, as seen with the joining up of services through the
Diversion from Custody pilot, some new partnerships that were created as a part of the SOC did
not continue after the program’s end. Appendix 2.4 provides tables summarizing this evidence,
along with the evidence found for the two other mechanisms.
DISCUSSION
The process tracing analysis provides high confidence that asset-based working
contributed to ecosystem effects within the GM Homes case. Evidence showed that asset-based
working was a crucial component to systems changes as it helped address root causes, both in
terms of addressing personal traumas as well as highlighting systemic barriers. Further, building
trusting relationships with program participants was vital for effective asset-based working,
because most participants had lost trust in the system due to years of negative experiences in
trying to access services. In particular, asset-based working seemed vital for changing
organizational culture and mindsets within housing provider organizations, which endured after
the SOC’s completion. Further, while many interviewees noted that asset-based working “had
been done in local areas on a small scale across GM,” it was not the status quo for many
services, and the SOC was the first Greater Manchester-wide program to be asset-based.
such, most participants had never experienced that type of service relationship before.
In contrast, the analysis offers only moderate confidence as to whether innovation (on its
own) was a significant mechanism for systems change within the case. Most interviewees saw
the program’s ‘innovations’ or pilots as strategies for overcoming systems barriers when working
in a person-centered way, rather than trying something totally new. Instead, they expressed that
the focus was more on testing and learning to collect evidence of what works. As one
Commissioner explained, “we kind of already knew a lot of this stuff, because it… has been
done in small pockets. But it's almost like… an echo chamber… We're kind of going for the
route: get just as much evidence as we possibly can that this works and eventually it will crack
and eventually it will spill through into how we work with people.” These somewhat conflicting
sentiments reflect a broader theme within the literature on the general lack of shared understanding of
what is meant by ‘innovation,’ with some viewing it as testing something totally new and others
viewing it as trying something in a new context (Hammond et al. 2021; Gustafsson-Wright et al.
2020b). Notably, Phills et al. (2008) incorporate both of these dimensions into their definition,
stating that an innovation does not necessarily have to be an original ‘invention’ – it can also be
an existing product or process that is novel to a new group of users or a new context, and that
improves the effectiveness, efficiency, sustainability, or fairness of existing solutions. Defined in
this way, the innovation mechanism becomes more convincing and better accounts for both
perspectives from interviewees.
Further, while interviewees noted that monitoring and evaluation were important for
demonstrating the effectiveness of different approaches, they thought that learnings were more
often shared internally through informal policy transfers as opposed to externally through formal
evaluation reports. For instance, the scaling of the dual diagnosis service in the Housing First
pilot appeared largely driven by “the crossover of partners” (Commissioner) between that
program and the GM Homes SOC. These insights suggest that the role of formal SOC
evaluations, in their current form, may be more related to improving transparency and
accountability, rather than capturing learnings for scaling (De Pieri et al. 2022). Given these
findings, a revised ‘innovation’ mechanism might instead be that systems changes were driven
by ‘adaptive management.’ Adaptive management is a data-driven, learning-oriented approach to
program design and implementation that integrates flexibility, monitoring, and stakeholder
engagement to facilitate continuous improvement (Prieto Martín et al. 2020). This reframing of
the mechanism as ‘adaptive management’ resonates with both Savell’s (2022) and Gustafsson-Wright et al.’s (2020b) conceptions around outcome-focused delivery and performance
management, respectively.
As with asset-based working, the analysis established high confidence that collaboration
enabled the SOC to impact its service ecosystem. In particular, the scale of the SOC’s
partnership seemed to enable systems change efforts, especially around housing provider
policies. A large factor in GM Homes’ ability to produce ecosystem effects was the SOC’s high
level of partnership working, which helped hold housing providers accountable and follow
through on their commitments. While collaborative working also helped drive the success of the
diversion from custody pilot, this intervention was not sustained after the SOC ended, which some
interviewees attributed to the fact that the SOC’s program manager played such a central role in
maintaining the relationships with the other criminal justice stakeholders.
The study’s results further uncovered important interactions among the three
mechanisms. In support of Fox et al.’s (2022b) preliminary findings, the evidence suggests that
effective asset-based working inherently required collaboration and adaptability to provide wrap-around, tailored services based on individual needs. For example, encouraging housing providers
to move away from automatic enforcement processes to more supportive and person-centered
processes required delivery partners to build more trusting relationships with both housing
providers and program participants. Similarly, by adapting services to individual interests and
goals, delivery partners were better able to encourage individuals to continue to try new
approaches when they otherwise might “just give up trying because they’ve never had success
before” (Commissioner). Therefore, it appears that while asset-based working is sufficient to
produce systems-level effects, all three mechanisms are jointly necessary.
SOC Effects
When considering these results, it is further helpful to discuss the role of the SOC as a
funding mechanism itself in facilitating systems change. Interviewees shared a range of opinions
on the importance of the SOC model in the program’s success. By and large, interviewees agreed
that the SOC provided a much more flexible way of working than traditional commissioning
models, as public contracts are typically highly specified and include strict delivery budgets. In
contrast, the flexible funding provided by the SOC gave service providers the freedom to
“actually really listen and be person-led and empower the people that were working with”
(Investor). For this reason, many interviewees felt that the SOC model was better suited for
supporting systems change efforts than traditional models. For instance, one Delivery Partner
stated that flexibility “gives people the choice of how they engage. And you only really get that
on these contracts.” Similarly, one of the SOC’s Investors said, “I absolutely do believe that that
is the best way of trying to create a program that is all about learning, experimenting, innovating,
iterating and keep trying new things.”
On the other hand, others pointed out that while SOCs can help facilitate asset-based
working and systems change efforts, these approaches “are certainly not exclusive to SIBs”
(Advisor). Further, because SOCs are costly to set up and manage, for instance in defining
outcomes and measuring performance, a flexibly designed fee-for-service
contract could sometimes be easier to implement. Another challenge discussed by interviewees was the
relatively short-term nature of the SOC funding in comparison to mainstream public services.
Interviewees shared that time-limited funding for contract working can increase staff turnover,
which can be disruptive for participant progress, as individuals need consistency to build trust
and engage with services. Moreover, if there is no plan for extending funding for services
beyond the SOC, then individuals could lose access to vital support once the contract ends.
Relatedly, SOC funding for entrenched systemic issues such as rough sleeping often only comes
from one commissioning source, despite contributing to outcomes in a range of policy areas. As
one Commissioner highlighted, “it's a challenge when you're paying for health services… from
housing money, rather than health money.” Therefore, many interviewees felt that truly
sustainable systems change would only really be possible if such programs became regularly
funded services, potentially with contributions from a combination of departmental budgets, as
opposed to one-off programs.
CONCLUSION
Overall, the analysis uncovered compelling evidence that the GM Homes SOC helped
contribute to systems change, particularly in the areas of housing provider policies and dual
diagnosis services. The analysis established the highest confidence in the asset-based working
mechanism, as well as high confidence in the collaboration mechanism. Conversely, evidence
suggested the need to revise the innovation mechanism to focus more on the role of adaptive
management in facilitating continuous program improvements. The findings further revealed
important linkages among all three mechanisms – namely, that both adaptive management and
large-scale collaborative working were vital to enabling effective asset-based working. In sum,
findings provide novel evidence in support of the claims that SIBs (and SOCs) can be designed
in ways which promote more transformative change.
It is also important to note some of this study’s methodological limitations. One common
critique of process tracing is that of infinite regress, or the potential to continue to specify the
steps in a causal chain at increasingly fine-grained levels (Bennett 2010). As researchers must
bound their analyses, all mechanistic explanations are incomplete and provisional to some extent
(Fox et al. 2022a, citing Bennett and Checkel 2015). In recognition of this concern, this study has
attempted to clearly explain the level at which its three mechanisms were theoretically defined
(Fox et al. 2022a, citing Bennett and Checkel 2015). Another critique is that process tracing, as a
within-case method, contains a greater number of variables than observations (Bennett 2010).
This degrees of freedom problem could lead to indeterminacy, or a situation in which there is an
infinite number of possible solutions (Waldner 2012). However, as process tracing relies on
causal-process observations rather than dataset observations, it is not the amount of evidence that
validates this study’s analysis; rather, it is the quality of the data and its ability to support
different causal claims (Waldner 2012; Bennett 2010). Lastly, process tracing is often unable to
establish that a mechanism was exclusively responsible for producing an outcome (Befani and
Mayne 2014). For instance, process tracing evidence rarely meets the high demands of doubly-
decisive tests in practice, as was the case in this study. Additionally, other mechanisms that might
be contributing to the observed outcomes in parallel could have been excluded from the analysis
(Befani and Mayne 2014). As such, it is important to understand this study’s results as
determining if each mechanism – as a part of a causal package along with other supporting
factors – played a contributory role in the specific case (Befani and Mayne 2014).
Given its sensitivity to case-specific contextual factors, process tracing also has greater
internal than external validity and findings are not necessarily generalizable unless it can be
shown that another positive case is contextually similar (Trampusch and Palier 2016; Waldner
2012; Blatter and Haverland 2014; Beach 2017). Therefore, it is important to highlight the most
distinctive characteristics of the GM Homes case when considering the wider applicability of this
study’s findings. As mentioned by many interviewees, GM Homes was able to tap into an
existing network of over 20 housing providers who had been working together for several years
prior to the SOC’s launch. This partnership allowed the SOC to foster collaborative working,
underpinned by strong commitments and accountability, on a much larger scale than previously
possible. This housing network also had strong working relationships with both Mayor Burnham
and the GMCA, lending the SOC considerable political support and visibility. Thus, the ability
of SIBs (and SOCs) to generate systems-level effects may partly depend upon the
strength of existing partnerships and political alignment.
Findings offer some potential areas for future research, as well. For example, while this
study reframed Carter et al.’s (2018) mechanism of prevention as asset-based working, asset-based working may not be as applicable as a mechanism in other policy domains. Therefore,
future studies should continue to refine and test this conceptual framework to evaluate the effects
of SIBs (and SOCs) in other service systems – including in other countries or regions, too. In line
with growing calls for evidence on ‘SIB effects’ more generally, future evaluations of SIB
ecosystem effects should also try to further untangle the specific contributions of the SIB from
the contributions of the program itself to systems-level change (Gallucci et al. 2019; Carter et al.
2018).
Additionally, the study’s findings suggest some implications for practice. For instance,
SIB (and SOC) funding could better support systems change efforts by utilizing longer-term
contracts as well as drawing on interdepartmental budgets. SIBs could also engage
representatives from more sectors from the outset, such as through steering groups or advisory
boards. In the area of rough sleeping, this could mean bringing in partners from the mental
health, criminal justice, and employment sectors to take a more active role in systems
orchestration, as GM Homes did with the housing sector. Such wider intersectoral involvement,
both financially and operationally, would help systems actors “to understand the interlinkages
between sectors as opposed to specific organizations” (Investor) as well as create more joint
ownership in addressing systemic challenges.
CHAPTER 3 – RESULTS-BASED FUNDING VIA DEVELOPMENT IMPACT BONDS:
STAKEHOLDER PERCEPTIONS ON BENEFITS AND COSTS
by Hilary Olson
ABSTRACT
In response to growing social challenges compounded by stretched public and
philanthropic resources, the international development sector has become increasingly interested
in innovative financial mechanisms such as Development Impact Bonds (DIBs). Nonetheless, of
the 275 impact bonds worldwide, only 21 have been DIBs; the rest have been Social Impact
Bonds (SIBs). Consequently, impact bond research has tended to focus on the implementation of
SIBs, especially in high-income countries. This study contributes to the growing field of research
on DIBs in low- and middle-income countries (LMICs) by exploring stakeholder perceptions on
the model’s benefits and costs thus far. The study asks: What are the contextual factors and
anticipated model benefits that are motivating stakeholders to use DIBs? What costs and
challenges do stakeholders experience in implementing DIBs? To answer these questions, the
study analyzes data from 16 interviews with stakeholders from 7 recent DIBs. The study finds
that stakeholders wanted to test the effectiveness of DIBs, and that the DIB’s provision of
outcomes-focused risk capital helped enable a collaborative and adaptive approach to service
delivery. This approach then generated evidence of blended returns and increased organizational
capacity, as well as spurred cultural shifts towards more sustainable outcomes-based working.
Stakeholders also experienced costs and challenges in implementation – especially around
garnering institutional buy-in, negotiating contract terms, and managing changing relationships.
INTRODUCTION
Over the last few decades, the world has seen considerable progress in human
development and standards of living (Gustafsson-Wright et al. 2017). However, pressing social challenges remain in areas such as poverty, income inequality, and social inclusion, which require
collaborative, intersectoral solutions (Fahrudi 2020; Bortagaray and Ordóñez-Matamoros 2012).
Reflecting the urgency of addressing such issues is the United Nations’ (UN) 2030 Agenda,
outlining 17 Sustainable Development Goals (SDGs) (Halkos and Gkampoura 2021). With less
than 10 years left to achieve these ambitious goals, there is a sizeable funding gap between
existing government budgets and philanthropic grants and the resources required to meet these
highly complex global problems (Jackson 2013). As a result, there is a growing call from the
international community for innovative social financing mechanisms which can attract additional
resources for development as well as improve the effectiveness of aid (Rizzello and Kabli 2020).
One such mechanism is results-based funding.
Results-based funding is a contract in which a principal agrees to give funding to an
agent upon the achievement of pre-determined development results (Pearson 2011; Grittner
2013). As traditional input-based development funding has “often failed to deliver the desired
results,” results-based funding aims to “link funding more closely to measurable results”
(Grittner 2013: 3). Results-based funding can take the form of results-based aid (RBA), results-based financing (RBF), or a hybrid of the two. In RBA, the contract is between a donor, such as
a bilateral or multilateral aid organization, and a government entity; in RBF, the contract is
between a domestic government and a service provider, such as a non-profit, private, public, or
non-governmental organization (NGO) (Pearson 2011; Klingebiel 2012; Grittner 2013; Pearson
et al. 2010). Meanwhile, in a hybrid model, the contract is between a donor and a service
provider directly (Pearson et al. 2010; Grittner 2013). An increasingly popular hybrid model for
results-based funding is the Development Impact Bond (DIB).
Contrary to their name, DIBs are not technically bonds as they are not tradeable
instruments (Rizzello and Kabli 2020). Instead, DIBs are a type of contractual partnership
involving four central stakeholders: investors, service providers, outcome funders, and
intermediaries. Investors provide up-front capital to service providers; service providers deliver
an intervention; outcome funders repay investors based on the intervention’s achievement of predetermined outcomes; and intermediaries offer assistance with contract development and
program design (Gallucci et al. 2019; Williams 2020; Gruyter et al. 2020; Rizzello and Kabli
2020). As a type of impact bond for funding development, a DIB’s outcome payments are made by a
donor organization such as a foundation, NGO, or international aid agency; in Social Impact
Bonds (SIBs), a (domestic) government actor pays for outcomes (Loraque 2018).
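To make these cash flows concrete, the stylized sketch below traces how money would move among the four roles in a generic DIB; the figures, function names, and flat pay-per-outcome structure are hypothetical simplifications for illustration only, not the terms of any actual contract.

```python
# Stylized sketch of generic DIB cash flows (hypothetical figures and structure).

def outcome_payment(verified_outcomes: int, price_per_outcome: float,
                    payment_cap: float) -> float:
    """Outcome funder pays per independently verified outcome, up to a cap."""
    return min(verified_outcomes * price_per_outcome, payment_cap)

def investor_net_return(principal: float, payments_received: float) -> float:
    """Investor recoups (or loses) up-front capital solely through outcome payments."""
    return payments_received - principal  # a negative value means a capital loss

# Example: the investor advances $1.0M up front to the service provider; the
# outcome funder pays $500 per verified outcome, capped at $1.2M. With 2,100
# verified outcomes the funder pays $1.05M, leaving the investor a $50,000 return.
payments = outcome_payment(verified_outcomes=2_100, price_per_outcome=500.0,
                           payment_cap=1_200_000.0)
print(payments, investor_net_return(principal=1_000_000.0, payments_received=payments))
```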
Beyond having different outcome funders, SIBs and DIBs operate within different
political, economic, and social environments, and have considerably different relationships with
government in social service delivery and policymaking. For instance, while SIBs arose in response to the 2008 financial crisis to address a lack of public funding amid austerity, DIBs emerged in response to a persistent shortage of development funding (Alenda-Demoutiez 2020).
Further, while studies often position SIBs as part of broader public service reform efforts (e.g.
Fox and Morris 2021), scholars align DIBs with the broader push towards greater
professionalism, transparency, and accountability within the international aid sector over the last
few decades (e.g. Alenda-Demoutiez 2020). Since their inception, there has also been
a considerable difference in growth between the two models. The first SIB was established in
the UK in 2010, while the first DIBs were established in Peru and India in 2015. To date, 275
impact bonds have been launched in 36 countries. Of these, only 32 have been in developing or
low- and middle-income countries (LMICs),10 and just 21 have been DIBs (Indigo 2023). Given
the relatively smaller number of DIBs, much of the impact bond literature has focused on the
implementation of SIBs in high-income countries (HICs) (Alenda-Demoutiez 2020).
Consequently, little research has explored the use of DIBs in supporting development initiatives
in LMICs. To better understand the role played by impact bonds across cases, countries, and
contexts, more DIB-specific research is needed (Alenda-Demoutiez 2020; Gallucci et al. 2019).
To build up the evidence base on DIBs, this exploratory study investigates stakeholder
perceptions on the model’s main benefits and costs for 7 recent DIBs. The paper proceeds as
follows. First, it summarizes the DIB literature to date. Second, it provides conceptual framing
by drawing on the SIB literature. Third, it lays out the methodology for the study, namely
interviews and documentary analysis. Fourth, the paper details the study’s main findings. Fifth, it
situates these findings within the context of the broader DIB and SIB literature. Sixth, it
concludes, noting the study’s implications for both practice and research.
DEVELOPMENT IMPACT BONDS
Despite growing interest in DIBs worldwide, the body of DIB research is limited
(Gustafsson-Wright et al. 2017; Gallucci et al. 2019; Mishra and Dash 2022). This research has
most often involved single case studies (e.g. Belt et al. 2017; Loraque 2018; Gallucci et al. 2019;
Oroxom et al. 2018; Gustafsson-Wright et al. 2022), along with a couple of multiple case studies
(e.g. Clarke et al. 2018; Rizzello and Kabli 2020). A handful of evaluations and final reports of completed DIBs have been published as well (e.g. Kitzmuller et al. 2018; Gallardo et al. 2021; O’Neil et al. 2021; Savell and Eddleston 2021; Hera 2022). These publications include an extensive longitudinal evaluation on the three DIB pilots funded by the UK’s Foreign, Commonwealth, and Development Office (FCDO) (Cox et al. 2019; Lau et al. 2021; Grant et al. 2022). In addition, several studies have examined the feasibility of launching DIBs in various countries and sectors (e.g. Welburn et al. 2016; Dieteren et al. 2023; Anyiarn et al. 2017; Ravi et al. 2019). Notably, only a few studies have conducted more comprehensive literature reviews (e.g. Alenda-Demoutiez 2020; Mishra and Dash 2022) or larger cross-case empirical analyses (e.g. Gallucci et al. 2022; Gustafsson-Wright et al. 2021; Gustafsson-Wright et al. 2017). Although this literature remains nascent, it is a helpful starting point for understanding potential stakeholder motivations and expectations around DIB benefits and costs thus far (summarized in table 3.1).

10 As Khan et al. (2022: 5) caution, the terms ‘developed’ and ‘developing’ countries have negative connotations; they suggest that “using income as a source of distinction may be more useful.” The remainder of the paper classifies countries using the World Bank’s 2022 income definitions, with high-income countries having a Gross National Income per capita of $13,205 or more (Hamadeh et al. 2022).
Motivations and Expected Benefits
Stakeholders may be motivated to test whether DIBs are more effective than traditional
financial models for a range of reasons. To begin with, governments in LMICs often face limited
public budgets with which to address social challenges and gaps in service delivery (Alenda-Demoutiez 2020; Gallucci et al. 2019; Lau et al. 2021). International aid has also become more
competitive, with “only moderate growth in official development assistance (ODA) despite a
worsening of major humanitarian crises” (Oroxom et al. 2018: 1). Additionally, many service
providers lack access to investment capital, as they do not offer sufficient financial returns to
attract private investors (Clarke et al. 2018; Lau et al. 2021; Welburn et al. 2016; Oroxom et al.
2018).
Aside from these contextual factors, the DIB model itself offers incentives to different
stakeholders (Alenda-Demoutiez 2020). For example, service providers may join to access
flexible up-front capital with which to scale up their services (Welburn et al. 2016; Oroxom et al.
2018; Gallucci et al. 2019). Meanwhile, (impact) investors may seek to generate blended social
and financial returns – both to boost their reputations and to recycle their capital into future
social programs (Gallucci et al. 2019; Belt et al. 2017; Loraque 2018; Alenda-Demoutiez 2020).
Additionally, outcome funders may participate due to the transfer of financial risk, allowing for
more effective resource allocation as they only have to pay for successful program outcomes
(Alenda-Demoutiez 2020; Welburn et al. 2016; Belt et al. 2017; Loraque 2018; Clarke et al.
2018).
It is further important to note that because DIBs do not involve the government as an
outcomes funder (unlike SIBs), another possible incentive for stakeholders may be to “improve
local development without involving local and national governments” (Alenda-Demoutiez 2020:
892). By eliminating or reducing the need to interact with governments, stakeholders could
overcome potential challenges related to low institutional capacity, political constraints, or
corruption (Lau et al. 2021). However, some level of government engagement will likely be
necessary in order to avoid duplication of services, to encourage future SIB participation or other
follow-on funding, and to strategically align on policy priorities (Alenda-Demoutiez 2020;
Clarke et al. 2018; Gustafsson-Wright et al. 2017).
In addition to these motivations, the DIB literature highlights many other expected
benefits of using the model. One expectation is that the transfer of financial risk will encourage
investors to support service providers in improving program performance to increase the
likelihood of (re)payment (Muñoz and Kimmitt 2019; Alenda-Demoutiez 2020; Belt et al. 2017;
Oroxom et al. 2018). This support could then help build up the performance management
capacity of service providers (Loraque 2018; Clarke et al. 2018; Lau et al. 2021). By focusing on
measuring outcomes rather than monitoring activities, service providers are also expected to
have greater flexibility in how they deliver and adapt their programs (Belt et al. 2017; Welburn
et al. 2016; Alenda-Demoutiez 2020). This flexibility could encourage greater innovation in
service delivery, including by better harnessing local expertise (Muñoz and Kimmitt 2019;
Gallucci et al. 2019; Loraque 2018; Clarke et al. 2018; Oroxom et al. 2018). The model’s focus
on outcomes is also anticipated to improve the accountability and transparency of development
initiatives through the use of monitoring and evaluation (Alenda-Demoutiez 2020). In turn,
demonstrating (successful) program outcomes is likely to help service providers access
additional funding in the future (Gallucci et al. 2019).
Potential Costs and Challenges
Despite these many anticipated benefits, the literature notes a number of possible costs of
implementing DIBs as well. Perhaps most commonly cited are the start-up costs of setting up
DIB projects, including fees for intermediaries, lawyers, and technical advisors (Muñoz and
Kimmitt 2019; Belt et al. 2017; Alenda-Demoutiez 2020). It could also take significant time and
effort to implement the necessary data systems for effective performance management,
monitoring, and evaluation (Loraque 2018; Belt et al. 2017). Stakeholders are likely to
experience high transaction costs as well, due to regulatory and legal issues alongside low data
availability (Muñoz and Kimmitt 2019). Given the need to reach a consensus on outcome targets,
payments, and verification methods, contract negotiations can also be time-consuming (Clarke et
al. 2018; Oroxom et al. 2018).
Further complicating negotiations, investors and outcome funders could find it difficult to
reconcile competing interests around risk transfer. For instance, investors could push for higher
rates of return and capital guarantees, as well as prefer more evidence-based and less
experimental programs (Alenda-Demoutiez 2020; Loraque 2018). Meanwhile, outcome funders
could struggle to obtain institutional buy-in given concerns around paying returns to the private
sector, especially given a lack of evidence on DIB effectiveness (Clarke et al. 2018; Belt et al.
2017). Stakeholders may also face difficulties related to relationship management, as the shift
towards outcomes-based working will require “donor organizations to adapt to a different role
from that which they have held in the past,” such as by taking a more hands-off approach than
usual (Gustafsson-Wright et al. 2017: 29). Finally, DIBs could experience operational
disruptions due to the susceptibility of LMICs to political and economic shocks (Lau et al. 2021;
Gustafsson-Wright et al. 2017).
Table 3.1: Existing Themes from the DIB Literature
Motivations:
• Limited resources and service gaps
• DIB model incentives (EX: up-front capital, blended returns, and financial risk transfer)
• Reduced reliance on governments
Expected Benefits:
• Scaled impact and resource efficiency
• Reputational gains and future funding
• Performance management and capacity building
• Operational flexibility and innovation
• Monitoring/evaluation and accountability
Potential Costs:
• Start-up and transaction costs
• Protracted negotiations and institutional buy-in
• Changing roles/relationships and environmental shocks
CONCEPTUAL FRAMING OF SIBS
Given that the DIB literature remains limited, it is helpful to turn to the SIB literature for
conceptual framing around model benefits and costs. According to Fraser et al. (2018), three
primary narratives have emerged within the SIB literature – two narratives of promise and one
narrative of caution. These two narratives of promise relate to the potential of SIBs in supporting
public sector reform and financial sector reform. The public sector reform narrative highlights
the role of SIBs in solving entrenched social problems. To do so, SIBs use incentives to
encourage intersectoral collaboration and innovation – with a particular focus on harnessing
private sector practices for public service delivery. In addition, with an outcomes-focused
approach, SIBs promote greater performance measurement and accountability (Fraser et al.
2018). For these reasons, many scholars align this narrative with New Public Management
(NPM), which stresses the importance of incentives, performance management, and evaluation in
achieving measurable outcomes and improving public sector efficiency (Fitzgerald et al. 2020;
Osborne 2006; Chandra et al. 2021).
Meanwhile, the financial sector reform narrative primarily highlights the role of SIBs in
growing the social finance market (Fraser et al. 2018). This narrative tends to highlight the use of
risk and returns to attract private sector capital – noting that such blended returns can also be
helpful in generating reputational gains. Further, by linking investor repayment to verifiable
outcomes, this narrative posits that SIBs encourage greater emphasis on monitoring and
evaluation, as well as capacity building efforts around data management to support these
activities (Fraser et al. 2018). In addition to aligning with Social Entrepreneurship (Fraser et al.
2018), this financial sector reform narrative also has ties to New Public Governance (NPG),
which underscores the role of collaborative relationships in delivering effective public services
and generating positive social outcomes (Osborne 2006; Ormiston et al. 2020).
On the other hand, the cautionary narrative questions the “inappropriate intrusion of
private sector and financialized values in social policy” (Fraser et al. 2018: 12). One concern is
that the introduction of performance management techniques could lead to service provider
mission drift (Joy and Shields 2013). Critics further worry that SIBs could reduce transparency
and accountability in public spending, given the high transaction costs of establishing SIBs
coupled with the difficulty of assessing SIB effects (Fraser et al. 2018) – or the effects of the
financial model separate from the effects of the interventions (Carter et al. 2018). Another
cautionary argument is that “investors may be more risk-averse than some SIB proponents have
claimed, and are likely to require government or philanthropic funds to guarantee, or underwrite
their investment” (Fraser et al. 2018: 13). Consequently, critical scholars align SIBs with neoliberalism and financialization, in which financial sector interests are prioritized over social
needs (Fraser et al. 2018). The cautionary narrative is therefore largely skeptical of theoretical
developments such as NPM and Social Entrepreneurship.
Of note, Fraser et al.’s (2018: 7) analysis focused solely on SIBs in HICs as “their actors
and social problems are significantly different” from those of DIBs. To bring DIBs into this
debate, this paper offers new insights into the model’s benefits and costs using two conceptual
frameworks. Following Ormiston et al. (2020), this study analyzes DIB benefits by aligning
stakeholder motivations and expectations with broader objectives around social finance,
collaboration, and social impact. To analyze DIB costs, the paper employs Agusti Strid and
Ronicle’s (2021) model of SIB market development barriers around government demand,
regulatory frameworks, economic and political contexts, data availability, and market capacity.
SIB Stakeholder Expectations
In their investigation into SIB stakeholder expectations, Ormiston et al. (2020) analyze
the press releases of 29 SIBs in the UK, US, and Australia. Through their analysis, they uncover
three overarching discourses which reflect objectives related to social finance (46%),
collaboration (26%), and impact (28%). Within the social finance discourse, stakeholder
expectations relate to testing a new financial model (36%), building the social finance market
(8%), and attracting private capital (2%). Meanwhile, stakeholder expectations within the
collaboration discourse include intersectoral partnership (14%), financial returns for investors
(6%), government savings (5%), and flexible funding for service providers (1%). Finally, within
the impact discourse, stakeholder expectations correspond to improved social outcomes (22%),
measurement and accountability (4%), and innovation (2%).
In addition, Ormiston et al. (2020) find that outcome funders and investors tend to
emphasize the role of SIBs “as an innovative funding mechanism targeted at catalysing social
finance markets” (Ormiston et al. 2020: 240). Meanwhile, intermediaries focus more on
collaborative efforts to build the social finance sector, and service providers focus more on the
impacts of SIBs upon service users. The authors also note that while the social finance discourse
is most prevalent during SIB launch, the discursive focus shifts to impact during implementation.
Spatially, Ormiston et al. (2020) uncover slight discursive differences between the three
countries as well, which they conclude suggests that SIBs should be adapted during
implementation to fit their context.
In situating these findings within broader SIB literature, Ormiston et al. (2020) view the
social finance discourse as aligning with NPM and the collaboration discourse as aligning with
NPG. Moreover, they view both discourses as aligning with Fraser et al.’s (2018) financial sector
reform narrative, with the collaborative discourse supporting the social finance discourse by
prioritizing private sector investors as non-traditional partners in social service delivery. They
further posit that “the marginalization of service providers and the impact discourse supports the
cautionary tale” (Ormiston et al. 2020: 244).
SIB Development Barriers
In their study of SIBs in Latin America, Agusti Strid and Ronicle (2021) suggest that SIB
market development follows three phases: 1) developing first time SIBs, 2) establishing the SIB
mechanism, and 3) growing the SIB ecosystem. The main objectives of the first phase are to
generate learnings about the model, to demonstrate the model’s viability, and to gain stakeholder
buy-in. During the second phase, the focus shifts to building on the lessons learned from first-time SIBs, demonstrating the cost-effectiveness of the model, and building the capacity of
additional external stakeholders to participate in future SIBs. In the third phase, the aim becomes
better understanding how SIBs can best be applied in different contexts, while also addressing
structural barriers.
To advance from one phase to the next, Agusti Strid and Ronicle (2021: 57) identify five
‘DREAM’ factors which can act as barriers (or enablers): Demand from government, Regulatory
framework, Economic and political context, Availability of data, and Market capacity.
Government demand refers to whether there are potential public sector outcome funders with
sufficient understanding of the model and organizational buy-in to contribute to SIB design and
implementation. Supportive regulatory frameworks include legal contexts in which governments
(or other potential funders) can commission SIBs over multiple years; investors can legally earn
a return on their investments; and payments can be tied to outcomes. Data availability relates to
whether stakeholders have access to robust and reliable socio-economic data with which to
establish baselines as well as price and validate outcomes. Economic and political context is
dependent on the stability of public institutions and market conditions, and is affected by factors
such as elections and recessions. Lastly, market capacity refers to whether there is sufficient
investor interest and service provider capabilities to participate in SIBs. Capacity also includes
whether stakeholders have access to technical expertise in designing and managing SIBs.
Agusti Strid and Ronicle (2021) further report that stakeholders can overcome issues
posed by unsupportive regulatory frameworks, unstable economic and political contexts, and low
data availability on a case-by-case basis as long as stakeholders remain committed and flexible.
To do so, “it is essential to work with a coalition of like-minded, purpose-driven organizations”
during the early stages of market development, as well as have champions within organizations
who are willing to drive the cause forward (Agusti Strid and Ronicle 2021: 69). However, as the
SIB market matures, stakeholders will need to more seriously and systematically resolve such
structural barriers; otherwise transaction costs will remain high as stakeholders navigate legal
and political obstacles one project at a time. Meanwhile, relational challenges around
government demand and market capacity should diminish (to a certain extent) as stakeholders
embed institutional knowledge and share lessons learned with broader networks, which should encourage future DIB participation.
RESEARCH QUESTIONS
Based on the review of the DIB literature and the two conceptual frameworks above, this
study’s research questions are:
1) What are the contextual factors and anticipated model benefits that are motivating
stakeholders to use DIBs?
a. How do these motivations and benefits align with broader objectives around
social finance, collaboration, and social impact?
2) What costs and challenges do stakeholders experience in implementing DIBs?
a. How do these costs and challenges relate to the barriers of early impact bond
market development?
In answering these questions, this research aims to address three pressing gaps in
DIB scholarship. First, as “there has been no investigation of the ideologies behind and paradigm
of DIBs” (Alenda-Demoutiez 2020: 892), this study will offer important insights into how DIBs
fit into broader processes of social change by exploring the intended benefits of DIBs in the
shorter and longer-term. Second, by generating new evidence on the costs and challenges that
stakeholders face during implementation, this study will help generate a better understanding of
the contexts in which DIBs are an appropriate approach (Albertson et al. 2018). And third, by
applying SIB frameworks in analyzing DIBs, this study will situate DIBs within broader
narratives around the pros and cons of the impact bond model.
DATA AND METHODS
To answer the research questions above, the study employs interviews of DIB
stakeholders to collect primary data. Through these semi-structured interviews, this study seeks
to collect in-depth insights from a relatively small set of participants (Leech 2002) to allow for
helpful comparisons “across contexts, situations, and kinds of people” (Lamont and Swidler
2014). During the interviews, questions prompted individuals to reflect on their decisions to
participate in DIBs as well as on their experiences as participants thus far (see Appendix 3.1 for
the interview protocol). Interviews lasted approximately 60 minutes, were conducted virtually,
and were audio/video recorded with participant consent. All interview quotes have been kept
anonymous in recognition of the possibly sensitive or political nature of the responses.
Analysis then entailed thematically coding interview transcripts. Based on the literature
review on DIBs, high-level codes were ‘motivations’, ‘benefits’, ‘costs,’ and ‘challenges.’ Sub-codes within ‘motivations’ were contextual factors (EX: limited resources and service gaps)
well as incentives (EX: up-front capital, potential blended returns, and risk transfer). ‘Benefits’
sub-codes included scaling, social impact, resource efficiency, reputational gains, future funding,
performance management, capacity building, operational flexibility, innovation, monitoring and
evaluation, and accountability. To further classify these many benefits, coding captured whether
stakeholders viewed them as model design elements (‘inputs’), immediate benefits (‘outputs’),
short-term benefits (‘outcomes’), or longer-term benefits (‘impacts’). In addition, the ‘costs’ subcodes were divided into ‘transaction costs’ and ‘management challenges’ (Fitzgerald et al. 2020).
‘Transactions costs’ pertained to the design and development of the SIBs, including start-up
costs, negotiations, and institutional buy-in (within organizations); meanwhile, ‘management
challenges’ related to the implementation and administration of the SIBs, including relationship
management (between organizations), performance management, and environmental and
developmental conditions (Fitzgerald et al. 2020; Gustafsson-Wright and Osborne 2020b). Codes
were also added based on emergent themes during analysis, such as stakeholder ‘learnings’ and
‘recommendations’ based on their experiences. A second round of coding was then conducted to
apply the finalized coding scheme across all documents consistently. The analysis also drew on
secondary data from press releases, news articles, and program reports.
Once this coding stage was complete, analysis then classified motivations and benefits as
aligning with Ormiston et al.’s (2020) social finance, collaboration, and impact objectives. Codes
(and sub-codes) designated within the social finance objective related to testing the model as an
innovative financial tool, attracting private investment capital, and growing the social finance
market. Codes within the collaboration objective corresponded to incentives for encouraging
intersectoral collaboration, such as cost savings for outcome funders, flexible funding for service
providers, and financial returns for investors. Lastly, codes categorized within the impact
objective encompassed sustainable impacts, scaled impacts, outcome measurement,
accountability, evidence, social innovation, and prevention.
Next, the analysis classified costs and challenges using Agusti Strid and Ronicle’s (2021)
DREAM framework for SIB market development barriers. However, in applying this framework
to DIBs rather than SIBs, the analysis redefined market capacity to include outcome funder
interest and capacity, while government demand was eliminated as its own separate category.
These changes reflect the fact that governmental entities do not serve as outcome funders in
DIBs as they do in SIBs. Instead, DIB outcome funders, such as international aid agencies and
foundations, align more with the types of investor and service provider organizations included in
the market capacity category. Codes classified as market capacity factors related to partner
recruitment, organizational buy-in, institutional readiness and flexibility, and technical expertise
and support. Codes considered regulatory framework factors included budgeting practices,
procurement guidelines, and legal (contracting) procedures. For the economic and political
context classification, codes corresponded to macroeconomic (EX: currency) stability, political
support, and external shocks. Lastly, codes classified as data availability encompassed access to
individual service user data, socio-economic data, and program data, along with ability to
measure and manage performance (EX: monitoring and evaluation).
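To show the resulting structure at a glance, the snippet below lays out the coding scheme as a simple data structure with a small tallying helper; the grouping and the helper are simplifying assumptions for exposition, not the actual software or workflow used in this study.

```python
# Illustrative representation of the coding scheme described above; the grouping
# and tally helper are assumptions for exposition, not the study's actual tooling.

from collections import Counter

CODING_SCHEME = {
    "motivations": ["contextual factors", "incentives"],
    "benefits": ["inputs", "outputs", "outcomes", "impacts"],
    "costs and challenges": ["transaction costs", "management challenges"],
    "emergent": ["learnings", "recommendations"],
}

def tally_sub_codes(coded_segments):
    """Count how often each sub-code was applied across interview segments."""
    return Counter(sub_code for _, sub_code in coded_segments)

# Example with made-up (high-level code, sub-code) tags for three segments:
segments = [("benefits", "outputs"), ("costs and challenges", "transaction costs"),
            ("benefits", "outputs")]
print(tally_sub_codes(segments))  # Counter({'outputs': 2, 'transaction costs': 1})
```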
Sample
The interview sample was drawn from programs included in the Government Outcomes
Lab’s Impact Bond Dataset, which lists a total of 21 DIBs in LMICs (Indigo 2023). Of these, 6
projects were in upper-middle, 10 in lower-middle, and 5 in low-income countries.11 The region
with the most DIBs was Africa (10, all in Sub-Saharan Africa), followed by Asia (7), and the
Americas (4, all in Latin America and the Caribbean). The policy sectors addressed by the DIBs
were health (6), employment and training (6), education (5), poverty reduction (2), and
agriculture and the environment (2).
From these 21 projects, the sample was restricted to DIBs that were currently underway
or were recently concluded.12 Of note, due to language limitations, one DIB (Crecemos con
empleo y oportunidades) was removed from the sample as most public program documents were
written in Spanish. These criteria gave the study a final sample size of 7 DIBs. The study then
involved 16 interviews with 18 individuals. Of these interviews, 8 were conducted with
individuals from outcome funders, 3 from investors, 3 from service providers, and 2 from
intermediary organizations.
Table 3.2: Interview Sample
DIB | Country | Start-End Year | Sector | Beneficiaries | Max Outcome Funding (USD)
ICRC | Mali, DRC, Nigeria | 2017-2022 | Health | 3,600 individuals with physical disabilities | $26.6 M
QEI | India | 2018-2022 | Education | 300,000 primary school-aged children | $9.2 M
Cameroon Cataract | Cameroon | 2018-2023 | Health | 18,000 low/middle-income patients | $2.2 M
Cambodia Rural Sanitation | Cambodia | 2019-2023 | Poverty reduction | 1,600 rural villages | $10.0 M
F4J | Palestine | 2019-2023 | Employment and training | 1,500 job seekers (18-29 years old) | $5.0 M
In Their Hands | Kenya | 2020-2022 | Health | 193,000 girls (15-19 years old) | $6.6 M
Skill India | India | 2021-2025 | Employment and training | 50,000 unemployed youth | $14.4 M
11 Two projects took place in multiple countries that included both low and low-middle income countries.
12 DIBs that launched prior to 2022 and that ended (or will end) in 2022 or later.
Program Contexts
As shown in table 3.2, the 7 DIBs were diverse in many ways, including in size,
geography, and focus. They also varied in the types of contexts in which they were launched,
given that countries and stakeholders ranged in their prior experience with impact bonds. Based
on the Dataset (Indigo 2023), the majority of the DIBs were the first impact bonds to be launched
in a country. The three exceptions were the Quality Education in India (QEI) and Skill India
DIBs in India and the In Their Hands DIB in Kenya. In fact, India was the site of the world’s
first DIB, Educate Girls, which ran from 2015 to 2018. Meanwhile, Kenya’s first DIB, the
Village Enterprise DIB (also implemented in Uganda), ran from 2017 to 2021.
Looking at the DIBs chronologically, the International Committee of the Red Cross
(ICRC) DIB was the first launched in all three of its host countries: Mali, the Democratic
Republic of Congo (DRC), and Nigeria. However, it was the second DIB in which FCDO acted
as an outcome funder, the first being the Village Enterprise DIB. KOIS, one of the DIB’s
intermediaries, had also earlier served as an intermediary for the Duo for a Job SIB in Belgium.
Within the QEI DIB, one of the outcome funders, Comic Relief, also served as an
outcome funder for the Improving HIV Treatment SIB in the UK. Further, the QEI DIB shared
UBS Optimus Foundation (UBS) as an investor with the earlier Educate Girls DIB. In 2018, the
same year that the QEI DIB was launched, the Utkrisht DIB was launched in India. UBS
invested in the Utkrisht DIB as well, which ended in 2021.
Turning attention to the Cameroon Cataract DIB, one intermediary, Volta Capital,
simultaneously participated in a SIB in South Africa. It is also worth noting that although the
Cameroon Cataract DIB was the first to be launched in Cameroon, another health DIB was
launched shortly after it called the Kangaroo Mother Care DIB.
Meanwhile, the Rural Sanitation DIB represented USAID’s third time serving as a
DIB outcome funder, having previously participated in the Utkrisht DIB and the Village
Enterprise DIB. The DIB’s intermediary, Social Finance, had also participated in the Utkrisht
DIB (along with numerous earlier SIBs). Around the same time as the Rural Sanitation DIB’s
implementation, Social Finance also served as the intermediary for three other DIBs: the
Kangaroo Mother Care DIB, the Programa Primero Lee DIB in Chile, and the Finance for Jobs
(F4J) DIB in Palestine.
The F4J DIB was also the first impact bond launched in Palestine. Out of the DIB’s
stakeholders, only two of the DIB’s intermediaries, Bridges Fund Management (Bridges) and
Social Finance, had prior impact bond experience. As with Social Finance, Bridges had also
supported many earlier SIBs. Bridges’ philanthropic arm, Bridges Impact Foundation, served as
an investor in the Village Enterprise DIB as well (Bridges Fund Management 2018).
With regards to the In Their Hands DIB in Kenya, FCDO was once again an outcome
funder. Interestingly, the DIB’s investor, the Children’s Investment Fund Foundation (CIFF),
had previously played the role of outcome funder in the Educate Girls DIB. KOIS, the DIB’s
intermediary, had also earlier supported the ICRC DIB along with a few SIBs in Belgium and
France.
As the fourth DIB launched in India, the Skill India DIB involved a number of
stakeholders with prior DIB experience, with notable overlap in stakeholders from the QEI DIB.
This overlap included two intermediary organizations, the British Asian Trust (BAT) and
Dalberg Associates. Another common stakeholder was the Michael and Susan Dell Foundation,
which served as an outcome funder in the QEI DIB and an investor in the Skill India DIB. In
addition, the Skill India DIB’s outcome funder, CIFF, had earlier served as an outcome funder in
the Educate Girls DIB and an investor in the In Their Hands DIB. Two of the Skill India DIB’s
intermediaries, USAID and FCDO, had each served as outcome funders for three prior DIBs. As
such, India represents a fairly unique context within this study in that the DIB market was more
advanced than in the other countries. Consequently, the DIBs were able to build off of learnings
from one another, including in some cases by sharing stakeholders whose institutional
knowledge they could leverage from one DIB to another.
RESULTS
Corresponding to the study’s research questions, findings related to two main topics: 1)
stakeholder motivations and perceptions of model benefits, and 2) implementation costs and
challenges. Within the first topic, motivations include the contextual factors driving stakeholder
interest in DIBs and the design elements of the model incentivizing DIB use. Model benefits
refer to what stakeholders view as the (positive) outputs, outcomes, and impacts of using the
model. These motivations and benefits are then classified according to Ormiston et al.’s (2020)
social finance, collaboration, and social impact objectives. Within the second topic, costs and
challenges relate to the various start-up costs and difficulties that stakeholders have experienced
in implementing DIBs. This section also incorporates suggestions from stakeholders on future
DIB use based on their learnings from experience. Costs and challenges are then categorized
using Agusti Strid and Ronicle’s (2021) framework as market development, data, regulatory, or
economic and political barriers.
Motivations and Model Benefits
The top motivation driving stakeholder participation in DIBs, mentioned by 69% of
interviewees, was the opportunity to test a new financial model. Stakeholders stated many
reasons for this, including the desire to gain first-hand experience using results-based funding
tools as well as obtaining evidence on these models for the broader development sector. As one
Service Provider shared: “we really are genuinely interested in proof of concept here… for others
in the space to learn what this looks like.” The importance of testing DIBs as a new tool was
further reflected in stakeholder and program press releases and websites, which frequently stated
that a DIB was new to a certain country or sector, or novel in some other way, such as in size.
For instance, Kanthan Shankar, the World Bank Country Director, stressed the significance of
the F4J DIB in Palestine as “the first time to be applied by the World Bank in a fragile
environment” (FMO 2019). Appendix 3.2 provides a list of examples for all seven DIBs.
Another frequent motivation was the need to overcome limited resources (31%),
including from domestic governments and international donors, by diversifying or pooling
capital. Some interviewees further mentioned the aid environment becoming more competitive
following the onset of COVID, with considerable funding being diverted to relief efforts.
Interviewees commonly cited the need to address persistent service gaps (38%) as well, such as
by targeting marginalized groups, expanding programs to reach larger populations, and
improving the quality of existing services. Although less frequently, stakeholders also mentioned
wanting to address a lack of evidence (19%) in their policy areas, for instance on the (cost)
effectiveness of different interventions. Interestingly, interviewees shared that even when there
was sufficient access to donor funding, it was often difficult to link inputs to program outcomes.
Thus, they cited the need for more evidence tying impacts to funding as well.
In terms of model design elements (or inputs), interviewees most commonly mentioned
the transfer of risk (75%) from both outcome funders and service providers to investors. For
outcome funders, this risk transfer centered on only being required to pay for successful program
results. Therefore, risk transfer was frequently framed as a means of ensuring the efficient use of
scarce resources, which helped generate institutional buy-in. As one Outcome Funder stated,
“It’s a marketing point to be able to say the risk elements are more suitable for institutions like
ours.” Outcome funders also explained that while DIBs helped shift some financial risk to
investors, that the operational risks of delivering programs in volatile development contexts were
not explicitly covered by the DIB contracts. For instance, “There are quite a bit of risks outside
that box of [financial] risks... What happens if there’s a war? What happens if… there’s a run on
banks and they shut down?” (Outcome Funder). Further, the degree of risk transfer varied
considerably from contract to contract. For instance, in the Cameroon Cataract DIB, there is
100% capital protection for investors (although the loan's interest rate is conditional on
performance), and financial risks are shared between outcome funders and service providers
(Indigo 2022a). Conversely, while 100% of the Cambodia Sanitation DIB investor’s capital is at
risk, the service provider has also invested into the DIB and will share in the profits if sufficient
outcomes are achieved (Stone Family Foundation and iDE n.d.).
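To illustrate how differently these two arrangements allocate risk, the sketch below contrasts a capital-protected loan whose interest rate depends on performance with fully at-risk capital whose repayment scales with verified outcomes; all rates and amounts are invented and do not reflect the actual Cameroon Cataract or Cambodia Rural Sanitation terms.

```python
# Hypothetical comparison of two risk-transfer structures; all figures are invented.

def protected_loan_repayment(principal: float, outcomes_met: bool,
                             high_rate: float = 0.06, low_rate: float = 0.0) -> float:
    """Capital is fully protected; only the interest rate depends on performance."""
    rate = high_rate if outcomes_met else low_rate
    return principal * (1 + rate)

def at_risk_repayment(principal: float, share_of_targets_met: float,
                      max_return_rate: float = 0.08) -> float:
    """Repayment (principal plus return) scales with outcomes; capital can be lost."""
    return principal * (1 + max_return_rate) * share_of_targets_met

# If outcomes fall short, the protected lender still recovers its $1.0M principal,
# while the at-risk investor recovers only part of its capital.
print(protected_loan_repayment(1_000_000.0, outcomes_met=False))   # 1000000.0
print(at_risk_repayment(1_000_000.0, share_of_targets_met=0.5))    # 540000.0
```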
Meanwhile, service providers saw the transfer of risk as obtaining access to up-front
flexible funding (56%), thus reducing their financial risks during program implementation. Given
the model’s outcomes-focused approach, stakeholders almost always mentioned that the lack of
input-based specifications gave service providers much more flexibility in how they spent money
and designed interventions. One Intermediary even saw this as empowering providers by
changing the traditional donor-grantee relationships, explaining: “It didn’t seem the power was
with the donor… it felt like [the funder] was saying, here’s the outcome, however you are able to
achieve it is your problem… [and] there’s someone there to pay for your risk taking.” Although
some interviewees further viewed such risk transfer as encouraging providers to innovate with
less fear of failure, most highlighted the role of risk capital in scaling proven interventions. As
one Outcome Funder shared, “[We] don’t see DIBs as an innovation instrument, we really see it
as a scaling instrument… So we want to scale things that are proven.”
Additionally, investors were seen as being incentivized to take on this delivery risk by the
potential for making financial returns (25%). Investors shared that they considered taking on
these financial risks as part of their institutional mandates, with many pursuing blended social
and financial returns. Generating financial returns was also seen as a means of increasing the
resources available for future social projects by allowing investors to continue to reinvest funds.
As one Investor explained, “if we can recycle our capital, then we can go and do more things
with it, have more social impact, invest in other good things.” As a whole, interviewees
considered these various inputs as important incentives to encouraging intersectoral
collaboration – particularly through partnership with private sector investors.
It is also important to note that although governments are not typically involved as
contractual partners in DIBs, the degree to which stakeholders strategically engaged with
government actors varied by program. In most DIBs, government engagement was limited;
although stakeholders noted the importance of getting government buy-in through alignment on
policy priorities, government was not involved in day-to-day DIB operations. A few stakeholders
also highlighted the importance of coordinating with the government when programs were run
through public facilities or when the government was involved in outcome verification or
reporting. In other DIBs, engagement with the government was intentionally kept minimal, for
instance due to a focus on building local private sector capacity. In these cases, engagement
primarily entailed keeping the government informed of the DIB’s operations and progress.
However, in a couple of programs, the government played a more formal role. One example is
the F4J DIB, which was “implemented by DAI Global on behalf of the Palestinian Ministry of
Finance, and… funded by the World Bank” (FMO 2019). Another example is the Skill India
DIB, in which the National Skill Development Corporation (NSDC), “a public-private
partnership set up by the Ministry of Skill Development and Entrepreneurship” serves as an
investor (Gustafsson-Wright et al. 2022: 15). Table 3.3 summarizes these motivating contextual
factors and design elements.
Table 3.3: Motivating Contextual Factors and Model Inputs
Contextual factors:
• New financial model (11, 69%)
• Service gaps (6, 38%)
• Limited resources (5, 31%)
• Lack of evidence (3, 19%)
Design elements (inputs):
• Incentives for collaboration
  o Risk transfer (12, 75%)
  o Up-front flexible funding (9, 56%)
  o Potential financial returns (4, 25%)
• Strategic government engagement (varied)
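As a small consistency check, and assuming (as the methods section implies) that the counts in the results tables are shares of the 16 interviews conducted, the reported percentages in table 3.3 can be reproduced as follows.

```python
# Assuming the counts in table 3.3 are out of the 16 interviews, the reported
# percentages can be reproduced directly (counts copied from the table above).

table_3_3_counts = {
    "New financial model": 11, "Service gaps": 6, "Limited resources": 5,
    "Lack of evidence": 3, "Risk transfer": 12, "Up-front flexible funding": 9,
    "Potential financial returns": 4,
}
for theme, n in table_3_3_counts.items():
    print(f"{theme}: {n}/16 = {100 * n / 16:.0f}%")
# e.g. "Risk transfer: 12/16 = 75%" and "New financial model: 11/16 = 69%"
```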
Interviewees discussed a range of benefits of using the DIB model, from immediate
outputs to intermediate outcomes and longer-term impacts (summarized in table 3.4). The most frequently mentioned output was the use of adaptive performance management (63%). Adaptive
management encompassed a range of activities, with a focus on using real-time insights from
data to make continuous program improvements. For some interviewees, adaptation included
innovation, for instance by designing and testing new services. Pivoting during service delivery
was also seen as necessary for responding to changes in local political, economic, and social
conditions – such as during the onset of COVID. Another frequently highlighted output was
monitoring and evaluation (63%). Evaluation took a variety of forms, including internal
verification processes and third-party assessments. Evaluation sometimes also included learning
components, such as measuring longer-term program impacts and DIB effects, which were not
tied to investor repayments.
Another common output was capacity building (6, 38%), often for government actors.
This capacity building took the form of offering the government a better understanding of
results-based funding models. It also involved helping the local government overcome
implementation barriers, such as by developing targeted outreach strategies. For instance, “when
you get down to the provincial and local government level… we rely really heavily on those
government partners, and in turn do a lot of capacity building… [around] mapping communities,
understanding what the barriers are” (Service Provider). While some interviewees mentioned
providing capacity building for service providers, typically around data collection and reporting,
most noted that providers were often selected based on existing performance management and
monitoring capabilities. As an example, the service providers in the QEI DIB were seen as wellestablished and high-performing organizations: “All have over 10 years’ experience providing
education interventions, experience operating at scale and have engaged in independent
evaluations to measure their effectiveness” (Erskine 2021: 5).
With regards to model outcomes, interviewees most often discussed evidence generation
and accountability, financial returns, and scaled impact – all mentioned by 10 (63%)
stakeholders interviewed. Evidence generation included evidence on the outcomes of
interventions and the costs of achieving these outcomes. One Intermediary even shared that, “I
cannot emphasize how important that is… evidence on costs, price efficiency… [I] know of
donors who would do a DIB just because of the quality and level of evidence generation that
would happen.” Evidence was also related to increased accountability, especially for investors
and outcome funders seeking to use their resources more effectively. Meanwhile, financial
returns related to cost recovery of the initial investment along with any profits made on interest
or returns. Capital recovery was seen as a means of re-investing money into other social
programs, while generating financial returns was important for demonstrating the viability of the
model to attract more private investors in the future. Interviewees also viewed DIBs as
generating considerable social impacts – at a faster pace, greater scale, and higher quality. One
Outcome Funder shared that the DIB has “been functioning like clockwork in a way that I've not
seen many other traditional [Institutional] projects actually operate as smoothly.” Additionally,
stakeholders shared how participating in a DIB, particularly when launching a DIB in a new
context, brought high visibility to the programs, providing organizations with reputational gains
(6, 38%) and more credibility as thought leaders in their sectors.
In addition, stakeholders mentioned several ways that the DIBs were having, or were
expected to have, broader impacts beyond the individual contracts themselves. The primary
impact was growth of the social finance market (14, 88%). This growth covered a multitude of
areas, including opportunities for follow-on DIBs (e.g. for the same programs), other DIBs (e.g.
for different programs), and potential future SIBs. As one Intermediary reported, “We're
seeing a very, very strong interest from the donor community to say, what is the next version of
it? Can we participate? Will you scale this up?” In fact, following the conclusion of the In Their
Hands DIB in Kenya, the UN Joint SDG Fund decided to scale up the DIB through an
investment of $7 million (Indigo 2022b). Building on the learnings from the first DIB, “this new
phase of the DIB will also have more engagement from the public sector as the national
government will be a member of the project committee” (Indigo 2022b). Stakeholders mentioned
more general follow-on funding for service provider organizations as well, including by
attracting more private investors in the future.
The DIBs were also seen as catalyzing cultural shifts (5, 31%) towards outcome-based
working more broadly, especially for service providers, donors, and even governments. For
example, “the culture shift that's happening at the service provider level is really out of this
world” (Outcome Funder). Notably, some stakeholders saw this shift as part of the more general
trend toward professionalization of the international development sector, given the increased
focus on monitoring, evaluation, and learning within DIBs. Moreover, the evidence generated
through the DIBs was expected to allow service providers to continue to improve and expand
their programs, as well as to generate helpful programmatic learnings (3, 19%) for the
international development and public sector organizations looking to scale effective approaches.
Stakeholders also saw DIBs as contributing to greater sustainability (5, 31%), for instance
through increased local capacity, private sector stimulation, and a focus on preventative and
early-intervention programs.
Table 3.4: Beneficial Model Outputs, Outcomes, and Impacts
Immediate benefits (outputs):
• Adaptive performance management (10, 63%)
• Monitoring and evaluation (10, 63%)
• Capacity building (for governments) (6, 38%)
Intermediate benefits (outcomes):
• Evidence/Accountability (10, 63%)
• Financial returns (10, 63%)
• (Scaled) social impact (10, 63%)
• Reputational gains (6, 38%)
Longer-term benefits (impacts):
• Social finance market growth (14, 88%)
• Cultural shifts (outcome-based working) (5, 31%)
• Sustainability (5, 31%)
• Programmatic learnings (3, 19%)
In categorizing these motivations and benefits using Ormiston et al.’s (2020) discourses
around wider objectives, contextual factors are relatively split between the social finance and
social impact objectives. While overcoming limited resources and testing a new financial model
relate to the social finance objective, addressing an existing lack of evidence and filling service
gaps relate to the social impact objective. However, testing a new financial model is the
dominant motivation, as it was mentioned by the greatest number of interviewees. Interestingly,
all inputs fall within the collaboration objective, as flexible funding, potential financial returns,
and risk transfer were all mentioned as key incentives for collaboration. Also falling within the
collaboration objective is strategic government engagement, although the extent of this
engagement varied in intensity from project to project. Meanwhile, outputs primarily correspond
with the social impact objective, with adaptive management and monitoring and evaluation both
being highly-discussed immediate model benefits. The exception is capacity building, which
corresponds with the collaboration objective. Next, model outcomes are fairly divided between
two objectives, as financial returns and reputational gains relate to the social finance objective,
while evidence and accountability along with scaled social impact relate to the social impact
objective. Finally, with regards to model impacts, cultural shifts, sustainability, and
programmatic learnings align with the social impact objective. Nonetheless, social finance
market growth, which aligns with the social finance objective, is by far the most frequently
mentioned long-term model benefit. Overall, it appears that the social finance and social impact
objectives are most prevalent across contextual factors and model benefits (outputs, outcomes,
and impacts), while the collaboration objective is largely limited to model inputs (especially
incentives). Table 3.5 summarizes these categorizations.
Table 3.5: Motivations and Benefits Categorizations
Contextual Factors
• Social Finance: Limited resources (5, 31%); New financial model (11, 69%)
• Social Impact: Lack of evidence (3, 19%); Service gaps (6, 38%)
Model Inputs
• Collaboration: Up-front, flexible funding (9, 56%); Potential financial returns (4, 25%); Risk transfer (12, 75%); Strategic government engagement (varied)
Model Outputs
• Collaboration: Capacity building (6, 38%)
• Social Impact: Adaptive management (10, 63%); Monitoring and evaluation (10, 63%)
Model Outcomes
• Social Finance: Financial returns (10, 63%); Reputational gains (6, 38%)
• Social Impact: Evidence/Accountability (10, 63%); (Scaled) social impact (10, 63%)
Model Impacts
• Social Finance: Social finance market growth (14, 88%)
• Social Impact: Cultural shifts (towards outcomes) (5, 31%); Sustainability (5, 31%); Programmatic learnings (3, 19%)
Costs and Challenges
In addition to these many benefits, stakeholders noted a number of transaction costs and
management challenges in implementing the DIB model (see table 3.6). In terms of transaction
costs, stakeholders cited the time, effort, and money it took to set up the DIBs (25%). These
start-up costs included fees for hiring intermediary services and conducting feasibility studies.
Stakeholders further shared that it was often difficult to find partner organizations (25%) to fill
the investor and outcome funder roles, partly due to the limited evidence on DIBs at the time.
Potential outcome funders were also frequently hesitant to participate in a DIB due to budgetary
reasons, such as difficulties with earmarking funding for the DIB in future years’ budgets. As
one Service Provider reported, “the most challenging part of the impact bond model… [is] the
ability to secure outcome funding.” Moreover, many possible outcome funders were skeptical
about paying returns to private investors, as this could be seen as paying a higher price for
funding the same types of programs. On the other hand, potential investors often wanted either to receive no returns as part of their corporate social responsibility activities, or to receive higher returns (or more capital protections) given the perceived risks of trying DIBs for the first time or in new contexts. Within the context of the ICRC DIB, François de Borchgrave, founder and Managing Director of the intermediary KOIS, expressed: “The challenge was to develop a product that would make investors comfortable with the idea to invest in a very volatile and fragile context: post-conflict zones” (Bollag 2017). For these reasons, individual staff members of investor or
outcome funder organizations often had to spend time and effort to garner support and
organizational buy-in (25%) to participate in the DIBs.
Another high cost was the considerable time it took to negotiate contracts (31%). While
financial terms, such as return rates and capital protections, did often take time to negotiate
among the parties, stakeholders explained that outcome definition was more often a sticking
point. One Service Provider recalled, “when it comes to figuring out what the outcome metric is,
that was a big and difficult and long term conversation… there were some who thought that
longer-term… outcomes should be something that we look at… which ultimately [you] hope
you’re contributing to but in terms of defining outcomes didn’t make sense.” Outcome
agreement was frequently a difficult process, given all parties had to agree on the most
appropriate metric to use, as well as the most appropriate achievement targets. This required
finding a simple, measurable metric that was adequately ambitious – often challenging when
existing evidence was scarce.
Once terms were agreed, stakeholders also reported difficulties in contracting itself.
Often, this was due to each party having its own procedures and guidelines, and was especially
difficult when working with parties from different countries who were subject to different laws.
Contracting incompatibility was particularly challenging when working with larger organizations
which tended to be more bureaucratic and inflexible. As one Outcome Funder explained, it can
be difficult to “convince these folks to get on board and think outside the box of the traditional
[Institutional] processes and convince them that they’re not breaking rules.” Another difficulty
related to differences in the currencies used by the various stakeholders. For example, in the QEI DIB,
there was a “difference in the exchange rates between the currencies in the DIB (US Dollars,
British Pounds and Indian Rupees… [so outcome] partners have agreed to take on the risk…
[and] if the value of the India rupee appreciates against the US Dollar, BAT will be liable to
cover the funding gap” (Erskine 2021: 15).
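To make this currency exposure concrete, the brief Python sketch below works through a hypothetical outcome payment priced in rupees but budgeted in dollars. The function, variable names, and all figures are illustrative assumptions for exposition only, not actual QEI DIB terms.

def funding_gap_usd(outcome_payment_inr, contracted_rate, settlement_rate):
    """Return the USD shortfall if the rupee appreciates between contracting and payment.

    Rates are expressed in INR per USD; a lower settlement rate means the rupee
    has appreciated, so each rupee costs more dollars to purchase.
    """
    budgeted_usd = outcome_payment_inr / contracted_rate
    required_usd = outcome_payment_inr / settlement_rate
    return max(0.0, required_usd - budgeted_usd)

# Hypothetical example: a 75 million INR outcome payment budgeted at 75 INR/USD
# but settled after the rupee has appreciated to 70 INR/USD.
gap = funding_gap_usd(75_000_000, contracted_rate=75.0, settlement_rate=70.0)
print(f"Funding gap to be covered by the liable party: ${gap:,.0f}")  # ~$71,429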
Given the relatively high transaction costs of establishing these contracts, interviewees
noted that DIBs are better suited for larger projects and larger investments. One Investor
explained, “the big takeaway has been that we need these instruments to be larger. So maybe…
the next one is 25 million, because otherwise the fixed costs of managing something like this is
very high.” Stakeholders further suggested establishing strong and clear reasoning for why a
DIB would be better suited for funding the program rather than more traditional grants or
contracts. Stakeholders also viewed DIBs as only “another tool in the toolkit” (Outcome Funder)
and not appropriate for funding all programs.
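The Investor’s point about fixed costs can be illustrated with a minimal sketch: assuming a roughly fixed set-up cost, its weight falls sharply as deal size grows. The $1 million figure and the deal sizes below are assumptions chosen for illustration, not amounts reported by interviewees.

# Illustrative only: how a roughly fixed set-up cost weighs on DIBs of different sizes.
FIXED_SETUP_COST = 1_000_000  # assumed: feasibility study, intermediary and legal fees

for deal_size in (3_000_000, 10_000_000, 25_000_000):
    share = FIXED_SETUP_COST / deal_size
    print(f"Deal size ${deal_size / 1e6:.0f}m -> set-up costs are {share:.0%} of capital raised")

# Deal size $3m -> set-up costs are 33% of capital raised
# Deal size $10m -> set-up costs are 10% of capital raised
# Deal size $25m -> set-up costs are 4% of capital raised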
With regards to management challenges, stakeholders repeatedly highlighted the
difficulty of adjusting to new roles and organizational practices (44%). For instance,
stakeholders expected that the DIB model would prompt investors to become more involved in
management, outcome funders to take a less active role in monitoring, and service providers to
take a more flexible approach to service delivery. This was a challenge for outcome funders in
the Cameroon Cataract DIB, who shared that they were previously “used to a hands-on
management approach from their grants and acknowledged that allowing the bond manager to
take on the intermediary role between them and the service provider was a steep learning curve”
(Strid 2021: 23). Moreover, adjusting to these ways of working often required organizational
change. Such change was again difficult to facilitate in the larger, more established institutions,
as “we have strict rules that we have to follow to ensure transparency and competition in
procuring services… [that] seems to pivot us back into the traditional way of doing things”
(Outcome Funder). Additionally, as organizations took on new roles, relationships had to adapt
as well. These relational changes sometimes caused tension among DIB partners – especially
partners that had worked together previously through more traditional funding models. One
Service Provider expressed, “I think one of the valuable learnings for us… was learning how to
work with a trusted partner… but with a slightly different relationship.”
Stakeholders therefore noted the importance of considering organizational match for
various DIB roles. For instance, investors should be mission-driven, risk-taking organizations
and outcome funders and service providers should be flexible and learning-oriented
organizations. Interviewees also cautioned that organizations should consider whether they are
better suited to fill the role of investors or outcome funders, based in part on whether they
want to have more control and oversight. As one Intermediary shared about one of their Investor
organizations, “if they had come in as outcome funders, they would have been distanced from
the day to day. So the learnings, the best practices… would not be institutionalized the same
way.” In fact, a few of the stakeholder organizations had served different roles in different SIBs
and DIBs based on their fit for each program.
Another common theme throughout the interviews was insufficient institutional
readiness (19%) for DIB partners given a lack of prior experience working this way. As one
Intermediary explained, “I think also a little bit of a challenge is just to see how different
stakeholders who have not been part of a DIB get used to the workings of a DIB… it has
required extensive hand holding and capacity building for them to realize why the data matters,
why the data quality matters, and for them to actually incentivize different rank and file of the
organization to do this well… [including] frontline staff.” As such, many interviewees stressed the
value of hiring intermediaries with specialized expertise in results-based funding as many
organizations lack such in-house capacity. As one Service Provider shared, “the kinds of
expertise that I think you need to pull something like this off… I don’t think that any…
organization should kid themselves and think that it’s going to be easy to put something like this
together on their own. Having outside support is really important. And that was surprising to
me… I would call it even a shock.” Stakeholders also mentioned evaluation challenges (19%),
ranging from access to data, to evaluator technical expertise, to political risks of reporting. For
example, within the In Their Hands DIB, “because of unforeseen delays in the procurement
process, the impact evaluation… and data collection for the baseline started after 10 months after
the intervention… [so funders] decided to pay 100% of outcome payments according to service
delivery” rather than tying a portion of payments to contraceptive uptake (Indigo 2022b).
A more external difficulty that came up throughout the interviews was the impacts of
COVID on the DIB programs. As COVID unfolded, some DIBs were preparing to launch while
others were already underway. However, while stakeholders agreed that COVID posed a
challenge to implementation, most shared that partners remained committed to delivering the
programs given that many of these services became even more imperative during this time.
When asked whether the DIB model’s flexibility allowed the partners to adapt their programs in
response to COVID, stakeholder answers were mixed. While some stakeholders reported that the
flexibility allowed them to update their outcome targets and investment terms relatively quickly
and easily, others noted that most partner organizations had experience operating in difficult and
changing environments, and would have been able to pivot operations even without the DIB
structure. As one Outcome Funder reflected on the DIB’s response to COVID: “I think that has to
do with… the impact bond but … also the fact that it’s… ups and downs all the time. And having
to… be able to adapt quickly and adjust to the worst-case scenario, I think also plays a huge
factor… there was definitely already an adaptability factor there.”
Despite these various costs and challenges, many stakeholders shared that they were
‘surprised’ at how well the DIB model worked. As one Intermediary expressed, “I was really
surprised that it worked. And worked well. The ethics of it worked… the data, the flexible
funding, the power dynamics, the focus on MEL [monitoring, evaluation, and learning] – those
things work.” Stakeholders also noted that while other funding arrangements could
hypothetically produce the same benefits as DIBs – such as high levels of collaboration and
flexible service delivery – DIBs offer a convenient package that can be adapted to meet
program and partner needs.
Table 3.6: Start-Up Costs and Management Challenges
Transaction costs
• Start-up costs (4, 25%)
• Partner recruitment (4, 25%)
• Institutional buy-in (4, 25%)
• Contract negotiations (5, 31%)
Management challenges
• Changing roles and relationships (7, 44%)
• Evaluation difficulties (3, 19%)
• Institutional readiness (3, 19%)
• COVID (varied)
In classifying these costs and challenges using Agusti Strid and Ronicle’s (2021)
(D)REAM framework, the majority of obstacles relate to market capacity considerations. Start-up costs primarily denote the fees incurred in obtaining technical support, for instance in hiring
intermediaries to recruit DIB partners. Further, many stakeholders reported difficulties in
obtaining organizational buy-in given a lack of evidence on the effectiveness of DIBs.
Stakeholders also shared challenges in adjusting to new ways of working and new relationships,
partly due to a lack of institutional readiness from having no prior experience with RBF or DIBs.
The other three types of barriers were then all fairly equal in how frequently they posed obstacles
to stakeholders. Regulatory issues primarily related to organizations having unsupportive
budgeting, procurement, and/or contracting rules, which complicated partner recruitment as well
as contract negotiations. Such strict procedures made adjusting to organizational change around
new roles and responsibilities particularly difficult for larger, more bureaucratic outcome funder
organizations. Regarding economic and political context, one aspect which impacted contract
negotiations was the difference in currencies between countries, as inflation could affect the
value of agreed-upon outcome payments. Another challenge was around the political risks of
reporting outcomes, especially when DIB-funded programs were being compared to non-DIB
public programs. Political and economic contexts were also disrupted by external shocks, such as
the COVID pandemic. Finally, low data availability posed a few notable costs and challenges,
such as increasing start-up costs due to the need to conduct feasibility studies. It further posed
evaluation challenges around establishing cohort baselines and complicated contract
negotiations around outcome specification due to limited evidence on appropriate metrics, prices,
and targets for outcomes. Table 3.7 summarizes these classifications.
Table 3.7: Costs and Challenges Categorizations
Transaction Costs
• Start-up costs (4, 25%): Market Capacity; Data Availability
• Partner recruitment (4, 25%): Market Capacity; Regulatory Framework
• Institutional buy-in (4, 25%): Market Capacity
• Contract negotiations (5, 31%): Regulatory Framework; Economic and Political Context; Data Availability
Management Challenges
• Changing roles and relationships (7, 44%): Market Capacity; Regulatory Framework
• Evaluation difficulties (3, 19%): Market Capacity; Economic and Political Context; Data Availability
• Institutional readiness (3, 19%): Market Capacity
• COVID (varied): Economic and Political Context
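For readers who prefer to work with these categorizations programmatically, the short sketch below encodes Table 3.7 as a simple mapping with a lookup helper. The data structure and function names are illustrative rather than part of the (D)REAM framework; the category assignments follow the table above.

# A minimal encoding of Table 3.7: each cost or challenge mapped to the (D)REAM
# barrier categories it relates to. The structure and helper are illustrative;
# the category assignments follow the table above.
MARKET = "Market capacity"
REGULATORY = "Regulatory framework"
ECON_POLITICAL = "Economic and political context"
DATA = "Data availability"

CLASSIFICATION = {
    "Start-up costs": {MARKET, DATA},
    "Partner recruitment": {MARKET, REGULATORY},
    "Institutional buy-in": {MARKET},
    "Contract negotiations": {REGULATORY, ECON_POLITICAL, DATA},
    "Changing roles and relationships": {MARKET, REGULATORY},
    "Evaluation difficulties": {MARKET, ECON_POLITICAL, DATA},
    "Institutional readiness": {MARKET},
    "COVID": {ECON_POLITICAL},
}

def obstacles_by_category(category):
    """Return the costs/challenges associated with a given barrier category."""
    return [item for item, categories in CLASSIFICATION.items() if category in categories]

print(obstacles_by_category(MARKET))
# ['Start-up costs', 'Partner recruitment', 'Institutional buy-in',
#  'Changing roles and relationships', 'Evaluation difficulties', 'Institutional readiness']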
DISCUSSION
This study’s findings contribute to the growing DIB literature in a few key ways. First,
stakeholder perceptions on the DIB model’s benefits suggest how DIBs can contribute to broader
processes of social change, even after contracts conclude. Second, the types of costs and
challenges that stakeholders face during DIB implementation offer insights into potential future
DIB market development barriers and model applicability. In situating these findings within the
wider literature on SIBs, some interesting similarities and differences between DIBs and SIBs
emerge as well. Similarities primarily relate to comparable trends between the market
development of these two models, while differences largely correspond to contextual distinctions
between DIBs in LMICs and SIBs in HICs.
Broader Social Change Objectives
DIB stakeholder perceptions on the model’s benefits demonstrate notable alignments
with wider objectives around social finance and social impact. With respect to social finance
objectives, evidence largely conforms with Ormiston et al.’s (2020) findings on SIB stakeholder
expectations. DIB stakeholders frequently set out to test DIBs as a new financial model, with
longer-term goals of growing the social finance sector. For instance, investors aimed to
demonstrate the potential of generating returns in order to attract more investors to the DIB or
results-based finance space in the future. As one Investor shared, “The optics around the return
were important in the sense that… we needed to make a sort of 5% return to make it credible to
achieve that secondary impact that we were looking for, which is really about changing the way
finance flows in the sector.” Similar to Ormiston et al. (2020), these findings resonate with
Fraser et al.’s (2018) financial sector reform narrative. By “bringing actors into governance
regimes that have not historically been associated with social policy development or
interventions,” these social finance objectives further align with NPG (Ormiston et al. 2020:
238).
Moreover, the interest in testing and demonstrating the effectiveness of the DIB as a new
model reflects trends within early SIB development. A survey by Gustafsson-Wright et al.
(2015) found that the opportunity to test an innovative financial tool was among the top three
motivations for all SIB stakeholder types (outcome funders, service providers, investors, and
intermediaries). This motivation will likely become less relevant as evidence is accumulated on
DIB effectiveness, with the focus anticipated to shift from demonstrating the model’s viability to
demonstrating the model’s suitability in various contexts (Agusti Strid and Ronicle 2021). This
shift has already begun to occur within SIBs, with researchers and practitioners increasingly
calling for evaluations into SIB effects. By examining whether the SIB mechanism or the funded
interventions are driving outcome achievement, such evaluations are expected to generate a
better understanding around “what actually makes a difference, for whom, how, why and in
which circumstances” (Carter et al. 2018: 22).
On the other hand, one important consideration among DIB stakeholders in seeking to
grow the social finance market that was not relevant for SIBs was strategic government
engagement. While DIBs normally differ from SIBs in excluding (domestic) governments as
contractual partners, there was considerable variation in how this study’s projects approached
government engagement. In some DIBs, working with the government to build local public
capacity was an explicit goal; in other DIBs, government actors were even formally included in
DIB contracts outside of traditional outcome funder roles. This type of engagement diverges
from earlier findings that “the presence of local or national government is very limited” in DIBs
(Alenda-Demoutiez 2020: 898). It further highlights that a goal for many DIB stakeholders is to
build government demand to serve as outcome funders for future SIBs (Agusti Strid and Ronicle
2021).
Meanwhile, in contrast to the relative marginalization of the social impact discourse in
Ormiston et al.’s (2020) study, DIB stakeholders strongly highlighted the model’s potential in
pursuing social impact objectives as well. Interviewees saw DIBs as an opportunity to more
rigorously measure and manage performance in order to improve accountability and scale social
impact. Stakeholders further saw DIBs as promoting longer-term social change by offering
programmatic learnings for the sector, producing more sustainable outcomes for beneficiaries,
and spurring broader cultural effects around monitoring and evaluation. For example, one
Intermediary explained that “One of the shared objectives for the entire consortium… [was] if
they can demonstrate how outcomes under DIBs are better, cost price discovery is better... Some
of this will start influencing the national, the federal and the state government schemes… So
mainstreaming of outcome based financing principles was a key motivation.” Thus, this study’s
findings also seem to align with NPM principles as well as reflect notable themes from Fraser et
al.’s (2018) public sector reform narrative (Ormiston et al. 2020).
In part, this increased emphasis on social impact within DIBs could reflect the fact that
LMICs often face greater challenges around data availability. For instance, governments and
NGOs may lack funding for large-scale data collection and impact evaluations, resulting in a
scarcity of publicly available data (Clarke et al. 2018; Muñoz and Kimmitt 2019). As such, a
motivation among DIB stakeholders not frequently highlighted in prior literature was the use of
DIBs in addressing a lack of evidence on social needs. This focus on scaling social impact could
also be due to the fact that LMICs face bigger gaps in social services in terms of breadth and
depth, thus requiring larger-scale solutions (Gustafsson-Wright et al. 2015). For instance, while
DIBs account for just 9% of all impact bonds worldwide, they represent 47% of beneficiaries
(Gustafsson-Wright and Osborne 2020a). In contrast, SIBs in HICs “have focused on providing
small-scale interventions to specific target groups or typically marginalised populations” (Ravi et
al. 2019: 30).
Nonetheless, the view that DIBs are more beneficial in scaling proven interventions
rather than promoting innovation again parallels features of early SIBs. For example, in their
analysis of first-wave SIBs in the UK and US, Olson et al. (2022) found that 64% of UK SIBs
and 91% of US SIBs supported the scale-up as opposed to the pilot testing of social
interventions. This contrasts with some claims within both DIB and (especially) SIB literature
that impact bonds have considerable potential in facilitating experimentation within service
delivery (e.g. Muñoz and Kimmitt 2019; Loraque 2018; Gustafsson-Wright et al. 2015).
However, Agusti Strid and Ronicle (2021: 47) predict that “once the mechanism has been tested
and further established… more complex and innovative interventions can be developed through
SIBs.” Recent developments in SIB design seem to support this claim, as more recent UK SIBs
have begun incorporating a much stronger emphasis on co-production in facilitating early-stage
innovation (e.g. Fox et al. 2022b). Whether this trend is replicated within DIB practice remains
to be seen.
In addition to providing evidence on how DIBs align with broader objectives, the study’s
findings on the model’s most prominent inputs, outputs, outcomes, and impacts also offer
insights into a possible emergent DIB theory of change (Gugiu and Rodríguez-Campos 2007).
As visualized in figure 3.1 (see Appendix 3.3 for a detailed logic model), this theory could be
that the DIB’s provision of outcomes-focused risk capital enables partners to take an adaptive
data-driven approach to service delivery. In turn, this approach generates evidence of blended
returns and increases organizational capacity, which helps spur cultural shifts towards more
outcomes-based working (and funding) in the future. This process is similar to other
hypothesized DIB and SIB theories of change from Oroxom et al. (2018) and Gustafsson-Wright
and Osborne (2020b), respectively. Both of their theories propose that the model’s outcomes
focus, up-front funding, risk incentives (financial and reputational), and private sector
engagement fosters greater innovation, performance management, capacity building, and
collaboration. Notably, however, the present study provides additional insights into the potential
longer-term outcomes and impacts of using DIBs, while the other two models tend to focus more
on the tool’s inputs and outputs. The newly proposed model further specifies the role of
innovation as relating more to the use of adaptive management in helping to scale proven
interventions as opposed to the creation of wholly new services (Gustafsson-Wright et al. 2015).
Figure 3.1: Possible DIB Theory of Change
[Figure: Outcomes-Focused Risk Capital → Collaborative and Adaptive Data-Driven Delivery → Evidence (of Blended Returns) and Capacity → Cultural Shift to Outcomes-Based Working]
DIB Market Development Barriers
The analysis also uncovered a number of transaction costs and management challenges in
implementing DIBs. The classification of these obstacles following Agusti Strid and Ronicle’s
(2021) DREAM model helps suggest whether they are likely to persist or decrease as the DIB
market continues to grow. As the results above demonstrate, many costs and challenges related
to market capacity. For instance, there were high start-up fees in obtaining technical support; it
took considerable time and effort to recruit DIB partners and gain institutional buy-in; and
stakeholders often struggled to adjust to new ways of working. These market capacity challenges
should decrease over time as DIB partners gain experience using these tools and build
institutional knowledge which they can apply in future DIBs (Agusti Strid and Ronicle 2021). As
one Service Provider expressed: “I think of the long term [goals]… [was] learning how these
things work and then positioning ourselves for similar types of opportunities in the future.”
Further, as stakeholders share lessons learned with broader networks, this should encourage
future participation of external stakeholders (Agusti Strid and Ronicle 2021).
In fact, such market capacity growth has already been happening in India as its DIB
market matures. As India’s second education DIB, QEI was in some ways a follow-on project
from the Educate Girls DIB (Gustafsson-Wright et al. 2022). As such, the stakeholders in the
QEI DIB were able to build off of the experiences from the Educate Girls DIB. For instance, an
FCDO evaluation found evidence that “learning had been taken forwards from the Educate Girls
DIB [into QEI] to improve design and increase efficiency in transactions,” including “involving
an outcome evaluator earlier in the project and allowing flexibility within the contracting
process” (Erskine 2021: 29, 2). Nonetheless, some questions remain over the DIB model’s
scalability, particularly around whether DIBs will be able to attract more mainstream investors
and whether there is sufficient supply of qualified service providers (Agusti Strid and Ronicle
2021).
To some extent, challenges related to low data availability should also decrease as the
DIB market grows. As a greater number of DIBs conclude and evaluation reports are published,
this should help contribute to evidence-building on cohort characteristics and intervention
effectiveness which should support future DIB design (Maier et al. 2018). Such evidence
generation should help answer questions about DIB terms such as: “Which indicators? How do
we define those indicators? How should we set targets on those indicators? Because we still
didn't have good quality or robust data of what is the price per outcome” (Intermediary).
Especially if DIBs are launched in the same geographic and policy areas, stakeholders should
face fewer challenges around selecting appropriate outcome metrics, targets, and costs (Agusti
Strid and Ronicle 2021). However, a problem more likely to persist is access to socioeconomic
data, which could continue to pose evaluation issues unless standards are improved – especially
when DIB interventions are run through public institutions or outcomes are validated using
government-enforced standards (Agusti Strid and Ronicle 2021).
On the other hand, unsupportive regulatory environments and instability in economic and
political conditions are likely to pose more persistent structural barriers. In terms of regulatory
challenges, these primarily related to the rigid policies within larger outcome funder
organizations. For instance, many DIB outcome funders have found it difficult to become more
hands-off during design and delivery due to institutional mandates on procurement and
transparency. As one Outcome Funder shared: “the entity receiving it needs to show that they
have deployed this funding using competitive selection with a focus on inputs and cost.”
Similarly, interviewees noted additional challenges around structuring contracts with parties
from different countries, given considerable differences in the legal regulations and contracting
procedures to which they are subject. Thus far, DIB stakeholders have addressed such obstacles
on a project-by-project basis, such as by having different outcome funders pay for outcomes in
different years to overcome annual restrictions in budgeting (Agusti Strid and Ronicle 2021). If
organizations do not adjust such procedures to be more supportive to DIBs in the longer run,
DIB transaction and start-up costs will remain high. However, creating more supportive
regulatory environments is likely to require policy or organizational change, which could take
considerable time and advocacy efforts (Agusti Strid and Ronicle 2021).
Finally, hardest to overcome are economic and political obstacles, such as uncertainty
around exchange rates and inflation, political sensitivities towards outcome validation and
reporting, and external shocks such as COVID. Some DIBs in this study even experienced
volatility stemming from threats of violent outbreaks, with the F4J DIB in Palestine and the
ICRC DIB in Nigeria, Mali, and the DRC operating in areas experiencing high levels of fragility
and conflict, respectively (FMO 2019; ICRC 2022). In cases of high instability, DIB partners
could turn to intermediaries to play a larger role in building stakeholder confidence in
participating in DIBs when trust in markets and institutions is lower (Agusti Strid and Ronicle
2021). Stakeholders could also more strategically plan DIBs around election cycles, as turnover
in public servants or leadership could affect government priorities and support for DIB initiatives
(Agusti Strid and Ronicle 2021). As one Service Provider reflected: “I think the lesson learned is
thinking beforehand about what those political risks might be… if you're working to push
forward on a specific goal or strategy, you don't have any control over how the government is
going to handle that, [and] it could impact… what you're getting paid on.” Nonetheless, these
types of conditional factors are likely to remain uncertain and unstable, as they depend on
political systems which are hard to change (Agusti Strid and Ronicle 2021).
Given all these possible costs and complications in launching DIBs, organizations should
seriously consider whether the DIB model is an appropriate mechanism given the political
economy within their service ecosystem. To help aid in this decision, potential DIB partners
should ask themselves questions such as: Have RBF initiatives been launched in the country or
region before? Is there a strong and active impact investing sector? Does the civil society have a
history of being involved in social innovation initiatives? (Agusti Strid and Ronicle 2021).
Stakeholders should also determine whether the intervention itself is suitable to the DIB
approach; for instance, as one Service Provider suggested, if “you were trying to solve a
problem, you had some ideas on how that problem might be solved, that you knew that there was
going to need to be some innovation and therefore some risk.” Stakeholders could also think
about launching DIBs using a relatively newer approach: outcome funds (Gustafsson-Wright and
Osborne 2020b). Outcome funds, an approach becoming increasingly popular in the UK, “pool
several different impact bond agreements through streamlined and shared contract templates,
metrics, and evaluation systems,” thereby spreading costs among a wider number of projects
(Gustafsson-Wright and Osborne 2020b: 25). For example, the Sierra Leone Education
Innovation Fund recently launched three DIBs in the country, all of which share the same five outcome funders (including FCDO) as well as Bridges as an intermediary (Indigo
2023). Regardless of the exact approach taken, what appears key to continued DIB market
development is carefully adapting the model based on each project’s specific needs – be they
related to stakeholder objectives or contextual constraints.
CONCLUSION
Overall, this study identified several important stakeholder considerations of using the
DIB model, including motivations, perceptions on the model’s costs and benefits, and
recommendations for future DIB use. Regarding contextual factors, interviewees cited limited
resources, limited evidence on what works, persistent social challenges and service gaps, and the
opportunity to test a new model. Interviewees were also incentivized to try DIBs given the
model’s transfer of risk (with payment tied to outcomes), potential to generate financial returns,
and provision of up-front flexible capital. In terms of benefits, interviewees shared how the
model prompted them to engage in adaptive performance management, monitoring and
evaluation, and capacity building. As a result, interviewees saw the model as providing evidence
on outcomes and costs, principal repayment and returns, increased organizational credibility and
accountability, and greater social impact. Lastly, in terms of longer-term impacts generated
through the DIBs, interviewees highlighted cultural shifts towards results-based working, growth
in the social finance market (and future funding opportunities), and more sustainable programs
and policies. Along with these benefits, interviewees encountered many start-up costs and
management challenges related to institutional buy-in, protracted negotiations, evaluation
difficulties, and organizational readiness and change.
To situate these findings within the broader SIB literature, the paper further classified
DIB benefits using Ormiston et al.’s (2020) conceptualization of social finance, collaboration,
and social impact objectives. Results showed that DIB benefits primarily corresponded to
objectives around social finance and social impact, aligning with Fraser et al.’s (2018) financial
and public sector reform narratives. As with Ormiston et al. (2020: 244), findings also suggested
that DIBs appear to “manifest as both NPM and NPG,” signifying that these might be
complementary rather than competing paradigms. Another possible explanation is that neither
NPM nor NPG is sufficient on its own for analyzing the various applications of DIBs (and SIBs)
in practice (Albertson et al. 2020). In seeking to fill this possible theoretical gap, an alternative
framework advanced by recent SIB scholarship is social innovation (e.g. Albertson et al. 2020;
Fox et al. 2022b; Olson et al. 2022). Although it is an emergent theory with many internal
debates, social innovation appears to “capture many of the features of SIBs, particularly the
opportunity for stakeholders in different sectors… to work collaboratively to address pressing
social challenges and deliver social outcomes in new ways” (Olson et al. 2022: 7). Future
research could thus assess the analytic value of studying DIBs through the lens of social
innovation, such as by evaluating DIBs against more utilitarian, market-based conceptions of
social innovation versus more radical, democratic conceptions (Ayob et al. 2016; Beckman et al.
2023).
In classifying DIB costs and challenges according to Agusti Strid and Ronicle’s (2021)
DREAM model, the analysis also uncovered important insights which could help inform
continued DIB development. For instance, while many costs and challenges related to market
capacity, such as high technical assistance fees and a lack of institutional readiness, these
difficulties are expected to decrease as DIB partners gain experience and share their learnings
within their networks. Similarly, problems around data availability should decrease, at least to
some extent, as DIBs generate evidence on cohorts and interventions. Conversely, DIBs are
likely to face more persistent costs and challenges around unsupportive regulatory frameworks
and unstable economic and political environments due to the considerable time and effort
required to change bureaucratic institutions and political systems. Thus, potential DIB partners
must take all of these factors into account when designing and adapting DIBs in order to fit the
model to the specific needs and constraints for each project.
This study’s findings have other implications for practice and research as well. In terms
of practice, stakeholders should consider using DIBs to scale proven interventions that have
room for improvement, especially when participating in their first DIB or when implementing
DIBs in an untested context. Stakeholders should also try to recruit partners that are a good
cultural fit for the DIB, for instance mission-driven, flexible, and learning-oriented organizations
with expertise in social finance or specific geographies and policy sectors. Finally, and most
crucially, stakeholders should establish solid reasoning for why a DIB would be the most
appropriate funding mechanism for the project at hand.
This study also suggests a few fruitful areas of further inquiry. First, it would be
beneficial to conduct additional studies which include a diverse range of both DIBs and SIBs in
LMICs, as comparisons could help shed light on the importance of social, economic, and
political factors in each model as well as the role of the government as a direct versus indirect
actor. Second, future investigations could delve deeper into the political economy of each DIB
location to better understand the impacts of local context on DIB design and purpose. Third, as
the DIB market is still relatively new and small, motivations, expectations, and experiences
could shift as the model is adapted in practice. In fact, this has begun to happen with the SIB
model, as recent SIBs have looked considerably different from early-wave SIBs (Fox et al.
2022b; Olson et al. 2022). Thus, it would be advantageous for research to track how DIB
stakeholder perceptions change over time as the model evolves as well.
CONCLUSION
In the more than ten years since SIBs were first introduced as a form of OBC, research has
generated a number of findings. These findings include insights into stakeholder motivations,
common implementation challenges, and helpful practices around contract negotiation,
performance management, and evaluation. However, to date most SIB studies have “addressed
SIBs in a single country and/or single sector” and few studies have collected primary data “such
as semistructured interviews with SIB stakeholders” (Broccardo et al. 2020: 1322-1323). Studies
which did conduct analyses across countries and sectors often used a case study approach
drawing on secondary data (Broccardo et al. 2020). This dissertation is thus unique in offering an
overarching perspective on SIBs as applied in a variety of contexts, including geographically and
economically diverse countries, using both innovative methods and new primary data. The
following sections summarize the key take-aways of the dissertation’s three essays, as well as the
cross-cutting themes that emerge when considering these essays as a whole.
MAIN TAKE-AWAYS
The dissertation’s essays examined the SIB model from various perspectives: social
innovation, systems change, and international development. Findings from these essays make
notable contributions to the growing SIB literature, as well as offer insights for practitioners and
policymakers. They also reflect the need for continued research on SIBs as OBC tools being used
in a variety of contexts. Summaries of each essay’s main findings, implications for practice, and
opportunities for future research follow.
Key Findings
The first article, “Are Social Impact Bonds an Innovation in Finance or Do They Help
Finance Social Innovation?” analyzed SIBs through the theory of social innovation. The data
revealed that around two-thirds of US SIBs, but only one-sixth of UK SIBs, received up-front
funding from private investors. The analysis also found that the majority of SIBs, particularly
within the US, funded scaled programs which aimed to test the effectiveness of social
interventions as opposed to pilot programs for testing feasibility. The results provided limited
evidence that: 1) SIBs attracted private capital for the production of social goods and, 2)
facilitated the pilot testing and scaling-up of social interventions. As such, this study brings into
question the claims that SIBs are both an innovative financial instrument and a policy tool which finances other social innovation processes.
The dissertation’s second article, “Can Social Outcomes Contracts Contribute to Systems
Change? Exploring Asset-Based Working, Innovation, and Collaboration,” explored how SIB
design could support systems change by evaluating the case of GM Homes. The analysis
uncovered compelling evidence that GM Homes helped generate systems-level effects,
particularly in the areas of housing provider policies and dual diagnosis services. Results further
suggested that asset-based working was the most influential causal mechanism, but that adaptive
management (rather than innovation) and large-scale collaborative working were vital to
enabling this. These findings lend support to the claims that SIBs, and SOCs more broadly, can
be designed in ways which promote more transformative change.
The third and final article, “Development Impact Bonds as Results-Based Financing:
Stakeholder Motivations, Expectations, and Experiences,” examined the application of DIBs in
LMICs for international development purposes. The study generated important insights into
stakeholder perceptions on the DIB model, including the types of problems that DIBs can help
address and the model’s key design features. Stakeholders also identified a number of benefits
related to the model’s immediate outputs, short-term outcomes, and longer-term impacts.
Further, stakeholders noted institutional barriers and start-up costs of implementing the model, as
well as a number of lessons learned for future DIBs. In sum, findings help provide a clearer
picture of how DIBs fit into broader social change goals, as well as potential tradeoffs when
using these models compared to other financial tools.
Implications For Practice
The results of the three studies provide some helpful recommendations for practice.
Through its mixed methods analysis, the first essay proposes that SIBs could be more effective at
facilitating social innovation in the following three ways: 1) involving more mainstream private
investors (as opposed to social investors), 2) funding more experimental pilot programs (rather
than scaling proven interventions), and 3) incorporating more co-productive processes (by giving
greater voice to service users). Given that only 3 projects in the US, out of 47 UK and 22 US
SIBs, specifically mentioned employing an RCT, the paper also notes that SIBs could increase
their use of rigorous evaluations.
In addition, drawing on insights from the GM Homes case study, the second essay
suggests that SIB funding could better support systems change efforts by utilizing longer-term
contracts as well as drawing on interdepartmental budgets. SIBs could also engage
representatives from more sectors earlier on in the design process, such as through steering
groups or advisory boards. Such wider intersectoral involvement, both financially and
operationally, would help systems actors to create more joint understanding and ownership in
addressing systemic challenges.
Further, based on interviewee recommendations, the third essay notes that DIBs might be
most appropriate for funding development initiatives when scaling proven programs that have
room for growth and improvement. Findings also highlight the importance of building a
partnership among organizations that are a good match for the DIB, for instance by being
mission-driven, flexible, and learning-oriented. Above all, interviewees stressed the need for
stakeholders considering launching a DIB to establish clear reasoning for why this model would
be the best funding mechanism for the project compared to alternatives.
Opportunities For Research
The dissertation’s findings also provide suggestions for fruitful areas of continued
research. While the first paper assesses the presence of investors from different sectors, follow-on studies could measure the amount of funding provided by each investor to compare the
relative size of contributions from different sectors. Scholars could also extend the analysis to
include the earlier and later stages of the social innovation process – namely, co-production
(during design) and diffusion. Moreover, as this study focused on early-wave SIBs in the US and
UK, future studies could extend the analysis to the more recent application of SIBs globally,
including by analyzing DIBs in LMICs.
In the second paper, given that process tracing is sensitive to case-specific contextual
factors, the findings around asset-based working, innovation, and collaboration as causal
mechanisms have somewhat limited generalizability (Trampusch and Palier 2016; Waldner
2012; Blatter and Haverland 2014; Beach 2017). Thus, future research should continue to test
and refine this conceptual framework using other case studies. It would be particularly
interesting to compare findings from cases in other regions and policy areas, as well as cases
which take the form of DIBs as opposed to SIBs.
Lastly, as the third paper focused specifically on DIBs in LMICs, further studies could
expand this analysis by comparing SIBs and DIBs in LMICs to better understand the
government’s role as a direct versus indirect partner in the model, respectively. As the DIB
market is still relatively new and small, stakeholder motivations, expectations, and experiences
could shift as the model is adapted in practice. Thus, it would be beneficial for research to track
how DIB stakeholder perceptions change over time as well.
CROSS-CUTTING THEMES
As noted above, this dissertation provides a unique perspective on SIBs by analyzing the
tool using both novel methods and data as well as by examining SIBs from a range of countries
and sectors. The dissertation’s three essays include a mixed-methods analysis of 69 early-wave
SIBs in the US and UK; an in-depth process tracing investigation of a UK rough sleeping SIB;
and a thematic analysis of interview data from seven recent DIBs in LMICs. In considering these
three essays as a whole, a few major cross-cutting themes emerge: 1) the role of data-driven
delivery within SIBs, 2) the ability for SIBs to scale social impact within their ecosystems, 3) the
ongoing growth of the SIB and social finance markets, and 4) the alignment between SIBs and
social innovation.
Data-Driven Delivery
The first theme among the three essays was the importance of taking a data-driven
approach to service delivery within the SIBs. One such approach was adaptive management,
which played a vital role in facilitating a process of continuous learning and improvement within
SIBs. Adaptive management, a concept that has its roots in environmental resource management,
“involves an iterative cycle of design, implementation, reflection and adaptation, which is
supported both by system monitoring and stakeholder engagement” (Prieto Martín et al. 2020:
10). In contrast to predictive management approaches, such as command and control, adaptive
management entails flexibly managing and adjusting programs based on emergent insights from
data (Prieto Martín et al. 2020). Adaptive management was not only seen as one of the causal
mechanisms catalyzing systems change within the GM Homes SIB in Essay 2, but was also seen
by DIB stakeholders as a prominent benefit of using the model to support development initiatives
in LMICs in Essay 3.
A related approach is the social innovation process discussed in Essay 1. Key to effective
pilot testing and scaling – the focus of Essay 1 – is assessing the effectiveness of different
solutions, otherwise “how do we know whether a social initiative has the promised positive
impact?” (Henriques and Beckman 2022: 158). As such, SIBs which seek to promote innovative
solutions for social service delivery must implement some form of performance management or
evaluation to improve interventions and build evidence on what works (Maier et al. 2018).
However, SIB stakeholders often run into difficulties when attempting to conduct rigorous
evaluations, for instance due to challenges in accessing and sharing data (Heinrich and Kabourek
2019; Avila 2018). As a result, SIB evaluations to date have not been as rigorously conducted as
was initially expected. For instance, as found in Essay 1, only three SIB profiles (all US
projects) out of 69 SIBs in the US and UK explicitly mentioned employing an RCT to evaluate
outcome achievement. The GM Homes case in Essay 2 further demonstrates that there also
remains considerable potential in designing SIBs to support earlier-stage innovation, such as by
harnessing more data on service user insights through greater use of co-production (Fox et al.
2022b).
Scaled Impact and Systems Change
The second theme across the essays was the potential for SIBs to spur broader impacts
within their service systems. For example, Essays 1 and 3 show that a key goal of SIBs and DIBs
is to expand and replicate effective models in order to scale social impact. The dissertation also
provides some new insights into the possible link between SIB evaluations and evidence-based
policymaking. Evidence-based policymaking aims to achieve "materially better outcomes by
targeting funds to those interventions that have been shown to be highly effective in achieving
desired results" (VanLandingham and Silloway 2016). Given that SIBs inherently require some
form of outcome assessment to determine investor repayment, proponents have suggested SIB
evaluations could help inform future policy and funding decisions (Fry 2019; Fitzgerald et al.
2020). However, “the path from generating evidence to making policies is not as linear and
technically rational as proponents of SIBs have often implied” (Maier et al. 2018: 1350). As
found in Essay 2’s investigation into the GM Homes case, stakeholders felt that policy learnings
were being shared more informally within organizations and networks as opposed to more
formally through evaluation reports. This suggests that stakeholders may need to more seriously
consider how they design SIB evaluations moving forward if influencing policymaking is a goal
– such as by employing more learning-oriented evaluations which seek to “extract broader
lessons around longer-term programme design” (Savell and Heady 2016: 6).
The essays also shed light on the ability for SIBs to generate system-level effects. As
defined by Gustafsson-Wright et al. (2020: 4), SIB ecosystem effects are impacts “beyond the
outcomes achieved for the beneficiaries,” and include the effects of the SIB mechanism “on
social services financing and delivery” as well as “lessons learned or innovations developed
within and around the impact bond structure.” In Essay 2, the GM Homes SIB was able to
catalyze systems change through a combination of asset-based working, adaptive management,
and collaboration. Stakeholders saw the commitment of housing providers to taking a more
personalized asset-based and trauma-informed approach, even after the SIB ended, as an
example of relatively enduring changes in organizational practices and mindsets. In addition, in
Essay 3 stakeholders reported that one of the primary longer-term benefits of the DIB model was
the ability to spur cultural changes within organizations and sectors to work in a more outcomes-based way.
Market Growth
The third theme is how both the SIB market and broader social finance market have
grown since the emergence of SIBs. Altogether, the essays demonstrate that the SIB model has
undergone considerable evolution as the market has continued to grow and SIBs have been
implemented in a greater number and variety of contexts. For instance, newer SIB variations,
such as the GM Homes SIB in Essay 2, appear to be taking a more innovative approach to
service delivery than early wave SIBs, such as the US and UK SIBs in Essay 1. These variations
represent a “new wave of SIBs emerging that put people at the heart of service design and
delivery; create the conditions for learning; and encourage wider systems change,” which Baines
et al. (2021: 7) call ‘SIBs 2.0.’ As seen in Essay 3, it appears that similar trends are emerging
with the more recent DIB market, as stakeholders primarily see the model as a tool for scaling
interventions rather than for facilitating innovation. As DIB implementation increases and
stakeholders build up the evidence base on the model’s effectiveness, DIBs may
begin to incorporate more innovative elements as well. However, as Agusti Strid and Ronicle
(2021: 67) explain, “as the SIB market grows, data availability, stable political and economic
context and a supportive regulatory environment become increasingly important.” Addressing
such systemic barriers will be particularly important in LMICs, which may face greater
regulatory and legal issues as well as limited access to public data (Muñoz and Kimmitt 2019).
In addition to the growth in the SIB market itself, SIBs can also provide insights to
inform the design of other social finance tools. Although many stakeholders across Essays 2 and
3 saw SIBs as a helpful funding mechanism with many benefits, they also stressed that SIBs are
not the only (or best) option for every situation. In fact, many stakeholders stated that other
models, such as grants, could be designed to incorporate beneficial SIB elements, for instance by
allowing for greater service provider autonomy and operational flexibility. As well summarized
by Williams (2020: 917): “the tools, logics, and practices that have developed around SIB
transactions, including the emphasis on data, tightly defined target populations, and a focus on
outcomes-based contracting and commissioning, are likely to have a much larger and enduring
impact on the social and public services sector.” Therefore, even if SIB market development
slows and other social finance tools emerge which replace them, it is likely these tools will build
upon the benefits of the SIB model while seeking to reduce their costs. Nonetheless, many
stakeholders underlined that they still found SIBs particularly appealing as they offer a ready-made package of all of these elements which can then be adapted to specific needs and contexts.
Social Innovation
The final cross-cutting theme, with notable ties to the three preceding themes, is the
alignment between SIBs and social innovation. The key theoretical framework in Essay 1, social
innovation is “a novel process or product that intends to generate more effective and just
solutions to address complex social problems for collective gain” (Beckman et al. 2023: 4). As a
process, social innovation involves a non-linear progression between co-production, pilot testing,
scaling, and diffusion (Beckman et al. 2020). Thus far, SIB research has tended to suggest that
SIBs are primarily useful for the scaling stage. For instance, Gustafsson-Wright et al. (2015: 43)
found that “SIBs have not supported many highly innovative interventions but some have
supported innovations that are being delivered in different ways or to different populations.”
Essays 1 and 3 uncovered similar trends, with evidence indicating that SIBs are being used to
scale up existing models rather than pilot more experimental models. In addition, many scholars
remain critical of the potential for SIBs to engage in co-production and systems change (Sinclair
2021; Andreu 2018; Broccardo et al. 2020). Critics argue that SIBs focus on individual issues
rather than underlying systemic causes (Andreu 2018; Broccardo et al. 2020), and that SIBs view
service users “as potential revenue sources rather than as conscious agents and citizens” (Sinclair
et al. 2021:13). Essays 2 and 3 offer some encouraging findings to the contrary. In Essay 2, the
GM Homes SIB was able to produce wider system-level effects, supported in part by the use of
co-productive, asset-based working and the pilot testing of new services. Additionally, Essay 3
found that DIB stakeholders saw the tool as promoting longer-term impacts by “offering
programmatic learnings for the sector, producing more sustainable outcomes for beneficiaries,
and spurring broader cultural effects around monitoring and evaluation.” Therefore, it appears
that the potential to design SIBs which enable the co-production, piloting, and diffusion of social
innovation is continuing to evolve along with the model.
The dissertation also speaks to the SIB model as a social innovation product itself. As
described in Essay 1, social innovation products are “new ideas (products, services and models)
that simultaneously meet social needs and create new social relationships or collaborations”
(Murray et al. 2010: 3). Such products include social finance tools which generate social returns
along with potential financial returns (Nicholls and Emerson 2015). By offering blended returns,
proponents claim that SIBs could increase access to private capital for social purposes, which has
historically been limited due to high investment risks (Gruyter et al. 2020; Ormiston et al. 2020;
Wilson 2016). In addition, by tying investor repayment to the achievement of specific social
outcomes, SIBs are expected to improve the efficiency and effectiveness of service delivery by
incentivizing investors to support performance measurement and management efforts (Edmiston
and Nicholls 2017; Martin 2015; Social Finance 2009). Essay 1 finds only moderate evidence
that SIBs are behaving as social finance tools, given that just two-thirds of US SIBs and one-sixth of UK
SIBs involved private sector investors. In contrast, Essay 3 finds that DIB stakeholders relate
many expected benefits of using the model to social finance objectives. For instance,
stakeholders cite possible financial returns and reputational gains as motivating factors for using
the DIB model; they also report growth of the social finance market as a major longer-term goal.
While these three essays as a whole provide mixed evidence that SIBs support the social
innovation process and serve as innovative social finance tools, they nonetheless present a
number of promising practices which suggest that SIB design could adapt to better serve these
purposes going forward.
FINAL THOUGHTS
Although SIBs were first launched over a decade ago in 2010, the SIB literature is still
“relatively new and growing” (Broccardo et al. 2020: 1318), and there remains great need for
research which offers empirical evidence on the appropriateness and effectiveness of SIBs across
various contexts (Millner and Meyer 2022; Fitzgerald et al. 2020). To help grow this body of
evidence, this dissertation set out to address three of the ‘five big SIB questions’ posed by
Fitzgerald et al. (2020: 88). Not only should future studies continue to answer these questions,
they should update this research agenda as the SIB model evolves in practice. For instance, as the
new wave of SIBs grows, scholars may need to consider: what are the big SIB 2.0 questions?
REFERENCES
Abercrombie, R., Harries, E., & Wharton, R. (2015). Systems change: A guide to what it is and
how to do it. New Philanthropy Capital.
Agusti Strid, A., & Ronicle, J. (2021). Social Impact Bonds in Latin America-IDB Lab’s
pioneering work in the region: lessons learnt. Technical Note No. IDB-TN-2087. Inter-American Development Bank.
Albertson, K., Bailey, K., Fox, C., LaBarbera, J., O’Leary, C., & Painter, G. (2018). Payment by
Results and Social Impact Bonds: Outcome-based payment systems in the UK and US.
Bristol: Policy Press.
Albertson, K., Fox, C., O’Leary, C. & Painter, G. (2020). ‘Towards a Theoretical Framework for
Social Impact Bonds’. Nonprofit Policy Forum, 11(2).
Alenda-Demoutiez, J. (2020). ‘A fictitious commodification of local development through
development impact bonds?’. Journal of Urban Affairs, 42(6), 892–906.
Andrade, M. (2016). ‘Tackling health inequalities through asset-based approaches, co-production
and empowerment: ticking consultation boxes or meaningful engagement with diverse,
disadvantaged communities?’. Journal of Poverty and Social Justice, 24(2), 127-141.
Andreu, M. (2018). ‘A Responsibility to Profit? Social Impact Bonds as a Form of
“Humanitarian Finance”’. New Political Science, 40(4), 708–726.
Anyiam, F., Lechenne, M., Mindekem, R., Oussigéré, A., Naissengar, S., Alfaroukh, I.O., Mbilo,
C., Moto, D. D., Coleman, P. G., Probst-Hensch, N., Zinsstag, J. (2017). ‘Cost-estimate and
proposal for a development impact bond for canine rabies elimination by mass vaccination
in Chad’. Acta Tropica. 175, 112–120.
Avila, A. R. (2018). ‘Rocky mountain evidence: Using data to drive Colorado state
government’. Pub. Admin. Rev., 78, 156.
Ayob, N., Teasdale, S., & Fagan, K. (2016). ‘How social innovation ‘came to be’: Tracing the
evolution of a contested concept’. Journal of Social Policy, 45(4), 635-653.
Baines, S., Fox, C., & Painter, G. (2021). Social Impact Bonds 2.0: The First Ten Years, The
Next Ten Years. Retrieved December 1, 2021, from https://socialinnovation.usc.edu/wp-content/uploads/2021/05/Editorial-Next-10-years.pdf
Beach, D. (2016). ‘It's all about mechanisms–what process-tracing case studies should be
tracing’. New Political Economy, 21(5), 463-472.
Beach, D. (2017). Process-tracing methods in social science. In Oxford research encyclopedia of
politics.
Beckman, C., Painter, G. and Rosen, J. (2020). The Social Innovation Process. Retrieved January
10, 2020, from https://socialinnovation.usc.edu/research/social-innovation/
Beckman, C., Rosen, J., Estrada-Miller, J., & Painter, G. (2023). ‘The Social Innovation Trap:
Critical Insights into an Emerging Field’. Academy of Management Annals 17(2), 684–709.
Befani, B., & Mayne, J. (2014). ‘Process tracing and contribution analysis: A combined
approach to generative causal inference for impact evaluation’. IDS bulletin, 45(6), 17-36.
Belt, J., Kuleshov, A., & Minneboo, E. (2017). ‘Development impact bonds: Learning from the
Asháninka cocoa and coffee case in Peru’. Enterprise Development & Microfinance, 28(1-
2), 130–144.
Bennett, A. (2010). Process tracing and causal inference. In Rethinking Social Inquiry. Rowman
and Littlefield.
Blatter, J., & Haverland, M. (2014). Case studies and (causal-) process tracing. In Comparative
policy studies. Palgrave Macmillan, London.
Blickem, C., Dawson, S., Kirk, S., Vassilev, I., Mathieson, A., Harrison, R., Bower, P. and
Lamb, J. (2018). ‘What is asset-based community development and how might it improve the
health of people with long-term conditions? A realist synthesis'. Sage Open, 8(3),
2158244018787223.
Bollag, B. (2017, September 8). ICRC launches World’s first Humanitarian Impact Bond.
Retrieved April 27, 2023, from https://www.devex.com/news/icrc-launches-world-s-first-humanitarian-impact-bond-90981
Bortagaray, I., & Ordóñez-Matamoros, G. (2012). ‘Introduction to the Special Issue of the
Review of Policy Research: Innovation, Innovation Policy, and Social Inclusion in
Developing Countries’. The Review of Policy Research, 29(6), 669–671.
Bovaird, T., & Davies, R. (2011). Outcome-based service commissioning and delivery: does it
make a difference?. In New steering concepts in public management (Vol. 21, pp. 93-114).
Emerald Group Publishing Limited.
Bridges Fund Management. (2018, August 9). Bridges Impact Foundation Backs Village
Enterprise to deliver first development impact bond for poverty alleviation in Sub-Saharan
Africa. Retrieved April 27, 2023, from https://www.bridgesfundmanagement.com/village-enterprise-closes-investment-for-first-development-impact-bond-for-poverty-alleviation-in-sub-saharan-africa/
British Asian Trust. (n.d.). Skill impact bond. Retrieved April 27, 2023,
https://www.britishasiantrust.org/our-work/social-finance/skill-impact-bond/
Broccardo, E., Mazzuca, M., & Frigotto, M. L. (2020). ‘Social impact bonds: The evolution of
research and a review of the academic literature’. Corporate Social-Responsibility and
Environmental Management, 27(3), 1316–1332.
Cabinet Office. (2011). Growing the Social Investment Market: A vision and strategy. London:
Cabinet Office.
Carter, E., FitzGerald, C., Dixon, R., Economy, C., Hameed, T., & Airoldi, M. (2018). Building
the tools for public services to secure better outcomes: Collaboration, Prevention,
Innovation. Government Outcomes Lab, University of Oxford, Blavatnik School of
Government.
Chan, C. H., Chui, C. H.-K., & Chandra, Y. (2021). ‘The role of social innovation policy in
social service sector reform: Evidence from Hong Kong’. Journal of Social Policy, 1–19.
Chandra, Y., Shang, L., & Roy, M. J. (2021). ‘Understanding Healthcare Social Enterprises: A
New Public Governance Perspective’. Journal of Social Policy, 1-22.
Chesbrough, H. W. (2006), Open Innovation: The new imperative for creating and profiting
from technology. Watertown, Mass: Harvard Business Review Press.
Chesbrough, H. W. & Bogers, M. (2014). Explicating Open Innovation: Clarifying an emerging
paradigm for understanding innovation. In New Frontiers in Open Innovation. Oxford
University Press.
Chesbrough, H., & Di Minin, A. (2014). ‘Open social innovation’. New frontiers in open
innovation, 16, 301-315.
Clarke, L., Chalkidou, K., & Nemzoff, C. (2018). ‘Development impact bonds targeting health
outcomes’. CGD Policy Paper, 133.
Collier, D. (2011). ‘Understanding process tracing’. PS: political science & politics, 44(4), 823-
830.
Cox, K., Ronicle, J., Lau, K., & Rizzo, S. (2019). Independent Evaluation of the UK Department
for International Development’s DIBs Pilot Programme. Ecorys.
Crasnow, S. (2017). ‘Process tracing in political science: What's the story?’. Studies in History
and Philosophy of Science Part A, 62, 6-13.
Curley, M. (2016). ‘Twelve principles for open innovation 2.0’. Nature, 533(7603), 314-316.
De Pieri, B., Chiodo, V., & Gerli, F. (2022). ‘Based on outcomes? Challenges and (missed)
opportunities of measuring social outcomes in outcome-based contracting’. International
Public Management Journal, 1-26.
Dieteren, C., Boers, A., Thomas, W., Njoya, O., & Coutinho, R. (2023). ‘A small-scale
Development Impact Bond for hepatitis C diagnosis and treatment in Cameroon: the way to
elimination?’. medRxiv, 2023-03.
Edmiston, D., & Nicholls, A. (2017). ‘Social Impact Bonds: The role of private capital in
outcome-based commissioning’. Journal of Social Policy, 47(1), 57-76.
Elo, S., & Kyngas, H. (2007). ‘The qualitative analysis process’. Journal of Advanced Nursing,
62(1), 107-115.
Erskine, C. (2021). Quality Education India Development Impact Bond: A case study produced
as part of the FCDO DIBs Evaluation. Ecorys. Retrieved April 27, 2023, from
https://golab.bsg.ox.ac.uk/knowledge-bank/resource-library/quality-education-india-report/
Ettelt, S., Mays, N., & Allen, P. (2015). ‘The Multiple Purposes of Policy Piloting and Their
Consequences: Three Examples from National Health and Social Care Policy in
England’. Journal of Social Policy, 44(2), 319–337.
Eveleens, C. (2010). ‘Innovation management; a literature review of innovation process models
and their implications’. Science, 800, 900-916.
Fahrudi, A. N. L. I. (2020). ‘Alleviating poverty through social innovation’. Australasian
Accounting, Business & Finance Journal, 14(1), 71–78.
Farr, M. (2016). ‘Co-production and value co-creation in outcome-based contracting in public
services’. Public Management Review, 18(5), 654-672.
Felin, T., & Zenger, T. R. (2014). ‘Closed or open innovation? Problem solving and the
governance choice’. Research policy, 43(5), 914-925.
FitzGerald, C., Fraser, A., & Kimmitt, J. (2020). ‘Tackling the Big Questions in Social Impact
Bond Research through Comparative Approaches’. Journal of Comparative Policy Analysis,
22(2), 85–99.
FMO. (2019, November 15). FMO supports the first Palestinian Employment Development
Impact Bond (DIB). Retrieved April 27, 2023, from https://www.fmo.nl/news-detail/4f39ef8e-d43f-44c6-9d8f-47c0a8fb4ca4/fmo-supports-the-first-palestinian-employment-development-impact-bond-(dib)
Foster-Fishman, P. G., Nowell, B., & Yang, H. (2007). ‘Putting the system back into systems
change: A framework for understanding and changing organizational and community
systems’. American journal of community psychology, 39, 197-215.
Fox, C., Gellen, S., Morris, S., Ozan, J., & Crockford, J. (2022a). Impact evaluation with small
cohorts: methodology guidance. Transforming Access and Student Outcomes in Higher
Education (TASO).
Fox, C., & Morris, S. (2021). ‘Evaluating outcome-based payment programmes: challenges for
evidence-based policy’. Journal of Economic Policy Reform, 24(1), 61-77.
Fox, C., Olson, H., & Armitage, H. (2021). Social Impact Bonds 2.0? Findings from a Study of
Four UK SIBs. Retrieved June 14, 2021, from https://socialinnovation.usc.edu/wp-content/uploads/2021/05/Fox-et-al-2020-Finding-from-a-study-of-4-UK-SIBs-1.pdf
Fox, C., Olson, H., Armitage, H., Baines, S., & Painter, G. (2022b). ‘Can a Focus on Co-Created,
Strengths-Based Services Facilitate Early-Stage Innovation within SIBs?’ International
Public Management Journal, 1-17.
Fraser, A., Tan, S., Lagarde, M., & Mays, N. (2018). ‘Narratives of Promise, Narratives of
Caution: A review of the literature on Social Impact Bonds’. Social Policy and
Administration, 52(1), 4-28.
Fraser, A., Tan, S., & Mays, N. (2021). ‘To SIB or not to SIB? A comparative analysis of the
commissioning processes of two proposed health-focused Social Impact Bond financed
interventions in England’. Journal of Economic Policy Reform, 24(1), 28-43.
Friedli, L. (2013). ‘‘What we’ve tried, hasn’t worked’: the politics of assets based public
health’. Critical public health, 23(2), 131-145.
Fry, V. C. (2019). ‘Pay for Success: Diffusion of Policy Innovation for Social and Economic
Stability’. Public Administration Review, 79(5), 784–790.
Gallardo, M. A. J., Kananu, W., Lazicky, C., McManus, J., & Njogu-Ndongwe, F. (2021).
Village Enterprise Development Impact Bond Evaluation Findings. IDinsight.
Gallucci, C., Rosalia, S., & Riccardo, T. (2019). ‘Development impact bonds to overcome
investors-services providers agency problems: Insights from a case study analysis’. African
Journal of Business Management, 13(13), 415–427.
Gallucci, C., Del Giudice, A., & Santulli, R. (2022). ‘How to attract professional investors in
developing countries? An evidence-based structure for development impact bonds’. Finance
Research Letters, 46, 102816.
GMCA. (2021). Greater Manchester Entrenched Rough Sleeping Social Impact Bond: Year 2
and 3 Evaluation. Greater Manchester Combined Authority (GMCA).
Grant, E., Ronicle, J., Crane, D., Smith, R., Fairless, M., & Armitage, J. (2022). Findings from
the third wave of the independent Evaluation of the FCDO Development Impact Bonds Pilot
Programme. Ecorys.
Grewatsch, S., Kennedy, S., & Bansal, P. (2021). ‘Tackling wicked problems in strategic
management with systems thinking’. Strategic Organization, 14761270211038635.
Grittner, A. M. (2013). Results-based Financing: Evidence from performance-based financing in
the health sector. Discussion Paper No. 6/2013. Deutsches Institut für Entwicklungspolitik.
Gruyter, E. de, Petrie, D., Black, N., & Gharghori, P. (2020). ‘Attracting investors for public
health programmes with Social Impact Bonds’. Public Money & Management, 40(3), 225–
236.
Gugiu, P. C., & Rodríguez-Campos, L. (2007). ‘Semi-structured interview protocol for
constructing logic models’. Evaluation and program planning, 30(4), 339-350.
Gustafsson-Wright, E., Gardiner, S., & Putcha, V. (2015). The potential and limitations of
impact bonds: Lessons from the first five years of experience worldwide. Global Economy
and Development at Brookings, Brookings Institution: Washington, DC.
Gustafsson-Wright, E., Boggild-Jones, I., Segell, D., & Durland, J. (2017). Impact Bonds in
Developing Countries: Early Learning from the Field. Global Economy and Development at
Brookings, Brookings Institution: Washington, DC.
Gustafsson-Wright, E., & Osborne, S. (2020a). Are Impact Bonds Reaching the Intended
Populations? Global Economy and Development at Brookings, Brookings Institution:
Washington, DC.
Gustafsson-Wright, E., & Osborne, S. (2020b). Do the Benefits Outweigh the Costs of Impact
Bonds? Global Economy and Development at Brookings, Brookings Institution:
Washington, DC.
Gustafsson-Wright, E., Massey, M., & Osborne, S. (2020a). Are Impact Bonds Delivering
Outcomes and Paying out Returns? Global Economy and Development at Brookings,
Brookings Institution: Washington, DC.
Gustafsson-Wright, E., Osborne, S., & Massey, M. (2020b). Do Impact Bonds Affect the
Ecosystem of Social Services Delivery and Financing? Global Economy and Development
at Brookings, Brookings Institution: Washington, DC.
Gustafsson-Wright, E., Osborne, S., & Crane, E. (2021). How have impact bond-funded projects
in low-and middle-income countries fared in COVID-19?. Working Paper No. 164. Global
Economy and Development at Brookings, Brookings Institution: Washington, DC.
Gustafsson-Wright, E., Osborne, S., & Shankar, A. (2022). From Evidence to Scale: Lessons
Learned from the Quality Education India Development Impact Bond. Center for Universal
Education, Brookings Institution: Washington, DC.
Halkos, G., & Gkampoura, E. C. (2021). ‘Where do we stand on the 17 Sustainable Development
Goals? An overview on progress’. Economic Analysis and Policy, 70, 94-122.
Hamadeh, N., Van Rompaey, C., Metreau, E., & Eapen, S. G. (2022, July 1). New World Bank
country classifications by income level: 2022-2023. World Bank Blogs. Retrieved April 27,
2023, from https://blogs.worldbank.org/opendata/new-world-bank-country-classifications-income-level-2022-2023
Hammond, J., Bailey, S., Gore, O., Checkland, K., Darley, S., Mcdonald, R., & Blakeman, T.
(2021). ‘The Problem of Success and Failure in Public-private Innovation
Partnerships’. Journal of Social Policy, 1–21.
Harrison, R., Blickem, C., Lamb, J., Kirk, S., & Vassilev, I. (2019). ‘Asset-based community
development: narratives, practice, and conditions of possibility—a qualitative study with
community practitioners’. SAGE Open, 9(1), 2158244018823081.
Haynes, A., Garvey, K., Davidson, S., & Milat, A. (2020). ‘What can policy-makers get out of
systems thinking? Policy partners’ experiences of a systems-focused research collaboration
in preventive health’. International Journal of Health Policy and Management, 9(2), 65.
Heinrich, C. J., & Kabourek, S. E. (2019). ‘Pay‐for‐Success Development in the United States:
Feasible or Failing to Launch?’. Public Administration Review, 79(6), 867–879.
Henriques, I., & Beckman, C. M. (2022). ‘Researching social innovation: how the unit of
analysis informs the questions we ask’. Rutgers Business Review, 7(2), 153-165.
Hera. (2022). A Development Impact Bond to Finance the In Their Hands Project: Third Party
Monitoring Final Report. UK Foreign, Commonwealth & Development Office. Retrieved
April 27, 2023, https://iati.fcdo.gov.uk/iati_documents/D0001335.pdf
Hevenstone, D., Fraser, A., Hobi, L., & Geuke, G. (2023). ‘Why is impact measurement
abandoned in practice? Evidence use in evaluation and contracting for five European Social
Impact Bonds’. Evaluation, 29(1), 91-109.
Hood, C. (1991). ‘A Public Management for all Seasons?’. Public administration, 69(1), 3-19.
iDE. (2019, November 18). Press release: World’s first $10 million sanitation development
impact bond launches - 18 Nov 2019. Retrieved April 27, 2023,
https://www.ideglobal.org/press/cambodia-rural-sanitation-dib
Indigo. (2022a, February 22). Cameroon Cataract Bond. Retrieved April 27, 2023, from
https://golab.bsg.ox.ac.uk/knowledge-bank/case-studies/cameroon-cataract-bond/
Indigo. (2022b, September 4). In Their Hands. Retrieved April 27, 2023, from
https://golab.bsg.ox.ac.uk/knowledge-bank/case-studies/in-their-hands/
Indigo. (2023). Impact Bond Dataset. Retrieved April 27, 2023, from
https://golab.bsg.ox.ac.uk/knowledge-bank/indigo/impact-bond-dataset-v2/
International Committee of the Red Cross (ICRC). (2022, June 28). First Humanitarian Impact
Bond successfully brings physical rehabilitation services to conflict-affected communities.
Retrieved April 27, 2023, https://www.icrc.org/en/document/humanitarian-impact-bond-brings-physical-rehabilitation-services
Jackson, E. T. (2013). ‘Interrogating the theory of change: evaluating impact investing where it
matters most’. Journal of Sustainable Finance & Investment, 3(2), 95–110.
Johal, A., & Ng, G. (2022). Outcomes for all: Ten Years of Social Outcomes Contracts. Big
Society Capital.
Joy, M., & Shields, J. (2013). ‘Social impact bonds: The next phase of third sector
marketization?’. Canadian journal of nonprofit and social economy research, 4(2).
Khan, T., Abimbola, S., Kyobutungi, C., & Pai, M. (2022). ‘How we classify countries and
people—and why it matters’. BMJ global health, 7(6), e009704.
Kitzmuller, L., McManus, J., Shah, N. B., & Starla, K. (2018). Educate Girls Development
Impact Bond: Final Evaluation Report. IDinsight. Retrieved April 27, 2023,
https://golab.bsg.ox.ac.uk/documents/ID_Insight_2018_Educate_Girls_Development_Impact_Bond_-_Final_Evaluation_Report.pdf
Klingebiel, S. (2012). Results-Based Aid (RBA): New aid approaches, limitations and the
application to promote good governance. Discussion Paper No. 14/2012. Deutsches Institut
für Entwicklungspolitik.
Lagarde, M., Wright, M., Nossiter, J., & Mays, N. (2013). Challenges of Payment-for
Performance in Health Care and Other Public Services – Design, Implementation and
Evaluation. London: PIRU Publications.
Lamont, M., & Swidler, A. (2014). ‘Methodological Pluralism and the Possibilities and Limits of
Interviewing’. Qualitative Sociology, 37(2), 153–171.
Lau, K., Ronicle, J., Rizzo, S., Agusti Strid, A., & Silver, D. (2021). Findings from the second
research wave of the independent Evaluation of the FCDO Development Impact Bonds Pilot
Programme. Ecorys.
Leech, B. (2002). ‘Asking Questions: Techniques for Semistructured Interviews’. Political
Science and Politics, 35(4), 665–668.
Loraque, J. (2018). ‘Development Impact Bonds: Bringing innovation to education development
financing and delivery’. Childhood Education, 94(4), 64–68.
Lynn, J., Stachowiak, S., & Beyers, J. (2022). How to do Process Tracing: A Method for Testing
“How Change Happened” in Complex and Dynamic Settings. Retrieved April 27, 2023,
from https://www.orsimpact.com/directory/how-to-do-process-tracing.htm
Mahoney, J. (2012). ‘The logic of process tracing tests in the social sciences’. Sociological
Methods & Research, 41(4), 570-597.
Maier, F., Barbetta, G. P., & Godina, F. (2018). ‘Paradoxes of social impact bonds’. Social
Policy & Administration, 52(7), 1332-1353.
Mansoor, Z., & Williams, M. J. (2023). ‘Systems approaches to public service delivery: methods
and frameworks’. Journal of Public Policy, 1-26.
Marques, P., Morgan, K., & Richardson, R. (2018). ‘Social Innovation in Question: The
theoretical and practical implications of a contested concept’. Environment and Planning C:
Politics and Space, 36(3), 496-512.
Martin, M. (2015). ‘Building impact businesses through hybrid financing’. Entrepreneurship
Research Journal, 5(2), 109–126.
Mathie, A., & Cunningham, G. (2003). ‘From clients to citizens: Asset-based community
development as a strategy for community-driven development’. Development in
practice, 13(5), 474-486.
Millner, R., & Meyer, M. (2022). ‘Collaborative governance in Social Impact Bonds: aligning
interests within divergent accountabilities?’. Public Management Review, 24(5), 729-751.
Mishra, A. K., & Dash, A. K. (2022). ‘Development impact bonds in developing countries: an
emerging innovation for achieving social outcomes’. Journal of Social and Economic
Development, 1-27.
Montgomery, T. (2016). ‘Are social innovation paradigms incommensurable?’. Voluntas:
International Journal of Voluntary and Nonprofit Organizations, 27(4), 1979-2000.
Morcol, G. (2005). ‘A new systems thinking: implications of the sciences of complexity for
public policy and administration’. Public Administration Quarterly, 297-320.
Moulaert, F., Martinelli, F., Swyngedouw, E., & Gonzalez, S. (2005). ‘Towards Alternative
Model(s) of Local Innovation’. Urban Studies (Edinburgh, Scotland), 42(11), 1969–1990.
Mulgan, G. (2006). ‘The Process of Social Innovation’. Innovations: Technology, Governance,
Globalization, 1(2), 145-162.
Mulgan, G. (2013). Joined-up innovation: What is systemic innovation and how can it be done
effectively. Systems Innovation Discussion Paper. Nesta.
Mulgan, G., Reeder, N., Aylott, M., & Bo’sher, L. (2011). Social Impact Investment: The
challenge and opportunity of Social Impact Bonds. London: The Young Foundation.
Muñoz, P., & Kimmitt, J. (2019). ‘A diagnostic framework for social impact bonds in emerging
economies’. Journal of Business Venturing Insights, 12, e00141–.
Murray, P., & Ma, S. (2015). ‘The Promise of Lean Experimentation’. Stanford Social
Innovations Review.
Murray, R., Caulier-Grice, J., & Mulgan, G. (2010). The Open Book of Social Innovation.
London: National Endowment for Science, Technology and the Art.
Nel, H. (2018). ‘A comparison between the asset-oriented and needs-based community
development approaches in terms of systems changes’. Practice, 30(1), 33-52.
Nicholls, A. & Emerson, J. (2015). Social Finance: Capitalizing Social Impact. In Social
Finance (First edition.). Oxford: Oxford University Press, pp. 1-46.
OECD.Stat. (n.d.). PPPs and Exchange Rates. Organisation for Economic Co-Operation and
Development (OECD). https://stats.oecd.org/index.aspx?r=989045# [accessed 23/02/2019].
Office of the Deputy Prime Minister (ODPM). (2005). A Systematic Approach to Service
Improvement Evaluating Systems Thinking in Housing. London: ODPM publications.
Olson, H., Painter, G., Albertson, K., Fox, C., & O’Leary, C. (2022). ‘Are Social Impact Bonds
an innovation in finance or do they help finance social innovation?’. Journal of Social
Policy, 1-25.
O'Neil, S., Vohra, D., Spitzer, M., Kalyanwala, S., & Rotz, D. (2021). Maternal Health Care
Quality Improvement in Rajasthan, India: A Series of Insights from a Development Impact
Bond Verification Agent. Mathematica.
Ormiston, J., Moran, M., Castellas, E. I., & Tomkinson, E. (2020). ‘Everybody wins? A
discourse analysis of competing stakeholder expectations in Social Impact Bonds’. Public
Money & Management, 40(3), 237–246.
Oroxom, R., Glassman, A., & McDonald, L. (2018). Structuring and funding development
impact bonds for health: nine lessons from Cameroon and beyond. Washington, DC: Center
for Global Development.
Osborne, S. P. (2006). ‘The New Public Governance?’. Public Management Review, 8(3), 377-
387.
Painter, G., Albertson, K., Fox, C., & O’Leary, C. (2018). ‘Social Impact Bonds: More than one
approach’. Stanford Social Innovation Review.
Pearson, M. (2011). Results based aid and results based financing: What are they? Have they
delivered results. HLSP Institute.
Pearson, M., Johnson, M., & Ellison, R. (2010). Review of major results based aid (RBA) and
results based financing (RBF) schemes: Final report. DfID Human Development Resource
Centre.
Phills, J. A. J., Deiglmeier, K., & Miller, D. T. (2008). ‘Rediscovering Social Innovation’.
Stanford Social Innovation Review.
Pidd, H. (2017, November 21). Hundreds of rough sleepers in Manchester to be offered homes.
The Guardian. Retrieved April 27, 2023, from
https://www.theguardian.com/society/2017/nov/21/manchester-rough-sleepers-to-be-offered-homes-in-investor-backed-plan
Pidd, H. (2018, August 20). Manchester has twice as many rough sleepers than official data
suggests. The Guardian. Retrieved April 27, 2023, from
https://www.theguardian.com/society/2018/aug/20/manchester-has-twice-as-many-rough-sleepers-than-official-data-suggests
Pidd, H. (2021, January 10). Off the streets: how Manchester found homes for hundreds of rough
sleepers. The Guardian. Retrieved April 27, 2023, from
https://www.theguardian.com/society/2021/jan/10/how-manchester-found-homes-hundreds-rough-sleepers-home-partnership
Porter, M. & Kramer, M. R. (2011). ‘Creating shared value’. Harvard Business Review, 89(1/2),
62-77.
Preskill, H., & Beer, T. (2012). Evaluating social innovation. Center for Evaluation Innovation.
Prieto Martín, P., Apgar, M., & Hernandez, K. (2020). Adaptive management in SDC:
Challenges and opportunities. Swiss Agency for Development Cooperation.
Punton, M. & Welle, K. (2015). Applying process tracing in five steps. CDI Practice Paper 10
Annex. Institute of Development Studies.
Quality Education India DIB. (2022, September 29). World’s largest education Development
Impact Bond results released: students learn 2.5 times more than those in other schools
despite COVID-19. Retrieved April 27, 2023, https://qualityeducationindiadib.com/2318-2/
Ravi, S., Gustafsson-Wright, E., Sharma, P., & Boggild-Jones, I. (2019). The promise of impact
investing in India. Brookings Institution India Center, Brookings India: New Delhi.
Rizzello, A., & Kabli, A. (2020). ‘Sustainable Financial Partnerships for the SDGs: The Case of
Social Impact Bonds’. Sustainability (Basel, Switzerland), 12(13), 5362–.
Ronicle, J., Stanworth, N., & Wooldridge (2022). Commissioning better outcomes evaluation:
3rd Update report. Ecorys.
Ronicle, J., Stanworth, N., Hickman, E., & Fox, T. (2014). Social Impact Bonds: The state of
play. London: Big Lottery Fund.
Rosen, J., & Painter, G. (2019). ‘From Citizen Control to Co-Production: Moving beyond a
linear conception of citizen participation’. Journal of the American Planning Association,
85(3), 335-347.
Sabato, S., Vanhercke, B., & Verschraegen, G. (2017). ‘Connecting Entrepreneurship with
Policy Experimentation? The EU framework for social innovation’. Innovation: The
European Journal of Social Science Research, 30(2), 147-167.
Savell, L. (2022). Social outcomes contracts & system strengthening. Social Finance.
Savell, L. & Eddleston, C. (2021). Cameroon Kangaroo Mother Care Development Impact Bond
2018- 2021: End of programme report. Social Finance. Retrieved April 27, 2023,
https://www.socialfinance.org.uk/sites/default/files/publications/cameroon_kmc_development_impact_bond_report_-_en.pdf
Savell, L., & Heady, L. (2016). Balancing Evidence and Risk: Evaluating Impact Bonds. Social
Finance.
Sightsavers. (2018, January). Experts praise Sightsavers’ innovative funding in Cameroon.
Retrieved April 27, 2023, https://www.sightsavers.org/news/2018/01/experts-praise-funding-cameroon
Sinclair, S., McHugh, N., & Roy, M. J. (2021). ‘Social innovation, financialisation and
commodification: A critique of social impact bonds’. Journal of economic policy
reform, 24(1), 11-27.
Smeets, D. J. A. (2017). ‘Collaborative learning processes in social impact bonds: A case study
from the Netherlands’. Journal of Social Entrepreneurship, 8(1), 67-87.
Social Finance. (2009). Social Impact Bonds: Rethinking finance for social outcomes. Social
Finance.
Social Finance. (2018). Impact Bond Global Database. Retrieved January 3, 2019, from
https://sibdatabase.socialfinance.org.uk/
Stead, S. M. (2019). ‘Using systems thinking and open innovation to strengthen aquaculture
policy for the United Nations Sustainable Development Goals’. Journal of fish
biology, 94(6), 837-844.
Stemler, S. (2000). ‘An overview of content analysis’. Practical Assessment, Research, and
Evaluation, 7(1), 17.
Stern, E., Stame, N., Mayne, J., Forss, K., Davies, R., & Befani, B. (2012). Broadening the range
of designs and methods for impact evaluations. Department for International Development.
Stone Family Foundation, & iDE. (n.d.). The Cambodia Rural Sanitation Development Impact
Bond: Two Years of Delivering Results through Innovative Finance. Retrieved April 27,
2023, from https://www.thesff.com/wp-content/uploads/2022/05/Cambodia-Rural-Sanitation-Development-Impact-Bond-Year-2-Report.pdf
Strid, A. A. (2021). Cameroon Cataract Bond: A case study produced as part of the Cameroon
Cataract Bond Evaluation. The Government Outcomes Lab. Retrieved April 27, 2023,
from https://golab.bsg.ox.ac.uk/knowledge-bank/resource-library/cameroon-cataract-bond-eval/
Svensson, K., Szijarto, B., Milley, P., & Cousins, J. B. (2018). Evaluating social innovations:
Implications for evaluation design. American Journal of Evaluation, 39(4), 459-477.
te Lintelo, D. J., Munslow, T., Pittore, K., & Lakshman, R. (2020). ‘Process tracing the policy
impact of ‘indicators’’. The European Journal of Development Research, 32, 1312-1337.
Then, V., & Schmidt, T. (2020). ‘Debate: Comparing the progress of social impact investment in
welfare states-a problem of supply or demand?’. Public Money & Management, 40(3), 192–
194.
Tomkinson, E. (2016). ‘Outcome-based contracting for human services’. Evidence Base: A
journal of evidence reviews in key policy areas, (1), 1-20.
Trampusch, C., & Palier, B. (2016). ‘Between X and Y: how process tracing contributes to
opening the black box of causality’. New political economy, 21(5), 437-454.
Triggerise. (n.d.). Do More with Tiko. Retrieved April 27, 2023,
https://triggerise.org/impactbond/
Tse, A. E., & Warner, M. E. (2020). ‘A policy outcomes comparison: Does SIB market
discipline narrow social rights?’. Journal of Comparative Policy Analysis: Research and
Practice, 22(2), 134-152.
Van Evera. (2016). Guide to Methods for Students of Political Science. (1st ed.). Cornell
University Press.
VanLandingham, G., & Silloway, T. (2016). ‘Bridging the Gap between evidence and policy
makers: A case study of the Pew‐MacArthur results first initiative’. Public Administration
Review, 76(4), 542-546.
Voorberg, W., Bekkers, V., Timeus, K., Tonurist, P., & Tummers, L. (2017). ‘Changing public
service delivery: learning in co-creation’. Policy and Society, 36(2), 178-194.
Waddell, S. (2016). ‘Societal change systems: a framework to address wicked problems’. The
Journal of Applied Behavioral Science, 52(4), 422-449.
Waddock, S., Meszoely, G. M., Waddell, S., & Dentoni, D. (2015). ‘The complexity of wicked
problems in large scale change’. Journal of Organizational Change Management.
Wadeson, A., Monzani, B., & Aston, T. (2020). Process tracing as a practical evaluation
method: Comparative learning from six evaluations. Monitoring and Evaluation News.
Retrieved April 27, 2023, from https://mande.co.uk/wp-content/uploads/2020/03/Process-Tracing-as-a-Practical-Evaluation-Method_23March-Final-1.pdf
Waldner, D. (2012). Process tracing and causal mechanisms. In The oxford handbook of
philosophy of social science.
Warner, M. E. (2013). ‘Private finance for Public Goods: Social Impact Bonds’. Journal of
Economic Policy Reform, 16(4), 303-319.
Westley, F., Antadze, N., Riddell, D. J., Robinson, K., & Geobey, S. (2014). ‘Five
Configurations for Scaling Up Social Innovation: Case Examples of Nonprofit
Organizations From Canada’. The Journal of Applied Behavioral Science, 50(3), 234–260.
White, H., & Phillips, D. (2012). Addressing attribution of cause and effect in small n impact
evaluations: towards an integrated framework. International Initiative for Impact
Evaluation, New Delhi.
Wildman, J. M., Valtorta, N., Moffatt, S., & Hanratty, B. (2019). ‘What works here doesn’t work
there’: The significance of local context for a sustainable and replicable asset‐based
community intervention aimed at promoting social interaction in later life. Health & social
care in the community, 27(4), 1102-1110.
Williams, J. W. (2020). ‘Surveying the SIB economy: Social impact bonds, “local” challenges,
and shifting markets in urban social problems’. Journal of Urban Affairs, 42(6), 907–919.
Wilson, K. (2016). Investing for social impact in developing countries. In Development Co-operation Report 2016: The Sustainable Development Goals as Business Opportunities.
OECD Publishing, Paris.
Wooldridge, R., Stanworth, N. & Ronicle, J. (2019). A Study into the Challenges and Benefits of
the Social Impact Bond Commissioning Process in the UK - Final Report. Birmingham:
Ecorys.
Zellner, M., & Campbell, S. D. (2015). ‘Planning for deep-rooted problems: What can we learn
from aligning complex systems and wicked problems?’. Planning Theory & Practice, 16(4),
457-478.
APPENDICES
APPENDIX 1.1: FURTHER BACKGROUND ON PILOT VERSUS SCALED
CLASSIFICATION PROCESS
We initially used content analysis to classify the data from the SIB Database profiles
according to the type of language used to describe each program. We hand-labeled a program as
a “pilot” when the following type of language was used: “pilot,” “test,” “show whether or not
the… method is successful,” “initially support,” “provide additional academic support,” and
“identify the broader impact and benefits of the program.” On the other hand, we labeled a
program as “scaled” when the following kinds of phrases were used: “scale,” “scale-up,”
“expand,” “established model,” “existing suite of programs,” and “as a result of learnings from a
pilot program.”
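As an illustration for readers who wish to reproduce this step computationally, a minimal sketch of such keyword-based labeling is shown below. Our classification was carried out by hand, so the phrase lists and the classify_by_description function are hypothetical simplifications of the descriptors quoted above rather than code used in the analysis.

    # Illustrative sketch only: the actual classification was done manually.
    # The phrase lists paraphrase the descriptors quoted above, and the
    # function name and tie-breaking logic are hypothetical.
    PILOT_PHRASES = [
        "pilot", "test", "show whether or not", "initially support",
        "provide additional academic support",
        "identify the broader impact and benefits",
    ]
    SCALED_PHRASES = [
        "scale", "scale-up", "expand", "established model",
        "existing suite of programs",
        "as a result of learnings from a pilot program",
    ]

    def classify_by_description(description: str) -> str:
        """Label a program profile as 'pilot', 'scaled', or 'unclassified'."""
        text = description.lower()
        is_pilot = any(phrase in text for phrase in PILOT_PHRASES)
        is_scaled = any(phrase in text for phrase in SCALED_PHRASES)
        if is_pilot and not is_scaled:
            return "pilot"
        if is_scaled and not is_pilot:
            return "scaled"
        return "unclassified"  # ambiguous or no matching language

    print(classify_by_description("A pilot to test a new mentoring model"))  # pilot
    print(classify_by_description("An expansion of an established model"))   # scaled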
Utilizing a sample of 27 of these programs, each classified by its description, we next
looked for patterns in target population size, size of capital raised, size of maximum outcome
payments, and program duration. In general, we observed that most of the pilot-by-description
programs had target populations below 300, had both capital raised and maximum outcome
payments below $2 million, and had durations of less than 4 years. Summary statistics for these
variable thresholds for both the UK and US programs are provided in Table 1.3 below.
Table 1.3: Trends in Potential Program Criteria for UK and US Programs
Criterion | UK | US
Target population < 300 | 35.6% (16/45) | 25.0% (5/20)
Capital raised < $2M (> $0) | 69.7% (23/33) | 4.8% (1/21)
Max outcome payments < $2M (> $0) | 21.4% (6/28) | 5.3% (1/19)
Duration < 4 years | 70.5% (31/44) | 14.3% (3/21)
Pilot programs | 31.9% (15/47) | 9.1% (2/22)
Scaled programs | 63.8% (30/47) | 90.9% (20/22)
Non-classified programs | 4.3% (2/47) | 0% (0/22)
Note: Parentheses provide the number of programs that meet each qualification out of the number of
programs for which that value is non-blank and non-zero
However, based on data availability, we ultimately decided to use the target population
and capital raised criteria for our more systematic classification. Overall, this systematic
classification (by size) conformed well with our prior manual classifications (based on
descriptors). However, there were some notable instances in which this was not the case. For
example, there were two scaled-by-description programs which had target populations of only
180 and 200, and there were also two pilot-by-description programs with target populations of
1,300 and 2,250.
More specifically, the scaled-by-description program with a target population of 180 was
the Youth Unemployment program launched in Portugal in 2017 and had only $0.79 million in
capital raised. Thus, despite the fact that it was described as a “scale-up of Portugal’s first Social
Impact Bond,” we doubted whether this really should be referred to as a scaled program, and
thus felt that our coding of the program as pilot (by size) was reasonable. Similarly, the pilot-by-description program with a target population of 2,250 was the Diabetes Prevention program
launched in Israel in 2016 and had raised $5.5 million in capital. Although its Database profile
stated that the program was designed to “test a preventative Diabetes model, and if successful…
extend diabetes prevention measures to many more people,” we questioned our initial manual
label of this program as a pilot, and thus felt that our coding of the program as scaled (by size)
was reasonable. In addition, there was one instance in which a pilot-by-descriptor program could
not be classified by size because of missing data for both target population and capital raised.
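A minimal sketch of this size-based classification is shown below, assuming the thresholds reported in Table 1.3. The way the two criteria are combined when only one is available, and the treatment of zero or missing values, are assumptions made purely for illustration and should not be read as the exact rule applied in the analysis.

    # Illustrative sketch of the size-based (systematic) classification.
    # Thresholds follow Table 1.3; the combination rule and missing-value
    # handling are assumptions, not the exact rule used in the dissertation.
    PILOT_MAX_TARGET_POP = 300          # pilots: target population below 300
    PILOT_MAX_CAPITAL_USD = 2_000_000   # pilots: capital raised below $2 million

    def classify_by_size(target_population=None, capital_raised=None) -> str:
        """Return 'pilot', 'scaled', or 'non-classified' from program size."""
        if target_population is None and capital_raised is None:
            return "non-classified"  # e.g., the program missing both values
        signals = []
        if target_population is not None:
            signals.append(target_population < PILOT_MAX_TARGET_POP)
        if capital_raised is not None and capital_raised > 0:
            signals.append(capital_raised < PILOT_MAX_CAPITAL_USD)
        if not signals:
            return "non-classified"
        return "pilot" if all(signals) else "scaled"

    # The two borderline cases discussed above come out as coded in the text:
    print(classify_by_size(180, 790_000))      # pilot  (Portugal, 2017)
    print(classify_by_size(2_250, 5_500_000))  # scaled (Israel, 2016)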
APPENDIX 1.2: UNIQUE INVESTORS FOR UK AND US SIBS
Table 1.4: Unique UK Investors
Investors # of SIBs
Bridges Fund Management (AKA Bridges Social Impact Bond Fund and Bridges
Ventures) 21
Big Issue Invest 8
Big Society Capital 7
CAF Venturesome 6
Esmée Fairbairn Foundation 6
Barrow Cadbury Trust 5
The Key Fund 5
Care and Wellbeing Fund 3
Impetus-PEF 3
Charities Aid Foundation 2
Department of Health Social Enterprise Investment Fund 2
Orp Foundation 2
Tudor Trust 2
3SC 1
Advance Personnel Management (APM) UK Ltd 1
Age UK 1
Berkshire Community Foundation 1
Big Lottery Fund 1
Bracknell Forest Homes 1
Buckinghamshire County Council 1
Elton John Aids Foundation 1
Friends Provident Foundation 1
Helena Partnerships 1
J Paul Getty Charitable Trust 1
Johansson Family Foundation 1
Knowsley Housing Trust 1
LankellyChase Foundation 1
Liverpool Mutual Homes 1
Montpelier Foundation 1
Nesta Impact Investments 1
Northstar Ventures 1
Nottingham City Council 1
Panaphur Charitable Trust 1
Paul Hamlyn Foundation 1
Prevista 1
Resonance Bristol SITR fund 1
Rockefeller Foundation 1
Sainsbury's Charitable Trust 1
Social and Sustainable Capital (SASC) 1
St. Mungo's Broadway 1
Stratford Development Partnerships 1
Thames Reach 1
The Henry Smith Charity 1
The King Baudouin Foundation 1
The Monument Trust 1
The Social Venture Fund 1
Triodos 1
Wirral Partnership Homes 1
Total number of SIBs 107
Total number of unique investors 48
Average number of SIBs per investor 2.2
Number of unique investors invested in >1 SIB 13
Table 1.5: Unique US Investors
Investors # of SIBs
Reinvestment Fund 6
Goldman Sachs (including Urban Investment Group) 5
Living Cities (including Blended Catalyst Fund) 5
Northern Trust 5
Laura and John Arnold Foundation 4
Nonprofit Finance Fund 4
QBE Insurance Group Limited 3
Ally Bank 2
BNP Paribas 2
Corporation for Supportive Housing (CSH) 2
James Lee Sorenson Family Foundation 2
Sorenson Impact Foundation 2
The J.B. and M.K. Pritzker Family Foundation 2
The Robin Hood Foundation 2
Bloomberg Philanthropies 1
Blue Shield of California Foundation 1
BlueCross BlueShield of South Carolina Foundation 1
Calvert Foundation 1
Combined Jewish Philanthropies' Donor Advised Funds 1
DCF Social Impact Fund 1
Deutsche Bank 1
Doris Duke Charitable Foundation 1
Federal Human Services Administration 1
George Kaiser Family Foundation 1
Google.org 1
Greenville County SC First Steps 1
Maycomb Capital Community Outcomes Fund 1
Medicaid 1
Michigan Health Endowment Fund 1
New Profit 1
Prudential Financial 1
Santander Bank N.A 1
Sisters of Charity Foundation of Cleveland 1
Spectrum Health 1
The Ben and Lucy Ana Walton Fund of the Walton Family Foundation 1
The Boeing Foundation 1
The Boston Foundation 1
The California Endowment 1
The Cleveland Foundation 1
The Colorado Health Foundation 1
The Conrad N. Hilton Foundation 1
The Dakota Foundation 1
The Denver Foundation 1
The Duke Endowment 1
The George Gund Foundation 1
The Health Trust 1
The James Irvine Foundation 1
The Kresge Foundation 1
the Piton Foundation 1
The Rockefeller Foundation 1
The Sobrato Family Foundation 1
The Whitney Museum of American Art 1
United Way of Massachusetts Bay and Merrimack Valley 1
UnitedHealthcare 1
W.K. Kellogg Foundation 1
Total number of SIBs 87
Total number of unique investors 55
Average number of SIBs per investor 1.6
Number of unique investors invested in >1 SIB 14
APPENDIX 2.1: GM HOMES CASE TIMELINE
2016
• MHCLG announces funding (1.8M pounds) for Greater Manchester rough sleeping SIB
2017
• Andy Burnham enters office as mayor, pledges to end rough sleeping by 2020
• GM Homes Partnership submits and wins bid for SIB
• 19 housing providers commit to providing a total of 307 properties for program
• Housing providers meet to establish more supportive approaches for program
2018
• Program launches and receives over 500 referrals (exceeding target of 200)
• 406 individuals start the program
• Establishment of housing provider forum and participant forum
• Launch of Bond Board (private rentals) partnership
• Program receives additional MHCLG SIB funding (0.8M pounds)
• Program submits proposal to create mental health practitioner role
2019
• Launch of Growth Company (employment) partnership
• 2019 Northern Housing Awards grants GM Homes award for Best Initiative for Tackling
Homelessness
• Housing First Pilot begins roll-out
• Greater Manchester Mental Health Trust seconds mental health worker to program (for
last 18 months of program)
• Program holds trauma-informed practice event, develops action plan to embed ‘SIB
Principles’
• Commissioner Sharaf Tariq praises the systemic changes implemented by the program
2020
• Onset of COVID-19 pandemic
• Program begins creating exit strategy to transition clients after end of program
• Investors are repaid maximum outcome cap (2.6M pounds) 8 months ahead of schedule
• Program ends
2021
• Housing providers continue to meet for another year
• GMCA completes program evaluation
APPENDIX 2.2: INTERVIEW PROTOCOLS
Initial First-Time Interview Protocol
1. Role and Responsibilities
• What is your role within your organization? (EX: job title and brief description of role)
• What is your role in relation to the SIB?
2. Basic SIB Architecture
• Can you please provide a brief overview of the SIB?
• What is the target group for the SIB?
o Has this changed over time?
• What are the SIB outcomes?
o Are outcomes being met?
• What are the SIB payment metrics and trigger points for payment?
o How are these working?
• Has funding/support come from a national program such as the Life Chances Fund?
o If so, how significant was this in the development of the SIB?
o Did it change decisions about the design/viability of the SIB?
3. Asset-based Services
• Has asset-based service delivery been incorporated into the design of the SIB?
o If so, what form has this taken?
• What were your motivations for pursuing an asset-based approach?
o Who proposed this approach?
o Were you influenced by peer organizations, program funders, or beneficiaries?
o Did you decide on an asset-based approach before or after deciding to participate
in a SIB?
• What impact has asset-based delivery had on the way that people delivering the service
work?
o How has it changed their training, practice, management, and support?
• What impact has asset-based working had on the way that service users are assessed?
o Are assessments done differently?
o Do assessments ask different kinds of questions?
o Do assessments take longer?
• What impact has asset-based delivery had on service planning with service users?
o Do service plans look different?
o Do they have different kinds of targets or activities?
o Are service users more involved in planning?
o Do they have more ownership of their plan?
o Are plans more flexible?
• What impact has asset-based working had on the way that risks are assessed and
managed?
• How have service users responded to an asset-based way of working?
• What service user characteristics facilitate asset-based working or act as a barrier to it?
(EX: skills, intrinsic values, social capital, family composition, level of education, or
attitudes to risk)
• Are service users and/or their families/carers involved in designing the wider service
(EX: more than just the service they personally receive)?
o If so, how?
• Have you had to incentivize people to participate in service design work?
o If so, how?
• Has your organization changed to accommodate asset-based working?
o Have there been any trade-offs as a result of adopting this new approach?
• Where asset-based working has taken place, how does this fit into the wider system
within which the service is delivered?
• Where other organizations work in asset-based ways, are their models the same/similar?
o If not, how do they differ?
• How do other organizations not using asset-based approaches respond?
o What is their approach to managing risk?
o Where there are conflicts with other organizations, what do they center on?
4. Innovation
• What aspects of the service that you have developed/are delivering do you consider
innovative?
o Why are they innovative?
• What were your motivations for utilizing a SIB approach?
o Who proposed this approach?
o Were you influenced by peer organizations or program funders?
• Was encouraging innovation part of the rationale for setting up a SIB?
o What kinds of innovation were envisaged: financial innovation, technical
innovation, social innovation, etc.?
• Was the SIB based on a pilot or feasibility study?
o If so, what form did this take?
o Was it intended to scale up an existing intervention, for instance by implementing
it in a new location?
• Was the SIB itself intended to be a pilot in order to test a new intervention?
• Are front-line workers aware of the SIB outcomes?
o If so, how does this manifest in management processes and their day-to-day
work?
• Is there any evaluation of the SIB taking place?
o What types of evaluation methods are being used (EX: baseline comparisons,
RCTs, etc.)?
5. Other Commissioning Models
• Where you have been involved in designing, implementing or commissioning similar
services using conventional commissioning models, what are the main differences when
operating within the SIB framework?
Follow-Up Interview Protocol
1. Role and Responsibilities
• What organization do you currently work with and what is your role?
• Have you been involved in any other SIBs or any other work related to the SIB/program
since the contract ended? (EX: training, evaluation, knowledge sharing, advocacy,
follow-on SIBs/programs etc.)
2. Context and Co-Production
• What was the wider context in which the SIB/program was launched? (EX: what problem
was it trying to address?)
• To what extent were service users actively involved in the co-production of the
SIB/program’s design, implementation, and/or evaluation?
• Was there anything specific you were hoping to achieve by incorporating user input into
the SIB/program? (EX: user empowerment, efficiency, effectiveness, cost-savings, etc.)
3. Asset-based Services
• To what extent was pursuing asset-based working an objective of the SIB/program?
• How did asset-based working impact individual user experiences? (EX: more confidence,
more control, etc.)
• In what ways did insights from individual experiences impact wider service design (EX:
more than just the services individuals personally receive)?
• Were there any ways in which asset-based working helped address the root causes of
issues? (EX: at the individual, organizational, or system level)
• Were there any ways in which asset-based working shifted control or power towards
individuals/participants? (EX: within specific organizations or the system more broadly)
• What was the relationship between the SIB/program and other actors in the system who
were using deficit-based approaches?
• What helped facilitate asset-based working and what acted as a barrier to it? (EX:
individual/staff skills, SIB design, etc.)
• Were any measures put in place to help sustain these practices beyond the duration of the
SIB contract/program? (EX: sharing learnings, challenging deficit-based practices, etc.)
• To what extent have these measures been successful? (EX: examples)
4. Innovation
• To what extent was pursuing innovation or testing new approaches an objective of the
SIB/program?
• What aspects of the service that you developed or delivered do you consider particularly
innovative?
• What seemed most important to enabling an innovative environment and what acted as a
barrier to it? (EX: flexibility, risk aversion, etc.)
• Were any measures put in place to help promote new approaches beyond the duration of
the SIB contract/program? (EX: scaling successful approaches, broader policy change,
etc.)
• To what extent have these measures been successful? (EX: examples)
• Did any other organizations within the system adopt these innovations?
• What role did performance management, monitoring, and/or evaluation play in testing or
scaling innovations?
• How have evaluations of the SIB/program been used, if at all? (EX: shared with SIB
stakeholders, shared with other government entities, shared with other social
organizations, etc.?)
• How might SIB/program evaluations be even more useful in the future?
5. Outcomes and Systems Change
• Was encouraging systems change part of the rationale for setting up the SIB/program?
• If so, what kinds of systems-level changes were envisaged? (EX: diffusion of new
practices, policy change, etc.)
• To what extent has progress been made in these areas?
• What might better enable systems change efforts to continue beyond the SIB contract in
the future? (EX: funding arrangements, partnerships, SIB design, etc.)
Revised First-Time Interview Protocol
1. Roles and Responsibilities
• What organization do you currently work with and what is your role?
• What was your role in relation to the SIB/program? (EX: job title and organization)
• Have you been involved in any other SIBs or any other work related to the program since
the contract ended? (EX: training, evaluation, knowledge sharing, advocacy, follow-on
SIBs/programs etc.)
2. Context and Co-Production
• What was the wider context in which the SIB/program was launched? (EX: what problem
was it trying to address?)
• To what extent were service users actively involved in the co-production of the
SIB/program’s design, implementation, and/or evaluation?
• Was there anything specific you were hoping to achieve by incorporating user input into
the SIB/program? (EX: user empowerment, efficiency, effectiveness, cost-savings, etc.)
3. Asset-based Services
• In what ways did the SIB/program draw on users’ strengths and aspirations?
• What about community support and resources?
• How did asset-based working impact individual user experiences? (EX: incorporating
users’ strengths and aspirations)
• In what ways did insights from individual experiences impact wider service design (EX:
more than just the services individuals personally receive)?
• Were there any ways in which asset-based working helped address the root causes of
issues? (EX: at the individual, organizational, or system level)
• Were there any ways in which asset-based working shifted control or power towards
individuals/participants? (EX: within specific organizations or the system more broadly)
• What helped facilitate asset-based working and what acted as a barrier to it? (EX:
individual/staff skills, values, networks, etc.)
• What asset-based practices did you find particularly effective? (EX: examples)
• How did other organizations within the system respond to asset-based working? (EX:
open to learning, resistant, etc.)
• What was the relationship between the SIB/program and other actors in the system who
were using deficit-based approaches?
• Were any measures put in place to help sustain asset-based working beyond the duration
of the SIB contract/program? (EX: raising awareness of new practices, challenging
deficit-based practices, etc.)
4. Innovation
• To what extent was innovation or testing new approaches part of the SIB/program? (EX:
demonstrating a proof of concept, scaling existing approaches in a new location)
• What aspects of the service that you developed or delivered do you consider particularly
innovative?
• In what ways did innovating impact individual user experiences?
• What about broader service design?
• Were any measures put in place to help promote these new approaches beyond the
duration of the SIB contract/program? (EX: raising awareness, etc.)
• To what extent have these measures been successful? (EX: examples)
• Did any other organizations within the system adopt these innovations?
• What seemed most important to enabling an innovative environment and what acted as a
barrier to it? (EX: flexibility, risk aversion, etc.)
• What role did performance management, monitoring, and/or evaluation play in testing or
scaling innovations?
• How have evaluations of the SIB/program been used, if at all? (EX: shared with SIB
stakeholders, shared with other government entities, shared with other social
organizations, etc.?)
• How might SIB/program evaluations be even more useful in the future?
5. Outcomes and Systems Change
• Was encouraging systems change part of the rationale for setting up the SIB/program?
• If so, what kinds of systems-level changes were envisaged? (EX: diffusion of new
practices, policy change, etc.)
• To what extent has progress been made in these areas?
• What might better enable systems change efforts to continue beyond the SIB contract in
the future? (EX: funding arrangements, partnerships, SIB design, etc.)
6. Social Impact Bonds
• What were your motivations for utilizing a SIB approach?
• Where you have been involved in designing, implementing or commissioning similar
services using conventional commissioning models, what are the main differences when
operating within the SIB framework?
APPENDIX 2.3: INTERVIEW SUMMARY TABLE
Table 2.6: Summary of Interviews
Stakeholder Type | First-Time Interviews (2020) | First-Time Interviews (2022) | Follow-Up Interviews (2022) | Total Interviewees
Commissioner | - | 3 Individual | - | 3
Investor/Housing Provider | 1 Individual | 1 Individual | 1 Individual | 2
Investor/Housing Provider | 1 Individual | 1 Individual | 1 Individual | 2
Investor/Manager | 1 Individual | 1 Individual | 1 Individual | 2
Delivery Partner | - | 2 Individual | - | 2
Delivery Partner | 1 Group (2) | - | - | 2
Delivery Partner/Housing Provider | - | 1 Individual | - | 1
Housing Provider | 1 Individual | - | 1 Individual | 1
Advisor | 1 Group (5) | - | 1 Individual | 5
Advisor | - | 1 Individual | - | 1
Totals | 6 (11) | 10 | 5 | 21
Note: Parentheses indicate the number of individuals in group interviews; the six 2020 interviews covered 11 individuals in total.
APPENDIX 2.4: SUMMARY OF PROCESS TRACING TESTS AND EVIDENCE
Table 2.7: Summary of Asset-Based Working Hypothesis Tests and Evidence
(Each process tracing test is classified by whether passing it is necessary and/or sufficient for causal inference.)

Straw-in-the-Wind (Relevance): neither necessary nor sufficient for causal inference
Hypothesis: The program was designed to use an asset-based way of working
Evidence:
-Trusting relationships
-Focus on interests/aspirations
-Personalization funds
-Staff with lived experience
-Trauma-informed

Hoop (Correlation): necessary but not sufficient for causal inference
Hypothesis: Asset-based working empowered service users/providers and changed organizational culture/mindsets
Evidence:
-Choice over accommodations (EX: managed moves)
-Voluntary, cases never closed
-Co-designed, individualized services
-Meaningful way of working

Smoking Gun (Contribution): sufficient but not necessary for causal inference
Hypothesis: Program staff advocated asset-based working; system actors began to work in asset-based ways
Evidence:
-Considerable time explaining approach
-Trauma-informed trainings for housing provider staff
-‘SIB principles’ for continuing work
-Changes to housing provider staffing

Doubly-Decisive (Causation): both necessary and sufficient for causal inference
Hypothesis: Systems actors changed their organizational culture/mindsets because the program promoted asset-based working
Evidence:
-Part of broader push towards asset-based, trauma-informed working
-But housing providers had not taken these approaches prior to GM Homes
Table 2.8: Summary of Innovation Hypothesis Tests and Evidence
(Each process tracing test is classified by whether passing it is necessary and/or sufficient for causal inference.)

Straw-in-the-Wind (Relevance): neither necessary nor sufficient for causal inference
Hypothesis: The program was designed to enable innovation in service delivery.
Evidence:
-Mixed views on being ‘innovative’
-More agreement around flexibility to ‘test and learn’
-Opportunity to compare different delivery models

Hoop (Correlation): necessary but not sufficient for causal inference
Hypothesis: The program tested new approaches; the program collected evidence through monitoring and evaluation.
Evidence:
-Mixed views on ‘new approaches’
-Some viewed as ‘common sense’
-Others pointed to specific pilots
-Examples of monitoring, evaluation, and performance management
-But evaluation done at late stage

Smoking Gun (Contribution): sufficient but not necessary for causal inference
Hypothesis: Program partners shared evidence and learnings on effective new approaches; system actors adopted/scaled new approaches.
Evidence:
-Evaluation demonstrates evidence on what works to encourage scaling
-Learning events and conferences
-Informal/internal learning transfers
-ETE learnings influenced KBOP design
-Dual diagnosis scaled for Housing First

Doubly-Decisive (Causation): both necessary and sufficient for causal inference
Hypothesis: Systems actors adopted/scaled new approaches because the program partners demonstrated evidence on their effectiveness.
Evidence:
-Many GM Homes partners directly applied learnings to Housing First
-Less evidence that evaluation of innovative approaches led to scaling
-More evidence that learnings from adaptive management led to scaling
Table 2.9: Summary of Collaboration Hypothesis Tests and Evidence
(Each process tracing test is classified by whether passing it is necessary and/or sufficient for causal inference.)

Straw-in-the-Wind (Relevance): neither necessary nor sufficient for causal inference
Hypothesis: The program was designed to encourage collaborative working.
Evidence:
-SOC model designed to involve investors, delivery partners, and commissioners
-Housing providers combined two competing bids

Hoop (Correlation): necessary but not sufficient for causal inference
Hypothesis: The program brought together a diverse range of system actors; established new relationships.
Evidence:
-3 social investors, 3 delivery partners, 1 commissioner (10 local authorities), 20+ housing providers
-New relationships among SOC partners
-New partnerships with other services (EX: mental health, criminal justice)

Smoking Gun (Contribution): sufficient but not necessary for causal inference
Hypothesis: Program partners coordinated the joining-up of services; systems actors shared knowledge and resources.
Evidence:
-Diversion from Custody pilot with criminal justice stakeholders
-Dual diagnosis and trauma-informed trainings with Mental Health Trust
-Coordination on transition plan

Doubly-Decisive (Causation): both necessary and sufficient for causal inference
Hypothesis: Systems actors created new relationships to join-up services (share knowledge and resources) because of the program’s collaborative working.
Evidence:
-First partnership at this scale across GM
-Housing provider collaborative working continued into Housing First
-However, relationships driving Diversion from Custody ended with the program
APPENDIX 3.1: DIB INTERVIEW PROTOCOL
• What organization do you currently work with and what is your role?
• What is/was your role in relation to the DIB? (EX: job title and organization)
• What was the wider context in which the DIB was launched? (EX: what problem was it
trying to address?)
• What were your motivations for utilizing a DIB approach?
• What aspects of the DIB model did you find particularly compelling?
• What did you hope to learn and/or achieve through your involvement in the DIB?
• At which stage were you approached as a potential partner (and by whom)?
o Or, if you were the originator, how did you choose the additional partners?
• Had you worked with any of the DIB partner organizations before?
• To what extent (and in what ways) does/did the government contribute to the DIB?
• Were there any deal-breakers that would have prevented you from participating in this
DIB? (Please explain.)
• How did you typically [finance/fund] social interventions previously, if at all?
• Would you still have [invested in/funded/delivered] this intervention if not through an
impact bond? (Please explain.)
• Did you invest in the program at all? If so, what were the investment terms?
• What were initial expectations around an interest/return rate?
o What was the final rate?
• What were initial expectations around investment guarantees or insurance?
o What were the final provisions, if any?
• How is/was funding allocated to repay investors at the end of the contract if outcomes
were met?
• Is/was there any flexibility in the contract to increase investments/funding if necessary?
(If so, please explain.)
• Were there any institutional barriers you had to overcome to participate in the DIB? (EX:
political or budgetary)
• To what extent was the DIB intended to test novel approaches to service delivery and/or
scale proven approaches to new locations/populations?
o [Service Provider: How often and in which ways do/did you incorporate user
voice into the program?]
• What were initial expectations around performance management, technical assistance,
and/or capacity building support?
o Is/was such support provided? (Please explain.)
o [Service Providers: Did you have any prior experience with impact evaluation
and/or performance management?]
• What role does/did monitoring and evaluation play in the DIB?
o Will/did it focus on verifying outputs/outcomes, calculating cost-savings,
assessing longer-term impacts, etc.?
o Will/have results be(en) made publicly available?
• Are/were you hoping to achieve any longer-term impacts through the DIB? (Please
explain.)
• To what extent have/did the DIB partners strategize(d) on next steps for after the contract
ends/ended? Can you provide any examples?
o If the DIB has already concluded, what ultimately happened?
o [Investors: If outcomes are met and you earn a profit from your investment, do
you plan to recycle any of these funds into additional social programs?]
• What (has) surprised you most about your experience? (EX: positives and/or negatives)
• Have you participated in any other impact bonds? (EX: before, during, and/or after)
• Would you participate in another impact bond in the future? (Please explain.)
• Overall, what do you view as the major tradeoffs between using impact bonds versus
more traditional funding models?
• What impacts did COVID have on the DIB?
o Did you find that this financial model offered more flexibility and adaptability
than other models?
APPENDIX 3.2: EXAMPLES OF UNIQUE DIB CONTEXTS
Table 3.8: Quotes on DIBs Being Applied in New Contexts
DIB, quote, and source:
- International Committee of the Red Cross: “first humanitarian impact bond” (ICRC 2022)
- Quality Education in India: “largest education DIB in the world” (Quality Education India DIB 2022)
- Cameroon Cataract: “first to have a development finance institution as an investor” (Sightsavers 2018)
- Cambodia Rural Sanitation: “world’s first Development Impact Bond for sanitation” (iDE 2019)
- Finance for Jobs: “first time to be applied by the World Bank in a fragile environment” (FMO 2019)
- In Their Hands: “world’s first Development Impact Bond in Sexual and Reproductive Health for adolescents” (Triggerise n.d.)
- Skill India: “first-of-its-kind and the largest Impact Bond for skilling” (British Asian Trust n.d.)
APPENDIX 3.3: POSSIBLE DIB THEORY OF CHANGE
Figure 3.2: Expanded Logic Model
Contextual Factors -> Inputs -> Outputs -> Outcomes -> Impacts

1. Resource Gaps -> Incentives to Collaborate (Risk Sharing) -> Collaboration and Capacity Building -> (Re)payment and Capacity -> Social Finance Market Growth
2. Service and Data Gaps -> Outcomes Focus (Tied to Payments) -> Performance Measurement and Management -> Evidence and (Scaled) Social Impact -> Learnings and Sustainable Impacts
3. Development Challenges -> Outcomes-Focused Risk Capital -> Collaborative and Adaptive Data-Driven Delivery -> Evidence (of Blended Returns) and Capacity -> Cultural Shift Towards Outcomes-Based Working
Abstract
This dissertation offers an in-depth look at Social Impact Bonds (SIBs), a form of outcomes-based contracting (OBC), from a variety of perspectives. First established in the UK in 2010, SIBs have generated considerable interest among policymakers and scholars for their potential to harness greater amounts of capital for social impact and to foster innovation through a focus on outcomes. While the body of literature on SIBs is growing quickly, it remains a nascent field of study with many opportunities for development. This dissertation aims to address some of the gaps in that literature, generating both more academic knowledge about these innovative tools and meaningful insights for practice. To do so, the dissertation is divided into three separate articles, which offer a more nuanced view of SIBs as they relate to social innovation, systems change, and international development.
The first article, “Are Social Impact Bonds an Innovation in Finance or Do They Help Finance Social Innovation?,” evaluates SIBs using social innovation theory. As the literature has tended to situate SIBs within public management theories, this article offers novel evidence by analyzing SIBs through the theory of social innovation. The study’s two main research questions are: 1) Are SIBs social finance tools being used to bring in new sources of capital to fund service delivery? And 2) Are SIBs being used to finance specific stages in other social innovation processes? The research analyzes both qualitative and quantitative data on SIBs in the UK and US – leaders in SIB implementation.
The second article, “Can Social Outcomes Contracts Contribute to Systems Change? Exploring Asset-Based Working, Innovation, and Collaboration,” investigates the ability of Social Outcomes Contracts (SOCs), a broader term encompassing SIBs, to impact their surrounding service ecosystems. Although many practitioners and academics have lauded the potential benefits of SIBs for service delivery, including catalyzing more transformative change, others have criticized SIBs for focusing on individual rather than systemic causes of social challenges. To address this lack of evidence, the study evaluates the case of the Greater Manchester Homes Partnership (GM Homes), asking: What impacts did GM Homes have on its wider service delivery system? Through which mechanism(s) did it generate these ecosystem effects: asset-based working, innovation, and/or collaboration? Taking a novel process tracing approach, the analysis draws on evidence from interviews, primary documents, user data, and secondary literature.
The third article, “Results-Based Funding Via Development Impact Bonds: Stakeholder Perceptions on Benefits and Costs,” focuses on the application of impact bonds for international development purposes. As most research to date has focused on SIBs in high-income countries (HICs), this study contributes to the much newer and smaller body of literature on Development Impact Bonds (DIBs) by examining stakeholder motivations, expectations, and experiences in low- and middle-income countries (LMICs). The study’s two primary research questions are: What are the contextual factors and anticipated model benefits that are motivating stakeholders to use DIBs? What costs and challenges do stakeholders experience in implementing DIBs? The study answers these questions through interviews with DIB stakeholders, drawing from a sample of seven recent projects.