INSTITUTIONAL REVIEW BOARD IMPLEMENTATION OF THE SINGLE IRB
MODEL FOR MULTICENTER CLINICAL TRIALS
by
Martinus J. Koning-Bastiaan
A Dissertation Presented to the
FACULTY OF THE USC SCHOOL OF PHARMACY
UNIVERSITY OF SOUTHERN CALIFORNIA
In Fulfillment of the
Requirements for the Degree
DOCTOR OF REGULATORY SCIENCE
August 2022
Copyright 2022 Martin Koning-Bastiaan
DEDICATION
I dedicate this work to my family. I consider myself incredibly lucky to have the
amazing people in my life that I do. First, to my father, Hans Koning-Bastiaan, for all the
encouragement and wisdom he has tried to impart. Whether I took heart or learned from
it is my responsibility. I think I finally might be able to begin to understand the amazing
work you did for Genentech all those years. My mother, Hilda Koning-Bastiaan, for the
example of creativity, intelligence, and compassion that remains my goal to emulate. My
wife, Vanessa, for her forbearance, love, and support. When it matters enough, two busy
people can combine their lives together and come out better for it. My children, Hayley
and Keldon, for listening to your excited father bore you to tears about arcane issues
without complaint. I hope I have shown you that we never stop learning. Sometimes we
even get credit for it.
ACKNOWLEDGEMENTS
It may not be common to begin a section acknowledging the support, help, and
sacrifice of others with an apology, but I feel I owe my family a debt. When I started this
journey several years ago, one of my children was still a pre-teen. To this day he has
never complained about the endless hours I spent in front of a computer, nor the
weekends broken up by all day classes. That was probably the greatest help anyone
could have given me. I must acknowledge the debt I owe to both my children, my wife,
parents, family, and several friends who may have felt abandoned. Their support has
been essential to this work.
I am deeply grateful for the guidance, insight, and instruction provided by my
advisor, Dr. Frances J. Richmond. I cannot thank her enough for her professionalism,
care, humor, and steel. Some lessons and phrases will stay with me forever: “Words are
cheap!” The faculty of the Department of Regulatory and Quality Sciences also deserve
mention. Drs. Nancy Smerkanich, Eunjoo Pacifici, Michael Jamieson, Susan Bain, and
others deserve much praise for their patience, dedication, and support. I also am grateful
for the members of my committee, including Drs. Terry Church and Paul Beringer, for
providing guidance, advice, and above all criticism. I also want to acknowledge the
members of my doctoral cohort, cohort 5, for the inspiring exemplars of smart, interesting
people you all are.
I would also like to acknowledge my focus group participants. Drs. Kristin Craun,
Amy Johnson, and Julie Reyes for their generous and insightful comments. Their
encouragement, interest, and candor were very helpful.
Dr. Susan Rose is another to whom I am indebted. Her encouragement, belief in
me, and glowing recommendation to the doctoral program (so I am led to believe) were
all instrumental to the path that I followed to get here.
Lastly, I must also thank my colleagues, Yu Chung and DeAnn Diamond, for
encouraging me to look into the Regulatory Science program. This is all your fault.
TABLE OF CONTENTS
DEDICATION .................................................................................................................... ii
ACKNOWLEDGEMENTS ............................................................................................... iii
LIST OF TABLES ............................................................................................................ vii
LIST OF FIGURES ......................................................................................................... viii
ABSTRACT .........................................................................................................................x
CHAPTER 1: OVERVIEW .................................................................................................1
1.1 Introduction .....................................................................................................1
1.2 Statement of the Problem ................................................................................3
1.3 Purpose of the Study .......................................................................................5
1.4 Importance of the Study ..................................................................................5
1.5 Limitations, Delimitations, Assumptions ........................................................6
1.6 Organization of Thesis ....................................................................................7
1.7 Definitions .......................................................................................................8
CHAPTER 2: LITERATURE REVIEW ...........................................................................10
2.1 Literature Search Methodology ....................................................................10
2.2 The Need for Ethics Review .........................................................................11
2.2.1 Nuremberg and Helsinki .................................................................12
2.2.2 Beecher and Belmont ......................................................................14
2.2.3 The Common Rule and ICH ............................................................18
2.3 Clinical Trials ................................................................................................21
2.3.1 The Need for Trials .........................................................................22
2.3.2 Multicenter Trials ............................................................................24
2.4 Ethical Oversight of Distributed Clinical Trials ...............................................26
2.4.1 IRBs in Jeopardy .............................................................................26
2.4.2 Creating Central IRBs .....................................................................31
2.5 Current Efforts to Manage Multicenter IRB Review ....................................36
2.5.1 Reliance Agreements and Local Context ........................................36
2.5.2 Local Knowledge .............................................................................38
2.5.3 Current Efforts .................................................................................41
2.6 Structuring Research into Single-IRB Implementation ................................44
CHAPTER 3: METHODOLOGY .....................................................................................51
3.1 Introduction ...................................................................................................51
3.2 Survey Population .........................................................................................51
3.3 Survey Development and Validation ............................................................51
3.4 Survey Deployment ......................................................................................53
3.5 Survey Analysis ............................................................................................53
CHAPTER 4: RESULTS ...................................................................................................55
4.1 Survey Participation ......................................................................................55
4.2 Demographic Profile of Responding Institutions .........................................55
4.3 Developing Single IRB Capacity ..................................................................62
4.4 Implementation of Single IRB Review .........................................................77
4.5 Monitoring and Feedback .............................................................................87
CHAPTER 5: DISCUSSION .............................................................................................97
5.1 Methodological Considerations ....................................................................97
5.1.1 Delimitations ...................................................................................97
5.1.2 Limitations .......................................................................................98
5.2 Consideration of Results .............................................................................101
5.2.1 Process and Implementation Strategies .........................................101
5.2.2 Interim Implementation Outcomes ................................................107
5.2.3 Implementation Needs and Problems ............................................109
5.2.3.1 Assuring that HRPP staff are capable ..............................109
5.2.3.2 Creating consistent policies and expectations .................110
5.2.3.3 Educating clinical site personnel .....................................111
5.2.3.4 Assuring a sufficient number of staff ................................112
5.2.3.5 Altering documentation systems .......................................112
5.2.4 Future Directions ...........................................................................113
REFERENCES ................................................................................................................115
APPENDIX A. SURVEY ..........................................................................................125
LIST OF TABLES
Table 1: Relevance of the CIPP Evaluation Type to Formative and Summative
Evaluations .................................................................................................49
Table 2: Longer Review Times Explanation .....................................................................63
Table 3: Shorter Review Times Explanation .....................................................................64
Table 4: Other Challenges for Expanding Single IRB Capacity .......................................66
Table 5: Other Responses to Cost for Changes .................................................................69
Table 6: Gaps and Improvements for Training Materials ..................................................71
Table 7: Other HRPP Training Mechanisms .....................................................................73
Table 8: Comments to Increase Effectiveness of Training ................................................74
Table 9: Comments on Improving Reliance Agreements ..................................................79
Table 10: Weighted Results for Changes Needed for Ceded Studies .................................88
Table 11: Leaders in Single IRB ........................................................................................89
Table 12: Stakeholder Effects ............................................................................................92
Table 13: Lessons Learned for Relying Institutions ..........................................................93
Table 14: Lessons Learned for IRBs of Record .................................................................95
LIST OF FIGURES
Figure 1: Participation in Single IRB Review ...................................................................55
Figure 2: Respondent’s Position in the HRPP ...................................................................56
Figure 3: Institution Description ........................................................................................57
Figure 4: Number of HRPP Staff .......................................................................................57
Figure 5: IRB Staff Size at Universities With and Without Med Centers .........................58
Figure 6: Number of Staff Dedicated to Single IRB Review ............................................58
Figure 7: Single IRB Review Staff at Universities With and Without Med Centers ........59
Figure 8: Number of Clinical Trials in Past Year ..............................................................60
Figure 9: Number of Trials by Type of Institution ............................................................60
Figure 10: Clinical Trial Review Times ............................................................................61
Figure 11: Review Times by Type of Institution ...............................................................62
Figure 12: Change in Review Times .................................................................................63
Figure 13: Prioritization of Ceded Reviews .......................................................................65
Figure 14: Challenges for Expanding Single IRB Capacity ..............................................66
Figure 15: Weighted Scoring for Challenges to Expanding Single IRB Capacity ............67
Figure 16: Additional Resources Needed to Expand Single IRB Capacity .......................68
Figure 17: Cost for Changes ..............................................................................................68
Figure 18: Useful Training Resources ...............................................................................70
Figure 19: Training of HRPP Staff ....................................................................................73
Figure 20: Factors Influencing Adoption of Single IRB Review ......................................76
Figure 21: Preparing for FDA Mandate for SIRB .............................................................77
Figure 22: Reliance Agreement Structure ..........................................................................78
Figure 23: Reliance Agreement Satisfaction .....................................................................79
Figure 24: Who is Serving as IRB of Record ....................................................................81
Figure 25: Quality of Reviews ...........................................................................................82
Figure 26: Likelihood of Tasks for Relying Institution .....................................................83
Figure 27: Likelihood of Tasks for Relying Institution (2) ...............................................84
Figure 28: Responsibility for Sending Results ..................................................................85
Figure 29: Methods for Keeping Track of Ceded Trials ....................................................85
Figure 30: Satisfaction with Tracking Ceded Trials ..........................................................86
Figure 31: Plans to Change Tracking Ceded Studies .........................................................86
Figure 32: Experience with Changes Needed for Ceded Studies ......................................88
Figure 33: Effectiveness of Institution's Implementation ..................................................90
Figure 34: Satisfaction of Stakeholders .............................................................................91
Figure 35: Preparation for IRB Mandate ...........................................................................93
ABSTRACT
Clinical trials are typically the most time-consuming and expensive part of a drug
development program. They often must be conducted at multiple sites under the
oversight of multiple institutional review boards (IRBs), with each board conducting its
own ethics review for the protection of human participants. Gaining permissions and
managing interactions with those multiple IRBs can be a burdensome process often
marked by trial delays and miscommunications among sites. A strategy to improve the
efficiency of multisite clinical trials, instituted by the National Institutes of Health,
centralized oversight under a single lead IRB. Its experience has led to similar initiatives
for several other types of multisite trials in the United States. However, it is still unclear
how currently existing IRBs are implementing and adapting to these “single-IRB”
strategies. Further, improving the implementations would be aided by knowing about the
concerns of the local IRBs, and whether, in their opinion, the new approaches are
yielding value. This study used a fit-for-purpose survey based on a program evaluation
framework which explored the implementation and consequences of the single IRB
mandate for multisite clinical trials from the viewpoint of US IRB management. The
results show that the single IRB mandate is now used by almost all human subjects
protection programs, but the programs, as yet, have not yielded the promised reductions
in workload or greater efficiency. Challenges that were identified in the past related to
institutional liability and negotiation of reliance agreements have been at least partially
addressed, but other issues related to management and logistics cause the IRBs to
struggle. Adequate training of HRPP staff and clinical study teams remains difficult for
many. Also identified are needs for additional staffing, better documentation systems,
and more consistent implementation of studies across different IRBs.
CHAPTER 1: OVERVIEW
1.1 Introduction
The clinical trials supporting drug commercialization have become sophisticated
activities under rules designed to protect the welfare and rights of the human participants.
Because those trials often must enroll large numbers of participants to provide sufficient
evidence of safety and efficacy, they are conducted across many institutions at the same
time. The individual sites participating in these “multi-center” clinical trials traditionally
have secured their own ethics approvals, a precondition of such trials, by applying to the
Institutional Review Board (IRB) at the specific site at which trial conduct will be carried
out. IRBs are tasked with ensuring that the rights and welfare of research participants are
protected. Each IRB reviews the study to ensure that the participants at its site are
protected according to the federal, state, and local regulations governing research with
humans. In the United States, the primary regulations, often described as the “Common
Rule”, are found in Title 45 of the Code of Federal Regulations (CFR), Part 46 (OPRR,
1991). In addition, the FDA has a similar set of clinical trial regulations in 21 CFR 56.
Ethical approval for a clinical site begins with a submission by the principal
investigator to the IRB at the site. The IRB reviews the study and can ask for changes to
the study protocol, consent forms and other administrative arrangements. If the clinical
trial has several sites, the multiple inputs from different IRBs create a challenging and
time-consuming set of approval activities for the clinical program managers. Because all
sites must operate under a single protocol and study plan, the study directors must
reconcile different and often conflicting requests for changes from different IRBs. That
reconciliation can be difficult to manage. In some cases, for example, one site might
even decide that the procedures in the trial are too risky for the participants unless the
clinical approach is revised substantially. In such a case, the sponsor’s options are
limited. The sponsor could accept that requirement and amend the protocol. However, if
the protocol were to be amended only for the site requesting the change, then the
scientific value of the study could be diminished by damaging the statistical validity of
the study and exposing participants to risk without the possible benefit of a sound
scientific outcome. Thus, the sponsor of the study would have to amend the protocol for
all the sites and send amendments to each of the other sites for additional review. No
guarantee exists that the other sites would see the changes as reasonable, so lengthy
discussions could ensue.
Alternatively, the sponsor could argue the point with the requesting IRB, although
this path has not typically been very effective. The only other course of action for the
sponsor would be to abandon the site and recruit another site as a replacement. The
sponsor must make these decisions, coordinating and reconciling changes as different
sites come to terms with the agreements until each site has the same protocol. At the very
least, this is time-intensive and costly. Prior efforts to relieve this burden through
collaborative, centralized, or commercial IRBs have not resolved the issue. Thus, several
efforts are underway to mandate the streamlining of the process of approving multicenter
trials. The process decided upon is a “single IRB review” in which one IRB serves as the
lead IRB responsible for reviewing the study for ethical issues for all the sites that are
participating in the trial. The IRBs at the other sites must accept this review and look
only at issues specific to their own institutions.
Trying to reduce the ethics reviews to one review is not particularly new in
concept. Since 2012, the National Cancer Institute has operated a central IRB and
recommends that cancer studies under its purview undergo single IRB review through the
NCI’s Central Institutional Review Board (CIRB). Further, the National Institutes of
Health (NIH) has mandated that any clinical trials supported by NIH must undergo single
IRB review; they cannot be reviewed by each site at which the study is performed. The
most recent revision to the Common Rule expands the requirement for single IRB review
to all federally funded multicenter trials. This approach has several positive aspects. It
greatly simplifies the application and approval process and facilitates reporting and
amendments after the trial has started. However, not all IRBs appear to be comfortable
with this approach. In the past, many IRBs have been hesitant to relinquish their
authority, citing concerns that other IRBs may not be sufficiently prepared to review the
studies at its institution. For example, an IRB in southern California might have
reservations about ceding its review to an IRB from Nebraska, fearing that the
Nebraska IRB may not understand the context of doing studies in certain
communities in southern California. On the other hand, concerns are also expressed
regarding the obligations of being the single IRB of record, because issues arise around
the management of legal liability and the logistics of keeping track of those agreements
alongside single-site studies within the institution.
1.2 Statement of the Problem
An IRB knows how to review studies independently. Its core competency is
typically the ability to communicate with the study team to obtain the information or
changes necessary to support those reviews. Further, it is well experienced with the
logistics that follow. Paperwork must be filed and then updated when the IRB reviews
and approves renewals or amendments to the research. The IRB also understands its
traditional role in tracking reports of adverse events, complaints, or data safety
monitoring reports. However, what it may not have is a multitrack system compatible
with a single IRB review model. Most have traditionally had few interactions with other
IRBs. Typically, agreements with other IRBs, called “reliance agreements,” have been
crafted for only a few partners and are structured uniquely without a templated approach.
Thus, issues related to data sharing and management are relatively new and require that
IRB professionals develop new systems and solutions. IRBs have little experience with
handling other types of duties and responsibilities, such as those associated with adopting
the role of a central IRB. Thus, some IRBs have said publicly that they will not serve as
a central IRB. Instead, they plan to engage a commercial IRB to review their multicenter
trials that must comply with the single IRB mandate.
A modest literature has discussed the issues that led up to the new regulations on
single IRB review. A previous director of the Office for Human Research Protections
(OHRP), Jerry Menikoff, even published a paper in 2010 to explain the problems
(Menikoff, 2010). The rationale and possible benefits of single IRB review have also
been discussed in the Notice of Proposed Rulemaking (NPRM) related to the use of single
IRB review. However, that document does not discuss in any detail the logistical issues
that might be related to implementation (OHRP, 2015). Thus, this transition poses
several “unknowns” related to the views and experience of IRBs as they attempt to
implement the single IRB model. Great value could be gained from a better
understanding of how IRBs are responding, what implementation paths they have taken,
and what concerns have come from those experiences.
1.3 Purpose of the Study
This study explores the current state of implementation of single IRB approaches
through the eyes of those most likely to understand that process, the directors and
managers of US academic IRBs nationwide. It investigates the views and challenges that
the IRBs have come to recognize as they work with these systems and try to integrate
them with their previous ways of conducting operations. A literature review was first
conducted to inform the development of a survey that was structured with reference to the
CIPP program evaluation framework’s Process area of focus, related most directly to
implementation. The survey tool was disseminated using the web-based survey platform,
Qualtrics XM®, to individuals who manage or direct IRBs.
1.4 Importance of the Study
No systematic review or analysis has been conducted to date to understand how
IRBs are responding to the transition to the single IRB review model for multisite studies.
Anecdotal evidence suggests that implementation has been variable in its effectiveness
and progress. This study may help IRBs to anticipate where hurdles may exist by
understanding the experience of others. Based on this information, they may be able to
adopt strategies, processes and procedures seen by others as particularly effective to
integrate these novel approaches into their workflow. IRB professionals have expressed
concern with the transition and these fears may be assuaged by insights from peers that
can better guide them as they go forward. Further, it is expected that learning about
implementation strategies will highlight other knowledge gaps and facilitate a scientific
inquiry into those areas.
Organizations advocating for or legislating single IRB solutions may be able to
benefit because they can better understand the stresses and strains of the implementation
for local IRBs. This understanding could help policymakers to design more effective
transition strategies or even suggest limitations of the model. We might even expect
benefits for the future planning of single-IRB models for other types of multicenter
clinical trials if those plans could be informed by the concerns and experiences of the
respondents to this study.
1.5 Limitations, Delimitations, Assumptions
This study has several delimitations. It has focused on the views and experiences
of only one group of professionals, the directors and managers of IRBs. Pharmaceutical
industry perspectives, regulator perspectives, and even the perspectives of the participants
in research could all be of great interest and value, but these groups were not included in
the present study. Further, the research was delimited to understanding only the issues
related to the implementation of the single IRB requirements, so it did not study in any
depth the other functions and challenges unrelated to the implementation of the single-
IRB system. Because this thesis examined only implementation and effectiveness issues
for IRBs, it was framed by an implementation model that provided less insight into other
aspects of the single IRB approach. For example, the research did not address whether
the costs to the sponsors of the research have been reduced in any quantitative way, nor
did it focus on the rationale or organizing principles underlying the development of an
IRB system by policymakers. The research also focused on IRBs inside the US.
Different regulations and local practices are typical in other countries, so attempting to
mix their responses into this exploratory study was thought likely to dilute and
confound the results.
Limitations to the study were related primarily to the ability to reach a
representative sample of appropriate IRB professionals who were willing to participate in
the study. Further, those professionals are very busy and may decide that filling out a
survey is not an effective use of their time. Care was taken to mitigate possible bias in
the representativeness of the sample. Because IRB professionals can feel overwhelmed
by their workloads, their opinions may also have been biased by a general resistance to
change that may affect their views on any type of change that might complicate the
workflow of the IRB. Nonetheless, I have assumed that the information provided by the
respondents to the survey was honest and accurate, and that the challenges that they face
were relatively unaffected by a generalized resistance to change.
1.6 Organization of Thesis
Chapter one serves as a brief introduction to the problem that will be studied and
describes the approach of the thesis. Chapter two reviews the literature on and around the
use of single IRBs for multicenter trials. It provides detail on the rationale, the policies,
and what is currently known regarding implementation. A description of the framework
used to structure the research is included. Chapter three describes the survey used. It
contains the survey methodology, including how the survey was created, distributed, and
to whom it was targeted. Chapter four presents the results of the survey and statistical
analysis of the results. Chapter five discusses what was learned and further issues to be
studied. It provides an interpretation of the results and includes recommendations for
further research.
1.7 Definitions
IRB – an Institutional Review Board is a committee dedicated to the review and
approval of human subjects research according to the applicable regulations governing
the conduct of the research. In this document, IRB is also used to denote the program
dedicated to the protection of human subjects (i.e., not just the committee, but the
administrative unit that supports the committee and serves the protection of human
research participants). In some organizations, a distinction is made between the IRB and
the Human Research Protection Program (HRPP). No such distinction is made in this
paper.
cIRB, or Central IRB – an independent ethics board created to approve research
for multicenter clinical trials. Such boards typically are not aligned with any institution
performing the clinical trial. Many central IRBs are disease-focused, such as cancer (NCI
CIRB) or stroke (StrokeNet).
sIRB (or reviewing IRB) – single IRB – the IRB responsible for the ethical review
of a multicenter clinical trial. This may include serving as the privacy board but need
not.
pIRB (or relying or ceding IRB) – participating IRB – the IRB at a site that is
relying on a sIRB for ethical review of a trial. This IRB may have responsibilities (such
as administrative issues or the privacy board), but they do not include responsibility for
the ethical review of the trial.
Multicenter clinical trial: A trial that is implemented by several organizations
where the particulars of the trial, such as the protocol and informed consent documents,
are substantially the same across the various sites that implement the trial. Any
differences between implementations must be determined to be scientifically unimportant
for the study.
Independent Ethics Committee (IEC): the internationally recognized term for an
ethics committee whose focus is the protection of human research participants. It is
broader than the term “IRB”, since many countries do not use the US IRB system but
have regional ethics boards or centralized ethics boards. In the US, the term IRB is
commonly used interchangeably with IEC, hence the slightly odd “central IRB”
formulation, where no institution or “I” is actually involved.
Human Research Protection Program (HRPP): A term that denotes the program
for the protection of human research participants (also known as HSPP for Human
Subjects Protection Program). In this paper, HRPP is used interchangeably with the IRB.
In recent years, this term’s use and popularity has increased. It serves to distinguish
between the work done by the IRB committee and the administrative work supporting
human research.
CHAPTER 2: LITERATURE REVIEW
2.1 Literature Search Methodology
To better understand the context and history of single IRB review, a review of
current literature was conducted. This review began with a simple text search through
the University of Southern California Library’s search tool and Google Scholar search
function. Terms such as “single IRB”, “multicenter clinical trial” and “multiple IRB
review” generated many more hits (> 400) than it was possible to sift through. After
picking several top candidates, based on their titles and on authors I recognized as
influential in the field, I mined each of these for referenced books and articles. In this way, I worked
backwards to seminal articles and books, such as Burman’s “Breaking the Camel’s Back:
Multicenter Clinical Trials and Local Institutional Review Boards” from 2001 (Burman
et al., 2001). The Burman article was cited by over 150 articles, which then became the
next set of articles to examine. While examining these articles, however, I found many
references to the origin of ethical review, clinical trials, and IRBs. Thus, I expanded my
search to the very beginning of clinical medicine and ethical deliberation upon it. Much
of that information I chose not to discuss in the chapter, since it would extend the
chapter's length and, while interesting, was perhaps not important to the topic at hand. Therefore, I
began the discussion of IRBs with the Nuremberg Doctors’ Trial as the modern turning
point for ethical review and the discussion of clinical trials with the scientific advances of
the mid-20th century. Each of these paths - first tracing the evolution of ethical regulation
of scientific experiments and then describing how clinical trials evolved - provided a
body of literature that was broad and deep. My task, therefore, was to pick out
judiciously what I or others felt were important inflection points. Google Scholar again
was used to find articles and books that reference those important events, such as 50-year
retrospectives of the Nuremberg trial or Beecher’s 1966 exposé of human subjects’
research. Additionally, several journals, such as Ethics and Human Research, promised
by their titles to discuss these important issues, and they did not disappoint.
Other references included guidance and notices from the FDA, NIH, and OHRP,
including the regulations and statutes found in the Federal Register and United States
Code respectively. Several books were found to be excellent resources, including Hilts’s
Protecting America’s Health and Marks’s The Progress of Experiment. For the
discussion of evaluative frameworks, Stufflebeam’s work on the CIPP framework was of
high value. Last, conversations with IRB directors, such as Ann Johnston from the
University of Utah and Kristin Craun of UCLA, were helpful to understand the current
context and views related to IRB functions, in addition to suggesting areas of focus for
the literature review.
2.2 The Need for Ethics Review
Ethical considerations underlying the conduct of medical practice have been
integral to the evolution of medicine itself (Bull, 1959). Concerns about the ethics of
human testing have been an evolving topic since disciplined experimentation on humans
began to occur at the end of the 18th century (Marks, 2000). By the beginning of the
1900s it became clear that laws governing the manufacture, distribution, and testing of
drugs were required (Young, 2014). By mid-century, the rationale for those laws became
abundantly clear, and gave rise to changes to the ways that clinical testing was viewed
and managed from an ethical perspective.
2.2.1 Nuremberg and Helsinki
The second World War was an important point of inflection in the revision of
clinical research ethics (Barondess, 1996, Nyiszli and Bettelheim, 1993). When
Germany surrendered to the Allies in 1945, Nazi experiments conducted on prisoners
were immediately reviled for their coercion, brutality, questionable scientific rigor, and
lack of concern for the participants’ lives or pain (Annas, 1992). Many feared that those
who perpetrated the atrocious medical experiments might not be forced to account for
their crimes. Thus, refugees and former inmates lobbied the American and British
governments to prosecute those responsible for the experiments (Weindling, 2004,
Schmidt, 2004). The Nuremberg Doctors’ Trial was the result. The judicial proceedings,
memorialized in over 250,000 pages of documentation, brought several convictions for
“crimes against humanity”, charges that had not existed before the Charter of the Allied
London Agreement (which formed the basis for all the Nuremberg trials). The proceedings
also yielded a set of principles meant to shape the future of experimentation on humans. That set of principles
is known as the Nuremberg Code (Nuremberg Code, 1949).
The Nuremberg Code is so named because it provides a series of protections for
research subjects that are listed as numbered elements or “codes”. For example, code 1
calls out provisions for informed consent; codes 4 & 5, provisions against unnecessary
suffering or death; code 6, benefits/risk assessment; code 9, the ability for subjects to
withdraw from a study; and code 10, the obligation to minimize harm or risk of death by
monitoring the subjects (Shuster, 1997). Although the Code was never legally binding, it
led most developed countries to adopt crucially important protections in current laws
protecting human research subjects (Ladimer, 1954).
The Nuremberg code gave the investigator the responsibility to ensure that ethical
precepts are followed. It came at a time when medical experiments seldom could claim
to meet its criteria. In fact, during the Nuremberg trial, the defense argued that medical
experiments conducted in the US were no better than those of the Nazi program
(Freyhofer, 2004). They argued that experiments done with prisoners in Illinois and in
the Philippines were very similar to Nazi experiments. In those experiments, prisoners
were infected with malaria or allowed to develop beriberi and then given varying
treatments to understand the disease or to test therapies (Katz et al., 1972). The defense
likened the Nazi sulfanilamide experiments, in which simulated war wounds were
inflicted on Polish women prisoners who were then treated with various antibiotics, to beriberi
experiments in the Philippines, where prisoners were allowed to develop beriberi and
provided varying diets to show that beriberi develops because of a dietary deficiency
(Freyhofer, 2004, Strong and Crowell, 1912). This false equivalency spurred Andrew
Ivy, the prosecution’s sole witness with medical credentials, into action. At his urging,
the American Medical Association hastily passed three “Principles of Ethics Concerning
Experimentation with Human Beings” in December of 1946 to defuse the criticisms of
American trial management (Freyhofer, 2004, Katz, 1996).
In retrospect, many analysts have suggested that the Nuremberg code had
relatively little impact outside of Europe at the time. Many interpreted the outcomes of
the trials as applying to Nazis, not doctors (Rothman, 1991). Nevertheless, the Code
served as the foundation for another significant document in the history of human subject
protections, the Declaration of Helsinki (Levine, 1996). Initially passed by the World
Medical Assembly in June of 1964, the Declaration reformulated and expanded the
Nuremberg Code (WMA, 1964). It included requirements to protect research subjects
through informed consent, right of withdrawal, and assessment of benefit/risk and
protections of safety, all identified in the Nuremberg Code. However, the initial
Declaration expanded further on these principles by adding detail regarding their
application. With regard to informed consent, for example, it identified that those unable
to provide consent should not be excluded, as the Code suggested, because the legal
guardian for the participant could provide consent for them (WMA, 1964). At the time of
its introduction, the Declaration did not receive universal acclaim. Many researchers
believed that it would provide undue restrictions on research, some asserting that it might
even wipe out experimental medicine altogether (Pappworth, 1990).
2.2.2 Beecher and Belmont
In the United States, neither the Nuremberg Code nor the Declaration of Helsinki
were considered to be binding (Howard-Jones, 1982). The AMA principles from 1946
could have been used to censure physicians who failed to follow them, but enforcement
was lax. Little discussion or explication of these principles can be identified from that time,
as would have been expected if they were meant to be followed (Katz, 1996). However,
by 1957, some progress toward more stringent rules could be recognized in the United
States. U.S. common law established a need for “informed consent” in medicine, based
on an expanding focus on the right to self-determination (Faden and Beauchamp, 1986).
This precedent, associated with a legal judgment from the Salgo v. Leland Stanford Jr.
University Board of Trustees (Salgo) case, gave some legal recourse to those who were
subjected to medical treatments or experiments without their consent (Meisel, 1977). The
tenet of informed consent has evolved to become a cornerstone of clinical research ethics.
It obligates the doctor to explain the study to the participant. It is not sufficient to ask
patients whether they consent to a procedure; doctors must explain what they plan to do,
the risks involved, and what results can be expected (Faden and Beauchamp, 1986).
At about the same time as the need for informed consent became a legal
precedent, many local ethical guidelines were created and scholarly discussions became
more common. For example, while the Kefauver/Harris Amendments to the Food, Drug,
and Cosmetic Act were being debated at the beginning of the 1960s, requirements for
informed consent were controversial. However, by the time of their passage in 1962, the
Amendments required that the FDA create regulations for the testing of new drugs that
included an explicit requirement for consent of the participant (Frankel, 1972). One
important consideration was how the ethical conduct of experiments would be assured.
By autumn of 1965, debate about who should review proposed research led to the
decision that institutions employing the researchers would be responsible. The idea of
the Institutional Review Board (IRB) in its initial form was attributed to James Shannon
of the National Advisory Health Council in 1965 (Frankel, 1972). He called for a
resolution that every institution receiving federal funds for human experimentation must
provide independent review by peers to ensure that the participants would be protected
and that the benefits outweigh the risks. When the resolution was passed, it spurred the
Surgeon General in February of 1966 to make it official through Policy and Procedure
Order number 129 (Frankel, 1972). Nearly twenty years after the Nuremberg trials, the
US finally issued an order to protect participants of medical experiments, independently
assess the protection of human subjects, and evaluate the scientific experiments to be
performed on them.
The need to pay close attention to these enhanced rules of conduct was
underscored dramatically by Henry Beecher’s exposé of US medical experiments in
1966. Building upon the work of Maurice Pappworth in England, who documented many
ethical violations in British studies, Beecher used 22 studies conducted in the US as
examples to illustrate in horrific detail the lack of informed consent, inhumane treatment
of subjects, and terrible side effects of experimental treatment (Beecher, 1966). Despite
common belief that ethics violations were the exception, Beecher concluded that
violations were “not uncommon”. Ethical violations may have been seen as limited or
local when clinical research was conducted on a small scale. However, in the years
following WWII, the volume of clinical research ballooned. Beecher reported that NIH
grant money for clinical research grew more than 600-fold in the 20 years after
the war (Beecher, 1966). The magnitude of the change catapulted medical research, and
its failings, onto the national stage. Worse yet was the public revelation of one
particularly distasteful experiment, the Tuskegee Syphilis Study.
In 1932, the Tuskegee study was funded under the direction of the US Public
Health Service to examine effects of untreated syphilis. The study ran for 40 years,
through an epoch that saw the discovery of effective treatments which could make full
recovery possible. Despite those opportunities, none of the African-American men in the
study received treatment (Brandt, 1978). Initially, the intent of the study was to treat the
nearly 400 men it screened who had syphilis, but it quickly morphed into studying the
effects of untreated syphilis. The study had no written protocol to follow (Jones, 1993).
Subjects were misled about their afflictions, given sham treatments for “bad blood”, and
followed by investigators, sometimes until the bitter end of their lives (Gray, 1998).
To the medical personnel in the area, the study was not secret. When the public
health program to treat syphilis came to Macon County, Alabama, in the late 1930s,
participants of the study who came to the treatment centers were sent away without
treatment and were never informed of their true diagnosis (Gray, 1998). Over the course
of the study, many of the subjects suffered the predictable effects of late-stage syphilis
and even died because of it. A 1955 report from the study estimated mortality from the
untreated disease at over thirty percent (Brandt, 1978). In the forty-year span of the
study, no significant ethical objections were documented from the involved doctors and
nurses. It was only following newspaper accounts and the resulting public outcry in 1972
that the study was halted (Brandt, 1978, Jones, 1993). The outcry reached the US
Congress and in 1974 drove the passage of legislation creating the “National Commission
for the Protection of Human Subjects of Biomedical and Behavioral Research” (US
Congress, 1974), which was tasked with creating recommendations to strengthen human
subjects regulations (Brady and Jonsen, 1982). The result of this four-year effort was
called the Belmont Report.
The Belmont report is a short document meant to articulate the ethical principles
that should underlie all research with human participants (DHEW, 1979). The purpose of
the commission was not to provide regulations, but to explain the rationale and guiding
principles on which the regulations should be based. While heavily influenced by both
the Nuremberg Code and the revised Declaration of Helsinki (WMA, 1975), the authors
attempted to identify the ethical precepts that gave rise to them. Each of the three
principles highlighted in the Belmont Report had important historical antecedents
(DHEW, 1979). The first, respect for persons, forms the basis for seeking consent. It
recognized the disrespect inherent in doing something to someone without their
knowledge and thus violating that person’s right to self-determination. Its ethical
antecedents extend from laws regarding battery and negligence. These were made clearer
by case law, which confirmed that informed consent must be sought prior to a medical
procedure (Faden and Beauchamp, 1986). The second principle, beneficence, is a
principle that can be recognized in the Hippocratic Oath. It requires not only that the research
“do no harm”, but also that it maximize benefits and minimize risks. The third
principle, justice, is one whose articulation owes most to the Tuskegee Syphilis Study. It
holds that those who could be expected to benefit from the experiments must be included
in those who bear the burdens of the research. The racism associated with Tuskegee
demonstrates the clear injustice of having only black men bear the burden of research that
would benefit all persons.
2.2.3 The Common Rule and ICH
The Belmont Report and the subsequent laws passed to ensure ethical treatment of
research participants drove regulatory changes. The Common Rule, as it has become
known, appeared in the Federal Register in 1991 as 45 CFR 46 (OPRR, 1991).
Sixteen Federal agencies initially agreed to the Rule. The FDA, however, had more
specific requirements that it satisfied by creating its own regulations, which were
nonetheless harmonized with the Common Rule through regulations published at the
same time as 21 CFR 50 and 56. The three main precepts of the Belmont Report, respect
for persons, beneficence, and justice, were represented respectively by requirements that
informed consent be sought and documented, the benefit/risk ratio be favorable after risks
had been minimized, and the selection of subjects be equitable. The new regulations also
required that approvals for human research be granted by a new type of oversight board,
called an Institutional Review Board (IRB), prior to the start of research.
The structure and responsibilities of the IRB were spelled out in the Common Rule.
An IRB must have at least five members, sufficiently experienced and knowledgeable to
“promote complete and adequate review” of the research performed at the institution. It
must be diverse in race, gender, and background. At least one member must be affiliated
with the institution and one member must be unaffiliated. The unaffiliated member is
expected to strengthen the independence of the board so that they focus on the welfare of
the participants and not institutional politics. None of the members are permitted to
review research in which they have a conflicting interest. Furthermore, at least one
member must be a scientist and one a non-scientist (OPRR, 1991). Having a non-
scientist on the committee allows the committee to be educated about the perspectives of
the participants and the public at large. These requirements are designed to ensure that
reviews recognize the needs and perspectives of the different groups with an interest in
the research and that reviews are able to represent the interests of the population that the
research is studying (Bankert and Amdur, 2006).
The review of the IRB is instrumental to the oversight of clinical research. The
IRB not only reviews the study before the trial begins but also exercises ongoing
oversight to be sure that any adverse effects of the treatment are recognized and
managed, that privacy and confidentiality are maintained, and that vulnerable populations
are safeguarded (Amdur and Bankert, 2010). This means in practice that resubmission to
the IRB is required at least yearly. Importantly, the IRB was given the authority to
prevent or halt research that does not comply with its decisions or with the protocol that
had been approved. If a study runs out of money or the staff cannot perform the
procedures in the protocol, then the participants could be subjected to risk without the
benefits that might be available otherwise. The members of the IRB must also
understand the policies of the institution and local laws, the capabilities of the research
staff and the adequacy of available funding. To this end, the IRB can invite people who
have additional expertise on specialized questions to assist in its deliberations. For
example, an IRB that is reviewing research on prisoners might invite a non-voting
prisoner representative to its meeting. To assure a transparent and consistent approach to
their work, IRBs are required to create written policies, to document their decisions for
the researchers and for auditors, and to require written documents related to the trial,
including documented informed consent from participants (OPRR, 1991).
The US regulations are based on the findings of the Belmont report, but the
international community created similar protections based on the Declaration of Helsinki
(WMA, 2013). The International Council for Harmonisation of Technical Requirements
for Pharmaceuticals for Human Use (ICH) assisted in these efforts by including a section
on ethics review (section 3) of the Good Clinical Practice (GCP) guidelines called ICH
E6(R2) (ICH, 2016). It instructs researchers to follow the Declaration of Helsinki and
aligns closely with the US regulations. The United States, European Union, Japan,
Canada, Korea, China, Switzerland, Brazil, Singapore, and Taiwan have all implemented
this guideline.
2.3 Clinical Trials
Human research comes in many forms. Perhaps the most demanding experiments
from a logistical and regulatory perspective are clinical trials, defined by the National
Institutes of Health as:
… a research study in which one or more human subjects
are prospectively assigned to one or more interventions (which may
include placebo or other control) to evaluate the effects of those
interventions on health-related biomedical or behavioral outcomes (NIH,
2014)
In such studies, patients are often subject to medical interventions at a time when
their disease state makes them particularly vulnerable, so the need for protection and
oversight is enhanced. Historically, such studies were typically carried out by the
physicians themselves to ascertain whether certain medical treatments were having an
effect (Bull, 1959). However, by the middle of the twentieth century, not only was the
ethical foundation of research under scrutiny but the character of clinical trials was
changing. An early step in this evolution came in 1938, as the result of a tragedy
involving a liquid formulation of sulfanilamide whose solvent, diethylene glycol, killed
over 100 people, mostly children (Hilts, 2003). The tragedy prompted the passage of the
1938 Food, Drug, and Cosmetic Act (FD&C Act) that shifted the burden of proof for
drug safety to the company marketing the product (Marks, 2000). Prior to this Act, the
government had to prove that a drug product was dangerous, and only after the product
was on the market. The new FD&C Act required companies wishing to market a drug to
test the safety of that product before putting it on the market. This meant that companies
had to create methods to test their products.
2.3.1 The Need for Trials
The new Act came at a momentous time, a time when a pharmaceutical revolution
was just beginning. Wartime needs for game-changing novel antibiotics drove America
to supplement the tedious process of compounding a medicine with a more efficient
system of mass production. In particular, the introduction of penicillin led Florey and
Heatley in 1941 to press US governmental agencies such as the Department of
Agriculture and the Office of Scientific Research and Development to mass manufacture
this needed medication (Young, 1992). The proof of principle that it established under
governmental aegis encouraged firms such as Merck and Company, E.R. Squibb and
Sons, and Charles Pfizer and Company to expand their own manufacturing capabilities
(Lax, 2005, Hilts, 2003). By the end of the war, penicillin had proven the power of large-
scale, science-based drug production to fuel a profitable business. Many pharmaceutical
companies began to concentrate on isolating, testing, and marketing new products
(Young, 1992, Marks, 2000). Rapid growth and high returns, driven by a need for
effective new medicines, made companies very aggressive in their marketing efforts, and
set the industry up for another tragedy.
Among the wonder drugs of the late 1950s was a sedative called thalidomide,
which promised pregnant women relief from the nausea of early-term pregnancy often
called “morning sickness.” The drug was widely sold in Europe before its teratogenic
effects became apparent, with over 15,000 infant deformities and deaths
linked to the drug (Hilts, 2003). In America, a newly hired FDA drug reviewer, Dr.
Frances Kelsey, had reservations about the drug during her premarket review. As she
struggled to resist pressure to approve the product for commercial distribution, news of
widespread fetal malformations became public in the United States. A journalistic
explosion helped to drive the passage of a drug-safety bill that had been languishing in
Congress (Hutt and Temple, 2013). The “Kefauver-Harris Amendments” to the FD&C
Act introduced three key requirements relevant to clinical trials: companies would have
to prove that drugs destined for market were effective as well as safe; their marketing
submissions would be subject to a review process prior to marketing; and they must
conduct “adequate controlled trials” to provide the evidence for efficacy and safety (US
Congress, 1962).
What an “adequately controlled trial” meant would not be established until 1969,
when clarifying amendments were added to part 130.12 of Title 21 of the Code of Federal
Regulations. The revisions made clear that companies must conduct “well-controlled”
trials that allow for “quantitative evaluation” (FDA, 1969). Importantly, the FDA did not
prescribe a precise account of what would count as “adequate” or how many people
might be needed to provide sufficient evidence for efficacy. It recognized that an
“adequate” number of research subjects could vary depending on several factors. These
factors would have to be explained with reference to the scientific foundation underlying the
decisions. The regulations were, however, unequivocally against uncontrolled trials:
Uncontrolled studies or partially controlled studies are not acceptable
evidence to support claims of effectiveness. A study is uncontrolled when
there is no comparison study against which to evaluate the treatment
results, or when such experimental factors as disease identity are not
controlled. (FDA, 1969)
The shift to evidence-based experiments placed an increased emphasis on
statistics (Gail, 1996, Junod, 2008). Much of its early foundation came from the
statistical approaches that were developed by Fisher and others (Reid, 1950, Hotelling,
1951). Interestingly, Fisher was not working in the medical product area. Trying instead
to prove whether artificial fertilizers increase crop yields, Fisher devised a series of
experiments in which plots of land were assigned randomly to be fertilized.
Randomization between portions with fertilizer and portions without allowed the effects
of confounding variables to be minimized (Fisher, 1935). The principle of randomization
was soon adopted as a generalizable tenet of clinical trials. For example, one group of
randomly drawn participants in a drug trial might receive the investigational medicine
and another receive a placebo or a comparator drug. The advantages of randomization to
reduce bias were magnified if the subjects were also blinded to the experimental
treatment (Hill, 1963). From single-blind trials, in which the participants did not
know whether they were receiving the experimental drug, grew trials in which bias
was further controlled by double-blind or even triple-blind protocols, in which the
physicians, and in the triple-blind case the trial sponsor, also did not know the treatment assignments until the trial was over.
These features of randomization, control, and blinding have become foundational to most
modern clinical trials (Lasagna, 1955, Green, 2002).
2.3.2 Multicenter Trials
Perhaps the most notable trend over the past twenty years has been the growth in
the size of clinical trials as reflected in their numbers of participants. Today, most drug
development and commercialization programs depend on four phases of clinical trials.
The first three occur prior to approval. These are described in FDA regulations found at
21 CFR 312.21. Phase I trials are first-in-human trials that attempt to establish
preliminary insight into product safety in only about 20-35 healthy volunteers. Phase II
trials typically enroll a few hundred participants with the targeted therapeutic indication
in whom the drug is tested for safety and initial efficacy. Those types of trials are also
intended to identify an anticipated therapeutic dosage window. Phase III clinical trials
use the dose-effect data from phase II to expand the testing of safety and efficacy. Phase
III trials are typically much larger trials, structured using rigorous statistical methods and
with participant pools in the thousands (Jenkins and Hubbard, 1991). Finally, some drugs
require that clinical trials be extended as phase IV trials for a period after
commercialization to solidify observations, add indications, or perform additional safety
testing in particular populations at risk.
The expansion in the size of trials, especially phase III trials, has also been fueled
by the increasingly competitive pharmaceutical business environment. While dramatic
treatment effects from early scientific breakthroughs can sometimes be ascertained from
modest clinical trials with relatively few subjects, the now-crowded therapeutic
marketplace requires that many new drugs compare themselves to a current treatment
over which they may have only an incremental advantage (Hilbrich and Sleight, 2006).
Many trials must therefore be large to prove that the effect seen following drug
administration is due to the new therapy, not to chance differences in unmeasured factors,
variable drug responses, or different baseline characteristics across the sampled
population. As the size of clinical trials grew, a single institution rarely had the ability to
enroll all the needed participants. Instead, multicenter or “distributed” clinical trials
expanded beyond an individual institution to establish sites at multiple institutions
(Burman et al., 2001). Since the 1970s, the proportion of trials that are conducted at
more than one institution has grown progressively. Such trials were rare in the 1960s, but
by the middle of the 1990s, nearly 60% of late-stage clinical trials had multicenter
configurations (Bell et al., 1998). Rapidly growing pharmaceutical opportunities also
affected the numbers as well as size of trials. Increased research dollars and interest
greatly expanded the possible drug candidates to pursue. With the possible payout of a
blockbuster drug in the billions, IRBs experienced more than an order-of-magnitude increase in
clinical trials between the 1970s and the 2000s (Wagner et al., 2004).
2.4 Ethical oversight of distributed clinical trials
2.4.1 IRBs in Jeopardy
The fact that trials are commonly distributed across many sites sets up logistical
and ethical challenges, including challenges in the way in which IRBs handle their ethical
oversight. In 1998, the Inspector General of the Department of Health and Human
Services issued a report titled “Institutional Review Boards: A Time for Reform”. This
review argued that changes in the research environment put the IRB system in jeopardy
(DHHS, 1998). The report acknowledged that most research programs were no longer
“carried out by a single investigator working under government funding with a small
cohort of human subjects in a university teaching hospital” (p.ii). Instead, clinical trials
were mostly sponsored by private pharmaceutical or device companies and typically
spanned several institutions so that they could quickly enroll a large cohort of
participants. Additionally, the growth in numbers of applications, the complexity of
protocols, and the volume of safety data from such trials placed increasing pressure on
the IRB system. The report provided a series of recommendations that attempted to
address the issues. They recommended, for example, that IRB members receive federally
mandated training; that IRBs be registered, evaluated on performance, and given
adequate resources; that Data Safety Monitoring Boards (DSMBs) be set up for some trials;
and that IRBs be informed of prior reviews of a protocol to discourage IRB shopping
(DHHS, 1998). Institutions responded with a series of piecemeal changes that could be
accommodated within the existing IRB regulatory framework. In a follow-on report to
assess the status of their recommendations, the Office of the Inspector General found that
only a few of their recommendations had been acted upon (DHHS, 2000).
More recently, additional changes were introduced to respond to the original
report. To address the expansion of trial numbers, institutions were forced to put more
than one IRB in place. To respond to the complexity of trials, reviewers were given
formal training and protocols were assigned to specific reviewers. Assigning two or three
reviewers to do an in-depth review prior to an IRB committee meeting helped to assure
that complex studies would get the benefit of a detailed review prior to that meeting. To
address the growing volume of safety data, DSMBs were put into place, so that
responsibility for data review could be shifted away from the IRBs. Instead, the DSMBs
would present periodic safety reports to the IRB (DHHS, 2000). However, these
measures mostly failed to address the root cause of the problem, specifically that changes
in size and nature of the clinical trials have made it difficult for IRBs to be effective in
reviewing those trials efficiently (Burman et al., 2001, Emanuel et al., 2004).
A specific challenge that made change difficult was the federal requirement that
trials obtain approval from the IRBs at every site engaged in the trial before the
trial could begin. Typically, then, a multicenter trial would have to seek approval from
multiple IRBs associated with the multiple sites at which the trial would be held (Stair et
al., 2001). This was, at best, a ponderous process and, at worst, an intractable nightmare.
Each institution could (and usually did) ask for changes based on concerns raised by the
IRB members. Typically, the members of one IRB would raise concerns that differed
from those of another (Shah et al., 2004, Khan et al., 2014). Since the trial must operate
under the same protocol at each site, a sponsor would then have to find a way to
incorporate all the requested changes. However, quite commonly, changes suggested by
different IRBs contradicted one another, making it difficult to reconcile the differing
suggestions without some level of negotiation (Silverman et al., 2001, Stair et al., 2001).
This process could take a good deal of time (Schnipper, 2017). Faced with a recalcitrant
IRB, an otherwise useful site might be dropped altogether from the study. The decision
to abandon an otherwise competent site is bad for everyone involved. The trial might
have benefited from having participants enrolled from that site. The clinical team at that
site may have benefited from the opportunities for advanced education and experience
with the product, and patients at the institution might have benefited from access to a new
and perhaps better therapeutic option. Furthermore, the financial support associated with
the conduct of a clinical trial brings desirable revenue to the institution and the site
personnel.
Why would a sponsor make the decision to abandon an otherwise capable site
rather than engage in a lengthy negotiation or protocol modification? Sponsors are very
sensitive to problems that might increase the length and cost of a clinical trial. Thus, they
may be reluctant to conduct a trial at an institution with a reputation for slow IRB
approval times or many contingencies (Forster, 2001). This puts the institution and its
IRB in a delicate position. Institutions are aware that to some extent they are competing
for the opportunity to participate in a trial. Increasingly, sponsors have the option to use
clinical sites in other countries whose institutions aggressively market their fast approval
times and many participants. The IRB might then become cautious about asking for
changes, especially if it knows that powerful and valued clinicians and scientists will
complain vociferously to their administration about the loss of scientific opportunities
and research money. In addition, an IRB may pay less attention to the study design,
because it knows that several other IRBs will review the study, and
presumably will catch anything important (Menikoff, 2010). Paradoxically, then, the
increased number of IRB reviews could reduce the number of real problems that are
addressed, especially if a sponsor were to select only those institutions whose IRBs were
meek enough to approve their protocol with little question.
Another development has also challenged the unique role of institutional IRBs as
the only gatekeepers to clinical trial sites. IRBs untethered to a particular institution
began to spring up as not-for-profit or commercial entities that provided services for trials
conducted in smaller clinics or doctor’s offices (Forster, 2001). At the beginning, such
IRBs were often viewed with suspicion by institutional IRBs and other stakeholders. It
was not until 1996 that NIH allowed for-profit independent IRBs to review federally
funded research (Forster, 2001). However, free-standing clinics were often more
competitive for certain types of studies because their commercial IRBs were more
effective at furnishing fast and straightforward reviews. As far back as 1998, the Office
of the Inspector General of Health and Human Services found that commercial IRBs
conducted reviews in less than one third the median time of local
institutional IRBs (DHHS, 1998). Thus, traditional IRBs became concerned that trial
sponsors would be lured by the promise of faster review times and possibly more
approachable management structures, and often also worried that those ethical reviews
would be more lenient. This competition has put pressure on institutional IRBs to
approve the research quickly.
One might ask why institutions might not outsource their IRB activities to a
commercial vendor rather than dealing with the logistics of its own IRB structure, given
that the commercial IRBs appeared to be more agile. The costs of outsourcing would of
course be an issue. However, concerns also exist about liability in situations where some
aspect of the trial goes wrong. When a suit is brought by participants in a clinical trial,
the institution where the trial was performed is typically named in the suit, in part
because the institution has deeper financial pockets than any of the individuals
conducting the trial (Hoffman and Berg, 2005). Institutions are wary of the financial
consequences and reputational damage that might occur if review were to be outsourced
and then conducted badly (McNeil, 2005). Thus, worry that participants would not be
well served has kept for-profit IRBs on the periphery (McNeil, 2005).
Regardless of whether IRBs were institutional or commercial, sponsors still faced
the challenge of working with multiple oversight bodies. The different sources of
oversight might be considered to add value if the multiple reviews greatly improved the
trial protocols. However, the evidence for such improvements has been difficult to find.
In fact, the weight of evidence suggests that the system of multiple reviews does not yield
studies with better designs or enhanced safety outcomes (Menikoff, 2010). For example,
Burman identified that multiple local reviews did not lead to more readable consent
forms (Burman et al., 2001) or suggest substantive changes in the protocol (Ravina et al.,
2010). Instead, the multiplicity of reviews only appeared to increase costs and delay the
onset of the trial (Menikoff, 2010).
Clearly, something needed to be done to reduce the duplication and redundancy
involved in the interventions of multiple IRBs. To some extent, the regulations (45 CFR
46.114) anticipated the need for some form of solution by allowing a pathway for
“Cooperative Research”. In Cooperative Research, a multicenter trial could be reviewed
by a single IRB, if the other IRBs could agree to accept that review. It was, however, left
to each IRB to decide whether it would accept the recommendations of a different IRB or
conduct its own independent review. Given the choice, most IRBs took the latter option
and justified that decision by citing liability concerns, responsibility to the local
participants associated with its institution, or distrust of another IRB’s determinations
(Loh and Meyer, 2004). Therefore, the option to accept another IRB review was not
sufficient to solve the issues associated with the oversight of multicenter trials.
2.4.2 Creating Central IRBs
Having one IRB to oversee all the institutions in a single clinical trial makes sense
to increase efficiency and consistency. Not surprisingly, then, proposals began to emerge
over the last two decades to institutionalize “central” IRBs (CIRBs) that would oversee
all sites in certain types of multicenter trials. The National Institutes for Health appeared
to have made the earliest efforts to develop such a system for the trials that they
sponsored. In their model, a new IRB, entirely separate from the IRBs at the clinical sites,
was created as the oversight board for all sites in the multicenter trials that it sponsored.
The National Cancer Institute, for example, established an NCI CIRB in 2001. It had at
least 16 members with expertise in cancer research (Wagner et al., 2010) to ensure the
scientific competence of the board (Christian et al., 2002). To assure the board's
independence, none of its members were employees of NCI.
At the beginning of the NCI program, site-based IRBs were not compelled to
accept the review of the CIRB. Consequently, 62% of applications submitted in the first
year of this program were re-reviewed by site-based IRBs that elected to examine the
protocol independently (Christian et al., 2002). This rate of re-review dropped
progressively over time as institutions gained confidence in the high quality of the CIRB
reviews (Massett et al., 2018). However, participation in the program was still not high.
By 2009, the adoption rate had stalled; less than 50% of institutions enrolling cancer patients
in NCI trials used the NCI CIRB (Infectious Diseases Society of America, 2009). Many
attributed this lukewarm uptake to the fact that NCI used a hybrid model, called
“Facilitated Review” (Check et al., 2013). The facilitated model required a partnership
between the local IRB and the CIRB (Good et al., 2020) that gave the CIRB
responsibility to review the science and compliance with federal regulations but gave the
local IRB responsibility to assure that local laws were followed and local investigators
were qualified. This model aimed to replace two full committee approvals, thereby
saving time and avoiding re-reviews of the protocol (Massett et al., 2018). However, the
time savings were insufficient to overcome concerns about the shared responsibilities and
liabilities. By 2013, NCI transitioned from a hybrid model to one in which the CIRB
took charge of the complete IRB review. This transition was successful; by 2016, 96% of
institutions enrolling into NCI studies accepted the CIRB review (Massett et al., 2018).
An important metric to determine whether the centralized model had value was its
demonstrated ability to decrease review times and costs. A study of the NCI’s program
found that time to approval was indeed faster by a month compared to historical estimates
(Massett et al., 2018). However, program costs, paid by NIH, were higher by $55,000 a
month than a comparable site-based system, in which the distributed IRBs used their own
university faculty and staff (Wagner et al., 2010). Thus, the program traded higher cost
for greater speed. Other CIRBs now exist and follow a model similar to that of NCI, but their
numbers are currently insufficient to meet the needs of the research requiring oversight.
As the NCI model showed, it can take years, persistence, and deep financial support to
make such an IRB viable. Thus, the central IRB model, while offering some relief for the
multiplicity of reviews, continues to be underutilized.
2.4.3 Developing Reliance Models
Another option to reduce duplication has been to encourage “regulatory reliance”
amongst IRBs. In this model, one IRB becomes the “reviewing” IRB and the others cede
their authority by “relying” on that review. Advantages might be gained by using the
existing IRB infrastructure rather than creating a separate oversight board. IRBs might
feel more comfortable to rely on a similar institutional IRB to review the study rather
than an unfamiliar IRB that they might not trust (Koski et al., 2005). Additional factors,
such as an existing culture of ownership, autonomy, and pride can also play an important
role, as it did with Harvard and its affiliate hospitals (Winkler et al., 2015). Costs also
might be reduced compared to that when a central IRB must be created or paid.
However, concerns also exist with a reliance model. One important concern is
liability, as introduced earlier in this chapter. When an institution contracts to carry out a
research study, it can be held liable for any problems created by the site team or trial
outcomes (Mello et al., 2003). For example, the failure of a researcher to conduct the
trial appropriately can put not only the researcher but also the organization at risk
(Anderlik and Elster, 2001). When the research has been approved by a different
institution, the liability is more difficult to control and can lead to finger-pointing (Mello
et al., 2003). It is easy to imagine that, in hindsight, an institution could claim that its
own IRB would never have approved a deficient study without better safeguards to
dissuade the bad behavior. Is this problem soluble? One approach has been to define the
liability that could be carried by each institution in a reliance agreement that is signed
between the IRBs. Both the Federal Regulations under the “new rule” published in 2018
and the NIH as part of their single IRB mandate encouraged the IRBs to describe the
roles and responsibilities clearly in the reliance agreement (NIH, 2016). If, for example,
the reviewing IRB approved a protocol in a way that could be considered unethical by
modern standards, then that reviewing IRB would be liable for the consequences. However,
the institution at which the site is located would be held liable for actions executed by site
personnel that were not prescribed by the protocol. Those actions might include, for
example, falsification of information by a site investigator or failure to administer the
correct test material (Lidz et al., 2018). To this date it is unclear how the demarcation of
responsibilities will impact the assignment of liability by the courts.
One key factor affecting the confidence of accepting another IRB’s review is the
“decoupling” of ethical review from institutional responsibilities (Flynn et al., 2013). A
2013 report from the Clinical Trials Transformation Initiative (CTTI) found another
large, and unexpected, barrier to using a CIRB. IRBs are very effective gatekeepers. As
such, they are often assigned many institutional tasks unrelated to their core purpose of
ethics review (Flynn et al., 2013). For example, in many institutions, the IRB organizes
not only the detailed oversight of trials but also the data related to institutional
responsibilities to satisfy other types of regulatory requirements. Such responsibilities
extend to assuring conflict of interest review, HIPAA determinations, and compliance
with education and training mandates (CTTI, 2013). Further, the IRB may also become
the main, and in some cases, only source of documented information about the numbers
and status of research at some institutions. It makes sense that an institution would wish
to add these other responsibilities related to clinical research to the IRB review so
that those obligations are met in a holistic way. For example, some institutions use
the IRB submission forms to interrogate the investigator about whether he or she has
conflicts of interest that also need review, albeit by a different committee. It is also
common for the IRB to serve as the privacy board to review HIPAA compliance for the
institution because it is in the unique position to understand the trial protocol and
therefore what HIPAA determinations must be made (Diamond et al., 2019).
There are good institutional reasons to couple the ethical review of a clinical
protocol with measures to satisfy other institutional responsibilities. However, the
management of these tasks can become more difficult when responsibilities are split
between a distant IRB doing the ethical review and the local IRB that is relying on that
review (O’Rourke, 2017). Quite commonly, the reviewing IRB will conduct an ethical
review and then the local IRB will perform an abbreviated review (without considering
the ethical aspects) in order to meet more specific institutional requirements such as HIPAA
determinations, COI disclosures, and liaison with other local departments, hospitals and
other groups (such as radiation safety, biosafety, device management, pharmacies, etc.)
(Diamond et al., 2019). However, this focuses attention on the logistics of the
interactions and the reliance agreements memorializing them, because they may extend
beyond what might be considered as the narrow confine of ethics review.
2.5 Current efforts to Manage Multicenter IRB Review
Core to a reliance model is the need to select one IRB over others as the IRB of
principal record. This can become a tricky, and sometimes sticky, challenge. As
mentioned above, several concerns can challenge reliance. In the original reliance model,
it is simpler for the institutions to decide which one will do the ethics review and which
will rely on that review if the research originates from one institution and the other IRBs
are clearly positioned as subsidiary partners. In these cases, the originating institution
typically plays the larger role in the research, and the other institutions have a more
limited role in recruitment or specific research tasks, such as specialized imaging or
surgical intervention (Koski et al., 2005). Moreover, research funded by the Federal
government is typically under a grant or contract to a named primary institution, which
can make moot the decision about which IRB to use. In those cases, the reviewing
institution must be the one named or directly receiving the money. When, instead, the
research originates from a pharmaceutical company and each institution is relatively
equal, it can be more difficult to assign a lead IRB (Resnik et al., 2018a). Often the
decision is made by default when the other IRBs, concerned about liability, decline to
serve as a lead IRB. In other cases, the prospect of enhanced visibility and early trial
start-up can encourage one or more sites to vie for the role of lead IRB.
2.5.1 Reliance Agreements and Local Context
Reliance agreements are formal contracts signed between institutions. They are
required when one IRB serves as the ethics reviewer for another site (Lidz et al., 2018).
They vary in form and depth depending on the nature of the institutional relationships
that are involved (Resnik et al., 2018b). For example, it is common for research to be
carried out between a large institution and a small entity such as a local nursing home,
clinic or unaffiliated hospital. In those cases, a relatively simple letter of agreement
between the institution and the clinic or hospital must be signed to allow the researchers
into the facility.
However, even these relationships can vary. Sometimes the small site only recruits the
participants, but in other cases, employees of the smaller facility are also engaged in
some or all aspects of the research. The more involved the other site, the more extensive
the arrangement must be. Regardless, the smaller facility must sign some sort of
agreement allowing the lead institution to serve as its IRB of record. In the case where
the smaller site has its own IRB, that IRB typically signs an agreement ceding some or all
its responsibilities to the IRB of the larger institution and thus agrees to abide by its
decisions. Such an approach has, however, been used sparingly because it requires a
formalized relationship between the IRBs. Negotiations regarding the agreement can
take months or years, even with broad agreement on the principles (Lidz et al., 2018,
Burr et al., 2019). The partnering entities must agree to and sign an “IRB Authorization
Agreement” in which the duties of each IRB are delineated, and the level of sharing is
identified, as required by 45 CFR 46.103(e).
In its simplest form, a reliance agreement might only be one page, as suggested by
inspecting the template released by OHRP (OHRP, 2011). It is common, however, for
the agreements to be much more detailed and require negotiation as each party tries to
limit its institution’s exposure to legal or financial liability (Resnik et al., 2018a).
Traditionally, this type of arrangement had been structured between organizations with
existing partnerships or that are located close to each other (Cobb et al., 2019). For
example, in Los Angeles, a reliance agreement is in place between the University of
Southern California and University of California Los Angeles. In such an agreement, the
reviewing IRB will always state that it will comply with the federal regulations for
review, provide approved copies of study documents and notify the reliant IRB of
problems, suspensions or terminations of the research (Resnik et al., 2018b). The reliant
IRB has duties as well: informing the reviewing IRB of any suspension or restriction
placed on the investigator at the relying institution; ensuring that the study staff have the
required certifications and training; acting as the HIPAA privacy board; ensuring that local
laws and regulations are followed; providing the required local information for the
consent forms; and providing local review of ancillary committees, such as Radiation
Safety or Biosafety (Diamond et al., 2019). The agreement also will include information
on how continuing reviews, adverse events, or amendments are communicated.
2.5.2 Local Knowledge
Experience with reliance agreements suggests that IRBs appear to be more
comfortable with ceding certain responsibilities if they share a local geography or
preexisting relationship (Koski et al., 2005). This type of confidence can be harder to
develop when the collaborations span longer distances. Thus, attention must be paid to
the important concept of “local knowledge” (Flynn et al., 2013). Local knowledge,
otherwise known as “local context,” is a term that encompasses important differentiators
that can exist between one locale and another and can create misunderstandings between
IRBs (Henrikson et al., 2019). Perhaps the most obvious of these differentiators in the
US are the varying state laws that cover health-related activities. For example, an IRB
from the state of New York may have little understanding of California laws regarding
prisoners or adults unable to consent. It is easy to see why an IRB in California might
then be concerned that a reviewing IRB from New York may be ill equipped to ensure
that those laws are followed. To make matters more complex, state laws are not the only
legal structures of concern. Within a single state, each county may have its own
procedures or local laws. When research projects are conducted between local
institutions, they share a similar set of laws. A local IRB may have to learn the local
rules for a nearby county, but the amount of information and the divergence between
institutions is easily managed. When institutions are distant, local IRBs have found it
important to prepare information for the reviewing IRB to help it understand the local
laws (Henrikson et al., 2019). IRBs have been explicitly tasked with being aware of any
laws governing the research and being sensitive to concerns of the local community.
These contextual concerns are called out specifically by the NIH to preserve the integrity
of the IRB review process (NIH, 2016). The degree to which such information increases
satisfaction with reliance relationships is controversial.
Some cite the factor as an important consideration, but others argue that the differences in
local laws have been overstated and that they are often not central to the review of the
research (Klitzman et al., 2019).
Local context may also include the demographic characteristics of the populace.
Because Los Angeles has many Spanish speakers, for example, consent forms and study
related materials all must be translated into Spanish. There may be other conditions that
the local lay member of the IRB understands but might be unknown to the reviewing
IRB. A history of oppression, abuse, or poverty in a particular locale or ethnic group can
influence how a particular research project may be viewed by potential trial participants.
Because the reviewing IRB must take those factors into account, it is important for the
reviewing IRB to be educated by its local IRB partner so that the protocol and recruiting
practices can be tailored appropriately (Lidz et al., 2018).
Another important type of local knowledge is that associated with the reputation
and past work of the investigator and the investigative team. A previous history of non-
compliance, for example, is something that the local IRB might know but be reluctant to
share with another IRB. Institutions may want to restrict information that might affect their
legal liability or reputation, and some information may even be protected by privacy
considerations or ongoing legal investigations. Currently the amount and type of
information that could be shared, particularly about individuals involved in the research,
is still a much-debated issue (Klitzman et al., 2019).
To deal with some of these concerns, the local IRB can put together a document
describing the local context to help the reviewing IRB. A “local context worksheet” is
now a common part of reliance agreements (Diamond et al., 2019). One issue of
concern, however, is standardizing the content of this worksheet. The information
included can vary substantially (Klitzman et al., 2019). Further, the forms that are used
to inform participants of the research risks are typically local. Informed consent forms
must include the investigator’s name and a local number, enabling participants to call
with questions or with concerns that they have been harmed. Many local IRBs also require
that the informed consent form be stamped by the institution where the research is taking
place. Importantly, consent forms can also differ by region. California, for example,
requires an “Experimental Subject’s Bill of Rights” to precede the main consent form
content. Further complicating the matter, variation between consent forms regarding
HIPAA, privacy, subject injury, and compensation are common (Diamond et al., 2019).
2.5.3 Current Efforts
Against the backdrop of the issues and concerns regarding single IRB reviews is
the promise of greater efficiency and consistency. The prospect of better speed, lower
cost, decreased use of resources, and perhaps even better protections for participants
(Menikoff, 2010) has motivated important changes to federal policy. Some efforts to
address the issues and concerns mentioned above have been pursued for several years.
Two of these, the Harvard Catalyst program and its outgrowth, the SMART IRB
program, have been on the forefront.
The Harvard Catalyst program began in 2008. The need arose from difficulties that
Harvard itself found when trying to navigate the complexity of clinical research.
Harvard University has 17 affiliated academic health centers and 11 different schools,
each of which is incorporated separately. Research that spanned more than one of these
entities was fraught with delays and inefficiencies (Winkler et al., 2015). IRB
Authorization Agreements (IAAs) were typically negotiated and managed individually by
the research project itself. Harvard therefore devised a common “Master Reliance
Agreement” to standardize the relationships and eliminate the need for individual
negotiation of IAAs and individual reviews of protocols. This master agreement is now
shared by all the institutions (Winkler et al., 2015).
Signing on to the Catalyst program allows one signatory to rely on any other
signatory’s IRB review. The master agreement, however, requires significant alignment
by each signatory with respect to their interpretation of regulations and other research
approval processes, such as conflict of interest review, HIPAA determinations, and
insurance coverage. To reduce potential dissonance, the Catalyst program specifically
attempts to address differences that were previously found to be pervasive in regulatory
interpretation and language at different partnering organizations. For example, differing
policies on non-compliance and reporting commonly existed that would have required
significant negotiation between the two institutions. Requiring that all signatories sign on
to a common policy as a condition of being part of Harvard Catalyst allowed the program
to achieve significant alignment among the initial institutions within three years (Winkler
et al., 2015). In addition to policies, the Catalyst program encouraged significant
procedural and SOP alignment. Concerns in local IRBs regarding quality were to some
extent allayed by requiring accreditation by the Association for the Accreditation of
Human Research Protection Programs (AAHRPP) of its signatories, to assure that each
signatory regarded the other IRB programs as trustworthy. Additionally, common SOPs
for reporting and audits were developed to ensure that reporting requirements and
information sharing issues were settled beforehand. This program was successful enough
to prompt other committee agreements, such as those developed as part of their Biosafety
reliance program, to be built around the same model (Caruso et al., 2020).
The Harvard Catalyst program was sufficiently successful that its funding agency,
the National Center for Advancing Translational Sciences (NCATS), decided to work on
nationalizing the model. In 2016 the Streamlined, Multisite, Accelerated Resource for
Trials (SMART) IRB program was launched (Cobb et al., 2019). The SMART IRB
program is set up like the Harvard Catalyst program. Its main deliverable is a master
reliance agreement that allows any signatory to rely on any other signatory’s review. To
date, 883 institutions have signed the SMART IRB Master Common Reciprocal
Institutional Review Board Authorization Agreement (SMART IRB, 2021). This
agreement retains some of the conditions that Harvard Catalyst imposed – a quality
assessment of their human subjects’ protection program (such as AAHRPP
accreditation), maintaining a Federalwide Assurance (FWA) with the federal Office for
Human Research Protections (OHRP), and significant policy alignment on reporting, audits,
training, and quality improvement. The agreement provides much detail on the
responsibilities and roles of the reviewing IRB and the reliant institution, including
mechanisms by which the reliant IRB can provide local context. Some of the ancillary
reviews, such as HIPAA determinations and conflict of interest review and management,
are performed by the reviewing IRB (Cobb et al., 2019). By May 2019, just over a year
after launch, SMART IRB recorded over 1250 studies with reliance agreements (Cobb et
al., 2019). Some institutions that have signed the SMART IRB master agreement have
even begun to replace existing agreements with the SMART agreement. For example, in
conversation, Kristin Craun of the UCLA IRB mentioned that new collaborative studies
between USC and UCLA are following the SMART reliance model instead of the earlier
independently negotiated approach (Craun, 2021).
Concerns remain about the assumption that the new systems will speed
study-start times, an assumption called into question by recent studies (Diamond et al., 2019). However,
it is difficult to assess the effectiveness of a program while institutions are still struggling
to assure full implementation. Sara Calvert, in a presentation as part of a workshop
hosted by OHRP in 2020, notes that there is “wide variation in how [single IRB review]
is being implemented” and a majority of participants have indicated that the process has
not yet “simplified the ethics review process” (Calvert, 2020). As Klitzman notes in a
response to Diamond et al., these data highlight “urgent needs for research” on single
IRB review (Klitzman, 2019).
2.6 Structuring Research into Single-IRB Implementation
Research into the implementation and effectiveness of the single IRB
mandate is needed now, while time remains to affect the outcome of the program.
Many IRBs are still working to understand and implement procedures and processes.
Therefore, a simple evaluation of the outcomes associated with single IRB review, such
as study-start times or costs, might be considered premature. It would certainly fall into
what Scanlon called a “Type III error” – trying to measure something that does not exist
(Scanlon et al., 1977). If it were assumed, for instance, that all IRBs have implemented
the single IRB mandate similarly (and this is not true), we would be measuring the
outcomes of a program that is not implemented completely. Further, such assessments
are challenging not merely because a multiplicity of implementation efforts have
occurred, but also that the full impact of using single IRBs has not yet been felt. While
the updated federal regulations in 2018 added language to mandate single IRB review (45
CFR 46.114(b)), to date the FDA has not altered its regulations to harmonize with the
new changes and does not yet require single IRB review. That means that IRBs are not
required to rely on a single ethics review for clinical trials sponsored by pharmaceutical
companies. Therefore, it is more likely that an evaluation that seeks to help IRBs to
understand and model successful behavior will be more useful than one that measures
how well the objectives of single IRB review have been met. We should not discount the
information we have about how the outcomes are faring. However, a “formative”
evaluative structure is more likely to be helpful than a “summative” one because the
additional information on implementation could be used to improve existing programs
and to understand the context of summative evaluations.
There are many ways to evaluate the progress of a program, and some depend on
the nature of the system under examination. Study of the single IRB system is
complicated because differences exist not only in the types and sponsorship of the studies
managed by the IRBs but also in the variety of implementation approaches that have been
tried. Thus, evaluation models that assume a simple linear progression from inputs to
processes and then to outputs would seem to be less useful than a framework deliberately
designed to cope with more varied implementations (Frye and Hemmer, 2012). Further,
as Fixsen and his colleagues note in their seminal monograph on implementation research
(Fixsen et al., 2005), it is important to distinguish implementation outcomes from
effectiveness outcomes. Applied to this situation, care must be taken to differentiate how
well IRBs are implementing the single IRB mandate from whether the single IRB
mandate is effective in achieving the stated goals. That there is “wide variation” in
implementation (Calvert, 2020) underscores this concern.
One framework designed to cope with complexity is the CIPP framework,
developed by Daniel Stufflebeam in the late 1960s as a response to the failure of earlier
evaluation efforts to produce useable results in educational program evaluation
(Stufflebeam, 1967). The CIPP framework identifies and classifies four elements that are
important to evaluate in a full accounting of system performance: Context Evaluation,
which assesses needs and problems; Input Evaluation, which looks at system capabilities
and implementation strategies; Process Evaluation, which examines a current
implementation process for problems and seeks to improve implementation; and Product
Evaluation, which seeks to close the loop by studying how outcomes achieve the initial
objectives of the project (Stufflebeam, 1967). As Stufflebeam sees evaluation as “the
process of supplying information for decision making” (Stufflebeam, 1971), it is
important to recognize the types of decisions that the evaluation will be informing. The
four elements defined by the CIPP framework are meant to support different types of
decisions.
Context evaluations seek to help identify needs, problems, or opportunities. A
good understanding of context helps to support planning decisions. In the case of
multicenter research, some understanding of the contextual factors driving the new single-
IRB programs is already well advanced. Several problems had been identified as plaguing
multicenter trials with numerous independent IRBs: wasteful use of resources, delays in
clinical trial start-up time, and insufficient protection of participants. Context evaluations
would typically focus on assessing the magnitude and effect of those issues, and whether
other related issues should be assessed. For example, inconsistencies in IRB reviews
have been suggested many times in the literature as a reason to move to a single or
centralized IRB model (McWilliams et al., 2003). However, it is not clear whether
reducing the number of reviews would assure a consistent review – it only ensures that a
single review is conducted. Thus, context evaluations to examine the factors seen as
drivers for the implementation of a new program would be central to this aspect of the
framework.
Once the objectives are clear, the course of action to meet those objectives must
be decided. An Input evaluation would seek to provide information relevant to
alternative programmatic options so that their relative merits could be assessed. Is
sufficient money or institutional support available, for example? In the case of IRB
reviews, issues related to the duplications of reviews that might waste resources or time
might be seen as important elements in designing the path forward. Input evaluation
might therefore consider whether the single IRB review approach is the best option or
whether other courses of action might also be valuable.
Once a particular plan is chosen to achieve the stated objectives, Process
evaluation provides information on how the plan is being implemented. It supports
implementation decisions by monitoring progress to allow for corrective actions or
optimizations. In this case, a process evaluation would provide information on how the
single IRB mandate is being implemented and what problems are faced by those
implementing it.
Last, a Product evaluation would provide information on whether the objectives
have been achieved - what are the results of the program? Such an evaluation would
support decisions regarding the effectiveness and impact of the program and help to
identify unexpected effects. For the single IRB mandate, a product evaluation would
provide information on how well it helped to reduce the problems articulated with
resources and time, for example.
The CIPP framework need not be applied rigidly. Different components of the
framework can be emphasized or deemphasized according to the state of the project,
indicated by the maturity of the activities that are in place (Stufflebeam, 2013).
Additionally, each of the evaluation strategies can be either formative or summative; they
could either inform the program to help make it better or evaluate the sorts of decisions
made at a particular stage of the program (Stufflebeam and Coryn, 2014). For example, a
context evaluation could be conducted to inform the program about the issue they wish to
address (formative), or it could be used to evaluate whether the program did a good job of
identifying the issues involved (summative).
In the present work, many of the decisions related to the nature of the “problem”
and approaches needed to solve it have been addressed with enunciated plans and
strategies. Thus, Context and Input evaluations would have been summative. A
summative evaluation of the issues and the plan to mandate single IRBs, while valuable,
seemed of less importance to the IRBs trying to implement the single IRB model.
Evaluations related to the process aspects of single IRB implementation have been called
out specifically as gaps in our current knowledge (Taylor and Ervin, 2017, Burr et al.,
2019, Diamond et al., 2019). Since implementation is still going on, a formative
evaluation could inform other implementations, insofar as it identifies areas where
improvements would be useful to assure more successful implementation patterns. Thus,
in this study, the process evaluation area was prioritized.
As Stufflebeam elaborates, the primary objective of the formative use of a process
evaluation should be to inform those implementing the program of problems and issues
of implementation as they occur (Stufflebeam, 2000). That formative work will “take
stock of their progress, identify implementation issues, and adjust their plans and
performance to ensure program quality” by assessing outcomes and side effects
(Stufflebeam and Coryn, 2014). A summary of how formative and summative
evaluations can fit with the CIPP evaluation is shown in Table 1.
Table 1: Relevance of the CIPP Evaluation Type to Formative and Summative
Evaluations
Formative evaluation (prospective application of CIPP information and judgments to assist with decision making, program implementation, quality assurance, and accountability):
- Context Evaluation: Providing guidance for identifying needed interventions, choosing goals, and setting priorities by assessing and reporting on needs, problems, assets, and opportunities.
- Input Evaluation: Providing guidance for choosing a program strategy and settling on a sound general implementation plan and budget by assessing and reporting on alternative strategies and resource allocation plans, and subsequently closely examining and judging the specific operational plan.
- Process Evaluation: Providing guidance for implementing the operational plan by monitoring, documenting, judging, and repeatedly reporting on program activities and expenditures.
- Product Evaluation: Providing guidance for continuing, modifying, adopting, or terminating the program by identifying, assessing, and reporting on intermediate and longer-term outcomes, including side effects.

Summative evaluation (retrospective use of CIPP information to sum up the program’s value, for example its quality, worth, probity, equity, feasibility, cost, efficiency, safety, and/or significance):
- Context Evaluation: Judging goals and priorities by comparing them to assessed needs, problems, assets, and opportunities.
- Input Evaluation: Judging the implementation plan and budget by comparing them to the targeted needs of intended beneficiaries, contrasting them with those of critical competitors, and assessing their compatibility with the implementation environment.
- Process Evaluation: Judging program implementation by fully describing and assessing the actual processes and costs, plus comparing the planned and actual processes and costs.
- Product Evaluation: Judging the program’s success by comparing its outcomes and side effects to targeted needs, examining its cost-effectiveness, and (as feasible) contrasting its costs and outcomes with those of competitive programs; also by interpreting results against the effort’s outlay of resources and the extent to which the operational plan was both sound and effectively executed.
Adapted from (Stufflebeam and Coryn, 2014)
As Stufflebeam and Zhang emphasize in their textbook on applying the CIPP
framework, care must be taken to identify the purpose and audience of an evaluation
(Stufflebeam and Zhang, 2017). This informs the methods and focus of the investigation.
The study performed here is a formative evaluation using the CIPP model, with a focus
on its process evaluation component. This component focuses the study on the
effectiveness of the program’s implementation - what impediments need to be addressed,
how design could be improved, and how implementation could be strengthened
(Stufflebeam and Zhang, 2017). To develop data, a survey methodology was proposed to
obtain information on needs, problems, strategies, and interim outcomes (Stufflebeam
and Zhang, 2017), related to the implementation of the single IRB mandate. The
organizations implementing this mandate are the Institutional Review Boards themselves,
so the survey was targeted toward IRB senior staff. A multiplicity of implementation
efforts was already known to exist; the survey catalogued those efforts and provided
information on the details of their implementation, the issues that IRBs have encountered,
the lessons that they have learned, and their views on the effectiveness of their changes.
CHAPTER 3: METHODOLOGY
3.1 Introduction
The purpose of this exploratory study is to survey the views and experiences of
IRB professionals in the US as they attempt to implement single IRB approaches for
multicenter clinical trials. Their responses were used to assess strengths and weaknesses
of the approaches, to suggest best practices and further areas of study, and ultimately to
improve implementation outcomes. The study used a web-based survey developed from
information gained in the literature review. A draft of the survey was refined and
validated with the help of a focus group.
3.2 Survey Population
The study sought the views and experiences of the directors and managers of
IRBs in the United States. Prospective participants were identified by association with
AAHRPP accredited institutions, conference attendee lists, and web searches. In all, 171
participants were identified and a response rate of roughly 33% was targeted (Nulty,
2008). In an effort to boost the response rate, several of the potential participants were
approached through personal interactions (email communication or phone calls) before
the survey was sent, to introduce them to the research. Reminders were also
sent out several times to non-responding persons.
3.3 Survey Development and Validation
The survey was developed using a web-based survey tool, Qualtrics
(http://www.qualtrics.com). Qualtrics provides an environment for the design, creation,
distribution, and evaluation of the survey. The survey included a combination of yes/no,
multiple choice, scaled, and open-ended questions. For questions structured to elicit
choices between multiple options, the order of the offered options was randomized within
the Qualtrics software to remove any possible ordering bias.
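Qualtrics performs this randomization internally; a minimal Python sketch of the principle is below (the option labels are illustrative, not the survey's exact wording):

```python
import random

def randomized_options(options, seed=None):
    """Return the answer options in a random order so that no
    option benefits from always appearing first."""
    rng = random.Random(seed)
    shuffled = list(options)  # copy; leave the canonical order untouched
    rng.shuffle(shuffled)
    return shuffled

# Illustrative answer options (hypothetical wording)
options = [
    "Processed the same as other submissions",
    "Processed separately",
    "Prioritized above other submissions",
    "Prioritized below other submissions",
]
presented = randomized_options(options)
```

Each respondent sees a freshly shuffled order, so any position effect averages out across the sample.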
The survey was created to provide information on the implementation and
effectiveness of single IRBs and had three areas of focus: demographics, implementation,
and effectiveness. Demographic questions captured the size of the IRB and the resources
associated with it, including the number of FTE personnel, the number of studies that the
IRB manages and the number of multicenter trials using the single IRB model at the
institution. The demographic questions paralleled the structure of questions already
solicited from the IRB when it must report to accrediting bodies such as AAHRPP.
Questions to explore implementation probed the numbers of trials using a single IRB
both as the reviewing IRB (sIRB) and as a relying or participating IRB (pIRB), the nature
and magnitude of effort required, the way that they are managed, and the hurdles that
have been faced. To assess some elements of effectiveness, questions were also posed
regarding the estimated costs of each method of review. The survey was designed to be
answerable within 20 minutes.
A preliminary set of questions was created, guided by insights gained from
current literature on the subject matter, the CIPP evaluation framework, and my
knowledge of constraints related to survey methods. Those questions were provided to a
focus group for refinement and validation. The focus group consisted of individuals
selected for their expertise in clinical trials and/or survey development. The focus group
was sent the draft survey several days prior to a scheduled 90-minute meeting, organized
on the videoconference platform, Zoom (https://zoom.us), allowing the participation of
individuals living at a distance. The session was recorded so that all comments were
captured and later reviewed. The meeting included a general introduction to the survey
and its purposes, a discussion of the questions in the survey, and then general comments
from the focus group about the survey length and methodology. The survey was then
fine-tuned based on the focus group's input.
3.4 Survey Deployment
The finalized survey was deployed using Qualtrics and was available to the
survey population for two months. Qualtrics has the facility to email the prospective
participants directly. However, it also could be accessed using an anonymous web link. I
developed a distribution list for those individuals with whom I am acquainted or to whom
I have been referred. Additionally, I generated a list of IRB directors and associated
individuals based on web searches of IRBs in the United States. The prospective
participants were selected for their management role within their institution’s HRPP and
their familiarity with the single IRB processes in their institution. No remuneration for
completing the survey was given as an incentive.
3.5 Survey Analysis
The survey results were collected anonymously within the Qualtrics platform,
which can provide a variety of analyses and calculations based on the collected data.
Data collation, percentages, counts, standard deviations, and other statistical tools are
available. Questions that have selectable answers (yes/no, multiple choice, or scaled)
were graphed and analyzed using appropriate methods such as percentages and actual
counts. Some crosstabulations based on demographic data were also examined (to
explore, for example, whether the size of an IRB has any bearing on cost or issues).
Open ended questions were examined for information and analyzed for possible trends.
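The cross-tabulation described above can be sketched as a pair-counting pass over response records (the records below are hypothetical; the actual analysis was performed within Qualtrics):

```python
from collections import Counter

# Hypothetical (institution type, answer) records from a survey export
responses = [
    ("University with a medical center", "Review times are longer"),
    ("University with a medical center", "Review times have stayed the same"),
    ("University/College without a medical center", "Review times have stayed the same"),
    ("University/College without a medical center", "Cannot say"),
]

# Each (demographic, answer) pair becomes one cell count in the crosstab
crosstab = Counter(responses)

for (institution, answer), count in sorted(crosstab.items()):
    print(f"{institution} | {answer}: {count}")
```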
CHAPTER 4: RESULTS
4.1 Survey Participation
The survey was sent by email to 171 prospective participants and remained open from
December 4, 2021 until January 10, 2022, when it closed. Sixty-five started the survey and 62
completed at least one question, providing a 36% response rate. Two of the respondents
indicated that their organization had never ceded a review and followed an abbreviated
path of two questions to complete the survey. Of the 60 respondents presented with the
complete survey, 9 (15%) only answered the first question and another 5 (8%) abandoned
the survey by the end of the demographic questions. This yielded a completion rate of
77%.
4.2 Demographic Profile of Responding Institutions
The first block of questions collected demographic information about the
participants and their institutions. The first question was designed to differentiate
respondents whose IRB had or had not ceded IRB reviews to another institution
(Figure 1). Of the 62 people who answered the first question, 60 (97%) indicated that
their IRB had ceded reviews, whereas 2 (3%) had not.
Figure 1: Participation in Single IRB Review
Has your organization ever ceded the review of a study to another IRB?
[Yes = 60; No = 2; N = 62]
The two respondents whose institution had not ceded a review were directed to a
pair of terminal questions asking if their organizations plan to participate in the future, to
which both answered yes. The second question, asking the respondents to explain why
their institution had not yet participated, was not answered.
The other 60 respondents, who reported that their institutions had ceded reviews,
were asked about their position within the HRPP (Figure 2). Fifty-one of them continued with
the survey. Most self-identified as IRB/HRPP directors (88%, 45/51), five as IRB or HRPP
analysts (10%, 5/51), and one as an IRB chair (2%, 1/51).
Figure 2: Respondent’s Position in the HRPP
Which position below best describes your role at the IRB/HRPP?
Respondents were given several options to describe their institution, including the
ability to choose “other” and write in a different description (Figure 3). Half identified
their institution as a university with a medical center (51%, 26/51), 40% as a
university/college without a medical center (20/51), 8% as medical center (4/51) and one
(2%) as an independent research site. Options for an independent commercial IRB or
Other were left unselected.
Figure 3: Institution Description
Please choose the best description of your institution:
Respondents were asked how many HRPP analysts reviewed research at their
institution (Figure 4). Nearly one-third had three to five (30%, 15/50) and another third
had more than 10 (30%, 15/50). The other respondents reported that the institution had
fewer than three (22%, 11/50) or six to 10 (18%, 9/50) analysts.
Figure 4: Number of HRPP Staff
How many IRB Analysts/Administrators review research at your institution across all
IRBs? (please estimate FTE if duties are shared)
A weighted average (Figure 5) was calculated by assigning a number to each of the
options (less than 3 = 2; 3 to 5 = 4; 6 to 10 = 8; more than 10 = 12). It shows that
universities with a medical center average more than twice as many HRPP staff as
institutions without one (9 vs. 4).
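The calculation can be reproduced mechanically. The sketch below applies the stated bin values to the overall counts from Figure 4; the 9 vs. 4 contrast comes from applying the same function to the counts for each institution type.

```python
# Value assigned to each answer bin, as described in the text
BIN_VALUES = {"Less than 3": 2, "3 to 5": 4, "6 to 10": 8, "More than 10": 12}

def weighted_average(counts):
    """Estimate the average staff size from binned survey counts."""
    total = sum(counts.values())
    return sum(BIN_VALUES[b] * n for b, n in counts.items()) / total

# Overall counts reported in Figure 4 (N = 50)
overall = {"Less than 3": 11, "3 to 5": 15, "6 to 10": 9, "More than 10": 15}
print(round(weighted_average(overall), 1))  # 6.7 analysts per institution
```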
Figure 5: IRB Staff Size at Universities With and Without Med Centers
Figure 6 shows the number of staff members dedicated to single IRB reviews.
Most respondents indicated that their institution had one or fewer staff members dedicated
to single IRB review (72%, 36/50). Twelve institutions had two to three (24%, 12/50),
and only two institutions had more than three (4%, 2/50).
Figure 6: Number of Staff Dedicated to Single IRB Review
Of these staff members, how many are dedicated to single IRB/Ceded review?
(please estimate FTE if duties are shared)
As expected, the universities with medical centers also devote more staff (roughly
double) to single IRB review (Figure 7).
Figure 7: Single IRB Review Staff at Universities With and Without Med Centers
Respondents were asked to estimate the number of clinical trials started at their
institution in the past year (Figure 8). They reported on three types of clinical trials,
“traditional,” where the review was done by the institution’s IRB, “ceded,” where the
institution accepted a review from another IRB, and “sIRB - IRB of Record,” where they
acted as the IRB for multiple sites. For traditional reviews, 18 said that they had opened
more than 100 (36%, 18/50), three between 51-100 (6%, 3/50), six between 25-50 (12%,
6/50), 18 fewer than 25 (36%, 18/50). Three opened none (6%, 3/50) and 2 could not say
(4%, 2/50). For ceded reviews, nine institutions reported opening more than 100 clinical
trials (18%, 9/50), six between 51-100 (12%, 6/50), 11 between 25-50 (22%, 11/50), and
18 fewer than 25 (36%, 18/50). Four reported none (8%, 4/50) and two were unable to say (4%, 2/50).
For clinical trials in which the institution was the IRB of record for multiple sites, only
one opened over 100 (2%, 1/49), two opened between 51-100 (4%, 2/49), 7 opened
between 25 and 50 (14%, 7/49), and 27 opened fewer than 25 trials (55%, 27/49). Eleven
institutions opened none (22%, 11/49), and one respondent could not say (2%, 1/49).
Figure 8: Number of Clinical Trials in Past Year
How many of the following types of clinical trials did you start at your institution over the
last 12 months?
If we cross-tabulate this with the type of institution, universities with a medical
center generally take on many more clinical trials and many more ceded reviews (see
Figure 9). A weighted average was calculated by taking the middle of each of the options
(i.e. 12.5 for Fewer than 25, 37.5 for 25 to 50, 75 for 51 to 100, and assigning 120 for
Greater than 100) and then dividing by the number of responses for each.
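As a sketch of this midpoint method, the block below uses the traditional-review counts from Figure 8; excluding the "None" and "Cannot Say" responses is an assumption about how the original averages were computed:

```python
# Midpoint assigned to each answer range, per the method described above
MIDPOINTS = {
    "Fewer than 25": 12.5,
    "25 to 50": 37.5,
    "51 to 100": 75,
    "Greater than 100": 120,  # open-ended top bin, assigned 120
}

def midpoint_average(counts):
    """Estimate the average number of trials from binned counts."""
    total = sum(counts.values())
    return sum(MIDPOINTS[r] * n for r, n in counts.items()) / total

# Traditional-review counts from Figure 8 ("None"/"Cannot Say" excluded)
traditional = {"Fewer than 25": 18, "25 to 50": 6, "51 to 100": 3, "Greater than 100": 18}
print(midpoint_average(traditional))  # 63.0
```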
Figure 9: Number of Trials by Type of Institution
[Estimated weighted averages: traditional reviews, 92 trials per year at universities with a medical center vs. 16 at universities without; ceded reviews, 73 vs. 13]
Respondents were asked to estimate how long it took on average for the IRB to
approve or clear trials that began in the past year (Figure 10). Traditional approvals were
completed by 11 institutions in less than 30 days (24%, 11/46), by 24 institutions between
30 and 59 days (52%, 24/46), by 9 institutions between 60 and 99 days (20%, 9/46), and by two
institutions over 100 days (4%, 2/46). For ceded trials, 20 institutions were reported to
take less than 30 days (48%, 20/42), 12 between 30 and 59 days (29%, 12/42), 6 between
60 and 99 days (14%, 6/42) and a further four in 100 days or more (10%, 4/42). Trials
for which the IRB served as an IRB of Record had similar review times to those for
traditional reviews. Nineteen percent completed approvals in less than 30 days (7/36),
58% between 30 and 59 days (21/36), 17% between 60 and 99 days (6/36) and 6%, in
100 days or more (2/36).
Figure 10: Clinical Trial Review Times
How long, on average, does it take for a clinical trial to be approved/cleared at your
institution?
When we cross-tabulate these results with the type of institution, some differences
emerge (see Figure 11). The weighted average is created by assigning
numbers to each of the ranges, multiplying that number by the responses in the respective
ranges, and then dividing by the number of responses. In this case, less than 30 days is
assigned 20, between 30 and 59 days is assigned 45, between 60 and 99 days is assigned
80, and 100 days or more is assigned 120. By this metric, a traditional clinical trial takes
somewhat longer at universities with a medical center (51 days on average), but ceded
reviews take roughly the same time at both types of institution (40 or 41 days, on
average), almost as long as an average clinical trial at a university without a medical center.
Figure 11: Review Times by Type of Institution
[Estimated weighted averages: traditional reviews, 51 days at universities with a medical center vs. 42 at universities without; ceded reviews, 40 vs. 41 days]
4.3 Developing Single IRB Capacity
Asked if review times have changed since 2018, when the single IRB mandate was
introduced, most of the respondents believed that review times had not changed (58%,
29/50) (Figure 12). However, a minority, 22%, believed that review times are longer
(11/50), two (4%) that they are shorter (2/50) and 8 (16%) that they were unable to say
(8/50).
Figure 12: Change in Review Times
How have your overall review times changed since the introduction of the NIH mandate?
Ten of the respondents who believed that review times were longer expanded on
one or more reasons for their view (Table 2). Most (6/10) identified communication and
coordination with other sites as a contributing factor. Another common factor was the
management of contracts or reliance agreements because of legal review or policy
differences (4/10). Respondents also indicated that administrative burden on the IRB
(3/10) and the combination of local (ceded) forms and reviewing IRB forms (3/10)
contributed to delay.
Table 2: Longer Review Times Explanation
Response Why are the review times longer?
1 The NIH mandate barely preceded the Final Rule effective date. Together these new requirements
added to the responsibilities of HRPP staff while minimizing the IRB role (when the IRB is NOT
providing sIRB services.) In simple terms the staff have more work to do, the committee members
have less work. This is true for most institutions so particularly for cooperative research, the tasks
related to reliance mingle with other administrative/staff burden and, overall, things take longer.
2 Local process plus external review process. more complicated legal review of contracts, agreements,
etc. lack of clear communication and understanding of the sIRB review process by local and lead sites
leads to a far longer progress.
3 Coordination of documentation when serving as the IRB of record for the lead PI or coordinating
center. SMART IRB flexible terms when the relying site and the reviewing site has indemnification
language that must be evaluated by legal. When the relying IRB in an NIH mandated sIRB and the
approvals at the IRB of record are discordant with institutional policies or processes. Only area in
which the review cycle time is improved is with use of external independent IRBs.
4 Many steps to establish reliance agreement and significant coordination needed to communicate local
requirements/complete local ancillary reviews. Also, researchers are usually no familiar with
system/requirements/process for IRB review at external IRBs.
5 It takes time to negotiate reliance agreements between institutions. Additionally, institutions still have
responsibilities outside of IRB review that take place, so most institutions require investigators to
submit a shorter local application for studies that are ceded. It can also be challenging to provide local
context to the reviewing IRB. Each IRB has similar (yet different) forms and requirements, so there
are often misunderstandings in how to complete forms by study teams that are unfamiliar with other
IRB's processed.
6 More communication required between sites/investigators.
7 The length of review time in the previous question is not clear. If you are asking only about NIH ceded
reviews to external IRB (non commercial), reviews are longer mainly due to communication issues
and institutional differences in policies. Commercial IRB ceded reviews (including NIH sIRB
mandated ones) are shorter due to being well organized and familiarity with handling multiple
institutions and having central information systems.
8 Extra administrative burden
9 Delay in submission being entered into local institution submission system and final external IRB
approval being submitted for acknowledgement. Also some delays in finalizing inter-institutional
agreements.
10 Coordination with other institutions to deal with local context review.
Two respondents who indicated that review times were shorter gave opinions: one
that the process of ceding a review added efficiencies, and the other that they did not
have to wait for the monthly IRB meeting (Table 3).
Table 3: Shorter Review Times Explanation
Why are the review times shorter?
Depending on the direction of review - if we are the Relying IRB, the process of ceding review is efficient. If we
are the Reviewing IRB, we sometimes find that Relying IRB's may lengthen the overall review time.
Mostly because we do not have to wait for our internal monthly IRB meeting ( if the sIRB has already approved
the study).
Respondents were asked how they prioritize the processing of ceded studies
(Figure 13). The majority (63%) treated them in the same way as other submissions
(29/46). However, about a third processed them separately (30%, 14/46). Two
respondents reported that ceded studies were prioritized over the others (4%, 2/46), and one
chose “other”, stating that prioritization is “dependent on funding.” No respondent
indicated that ceded studies would be prioritized below the others.
Figure 13: Prioritization of Ceded Reviews
How do you prioritize the processing of studies whose review has been ceded to another
IRB?
Respondents were asked to rank the top three challenges that constrain IRB
capacity to manage single IRB studies (Figure 14). The rankings were also analyzed by
computing a weighted score (WS), assigning a value of 3 to a first choice, 2 to a second
choice, and 1 to a third choice, and summing the results
(Figure 15). Educating investigators was chosen most often, by 35 of 40 (88%)
respondents, and had a WS of 77; 16 respondents ranked it as the highest challenge, 10 as
second, and 9 as third. Twenty-three respondents (58%) chose adjusting documentation
systems, with a WS of 50; 8 ranked it first, 11 second, and 4 third. Seventeen
respondents (43%) chose aligning policies to manage single IRB studies (WS of 35);
6 ranked it first, 6 second, and 5 third. Sixteen respondents (40%) chose obtaining
additional resources (WS of 26), with 5 placing it first, 4 second, and 7 third. Educating
HRPP staff was chosen by 14 respondents (35%; WS of 23): 2 ranked it first, 5 second,
and 7 third. Managing expectations from administration (WS of 8) was chosen by only
four respondents, one as first, two as second, and one as third. No one
selected convincing IRB members that participants would be protected as a significant
challenge. Three selections of “other” were made, two as the top challenge and one as
the third most important, without clarifying the nature of the challenge.
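The weighted-score arithmetic described above can be written as a minimal sketch (the function name is illustrative, not from the dissertation): each first-choice ranking counts 3 points, each second choice 2, and each third choice 1, summed per challenge.

```python
# Minimal sketch of the weighted-score (WS) calculation described above.
# A first-choice ranking counts 3, a second choice 2, and a third choice 1;
# the weighted contributions are summed for each challenge.
def weighted_score(first: int, second: int, third: int) -> int:
    """first/second/third: number of respondents ranking a challenge at that position."""
    return 3 * first + 2 * second + 1 * third

# "Adjusting documentation systems": 8 first, 11 second, 4 third
print(weighted_score(8, 11, 4))  # 50, matching the WS reported in the text
```

Applied to the reported rankings, the function reproduces the WS values in Figure 15 (for example, aligning policies: 6, 6, and 5 rankings give a WS of 35).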
Figure 14: Challenges for Expanding Single IRB Capacity
The following have been suggested by others as challenges when expanding an
institution's capacity for managing single IRB studies. Please choose up to three of the
most significant challenges faced by your institution
Seven respondents added “other” comments, reproduced in Table 4. These were
mostly consistent with the reasons given in Table 2 to explain why review times have
lengthened under single IRB.
Table 4: Other Challenges for Expanding Single IRB Capacity
Response Other Challenges for Expanding IRB Capacity
1 Dealing with inefficient or inexperienced IRBs
2 Communication/coordination between local researcher, local IRB office, local ancillary reviews, research
at IRB of record, and IRB of record
3 Managing expectations from investigators and research staff
4 aligning template consent form language and HIPAA compliance
5 working with multiple institutional policies. Managing communication between institutions.
6 Getting all to understand that only the IRB review is outsourced and all other responsibilities and
considerations remain local.
7 Managing the expectations of the reviewing IRB and communicating with them. E.g. they will sometimes
agree to terms on the reliance agreement and then reneg on those commitments at key times - for example,
reviewing to our local context, providing us with copies of outcome documents or reporting letters,
commitments to perform certain activities such as GDS certification
Figure 15: Weighted Scoring for Challenges to Expanding Single IRB Capacity
Respondents were asked if they required additional resources to deal adequately
with single IRB studies (Figure 16). Sixty percent of the respondents replied that they
needed additional training resources for staff (28/47), and 57% that they needed
additional staff (27/47). Thirteen respondents needed funds to enhance their
documentation systems (28%, 13/47). Nine respondents indicated that they needed
training resources for investigators (19%, 9/47). However, another 13% reported that
they did not need additional resources (6/47) and 15% could not say (7/47).
Figure 16: Additional Resources Needed to Expand Single IRB Capacity
What additional resources (if any) did you need to expand the IRB's capacity for single
IRB studies? (check all that apply)
When asked to estimate the cost for changes (Figure 17), many respondents
skipped the question. Of the 25 respondents who answered, 24% reported that costs
exceeded $100k (6/25) and a further 36% that costs were between $50k and $100k
(9/25). Only two respondents identified that the changes cost between $25k and $50k (8%),
another two that they cost less than $25k (8%), and four (16%) reported that no cost was incurred.
Figure 17: Cost for Changes
Can you estimate the cost for these changes?
Two respondents selected Other. One respondent could not estimate the costs and
the other needed resources but did not obtain those resources (Table 5).
Table 5: Other Responses to Cost for Changes
Other Responses to Cost for Changes
Do not know
We did not obtain additional resources, we have used existing resources, but we do need training resources and
resources for documentation
Respondents were asked to evaluate the usefulness of the resources that they used
when training staff (Figure 18). All respondents had used SMART IRB materials, and
nearly all (98%) found that they were at least somewhat useful (18/45 = very useful,
26/45 = somewhat useful). Only one respondent (2%) identified that the materials were not very
useful and none reported that they were not useful. Most (86%, 38/44) had also used
materials provided by PRIM&R. Of those that used them, 89% found them to be at least
somewhat useful (5/38 = very useful, 29/38 = somewhat useful), although 4 respondents
(11%, 4/38) characterized them as not very useful. Materials from AAHRPP were used
much less often, with only 56% of respondents reporting that they used them (24/43). Of
those that used the materials, 79% found them at least somewhat useful (5/24 = very
useful, 14/24 = somewhat useful). Another 17% did not find them very useful (4/24)
while one respondent, representing 4%, found them not useful at all. Materials from
CITI were also not used very often, with 57% reporting that they used them (24/42).
Similarly, 75% of those that used CITI materials found them at least somewhat useful
(4/24 = very useful, 14/24 = somewhat useful). 21% found them not very useful (5/24)
with 4% finding them not useful at all (1/24). NIH/HSS materials were used by 77% of
respondents (33/43) and were reported to be at least somewhat useful by 70% of them
(2/33 = very useful, 21/33 = somewhat useful); 21% of respondents found them not very
useful (7/33) and 9% not useful (3/33). About three quarters of respondents (77%, 33/43)
also developed their own materials and all that used them found them to be at least
somewhat useful (13/33 = very useful, 20/33 = somewhat useful). Several respondents
(15) also used materials developed at other universities or medical centers and found
them at least somewhat useful (2/15 = very useful, 13/15 = somewhat useful). Two
respondents identified the materials from Johns Hopkins University as somewhat
useful, and another that those from Emory were somewhat useful. Two respondents also
selected “other”, and both wrote in “IREx”. One of them found those materials very
useful and the other somewhat useful.
Figure 18: Useful Training Resources
What resources (such as webinars, guidance documents, or training materials) were most
useful in training IRB/HRPP staff?
When asked to identify areas in which the training materials could be improved
(Table 6), most respondents called for better guidance and training, especially for PIs and
research teams (13 responses). A smaller number identified the lack of standardized
processes and variability of implementation (4 responses), and the need for better
tracking systems (4 responses). Two respondents suggested that better consent templates
and processes would be useful. Also mentioned, with one comment each, were inadequate
oversight between the lead institution and the relying ones, help with international IRBs,
and a concern that there is still too much change.
Table 6: Gaps and Improvements for Training Materials
Response
Overall, were there areas that are still in need of better materials? Please
indicate any gaps and what you would need to address them.
1 The regulations/guidance on what the practice should entail is very broad. Every institution we work with
has different forms, requirements, expectations- there should be a more standardized process. It makes it
difficult to anticipate how long and detailed the process will be. SMART IRB helps.
2 1) Internal processes for IRB staff to process single IRB studies, 2) Site specific local context worksheets,
3) Better instructions to PIs, 4) Better protocol templates, 5) Improvement to electronic IRB submission
platform for single IRB processes
3 General guidance to study teams on what it takes to become an unofficial "coordinating center". Research
teams are often not prepared for the amount of work it takes to take on the responsibilities of managing a
single IRB study. They are less familiar with multi-site processes and therefore cannot instruct the sites
very well. IRBs need resources to be able to produce "quick guides" that study teams can reference or
distribute to relying sites.
4 PIs and research staff at the site and lead sites are still not knowledgeable or savvy about how the process
should work. Industry studies don't see much delay due to the change, but studies under the NIH mandate
are investigator-initiated and tend to not really be written to account for multiple sites and require
substantive modification to add a site while investigators think this is all a magic bullet and should be
easy. Local site staff and PIs are still learning the difference between IRB review and local context review
and all the things that are outside the scope of IRB but still need to be evaluated and addressed at the site
level.
5 There seems to be different approaches to the process flow and when documents are needed by
institutions. This creates challenges when institutional standards are in conflict. It would be good to have
a best practice recommendation as to the order for agreements, local context etc...
6 Utilization of various software platforms to accommodate sIRB studies. Better understanding of the
concept of "relying" on an other IRB which is an institution as opposed to the central IRBs in which
institutional requirements and templates are vetted and agreed upon so that each study does not require
significant time to come to language agreement. Reporting responsibilities of the lead site, e.g. continuing
review
7 Tracking systems (stand alone) or suggestions for best practice when you do not have this incorporated
into your regular submission tracking system
8 Training resources for PIs and IRB staff - site specific materials and documentation of applicable training
Better systems for managing and track sIRB studies
9 More guidance on post approval monitoring and exactly what documents the institution should have on
file and monitor when ceding review for reduced burden on investigators.
10 Consent templates and examples for multi-site studies, however we are building our own. Otherwise, I
don't need better MATERIALS I need better communication from the reviewing IRBs.
11 Need to create University specific resources, SOPs etc
12 I believe there should be an easier way to process consent documents. When an Institution cedes review
to an external IRB, we are not always sure that the IRB is following the regulations. For example, the
NCI CIRB approves protocols that do not comply with 21 CFR 312.164. Nevertheless, we investigators
are required to use this IRB. There doesn't seem to be adequate oversight of the IRBs serving as central
IRBs.
13 Difficult to provide comprehensive training when each Single IRB has different processes, policies and
submission systems.
14 Guidance documents for both investigators and reviewers. Constantly changing as process adjustments
and improvements are made.
15 Every institution has different reliance processes--this is NOT efficient.
16 The institutional responsibilities when sites are not acting as the IRB. Many sites are still conducting
lengthy reviews even if they are not the IRB of record.
17 Training other institutions to use our system is still a need.
18 Help with international IRBs.
Respondents were then asked how the HRPP staff were trained, choosing as many of
five options as applied (Figure 19). Most commonly, management participated in training
the HRPP staff (65%, 30/46). Almost as many indicated that they
used conferences or training sessions from professional organizations (56%, 26/46). Half
(50%, 23/46) indicated that staff trained each other in peer-to-peer interactions. About a
quarter identified that they used individual study (26%, 12/46), while only 6% were
trained by a third party (3/46).
Figure 19: Training of HRPP Staff
How were the IRB/HRPP staff trained to manage single IRB reviews?
(check all that apply)
Of the respondents who selected “other” (Table 7), one indicated that little
training had occurred. The other identified that they involved a reliance manager,
presumably a type of management position, so this response was included in the response
number for “management” reported above.
Table 7: Other HRPP Training Mechanisms
Reliance Manager
The was little education of staff until recently
Respondents were asked what they would change to increase the effectiveness of
the training. One subgroup suggested that no additional measures were needed
(4 responses). Most of the remaining responses could be grouped into 4 themes. One
theme focused on increasing the organization and structure of the training (4 responses).
A second focused on the need for better consistency between or within institutions
(3 responses) and a third, on better documentation for IRB staff or for PIs and study staff
(3 responses). The fourth identified that more real examples would be helpful
(2 responses). Additionally, a few noted that training was handicapped by lack of
time, personnel, or institutional buy-in (1 response each).
Table 8: Comments to Increase Effectiveness of Training
Response
Given what you know now, what would you change to increase the
effectiveness of the training?
1 The SMART IRB harmonization committee continues to work on trying to wrangle all actors onto the
same page, but we are clearly not there yet - and it is very clear that once you have had one study go
through one IRB that is all you have done... you haven't learned "the process" because it varies with
each IRB and study team.
2 Make the process the same across institutions or get rid of it altogether. It has NOT added efficiency
and investigators have no clue what it takes to be a successful lead site and IRBs can't do it for them.
3 It would be organized and PIs would also have to undertake training to understand what it means to be
lead or to cede.
4 Train all of the analysts on reliance, instead of just a few of them to focus. Staff vacations or turnover
have led to periods of time where only one or two people in the office were knowledgeable about
reliance issues, enough to conduct a review. Many know the basics, but are not confident enough to
take on a review themselves.
5 The best training is just going through the process at our institution. I find that as a whole institutions
are not consistent in how single IRB is handled. This makes it more difficult to learn from other
groups.
6 Graphic representations of the order events would have been a helpful training aide.
7 Have tools available and a clear process map for staff and study teams.
8 Accessible website content is key. Perhaps mandatory training for study teams that are coordinating a
single IRB study for the first time.
9 1) Establish the IRB process for single IRB review since there is no clear process 2) Have a structured
training process since there is currently no training documentation
10 A refresh would be useful as initial training was done at the time of the implementation and knowledge
and practices have evolved since that time. Use of third party to establish process and SOPs was
critical but no longer reflects process by current reliance team who are developing their own processes.
11 Pushed harder for institutional buy-in so key administrators would push information out to researchers
12 Walk staff members through actual review of particular studies so we can see where gaps in
knowledge lie.
13 Written SOPs
14 Our institution has not done much in the way of collaborative research. We are an office of 1 so I
handle everything and I would be the only person trained. I already had familiarity with single IRB
review and did not require much additional training.
15 More documentation of processes and materials on hand
16 I wouldn't change anything.
17 We just need more people available to train.
18 More time to devote to in-depth training
19 Nothing. I intentionally hired very experienced IRB staff who would be able to adapt to the policies
and how they were being implemented across time because I knew that it would take several years for
national practices and standards to evolve and we would have to adapt very quickly and deal with
unique situations all the time. Because of that, it would be very difficult to develop a top down training
program. I have 4 dedicated reliance admins/single IRB reviewers and will add 2 more in 2022. We all
learn together as a team as new policies/processes come up and they contribute significantly to the
development of our own policies and processes. They are all wonderful and my institution would never
have been able to do this without their expertise. I solved the single IRB problem with people rather
than documents and processes and systems (although we have those too) and I have never regretted it.
At some point in the future as systems and processes mature and become consistent, it's possible we
may reduce the FTE, but for now there are too many complex situations that require people skills.
20 I don't think training is the problem
Respondents were asked to rate the importance of certain factors influencing the
decisions of their institutions to cede a review (Figure 20). Responses were weighted by
assigning a value of 3 to “most important” (MI), 2 to “very important” (VI), 1 to
“somewhat important” (SI), and -1 to “not important” (NI). The weighted averages
(WA) were calculated using the number of respondents (45) as the denominator. The
highest average of 2.44 (representing higher importance) was given to ensuring the safety
of participants (MI: 56%, 25/45; VI: 38%, 17/45; SI: 4%, 2/45; NI: 2%, 1/45). Ensuring
the adequacy of consent forms and understanding local context had similar and somewhat
lower weighted averages, 2.04 and 2.00 respectively (Consent form adequacy: MI: 29%,
13/45; VI: 49%, 22/45; SI: 20%, 9/45; Local context: MI: 24%, 11/45; VI: 60%, 27/45;
SI: 11%, 5/45; NI: 4%, 2/45). Managing reliance agreements had a weighted average of
1.80 (MI: 13%, 6/45; VI: 60%, 27/45; SI: 22%, 10/45; NI: 2%, 1/45), and controlling
review quality had one of 1.69 (MI: 18%, 8/45; VI: 49%, 22/45; SI: 22%, 10/45; NI: 4%, 2/45).
Managing liability for conduct had a lower weighted average of 1.42 (MI: 13%, 6/45; VI:
44%, 20/45; SI: 22%, 10/45; NI: 9%, 4/45), and retention of local control was lowest at
0.84 (MI: 2%, 1/45; VI: 22%, 10/45; SI: 51%, 23/45; NI: 18%, 8/45). One respondent
selected “other” as most important and wrote that “Single IRB for many studies is slower
and adds MORE administrative burden without greater protection for subjects
(sometimes there is less protection).”
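The weighted-average calculation described above can be sketched as follows (the function name is illustrative, not from the dissertation): each importance level carries a weight of 3, 2, 1, or -1, and the weighted sum is divided by all 45 respondents, including any who could not say.

```python
# Minimal sketch of the weighted-average (WA) calculation described above.
# Weights: "most important" = 3, "very" = 2, "somewhat" = 1, "not" = -1.
# The denominator is the full respondent count (45), per the text.
WEIGHTS = {"most": 3, "very": 2, "somewhat": 1, "not": -1}

def weighted_average(counts, n_respondents=45):
    """counts: mapping of importance level -> number of respondents."""
    total = sum(WEIGHTS[level] * n for level, n in counts.items())
    return total / n_respondents

# "Ensuring the safety of participants" from Figure 20:
wa = weighted_average({"most": 25, "very": 17, "somewhat": 2, "not": 1})
print(round(wa, 2))  # 2.44
```

Using the full respondent count as the denominator means that “cannot say” answers pull a factor's weighted average down, which is consistent with the ranking reported in the text.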
Figure 20: Factors Influencing Adoption of Single IRB Review
The following have been identified in the literature as factors influencing the adoption of
single IRB review. Please rate the current significance of the following concerns when
your institution cedes ethical review:
FDA has not mandated single IRB reviews to date, but respondents were asked if
they were taking any proactive measures to prepare for the possibility (Figure 21). None
answered in the affirmative. Most (65%, 30/46) indicated that they are not taking any
measures and believe that their current policies and procedures will be adequate. A
minority (20%, 9/46) found it likely that the FDA will require single IRB review soon
but had done nothing to prepare. The remainder
(15%, 7/46) had not taken any measures and did not think the FDA would mandate single
IRB soon.
Figure 21: Preparing for FDA Mandate for SIRB
Are you taking any measures to prepare for the possibility that FDA will mandate single
IRB reviews?
4.4 Implementation of Single IRB Review
Another block of questions explored implementation of single IRB review in the
respondents’ institutions. The respondents were asked first how they structure their
reliance agreements when their institution is the lead (reviewing) IRB (Figure 22). Most
respondents (85%) indicated they use the SMART IRB agreement at least frequently (2/46
= always, 37/46 = frequently); a few (11%) used it infrequently (5/46) and none had
“never” used it. Two did not know (4%, 2/46). Nearly half of respondents used their
own template at least frequently (2%, 1/46 = always; 43%, 20/46 = frequently) and about
a quarter used it infrequently (26%, 12/46); most of the remainder (17%, 8/46) never used
their own template and two (4%, 2/46) did not know. Almost a third of respondents used
the OHRP template for reliance agreements at least frequently (2%, 1/46 = always; 28%
13/46 = frequently); a quarter (26%, 12/46) used it infrequently, 30% (14/46) did not use
it at all and three (7%, 3/46) did not know. Only 9% of respondents used unique
agreements at least frequently (0/46 = always; 9%, 4/46 = frequently), 52% used them
infrequently (24/46), 24% did not use them at all (11/46) and two (4%, 2/46) did not
know. One respondent replied “never” for the “other” category and an additional two did
not know (2/46).
Figure 22: Reliance Agreement Structure
Reliance agreements specify the roles and responsibilities of the lead IRB and the ceding
participants. If negotiating a relationship today as the lead IRB, what method(s) do you
use to structure your reliance agreements?
Respondents were asked about their level of satisfaction with the reliance
agreements that they use (Figure 23). About a quarter (26%, 12/46) indicated that they
were “extremely satisfied”, half (50%, 23/46) were “somewhat satisfied”, and 17% (8/46)
were neither satisfied nor dissatisfied. Only 4% (2/46) were “somewhat dissatisfied” and
2% (1/46) were “extremely dissatisfied”.
Figure 23: Reliance Agreement Satisfaction
How satisfied are you with the reliance agreements that you use?
When asked how reliance agreements could be improved (Table 9), respondents
commonly indicated that standardization would be helpful. The OHRP template was
called out as inadequate and the SMART IRB template was mentioned frequently as
needing further refinement and more consistent use by institutions.
Table 9: Comments on Improving Reliance Agreements
Response How could the reliance agreements be improved?
1 When using the SMART IRB agreement, there are several other documents that can go along with it
such as the flexible terms agreement, letter of indemnification, etc., I would like to see one signed for
each institution rather than on a study by study bases as this can reduce the administrative burden for
each site. Also the SMART IRB Letter of Acknowledgment (LOA) is a simple document that is used
to document that the SMART IRB agreement will be used. SMART IRB gave us a template that does
not require a signature, but institutions have modified it and added a signature line, which seems
redundant and increases administrative burden as well.
2 All seems to be working well
3 Standardization across the board
4 OHRP needs to provide a template that provides for all of the elements that it expects institutions to
address.
5 Standardization of the process would be helpful across all platforms and systems. The purpose was to
lessen the burden but that has not happened.
6 As the years go by these reliance agreements get longer and longer as people find the need to include
details not previously outlined in the agreements in years past. It is nice to have that specificity, but if
you look at the Smart IRB agreement for example, it still includes the 'flexible terms' and most arent
documenting those choices - so its not always useful in that respect.
7 We use the SmartIRB suite of documents, and it would be helpful if the documents could be made into
a single pdf where values could be entered once and auto-populate through the form as needed. For
example, the Site Agreement to Cede solicits the reviewing and relying institution names in several
spots, and that same information is entered in the Implementation Checklist and Local Context Form.
8 Our institutional agreement that is modeled after the OHRP template is not specific enough regarding
who maintain certain responsibilities such as education and conflict review.
80
Respondents were then asked the frequency with which they ceded to different types of
institutions (Figure 24). Most common were institutions using the SMART IRB template
(Frequently (F): 61%, 27/44; Occasionally (O): 36%,16/44; Infrequently (I): 2%, 1/44).
The options for commercial IRBs and institutions using their own agreements had
precisely the same results (F: 59%, 26/44; O:18%, 8/44; I:18%, 8/44; Never (N): 4%,
Response How could the reliance agreements be improved?
9 SMART IRB reliance agreement is helpful to reduce negotiations; however, not all institutions follow-
up with how they will implement the flexible terms. This has led to some discussions after the fact.
10 Standardized across institutions
11 1) Have the SMART IRB reliance agreement website be able to link to our electronic submission
platform 2) Have the reviewing IRBs requirements for local context and other questionnaires readily
available to PIs and relying IRBs when the reliance request is submitted 3) Provide information to PIs
about their responsibilities when they are a relying PI or lead PI
12 Establish a consistent template that is used in most cases and reduces modifications to the document
such that timelines are impacted by risk management and legal review.
13 SmartIRB agreement templates (LOAs) should consider State institutions that must follow different
indemnification rules. It is always challenging when we rely on private orgs who do not understand the
limitations of being a State institution
14 Institutions need to stop adding addenda to the SMART agreement. It puts us right back where we
started, having to negotiate terms and get lawyers and back-and-forth communication involved.
15 Standardization
16 The OHRP agreement could be updated
17 All institutions should be a part of SMART IRB to standardize the process and outline the
responsibilities that can vary per institution.
18 The main issue with the SMART Reliance agreement is that there is little consistency regarding how it
is used (and the purpose of the agreement and platform as I understand it was to create consistency.)
19 Indemnification can be an issue. Some institutions serving as a the IRB for the study require
significant indemnification agreements. Institutions must agree to these provisions or not conduct the
study. Investigators should not lose an opportunity to conduct research because of a contractual
provision in an agreement.
20 An industry standard agreement would be ideal. Similar to SMART IRB but we cannot always require
the SMART IRB agreement/system.
21 The SMART IRB template is fine; the flexibility that then needs to be addressed on a study level can
be overlooked which is a challenge, but it's better than what we had before and since it has been
reviewed by legal departments across the US it's not bad at all for an early attempt at standardization. I
use it as a basic framework now when working with institutions who have not signed the joinder
agreement.
22 Make everyone use the same document.
23 Sites should all use SMART IRB
2/44). Somewhat less common was the use of the NCI CIRB, which reviews cancer studies,
in part because about a quarter had never used the NCI CIRB (F: 52%, 23/46; O: 16%,
7/44; I: 2%, 1/44; N: 25%, 11/44). Three respondents chose “other” and expanded on that
choice by identifying “other reputable schools”; “other disease specific groups such as
NEALS”; and “everyone”.
Figure 24: Who is Serving as IRB of Record
When you cede reviews, what types of organizations are serving as the IRB of record?
Respondents were asked to rate the quality of the reviews done by the different
types of organizations to which they cede (Figure 25). The NCI CIRB was rated as
“excellent” by 30% (7/23), as “average” by 61% (14/23) and as “poor” by 9% (2/23).
Commercial IRBs were rated as “excellent” by 25% (8/32), as “average” by 72% (23/32),
and as “poor” by 3% (1/32). Institutions using the SMART IRB agreement were rated as
“excellent” by 16% of respondents (6/37) and as “average” by 68% (25/37). None found
the reviews to be “poor”, and 16% (6/37) found that the quality varied too much to rate.
Institutions using their own agreements were rated as “excellent” by 18% (6/34) and as
“average” by 59% (20/34). None were thought to be “poor”, but 24% (8/34) were found
to vary too much to rate.
Figure 25: Quality of Reviews
Please rate the quality of reviews done in the past two years by the following
organizations
To understand the consistency of policies underlying reliance agreements for
ceded studies, respondents were asked how likely it would be that the reliance agreement
would assign specific tasks to their own institution even when the study itself was ceded
(Figures 26 and 27). The four most likely tasks retained by the ceding institution are
shown in Figure 26. Two with highest likelihood were ranked similarly- ensuring that
Conflict of Interest review is complete and current (Always (A): 60%, 27/45; Likely (L):
36%, 16/45; Neither Likely or Unlikely (NLU): 2%, 1/45; Unlikely (U): 2%, 1/45), and
managing local study team training and education (A: 60%, 27/45; L: 33%, 15/45; NLU:
2%, 1/45; U: 4%, 2/45). Retained somewhat less frequently were responsibilities to
assure that language describing cost and injury related to the trial was correct on
consent forms (A: 44%, 20/45; L: 33%, 15/45; NLU: 4%, 2/45; U: 9%, 4/45; Never (N):
4%, 2/45) and carrying out Biosafety and Radiation Safety reviews (A: 38%, 17/45; L:
40%, 18/45; NLU: 4%, 2/45; U: 9%, 4/45; N: 4%, 2/45).
Figure 26: Likelihood of Tasks for Relying Institution
When you cede a review, how likely is it that the reliance agreement will assign the
following tasks to your institution?
Figure 27 shows four additional elements that were retained less frequently.
Responses were mixed about whether they would be responsible for conducting audits
(A: 9%, 4/45; L: 38%, 17/45; NLU: 20%, 9/45; U: 18%, 8/45; N: 2%, 1/45) and making
HIPAA determinations (A: 11%, 5/45; L: 31%, 14/45; NLU: 18%, 8/45; U: 16%, 7/45; N:
18%, 8/45). Least likely to be retained were responsibilities to review adverse events (A:
2%, 1/45; L: 20%, 9/45; NLU: 11%, 5/45; U: 42%, 19/45; N: 16%, 7/45) or stamp
consent forms (A: 0%, 0/45; L: 7%, 3/45; NLU: 4%, 2/45; U: 24%, 11/45; N: 51%,
23/45).
Figure 27: Likelihood of Tasks for Relying Institution (2)
Respondents were asked to identify who would be responsible for communicating
the results of local site activities to the lead IRB to which review had been ceded
(Figure 28). More than one choice was permitted so the number of responses exceeds the
number of respondents and suggests that many have multiple channels for such
communications. Three-quarters (76%, 35/46) relied on local study teams for some of
these communications; 43% (20/46) relied on the local IRB and 7% had an electronic
system that would communicate with the lead IRB (3/46). Two respondents (4%)
selected “other” and clarified that the assignment of responsibilities would vary or that
the reliance manager would be responsible. One respondent (2%,
1/46) could not say.
Figure 28: Responsibility for Sending Results
When your institution has local site responsibilities (such as those above), who is
responsible for sending results to the reviewing IRB?
The respondents were asked how they keep track of trials whose review had been
ceded to another institution (Figure 29). More than one answer was permitted. Eighty
percent (37/46) reported that they used an electronic system and 30% that they used a
manually maintained spreadsheet (14/46). Only 4% (2/46) reported that they used a
separate system, and one respondent selected “other” (2%,1/46), stating “We have a home
grown system within our main application to track sIRB studies that we have ceded
review for; this is the same system as our normal protocols, but not accessible to
investigators and very limited in functionality.”
Figure 29: Methods for Keeping Track of Ceded Trials
How do you keep track of ceded trials?
Asked about how satisfied they were with the way ceded studies were tracked
(Figure 30), most respondents indicated that they are at least somewhat satisfied (17%
extremely satisfied (8/46), and 43% somewhat satisfied (20/46)); 15% indicated that they
are neither satisfied nor dissatisfied (7/46) and 15% that they are somewhat dissatisfied
(7/46). Four respondents, representing 9%, indicated that they were extremely
dissatisfied with the way they track ceded studies.
Figure 30: Satisfaction with Tracking Ceded Trials
Are you satisfied with the way you track ceded studies?
When asked whether they are planning to change the way they track ceded
studies (Figure 31), respondents were split. Nine percent reported that they were
definitely planning to change (4/45) and 24% that they will probably be changing
(11/45). About a third indicated that they might or might not (33%, 15/45), 28% that
they would probably not be changing (13/45) and 7% that they are definitely not planning
to change (3/45).
Figure 31: Plans to Change Tracking Ceded Studies
Are you planning to change the way that ceded studies are tracked in the next three
years?
4.5 Monitoring and Feedback
Respondents were asked to indicate their level of agreement regarding the effects
of single IRB review on IRB activities and effectiveness (Figure 32). A weighted average
(WA) was calculated by assigning 4 to “strongly agree”, 2 to “somewhat agree”, -2 to
“somewhat disagree” and -4 to “strongly disagree” (Table 10). The resulting total was
then divided by 44, the number of respondents to the question. Respondents
overwhelmingly agreed that new policies and procedures must be created (WA: 3.1).
More respondents agreed than disagreed that the protection of participants has remained
unchanged (WA: 0.7). A more balanced split was seen for two statements, IRB shopping
is threatening the safety of participants (WA: -0.3) and liability issues have been
resolved (WA: -0.3). Disagreement was more typical for the statements, starting a
clinical trial is easier under single IRB (WA: -0.8) and scaling to meet future single IRB
demand will be easy (WA: -1.3). The strongest disagreement was associated with the
statement, the workload for IRB staff has decreased (WA: -2.5).
Figure 32: Experience with Changes Needed for Ceded Studies
Single IRB review has been suggested to solve several challenges faced by multi-center
clinical trials. Do the following statements describe your experience with changes that
involve the use of ceded reviews?
Table 10: Weighted Results for Changes Needed for Ceded Studies

Statement | Strongly agree (4) | Somewhat agree (2) | Somewhat disagree (-2) | Strongly disagree (-4) | Average
New policies and procedures must be created | 28 | 14 | 0 | 1 | 3.1
Protection of participants has been unchanged | 7 | 14 | 6 | 3 | 0.7
Liability issues have been resolved | 2 | 6 | 10 | 4 | -0.3
IRB shopping is threatening the safety of participants | 3 | 9 | 12 | 5 | -0.3
Starting a clinical trial is easier under single IRB | 3 | 5 | 13 | 8 | -0.8
Scaling to meet future single IRB demand will be easy | 2 | 5 | 19 | 9 | -1.3
The workload for IRB staff has decreased | 2 | 2 | 10 | 25 | -2.5
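The weighted-average method described above can be sketched in a few lines. This is an illustrative reproduction of the arithmetic only, not part of the survey instrument; the two rows shown are taken from Table 10.

```python
# Illustrative recomputation of the weighted averages (WA) in Table 10.
# Weights follow the scheme described in the text: strongly agree = +4,
# somewhat agree = +2, somewhat disagree = -2, strongly disagree = -4.
# "Neither agree nor disagree" responses carry weight 0 and are omitted.

WEIGHTS = (4, 2, -2, -4)
N_RESPONDENTS = 44  # respondents to this question

def weighted_average(counts, n=N_RESPONDENTS):
    """counts: (strongly agree, somewhat agree, somewhat disagree, strongly disagree)."""
    return sum(w * c for w, c in zip(WEIGHTS, counts)) / n

# Two rows from Table 10:
print(round(weighted_average((28, 14, 0, 1)), 1))   # new policies needed -> 3.1
print(round(weighted_average((2, 2, 10, 25)), 1))   # workload decreased -> -2.5
```

Because the "neither" responses have zero weight, they affect only the denominator indirectly through the respondent count, which the text fixes at 44 for every statement.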
Respondents were asked to identify institutions that they considered to be
leaders in fostering the development of single IRB review (Table 11). Johns Hopkins
was mentioned most commonly, by seven, followed by Vanderbilt, University of Utah
and SMART IRB by four each. Washington University was mentioned twice, and the
University of Michigan and WCG IRB were each mentioned once.
Table 11: Leaders in Single IRB
Response
Do you see any institutions as leaders in fostering development of single IRB
review? Please explain.
1 Those which have taken on the role of reviewing IRBs with a larger scale, e.g. Vanderbilt, Hopkins,
University of Utah, have taken the challenge and worked to make it efficient and effective for institutions
to emulate the work that happens easily at independent IRBs.
2 Those who had been used to executing reliance agreements prior to the NIH policy had a head start. Those
with dedicated Reliance staff or personnel have had the advantage as well. We are a large enough
institution that we execute more than 400 reliance agreements each year - but what works for us at our
institution may not work for another - for a variety of reasons.
3 Utah, Vanderbilt and Johns Hopkins have been very generous in sharing their experiences as TIN sIRBs
4 Johns Hopkins University has an incredible site "start up" process that includes good, clear communication.
5 SMART IRB has developed several resources for single IRB and has involved many institutions when
creating guidance materials and tools.
6 I really like how WashU and University of Utah approach things. I am absolutely and totally surprised that
Vanderbilt seems to be chosen as the model by NIH. They won't even do HIPAA waivers! They totally
reneged on a commitment to do HIPAA waivers in a reliance agreement. They don't answer their phones.
Even for me! I'm an HRPP director! Their systems and policies seem very inflexible. I have had many of
our investigators complain about Vanderbilt and literally *beg* to not have to go there. I mean, if single
IRB review is going to work at all the single IRB is going to have to take on some responsibilities and be a
little flexible or we are going to end up with some kind of hellish double review where the IRB reviews
everything and then the site HRPP has to also review everything. How does that decrease administrative
burden. It doesn't. It makes it worse.
7 Vanderbilt, who created IREx really stepped up in the single IRB world because they are able to capture
all local context considerations electronically, versus many other institutions who have it completed on a
Word document. No other reliance system really captures the amount of information that IREx captures.
Now University of Utah uses IREx for all their studies where they serve as the reviewing IRB, so I can see
them coming up as an institution that can foster development of single IRB review. Johns Hopkins has a
very thorough method of serving as a reviewing IRB through their checklists and hand held walk through
process for their investigators through each step of the reliance process.
8 JHU, Vanderbilt, WashU
9 Johns Hopkins was an early adopter willing to share best practices.
10 Smart IRB WCG IRB
11 University of Michigan, Harvard Catalyst, SmartIRB
12 Not really. Some institutions received additional funding to set up infrastructure. I believe that some
quality is lost because local IRBs know much more about the local culture and acceptance. My institution
is in a multi-cultural area that is somewhat unique. The IRB needs to consider this difference when it sets
expectations for informed consent, use of short forms, translated consent forms, etc. They need to balance
respect for persons with the principle of justice. Nevertheless they seem to treat all geographical areas
similarly, without regard for the differences in potential research participants in a geographical area. I also
believe a lot of time is wasted when single IRB review is used for minimal risk studies.
13 JHU School of Medicine
14 Yes, there are a few that have the proper operating systems but overall most of the work is going to
commercial IRBs.
15 Not really. It seems like everyone is still struggling.
Respondents were asked to rate the effectiveness of their institution’s
implementation of the single IRB mandate, in comparison with peer institutions
(Figure 33). Half rated their implementation as “average” (50%, 23/46), 20% as
“somewhat better” (9/46) and 15% as “much better” (7/46). Only 13% thought that their
implementation was “somewhat worse” (6/46), none thought it “much worse”, and a
single respondent could not say (1/46).
Figure 33: Effectiveness of Institution's Implementation
How would you rate the effectiveness of your institution's implementation of the single
IRB mandate compared with your peers?
Respondents were asked to rate the satisfaction of different stakeholders with the
single IRB process (Figure 34). The weighted average (WA) was calculated by assigning
a weight of 4 to “extremely satisfied”, 2 to “somewhat satisfied”, -2 to “somewhat
dissatisfied” and -4 to “extremely dissatisfied”. The resulting total was then divided by
45, the number of respondents to the question, with the exception of one option related to
IRB members, which garnered only 16 responses. Respondents found IRB staff to be the
most satisfied of the stakeholders (WA: 0.93). Administration (WA: 0.84) and researchers
(WA: 0.80) followed. Most did not provide an opinion with respect to the satisfaction of
IRB members or trial participants.
Figure 34: Satisfaction of Stakeholders
Overall, how satisfied are the following groups at your institution with the single IRB
process, as reflected by the feedback you may have received?
Respondents were asked to identify if any stakeholders in the single IRB system
have been particularly affected and explain how (Table 12). Eight responses indicated
that researchers or their study coordinators have been affected negatively, whereas one
respondent said that their researchers had found it positive. IRB staff were also
mentioned, with two responses indicating a negative effect and one a positive effect; one
did not indicate whether the effect was positive or negative.
Table 12: Stakeholder Effects
Response
If any stakeholders have been particularly affected, either positively or
negatively, please explain:
Positive
1 Researchers, as well as IRB staff, have appreciated the sIRB mandate. I cannot speak to how research
participants feel about the sIRB mandate or if they would notice the difference from prior practice.
Negative
2 The workload for IRB Staff has increased and they must take into account (and remember) how individual
institutions manage review/reliance (e.g. which ones do not document reliance on the SMART platform
even when they are a participating institution).
3 The research community has complained that it is a burden to have to submit applications to two IRBs
instead of just one. They don't always understand that they still have a responsibility to notify the local
IRB of issues with the research. It is not clear to researchers or IRB staff that single IRB is saving time or
resources when local IRBs still have to maintain some level of oversight.
4 Investigators thought they would deal with just the reviewing IRB under the sIRB model - but the reality
is now they have to deal with 2. They don’t understand why they have to still submit to the local IRB to
request to use the external IRB. To them - this all seems like doubling the work, since AAHRPP
requirements mean they still come to us for a local annual review also.
5 A lot of our research teams have been negatively affected and voice their dissatisfaction regularly to me.
We have ceded industry research for a long time to commercial IRBs and that process is pretty
streamlined. Also for the NCI CIRB. However, the process for other, typically NIH-funded research, is
still a little nuts.
6 If our investigators have a choice, they will come through our local IRB instead of an external IRB. They
can't talk to anyone at NCI CIRB and they can't talk to the same person twice at WIRB.
7 All of our NIH funded PIs have had some bad experiences with reliance - takes forever and makes them
submit multiple forms to us, the reviewing IRB, etc. They think it is more work (seems like it to us as
well.)
8 PIs using single IRB are particularly unhappy and it is less efficient and harder to know how to submit
things when using multiple IRBs.
9 The study teams have to assume more responsibility when working with a single IRB while also meeting
local requirements. There is also a misconception that single IRB reduces the work of the local HRPP/IRB
staff. In almost all cases, it increases the work of staff.
10 Study Coordinators feel their workload has multiplied due to reporting responsibilities to the local
institution and the external IRB.
11 Our HRPP budgeted to have a staff member to support sIRB administration, but the position has not been
consistently staffed. This has imposed a burden on other HRPP staff.
12 The IRB staff has been impacted the most.
13 For exempt studies, researchers are dissatisfied that most institutions will not enter into a reliance
agreement
14 As stated above, I believe local context does not receive the same consideration when non-local review is
used.
Neutral
15 The whole research institution has been affected and it's not clear the level of understanding that each
person has about single IRB review. In my experience, not many people have that much experience until
they are faced with having to deal with it and then they have taken the time to understand and learn it. I
would not necessarily say the whole thing is positive or negative, because it is different and we are
evolving to better understand single IRB review, what it means, who is doing what, etc.
Respondents were asked to rate their institution’s preparedness for the single IRB
mandate (Figure 35). Over two thirds of respondents felt that their institution was at least
somewhat prepared for the change, with 9% reporting “completely prepared” (4/46), 40%
reporting “mostly prepared” (17/46), and 22% being “somewhat prepared” (10/46).
Thirteen percent of respondents reported that they were “mostly not prepared” (6/46), an
additional 13% felt that the institution was “not at all prepared” (6/46) and three
respondents could not say.
Figure 35: Preparation for IRB Mandate
Looking back to when the single IRB mandate from NIH went into force in 2018, was
your institution prepared for the change?
Respondents were asked to share lessons learned for an institution starting to
participate in ceding reviews (Table 13).
Table 13: Lessons Learned for Relying Institutions
Response
Can you share any lessons learned for an institution that is still at an early
stage of participating in ceded reviews?
1 A lot of hand holding is needed for researchers and their teams. Despite putting so much information on
our website about single IRB review, daily I get the same questions over and over again. Everyone seems
to be afraid of it and they need to talk to someone directly about their study. So you have to have a lot of
patience and be ready to repeat yourself. Second, your institution should be clear about what information
you need to collect when you are relying and when you will require an amendment, continuing review,
or reportable event. Third, train all your staff on single IRB review, whether that will be their primary job
or not, because eventually it may affect all the IRB staff. Determine if your site will rely on other
institutions who made an exempt determination or if you will conduct your own review.
2 Educating the research community and IRB staff is an important piece to the successful implementation at
any IRB.
3 Be sure to clearly outline responsibilities for the reviewing IRB and the local site and have clear
workflows established for the process and plans for communication.
4 It is often much more work to set up a reliance than to review in-house. Unless you are required to do so,
be selective about which studies you enter into agreements for.
5 Ensure that the administration understands that there will be a shift in the workload and not a reduction. It
may be necessary to dedicate some personnel to the process so that other review activities are not
interrupted.
6 It is much easier to cede review than it is to serve as the reviewing IRB where your institution is
responsible for working with all of the ceding institutions. We were much more willing to cede review
than to serve as the reviewing IRB until we adopted an electronic system that could manage the
documentation from relying sites.
7 Be flexible. Find personnel that are full time or mostly full time dedicated to building the sIRB ceded
review process for the site. Understand that every institution has come up with their own process to
manage single IRB when they are on the hook for the review. What you will be asked to do/provide for
reviewing IRB A will likely not be the same as the next one. They are all different and you need to be ok
with that and simply ask "we are relying on your IRB, what would you like from us?" Don't expect the
process to be the same each time. You might get used to working with IRBs that your institution
collaborates a lot with - but for the most part, it is reinventing the wheel each time.
8 As part of the AAHRPP reaccreditation process, it was recommended that our institution adopt the
SmartIRB framework as closely as possible. Having these standardized documents that are readily
accepted by many other institutions has been very helpful.
9 Good to talk to the other IRB early on to understand expectations (for example, will a separate local
context form be required, template consent forms, does the IRB serve as a HIPAA privacy board, etc.)
The SMART IRB implementation checklist is a helpful tool for this.
10 1) Know what your local responsibilities are, 2) Be prepared to answer a lot of questions about the process
from the research community, 3) Be familiar with SMART IRB and their tools and resources to save time
creating your own, 4) Ask questions of the reviewing IRB about their process and their responsibilities, 5)
Create a process for submitting and reviewing ceded applications, 6) Become familiar with reliance
agreements
11 Remain flexible, open and collaborative. Take what help, guidance and suggestions come from colleague
institutions that have progressed further in the usage. Remember that agreeing to rely is just that and don't
try to enforce unproductive encumbrances. Establish a workflow and SOPs to help guide staff, have
resources for the study teams that help with successful implementation of this process.
12 Have enough resources to hold the PIs' hands through the whole thing. And do not assume they know their
responsibilities for updating both the local and reviewing IRB offices.
13 Be flexible!
14 Communication between the designated IRB and the site IRB is essential. Investigators are not a good
source of information. Communication must be between IRBs.
15 Seek to hire someone with experience with reliance agreements and managing multiple sites. It can all be
learned but if you can get a dedicated person with experience, the process will be much smoother.
16 Develop clear procedures for researchers for initial and post approval submissions to the local IRB.
Develop an easy way to track which studies rely on an external IRB. Learn to evaluate reliance
agreements.
17 Create documentation - types of studies you will allow reliances for (e.g., full board, expedited, exempt,
greater than min risk vs no greater, FDA regulated studies vs non-FDA regulated studies, etc.) Create
expectations for training documentation, COI review/processes, intake forms (e.g., reliance application),
reliance agreements
18 First, determine if the sIRB is required and if it is in the best interest of all parties (subjects, investigators,
IRB staff). The sIRB is much more valuable for some studies than others. For example, multi-site clinical
trials work wonderfully under a sIRB, whereas a two-site trial where the sIRB is not mandated is best to
stay with local review. The sIRB model tends to be more work for these. If the sIRB is required due to
funding, reach out early to the reviewing IRB to start a line of communication. Prepare start-up education
materials for site investigators and instructions for the process/expectations to allow for consistency in
each ceded study. Track the studies, track expiration dates. Stay on top of expired studies to make sure
you receive renewal documentation and are able to honor your reliance agreement responsibilities. Visit
SMART IRB resources and edit documents to work for your institution (i.e., communication plan, checklists).
Google is your friend. Do not reinvent the wheel. Many large institutions have created wonderful
documents that they are willing to share with smaller institutions with fewer resources.
19 If the study involves only minimal risk, work with investigators to see if the protocol can be revised so it is
exempt from the IRB oversight requirement. Review the IRB approved study documents to ensure they
are consistent with your institution's requirements and federal regulations.
20 Ensure your submission system facilitates the process.
Last, respondents were asked if they could provide any lessons learned for
institutions serving as an IRB of record (Table 14).
Table 14: Lessons Learned for IRBs of Record
Response
Can you share any lessons learned for an institution serving as the IRB of
record?
1 Local considerations must be collected from all relying sites because this is where you as the reviewing
IRB will be able to collect information that is important for IRB review. For example, institution X may
have certain restrictions for recruitment and this information would be communicated through this
document. This document should be reviewed by the IRB review staff and chairs as each site is added to
make sure expedited review is appropriate and does not require full board approval. Train your staff in
what each document means, what is needed in order to approve the study/amendment, and encourage
communication with the review staff and the reliance team, if there is one. A lot of the IRB staff do not
want to take ownership of the reliance process, so it's important that they do to ensure adequate review
of each site. The protocols of multi-site investigator initiated studies need to address standard of care at
each relying site to ensure the relying sites are not doing something against their standard of care practice
that could cause issues further down the road.
2 Be mindful of what you are including in your approval letters. These letters are critical in conveying
information to sites that are considering relying on your IRB. Is the study minimal risk or GTMR? What
are the regulatory findings or other determinations? Did you approve waivers or alterations? This
information helps drive the relying sites local reviews, and we are often needing to ask instead of
obtaining the info from the letter. For example, we don't require our institutional injury language if the
reviewing IRB has determined the project presents minimal risk -- often we can't tell that from their
approval letter and have to ask. When we are the reviewing IRB we also include a bulleted list of sites we
have activated/onboarded/approved, that grows as sites are added. It should always be clear who is
approved and when. Not just naming the site on the letter associated with the submission when the site
was approved, but on every subsequent letter - to ensure 'this mod applies to these relying sites' etc.
3 Be sure to clearly outline responsibilities for the reviewing IRB and the local site and have clear
workflows established for the process and plans for communication.
4 Be sure to keep communication channels open with the other institutions.
5 Ensure that all involved understand their obligations/responsibilities. Ensure that there is a clear
communication plan for reporting.
6 It is important to learn the bounds of your IRB software for managing the local context for ceding
institutions. Most IRB software used by institutions is not "built" to accommodate multi-site IRB
oversight so it is important to know how to creatively use the functions that are available to achieve
effective oversight for multiple external institutions.
7 Our IRB has served as a single IRB for many years and has large coordinating centers that do this work;
however, sIRB has prompted many new teams to take on this role. As new teams start doing single IRB
work, we have realized that our training materials could be strengthened for our local study teams. Many
are new to this work and don't understand the process or what is required from a resource perspective.
8 1) Obtain institutional support so that all departments affected can support the single IRB requirements, 2)
If possible, create processes that align within the institution for single IRB requirements, 3) Create internal
IRB processes early on
9 Ensure lead sites know their responsibilities and are effective communicators with their colleagues, other
PIs on the study.
10 Have plenty of resources and do not assume the ceding institution knows what to do. Never trust the
relying PIs to know what to do. They probably haven't even told their institution they are engaged in a
collaborative project.
11 Find a good tracking system for documenting reliance so you're not volleying emails back and forth and
losing track. Educate your board members each time a Single IRB study goes to board about what their
review will actually cover. Document EVERYTHING.
12 Communication between the designated IRB and the site IRB is essential. Investigators are not a good
source of information. Communication must be between IRBs.
13 Set expectations for when your institution will/will not serve as the IRB of record and inform your
investigators. Collect information up front, i.e., work with your SPO team at proposal stage to find out
when you are identified as the IRB of record, rather than at award stage, as it may already be too late to
negotiate which IRB is appropriate to serve in that capacity.
14 Start-up documentation is extremely important. Create process and guidance for collecting local context.
Educate the IRB staff that will be handling the review and discuss best practices for your institution (how
will consent language be altered, etc.). Conduct a start-up meeting with the local PI and site PIs if possible,
so everyone is on the same page and understands expectations. A little communication goes a LONG way.
Make sure you have IRB contacts at each site.
15 Map out the process. Consider each multi-center trial under your institution's oversight a "project," and
use project management principles to keep the review process moving forward. Keep the mission in mind
- not just the process, to ensure the review actually considers the participants who may enroll in the study.
16 Stress early on with your PI the importance of having a clear communication plan in place and require this
to include who is responsible for submitting what to whom and who the IRB will/will not directly
communicate with, etc. Getting your PI/study team to understand why they need to obtain and provide
information on each site specifically for certain things requires training and patience (like, since there is no
national standard of care - explain how this disease/condition is routinely treated at each enrolling site so
the IRB can determine how this study differs from standard care at each site for which it is the Reviewing
IRB). Create a meaningful local context form for the local site and for the local HRPP to complete. This
itself can be a challenge as there is no standardization of what a local context review consists of... also,
understand that asking another HRPP to review for any other applicable laws/regulations or standards that
are triggered and how those are being addressed by the site requires the site to not pay attention to the easy
stuff that it knows best and to instead spend time with the protocol, etc. to identify the harder issues that
may be at play. This requires very knowledgeable folks and a lot of time.
17 Don't do it unless you have to!
CHAPTER 5: DISCUSSION
5.1 Methodological Considerations
5.1.1 Delimitations
Clinical research oversight is a responsibility of many. However, the current
research was directed toward understanding one specific aspect of that work, the
oversight of the IRBs, and within that aspect, the introduction of a new approach based
on the use of central IRBs. This delimitation was important because it allowed a deeper
analysis of a relatively new and disruptive way to manage clinical trials. The individuals
closest to the implementation of these changes are the personnel associated with IRBs, so
it seemed important to look first at those experiences and views to understand the
logistical challenges involved with being or interacting with a central IRB. Thus, the
respondent pool was delimited quite narrowly to IRB managers and directors. Because
this research was intended to provide a formative assessment of IRB implementation
strategies, broadening the participant pool would not serve that interest. Many of the
questions asked here could only be answered by IRB management. If the participant pool
were to be widened, the questions would have had to be more general, and this would
sacrifice the depth of the collected insights relevant to issues of interest. Nonetheless,
delimiting the study to IRB managers has its cost. Perspectives from IRB members, the
study staff, sponsors, or federal regulators (e.g., FDA or OHRP) were not solicited so the
research will give only a partial picture of the effects of the Single IRB mandate. Further
research will be needed to add these missing perspectives and to develop a more complete
picture.
A further delimitation was that related to geography. This research focused only
on the HRPPs in the United States. However, protection programs for human subjects
exist in most countries in some form or another, and they typically assign responsibilities
for oversight to some form of ethics board. The term IRB is used for ethics boards in
the US (OPRR, 1991). Ethics boards in most other countries follow similar ethical
principles enunciated in the Declaration of Helsinki and expanded in the ICH guidelines.
However, the organization and local regulations governing the ethics boards often differ,
particularly with respect to sharing of responsibilities through some form of centralized
system (WHO, 2011). To engage participants from ethics boards outside of the US
would risk confounding the data because the law and regulatory agreements in other
countries will not be the same. In the future, an analysis of international HRPPs and their
response to a single IRB model could be very interesting and instructive but was out of
scope for this exploratory research.
5.1.2 Limitations
The research conducted here depended on using an electronic survey sent by
email to prospective participants. An electronically distributed approach now appears to
be a preferred method for such research; it is considered efficient, has a potentially broad
reach, and overcomes confidentiality concerns if data collection is anonymized (Sue and
Ritter, 2012). Periodic reminders to those who were invited to take the survey were also
easy to implement, to help boost participation (Sauermann and Roach, 2013). Results
reported here had a response rate of 36% (62/171), a rate that appears typical for
web-based surveys (Nulty, 2008) and higher than normal for more detailed online surveys
(Sauermann and Roach, 2013).
When only some of the invited participants take part in a survey, it is important to
consider the survey’s “representational validity”, the degree to which the sample of
individuals who complete the survey give adequate representation of the population at
large (Orcher, 2016). “Non-response bias” is understood as a systematic difference
between people who respond to a survey and those who do not (Rogelberg and Stanton,
2007). Distortions related to such factors can call into question the representativeness of
the collected data. Accordingly, researchers must carefully consider the factors that have
affected the selection of their target population and must take care to mitigate bias to the
extent possible (Halbesleben and Whitman, 2013). Importantly, however, recent
meta-analyses of survey response data have failed to implicate response rate by itself as a
significant cause of non-response bias (Groves and Peytcheva, 2008, Hendra and Hill,
2019). Non-responders tend to hold similar views to responders unless demographic or
other factors affect the way that responses are collected (Rogelberg et al., 2003).
Nevertheless, it is important to select a range of appropriate persons to represent all types
of IRBs that are affected by the single IRB mandate.
Thus, considerable effort was made to assure representation from a variety of
different organizations from different areas of the United States. The respondent profile
showing that most respondents were IRB directors suggested that I was successful in
targeting the people best able to give a detailed and thoughtful account of how their
human research protection programs have been affected by the Single IRB mandate.
Further, they represented a mix of institutions: a 5:4 ratio of universities with medical
centers versus universities without medical centers and a further 8% from medical centers
without a university association. These distributions strengthened my belief that the
respondents were sufficiently diverse to provide relatively representative data. Further,
the relative homogeneity of views suggests that sufficient saturation has been reached for
the purposes of identifying trends and suggesting further courses of study.
Surveys in general are challenged by two competing factors that threaten
completion, the need to keep the survey short in order to avoid “survey fatigue” (Herzog
and Bachman, 1981), versus the need to gain as much information as possible from a
highly experienced respondent sample. The survey in this study was relatively long, with
38 questions, many of them open-ended, yet most respondents were able to complete it,
as reflected by a completion rate of 77%. This completion rate is similar to rates reported
elsewhere (Liu and Wronski, 2018). Nevertheless, it is well-documented that long
questionnaires can affect the completion or quality of responses (Callegaro et al., 2015,
Galesic and Bosnjak, 2009). Thus, it is important to consider the nature and flow of the
survey to be sure that the questions are well-structured and appropriate. A focus group
was used to evaluate questions, optimize survey flow and comprehensibility, and to point
out possible biases. The survey was improved by the insight and clarity that its members
were able to provide.
Another way to understand if participants continue to engage in later parts of a
survey and are not choosing answers without thought is to look at more complex multiple
choice questions to see if there is evidence for “straight-lining”, an effect in which the
same answer is given for all of the multiple items in a set (Herzog and Bachman, 1981).
In the 32nd question of this survey, for example, respondents were asked to rate their
agreement with several statements about their experience with ceding review. Responses
for a single respondent tended to vary across statements, from most strongly agreeing to
most strongly disagreeing. This observation suggested that straight-lining was not a
major problem. Another indicator of fatigue identified elsewhere is the gradual
shortening of responses in open-ended comment fields as the respondent progresses
through the survey (Galesic and Bosnjak, 2009). Such a problem was manifestly not
significant in this survey, because text responses at the end of the survey were both
numerous and robust. These observations, including the insightful and helpful text
responses near the end, suggest that the participants may have felt that the topic was
important enough to engage thoughtfully in the survey to its end.
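The straight-lining check described above is simple to operationalize. The sketch below is a minimal illustration, not the analysis code used in this research: a respondent's answers across a multi-item battery are flagged when every item received the identical response.

```python
def is_straight_lined(responses):
    """Flag a respondent who gave the identical answer to every item in a
    multi-item battery, one common indicator of inattentive responding."""
    return len(set(responses)) == 1

# A respondent whose ratings vary across statements is not flagged:
print(is_straight_lined(["strongly agree", "disagree", "neutral"]))  # False
# A respondent who chose the same option for every statement is:
print(is_straight_lined(["agree", "agree", "agree"]))                # True
```

In practice such a flag would be weighed alongside other quality indicators, since a respondent with genuinely uniform opinions can produce identical answers legitimately.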
5.2 Consideration of Results
The research presented here is multifaceted. Thus, the CIPP framework proved to
be a helpful tool to structure the research systematically. This framework can also be
used to divide the discussion below into different themes according to the framework’s
key components: implementation strategies (5.2.1), interim outcomes (5.2.2), and
needs/problems (5.2.3) (Stufflebeam and Zhang, 2017).
5.2.1 Process and Implementation Strategies
Single IRBs have been used widely for only a few years, so the modest literature
related to single IRBs has mostly focused on implementation processes and strategies.
Thus, the research here provides an interim examination of the current progress toward
integrating the single IRB system as part of the “way of doing business”. Of interest,
then, is the experience the IRBs have gained with various steps in the development and
use of the relationships between the lead and subordinate IRBs.
One of the first activities in setting up a relationship with a lead IRB is the
development of an effective reliance agreement. This has been considered by some as a
challenging activity to manage (Lidz et al., 2018). A few years before the present study
was conducted, Burr and his colleagues conducted an in-depth case study to characterize
the experience with this process at the University of Utah. They identified that it took up
to three years and a large amount of effort to secure its reliance agreements with 10
participating sites (Burr et al., 2019). Some challenges to a timely conclusion included
delays in legal review and in the negotiation of changes with the individual sites. This
type of experience was not unique to the University of Utah. A 2018 survey of over 100
institutions by Resnik and colleagues showed that 71% of respondents identified this as
an issue, putting negotiations at the top of the list of potential issues (Resnik et al.,
2018a). However, the experiences that IRBs have obtained in the last few years may
have given them more confidence with such negotiations. In this research, less than 10%
of respondents were dissatisfied with their reliance agreements. Only a few mentioned
the time that it takes to negotiate reliance agreements as a factor in extending the review
period or commented about the negotiation process as a place in which the single IRB
process could be improved. At the same time, however, the task of managing reliance
agreements was still identified as important for most respondents. Some suggested that a
more standardized reliance agreement would facilitate implementation.
Perhaps the clearest indicator of the shift from creating to managing reliance
agreements is the now-prevalent use of the SMART IRB platform. SMART IRB was
created to address the problem of negotiating reliance agreements between institutions by
making a master agreement that would hold between any participating institutions (Cobb
et al., 2019). It is perhaps the closest approach to the implementation of a standardized
reliance agreement. Respondents in this research overwhelmingly chose the SMART
IRB platform when asked about the methods they used to structure their reliance
agreements as lead IRB. Further, institutions that are ceding their reviews do so most
often to institutions using the SMART IRB template. In 2019, Burr et al. suggested that
the “use of emerging, standardized reliance agreements, such as those provided by
SMART IRB may help to improve efficiency with this process [of negotiating reliance
agreements]” (Burr et al., 2019). Both the common use of SMART IRB and the
diminished concern with negotiating reliance agreements by respondents in this survey
suggest that their prediction was correct. Respondents in this research acknowledged the
benefit of SMART IRB, though they also suggested that improvements could be made to
reduce inconsistencies associated with implementation of local processes and
responsibilities.
The important role played by "local context" is another consideration advanced by
some, but not all, of those who have previously discussed the hurdles to
single IRB implementation. Henrikson and colleagues (Henrikson et al., 2019)
emphasized the importance of considering local context for the success of single IRB
reviews, when they investigated the needs of researchers and IRB professionals in
multisite genomics studies with a single IRB review. Specifically, they found that several
of the 13 respondents in their survey had concerns regarding the reviewing IRB’s ability to
negotiate differences in state/local laws and regulations. In contrast, Klitzman and
colleagues surveyed over 100 individuals involved in IRB activities and found different
views (Klitzman et al., 2019). Many respondents of that study believed that local context
concerns were “overblown” or “usually inconsequential”. Nevertheless, both Klitzman
and Henrikson concluded that additional questions or communications about local
knowledge could be helpful to educate the central IRB about local concerns when they
conduct their reviews, although the two investigators differed in the areas to which they
drew attention. Klitzman et al. focused on the usefulness of cultural, geographic,
institution-specific, and researcher-specific information (Klitzman et al., 2019) whereas
Henrikson et al. considered local regulatory and legal concerns (Henrikson et al., 2019).
The current research did not distinguish between types of local knowledge, so
respondents who rated understanding local context as very important may have different
interpretations about what this term meant. Issues related to local context were
surprisingly absent when respondents were asked how reliance agreements could be
improved, even though such comments might have been expected given the prior
research. The only place where issues regarding local context could be found was from
one respondent from an IRB of record who was asked about lessons learned. That
respondent commented that: “Local considerations must be collected from all relying
sites because this is where you as the reviewing IRB will be able to collect information
that is important for IRB review”. Given that no concerns or deficiencies with the current
system of obtaining local knowledge are mentioned, more research might be needed to
deepen the exploration by asking a few more pointed questions to IRB staff.
Views on liability were also notably mixed. Determining who is
responsible for mistakes, misconduct, or participant injury had been a sticking point for
single IRB review ever since multisite studies began. Mello and colleagues described the
concern over liability as an important issue influencing the use of single IRBs about 20
years ago, when the use of such reliance relationships was optional (Mello et al., 2003).
Concerns about liability were echoed by Loh and Meyer as an important factor in the
decision not to participate in single IRB review. In that work, participants from 67
medical schools that refused to participate in single IRB review were asked why they did
not participate (Loh and Meyer, 2004). Of these respondents, 31% strongly agreed and a
further 43% agreed with the statement that the IRB did not participate because of
concerns with liability. At the time of research, in 2003, most of the medical schools had
never participated in single IRB review (Loh and Meyer, 2004). Nonetheless, the
passage of time appeared to have had little effect on the extent of this concern. As
recently as 2018, about two-thirds of surveyed HRPP officials chose legal liability as a
potential problem with single IRB review (Resnik et al., 2018a).
However, the last results presented above were collected nearly 5 years ago.
Since the NIH policy went into effect in 2019, the concern over potential liability may
have been mitigated by the reality of conducting single IRB studies. Respondents in the
current research, all of whom now participate in single IRB review, did not feel strongly
enough about the issue of liability to mention it in any of the free text comment fields.
Further, they had decidedly mixed responses when asked directly whether liability issues
have been resolved, as reflected by a weighted average of -0.03. Such a score reflects the
fact that about as many respondents agreed as disagreed with the statement and many had
neutral opinions. The results may suggest that concerns about liability have diminished
over time, perhaps in part because the NIH, FDA, and OHRP have provided policy and
guidance to clarify the roles and responsibilities between the reviewing IRB and the
relying IRBs (NIH, 2016). The final NIH policy, the FDA guidance from 2006, and the
OHRP reliance template from 2011 all made clearer which parties should be accountable
for specific aspects of the review and conduct of the study (FDA, 2006, OHRP, 2011).
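The weighted average cited above lands near zero when positive and negative response codes offset one another. As a purely illustrative sketch (the survey instrument's exact coding is not reproduced in this text), a five-point agreement scale can be coded from -2 for "strongly disagree" to +2 for "strongly agree", making the statistic the simple mean of the coded responses:

```python
# Assumed coding for a five-point agreement scale; the survey's actual
# coding is not given in the text, so this mapping is illustrative only.
SCALE = {
    "strongly disagree": -2,
    "disagree": -1,
    "neutral": 0,
    "agree": 1,
    "strongly agree": 2,
}

def weighted_average(responses):
    """Mean of the coded responses; values near zero indicate that
    agreement and disagreement roughly balance out."""
    codes = [SCALE[r] for r in responses]
    return sum(codes) / len(codes)

# A roughly balanced, mostly neutral response set yields a value near zero:
sample = (["agree"] * 10 + ["disagree"] * 11 + ["neutral"] * 10
          + ["strongly agree"] * 2 + ["strongly disagree"] * 2)
print(round(weighted_average(sample), 2))  # -0.03
```

Under this coding, a score of -0.03 is consistent with the interpretation in the text: roughly as many respondents agreed as disagreed, with many neutral.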
The tasks assigned to IRBs/HRPPs covered by reliance arrangements are not
restricted to examinations of submitted experimental protocols and plans. HRPPs are
increasingly called on to perform or at least verify that other institutional requirements
are met by the research. O’Rourke and colleagues discussed how HRPPs commonly
serve as gatekeepers at institutions for more than human subjects’ protection (O’Rourke
et al., 2015). Such tasks include ensuring conflicts of interest are managed, providing
HIPAA determinations, and ensuring that local study teams have required training. They
have not been alone in that assessment. Diamond et al. also observed that many of those
requirements must be carried out locally. This requires subordinate HRPPs to remain
involved with research projects even though they are no longer required to evaluate
compliance with human subjects’ regulations (Diamond et al., 2019). Local HRPPs are
still tasked with tracking trials and are frequently required to ensure that other required
institutional approvals or notifications occur and remain current. In most cases, this
means that the reliant IRBs will still require abbreviated applications that will provide
them with study information such as approval letters and the ability to track and enforce
other institutionally-based requirements. The current survey thus confirms the position of
the HRPP as gatekeeper. When asked about the likelihood of tasks being assigned to
their institution, the top four most likely tasks chosen by respondents here were the same
as those mentioned by the literature: ensuring conflict of interest review is complete and
current; local study team training; cost and injury language correctness on consent forms;
and ensuring Biosafety and Radiation Safety reviews are complete and current.
While it is clear that respondents believed that the HRPP retains several
responsibilities even after the human subjects’ regulations have been reviewed elsewhere,
one worry highlighted by previous investigators has been related to inconsistencies in the
way that duties are divided between the lead and ceding HRPPs (Diamond et al., 2019).
Additional research conducted with 34 stakeholders, including 10 IRB representatives,
confirmed that this concern is ongoing and suggested to the FDA that it develop a
matrix to illustrate the roles and responsibilities to increase consistency (Corneli et al.,
2021). This concern is also apparent in the results reported here. Several tasks, such as
HIPAA determinations, conducting audits and reviewing adverse events, were reported to
be divided between the IRBs in a more idiosyncratic way. Further, several respondents
stressed the importance of establishing more clearly the responsibilities of the local IRB
in their free text comments. This suggests that variation still exists in the duties of
relying/ceding institutions so care should be taken in those partnerships to ensure that
local IRBs know what is expected of them for each research project.
5.2.2 Interim Implementation Outcomes
Now that many Human Research Protection Programs in the United States have
participated in some form of single IRB review, it may be possible to assess the current
state of this evolving system as reflected by certain interim outcomes. One common
metric is the prevalence of its use. In this study, nearly all survey respondents, 97%, had
participated in single IRB review. This is quite a contrast from the survey of Loh and
Meyer in 2003, when only 24% of medical schools had used a central IRB (Loh and
Meyer, 2004). Two factors seem to have made all the difference. First was the National
Cancer Institute’s change in policy related to CIRBs in 2013, which switched from
facilitated to mandatory centralized reviews. By 2016, 96% of institutions enrolling NCI
studies were found to employ the single IRB model (Massett et al., 2018). Second, other
US organizations mandating single IRB review have forced further expansion. The NIH
mandate forcing NIH-funded multisite clinical trials to use the single IRB model (unless
prohibited by law) went into effect in 2018 and the revised common rule forcing
cooperative research to use the single IRB model (except when prohibited) went into
effect in 2020. The expansions in use make it easier to draw some conclusions about
whether IRBs have achieved certain outcomes that were predicted several years ago,
when the use was still in its infancy.
One key benefit that was predicted for single IRB review was that it would reduce
“wasted time and effort” at the IRB (Menikoff, 2010). This presumptive benefit helped
to drive the decision of the NIH to mandate single IRB review (Wolinetz and Collins,
2017) and probably also the regulatory and policy decisions related to single IRB review
made by other agencies. However, several researchers in the past have questioned
whether such expected improvements are realistic in light of the multiple roles that the
HRPPs must play (O’Rourke et al., 2015). More recent research appears to support that
more pessimistic view. For example, Diamond and his colleagues in 2019 found that
review times for 18 sites using a single IRB model were somewhat faster, but time to
study initiation was slower (Diamond et al., 2019). They identified that the multifaceted
role of the HRPP contributed to the delays. The research presented here is consistent
with their conclusions. If time and effort have been saved, this research suggests that it
does not seem to happen at the level of the HRPP. Instead, respondents overwhelmingly
identified that review times have not become shorter; 22% indicated that review times are
longer and 58% indicated that reviews have stayed the same. The most common factors
implicated in the delays were challenges related to communication and coordination
between sites. Further, more than half of respondents indicated that they required
additional staff – far from the expectation that effort would be saved by the introduction
of single IRB reviews.
5.2.3 Implementation Needs and Problems
It seems clear from the extent and level of satisfaction with single IRB review that
the system is here to stay. However, this does not mean that the system is free of
problems. A survey such as this can have a positive role to identify where challenges
exist so that those challenges can be addressed. In this study, at least five areas can be
identified in which respondents have issues.
5.2.3.1 Assuring that HRPP staff are capable
The efficient conduct of IRB activities depends on its people. At least two
challenges were obvious in this regard from the present research. First was that
associated with the training of HRPP personnel. When respondents were asked to
prioritize the resources that they would need to expand the HRPP’s capacity for single
IRB studies, they ranked additional training resources for staff first. Their responses
align well with the views of most respondents studied by Resnik et al. in 2018, that a key
need to meet the IRB mandate was to train the staff (Resnik et al., 2018a). Second was
obtaining useful training materials. While many of the current training resources were
rated highly, comments also suggested that both the training materials for the HRPP staff
and the training methods used to help them could be improved.
5.2.3.2 Creating consistent policies and expectations
Comments also underscored the importance of clear and consistent policies and
expectations. When several trials at the same institution are handled differently,
expectations on the HRPP or study teams are also likely to vary, and the differences can
be difficult to manage. One respondent summed it up quite succinctly:
The regulations/guidance on what the practice should entail is very broad.
Every institution we work with has different forms, requirements,
expectations- there should be a more standardized process. It makes it
difficult to anticipate how long and detailed the process will be. SMART
IRB helps. [Table 6, Response 1]
The comments also made clear that inconsistent implementations of single IRB
review make it difficult to train their staff or manage expectations from their
investigators. Respondents suggested that written SOPs would be useful and that the
HRPPs “create processes that align within the institution for single IRB requirements”.
A related theme that appears repeatedly in this research is the need for
inter-institutional consistency in policies and expectations. SMART IRB recognized this need
and created a committee to harmonize single IRB review implementation between
institutions (Hahn et al., 2019). However, as a respondent in the current research
identified:
The SMART IRB harmonization committee continues to work on trying to
wrangle all actors onto the same page, but we are clearly not there yet -
and it is very clear that once you have had one study go through one IRB
that is all you have done... you haven't learned "the process" because it
varies with each IRB and study team. [Table 8, Response 1]
The issue of inter-IRB/HRPP consistency came up with several respondents:
“Make the process the same across institutions or get rid of it altogether”, “You might get
used to working with IRBs that your institution collaborates a lot with - but for the most
part, it is reinventing the wheel each time.” This is perhaps best illustrated, however
unintentionally, by another respondent, who said, “Training other institutions to use our
system is still a need.” Ironically, I would say the fact that there is an “our system”,
distinct from others, is precisely the problem. It is conceivable that some of the
consistency, communication, and coordination issues that have increased the burden on
HRPPs will resolve over time, especially if they are specifically targeted with better
SOPs and guidance. This is perhaps the most critical area where research to drive
effective policy making is needed.
5.2.3.3 Educating clinical site personnel
When respondents were asked to rank the challenges their institution has faced in
expanding the capacity for single IRB studies, the top ranked challenge was clearly
educating investigators and research staff, with a weighted score of 77. In comparison,
educating the HRPP staff was ranked 5th, with a weighted score of 23. One respondent
reflected,
…as new teams start doing single IRB work, we have realized that our
training materials could be strengthened for our local study teams. Many
are new to this work and don't understand the process or what is required
from a resource perspective. [Table 14, Response 7]
Another mentioned that “a lot of hand holding is needed for researchers and their
teams”. We can speculate that it is more challenging to educate investigators because
their numbers are much larger, they are involved less frequently in single IRB studies,
and they do not report directly to the HRPP. Literature discussions on education of
clinical site staff are remarkably absent. Burr et al. mention that there is “a steep learning
curve on the part of both the participating sites and their IRBs” (Burr et al., 2019), but
little specific discussion on how to prepare study teams for single IRB is available. There
are SMART IRB materials targeted toward clinical staff (Cobb et al., 2019), but, again,
little discussion in the literature about how to ensure training is carried out and is
effective. Ultimately, perhaps a narrow focus on the IRB/HRPP has kept this issue in the
blurred background. Understanding the needs and effective methods for educating study
staff could be a fruitful avenue for future research.
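The weighted scores used above to rank implementation challenges can be computed as a simple tally over ranked-choice responses. The sketch below illustrates one common scheme, in which a first-place rank earns three points, a second-place rank two, and a third-place rank one; the point scheme and the sample data are assumptions for illustration only, not the survey's actual weighting.

```python
# Weighted-score tally for a rank-up-to-three survey question.
# Assumption (hypothetical): rank 1 = 3 points, rank 2 = 2 points, rank 3 = 1 point.
from collections import Counter

POINTS = {1: 3, 2: 2, 3: 1}

def weighted_scores(responses):
    """responses: list of dicts mapping challenge name -> rank (1-3).
    Returns a dict of total weighted score per challenge."""
    totals = Counter()
    for response in responses:
        for challenge, rank in response.items():
            # Unranked or out-of-range choices contribute nothing.
            totals[challenge] += POINTS.get(rank, 0)
    return dict(totals)

# Hypothetical example: three respondents ranking two challenges.
sample = [
    {"Educating investigators": 1, "Adjusting documentation systems": 2},
    {"Educating investigators": 1},
    {"Adjusting documentation systems": 1, "Educating investigators": 3},
]
print(weighted_scores(sample))
```

A higher tally under such a scheme reflects both how often a challenge was chosen and how highly it was ranked, which is why a score of 77 versus 23 indicates a clear separation between challenges.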
5.2.3.4 Assuring a sufficient number of staff
Contrary to the idea that single IRB review would relieve workload on the HRPP,
the reduction in the number of studies seen by the full IRB committee is countered
by the increase in administrative workload. This may explain in part why more than half
of the respondents in this study indicated that they needed additional staff to expand their
HRPP’s capacity for single IRB studies. This is an area that warrants further study, not
only to understand better how HRPP staff workload has shifted, but also to examine the
amount of work for which the staff are responsible.
5.2.3.5 Altering documentation systems
A fifth need emerging from this research is technical – addressing the challenges
of supporting single IRB studies with legacy documentation systems used traditionally
for studies over which the HRPP had sole control. Educating investigators was the top
ranked implementation challenge, but the challenge ranked second was the need to adjust
documentation systems (weighted score of 50). Electronic IRB systems have been created to satisfy
many operational needs. They must capture study information and at the same time
support business processes that involve putting protocols to be reviewed on an IRB
agenda, facilitating comments and change requests, and capturing the results of
committee meetings. Very few of these activities or processes are the same with single
IRB review. Thus, it is not surprising that adjusting documentation systems has proven
difficult. Given that 80% of respondents used an electronic system to manage ceded
trials, it seems clear why adjusting those systems to accommodate ceded studies was
deemed significant. There has been little research into the cost or requirements for
changing the systems the HRPP uses. The closest is at best orthogonal: a case study on
the effectiveness of a shared platform for single IRB studies was published in 2016 and
discusses harmonizing the IRB processes and data between five IRBs in South Carolina
(Obeid et al., 2016). The expected cost and how best to alter HRPP systems, both to
track trials effectively and to perhaps relieve some documentary burden on the
HRPP/study teams, looks to be an important topic for future research.
5.2.4 Future Directions
The research presented here addresses a significant change in the systems that we
use to safeguard the rights and safety of human subjects in American clinical trials.
However, it examines only one part of that system, reflected in the experiences and views
of those working directly in the IRBs and associated departments of the HRPP. Through
their eyes we see a system that has been coping with shifting operational burdens and
new challenges, not all of which have been managed seamlessly.
However, several other stakeholders are involved in the development and
management of the single IRB system. Any conclusions on the state of the new system
will only be complete if future research seeks to understand the views of these
stakeholders. For example, one of the most compelling reasons articulated in favor of
single IRB review is its predicted benefit to sponsors who depend on the timely and
cost-effective conduct of the trials. Multicenter trials in particular are difficult,
time-consuming, and expensive. When multiple IRBs must review the same protocol,
sponsors must often deal with a tangle of suggested changes from different local IRBs,
some of which may be mutually exclusive. Patients and the public also have a stake in
decreasing the cost and increasing the speed of clinical trials. Further work to see how
single IRB reviews affect drug/device approval time, sponsor costs, patient access, and
the validity of the trials is needed to understand the overall impact of single IRB review.
Another important avenue could be the further exploration of
methods to reduce the hurdles that respondents identified in the present study. At one
time, perhaps, the negotiation of reliance agreements and the concern over liability were
thought to be the most important issues. Now it seems that these issues are less important
than the need for consistency in implementation and the need to communicate effectively
between the parties involved in the single IRB trial. What can be done and what is being
done to address these challenges will need more study. Thus, the single IRB system
would be well served if ways could be found to alleviate the challenges most closely
related to the logistical management of the single IRB relationship. Another
fruitful area for future work could be the development of novel methods to educate
clinical-site and IRB staff. At this point in the evolution of the single IRB system, there
seems to be room to improve a good system, so that it can realize all the benefits that
originally were promised.
REFERENCES
Amdur, R. J. & Bankert, E. A. 2010. Institutional review board: Member handbook,
Sudbury, MA, Jones & Bartlett Publishers.
Anderlik, M. R. & Elster, N. 2001. Currents in contemporary ethics: Lawsuits against
IRBs: Accountability or incongruity? The Journal of Law, Medicine & Ethics, 29,
220-228.
Annas, G. J. & Grodin, M. A. 1992. The Nazi doctors and the Nuremberg Code: Human
rights in human experimentation, New York, Oxford University Press.
Bankert, E. A. & Amdur, R. J. 2006. Institutional review board: Management and
function, Sudbury, MA, Jones & Bartlett Learning.
Barondess, J. A. 1996. Medicine against society: Lessons from the Third Reich. Journal
of the American Medical Association, 276, 1657-1661.
Beecher, H. K. 1966. Ethics and clinical research. New England Journal of Medicine,
274, 1354-1360.
Bell, J., Whiton, J. & Connelly, S. (1998). Evaluation of NIH implementation of Section
491 of the Public Health Service Act, mandating a program of protection for
research subjects. Arlington, VA: National Institutes of Health
Brady, J. V. & Jonsen, A. R. 1982. The evolution of regulatory influences on research
with human subjects. Human Subjects Research. New York: Springer.
Brandt, A. M. 1978. Racism and research: The case of the Tuskegee Syphilis Study.
Hastings Center Report, 21-29.
Bull, J. P. 1959. The historical development of clinical therapeutic trials. Journal of
Chronic Diseases, 10, 218-248.
Burman, W. J., Reves, R. R., Cohn, D. L. & Schooley, R. T. 2001. Breaking the camel's
back: Multicenter clinical trials and local institutional review boards. Annals of
Internal Medicine, 134, 152-157.
Burr, J. S., Johnson, A. R., Vasenina, V., Bisping, S., Coleman, R. W., Botkin, J. R. &
Dean, J. M. 2019. Implementing a central IRB model in a multicenter research
network. Ethics & Human Research, 41, 23-28.
Callegaro, M., Manfreda, K. L. & Vehovar, V. 2015. Web survey methodology, London,
Sage.
Calvert, S. 2020. Working Together to Improve the sIRB Model: Suggestions from a
Multi-stakeholder Project Team. Practical & Ethical Considerations for Single
IRB Review. OHRP.
Caruso, R., Myatt, T. & Bierer, B. E. 2020. Innovation in biosafety oversight: The
Harvard Catalyst Common Reciprocal IBC Reliance Authorization Agreement.
Journal of Clinical and Translational Science, 4, 90-95.
Check, D. K., Weinfurt, K. P., Dombeck, C. B., Kramer, J. M. & Flynn, K. E. 2013. Use
of central institutional review boards for multicenter clinical trials in the United
States: A review of the literature. Clinical Trials, 10, 560-567.
Christian, M. C., Goldberg, J. L., Killen, J., Abrams, J. S., McCabe, M. S., Mauer, J. K.
& Wittes, R. E. 2002. A central institutional review board for multi-institutional
trials. New England Journal of Medicine, 346, 1405-1408.
Cobb, N., Witte, E., Cervone, M., Kirby, A., MacFadden, D., Nadler, L. & Bierer, B. E.
2019. The SMART IRB platform: A national resource for IRB review for
multisite studies. Journal of Clinical and Translational Science, 3, 129-139.
Corneli, A., Dombeck, C. B., McKenna, K. & Calvert, S. B. 2021. Stakeholder
Experiences with the Single IRB Review Process and Recommendations for Food
and Drug Administration Guidance. Ethics & Human Research, 43, 26-36.
Craun, K. 2021. Personal conversation: Single IRBs. (July 9, 2021).
CTTI 2013. CTTI recommendations: Use of central IRBs for multicenter clinical trials.
Clinical Trials Transformation Initiative. Available: https://ctti-
clinicaltrials.org/wp-content/uploads/2021/06/CTTI_sIRB_Report.pdf [Accessed
July 23 2020].
DHEW. (1979). The Belmont Report. Ethical principles and guidelines for the protection
of human subjects of research. Department of Health, Education, and Welfare.
Washington, DC. Available: https://www.hhs.gov/ohrp/regulations-and-
policy/belmont-report/read-the-belmont-report/index.html [Accessed July 21
2020].
DHHS. (1998). Institutional review boards: A time for reform. Office of the Inspector
General, Department of Health and Human Services. Washington, DC
DHHS. (2000). Status of Recommendations. Office of Inspector General Report. Office
of Inspector General, Department of Health and Human Services. Washington,
DC
Diamond, M. P., Eisenberg, E., Huang, H., Coutifaris, C., Legro, R. S., Hansen, K. R.,
Steiner, A. Z., Cedars, M., Barnhart, K. & Ziolek, T. 2019. The efficiency of
single institutional review board review in National Institute of Child Health and
Human Development Cooperative Reproductive Medicine Network–initiated
clinical trials. Clinical Trials, 16, 3-10.
Emanuel, E. J., Wood, A., Fleischman, A., Bowen, A., Getz, K. A., Grady, C., Levine,
C., Hammerschmidt, D. E., Faden, R. & Eckenwiler, L. 2004. Oversight of human
participants research: Identifying problems to evaluate reform proposals. Annals
of Internal Medicine, 141, 282-291.
Faden, R. R. & Beauchamp, T. L. 1986. A History and Theory of Informed Consent, New
York, Oxford University Press.
FDA. (1969). Federal Register. Food and Drug Administration. Washington, DC:
National Archives And Records Administration
FDA. (2006). Guidance for industry. Using a centralized IRB review process in
multicenter clinical trials. Food and Drug Administration. Washington, DC: Food
and Drug Administration
Fisher, R. A. 1935. The design of experiments, New York, Hafner Press.
Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M., Wallace, F., Burns, B.,
Carter, W., Paulson, R., Schoenwald, S. & Barwick, M. 2005. Implementation
research: A synthesis of the literature. Tampa, FL: University of South Florida,
Louis de la Parte Florida Mental Health Institute, The National Implementation
Research Network.
Flynn, K. E., Hahn, C. L., Kramer, J. M., Check, D. K., Dombeck, C. B., Bang, S.,
Perlmutter, J., Khin-Maung-Gyi, F. A. & Weinfurt, K. P. 2013. Using central
IRBs for multicenter clinical trials in the United States. PloS One, 8, e54999.
Forster, D. 2001. Independent institutional review boards. Seton Hall Law Review, 32,
513.
Frankel, M. S. 1972. The Public Health Service guidelines governing research involving
human subjects, Washington, D.C., Program of Policy Studies in Science and
Technology, The George Washington University.
Freyhofer, H. H. 2004. The Nuremberg medical trial: The Holocaust and the origin of the
Nuremberg medical code, New York, Peter Lang.
Frye, A. W. & Hemmer, P. A. 2012. Program evaluation models and related theories:
AMEE guide no. 67. Medical Teacher, 34, e288-e299.
Gail, M. H. 1996. Statistics in action. Journal of the American Statistical Association, 91,
1-13.
Galesic, M. & Bosnjak, M. 2009. Effects of questionnaire length on participation and
indicators of response quality in a web survey. Public Opinion Quarterly, 73,
349-360.
Good, M., Castro, K., Denicoff, A., Finnigan, S., Parreco, L. & Germain, D. S. 2020.
National Cancer Institute: Restructuring to Support the Clinical Trials of the
Future. Seminars in Oncology Nursing, 36, 151003.
Gray, F. D. 1998. The Tuskegee syphilis study: The real story and beyond, Montgomery,
AL, NewSouth Books.
Green, S. A. 2002. The origins of modern clinical research. Clinical Orthopaedics and
Related Research, 405, 311-319.
Groves, R. M. & Peytcheva, E. 2008. The impact of nonresponse rates on nonresponse
bias: a meta-analysis. Public Opinion Quarterly, 72, 167-189.
Hahn, C., Kaufmann, P., Bang, S. & Calvert, S. 2019. Resources to assist in the transition
to a single IRB model for multisite clinical trials. Contemporary Clinical Trials
Communications, 15, 100423.
Halbesleben, J. R. & Whitman, M. V. 2013. Evaluating survey quality in health services
research: A decision framework for assessing nonresponse bias. Health Services
Research, 48, 913-930.
Hendra, R. & Hill, A. 2019. Rethinking response rates: New evidence of little
relationship between survey response rates and nonresponse bias. Evaluation
Review, 43, 307-330.
Henrikson, N. B., Blasi, P. R., Corsmo, J. J., Sheffer Serdoz, E., Scrol, A., Greene, S. M.,
Matthews, T. L. & Ralston, J. D. 2019. “You really do have to know the local
context”: IRB administrators and researchers on the implications of the NIH
single IRB mandate for multisite genomic studies. Journal of Empirical Research
on Human Research Ethics, 14, 286-295.
Herzog, A. R. & Bachman, J. G. 1981. Effects of questionnaire length on response
quality. Public Opinion Quarterly, 45, 549-559.
Hilbrich, L. & Sleight, P. 2006. Progress and problems for randomized clinical trials:
From streptomycin to the era of megatrials. European Heart Journal, 27, 2158-
2164.
Hill, A. B. 1963. Medical ethics and controlled trials. British Medical Journal, 1, 1043.
Hilts, P. J. 2003. Protecting America's health: The FDA, business, and one hundred years
of regulation, New York, Alfred A. Knopf.
Hoffman, S. & Berg, J. W. 2005. The suitability of IRB liability. University of Pittsburgh
Law Review, 67, 365.
Hotelling, H. 1951. The impact of RA Fisher on statistics. Journal of the American
Statistical Association, 46, 35-46.
Howard-Jones, N. 1982. Human experimentation in historical and ethical perspectives.
Social Science & Medicine, 16, 1429-1448.
Hutt, P. B. & Temple, R. 2013. Commemorating the 50th anniversary of the drug
amendments of 1962. Food and Drug Law Journal, 68, 449-465.
ICH. (2016). Integrated addendum to ICH E6(R1): Guideline for Good Clinical Practice
E6(R2). Geneva: International Council for Harmonisation of Technical
Requirements for Pharmaceuticals for Human Use
Infectious Diseases Society of America 2009. Grinding to a halt: The effects of the
increasing regulatory burden on research and quality improvement efforts.
Clinical Infectious Diseases, 49, 328-335.
Jenkins, J. & Hubbard, S. 1991. History of clinical trials. Seminars in Oncology Nursing,
7, 228-234.
Jones, J. H. 1993. Bad blood, New York, Simon and Schuster.
Junod, S. W. 2008. FDA and clinical drug trials: A short history. A Quick Guide to
Clinical Trials. Washington, DC: Bioplan.
Katz, J. 1996. The Nuremberg code and the Nuremberg trial: A reappraisal. Journal of
the American Medical Association, 276, 1662-1666.
Katz, J., Capron, A. M. & Glass, E. S. 1972. Experimentation with human beings: The
authority of the investigator, subject, professions, and state in the human
experimentation process, New York, Russell Sage Foundation.
Khan, M. A., Barratt, M. S., Krugman, S. D., Serwint, J. R., Dumont-Driscoll, M. &
Investigators, C. 2014. Variability of the institutional review board process within
a national research network. Clinical Pediatrics, 53, 556-560.
Klitzman, R. 2019. Commentary on Diamond et al.: The efficiency of single institutional
review board review in National Institute of Child Health and Human
Development Cooperative Reproductive Medicine Network–initiated clinical
trials. Clinical Trials, 16, 11-13.
Klitzman, R., Pivovarova, E., Murray, A., Appelbaum, P. S., Stiles, D. F. & Lidz, C. W.
2019. Local Knowledge and Single IRBs for Multisite Studies: Challenges and
Solutions. Ethics & Human Research, 41, 22-31.
Koski, G., Aungst, J., Kupersmith, J., Getz, K. & Rimoin, D. 2005. Cooperative research
ethics review boards: A win-win solution? Ethics & Human Research, 27, 1.
Ladimer, I. 1954. Ethical and legal aspects of medical research on human beings. Journal
of Public Law, 3, 467.
Lasagna, L. 1955. The controlled clinical trial: Theory and practice. Journal of Chronic
Diseases, 1, 353-367.
Lax, E. 2005. The mold in Dr. Florey's coat: The story of the penicillin miracle, New
York, Macmillan.
Levine, R. J. 1996. International codes and guidelines for research ethics: A critical
appraisal. The Ethics of Research Involving Human Subjects: Facing the 21st
Century. Frederick, Maryland: University Publishing Group.
Lidz, C. W., Pivovarova, E., Appelbaum, P., Stiles, D. F., Murray, A. & Klitzman, R. L.
2018. Reliance agreements and single IRB review of multisite research: Concerns
of IRB members and staff. AJOB Empirical Bioethics, 9, 164-172.
Liu, M. & Wronski, L. 2018. Examining completion rates in web surveys via over 25,000
real-world surveys. Social Science Computer Review, 36, 116-124.
Loh, E. D. & Meyer, R. E. 2004. Medical schools' attitudes and perceptions regarding the
use of central institutional review boards. Academic Medicine, 79, 644-651.
Marks, H. M. 2000. The progress of experiment: Science and therapeutic reform in the
United States, 1900-1990, New York, Cambridge University Press.
Massett, H. A., Hampp, S. L., Goldberg, J. L., Mooney, M., Parreco, L. K., Minasian, L.,
Montello, M., Mishkin, G. E., Davis, C. & Abrams, J. S. 2018. Meeting the
Challenge: The National Cancer Institute’s Central Institutional Review Board for
Multi-Site Research. Journal of Clinical Oncology, 36, 819-824.
McNeil, C. 2005. Central IRBs: Why are some institutions reluctant to sign on? Journal
of the National Cancer Institute, 97, 953-955.
McWilliams, R., Hoover-Fong, J., Hamosh, A., Beck, S., Beaty, T. & Cutting, G. 2003.
Problematic variation in local institutional review of a multicenter genetic
epidemiology study. Journal of the American Medical Association, 290, 360-366.
Meisel, A. 1977. The expansion of liability for medical accidents: From negligence to
strict liability by way of informed consent. Nebraska Law Review, 56, 51.
Mello, M. M., Studdert, D. M. & Brennan, T. A. 2003. The rise of litigation in human
subjects research. Annals of Internal Medicine, 139, 40-45.
Menikoff, J. 2010. The paradoxical problem with multiple-IRB review. The New England
Journal of Medicine, 363, 1591.
NIH. (2014). Notice of revised NIH definition of “Clinical Trial”. Washington, DC
NIH. (2016). Final NIH policy on the use of a single institutional review board for multi-
site research. Washington, DC
Nulty, D. D. 2008. The adequacy of response rates to online and paper surveys: what can
be done? Assessment & Evaluation in Higher Education, 33, 301-314.
Nuremberg Code 1949. Trials of war criminals before the Nuremberg military tribunals
under control council law no. 10, volume II, Washington, DC, US Government
Printing Office.
Nyiszli, M. & Bettelheim, B. 1993. Auschwitz: A doctor's eyewitness account, New York,
Arcade Publishing.
O’Rourke, P. P. 2017. The final rule: When the rubber meets the road. The American
Journal of Bioethics, 17, 27-33.
O’Rourke, P. P., Carrithers, J., Patrick-Lake, B., Rice, T. W., Corsmo, J., Hart, R.,
Drezner, M. K. & Lantos, J. D. 2015. Harmonization and streamlining of research
oversight for pragmatic clinical trials. Clinical Trials, 12, 449-456.
Obeid, J. S., Alexander, R. W., Gentilin, S. M., White, B., Turley, C. B., Brady, K. T. &
Lenert, L. A. 2016. IRB reliance: An informatics approach. Journal of Biomedical
Informatics, 60, 58-65.
OHRP. 2011. IRB Authorization Agreement Template [Online]. Available:
https://www.hhs.gov/ohrp/register-irbs-and-obtain-fwas/forms/irb-authorization-
agreement/index.html [Accessed May 29 2021].
OPRR. (1991). Federal Register: Federal Policy for the Protection of Human Subjects.
Office for Protection from Research Risks. Washington, DC: National Archives
And Records Administration
Orcher, L. 2016. Conducting research: Social and behavioral science methods, New
York, Routledge.
Pappworth, M. H. 1990. "Human guinea pigs" - a history. British Medical Journal, 301,
1456.
Ravina, B., Deuel, L., Siderowf, A. & Dorsey, E. R. 2010. Local institutional review
board (IRB) review of a multicenter trial: Local costs without local context.
Annals of Neurology: Official Journal of the American Neurological Association
and the Child Neurology Society, 67, 258-260.
Reid, D. 1950. Statistics in clinical research. Annals New York Academy of Sciences, 52,
931-934.
Resnik, D. B., Smith, E. M. & Shi, M. 2018a. How US research institutions are
responding to the single Institutional Review Board mandate. Accountability in
Research, 25, 340-349.
Resnik, D. B., Taylor, J., Morris, K. & Shi, M. 2018b. A study of reliance agreement
templates used by US research institutions. IRB, 40, 6.
Rogelberg, S. G., Conway, J. M., Sederburg, M. E., Spitzmüller, C., Aziz, S. & Knight,
W. E. 2003. Profiling active and passive nonrespondents to an organizational
survey. Journal of Applied Psychology, 88, 1104.
Rogelberg, S. G. & Stanton, J. M. 2007. Introduction: Understanding and dealing with
organizational survey nonresponse. Los Angeles, CA: Sage Publications.
Rothman, D. J. 1991. Strangers at the bedside: A history of how law and bioethics
transformed medical decision making, New York, Routledge.
Sauermann, H. & Roach, M. 2013. Increasing web survey response rates in innovation
research: An experimental study of static and dynamic contact design features.
Research Policy, 42, 273-286.
Scanlon, J. W., Horst, P., Nay, J. N., Schmidt, R. E. & Waller, J. D. 1977. Evaluability
assessment: Avoiding type III and IV errors. In: Gilbert, G. R. & Conklin, P. J.
(eds.) Evaluation Management: A Sourcebook of Readings. Charlottesville, VA:
US Civil Service Commission.
Schmidt, U. 2004. Justice at Nuremberg: Leo Alexander and the Nazi doctors' trial, New
York, Palgrave Macmillan.
Schnipper, L. E. 2017. Central IRB review is an essential requirement for cancer clinical
trials. Journal of Law, Medicine & Ethics, 45, 341-347.
Shah, S., Whittle, A., Wilfond, B., Gensler, G. & Wendler, D. 2004. How do institutional
review boards apply the federal risk and benefit standards for pediatric research?
Journal of the American Medical Association, 291, 476-482.
Shuster, E. 1997. Fifty years later: The significance of the Nuremberg Code. New
England Journal of Medicine, 337, 1436-1440.
Silverman, H., Hull, S. C. & Sugarman, J. 2001. Variability among institutional review
boards’ decisions within the context of a multicenter trial. Critical Care Medicine,
29, 235.
Stair, T. O., Reed, C. R., Radeos, M. S., Koski, G. & Camargo, C. A. 2001. Variation in
institutional review board responses to a standard protocol for a multicenter
clinical trial. Academic Emergency Medicine, 8, 636-641.
Strong, R. P. & Crowell, B. 1912. The Etiology of Beriberi. Philippine Journal of
Science, 7.
Stufflebeam, D. 2013. The CIPP evaluation model: Status, origin, development, use, and
theory. In: Alkin, M. C. (ed.) Evaluation Roots: A Wider Perspective of Theorists’
Views and Influences. Los Angeles: Sage.
Stufflebeam, D. L. 1967. The use and abuse of evaluation in Title III. Theory into
Practice, 6, 126-133.
Stufflebeam, D. L. 1971. The use of experimental design in educational evaluation.
Journal of Educational Measurement, 8, 267-274.
Stufflebeam, D. L. 2000. The CIPP model for evaluation. In: Madaus, G. F. &
Stufflebeam, D. L. (eds.) Evaluation Models. New York: Kluwer Academic
Publishers.
Stufflebeam, D. L. & Coryn, C. L. 2014. Evaluation theory, models, and applications,
San Francisco, CA, John Wiley & Sons.
Stufflebeam, D. L. & Zhang, G. 2017. The CIPP evaluation model: How to evaluate for
improvement and accountability, New York, Guilford Publications.
Sue, V. M. & Ritter, L. A. 2012. Conducting online surveys, Los Angeles, Sage.
Taylor, H. A. & Ervin, A. M. 2017. A measure of effectiveness is key to the success of
sIRB policy. The American Journal of Bioethics, 17, 41-43.
US Congress. (1962). Drug Amendments of 1962: Public Law 87-781: An Act to protect
the public health by amending the Federal Food, Drug, and Cosmetic Act to
assure the safety, effectiveness, and reliability of drugs, authorize standardization
of drug names, and clarify and strengthen existing inspection authority; and for
other purposes. Washington, DC: Congressional Record
US Congress. 1974. National Research Act [Online]. Available:
https://www.govinfo.gov/content/pkg/STATUTE-88/pdf/STATUTE-88-
Pg342.pdf [Accessed August 9, 2020].
Wagner, T. H., Cruz, A. M. E. & Chadwick, G. L. 2004. Economies of scale in
institutional review boards. Medical Care, 817-823.
Wagner, T. H., Murray, C., Goldberg, J., Adler, J. M. & Abrams, J. 2010. Costs and
benefits of the national cancer institute central institutional review board. Journal
of Clinical Oncology, 28, 662.
Weindling, P. 2004. Nazi medicine and the Nuremberg trials: From medical war crimes
to informed consent, New York, Palgrave Macmillan.
WHO. (2011). Standards and operational guidance for ethics review of health-related
research with human participants. (9290218819). World Health Organization.
Geneva: World Health Organization
Winkler, S. J., Witte, E., Bierer, B. E. & the Harvard Catalyst Regulatory Foundations,
Ethics, and Law Program 2015. The Harvard Catalyst Common Reciprocal IRB
Reliance Agreement: An innovative approach to multisite IRB review and
oversight. Clinical and Translational Science, 8, 57-66.
WMA. 1964. Declaration of Helsinki - Version 1964 [Online]. World Medical Assembly.
Available: https://www.wma.net/what-we-do/medical-ethics/declaration-of-
helsinki/ [Accessed July 21 2020].
WMA. 1975. Declaration of Helsinki - Version 1975 [Online]. World Medical Assembly.
Available: https://www.wma.net/what-we-do/medical-ethics/declaration-of-
helsinki/ [Accessed July 21 2020].
WMA. 2013. Declaration of Helsinki - Version 2013 [Online]. World Medical Assembly.
Available: https://www.wma.net/policies-post/wma-declaration-of-helsinki-
ethical-principles-for-medical-research-involving-human-subjects/ [Accessed
March 27 2021].
Wolinetz, C. D. & Collins, F. S. 2017. Single-minded research review: The Common
Rule and single IRB policy. The American Journal of Bioethics, 17, 34-36.
Young, J. H. 1992. The Medical Messiahs: A Social History of Health Quackery in 20th
Century America, Princeton, NJ, Princeton University Press.
Young, J. H. 2014. Pure food: Securing the Federal Food and Drugs Act of 1906,
Princeton, NJ, Princeton University Press.
APPENDIX A. SURVEY
Survey
Start of Block: Demographics
Thank you for taking the time to participate in my research!
The purpose of this study is to explore the experiences and challenges that IRBs or human
research protection programs (HRPPs) may have had when implementing the single IRB mandate
for multisite clinical trials. In this survey, the terms IRB and HRPP are used interchangeably and
refer to the unit responsible for ensuring that human research participants are protected.
Please feel free to skip questions if they are not appropriate for you. Your responses to this survey
will remain confidential.
1 Has your organization ever ceded the review of a study to another IRB?
o No
o Yes
Skip To: 2 If 1 = Yes
1a Does your organization plan to participate in single IRB review in the future?
o No
o Maybe
o Yes
o Don't know
1b Please explain why you will not participate or have not participated before now:
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
Skip To: End of Survey If Condition: Please explain why you will... Is Displayed. Skip To: End of Survey.
2 Which position below best describes your role at the IRB/HRPP?
o Director/Manager
o IRB Analyst/Administrator
o IRB Chair
o IRB Member
o Other ________________________________________________
3 Please choose the best description of your institution:
o University with a medical center
o University/College without a medical center
o Medical center/hospital
o Independent research site
o Independent or commercial IRB
o Other ________________________________________________
4 How many IRB Analysts/Administrators review research at your institution across all IRBs?
(please estimate FTE if duties are shared)
o Less than 3
o 3-5
o 6-10
o More than 10
5 Of these staff members, how many are dedicated to single IRB/Ceded review? (please estimate
FTE if duties are shared)
o 1 or less
o 2-3
o More than 3
6 How many of the following types of clinical trials did you start at your institution over the last
12 months? (Answer choices for each row: None / Fewer than 25 / 25-50 / 51-100 / Greater than
100 / Cannot say)
o Traditional - Individual review by your IRB
o Ceded - you accepted a review that was done elsewhere
o sIRB - you were the IRB of record for a multisite trial (other sites ceded to you)
7 How long, on average, does it take for a clinical trial to be approved/cleared at your institution?
(Answer choices for each row: Fewer than 30 days / 30-59 days / 60-99 days / 100 days or more /
Cannot say)
o Traditional - individual review by your IRB
o Ceded - you accepted a review that was done elsewhere
o sIRB - you were the IRB of record for a multisite trial
End of Block: Demographics
Start of Block: Develop
8 How have your overall review times changed since the introduction of the NIH mandate?
o Review times are shorter
o Review times have generally stayed the same
o Review times are longer
o Cannot say
Display This Question:
If 8 = Review times are longer
9 Why are the review times longer?
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
Display This Question:
If 8 = Review times are shorter
10 Why are the review times shorter?
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
11 How do you prioritize the processing of studies whose review has been ceded to another IRB?
o Separately
o Higher than others
o The same as others
o Lower than others
o Other ________________________________________________
12 The following have been suggested by others as challenges when expanding an institution's
capacity for managing single IRB studies. Please choose up to three of the most significant
challenges faced by your institution:
Challenges Faced
______ Obtaining additional resources
______ Aligning policies to manage single IRB studies
______ Adjusting documentation systems to accommodate and track studies
______ Convincing IRB members that participants are still protected
______ Educating IRB/HRPP staff on the process
______ Educating Investigators and research staff on the process
______ Managing expectations from administration
______ Other (please explain)
13 What additional resources (if any) did you need to expand the IRB's capacity for single IRB
studies? (check all that apply)
▢ Additional Staff
▢ Training resources for staff
▢ Funds to develop training resources for investigators
▢ Funds to add or alter documentation systems
▢ No additional resources were needed
▢ Cannot say
14 Can you estimate the cost for these changes?
o Over $100,000
o Between $50,000 and $100,000
o Between $25,000 and $50,000
o Less than $25,000
o No cost
o Cannot say
o Other ________________________________________________
15 What resources (such as webinars, guidance documents, or training materials) were most
useful in training IRB/HRPP staff?
(Rate each: Very useful / Somewhat useful / Not very useful / Not useful / N/A - Did not use)
Resources provided by NIH/HHS
Resources from the SMART IRB Initiative
Resources from PRIM&R
Resources from AAHRPP
Resources from CITI
Resources we created ourselves
Resources from another university or medical center (please specify)
Other (please specify)
16 Overall, were there areas that are still in need of better materials? Please indicate any gaps and
what you would need to address them.
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
17 How were the IRB/HRPP staff trained to manage single IRB reviews? (check all that apply)
▢ By HRPP management
▢ By peer to peer interactions
▢ By individual study
▢ By a third party
▢ By conferences or training sessions from professional organizations
▢ Cannot say
▢ Other ________________________________________________
18 Given what you know now, what would you change to increase the effectiveness of the
training?
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
19 The following have been identified in the literature as factors influencing the adoption of
single IRB review. Please rate the current significance of the following concerns when your
institution cedes ethical review:
(Rate each: Most Important / Very Important / Somewhat Important / Not Important / Cannot Say)
Understanding local context
Retaining local control
Managing liability for conduct
Ensuring safety of participants
Managing reliance agreements
Ensuring the adequacy of consent forms
Controlling review quality
Other
20 Are you taking any measures to prepare for the possibility that FDA will mandate single IRB
reviews?
o No, our current policies and processes will accommodate the change
o No, we anticipate that it will be likely but have not yet implemented proactive measures
o No, we don't think it will happen soon, so proactive measures are premature
o Yes, we have cross trained several IRB analysts and are educating faculty and staff
o Yes, we have done the following:
________________________________________________
End of Block: Develop
Start of Block: Implement
21 Reliance agreements specify the roles and responsibilities of the lead IRB and the ceding
participants. If negotiating a relationship today as the lead IRB, what method(s) do you use to
structure your reliance agreements?
(Rate each: Always / Frequently / Infrequently / Never / Do not know)
SMART IRB
The OHRP template reliance agreement
Our own standardized template
Unique agreement specific to each trial
Other
22 How satisfied are you with the reliance agreements that you use?
o Extremely dissatisfied
o Somewhat dissatisfied
o Neither satisfied nor dissatisfied
o Somewhat satisfied
o Extremely satisfied
23 How could the reliance agreements be improved?
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
24 When you cede reviews, what types of organizations are serving as the IRB of record?
(Rate each: Frequently / Occasionally / Infrequently / Never)
NCI CIRB
Commercial IRBs (such as WCG IRB or Advarra)
Institutions using the SMART IRB agreement
Institutions that developed their own reliance agreements
Other
25 Please rate the quality of reviews done in the past two years by the following organizations:
(Rate each: Excellent / Average / Poor / They vary too much to rate / N/A)
NCI CIRB
Commercial IRBs (such as WCG IRB or Advarra)
Institutions using the SMART IRB agreement
Institutions that developed their own reliance agreements
Other
26 When you cede a review, how likely is it that the reliance agreement will assign the following
tasks to your institution?
(Rate each: Always / Likely / Neither likely nor unlikely / Unlikely / Never / Don't know)
Making HIPAA determinations
Reviewing Adverse Events
Ensuring local study team training and education
Ensuring the correct cost and injury language in consent forms
Stamping consent forms
Conducting audits (QA/QI or for cause)
Ensuring Conflict of Interest review is complete and current
Ensuring Biosafety/Radiation Safety Review is complete and current
27 When your institution has local site responsibilities (such as those above), who is responsible
for sending results to the reviewing IRB?
▢ Our local IRB/HRPP
▢ Our local study teams
▢ Our electronic system (communicates directly with the reviewing IRB's electronic
system)
▢ Cannot say
▢ Other ________________________________________________
28 How do you keep track of ceded trials? (select all that apply)
▢ We have an electronic IRB system that accommodates ceded trials
▢ We use a separate electronic system to maintain ceded trials
▢ We use a manually developed spreadsheet
▢ Other ________________________________________________
29 Are you satisfied with the way you track ceded studies?
o Extremely dissatisfied
o Somewhat dissatisfied
o Neither satisfied nor dissatisfied
o Somewhat satisfied
o Extremely satisfied
o Cannot say
30 Are you planning to change the way that ceded studies are tracked in the next three years?
o Definitely not
o Probably not
o Might or might not
o Probably yes
o Definitely yes
Display This Question:
If 30 = Probably yes
Or 30 = Definitely yes
Or 30 = Might or might not
31 What changes will you be considering?
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
End of Block: Implement
Start of Block: Monitor/Feedback
32 Single IRB review has been suggested to solve several challenges faced by multi-center
clinical trials. Do the following statements describe your experience with changes that involve the
use of ceded reviews?
(Rate each: Strongly agree / Somewhat agree / Neither agree nor disagree / Somewhat disagree / Strongly disagree / N/A)
New policies and procedures must be created
Starting a clinical trial is easier under single IRB
Protection of participants has been unchanged by single IRB studies
IRB shopping is threatening the safety of participants
Liability issues have been resolved
Scaling to meet future single IRB demand will be easy
The workload for IRB staff has decreased
33 Do you see any institutions as leaders in fostering development of single IRB review? Please
explain.
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
34 How would you rate the effectiveness of your institution's implementation of the single IRB
mandate compared with your peers?
o Much better
o Somewhat better
o Average
o Somewhat worse
o Much worse
o Cannot say
35 Overall, how satisfied are the following groups at your institution with the single IRB process,
as reflected by the feedback you may have received?
(Rate each: Extremely dissatisfied / Somewhat dissatisfied / Neither satisfied nor dissatisfied / Somewhat satisfied / Extremely satisfied / Can't say)
IRB/HRPP staff
IRB members
Researchers
Administration
Trial participants
Other
36 If any stakeholders have been particularly affected, either positively or negatively, please
explain:
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
37 Looking back to when the single IRB mandate from NIH went into force in 2018, was your
institution prepared for the change?
o Yes, completely prepared
o Mostly prepared
o Somewhat prepared
o Mostly not prepared
o Not at all prepared
o Cannot say
38 Can you share any lessons learned for an institution that is still at an early stage of
participating in ceded reviews?
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
39 Can you share any lessons learned for an institution serving as the IRB of record?
________________________________________________________________
________________________________________________________________
________________________________________________________________
________________________________________________________________
End of Block: Monitor/Feedback