CLINICAL TRIALS DRIVEN BY INVESTIGATOR-SPONSORS: GCP
COMPLIANCE WITH OR WITHOUT PREVIOUS INDUSTRY SPONSORSHIP
By
Ellen R. Whalen
A Dissertation Presented to the
FACULTY OF THE USC SCHOOL OF PHARMACY
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF REGULATORY SCIENCE
May 2013
Copyright 2013 Ellen R. Whalen
DEDICATION
To Mark, my husband, who gives joy to my life, strength to endure my challenges and supports
me no matter how silly my ideas.
To Michael and Matthew, my M&M’s
To my family and friends for your unconditional love and support, funny cards and little gifts;
you have made this possible. To Frances and Mike, dear colleagues, thank you for your gifts of
time and wisdom. To Autumn and Debbie, Fight On Forever!
TABLE OF CONTENTS
DEDICATION
LIST OF FIGURES
ABSTRACT
1: INTRODUCTION
Statement of the Problem
Purpose of the Study
Significance of the Study
Delimitations, Limitations and Assumptions
Definitions
Organization of the Study
2: LITERATURE REVIEW
History of Clinical Trial Development Before Regulation
Early Regulation of Clinical Trials
The Changing Clinical Trial Landscape
An Ethical Framework for Clinical Trials
Evolution in the Regulation of Clinical Trials: 1980s and Beyond
Regulations Governing Human Protection
Regulations Governing the Logistics of Clinical Trial Conduct
Gauging Compliance with Clinical Trial Regulations
Monitoring Investigator-Sponsored Trials
Methods for Assessing Regulatory Compliance
Evaluating Clinical Trials with a PDCA Framework
Summary
3: METHODOLOGY
Audit Tool Development
Focus Group
Audit Process
Data Analysis
4: RESULTS
Characteristics of Studies Reviewed
Planning
Doing
Checking
Acting
Findings Related to the Experience of Investigators
Scoring of Individual Studies
5: DISCUSSION
Consideration of Methods
Consideration of the PDCA Framework
Consideration of Results
Consideration of the Experience of Principal Investigators
REFERENCES
APPENDIX A: SUMMARY OF FDA WARNING LETTERS
APPENDIX B: AUDIT TOOL
APPENDIX C: IRB APPROVALS
APPENDIX D: INFORMATION LETTERS TO PI AND COORDINATOR
APPENDIX E: RAW DATA TABLES
LIST OF FIGURES
Figure 1: 1997 FDA Modernization Act
Figure 2: 2007 FDA Amendments Act
Figure 3: Six Principal Focuses of Good Clinical Practices
Figure 4: Warning Letters with Selected Types of Deficiencies (adapted from FDA Warning Letters 2005-2010)
Figure 5: The PDCA Cycle (Modified from American Society for Quality, 2011)
Figure 6: Characteristics of Studies Reviewed
Figure 7: Planning for Documentation
Figure 8: Planning for Informed Consent
Figure 9: Planning for IRB/FDA Communications
Figure 10: Planning for the Test Article
Figure 11: Planning for Recruitment
Figure 12: Planning for Staffing
Figure 13: Carrying Out Documentation Requirements
Figure 14: Carrying Out Informed Consent Requirements
Figure 15: Carrying Out IRB/FDA Requirements
Figure 16: Carrying Out Test Article Requirements
Figure 17: Carrying Out Recruitment Requirements
Figure 18: Carrying Out Staffing Requirements
Figure 19: Checking for Documentation Compliance
Figure 20: Checking for Informed Consent Compliance
Figure 21: Checking for IRB/FDA Communication Compliance
Figure 22: Checking for Test Article Compliance
Figure 23: Checking for Compliance with Recruitment
Figure 24: Checking for Staffing Compliance
Figure 25: Inexperienced vs. Experienced Investigators Compliance with Planning
Figure 26: Inexperienced vs. Experienced Compliance with Doing
Figure 27: Inexperienced vs. Experienced Compliance with Checking
Figure 28: Inexperienced vs. Experienced Compliance with Acting
Figure 29: Inexperienced vs. Experienced Investigator Individual Study Scores
ABSTRACT
The present study evaluated the use of the Plan-Do-Check-Act (PDCA) cycle, typical of
quality-control systems, to assess the state of compliance with Good Clinical Practices (GCPs)
in clinical trials directed by investigators who also held the role of sponsor, with or without
recent experience in trials sponsored by industry. A targeted audit tool was constructed with
reference to the most common deficiencies found in FDA and OHRP warning letters to principal
investigators. The tool was validated by a focus group of individuals with an established record
of engagement in the conduct or oversight of clinical trials. A sample of clinical trials conducted
by both industry-experienced and industry-inexperienced principal investigators was audited using the tool
at a single research university. Industry-inexperienced investigators in this sample were at least
as compliant with GCPs as industry-experienced investigators, and sometimes more compliant.
Areas of challenge for both groups included documentation systems, staffing and end-of-trial care
transitions. Using the PDCA cycle, deficiencies were often recognized at the planning stage, and
additional challenges associated with monitoring were common at the checking phase. These
results may help educators and risk experts in universities to target initiatives that focus on areas
of greatest need and to maximize the impact of training resources that improve the safety of
participants in clinical trials.
1: INTRODUCTION
Expectations regarding the conduct of clinical trials have undergone substantial change
and critical review over the past two decades. In part this scrutiny has been driven by concern
regarding the safety and rights of study participants. In part it can also be attributed to the
increased sophistication of the tools available to conduct trials, especially of large, multicenter
trials that underlie new drug development and commercialization. Thus, much has been written
about the logistical and ethical management of mega-trials that might have sites in many
countries or even continents, advanced designs, and elaborate clinical-product distribution needs.
These multisite, industry-sponsored clinical trials cost millions of dollars to execute, and
deservedly undergo several levels of review, both before and after the start of the operational
phase of a trial (Califf 2006). Large, multicenter studies enroll thousands of study participants.
Typically, the industry sponsor develops the product under study and is well-aware of the nature
and expected behavior of the product. Pharmaceutical industry sponsors have sufficient resources
to pay for the manpower, communication vehicles, clinical tests, and materials to support all
aspects of the study. Within the company, the protocols and management methods are studied
intensively by a team of scientists, clinicians, and logistical specialists. In addition, the clinical
trial often must pass at least two additional levels of review. First the trial sponsor typically must
submit a request for permission to initiate clinical trials [called an investigational new drug
(IND), or investigational device exemption (IDE) application in the US] to the regulatory
authorities in the countries in which the trial will be conducted. This allows the regulatory
authorities to assess the appropriateness of test materials and protocols. Second, all trial protocols
must undergo disciplined comprehensive review by the institutional review boards (IRB)
associated with the trial.
This thesis is not primarily directed at such trials. Rather it is concerned with a second
type of trial, called the “investigator-initiated” or “investigator-sponsor” trial, in which many of
the logistical support and oversight mechanisms typical of large, industry-sponsored trials are not
present, for the most part because of restrictions on resources. Investigator-sponsor trials are
typically conducted at a single site, under a research protocol that is designed and executed by a
single health-care professional. Unlike industry trials, where principal investigators (PI) are
selected by the industry sponsor based on their proven ability to conduct clinical research within
the confines of a highly disciplined protocol, investigator-sponsor trials may be directed by
individuals with little or no previous experience in clinical trials. Unlike industry trials, in which
principal investigators and their staff receive training, competency validation, and frequent
monitoring over the course of the study by the company funding the trial, investigator-sponsor
trials often have fewer oversight opportunities, because typically no third party is engaged to
supervise and train the investigator and staff. Unlike industry trials where irregularities,
deviations from the protocol, and significant adverse events must be reported to the sponsor, no
such reporting through a third party is required. Rules that require investigators to report
protocol deviations and adverse events still exist for the investigator-sponsor trial, but they are
harder to enforce (Christian, Goldberg et al. 2002). Finally, investigator-sponsor trials may be
carried out using approved medical products. As long as those studies do not change the dosage,
route of administration or indication for which the product is used, there may be no requirement
for permissions from the national regulatory authorities. Protocols are often approved only by a
local IRB whose ability to assess compliance with the study protocol may be limited. Enrollment
of patients is frequently not blinded or randomized (Bellomo, Warrillow et al. 2009). In short,
investigator-sponsor trials often lack the multiple redundant systems to ensure that the trial is
conducted according to the rules and that its progress is monitored effectively.
Single-center studies have advantages. They are logistically simpler, less expensive, and
faster, because they do not require prolonged negotiations with the sponsor; they are easier to run
with simpler data collection and they often enroll less heterogeneous populations. An argument
can be made that single-center studies are valuable for hypothesis generation and may be used
efficiently and effectively as a starting point for new directions of research or larger, multicenter
studies (Bellomo, Warrillow et al. 2009). It is important that oversight of such studies does not
compromise the research with overly intrusive and expensive requirements that cannot be put into
place with the reduced resources of the investigator. It is often thought that local review is
advantageous because a local oversight body can consider specific knowledge of local conditions
that allow the study, particularly the informed consent document, to be written in a manner
appropriate for the local population (Burman, Breese et al. 2003). Nevertheless, such trial
logistics have also been criticized. For example, the Institute of Medicine (IOM) and others have
found that review of documents by multiple local IRBs can increase the variability of research
protocols and informed consent forms, which, if meaningful, may introduce bias into the study
and detract from rather than enhance human subjects’ protection (Institute of Medicine 2001, Menikoff
2010).
Regulatory agencies expect that all clinical trials will be monitored but these agencies do
not typically conduct that monitoring, unless they receive information that a trial is out of
compliance. However, significant problems may exist with the oversight of research involving
human subjects when only a single, often volunteer-based, IRB is solely responsible for
approving and monitoring a trial (Randal 2001). Problems of experimental design, on-site
operating procedures, and staff performance are often missed by an overworked and under-
resourced IRB whose members may not be expert in the type of research that is being conducted.
Major reforms have been proposed related to the accreditation of institutional IRBs that may address some
of the problems associated with IRB oversight (Emanuel, Wood et al. 2004), but such
accreditation cannot ensure that the level of oversight is always appropriate and timely. Further,
issues of conflict of interest that may arise from an investigator’s financial relationships with
industry must be addressed (Martin and Kasper 2000).
Statement of the Problem
Today, in many research-intensive universities, investigator-sponsor clinical trials do not
receive the same oversight as industry-initiated studies conducted by the same researcher. This
difference in oversight calls into question the level of discipline and knowledge of regulatory
compliance required of the investigator when involved in the conduct of investigator-sponsor
trials. The oversight of many investigator-sponsor trials conducted by university faculty is often
the responsibility of a local university IRB. The IRB has many limitations on its activities,
including the scope of authority, management capabilities related to resource and personnel
limitations, availability of expertise, and time constraints from large numbers of competing trials
under its jurisdiction. Concern has been expressed that the local IRB may not be able to provide
investigator-sponsor trials with support and oversight commensurate with that of industry, leaving
assurance of regulatory compliance to the investigator. Further, little study has been directed at
understanding the nature of investigator oversight and its perceived effectiveness in ensuring
patient safety and maintaining public trust.
Purpose of the Study
This study has explored the nature and effectiveness of the planning for and oversight of
trials where the responsibility for compliance rests with the investigator. It asked whether the
standards for oversight in place were similar to, less rigorous than, or more rigorous than those used to
review industry-sponsor trials. Utilizing a standardized audit tool, this research examined the
state of GCP compliance in a sample of investigator-sponsor clinical trials. The audit tool used
the PDCA (Plan-Do-Check-Act) cycle as a framework to examine a range of aspects of trial
planning, execution and audit in areas previously identified by the FDA and OHRP as areas of
common deficiency when site investigations were conducted. By comparing the results of trials
in which the principal investigator had recently conducted industry trials with trials whose
investigators had not, we were able to see whether the inexperienced investigators had poorer
compliance outcomes; in fact, they did not.
Significance of the Study
A central tenet of clinical trials is the importance of minimizing risks to human
participants. Concern has been expressed that investigator-sponsor trials are not subject to the
same degree of oversight as industry-sponsor trials, and this may increase the risk of unsafe
practices and compliance issues. This study contributes information that will help to identify
gaps in the mechanisms for ensuring safety of human subjects in research initiated by faculty
investigators. Only with a systematic understanding of such challenges will it be possible to
develop adequate solutions to minimize safety issues. Thus, the information provided by this
research can help to guide policy development by the IRB and
granting and regulatory agencies, so that investigator-sponsor protocols have sufficient
safeguards to assure the safety of enrolled human subjects.
The information collected by this study may also help to guide educational initiatives to
ensure individuals conducting clinical trials are well educated and trained in methods that ensure
compliance with Good Clinical Practices (GCPs). Training initiatives often attempt to deal with a
wide variety of issues, some of which are redundant for most personnel, and thus waste time and
increase dissatisfaction with the training program. A well-focused training program that
concentrates on issues known to be of particular concern would increase the effectiveness and
decrease the time spent on training. Such focus might help to make the training more valuable
and palatable.
Delimitations, Limitations and Assumptions
Many types of clinical trials are carried out on therapeutic products in many different
environments. This study is delimited in its focus to investigator-sponsor trials in the United
States (US) and carried out by faculty employed at one top-tier research university. The study
will focus on the planning and oversight activities of the principal investigator with regard to
selected elements of GCPs. Exploration of the background of the investigators studied herein was
limited to understanding whether they had experience as PI in industry-sponsored studies.
Considerations such as professional background, education, training and/or mentorship in the
management of clinical trials were not explored. Thus, it is not clear from this study if the
qualifications and experience (outside of industry trials) of the investigators affected the
outcomes of compliance.
This study is a preliminary examination to compare the approaches of a small number of
principal investigators in a small subset of trials. Thus it has limitations related to the modest
sample of PIs who may not represent adequately the approaches of PIs in general, and particularly
PIs who do not work in research-intensive universities. In the future, the study would have to be
extended to smaller educational institutions and hospitals if we are to characterize the approaches
of PIs more fully. Further, it is not clear from this study whether the observations made from
relatively few cases will be representative of cases in general. Finally, the ability to find
sufficient cases may also be restricted.
The study may be limited by the willingness of investigators to discuss their activities
honestly and fully if they perceive that their professional capabilities might be called into
question. Such a limitation was not apparent but such bias cannot be discounted. The policies of
the IRB may restrict the ability to inspect certain documents or to obtain certain types of
information. Because IRB policies and study documents are restricted to internal use, they can be
difficult to obtain. Finally, the study may be limited by the inexperience of the author,
because the results depend on the skill of the author to carry out effective and thorough interviews
and artifact analysis.
Definitions
Clinical Trial: A clinical trial is a research study in human volunteers to answer specific health
questions. A clinical trial may study prevention options, new treatments or new ways to use
existing treatments, new screening and diagnostic techniques or options for improving the quality
of life for people who have serious medical conditions (FDA 2011).
Study Protocol: Clinical trials are conducted according to a plan called a protocol. The protocol
describes what types of patients may enter the study, schedules of tests and procedures, drugs,
dosages, and length of study as well as the outcomes that will be measured. Each person
participating in the study must agree to the rules set out by the protocol (FDA 2011).
Institutional Review Board (IRB): Any board, committee, or other group formally designated by
an institution to review, approve the initiation of and to conduct periodic review of biomedical
research involving human subjects. The primary purpose of such review is to assure the
protection of the rights and welfare of the human subjects (21CFR56 2011).
Local IRB: The IRB of a university responsible for approval and oversight of clinical trials
conducted by investigators of the university.
Principal Investigator: The person who is responsible for conducting the trial, has access to and
control over the data from the trial and has the right to publish the results of the trial. For the
purpose of this study the principal investigator of an industry sponsored trial is that person who is
responsible for the conduct of the study in one university setting (FDA 2011).
Faculty Investigator: A principal investigator who is employed as university faculty.
Study Sponsor: A person who initiates an investigation but does not actually conduct the
investigation. The study subject is under the immediate direction of another individual. The
sponsor may be a corporation or agency that uses one or more of its own employees to conduct a
clinical investigation it has initiated. In this case the employees are considered to be investigators
(FDA 2011).
Investigator-Sponsor: An individual who both initiates and actually conducts, alone or with
others, a clinical investigation. The study subject is under this person’s immediate direction. The
term does not include any person other than an individual, e.g. a corporation or agency (FDA
2011).
Organization of the Study
In the chapters that follow, oversight of clinical trials will be explored in selected
investigator-sponsor and industry-sponsor trials. Chapter 1 provides an overview of the problem
and an introduction to the research. Chapter 2 reviews the current state of knowledge in this field
by studying the available literature relating to clinical trials in general and to university IRB and
investigator-sponsor oversight in particular. Chapter 3 outlines the methods used to guide the
analysis of clinical trials and apply this analysis in a research intensive, university setting under
the oversight of a local institutional IRB. Chapter 4 presents the findings from analysis of the
degree of oversight of each type of clinical trial individually and comparatively, and chapter 5
then discusses the results and their implications.
2: LITERATURE REVIEW
History of Clinical Trial Development Before Regulation
“If there’s something new on the market that might be better than the traditional
program they’ve been using, why not try it?” (Hastings Center 1996)
For as long as there has been medicine, there has been interest in how well a particular
treatment might work for a specific clinical condition. Thus clinical experimentation goes back
informally or semi-formally for centuries in most cultures. In modern Western culture, the
beginnings of more structured clinical trial design can be recognized, for example, in the 18th
century, when Jenner inoculated children with an experimental variola vaccine. Far from the
randomized, double blind, placebo controlled studies that might be required in 2012, Jenner’s
studies nonetheless did exhibit many of the expected protocol elements required by clinical trials
regulations today. Jenner’s inclusion criteria were well defined. He sought study participants
who had previously been ill with specific, reproducible symptoms following contact with infected
cows. However, with no defined sample size, sophisticated statistics, or data quality board,
Jenner simply continued to enroll study patients until such time as he felt that he had proved his
hypothesis. Jenner’s writings suggested a strong personal concern about the safety of
participants, but safety monitoring was assured only by short and long term follow up visits to his
study patients. Jenner exhibited an understanding of the need for integrity in the analysis of data
when he discussed study participants who did not react as he expected. He presaged today’s
requirements to adhere to protocols in his criticisms of colleagues who did not follow the
established study protocol and had less than optimal outcomes (Jenner 1798, Jenner 1799, Jenner
1800).
Perhaps the first formalized clinical trial was that of Walter Reed, who studied the
usefulness of vaccination to prevent yellow fever. In 1900, Walter Reed was appointed to head a
United States (US) Army commission to study infectious diseases in Cuba with a focus on
determining the etiology of yellow fever. Reed based his work on prior investigations by other
scientists that demonstrated a potential connection between mosquitoes and yellow fever. Study
of living individuals was not Reed’s first choice. His team first tried to isolate bacillus from the
blood of patients with active disease. When this failed, he moved on to performing autopsies on
patients who died from yellow fever. Reed used human subjects only after laboratory testing and
autopsies failed to provide answers. The clinical trials that then followed were unusually well-
designed. Reed was later honored for his thorough and careful study methods that predated
formalized clinical trial regulations. Reed used volunteers and obtained informed consent. At
first volunteers were members of the research team; it could be assumed that these volunteers
were fully apprised of the risks and benefits. Subsequently Spanish and American military
personnel and civilians were approached. Reed took the heretofore unprecedented step of
obtaining a written informed consent from participants (Tan and Ahana 2010).
However, several aspects of the early trials by Jenner and Reed would be criticized by
today’s regulatory agencies and IRBs because of particular violations of certain ethical principles
that today set the boundaries for clinical research: beneficence, autonomy and justice (Lebacqz
1980). The principle of beneficence requires that researchers endeavor to ensure the wellbeing of
their study participants beyond simply protecting them from harm. Research may only be
justified based on a favorable assessment of benefits to risks, and usually requires that
investigators conduct animal studies before moving to human trials, so that test articles of
sufficient quality and potency can be assured. Clearly, such rigorous quality and toxicity testing
was not in place before Jenner or Reed inoculated the first human subjects. The principle of
justice requires that individual study participants be selected objectively, and that the pool from
which they are drawn distributes the risks fairly amongst those who are most likely to receive
benefits from the research outcomes. While Reed used volunteers and obtained informed
consent, his cohorts of volunteers were drawn only from individuals within the military or
associated with the military. Reed notes that he received permission from senior military officers
to conduct his research on the military base; however, yellow fever was not confined to military
personnel. One must question whether the use of military personnel was related to the ready
availability of the personnel. The preferential recruitment of military personnel might also be
seen to violate the third principle of ethics, autonomy, which includes the requirement to assure
voluntary enrollment with a full understanding of risks and benefits. How voluntary is the
permission to enroll in a trial when military superiors endorse the trial and could potentially
exert a coercive influence?
The environment in which Reed was operating about a century ago was not devoid of
rules to govern interactions with patients. At its first meeting in 1847, the AMA adopted a newly
defined code of medical ethics, and enunciated the view that “there is no profession of which
greater purity of character, and a higher standard of moral excellence are required than the
medical” (AMA 1847). However, these ethical codes did not speak directly to the conduct of
clinical trials; responsibility for data integrity and ethical conduct remained largely in the hands
of the investigator, with little oversight or review by others.
Early Regulation of Clinical Trials
Governmental oversight and intervention in medical experiments was not in place
throughout the 1880s and early 1900s, and the situation was not greatly changed even in the
1930s with the introduction of the Food, Drug and Cosmetic (FD&C) Act of 1938. The FD&C
Act was introduced in the US under huge public pressure. It was the response to a tragedy in
which a marketed elixir formulation of sulfonamide, made with the diluent diethylene glycol,
killed 107 individuals, many of whom were children. The FD&C Act required that new drugs be
proven safe before marketing (FDA 1938). However, the bar for proving safety was low. Many
of the regulations extending from the FD&C Act of 1938 were focused on labeling requirements
designed to enhance the information given to the consumer in order that safer choices could be
made. The new regulations that governed labeling required information about directions for use,
including warnings governing the use of such products in certain medical conditions or in
children. However, the regulations allowed for exemptions from the labeling requirements if the
drug was to be dispensed on the written prescription of a physician or dentist or if the drug was to
be repackaged before sale to the consumer. The Act did not specify which drugs required a
written prescription and most drugs, with the exception of narcotics, could be purchased with or
without a written prescription (Temin 1979). Without regulations that ensured the oversight of
clinical trials, the data on which drug safety was based continued to be suspect, and protection of
patients using unapproved drugs remained weak.
Legislation involving human subject protections has evolved greatly from 1938, driven
both by scandals that exposed the abuse of human rights, and by pressure from advocates for
clinical trials regulation (Seto 2001). Perhaps the most important driver to shape the rules
regarding human protections in the mid-twentieth century was the response of physicians and
concerned public to the Nuremberg trials of 1946 and 1947. These trials highlighted crimes
against humanity perpetrated by Nazi scientists engaged in experimental interventions that
caused, rather than cured, pain, suffering and death. From the Nuremberg trials an initiative
resulted to define a set of rules, enunciated in the World Medical Association’s Declaration of
Helsinki that could be used to guide human research ethics on an international level. The
Declaration of Helsinki was a document directed primarily at physicians that begins by declaring
that the health of the patient must always be a physician’s first concern. Basic principles of the
Declaration include the sanctity of human dignity and the obligation of the researcher to respect
each individual’s right to protect their personal health and interests, through the process of
informed consent (Rickman 1964). The Declaration of Helsinki warned researchers not to place
the interests of society before the good of the individual (Seto 2001). This document has been
modified several times since its first publication in 1964. The current 32-item code includes a
2000 amendment recognizing the importance and necessity of research protocols being subject to
“consideration, comment and guidance” as well as oversight monitoring from a committee that is
independent of the investigator and sponsor (World Medical Association 1964). The Declaration
continues to be an important underpinning of modern clinical trials ethics.
Tenets of the Declaration of Helsinki were not immediately adopted as a basis for the
regulation of clinical research. The United States did not fully embrace this code until a series of
scandals ensued (McCarthy 1994). One of these scandals, that highlighted failures on several
fronts to adhere to the principles of the Declaration of Helsinki, was the Tuskegee syphilis study
carried out between 1932 and 1972. In this study, hundreds of impoverished black men were
misled about the state of their health and denied antibiotic treatment for their syphilis even though
penicillin was known to be effective, in order to study the natural course of the disease and racial
differences in the clinical manifestation of syphilis (Corbie-Smith 1999).
In 1966, further fuel to the ethical fire was provided by an exposé by Beecher that cited
twenty-two examples of unethical clinical investigations, many of which were performed on
service men, economically disadvantaged individuals, those with mental deficiency, juvenile
delinquents, and prison inmates. In some studies, known effective treatments were withheld; for
example in one such study, chloramphenicol was withheld from patients with typhoid fever, and
twenty-three people died. In others, interventions caused medical problems in an attempt to
understand the nature of disease; for example, one intervention involved transferring melanoma
from a daughter to her mother. Other studies were undertaken to develop surgical techniques;
one such study of percutaneous catheterization of the left heart ventricle resulted in eight deaths.
Beecher concluded by opining, “An experiment is ethical or not at its inception; it does not
become ethical post hoc...” (Beecher 1966).
Ultimately the most influential investigative intervention for changing US law was
almost certainly that associated with the use of a novel anti-nausea medication, called
Thalidomide. The use of this relatively unstudied drug in Europe and Canada by pregnant
women resulted in thousands of birth defects (Rajkumar 2004). FDA medical officer Frances
Kelsey played a key role in keeping Thalidomide off the US market until after the dangers of
the drug were revealed in Europe. Nevertheless, the near-miss that could have resulted in similar
birth defects across the US underlined the fragility of regulations to govern drug safety, and
increased public support for stronger regulation of clinical trials (FDA 2010).
The Kefauver-Harris Amendments to the FD&C Act, passed in 1962 as a response to the
Thalidomide tragedy, required a much more stringent approach to clinical trials. These amendments
substantially changed the level of governmental intervention by introducing requirements that
governmental regulatory agencies approve clinical trial protocols for new drugs, audit quality
assurance practices in drug manufacturing, and insist on rigorous evidence of drug safety and
efficacy before novel products are marketed. Before the Thalidomide tragedy, the Kefauver-
Harris amendments had been in the process of development for a different reason. They were
originally aimed at preventing economic loss to the public from the purchase of new, unproven
drugs with extravagant claims that were being sold at outrageous prices. The basis for the new
legislation was the view that improvements in drug quality and regulation of marketing claims
would prevent ineffective drugs from entering commerce (Peltzman 1973). However, the
Kefauver-Harris Amendments were passed into law in response to growing public concern over
the safety of testing new drugs in humans. After the Amendments were adopted, a new drug
manufacturer was required to prove to the FDA that the drug in fact met all claims made by the
manufacturer as proven by substantial scientific evidence. To that end, the manufacturer was
required to submit testing plans along with supportive preclinical experimentation in animals in
the form of an Investigational New Drug Application. The manufacturer could only market the
drug after the IND had gone into effect and clinical trials had been carried out under controlled
methods that were soon to be regulated under a set of rules called “Good Clinical Practices”
(GCPs).
The Changing Clinical Trial Landscape
The period between the introduction of the FD&C Act and the Kefauver-Harris
Amendments was a period of challenge not only because regulations governing human research
were underdeveloped but because the economic climate was changing as well. These economic
changes had significant implications for the evolution of clinical trial design and methodologies.
Until about the Second World War (WWII), most clinical trials could be characterized as
investigator-sponsor trials, not unlike those of Jenner and Reed. A substantial role for business,
and in particular the pharmaceutical industry, was not apparent in those trials. However, wartime
efforts to develop antibiotics, such as the work of scientists at Oxford University, led by Howard
Florey, on the extraction and purification of penicillin, produced methods for drug scale-up and
manufacturing that opened the door to large-scale drug production
(Torok 1998). These new methodologies drove the development of a burgeoning post-WWII
pharmaceutical industry. A “golden” age of drug development ensued; rapid growth of the
pharmaceutical sector brought hundreds of new drugs to market. The requirements to prove that
these new drugs were safe led to the introduction of a new model of “sponsored” clinical trials, in
which pharmaceutical companies contracted with medically qualified investigators to evaluate
new drug products in the university or clinic. By the 1960s, when the rules governing IND
applications were put into place in response to the Kefauver-Harris amendments, the
pharmaceutical industry was so well entrenched that a relationship in which a commercial sponsor
contracted with clinical investigators to conduct a trial was typically assumed. Such trials were
generally larger than experiments run by a single independent investigator.
Often they involved many investigational sites, each with its own principal investigator who
recruited patients under the same experimental protocol as that used by counterparts at other sites.
This coordination allowed data to be collected efficiently and then collated for statistical analysis.
The 1967 Heart Special Project Committee report, known as the “Greenberg Report”, endorsed
the need for such a cooperative structure through which multi-site, multi-investigator trials could
maintain the integrity of research data and rapidly produce “definitive answers to significant
questions”. The report is still considered relevant guidance in the evolution of procedures for the
organization and operation of multi-center trials (NationalHeartInstitute 1988).
By the 1970s, the numbers of clinical trials were growing rapidly. President Nixon
declared war on cancer and millions of dollars of public money began to pour into clinical trials.
With this growth came concerns about the integrity and ethics of trials that might be hugely
valuable if an important drug were to be marketed on the basis of positive results. Strong
motivation to produce positive clinical results was seen to be a potential problem because it had
inherent conflict of interest elements that might encourage data fraud. Thus attention began to
focus on the structure and operation of clinical trials. What followed through the 1970s and
1980s was a flurry of activities to enhance clinical trial regulation and oversight. A complete
discussion of these activities is beyond the scope of this thesis; however, certain of the more
important events are reviewed herein.
An Ethical Framework for Clinical Trials
Until the early 1970s, the ethical basis for clinical trial decision-making came mostly
from the Declaration of Helsinki, a document written primarily for medical practitioners. Perhaps
one of the most influential activities that spurred a more broadly based consideration of ethical
frameworks was the 1973 National Academy of Science/National Research Council Ad Hoc
Committee for the Study of Ethical Criteria in Drug Evaluation, followed soon after by the 1974
Food and Drug Administration (FDA) Report on the Special Survey: Sponsors and Investigators
of Clinical Investigations. These reports highlighted serious issues associated with clinical trials
including problems relating to the management of informed consent, drug accountability,
protocol adherence, and documentation of study results (Shapiro 1989). The reports drove
increased efforts by the FDA to improve the quality and integrity of the data submitted to it. FDA
developed the Bioresearch Monitoring Program in 1976 in response to findings from its audits
that deficiencies were common both in non-clinical laboratories that were conducting pivotal
animal trials, and in the conduct of sponsors and monitors of clinical trials. Under this new
program, FDA began more systematic, periodic inspections and audits of investigative sites. In
its earliest activities, it scrutinized clinical investigators and IRBs with particular attention to the
process and documentation of informed consent (Nightingale 1983).
In 1974 Congress passed the National Research Act. This Act authorized regulations
requiring that clinical trials have IRB oversight in order to be eligible for government research
awards. It also prompted the establishment of a new commission, called the National
Commission for the Protection of Human Subjects of Biomedical and Behavioral Research.
Amongst its other tasks, this eleven member commission was mandated to “identify the basic
principles that should underlie the conduct of biomedical and behavioral research involving
human subjects and to develop guidelines which should be followed to assure that such research
is conducted in accordance with those principles” (BelmontReport 1978). The work of the
Commission led to a guidance document titled “Ethical Principles and Guidelines for the
Protection of Human Subjects of Research”. This seminal document, also known as the “Belmont
Report”, documented the basic ethical principles identified by the Commission in the course of its
deliberations.
The Belmont Report was a hugely important document for the conduct of clinical
research in the US. It is the document of record to which most subsequent ethical policy has
referred, because of its clear description of three “Basic Ethical Principles” (respect for persons,
beneficence, and justice) as a basis for adjudicating ethical problems that occur in the course of
human research. Part A of the Report makes the distinction between research and medical
practice. It advises that a protocol for a clinical study should be reviewed by an arms-length
oversight body if there is any element of research in the prescribed therapy. Part B of the Report
defines and describes the applications of the three “basic ethical principles”. Respect for persons
is explained as the need to treat individuals as autonomous agents who should be given
information on which they can base non-coerced decisions; if an individual has diminished
autonomy, that person should have the right to enhanced protection. Beneficence is described as
an obligation to first “do no harm” and second to “maximize possible benefits and minimize
possible harms”. The Report acknowledges that some ambiguity exists in decisions to accept risk
for the benefit of greater societal good. The principle of justice addresses the question of “who
ought to receive the benefits of research and bear its burdens”. With regard to justice, the Report
calls for scrutiny in the selection of research subjects. When research leads to the development of
new therapies, those therapies should be tested in a way that does not confer special advantages
or disadvantages on any particular class of persons. Part C discusses the application of informed
consent, assessment of risks and benefits, and selection of subjects (BelmontReport 1978).
Evolution in the Regulation of Clinical Trials: 1980s and Beyond
Since the Pure Food and Drug Act of 1906, there have been more than 200 laws enacted to
create a comprehensive network of public health and consumer protections (FDA 2010). A
significant number of these laws address aspects of clinical trial design and management. Full
descriptions of all of these laws are beyond the scope of this thesis. However, certain laws and
amendments were pivotal in the development of the US rules of today. The following is a listing
of critical changes that have been put into place since the Kefauver-Harris Amendments of 1962.
1987 Prescription Drug Marketing Act: enacted to prevent the entry of substandard,
counterfeit, and ineffective drugs into the drug distribution system. It prohibited the sale and/or
purchase of sample drugs (often used to allow unapproved use of products for testing purposes)
and regulated their distribution, prevented the exportation and subsequent re-importation of the
same drug, required state licensing of wholesale distributors, and established penalties for
violations of the Act (FDA 1987).
1992 Prescription Drug User Fee Act and Amendments: established that if the sponsor
of a drug or biologic study fails to complete a required clinical study by the agreed-upon deadline,
FDA will publish the failure on the internet. The sponsor is required to notify practitioners who
prescribe the drug or biologic of the failure and questions of clinical benefit and/or safety that
remain due to failure to complete the study (FDA 1992).
1997 Food and Drug Administration Modernization Act: improved FDA’s ability to
fulfill its mission in an era of increased complexity of trade, technology, and issues of public
health (FDA 1997). Figure 1 outlines relevant provisions of the law.
Figure 1: 1997 FDA Modernization Act

Patient Access:
- Codified FDA practice to increase access to experimental drugs
- Provided public access to a clinical trials database
- Provided that trials may be required to show promise of new drugs in the pediatric population
- Required the Secretary to develop guidance for inclusion of women and minorities in clinical
investigations

Accelerated Review:
- Codified that one clinical trial may be sufficient in certain circumstances
- Provided for fast tracking when a drug is shown likely to produce clinical benefit
- Allowed that clinical trials may begin 30 days after FDA’s receipt of a complete submission
2003 Pediatric Research Equity Act: required applications for new active ingredients,
new indications, new dosage forms, or new routes of administration to include data for each
pediatric age group for each claimed indication (FDA 2003).
2007 Food and Drug Administration Amendments Act: authorized resources for FDA to
ensure its ability to conduct the complex reviews necessary to approve new drugs and medical
devices and enhanced FDA’s post-market authority. Implementation milestones continued into
January 2012 (FDA 2007). Relevant provisions of the law are outlined in Figure 2.
Figure 2: 2007 FDA Amendments Act

Greater Access to Clinical Trials:
- Expanded resources for research and development of pediatric drugs, biologics, and devices
- Established public-private partnerships for research, education and outreach to accelerate
development, marketing and translational therapeutics
- Expanded the clinical trials registry databank to enhance enrollment

Safety of Human Subjects:
- Authorized post-market surveillance for drugs and class II and III medical devices
- Authorized FDA to require risk evaluation and mitigation strategies (REMS) when aware of
new safety information
- Established a website to improve transparency and access to information on drugs for
consumers and healthcare professionals
Comprehensive laws, including the FD&C Act with its requirements for clinical trials,
and the additional laws controlling specific aspects of clinical trial logistics, such as drug
distribution, clinical trial registries, and pediatric trial requirements, are not by themselves
sufficient. Essentially all of these laws require some authority, typically the Food and Drug
Administration, to amplify the law by promulgating regulations and writing guidance documents
to assist investigators, sponsors and institutions in clinical trial conduct. The regulations in turn
often reference standards and other forms of delegated legislation that further expand on
acceptable approaches and practices in particular areas of concern.
Regulations Governing Human Protection
Numerous regulations now deal with clinical trial conduct and these are undergoing
continual change. They tend to be divided into two main subgroups: regulations governing the
protection of human subjects during research and regulations governing the way that trials should
be conducted and reported. Most rules that consider how human subjects are to be protected
during research are outlined in 21CFR50 and 56. “Protection of Human Subjects”, 21CFR50,
contains general requirements for informed consent and describes those circumstances that, when
present, allow for exemption from informed consent requirements. It further details the required
elements that must be present as part of informed consent documentation. “Institutional Review
Boards”, 21CFR56, defines the “composition, operation, and responsibilities” of IRBs. It
identifies when IRB oversight is required, how IRBs are to be structured, and what authority
should be exercised. The regulations provide IRBs with significant responsibilities and authority
to review, approve, disapprove, or require modifications to proposed research.
As the main body responsible for the oversight of essentially all clinical studies, the IRB
appears to be viewed by the regulations as the first line of defense against trials that have
inadequate designs, unacceptable risk-benefit outcomes, and faulty conduct. However, the
effectiveness of the IRB has been subject to scrutiny over the last decade. In a 2000 letter to
academic research institutions, Donna Shalala, then Secretary of Health and Human Services
(HHS), wrote that the “responsibility for protecting human subjects must be borne by the
institutions that perform the research” (Shalala 2000). The question then was whether the IRBs
were able to carry out this responsibility effectively.
The adequacy of IRB activities as a mechanism to assure clinical trial safety was
challenged in 2000-2002 by two well-publicized US trials that led to a reevaluation of the way
that subjects were admitted to clinical trials and monitored by the IRB. First was the death of
18-year-old Jesse Gelsinger at the University of Pennsylvania in 1999. In this study, researchers at the
University of Pennsylvania “held” the IND and thus acted as both sponsor and investigator. The
death of this young subject from an immunogenic response to the biological agent administered
using a viral vector led to investigations by both the FDA and the federal Office of Human
Research Protections (OHRP), a relatively small office at that time that is described in more detail
below. Both sets of investigations found serious failures related to the management of the clinical
trial. FDA cited deficiencies that began prior to the first human subject being enrolled and
included misrepresentation of the true toxicity profile of the investigational agent, as reflected by
findings in animals that suggested potential risks for humans. Once human studies began, a range
of deviations from Good Clinical Practices was evident. Administrative failures included lack of
standard operating procedures; failure to train study personnel; failure to maintain complete and
accurate informed consent forms, case report forms and source documents; and failure to select
study monitors with sufficient training and experience (CBER 2000). Striking was the lack of
collaboration between the investigators and the IRB. Sponsor-investigators failed to notify the
IRB and FDA of many types of information that are normally expected to be provided, including
changes in protocols and protocol exclusion criteria, as well as presence of serious adverse
events. Criticisms of the adequacy of the IRB went further in the review by OHRP, which
criticized both the PI and the IRB. The wide variety of cited problems was to some extent ascribed
not only to the inadequate trial management at the site, but also the failure of the IRB to oversee
the trial properly. Amongst the criticisms of the IRB was failure to ensure that the informed
consent was adequate. For example, they concluded that the informed consent document did not
address the purpose of the research and contained statements implying the researcher knew the
outcome. Further, study procedures were referred to as “therapy” without reason to believe the
procedures would be therapeutic; the risks and benefits were not inclusive of known facts; and the
document contained language written at a level too high for easy comprehension by the typical
study participant. All of these elements were presumed to be under the oversight of the IRB but
slipped through the review process. Additional findings of deficiency included inappropriate use
of the expedited review process and lack of timely, substantial continuing review by the IRB
(OHRP 2001).
Soon after the tragedy involving Jesse Gelsinger, a clinical trial at Johns Hopkins
University in 2001 resulted in the death of a healthy volunteer. This first-in-human trial was
designed to study the usefulness of an inhaled chemical irritant to produce a human experimental
model of asthma, again by an investigator who also took the role of sponsor. In this trial, a
24-year-old volunteer, Ellen Roche, developed progressive respiratory failure and died within a
month after inhaling the chemical. Investigations by FDA and OHRP found similar failures in
trial management and in the collaboration between the investigators and the IRB. Inadequacies
were found in the way that the investigator conducted the trial, but criticisms were also directed at
its IRB review. Further, the sponsor-investigator and IRB failed even to identify the need to
involve the FDA in the oversight of the trial. FDA found that these same researchers had
previously submitted an IND application in other research that was not approved by FDA due to
the poor quality of the submission. No IND had been filed for the use of the chemical in this
application, although it was required; this failure should have been caught by a vigilant IRB. In
addition, the closed loop that was supposed to ensure that the IRB was apprised of changes and
problems in a timely manner clearly had broken down. The investigators failed to notify the IRB
of changes to the composition of the drug, changes to the dose and delivery system, and
significant adverse reactions of subjects to the chemical. The informed consent document also
had many inadequacies that should have been identified during IRB review. For example, the
informed consent form referred to the chemical as a “drug” when it was graded only for use in the
laboratory; failed to describe risks of toxicity; and was not updated to reflect newly discovered
risks to subjects (CBER 2003). The audit by OHRP cited similar problems. Under 45CFR46,
OHRP found that the IRB failed to obtain sufficient information about the nature of the chemical
and prior use in animals and humans. Additionally, the IRB approved an informed consent
document that did not describe all study procedures and omitted significant information regarding
reasonably foreseeable risks to subjects. It cited administrative issues including ineffective
expedited and continuing review processes, conflict of interest by voting members, lack of
diversity of membership, insufficient staff to manage the volume of work, approvals issued prior
to completion of review, inadequate standard operating procedures and lack of training of IRB
members. OHRP suspended the institution’s federal assurance, which then resulted in suspension
of all federally supported research at the University (OHRP(a) 2001).
Lessons learned from these two clinical trials highlighted recurring concerns regarding
the safety of human subjects that related in some part to the failure of the IRB to provide effective
oversight. Some of these failures were attributed to structural issues in the way that IRBs were
developed and resourced at institutions. IRBs at universities are typically voluntary tours of duty
for already busy researchers. This has placed institutional IRBs under strain as workloads grow
with the progressive increase in the number of clinical trials. Further, HHS
announced a series of requirements to strengthen protections for research subjects in National
Institutes of Health (NIH) funded clinical trials that added further obligations for clinical
researchers. Included were initiatives requiring training in bioethics for investigators and their
staff, specific guidelines for informed consent, expectations that research institutions audit
compliance with informed consent regulations, and submission of monitoring plans along with
grant applications in the case of gene transfer research and small scale phase I and II clinical trials
(Shalala 2000). The question then arose, what is the role of the IRB in ensuring that all of these
requirements are met? A focus on IRB capabilities was strengthened by an expansion of the
“Office for Human Research Protections” to replace the smaller and less proactive “Office for
Protection from Research Risks”. Now fully focused on protecting human subjects of research,
the new OHRP was charged with providing leadership for all federal agencies that fund research
involving human subjects (OHRP(b) 2011) and with developing additional methods of certifying
that IRBs were equipped to function effectively.
Asking an IRB to provide the same level of oversight as an industry sponsor may be
unrealistic. This concern can be appreciated by considering the traditional role and composition
of an IRB. The IRB generally plays a passive role by accepting and evaluating information
provided by the study sponsor and investigator. Approval of research by an IRB consists mainly
of document review (Hoffman 2002). The investigator submits approved grant proposals, budget
worksheets, investigator’s brochure, study protocol, informed consent documents, patient
recruitment materials, subject’s Bill of Rights, plans for privacy assurance, laboratory
worksheets, drug inventory worksheets and other relevant study documents. Based on these
documents, the IRB membership may ask for clarification, additional information, or
modifications to the proposed study, before it gives or declines approval. Typically, IRB
members do not visit study sites to validate the contents of the documents or to inspect
environmental or staffing conditions. It is assumed that such audits and inspections are to be
arranged by the study sponsor and/or the principal investigator. Further, even if the IRB were
authorized to be an active auditing body, its composition typically is not appropriate for such a
role, at least as currently configured under the guidelines of ICH E6 Good Clinical Practices.
These prescribe that an IRB must be comprised of at least five members who “collectively have
the qualifications and experience to review and evaluate the science, medical aspects, and ethics
of the proposed trial”. One of the members must be non-scientific and one member must be
independent of the institution/trial site (ICH 2011). However, active oversight requires
substantial time and training in auditing methods, both of which are in short supply amongst
busy faculty. Such audits presumably would require that universities
hire additional specialized staff. Thus despite the strengthening of IRBs, there still exists concern
that IRBs have neither the resources nor the institutional authority to conduct continuing
oversight of clinical trials.
In April of 2010, 21CFR Part 56 was revised to include the requirement that an IRB must
register federally if reviewing research that will support an IND application or marketing
application regulated by FDA or research that is federally funded. The registration must be
renewed every three years (21CFR56 2011). As part of this certification process, a much more
rigorous oversight of IRBs was put into place, but this move has not as yet resolved the problem
of IRB composition, authority and expertise to provide a greatly expanded oversight of
investigator-sponsor trials.
Regulations Governing the Logistics of Clinical Trial Conduct
An additional regulation central to the conduct of clinical trials is that dealing with the
management and logistics of the trials. Key regulations setting out the basic rules for
investigational new drug (IND) studies are found in 21CFR312. An IND is required when a
“new drug, antibiotic drug or biologic drug is used in a clinical investigation” (21CFR312 2013).
According to 21CFR312,
(a) An investigational new drug may be used in a clinical investigation if the following
conditions are met:
(1) The sponsor of the investigation submits an IND for the drug to FDA; the IND is in
effect under paragraph (b) of this section; and the sponsor complies with all applicable
requirements in this part and parts 50 and 56 with respect to the conduct of the clinical
investigations; and
(2) Each participating investigator conducts his or her investigation in compliance with
the requirements of this part and parts 50 and 56.
More recently, confusion about when and whether an IND must be sought has been clarified in a
draft guidance document, titled “Investigational New Drug Applications (INDs): Determining
Whether Human Research Studies Can Be Conducted Without an IND”.
This guidance further defines the boundaries that determine whether a trial must be conducted
under FDA oversight. It states that drugs lawfully marketed in the US may be exempt from IND
requirements if the investigator is not seeking a new indication or significant labeling or
marketing changes. Nevertheless, an IND for a commercially available drug may be needed even
for studies of the same indication for use if certain conditions, such as changes to route of
administration, dose ranges or nature of the patient population, are associated with increased risk.
The definition of boundaries laid out in this part of the regulation is important to make clear that
not all studies of pharmaceuticals will require FDA oversight. However, such studies would then
be conducted without government oversight and thus may run the risk of a lower level of
scrutiny, where the only oversight body is the IRB. Often such studies are conducted as
investigator-sponsor trials and have the additional challenges of oversight that will be described
below.
Another part of 21CFR312 that is particularly relevant to this thesis identifies the relative roles
of the sponsor and investigator in a clinical trial.
The sponsor is the entity applying for the IND and subsequently holding the IND number.
Requirements that must be satisfied by the sponsor include the
“need to conduct an investigation properly, ensuring proper monitoring of the
investigation(s), ensuring that the investigation(s) is conducted in accordance with the
general investigational plan and protocols contained in the IND, maintaining an effective
IND with respect to the investigations, and ensuring that FDA and all participating
investigators are promptly informed of significant new adverse effects or risks with
respect to the drug.”
In contrast, the investigator is described as having a narrower role that includes conducting the trial
“in accordance with the relevant, current protocol(s) [and]…mak(ing) changes in a
protocol after notifying the sponsor, except when necessary to protect the safety, the
rights, or welfare of subjects”.
The investigator is expected to ensure informed consent and IRB approval, and under most
circumstances continues to be the conduit by which information about problems of safety or
protocol adherence is reported to the sponsor and IRB, though not the FDA. An arrangement in
which the sponsor and investigator have different but overlapping responsibilities is important
because it provides a system of checks and balances to minimize the possibility of fraud, protocol
deviations, and unreported issues of participant problems. The sponsor is required to audit the
work of the investigator through monitoring activities, usually conducted by an individual or
organization under the employ of the sponsor, so that any problems during the conduct of the trial
at the clinical sites can be detected and corrected quickly and effectively.
By separating the sponsor and investigator in the way outlined in the regulation above,
one can see a clinical trial arrangement typical for many studies today, in which the sponsor is a
separate entity from the investigator(s). The regulation nevertheless can be satisfied by an
alternative arrangement in which an investigator is also the trial sponsor. In this situation, the
role of the investigator is greatly expanded. A “sponsor-investigator” is typically an individual
who both initiates and conducts a clinical investigation. The product under study is administered,
dispensed or used by the subject under this person’s immediate direction. To satisfy the
regulation, the responsibilities of the sponsor-investigator include both those of the sponsor and
those of the investigator (Wolf, Katz et al. 2005).
The regulations governing clinical trials in the US and elsewhere have gradually
increased the attention paid to the details of clinical trial logistics. They now assume that trials
will implement a system of Good Clinical Practices, a well-defined term for best practices that
assure quality and consistency in clinical trial management and the integrity of the data derived
from those studies, satisfy the statute and regulations, and ensure the safety of trial subjects. The
acceptable practices themselves are not spelled out in the regulation. Rather the regulation
points to a set of guidance documents, including documents
developed by other organizations with which the FDA works closely. Perhaps the most important
of these organizations is the International Conference on Harmonization (ICH). The ICH brings
together the pharmaceutical industry and the drug regulatory bodies of the US, Europe and
Japan for the purpose of "achieving greater harmonization in the interpretation and application of
technical guidelines and requirements for pharmaceutical product registration". Its members
attempt to develop documents that harmonize practices common to all of the participating
regions (ICH 2011). The goal of the ICH is to establish a common set of practices that can
satisfy all of the regulatory agencies for drug trials, replacing an older patchwork in which each
regulatory system made its own rules and clinical trials often had to be duplicated across
jurisdictions. Common standards help to prevent duplication of preclinical testing and clinical
trials and to increase the efficiency of introducing new drugs to the global market. The ICH has
expended much effort to develop a series of standards that have been accepted not only by the
FDA but also by the drug regulatory bodies of Europe and Japan.
Good Clinical Practices encompass ethical and scientific quality standards for “designing,
conducting, recording and reporting trials that involve human subjects”. When practiced, GCPs
ensure the well-being of the research subject and the integrity of data generated by the trial.
Figure 3 focuses on the six principal areas of GCPs.
Figure 3: Six Principal Focuses of Good Clinical Practices

1) Institutional Review Board: responsibilities, composition, function and operations
2) Clinical Investigator: qualifications; responsibilities with regard to resources, medical care
of study subjects, informed consent, trial procedures, documentation and retention of records
3) Quality Assurance: trial management and monitoring, submissions to regulatory authorities,
investigational product handling and adverse event reporting
4) Trial Protocol Design: objectives and purpose, assessment of safety and efficacy, statistics
and data management, and selection and withdrawal of study subjects
5) Investigator's Brochure: inclusiveness of contents
6) Essential Documents: availability and completeness before, during and after the trial
Although the guidelines for Good Clinical Practices were developed in large part to
ensure the proper conduct of research pivotal to the approval of new drugs and conducted under
an Investigational New Drug protocol, their use is not limited to trials conducted by commercial
sponsors of new pharmaceutical products. It is now the expectation that such principles and
practices will underlie clinical trials that are conducted by sponsors who are also investigators,
even if the trial is small and local. Further, similar requirements are assumed for smaller medical
device trials that are often directed by investigators who exercise the role of sponsor as well. This
is because many of the tenets of the GCP guidelines are important to assure data integrity, quality
and safety of human subjects regardless of trial size, purpose or management structure. To
illustrate, 21CFR Part 812 defines the circumstances under which an investigational device may
be used in a clinical trial to “collect safety and effectiveness data required to support a Premarket
Approval (PMA) application or a Premarket Notification 510 (k) submission to FDA”, including
responsibilities of investigators and sponsors (21CFR812 2013). Device trials must be approved
by FDA and an IRB if the device is classified as significant risk; trials of devices of non-
significant risk require only the approval of an IRB. In addition to an IDE and/or IRB approval,
clinical trials of devices require informed consent, labeling of the device for investigational use
only (21CFR801 2013), monitoring of the trial, and records and reporting. The regulations
discussed previously, Protection of Human Subjects (21CFR50 2013) and Institutional Review
Boards (21CFR56 2011), apply to investigational devices just as they apply to investigational
drugs.
It may be more difficult to assure the compliance of smaller trials conducted by sponsor-
investigators, because conducting a clinical trial requires not only scientific process knowledge
but also business and regulatory acumen (Sakraida, 2010). Additionally, considerable expense
can be involved in satisfying some of the specific regulatory requirements. When large trials are
conducted by pharmaceutical companies, significant resources are directed toward assuring
oversight and support of clinical investigators. Trials conducted under the supervision of a
corporate sponsor are generally monitored intensively by that sponsor throughout the duration of
the trial. To this end, “study monitors” are deployed to study sites to assess the investigator’s
compliance with the study protocol and good clinical practices (Switula 2000). A typical
monitoring visit consists of interviews with the investigator(s) and study personnel, chart reviews
to assess that recorded data can be traced to source documents and documentation reviews to
assure that data have been recorded accurately and completely. The monitor further evaluates
informed consent documentation to assure that it contains all required signatures, and was
obtained prior to subject entry into the trial. Additional inspections are directed at confirming
that: enrolled subjects meet inclusion/exclusion criteria; serious adverse events are recorded,
investigated and reported; recruitment is occurring in a timely manner; study drugs are stored,
inventoried and administered according to protocol; and all regulatory documents are complete
and sent to data processing centers.
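The monitoring checks just enumerated can be pictured, purely for illustration, as an automated verification pass over a site's records. Every field name, record layout and data value in this sketch is a hypothetical assumption for the example, not part of any regulation or real monitoring system.

```python
# Illustrative sketch only: the monitoring-visit checks described above,
# expressed as a simple verification pass over one site's records.

MONITORING_CHECKS = {
    # Informed consent obtained before the subject entered the trial
    "consent_signed_before_entry": lambda site: all(
        s["consent_date"] <= s["entry_date"] for s in site["subjects"]),
    # Enrolled subjects meet inclusion/exclusion criteria
    "eligibility_met": lambda site: all(
        s["meets_criteria"] for s in site["subjects"]),
    # Serious adverse events recorded at the site were all reported onward
    "saes_reported": lambda site: site["saes_recorded"] == site["saes_reported"],
    # Drug accountability: received = dispensed + remaining inventory
    "drug_accountability": lambda site: (
        site["drug_received"] == site["drug_dispensed"] + site["drug_on_hand"]),
}

def monitor_site(site):
    """Return the names of the checks that failed for this site."""
    return [name for name, check in MONITORING_CHECKS.items() if not check(site)]

example_site = {
    "subjects": [
        {"consent_date": "2012-01-05", "entry_date": "2012-01-06", "meets_criteria": True},
        # This subject was consented a day AFTER entering the trial:
        {"consent_date": "2012-02-10", "entry_date": "2012-02-09", "meets_criteria": True},
    ],
    "saes_recorded": 3, "saes_reported": 3,
    "drug_received": 100, "drug_dispensed": 60, "drug_on_hand": 40,
}

findings = monitor_site(example_site)  # flags the consent-timing deviation
```

In practice these determinations are made by a human monitor against source documents; the sketch merely makes explicit the pass/fail logic behind each inspection item.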
However, such oversight may not exist for a trial in which the sponsor is also the
investigator. Unlike industry trials, in which the sponsor monitors sites frequently over the
course of the study, investigator-sponsor trials often have fewer oversight opportunities, because
typically no third party is engaged to supervise and train the investigator and staff. And unlike
industry trials, in which irregularities, protocol deviations, and significant adverse events must be
reported to an independent sponsor, no such independent reporting line exists in an investigator-
sponsor trial. The question then must be asked: Who is monitoring investigator-sponsor trials,
and is this oversight equivalent to that in industry-sponsored trials and adequate to satisfy FDA
regulations?
Gauging Compliance with Clinical Trial Regulations
Whenever specific practices are required by regulation, it is important to have methods to ensure
compliance and metrics to measure the state of compliance both in a single trial and across a
portfolio of trials. Typically three ways are used to assess compliance with GCPs: audits by
sponsoring organizations, audits by institutional review boards and audits by government
regulators.
1) Audits by Sponsoring Organizations
In commercially sponsored trials the investigator and the sponsor contractually agree that a
monitor representing the sponsor will be allowed on site at regular intervals to audit records of the
study. The primary purpose of the visits is to determine that the site is adhering to the protocol
and the collected data are complete and accurate (Wolf, Katz et al. 2005). The investigator agrees
to cooperate with the study monitor and allow access to source documents required to verify
entries into case report forms. Because legally protected patient information may be inspected
during the monitoring visit, the sponsor agrees to maintain anonymity and confidentiality of study
participants. In addition, the visit may be used to ensure that adequate financial and material
resources are available and training is provided to the staff at the investigative site. Usually the
relationship between the monitor and the site is interactive. The visits allow the principal
investigator and study staff to discuss issues that have been troubling or problematic, such as
challenges of recruitment, workload leveling, and patient-related questions or adverse events of
particular concern. Both parties in this partnership share a common goal of developing accurate
data of high quality in a timely manner.
In 2010, the Tufts Center for the Study of Drug Development surveyed 157 centers in North
America, Europe, Latin America and Asia regarding operating characteristics of their sites. Sites
reported an average of 5.5 industry monitor visits per month compared to one visit by a
government oversight agency in the past five years (Getz and Zuckerman 2010).
2) Audits by IRBs
GCPs dictate that an IRB “should safeguard the rights, safety and well-being of all study
subjects”. This includes ensuring that the study design is sound and that risk to study subjects is
minimized, selecting participants in a way that does not favor or disadvantage any subset of
persons, obtaining and documenting informed consent from each person, ensuring the research
plan contains adequate provisions for monitoring data collected in the study, and protecting the
privacy of individuals and their data. The IRB accomplishes this through a variety of
mechanisms. Most importantly, the IRB reviews the study protocol and investigator’s brochure
and reviews and approves the informed consent document. Additionally, the IRB is responsible
to consider the qualifications of the investigator, conduct on-going continuing review of the
study, review changes to protocol and informed consent documents and review adverse events
and deviations from protocol. Changes to the risk profile of the drug, including new information
that may adversely affect subjects or conduct of the trial, may cause the IRB to withdraw
approval of a study within their jurisdiction.
As described in detail above, IRBs might have limited abilities to audit trials because of
scarce resources. However, some IRBs or associated administrative offices are starting to conduct
audits of clinical trials under their oversight. Scripps Clinic in La Jolla, California is one example
of such an approach. Scripps Clinic is a non-academic, multi-hospital, multi-clinic enterprise that
has developed an internal Scripps Office for the Protection of Research Subjects (SOPRS).
SOPRS supports investigators and research staff with training in federal and local regulations and
provides a limited auditing function that “stresses prevention rather than punishment”. SOPRS
visits investigative sites three times during the course of a clinical trial: prior to enrollment of the
first subject to review plans for the conduct of the study; at the time of consent of the first subject
to witness the informed consent discussion and documentation; and a third time once several
subjects have been enrolled to review study records and documentation (Bigby 2002).
A 2008 survey by the Clinical and Translational Science Award (CTSA) consortium
identified three potential models for support and oversight of investigator-sponsors in academic
health centers. In the first model the investigator-sponsor is responsible for all IND/IDE
regulations with no institutional support beyond that of the IRB. In this model, the investigator-
sponsor may perceive less bureaucracy, but greater variability may be present in the application
of IND/IDE regulations. Further, by making the investigator solely responsible, there is a risk of
less institutional awareness and greater institutional exposure, as well as a higher potential for
issues to arise if this investigator is responsible for work at multiple sites. The second model
provides sponsor-investigators with access to regulatory expertise through a central office that is
consultative in its approach, like that at Scripps and other institutions, for example Duke
University and Mayo Clinic (Duke University 2011; Mayo Clinic 2011). This model provides for
some institutional involvement, allows for tracking of INDs/IDEs, and has the potential to define
institutional best practices. Investigator-sponsors are not required to interact with the central
office or to avail themselves of the expertise or educational resources offered, potentially
resulting in “standards drift”. The third model is one of full-service institutional involvement in
which an institutional office ensures sponsor-investigators comply with all IND/IDE
requirements. This model permits a standardized approach within the institution and allows for
full tracking of IND/IDE compliance. It permits greater institutional control of risk because
training/certification can be required and central monitoring and/or auditing activities are part of
the oversight. In this model the central office must have adequate resources to prevent high-
volume investigators from receiving more or quicker attention than lower-volume investigators. Such
a model is currently being implemented in a few academic centers where sufficient funding exists
from CTSI grants or institutional coffers, for example at the University of Pennsylvania
(University of Pennsylvania 2011). Because such centralized services might involve charges to
investigators for their activities, there is potential for increased costs to the investigator (Berro,
Burnett et al. 2011).
3) Audits by Governmental Agencies
Governmental agencies can have different roles in clinical trials. Some agencies that fund
research, such as National Institutes of Health (NIH) and National Science Foundation (NSF), are
essentially sponsors for trials. However, unlike sponsoring companies, these sponsoring agencies
seldom provide monitoring or auditing functions for the trials. Instead, they shift the burden to
the recipients of the grants or contracts. For example, NIH requires that entities have a system in
place to audit clinical trial data and protocol compliance. It expects that trials under its
jurisdiction will be monitored by some kind of quality assurance unit as well. NIH policy for
data and safety monitoring requires oversight commensurate with the nature, size, complexity and
degree of risk associated with the trial (NIH 1998). The expectation is that the trials should not
need yet another level of audit from the government agency. Thus, an onsite review by NIH is
unlikely, but in the event of such an audit, it would include verification of
oversight by an IRB, documentation and content analysis of informed consent forms and review
of handling of investigational drugs. Additionally, individual subject files are reviewed for
completeness of informed consent documents, eligibility criteria, compliance with protocol
directed treatments, toxicity grading, treatment response or non-response and outcome and
accuracy of data recorded (Weiss and Tuttle 2006).
In contrast, regulatory agencies such as FDA have a well-developed system for audits,
and these tend to be arm's-length or even confrontational (Kuehn 2009). FDA's compliance
program, called the Bioresearch Monitoring Program (BIMO), conducts onsite inspections of
studies used to support applications/submissions to FDA and includes monitoring of clinical
investigators, sponsors, study monitors, contract research organizations and IRBs (Meeker-
O’Connell and Ball, 2011). The goal of these audits is to confirm that trials are being monitored
and controlled adequately. Audits by BIMO may be conducted in the routine course of the trial
or may be conducted for cause. Routine audits are generally done in support of a marketing
application or submission. These audits are usually conducted following completion of studies
and involve comparison of data submitted to source documents at the study site. For-cause audits
may be conducted in response to significant adverse events at a study site or to investigate a
complaint. If less serious deviations in the trial are found, a list of observations is documented in
writing by the investigator before leaving the study site visit. For findings of this nature, the
sponsor and/or investigator must respond in writing to FDA with corrective actions and plans for
prevention of reoccurrence. If the deviations are systemic or serious, the FDA calls for
immediate remedial action in a “warning letter” to the investigator, institution and/or sponsor that
is also made public on the FDA website. FDA issues warning letters as a means of providing
opportunity to the investigator, institution, and/or sponsor to take prompt corrective action before
it initiates enforcement actions (Meeker-O'Connell and Ball 2011). The expectation is that a
response to the warning letter will become a critical component of the entity's continuous
quality improvement process which then leads to higher quality research data and enhanced
human subjects protections (Marcarelli 2008).
Between January 2005 and December 2010, FDA’s Center for Drug Evaluation and Research
issued warning letters to 53 clinical investigators (FDA 2005-2010). Outlined below are seven
common deficiencies together with the relative percentage of letters that contained reference to
the deficiency.
1) Staffing challenges (28%): Failure to supervise individuals to whom study procedures
were delegated, inadequate training of study staff, not notifying FDA and the IRB of
sub-investigators, and not ensuring staff were carrying out their responsibilities fully and
according to the investigational plan.
2) IRB oversight challenges (49%): Failure to ensure that an IRB was responsible for initial
approval and ongoing review of the study and failure to notify the IRB of changes to the
investigational plan and/or significant adverse events.
3) Informed consent challenges (55%): Failure to obtain informed consent from each
subject, use of expired consents, consents without required statements and information,
missing signatures, times, and dates, non-qualified persons obtaining consent, and
process deficiencies including not allowing sufficient time for potential subjects to
consider alternatives.
4) Recruitment challenges (42%): Enrollment of subjects not meeting inclusion and/or
meeting exclusion criteria.
5) Deviations from protocol (72%): Failure to conduct the study according to the
investigational plan, including omission and improper timing of study-related procedures
and administering study procedures prior to obtaining informed consent.
6) Documentation challenges (66%): Failure to maintain accurate and complete case
histories, missing source documentation, entries into case report forms that conflict with
source documentation, apparent falsification of data entered, missing dates and times.
7) Test article accountability challenges (43%): Failure to maintain adequate records to
account for the receipt, disposition, storage and dispensing of study drugs.
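As an illustrative aside, percentage figures of this kind can be computed by coding each warning letter for the deficiencies it cites and tallying citation rates across the set of letters. The miniature data set in this sketch is fabricated for the example and is not the set of 53 letters reviewed in this thesis.

```python
# Purely illustrative: deriving citation rates from a coded set of warning
# letters, where each letter is represented by the deficiency codes it cites.

from collections import Counter

DEFICIENCY_CODES = [
    "staffing", "irb_oversight", "informed_consent", "recruitment",
    "protocol_deviation", "documentation", "test_article_accountability",
]

def citation_rates(letters):
    """Percent of letters citing each deficiency code, rounded to one decimal."""
    # set() ensures a letter citing a deficiency twice is counted once
    counts = Counter(code for letter in letters for code in set(letter))
    n = len(letters)
    return {code: round(100 * counts[code] / n, 1) for code in DEFICIENCY_CODES}

coded_letters = [
    ["protocol_deviation", "documentation"],
    ["informed_consent", "protocol_deviation"],
    ["staffing", "documentation", "protocol_deviation"],
    ["recruitment"],
]

rates = citation_rates(coded_letters)  # protocol_deviation cited in 3 of 4 letters
```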
Figure 4 outlines the seven common deficiencies cited in FDA warning letters to clinical
investigators and the relative percentage of letters that contained reference to the deficiency.
Figure 4: Warning Letters with Selected Types of Deficiencies (adapted from FDA Warning Letters
2005-2010)
Monitoring Investigator-Sponsored Trials
The foregoing summary of monitoring practices underlines an important concern about
investigator-sponsor trials: there is no industry partner to monitor the trial. Further, neither the
IRB nor the funding source (if one exists) is usually in a position to monitor such trials on a
regular basis. How, then, do such trials assure the quality and integrity of trial conduct?
This research is directed at better understanding this aspect of trial management. In this
thesis, we are interested in the methodologies used by investigator-sponsors to ensure compliance
with GCPs. We are particularly interested in the challenges faced by investigator-sponsors in the
vulnerable areas commonly cited as deficiencies in clinical trials by FDA and OHRP. To ensure
that this investigation is organized systematically and comprehensively, it is helpful to begin with
a scientific framework for the analysis. In this study, we will rely on two frameworks. The first
is the GCP framework established by ICH guidance and discussed previously as a benchmark for
GCPs. The second is a framework developed by quality systems professionals, the PDCA
(plan-do-check-act) cycle described below, which divides a project such as a clinical trial into
four phases that form an iterative cycle of continuous quality improvement.
Methods for Assessing Regulatory Compliance
To understand the current state of oversight in a clinical trial, it is important to have a
systematic approach to evaluating its quality system. Such an evaluation is not the same as an
audit carried out by a regulatory agency, which is essentially directed at determining whether
the clinical trial is following all of the regulations. It should instead be broader, so that the
systems used to ensure quality can be characterized. By understanding the root causes of certain
types of non-compliance, it may be possible to identify targeted solutions that will facilitate
adherence to GCPs.
The challenge of ensuring a robust quality system in a clinical trial is similar in many
respects to that of ensuring a quality system in any other activity. Effective approaches to
manage quality practices have been subject to much study for nearly a century, since Walter
Shewhart began to focus on controlling manufacturing processes in order to assure predictable
outcomes. Shewhart applied statistical quality-control methods to data from industrial processes,
to determine whether the process was stable and “in control” or whether the process was affected
by problems that needed to be fixed (Plsek 1993). W. Edwards Deming, a physicist working for the
US Department of Agriculture, became a proponent of Shewhart’s methods. Following WWII,
Deming gained prominence as a result of his work to improve quality practices in Japan, where
his ideas were rapidly demonstrated to result in improved production outcomes. Deming made
huge contributions to quality management. Perhaps the most cited of these accomplishments was
one that may actually have been developed by his Japanese colleagues, but has nonetheless
become famous as the “Deming Cycle” (Moen and Norman 2010). This four-step model for
continuous quality improvement, Plan, Do, Check, Act (PDCA) (Figure 5), remains one of the
most widely used models of process improvement in many industries, including healthcare (ASQ
2011). It has been instrumental as a foundation for studies of quality systems in part at least
because of its simplicity and ease of application to many different situations.
Figure 5: The PDCA Cycle (Modified from American Society for Quality, 2011)
In the PDCA framework, the quality of a project is first instilled into the project by
Planning. It is in this early stage that the project is focused and clarified, and that it is possible to
identify and select opportunities for improving compliance and anticipating risks, including an
analysis of possible causes and solutions. The second step is Doing, in which activities are
carried out in accordance with the plans. The third step is Checking, in which activities are
assessed and benchmarked against desired states. Finally the fourth stage, Acting, involves taking
remedial measures to assure that deficiencies identified in the earlier stages are corrected and
prevented from recurring. These results are fed back into the planning stage so that the project is
improved iteratively by repeating the PDCA cycle (Plsek 1993, Moen and Norman 2010). For a
quality system to be effective, all four elements must be operating effectively. Diagnosing
quality issues can be aided by deconstructing the quality cycle to see where problems exist.
In this thesis the PDCA cycle will be explored as a way to gain insight into quality
management in investigator-sponsor trials. I would like to probe the extent to which quality
deficiencies, if present, are preferentially associated with one or more phases of the quality cycle.
Thus we must design an instrument and approach that would permit us to differentiate
deficiencies in different parts of the cycle. What is the evidence for planning to assure GCP
compliance? What is the evidence that the trial has been conducted in a way that adheres to GCP
precepts? What is the evidence that assessments or audits have been carried out to identify
deficiencies? What is the evidence that follow-up actions have been taken to correct those
deficiencies?
The challenge in this study is to design an assessment tool that can identify the state of
quality management at different stages of the clinical trial. It should be possible to assess the
effectiveness of the four PDCA steps by examining the mechanisms underlying the organization,
conduct and auditing practices of the study, in addition to the overall effectiveness of the trial to
adhere to GCPs and to achieve safe outcomes for study subjects (Wagner-Bohn, Ripkens-
Reinhard et al. 2007). Further, by knowing the most common deficiencies that are typical of such
trials, it should be possible to focus on particular areas of concern that have been consistently
highlighted as problematic by regulatory agencies.
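One way to picture such an assessment tool is a grid crossing the commonly cited deficiency areas with the four PDCA phases, so that gaps in evidence can be located by phase. The sketch below is offered only as an illustration under stated assumptions; the scoring scheme (None = not assessed, 0 = no evidence, 1 = evidence present) and the area labels are illustrative, not the instrument developed in this study.

```python
# A hedged sketch of an area-by-phase assessment grid for a clinical trial.

PHASES = ["plan", "do", "check", "act"]
AREAS = [
    "informed_consent", "irb_submissions", "study_team", "protocol",
    "documentation", "regulatory", "test_article_control",
]

def blank_grid():
    """Area-by-phase grid; None marks 'not yet assessed'."""
    return {area: {phase: None for phase in PHASES} for area in AREAS}

def weakest_phase(grid):
    """The PDCA phase with the most areas lacking positive evidence."""
    def gaps(phase):
        return sum(1 for area in AREAS if not grid[area][phase])
    return max(PHASES, key=gaps)

grid = blank_grid()
for area in AREAS:
    grid[area]["plan"] = 1   # e.g. protocol, consent form and SOPs on file
    grid[area]["do"] = 1     # e.g. completed CRFs and signed consents
# "check" and "act" left unassessed: no audits, no corrective actions recorded
```

Deconstructing the grid by phase, as `weakest_phase` does, mirrors the idea of asking whether deficiencies cluster preferentially in one part of the quality cycle.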
Evaluating Clinical Trials with a PDCA Framework
Step 1: Plan
When a clinical study is planned, an experimental protocol is developed that guides the
logistical activities of the study, but also lays out the elements needed to ensure that those
activities are carried out in compliance with GCPs. A well-written protocol facilitates monitoring
and prevents unnecessary deviations that might compromise subject safety and study integrity
(Dixon, 2006). At this stage, investigators must plan for effective operations in all of the seven
areas of activity that are commonly identified as deficient by regulatory agencies. Deficiencies in
the planning process should be evident from artifacts associated with that planning process,
including the documents developed to guide the trial. Such documents include the protocol itself
as well as associated data collection forms [often called case report forms (CRFs)], the informed
consent form, submissions to the IRB, staffing plans and documentation, FDA submissions if
relevant, and documented methods to distribute and account for test and control articles during
the trial (Dixon, 2006).
1) Experimental Protocol
In the experimental protocol there should be evidence of a recruitment plan to identify
subjects with appropriate risk/benefit ratios that meet inclusion/exclusion criteria. The
protocol should also define a number of other logistical elements important as part of
GCPs. These would include planned safety evaluations at defined intervals and activities
and data to be collected at each study visit. Provisions in the protocol should include a
process for revision in the case of unexpected significant adverse events as well as
defined criteria for when to discontinue the trial for reasons of safety.
2) CRFs/Case Histories
In an investigator-sponsor trial, case report forms and other data collection forms must be
developed by the study team itself, and thus they mirror the clarity with which the team
approaches the trial. The forms should be organized so that all necessary
information is captured and available. Mechanisms to organize and secure the case report
forms should be present and traceability methods to ensure the presence of source
documentation should be described.
3) Informed Consent
Planning for informed consent should be evident by the presence of informed consent
forms that have all of the required elements. It should also be clear from specific
instructions in the protocol or investigator’s binder that personnel qualified to conduct
this process have been identified and methods have been put into place to ensure that
subjects will be thoroughly informed about the trial prior to the first trial procedure.
4) Submissions to the IRB
Effective planning for information-sharing with the IRB would include documentation to
describe the timing and methodology for periodic reporting. It would also include formal
documentation of the requirements and methods to be used to report serious,
unanticipated adverse events and protocol revisions.
5) Investigators/Study Team
Planning for an effective study team would include evidence that investigators have
considered the substantial time and resource needs for comprehensive support of their
study. Documents to support such planning might include not only the presence of
organizational charts and curricula vitae, but also planning for clinical staff training and
evaluation, as demonstrated by memos, lesson plans and certificates from training
exercises. Potential conflicts of interest for the investigators and study team should be
defined with a determination of actions to minimize the impact of such conflicts.
6) Regulatory Considerations
Planning to assure compliance with regulatory agency requirements might include notes
suggesting that regulatory requirements were assessed. In cases where no IND or IDE
was filed, planning notes might identify why a decision not to submit an application was
taken. Evidence that key documents are reviewed and approved by the study director
should be apparent before the study has enrolled its first patient.
7) Test Article Control
Planning efforts to manage test and control articles should be apparent from the
investigational protocol, but should also be apparent from the presence of test and control
article log books or data collection forms. Further, such planning documents should
include standard operating procedures for the storage, security and disbursement of the
drugs or devices, in place and approved before the start of the study.
Step 2: Do
When a clinical trial is carried out, ample evidence should be present to demonstrate
compliance with GCPs. Deficiencies in the planning phase can be compensated to some degree
by a well-trained team with a strong understanding of GCPs and substantial experience in
conducting trials. Conversely, even a well-planned trial can show deficiencies in the "doing"
phase if it is executed poorly. Failures in the "doing" phase should also be detectable in the
areas of concern to regulatory agencies.
1) Informed Consent
Appropriate execution of informed consent activities should be, at least in part, apparent
from associated documentation. The most obvious evidence of compliance would be the
presence of properly completed informed consent forms, signed prior to the start of
treatment. Supporting materials might include video recordings of informed consent
processes for vulnerable individuals, including those with language barriers; memos to
translators; or subject enrollment logs that document the conduct of informed consent
procedures.
2) Submissions to IRB
Adherence to GCPs should be evident from the IRB submissions themselves. One
might expect that, in addition to the submissions, IRB correspondence identifying
adverse events or protocol deviations would be on file, supported by a log of serious
adverse events.
3) Investigators/Study Team
Evidence of compliance with GCP regulations should be demonstrated by documentation
regarding the appropriateness of investigator qualifications and roles. Evidence that the
investigator and study team understand and act in accordance with expectations can be
found in well-maintained calendars, screening logs, and records regarding the
management of each participant. However, evidence such as training sign-in sheets and
certificates from required courses should also be present to support the assertion that
members of the study team are properly trained with regard to the protocol and GCP
requirements.
4) CRFs/Case Histories
A study must collect accurate data regarding interventions and outcomes. The proper
conduct of this aspect of the trial should be apparent from the presence of fully completed
CRFs that are consistent with source documentation.
5) Regulatory Considerations
Compliance with regulatory requirements should be evident from the presence of an
approved IND or IDE if such a submission is warranted, and more importantly from the
presence of well-organized subsequent amendments, annual reports and adverse events
notifications.
6) Test Article Control
Test and control article management is in compliance if drugs or devices can be identified
in storage according to the protocol requirements. It is also reflected in accurate
documentation regarding the disposition of every test and control article.
Step 3: Check
The “checking” phase, unlike the “planning” and “doing” phases, typically cannot be
easily decomposed into particular areas of concern, because clinical trial “checking” often takes
the form of audits that cut across all of the areas highlighted above. Thus, deficiencies at this
stage would typically be evident in the failure to conduct internal assessments and audits.
There are several approaches that an investigator might take to ensure an
effective “checking” phase. For example, the investigator could hire a third-party auditing entity,
or could arrange auditing activities through an experienced colleague or a university oversight
body. Such quality controls could be apparent through several types of evidence that may be
found in various study documents, including internal audit reports, updates or notifications to the
IRB and FDA regarding deviations from protocol, or notes in CRFs suggesting that the forms
have been inspected by a third-party. Of particular interest is how such audits are organized, if in
fact they are organized, in investigator-sponsored trials.
Step 4: Act
In this final stage, “acting” on information about quality deficiencies would be reflected
by evidence that the results of audits or assessments have been followed by actions to correct the
deficiencies, for example, through training, staffing or protocol changes. Evidence for
deficiencies at this stage might be seen for example, in the failure to correct issues identified by
audits. Thus, the success of this final stage may be predicated on the effective
conduct of the preceding “checking” stage. Nevertheless, even without a well-developed set of
activities to audit the conduct of the trial, good performance in this final stage might be detected
as actions taken by staff to identify and correct deficiencies in the protocol or processes, or efforts
to inform regulatory agencies of irregularities, even without a formal audit process. Evidence of
actions taken to prevent further deviations might include, for example, re-education of staff,
memos or reminders about issues of concern, changes in forms and interventions, or staffing
changes.
Following an analysis of their application review documentation in 2010 and early 2011,
FDA noted that “an oversight model premised largely on retrospectively identifying critical errors
may be outmoded and inefficient” (Meeker-O'Connell and Ball 2011). Thus the time for “acting”
may very well be during the “planning” phase of study development.
Summary
From the literature, it is clear that problems with the ethical conduct of clinical research
have led to the promulgation of rules and logistical frameworks for the conduct of clinical trials.
Regulations and guidance outline a system of GCPs to ensure integrity of study data and to ensure
the ethical treatment of study participants. What is also clear from the literature and from other
documents associated with governmental oversight is that compliance with these rules is difficult,
even for sponsor-initiated trials where robust mechanisms exist to ensure correct practice. What
is not so clear from the literature is the way in which investigator-sponsor trials assure
compliance with regulations governing GCPs. A small body of anecdotal literature suggests
that investigator-sponsor trials have more problems in maintaining compliance, but the exact
nature of these problems, and how they are managed, has not been studied systematically.
In this study, we will first construct an assessment tool that can be used to examine several
individual trials according to a consistent framework, allowing us to probe where problems
occur most frequently. We will then examine a subset of investigator-sponsor clinical
trials. From this study, we hope to gain insight into aspects of such trials that might need special
attention when such studies are designed and when investigators of such studies are trained
outside of a supportive industry structure.
3: METHODOLOGY
Using a three-step approach, I examined the manner in which investigators assured
compliance with Good Clinical Practices. First, an evaluative instrument was drafted to guide the
review of key trial documents for evidence of planning and compliance with GCPs. This
instrument used a PDCA framework as its base; within each of its four categories, I evaluated six
defined areas of GCP in which FDA and OHRP often identify noncompliance. Second, a focus
group was convened to give advice on, and validate the appropriateness of, the developed
instrument. The goal of the focus group was to determine whether experts could identify,
through brainstorming, any new elements or dimensions of GCPs within the context of clinical
trials that had not been captured in the literature review and/or warning letters. Ideas generated were then
used to modify the audit instrument. In addition, the focus group provided feedback on the
appropriateness and structure of the individual questions composing the instrument. Third, the
revised final instrument was used to frame the field assessments of a sample of investigator-
sponsor clinical trials.
Audit Tool Development
The initial audit tool was constructed using a PDCA framework and with reference to the six
most common deficiencies found in FDA and OHRP warning letters to principal investigators
with respect to ICH E6 Good Clinical Practices (Appendix A). The referent deficiencies were:
(1) Documentation, (2) Informed Consent, (3) IRB/FDA Considerations, (4) Test Article
Management, (5) Participant Recruitment and (6) Staffing. Specific elements of compliance
within each section were developed from literature review and from considerations of
law/regulation. The audit tool was constructed so as to lead the assessor through a process in
which key clinical trial documents were reviewed systematically and study staff were interviewed
while on site. Each element of compliance was judged to be “compliant”, “partially compliant”,
“noncompliant” or “not applicable” (Appendix B). To standardize the evaluation, the files of two
participants were selected for review from each study. The following definitions were used:
Compliant: every instance of the element in the reviewed participant files, or in other
evidence from interviews with study staff, was found to comply with GCPs.
Partially compliant: at least one instance was found to comply with GCPs and at least one
instance was found to be non-compliant with GCPs.
Noncompliant: no instance of the element was found to comply with GCPs.
Not applicable: the element did not apply to the study under review.
To facilitate comparison of investigators within the experienced and inexperienced groups as well
as between these two groups, each element of compliance judged was assigned points as follows:
Compliant = 3 points
Partially compliant = 2 points
Noncompliant = 1 point
Not Applicable = 0 points
Individual studies were assigned a “study score” equal to the sum of the points assigned to
each element. In studies with one or more "not applicable" designations, the denominator
was adjusted so that the score was not penalized by the absence of these elements.
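The scoring scheme above can be sketched in code. The following is an illustrative reconstruction only, not the author's actual analysis; the names (`POINTS`, `study_score`) are hypothetical, and the percentage normalization reflects one plausible reading of the adjusted denominator.

```python
# Illustrative sketch of the study-score computation described in Chapter 3.
# "Not applicable" elements are dropped from both the numerator and the
# maximum achievable points, so their absence neither helps nor penalizes.

POINTS = {"compliant": 3, "partially compliant": 2,
          "noncompliant": 1, "not applicable": 0}

def study_score(ratings):
    """Return (raw points, percent of maximum achievable points).

    `ratings` is a list of rating strings, one per audited element.
    """
    applicable = [r for r in ratings if r != "not applicable"]
    raw = sum(POINTS[r] for r in applicable)
    max_points = 3 * len(applicable)  # every applicable element fully compliant
    pct = 100.0 * raw / max_points if max_points else 0.0
    return raw, pct
```

For example, a study rated compliant, partially compliant, not applicable, and noncompliant on four elements would earn 6 of a possible 9 points after the "not applicable" element is excluded from the denominator.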
Focus Group
A focus group was convened to query whether the audit tool captured an adequate range
of features and whether the questions were well-crafted. The experts invited to this focus group
were senior administrative and research executives with an established record of engagement in
the conduct or oversight of clinical trials. Six individuals attended the focus group. Four were
investigators and/or coordinators with experience in the conduct of industry and
investigator-sponsor clinical trials; one had experience as the “owner” of an Investigational
New Drug (IND) application; one was experienced in reviewing both industry and
investigator-sponsor clinical trials as a member of a university-based IRB; and one regulatory
scientist had experience in the collection and maintenance of regulatory documents for both
industry and investigator-sponsor clinical trials. Group members were provided with the
purpose and significance of the study, the
literature review and a preliminary draft of the audit tool, one week in advance of the meeting to
allow time for thoughtful review. The meeting began by welcoming those present. Following
introductions, participants were assured anonymity and confidentiality with respect to their
participation. The purpose, structure and significance of the study were reviewed and general
questions were answered. As we began to discuss the audit tool, I asked the group to consider
and share their experiences and the challenges of compliance with each GCP under discussion.
As the principal moderator, I encouraged participation and brainstorming through the use of open
ended questions and requests for elaboration. By introducing sections of the audit tool
sequentially and pausing for discussion between each section, I guided the group through each
audit element and received feedback from the group as to whether the element was written
clearly, was amenable to an objective audit, and was sufficiently important to be included. The
group was asked whether they could identify any significant omissions. Group members shared
insights from their experience and offered thoughtful recommendations to improve the content
validity of the audit tool.
Audit Process
The protocol of this study was approved under expedited review by the USC Health
Sciences IRB (Appendix C). This approval was contingent on first explaining the study to the
principal investigator and study coordinator of each site, and obtaining their verbal consent to
participate. The IRB also gave permission to access protected health data as part of the
assessments required for successful study execution with the provision that no protected health
data would be recorded or removed from the study site. Principal investigators who had active
clinical research programs and who were faculty at one major research university in the Los
Angeles area were approached for inclusion in the study. The study targeted two main groups of
interest: first, investigators who were conducting a clinical trial without previous experience as a
principal investigator in an industry-sponsor trial and second, investigators who had, in the past,
or who were concurrently conducting industry-sponsor and investigator-sponsor trials. The goal
of the study was to include 5 investigators from each group. At this university, there was no
searchable database useful in this regard, and finding an adequate sample of industry-naïve
investigators required searching across different clinical specialties and types of clinical trials. We began
our search with investigators and chairs of departments known to the researcher and expanded
based on referrals from these individuals. This “snowball” approach facilitated inclusion of more
investigators, but had the potential drawback of producing a circle of participants who might not
evenly represent the entire population of potential investigators. Ultimately this process yielded
10 experienced PIs across 13 studies and 7 inexperienced PIs across 8 studies. Recruitment of
experienced PIs was capped at 10 because finding inexperienced PIs for comparison proved
challenging and, therefore, the numbers obtained were small. In many instances, compliance rates
were estimated on data sets of fewer than 10 studies. Absolute numbers as well as percentages
were reported in order that the reader could make an independent assessment of the strength of
the differences.
Investigators were approached for their consent to participate if they met the following
criteria:
1. They were conducting either an investigator-sponsor clinical trial without previous
experience as a principal investigator in an industry-sponsor trial, or they had both an
industry-sponsor and an investigator-sponsor clinical trial, at least one of which was in
progress and the other of which was in progress or had been completed within the last two
years;
2. They were clinical professionals conducting their trials on site at the research university;
3. They were conducting studies approved by the IRB of the university.
Those who met the criteria and who were willing to participate were given two information
letters, one for themselves and a second for their study coordinator (Appendix D). The
information letters explained why they were being asked to participate, the aims and objectives of
the study, how data collected might be used, a promise of confidentiality and anonymity and
contact information for me and my faculty advisor. Following receipt of an information letter,
PIs were asked for their verbal consent to access study documents. The PI and staff were offered
no remuneration but were offered a report of findings at study completion. Results of the audits
of individual studies were held confidential and identified by a code. Findings from particular
audits were shared with principal investigators upon request.
Data Analysis
Twenty-one (21) audits were conducted. Thirteen (13) of the audited studies were led by
principal investigators experienced in both industry-sponsor and investigator-sponsor trials, and
eight (8) by investigators with experience only in investigator-sponsor trials. The primary goal of the analysis was to
identify those areas of function that were carried out well or badly according to specific criteria.
To accomplish this objective, data from each audit were graphed and analyzed using raw numbers
and percentages. Data gathered from interviews with PIs and staff were examined for their
information content and analyzed to see if any trends or common elements appeared. Given the
nature of the questions used in this study and the relatively small number of data points, it was
not possible to ensure that the data had a normal distribution and no further statistical analysis
was done on the results.
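The analysis of raw numbers and percentages described above might look like the following sketch. This is a hypothetical reconstruction, not the author's actual analysis code; the `ratings` data here only mirrors the counts reported for one element in Chapter 4.

```python
from collections import Counter

# Hypothetical per-study ratings for a single audit element, e.g.
# "data sources defined" (18 compliant, 1 partially compliant, 2 noncompliant).
ratings = (["compliant"] * 18 + ["partially compliant"] * 1 +
           ["noncompliant"] * 2)

counts = Counter(ratings)
# Exclude "not applicable" studies from the denominator, as in Chapter 3.
applicable = sum(n for r, n in counts.items() if r != "not applicable")

for rating in ("compliant", "partially compliant", "noncompliant"):
    n = counts.get(rating, 0)
    pct = 100.0 * n / applicable if applicable else 0.0
    print(f"{rating}: {n} of {applicable} studies ({pct:.0f}%)")
```

Reporting both the absolute count and the percentage, as the study does, lets the reader judge the strength of any apparent difference given the small sample.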
4: RESULTS
Characteristics of Reviewed Studies
Twenty-one investigator-sponsored clinical trials were audited using the audit tool
developed specifically for this study (see Appendix B). It proved possible to review a full range
of regulatory documents, including study protocol, informed consent forms, case report forms and
source documents, for two study participants in each study but one, which had not yet enrolled a
participant. Of the 21 studies, 13 were conducted by PIs with recent experience in industry-
sponsored and externally monitored studies (designated as experienced investigators; “EI”) and 8
were conducted by investigators who had never participated in an industry-sponsored, externally
monitored study (designated as not experienced investigators; “NEI”). Five studies were “pilot”
studies, 11 were phase I (first-in-man) and 5 were phase II (first-in-patient) studies. Ten studies
had no funding source, 10 were funded by industry but without industry oversight and one (1)
was funded by the National Institutes of Health (NIH). Twelve studies were open to enrollment, 5
were closed to enrollment and continuing to collect data (ongoing) and 4 studies were closed.
Sixteen studies were concerned with pharmaceutical test articles, two (2) with medical devices,
two (2) tested physical interventions and one (1) was observational in nature (Figure 6). The year
of enrollment of the first participant ranged from 1996 to 2012.
Figure 6: Characteristics of Studies Reviewed
Data and findings were systematized by grouping them into four sections corresponding
to the phases of the PDCA cycle, described separately below. Each section was further broken
down into six subsections corresponding to selected aspects of Good Clinical Practices that were
reviewed for each study (documentation, informed consent, IRB/FDA, test article, recruitment
and staffing). In each subsection, evidence of compliance with GCPs was scored as compliant,
partially compliant or noncompliant as defined in Chapter 3.
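The grouping described above can be represented as a simple nested structure, purely for illustration (the structure and names below are hypothetical, not drawn from the study's actual records):

```python
# Hypothetical layout for organizing audit findings: four PDCA phases,
# each broken into the six GCP subsections reviewed for every study.
PHASES = ("Plan", "Do", "Check", "Act")
SUBSECTIONS = ("documentation", "informed consent", "IRB/FDA",
               "test article", "recruitment", "staffing")

findings = {phase: {sub: [] for sub in SUBSECTIONS} for phase in PHASES}

# Recording one (hypothetical) study's rating for a phase/subsection pair:
findings["Plan"]["documentation"].append(("Study-01", "compliant"))
```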
[Figure 6 comprises four charts summarizing the 21 studies: Study Phase (Pilot, Phase I, Phase II), Funding (None, Industry, Federal), Intervention (Drug, Device, Physical, Observe), and Enrollment Status (Open, Follow-up, Closed).]
Planning
1) Documentation
Planning for documentation was identified by comparing the objectives of the trial, as described
in the protocol and IRB application, with the types of templates and other forms of documentation
that were generated in advance of study start. Overall, strong evidence existed that investigators
planned to document data appropriately even before the start of patient recruitment (Figure 7).
Figure 7: Planning for Documentation
a) Data sources are defined (N = 21)
Of the 21 studies evaluated, 18 studies were compliant, one (1) study was partially compliant and
two (2) studies were noncompliant with respect to defining the specific documents to be used as
source documents that validate recorded data. In the partially compliant study, one document
served as both the source and CRF. In the noncompliant studies, documentation required by the
protocol was missing from the list of source documents.
b) Case Report Form contains space for every required data element (N = 20)
Most investigators (18 of 21 studies) planned the documentation needed to memorialize the data
to be collected by having case report forms, logs or similar templates with preparatory spaces for
the information. Two studies were found to be partially compliant in that data was collected and
telemetered via the study device. No CRF would be expected for this data, but no case report
forms or similar preparatory documents were present for other aspects of data to be collected,
such as demographics of study participants. This criterion did not apply to one study where
source documents were the only documents that were specifically required by the study protocol,
and no plan for the aggregation of study related data was evident in the study design.
c) Source documents were available (N = 21)
Planning for the availability of source documents to assure the traceability of study related data
was judged to be compliant in 18 studies and partially compliant in 3 studies. In compliant
studies, the coordinator or the PI was able to produce or describe the specific location of all
source documents. These documents were frequently found in a combination of paper and
electronic files. Three studies were judged to be partially compliant because study staff were
unable to produce, or to describe in interview, the location of source documents for some data
elements.
d) Method of audit of study documents was defined (N = 21)
Advance planning to audit the documentation associated with collected data was less often in
evidence compared with the other elements considered under this “Planning for Data
Documentation” subsection. Of the 21 studies, 13 studies were judged to be compliant because
some form of review body was identified in the experimental protocol or the IRB submission.
Investigators used one of three approaches when establishing a review body for their study:
reviewers internal to the study (3/13), reviewers external to the study and internal to the
department (9/13) or reviewers internal to the university but external to the department and the
study (1/13). Compliance was achieved in most studies through a plan to engage a data safety
monitoring board or departmental review committee. In two partially compliant studies, the PI
was required to submit to audit by funding sponsors for significant adverse events but not other
elements. Additionally, interviews with staff of these partially compliant studies revealed that the
investigator was working on an audit plan that had yet to be fully developed. Six studies showed
no evidence of compliance with regard to advance planning for data audit.
2) Informed Consent
Compliance with respect to the adequacy of informed consent was found to be mixed, with
greater compliance related to adequacy of IRB communication and consideration of vulnerable
populations than with the presence of required elements of the informed consent form and
logistics for discussion of the informed consent (Figure 8).
Figure 8: Planning for Informed Consent
a) Contents of the informed consent form are defined (N = 21)
Planning for elements to be included in the informed consent form was judged to be compliant in
9 of 21 studies and partially compliant in 12 studies. In compliant studies, elements of the
informed consent form, relating both to IRB required statements and to elements particular to the
study were clearly outlined in the protocols or in the IRB application and associated documents.
In 12 partially compliant studies, IRB required statements were addressed, but elements specific
to the particular study were not.
b) Who, what, when, where of informed consent process is defined (N = 21)
Overall, planning to assure the adequacy of the process of informed consent could be identified,
and thus was judged to be compliant, in only three (3) studies. The compliant studies detailed
who could obtain consent, what would be discussed during conversation with a potential study
participant, and the location and timing of the consent discussion. Eight studies were partially
compliant in that they showed evidence of planning related to the “who” and/or “where” of the
consent process but did not consider the other elements. Ten studies showed no evidence of
planning related to the process of obtaining informed consent other than what was evident in the
consent form itself.
c) Plan to report significant new findings to the IRB (N = 21)
Planning to report significant new findings to the IRB/FDA could usually be identified by a
statement to that effect in the protocol or IRB submission. Nineteen studies were found
compliant, with plans in place that were consistent with IRB/FDA requirements. The other two
studies were judged to be partially compliant because no reporting plan was found in the
protocols even though the investigators indicated the intent to report in the IRB submission.
d) Provisions for vulnerable populations if included (N = 21)
Consideration of the need to consent or exclude vulnerable or special populations was considered
to be compliant in 20 studies. These studies explicitly identified populations to be included
and/or to be excluded. For example, one study described the rationale for excluding persons who
could not speak English and several studies defined rationale for excluding women of child
bearing potential. In one noncompliant study, no consideration of vulnerable populations was
evident in the study protocol.
3) IRB/FDA
Planning for required interactions with the IRB/FDA was found in protocols and in IRB and/or
IND/IDE submissions. Nearly all protocols defined the content and timing of required
interactions with the IRB. In contrast, documentation suggesting that the need for an IND/IDE
had been considered, and planning for consistency between the informed consent form and the
contract with the funding sponsor, were often not apparent (Figure 9).
Figure 9: Planning for IRB/FDA Communications
a) Protocol defines required interactions with IRB/FDA (N = 21)
In 19 of 21 studies, the protocol defined required interactions, such as the reporting of adverse
events, protocol deviations, changes to the study team and protocol revisions. In two (2) partially
compliant studies, planning for interactions with the IRB was found only in general terms in
departmental SOPs and/or in the IRB application and associated documents, but not in study
specific protocols.
b) Protocol defines timing of interactions with IRB/FDA (N = 21)
Timing of the defined interactions went hand-in-hand with the identification of such interactions.
The 19 compliant studies that defined the interactions also defined their timing. The two (2)
studies partially compliant for defining interactions were not specific with regard to their timing.
c) Documentation considers need for IND/IDE (N = 18)
This element applied to 18 of the 21 studies. In three (3) studies, the test article was not a drug or device for which an
IND/IDE was expected. Of the 18 studies for which an IND/IDE might have been needed, four
were under an IND or IDE. This by itself was considered to be evidence of compliance with
respect to this element in the planning phase, since a required part of the submission is a complete
description of FDA and IRB interactions. In 6 studies, documentation was present to show that
the possible need for an IND/IDE was considered but such approval was found to be unnecessary.
The remaining 8 studies were judged to be noncompliant because no evidence was found that the
need for an IND/IDE had been considered.
d) Contract and consent form were consistent (N = 11)
Eleven of the 21 studies had external funding either from industry or a government agency. The
remaining studies were funded internally and no contract was expected. Planning to ensure that
the language in the sponsor agreement was consistent with the language in the informed consent
document was evident in 4 of 11 studies and noncompliant in 7 studies. At this university,
formal audit of the consistency between the informed consent form and the contract relating to
the trial was not required until 2009. Studies opened prior to 2009 were missing this
documentation; those opened after 2009 had it.
4) Test Article
Planning related to tracking of the test article could be inferred from the manner in which
logs and CRFs were designed or from the presence of SOPs that provided guidance on test article
management. In most cases the protocol did not address tracking of the test article. Plans to train
the person responsible for delivering the test article were considered to be evident if the
individual’s professional licensure was appropriate for this role or documentation was present to
show that study specific training had been completed (Figure 10).
Figure 10: Planning for the Test Article
a) Protocol (SOP) defines documentation required related to test article (N = 18)
In 5 compliant studies, it was evident the investigator had thought through the “lifecycle” of the
test article, and had designed a tracking system to include dates and times of receipt,
administration, and where necessary, disposal of the test article. For 8 of the partially compliant
studies, the PI had contracted with a third party to manage the test article. In these studies the
protocol simply referenced the SOPs of the third party. In the remaining two (2) partially
compliant studies, evidence was found that the investigator planned to track some, but not all,
elements throughout the life cycle of the test article. For example, the administration of the test
article was planned, but no evidence could be found for planning related to the disposition of the
article in case it was not administered. In three (3) studies there was no evidence of planning to
track the test article at any point in time. This element was not applicable for three (3) studies;
one study was observational in nature and two studies had interventions that did not involve a test
article.
b) Protocol defines source and disposition of the test article (N = 18)
Planning related to the acquisition and disposition of the test article was not well defined in most
protocols. Only 5 studies defined the source from which the test article was to be obtained and the
ultimate disposition of the product, by, for example, consumption by the study participant, return
to supplier, or disposal according to a defined method. In 10 partially compliant studies, there
were SOPs that defined management of test articles generally but were not specific to any given
study. Plans found in protocols usually included the source but not the disposition of test articles
that were not administered. In 3 studies evidence of planning for the source or disposition of the
test article could not be found. This element was not applicable for 3 studies; one study was
observational in nature and two had interventions that did not involve a test article.
c) Training was planned for person to administer test article (N = 18)
Planning to train persons to administer the test article was compliant in 15 of 18 studies. In
several studies, the person to administer the test article was deemed competent based on his/her
professional licensure and experience. In other studies, there was extensive training, including
tests of inter-rater reliability in one study with a behavioral evaluation. In three (3) partially
compliant studies, no formal training plan was described but the person was trained “on the job”.
This element was not applicable for three (3) studies; one study was observational in nature and
two studies had interventions that did not involve a test article.
5) Recruitment
Most investigators planned to enroll participants consistent with study objectives.
Inclusion/exclusion criteria and planned recruitment methodologies were well described in all
studies. Planning for the transition to other care at the end of the study was, however, identified
less commonly (Figure 11).
Figure 11: Planning for Recruitment
a) Protocol defines inclusion/exclusion criteria (N = 21)
All studies were compliant in defining criteria by which a potential participant would be eligible
for inclusion in or exclusion from the study. These criteria were described both in study
protocols and IRB submissions.
b) Protocol defines recruitment methods (N = 21)
All studies were compliant in defining recruitment methods. Studies where recruitment materials
had been developed, for example using letters and flyers, had these materials reviewed and
approved by the IRB as evidenced by the IRB approval “stamp” on the documents.
c) Care transition when study ends or participant no longer participates (N = 18)
Only 3 studies included a transitional plan for the provision of care at end of study. Generally,
these plans simply referred the study participant back to their primary care provider. In 4 partially
compliant studies, coordinators/PIs were able to articulate a plan for such transition, but the plan
was not documented in study records. In 11 studies, there was no evidence of planning for any
form of post-study hand-off to a primary care provider. Coordinators and PIs were interviewed in
this regard; most assumed that the participant would seek follow up care from their primary care
provider independently. This aspect of planning assessment did not apply to three (3) studies in
which participants were continuously under the care of their primary care providers for the
duration of their participation in the study.
6) Staffing
Planning for staffing was, overall, the least compliant of all reviewed aspects of GCPs
(Figure 12).
Figure 12: Planning for Staffing
a) Qualification/credentials are defined (N = 21)
With the exception of 4 studies, the protocol did not fully describe the qualifications/credentials
required of staff. The 4 compliant studies described IRB required credentials related to protection
and privacy of human subjects of research, as well as to study specific credentials. In 10 of the
17 partially compliant studies, only IRB required credentials were addressed. In addition to
specifying IRB required credentials, the remaining 7 partially compliant studies also referred to
SOPs that defined, in general terms, the qualifications and credentials required of staff.
b) Required training is defined (N = 19)
Only two (2) studies fully defined the training required for staff to assume the responsibilities for
study related activities delegated to them. In 15 partially compliant studies, planning to meet IRB
required training was evident, but a plan for training related to the needs of the study was not. In
two (2) noncompliant studies, no evidence was found that training of study staff was planned.
The experience of study coordinators ranged from none to more than 15 years. At one extreme,
the study coordinator was the department secretary and at the other, the coordinator held a
certificate in clinical trials design and management from a major university. Many coordinators
were foreign medical graduates unlicensed to practice in the United States. This criterion did not
apply to two (2) studies. In one study, the PI collected all study data and in the other, staff did not
need to perform activities outside the scope of their usual practice.
c) Specific processes to be delegated by PI to others are defined (N = 20)
Only 4 studies defined specific study activities that were to be delegated to others by the PI. In 8
partially compliant studies, SOPs defined, in general terms, what could be delegated and to
whom, but these were not study specific. In two (2) additional partially compliant studies, the
coordinator made certain that the activity did not require an MD license before assigning the
activity to a member of the study staff and in the remaining study the protocol defined the
required professional licensure but not the activities to be delegated. In 5 studies there was no
evidence of planning for delegation. One coordinator stated that it was unnecessary to include a
delegation plan in the protocol because it was understood that any activity for which a study staff
member was licensed could be delegated to that person. In one (1) study this criterion was not
applicable because the PI was the only member of the study staff.
Doing
1) Documentation
Evidence that documentation was “done” as planned was identified by comparing activities
described in experimental protocols and IRB applications with source documents and completed
documents such as CRFs. Also considered were the ability to respect and maintain confidentiality
and privacy of documents and integrity of study data (Figure 13).
Figure 13: Carrying Out Documentation Requirements
a) Data source documents match data recorded on CRFs (N = 19)
Of the 19 studies for which this element was applicable, 16 were compliant and three (3) were
noncompliant with respect to consistency between source documents and CRFs. In one
noncompliant study, all data entry spaces on the electronic CRF were completed by entering the
number “99”. In another, data was recorded in a running log as it was obtained. The log was organized by date such that data from an individual study participant
was together on one page and the pages were ordered consecutively according to the time that
participants were seen on that day. The template of the log did not include spaces for all data
elements required by the protocol. In the third noncompliant study, the only documentation to be
found was an informed consent form for each participant that had the identification number of the
test article administered to that participant clipped to it. Of the two (2) studies to which this
element did not apply, one study had yet to enroll a participant. Source documentation for the
second study was stored off-site and unavailable for review.
b) CRFs are completed for each study visit (N = 20)
Fifteen studies were judged compliant with regard to completion of CRFs for each study visit
because every data entry space was completed on all CRFs. As mentioned above, one study kept
a running log in lieu of individual CRFs. Because each data entry space on the log was
completed, the study was judged to be compliant. In the 5 noncompliant studies, data entry spaces
were frequently left blank. This element did not apply to one study that had yet to enroll a
participant.
c) Source documents are kept confidential (N = 21)
In all 21 studies, source documents, as well as CRFs, regulatory documents, study
correspondence and informed consent forms were kept in protected physical locations. Electronic
documents were housed on password protected computers and within password protected data
repositories. Computers were found to be locked in private offices and laptop computers were
attended or in locked desk drawers if held within a shared office. Paper documents were scanned
into a protected, HIPAA compliant, electronic medical record with the original documents housed
in an accredited medical records department, in an accredited off-site storage facility or in locked
files in the office of the investigator. All study staff had documented evidence that HIPAA
compliance training had been completed.
d) Audits are documented (N = 21)
Of the four indices in this “Documentation” set, documentation of study audits was most often
out of compliance. Only three (3) studies were audited either internally or externally. Nine
studies were judged to be partially compliant because they had a mechanism in place for audit,
but were audited in rotation with more than a hundred other studies and came up in the rotation
queue no more frequently than annually. In fact, audits often occurred only in response to a regulatory or operational concern with the study. The remaining 9 studies were judged to be
noncompliant because they had no mechanism in place for an audit. Several PIs commented that
they were in the process of planning to conduct audits. One PI stated that due to the
observational nature of the study, no audits were necessary.
2) Informed Consent
Compliance with the process of obtaining informed consent was judged primarily by
reviewing the executed informed consent forms of individual study participants. Other
supporting materials such as the availability of consent forms translated into the primary language
of a given participant were also considered (Figure 14).
Figure 14: Carrying Out Informed Consent Requirements
a) Consent form most recently approved by IRB is in use (N = 21)
In all 21 studies, the current IRB approved informed consent form was in use as evidenced by the
printed date of IRB approval and date of expiration on the informed consent form. It was also
evident, from review of IRB correspondence that, for all studies, PIs had kept the IRB informed
of study findings or logistics that required revision of the informed consent form.
b) Informed consent is obtained prior to first study activity (N = 21)
For all 21 studies, informed consent appeared to be obtained prior to the first study activities.
The primary source of evidence was informed consent forms that were signed and dated by the
study participant prior to the date and time of their first study related activity.
c) Consent is signed and dated by the participant and PI (or authorized person) (N = 20)
In 15 of the 20 studies to which this element applied, informed consent forms were signed, dated and timed by both the study
subject and the PI (or IRB approved PI designee). In 5 studies missing signatures of the PI,
missing dates and/or times by the PI and/or study participant were noted. These studies were
judged to be noncompliant. This element was not applicable to one study that had yet to enroll a
participant.
d) Consent process considers special needs of vulnerable populations (N = 21)
In many studies, review of inclusion/exclusion criteria revealed justifications for enrolling or
refraining from enrolling vulnerable populations. No evidence was found in these reviews that
vulnerable subjects were enrolled or managed in a way that contravened expected practice. The
most common populations to be included were participants whose primary language was not
English. In these studies, informed consent in the participant’s primary language and/or evidence
of verbal translation was found. One study included pregnant women; however, this was a non-
interventional study where such participation was not contraindicated.
3) IRB/FDA
Adherence to GCPs related to communication with IRB/FDA was evident from the IRB/FDA
submissions. All investigators were compliant with regard to securing IRB approvals and IND
approvals, and assuring that serious adverse events and protocol revisions were reported.
Investigators were less compliant with regard to reporting protocol deviations (Figure 15).
Figure 15: Carrying Out IRB/FDA Requirements
a) Documentation of IRB approval was present (N = 21)
All 21 studies were in compliance with IRB approval prior to study start as evidenced by the fact
that signatures on informed consent forms and dates of initiation of study activities for individual
participants occurred after the date on which the study was approved by the IRB.
b) Appropriate IND/IDE approval was documented (N = 4)
The PI held an IND/IDE in only 4 studies. For these studies, supporting correspondence between
FDA and the PI was found, and a letter to permit the trial was present in the file. Of the 17
studies for which this was not applicable, 9 PIs had documented correspondence from FDA that
waived the requirement for an IND/IDE; in a further 6 studies, the PI indicated on the IRB
submission that they were not considered to be the investigator-sponsor and the IRB approved the
study with this stipulation. This criterion was not applicable in the remaining three (3) studies
that had no investigational drug or device.
c) Serious adverse events were reported to IRB (N = 17)
Seventeen of the 21 studies had documented serious adverse events (SAEs), either internal or
external to the study site. These SAEs were typically reported to the IRB by letter from the PI to
the IRB at the time that the event became known to the investigator. SAEs were also noted on
annual reports to the IRB. Studies that had oversight from a centralized data safety monitoring
board first reported the SAE to the DSMB that then reported the information to the IRB.
Reporting in these instances could be found in minutes of DSMB meetings. This element did not
apply to 4 studies that had no serious adverse events to report.
d) Protocol deviations were reported to the IRB (N = 19)
When asked about protocol deviations, many study coordinators and PIs considered only
“significant” variations from the study timeline or study-related activities as a deviation. For
example, few coordinators considered changing a study visit date that should have occurred
between day 8 and 10 to day 11 as a protocol deviation. They did, however, consider missing an
entire study visit as a deviation. Consequently, only 6 studies reported all protocol deviations to
the IRB. In 6 partially compliant studies, some protocol deviations were reported to a DSMB.
For these studies, annual reports to the IRB did not reiterate these deviations. It was noted that
the DSMB did not report deviations to the IRB on behalf of the PIs. Seven studies were
noncompliant as evidenced by blank spaces in response to the question about protocol deviations
on IRB annual reports. This criterion did not apply to two (2) studies, one with no enrollment to
date and the other with a short history and no deviations to report.
e) Protocol revisions were approved by IRB/FDA (N = 20)
Compliance with reporting protocol revisions to the IRB appeared to be excellent, as evidenced
by documented communication between the PI and IRB/FDA with regard to protocol revisions in
17 studies. Three studies were judged to be noncompliant. In one study the protocol indicated
that the PI would be blinded as to whether the study participant received the test article or active
control article. However, the PI was planning to prepare the test article personally because its
preparation by a third party was considered to be too costly. This intention had not been reported
to the IRB. In the second and third noncompliant studies, informed consent was being obtained
and study data was collected by individuals not identified to the IRB. This criterion did not apply
to one (1) newly initiated study that had no revisions from the originally approved protocol.
4) Test Article
Compliance with GCPs related to test articles was judged by seeking evidence of tracking,
maintaining limited access and assuring administration by qualified individuals. Strong evidence
existed that researchers were at least partially compliant with respect to all of these criteria
(Figure 16).
Figure 16: Carrying Out Test Article Requirements
a) Form/logs recorded data related to test article tracking (N = 18)
Fourteen studies were judged to be compliant. Eight of these studies engaged an experienced
third party to control and track test articles; the 6 remaining compliant studies used tracking logs
effectively to maintain control of test articles from receipt to ultimate disposition. Four studies
were judged to be partially compliant. Two studies affixed a sticker, documenting the test article
control number, to individual informed consent forms. This facilitated the identification of test
articles administered to each participant, but did not allow for tracking the receipt of articles by
the participant or the disposition of unused articles. Two other partially compliant studies tracked
the administration of test articles on the CRFs of individual participants. Again, this allowed the
specific test article to be identified and matched to each participant, but did not confirm the
receipt of articles or disposition of unused articles. This criterion did not apply to three (3)
studies in which there were no test articles to track.
b) Test article is stored in a locked, limited access area (where required) (N = 17)
All 17 studies to which this criterion applied were judged to be compliant. Test articles for the 8
studies that utilized a third party were found locked in an environment controlled by the third
party. PIs and/or study coordinators were required to make arrangements with the third party to
access the test articles. In the remaining 9 studies, test articles were found to be stored in locked
rooms, cupboards and/or drawers to which only the PI and study coordinator had access. This
criterion did not apply to 4 studies. In three studies there was no test article to control and in the
fourth study, the participant purchased and stored the test article in their home.
c) Qualified/trained staff administered the test article (N = 18)
In 16 studies, staff were judged to be qualified/trained to administer the test article from the
identification of documented didactic and/or practical education and training specific to the study
and, where required, by evidence of professional licensure. This included the study in which
participants self-administered the test article because the participant had been educated regarding
the article and had been trained to self-administer it. Two studies were judged to be partially
compliant, because unlicensed staff were minimally trained to apply a medical device. This
criterion did not apply in three (3) studies, because no test article was administered.
5) Recruitment
Recruitment and transition of subjects were managed throughout the study. Good compliance with rules for the recruitment of participants according to prespecified criteria
was evident by comparing characteristics of those who were enrolled against inclusion/exclusion
criteria as defined in study protocols and IRB submissions. In contrast, only modest evidence
was seen for transition of care at end of the study (Figure 17).
Figure 17: Carrying Out Recruitment Requirements
a) Subjects meet inclusion/exclusion criteria (N = 20)
All 20 studies to which this criterion applied were judged to be compliant. Evidence of
compliance was found by comparing characteristics of individuals enrolled with
inclusion/exclusion criteria defined in study protocols and IRB submissions. In one study, the PI
consented every individual receiving standard care in a particular care setting because no criteria
restricted the recruitment of individuals in that setting. This study was assigned a status of “not
applicable” because many individuals had been consented but none had yet been enrolled.
b) Recruitment methods were consistent with experimental plan (N = 21)
All 21 studies were judged to have used methods of recruitment appropriate to methods defined
in study protocols and IRB submissions. The most frequent method of recruitment was an in-
person, one-on-one, explanation and invitation. Other common methods included posting flyers
and telephone solicitations. All printed materials reviewed were noted to be “stamped” with
current IRB approval and expiration dates.
c) Transition of care was assured (N = 18)
Effective transfer of care at the end of the study was evident in only 7 studies.
In these 7 studies, evidence consisted of explanations by staff regarding their actions to ensure
that study participants received follow up care as required post-study. In all 18 studies, protocols
and IRB submissions defined requirements for actions when a study participant ended the study,
either because he or she no longer met inclusion/exclusion criteria or voluntarily ended
participation. However, no study documented the manner in which the participant would be
transitioned, when necessary, to primary care at end of study. Most PIs and coordinators stated
that they assumed participants would seek care on their own. This criterion did not apply to three
(3) studies in which the participants remained in the care of their primary providers throughout
the course of their participation in the study.
6) Staffing
Overall, staffing plans were carried out as defined in study protocols and IRB submissions.
Only two studies were judged to be partially compliant and with only one of three indices (Figure
18).
Figure 18: Carrying Out Staffing Requirements
a) PI, sub-investigators and study staff identified to IRB/FDA (N = 21)
In 19 of 21 studies, personnel associated with the study, including, but not limited to, principal
and sub-investigators, study coordinators, data input and analysis staff, quality control staff, and
others directly involved in study implementation and/or operations were identified. It was noted,
in these compliant studies, that updates were provided to the IRB as these individuals changed
roles or left the study and as new individuals became involved. In studies that used a central
staffing pool, every member of the central pool was identified because each had the potential of
being involved in one or more aspects of any given study. In two studies, sub-investigators, who
rotated through the study briefly and on a frequent basis, and who were assigned the
responsibility of obtaining informed consent, were not identified in IRB submissions.
b) Participants are educated by qualified personnel (N = 21)
In all 21 studies, the risks and benefits of participating in the study were
explained to participants by principal investigators or sub-investigators. Study coordinators
familiar with the details of the informed consent form and process as well as the study protocol
reinforced this education and ensured questions were answered. In 19 studies, there was
documentation that individuals interacting with study participants had been trained in the later aspects of the study through which they were responsible for guiding participants. In two (2)
studies, judged to be partially compliant, individuals with minimal training were responsible for
aspects of participant education that coincided with application of the study device.
c) Staff only perform duties for which they have been delegated and trained (N = 21)
Without exception, staff from all 21 studies stated during interviews that they performed only
those duties for which they were qualified and trained. Each staff member indicated confidence
that they could carry out their assignments and, where applicable, staff stated that they were
practicing within the scope of their professional licensure.
Checking
1) Documentation
Evidence that some form of checking (or monitoring) to ensure that outcome measures and
safety data were recorded accurately and completely was identified by the presence of a sign-off
on CRFs and safety data source documents by the PI (or treating provider). Typically, such
evidence was limited. Further, evidence that the DSMB met and provided feedback was
generally lacking (Figure 19).
Figure 19: Checking for Documentation Compliance
a) Treating/PI signature is present on CRFs (N = 18)
Signatures verifying that data were reviewed by the PI or treating provider were present on CRFs
in only 7 of 18 studies to which this criterion was applicable. One study coordinator stated that
she “made it her mission” to ensure that documents were reviewed and signed by the PI. In 13
noncompliant studies, evidence of CRF review could not be found. Several coordinators
commented that CRFs were reviewed but it was not the routine of the PI to verify those reviews
by signature. This element did not apply to three (3) studies, one in which only source documents
were kept and two (2) in which data was submitted telemetrically to the funding sponsor via the
test article and no CRFs were maintained for other data (refer to 4.2.1.2 for detail).
b) Staff can articulate an audit process consistent with protocol requirements (N = 21)
As described more fully in 4.2.1.4, most studies had established some form of audit process.
During interview, staff from all 21 studies were knowledgeable about the presence or absence of
established processes. In many instances where no audit process existed, staff were able to speak
to a plan that was developed or in the process of development but had yet to be operationalized.
c) Treating/PI signs safety data results (lab, imaging, other) (N = 18)
Evidence that the PI or treating provider reviewed adverse events reports and other safety linked
data results could be found in only 4 studies. In most other studies, coordinators indicated that
safety data was reviewed, but it was not in the routine of the PI to provide verifying signatures.
In 14 studies that lacked evidence of review, many coordinators stated that they personally
reviewed the data and brought results of concern to the attention of the PI/treating provider. This
element did not apply to three (3) studies. In two, no lab or imaging safety data was collected
under the existing protocol and in the other, source documents were off site and unavailable for
review.
d) Data safety monitoring board meetings were held (N = 21)
Meetings of a data safety monitoring board or similar review bodies were held routinely in only 6
studies; one (1) study was reviewed monthly and 5 others no less than quarterly. Eleven studies
were judged partially compliant because these studies were reviewed in rotation with more than
100 other studies. In these instances, studies came up in the rotation no more frequently than
annually and were usually only reviewed when safety or operational issues were known or
suspected. Four studies were judged to be noncompliant. Three studies had no plan for DSMB
review; in one study the coordinator stated they were working on a plan. A PI of one of the
noncompliant studies stated that no DSMB was necessary because the study was observational in
nature.
2) Informed Consent
Checking to ensure informed consent forms and processes were operationally consistent with
study protocols and IRB submissions was assessed by determining if current forms were in use,
audit processes were in place and staff carried out their roles accordingly. Strong evidence of
compliance existed for three of four indices used (Figure 20).
Figure 20: Checking for Informed Consent Compliance
a) Consent document has been assessed by others to match protocol requirements (N = 21)
For all 21 studies, consent documents were reviewed and approved by the IRB based on the
committee’s assessment of the accuracy and completeness of the contents of consent forms
compared to study protocols. Nine studies underwent preliminary scrutiny by a departmental
review committee prior to submission to the IRB.
b) Staff verbalizes consent process consistent with protocol requirements (N = 21)
For all 21 studies, coordinators described the “who”, “what”, “when” and “where” of the consent
process, often in greater detail than was described in protocols and IRB submissions.
Additionally, it was clear that coordinators were knowledgeable in complex processes related to
consent, such as written and verbal translations and consideration of special needs for vulnerable
populations.
c) Current consent was in use (N = 21)
For all 21 studies, current consent forms were in use. Consent forms were judged to be current
when the date of execution was after the IRB approval date and prior to the IRB expiration date
“stamped” on the consent form.
d) Consent process was observed and found compliant (N = 21)
Only rarely did oversight activities include an observation of the real-time interaction between
study team members providing information and study participants giving consent. As a surrogate
measure, consent processes were judged to be compliant in 13 studies where there was some form
of monitoring of the study. Conversely, 8 studies, where no audit or monitoring processes could
be found, were judged to be noncompliant.
3) IRB/FDA
Compliance with checking for follow-through on the substance and timing of required IRB/FDA interactions was mixed. While staff could explain compliant
processes that they followed, processes were not always evident (Figure 21).
Figure 21: Checking for IRB/FDA Communication Compliance
a) Staff verbalizes process consistent with protocol requirements (N = 21)
During interviews, staff from all 21 studies could describe required interactions with the
IRB/FDA that were consistent with protocols and IRB requirements. Correspondence between
investigators and the IRB/FDA was generally well documented and served as evidence that
required reports had been sent and received.
b) Monitoring has occurred (N = 20)
In 13 studies, some form of monitoring had occurred. Methods of monitoring ranged from
DSMB oversight to informal systems in which coordinators “double checked” their own work
product. In one instance, the study coordinator checked the work of data collectors, but only to
assure that data collectors made an entry into each data-entry field. Two studies were judged to
be partially compliant because the funding sponsor monitored the study for SAEs but not for
other aspects of interaction with the IRB/FDA. In 5 noncompliant studies no evidence was found
that any form of monitoring had occurred. This element was not applicable to one (1) newly
initiated study that had yet to enroll a participant.
c) Annual reports to the IRB (N = 16)
Annual reports to the IRB were present in 16 studies. These reports were developed using the
IRB’s standard format and included summaries of elements such as enrollment history, adverse
events, protocol revisions, changes in study personnel, and significant new findings. Each report
was accompanied by documentation of receipt and acceptance by the IRB. This element did not
apply to 5 studies that had not yet reached the one-year mark.
d) IND/IDE safety reports (when required) (N = 4)
Of the 4 studies to which this element applied, only two (2) studies were judged to be compliant.
In these studies, copies of IND/IDE safety reports were found kept with other regulatory
documents. The remaining two (2) studies were judged to be partially compliant because
coordinators stated that reports had been filed but they could not locate copies of the reports or
other substantiating evidence. This element was not applicable to 17 studies in which the PI did
not hold an IND/IDE.
4) Test Article
Management of test articles, including administration, accountability, and tracking from
receipt to final disposition, was “checked” according to study protocols through audits of logs,
CRFs and interviews with PIs and staff (Figure 22).
Figure 22: Checking for Test Article Compliance
a) Forms/logs are complete with evidence of second party audit (N = 17)
In 13 studies records existed to show that handling of test articles had been audited. Studies that
used a third party to control test articles had mechanisms in place to reconcile inventory with
distribution to study participants or the return/disposal of articles. In 4 partially compliant
studies, logs/forms kept at the study site were found to be completed but no evidence could be
found for reconciliation or audit. This element did not apply to 4 studies, three (3) in which there
were no test articles and one (1) that had no enrollment to date.
[Bar chart: number of studies (0–21) judged compliant, partially compliant, noncompliant, or not applicable for each index: logs complete and audited; test articles accounted for; administration by qualified staff; waste observed.]
b) All test articles were tracked (N = 18)
In 14 studies, test articles were tracked and in these studies it was possible to determine the
ultimate disposition of the test articles. In two (2) partially compliant studies, inventory was not
tracked systematically and this made it difficult to determine the whereabouts of all articles. Two
studies were judged to be noncompliant because study staff had been given one or more test
articles for their personal use. These articles did not appear on tracking logs and a comprehensive
inventory of articles was not kept. This element did not apply to three (3) studies where no test
articles were used.
c) Administration by qualified staff (N = 20)
Education and training of staff to administer test articles was evident in 18 studies. Generally this
took the form of certificates, copies of professional licenses and/or printouts indicating program
completions. Two studies were judged to be partially compliant because nonmedical staff with
limited training administered test articles that were medical in nature. This element did not apply
to 4 studies, three (3) where there were no test articles and one (1) that had yet to enroll a
participant.
d) Drug/device is destroyed appropriately (when applicable) (N = 14)
In 12 compliant studies coordinators were able to describe a process that included verification by
others of destruction of unused/returned inventory. Supporting documentation was particularly
thorough in pharmaceutical studies. One study was judged partially compliant because the
coordinator was uncertain how to manage expired test articles; these nonetheless were kept in a
separate inventory from in-date articles while the coordinator awaited direction. This element
did not apply to 7 studies: three (3) had no test articles, two (2) required expired/unused articles
to be returned to the funding sponsor or hospital inventory, one (1) had no wasted articles to
date, and one (1) required that study participants purchase study articles over the counter, with
no need to inventory or track them.
5) Recruitment
Verification that recruitment activities during the “doing” phase were carried out in
accordance with requirements of study protocols and IRB approvals was accomplished by
checking that all contacts with potential participants were tracked and those enrolled met
inclusion/exclusion criteria. The three indices used to judge compliance are described in Figure
23 and below.
Figure 23: Checking for Compliance with Recruitment
a) Independent verification of inclusion/exclusion compliance (N = 20)
Thirteen studies were judged to be compliant because some form of study audit was carried out
that included a review of adherence with inclusion/exclusion criteria. In 7 studies judged
noncompliant, no evidence could be found that files were checked for adherence with
inclusion/exclusion criteria. This criterion was not applicable in one (1) study that had yet to
enroll a participant.
[Bar chart: number of studies (0–21) judged compliant, partially compliant, noncompliant, or not applicable for each index: verification of inclusion/exclusion criteria; log documents all contacts; telephonic verification of eligibility.]
b) Screening log documented all contacted individuals (N = 20)
Screening/contact logs were judged to be compliant in 18 studies in which PIs/coordinators stated
that contact with every potential participant was recorded. Two studies were judged to be
noncompliant because participants were entered onto a log after they were enrolled in the study.
Thus individuals were not recorded if they did not enter the trial. This criterion did not apply to
one study because this was a secondary site and recruitment and enrollment was performed by the
primary site.
c) Telephonic system was used to verify eligibility (N = 20)
This measure was used as a proxy to judge the sophistication of sites. In 20 sites, potential
participants were considered eligible/ineligible based on the PI’s personal assessment of the
individual compared to inclusion/exclusion criteria. Some studies used a checklist to ensure all
criteria were considered during the assessment. This element did not apply to one (1) study that
did not enroll participants at the site.
6) Staffing
Checking to ensure that staff performed duties consistent with their qualifications and
professional credentials was assessed by interviewing staff and inspecting training records such as
certificates of completion. The three indices used to judge compliance are described in Figure 24
and below.
Figure 24: Checking for Staffing Compliance
a) Training defined by protocol was evident (N = 21)
IRB-required training in HIPAA privacy laws and protection of human subjects was documented
for staff in all 21 studies. Sixteen studies were judged to be compliant when study-specific
training was present as well. Five studies were judged to be partially compliant because
documentation was not on file to substantiate study-specific training, even though such training
was asserted to be in place when staff were interviewed.
b) Staff shows study-specific knowledge consistent with protocol (N = 21)
In all 21 studies, study coordinators appeared during interviews to understand the investigational
protocol and stated that they felt qualified through formal or informal training to carry out their
delegated responsibilities in accordance with protocol requirements.
c) Duties delegated to staff are consistent with the protocol (N = 21)
Staff in all 21 studies stated that their daily activities were consistent with their understanding of
the roles delegated to them. All staff asserted that they were practicing within the scope of their
education, training and professional licensure.
[Bar chart: number of studies (0–21) judged compliant, partially compliant, noncompliant, or not applicable for each index: evidence of required training; staff verbalize protocol requirements; staff verbalize delegated duties.]
Acting
The fourth and last major element of the PDCA cycle, “acting”, was more difficult to
assess than the other three areas, in part because it was predicated on preceding activities, such
as monitoring, that were not always conducted. Thus, the evaluation could not easily be
subdivided into specific subsections, and documentation of such activities was limited. A single
index was therefore used to evaluate this element, based on whether at least some actions were
taken to ensure timely follow-up when deviations were observed and whether documentation
showed little or no evidence of repeated deviations after one or more initial deviations had been
identified.
The majority of studies (19/21) were judged to be compliant as measured by findings that the
PIs appeared to be genuinely aware of the need to correct deficiencies once they were identified.
That PIs took action to correct deficiencies was determined through interviews of PIs and
coordinators, identification of documented revisions in the protocol and informed consent forms
submitted to the IRB, and the absence of repeated deficiencies. Principal investigators and staff
were asked to describe any corrective actions taken when deficiencies were identified. Below are
examples given by PIs and staff.
- As described in the previous section titled “Doing”, one coordinator had been found to
complete CRFs by entering “99” into each space; this coordinator received remedial
training regarding the importance of completing CRFs accurately and in a timely manner.
- In studies with no formal audit plan, PIs and coordinators described planning activities
for audits that were in progress, and most had a timeline for implementation.
- One PI stated that she realized a need to refer participants formally to follow-up care at
the end of the study. She described a very thorough process that had been implemented to
evaluate the problem and stated she was currently revising the protocol accordingly.
- One coordinator realized the importance of participant cooperation after the study had
begun and developed a two (2) hour orientation for those individuals entering the study.
- One PI, whose study was judged compliant overall, commented that over the course of 20
years, she “finally figured out most things”.
In addition to interviews with staff, regulatory documents were reviewed for evidence of
corrective actions.
- In one study, patients were asked to sign a second informed consent form when it was
discovered that a study team member obtaining consent had not signed at the time
consent was obtained.
- In all studies, revisions were found to informed consent forms and protocols in response
to IRB comments and contingencies.
- In several studies, corrections and supplemental entries onto CRFs were evidenced by
corresponding initials and dates on the CRFs.
Two studies were marked partially compliant because many repeated deficiencies were noted
throughout the study even though the coordinator stated that steps were taken to correct
deficiencies and prevent recurrence. During the course of the interview, it became apparent that
the coordinator was inexperienced and not aware of the significance of deviations from the
protocol.
Findings Related to the Experience of Investigators
A key question identified during the literature review was the degree to which experience with
industry-sponsored studies would affect the degree of compliance with the six elements of GCPs
studied here: documentation, informed consent, IRB/FDA requirements, test article management,
recruitment and staffing.
Thus, the results for investigators identified as “experienced” versus “inexperienced” with
industry studies were compared with respect to the GCP variables described in each of the
“Planning”, “Doing”, “Checking” and “Acting” sections of the analysis.
1) Planning
Differences in compliance with “planning” GCPs between “experienced” and “inexperienced”
investigator-sponsors are summarized in Figure 25 and more fully described below and in
Appendix E.
Figure 25: Inexperienced vs. Experienced Investigators Compliance with Planning
Inexperienced investigators, when evaluated as a group, appeared to equal or outperform
experienced investigators in 6 of 6 “planning” elements: documentation (EI = 79% v. NEI =
84%), informed consent (EI = 52% v. NEI = 75%), IRB/FDA regulations (EI = 67% v. NEI =
92%), test article management (EI = 36% v. NEI = 73%), recruitment (EI = 74% v. NEI = 76%)
and staffing (EI = 8% v. NEI = 33%).
2) Doing
Differences in compliance with “doing” GCPs between “experienced” and “inexperienced”
investigator-sponsors are summarized in Figure 26 and more fully described below and in
Appendix E.
Figure 26: Inexperienced vs. Experienced Compliance with Doing
Inexperienced investigators, when taken as a group, appeared to equal or outperform experienced
investigators in 6 of 6 “doing” elements: documentation (EI = 62% v. NEI = 79%), informed
consent (EI = 92% v. NEI = 97%), IRB/FDA regulations (EI = 80% v. NEI = 83%), test article
management (EI = 84% v. NEI = 100%), recruitment (EI = 79% v. NEI = 85%) and staffing (EI =
90% v. NEI = 100%).
3) Checking
Differences in compliance with “checking” GCPs between “experienced” and “inexperienced”
investigator-sponsors are summarized in Figure 27 and more fully described below and in
Appendix E.
Figure 27: Inexperienced vs. Experienced Compliance with Checking
As a group, inexperienced investigators appeared to outperform experienced investigators in three
(3) of 6 “checking” elements: documentation (EI = 37% v. NEI = 57%), IRB/FDA regulations (EI
= 83% v. NEI = 89%), and test article management (EI = 78% v. NEI = 94%). Experienced
investigators appeared to outperform inexperienced investigators in two (2) of 6 elements:
informed consent (EI = 92% v. NEI = 88%) and recruitment (EI = 51% v. NEI = 50%).
Inexperienced and experienced investigators performed similarly with respect to staffing
management (EI = 92% v. NEI = 92%).
4) Acting
Differences in compliance with “acting” between “experienced” and “inexperienced”
investigator-sponsors are summarized in Figure 28 and more fully described below and in
Appendix E.
Figure 28: Inexperienced vs. Experienced Compliance with Acting
As a group, it appeared that inexperienced investigators more frequently took actions to correct
identified deficiencies than did experienced investigators (EI = 87.5% v. NEI = 100%) in this
study.
[Bar chart: percent compliance with acting (0–100%) for experienced versus inexperienced investigators.]
Scoring of Individual Studies
Studies were scored according to the methodology described in Chapter 3 in order to determine if
there were patterns of compliance within and/or across the two groups of interest, experienced
and inexperienced investigators (Figure 29).
Figure 29: Inexperienced vs. Experienced Investigator Individual Study Scores
With the exception of two outliers, studies conducted by industry-experienced
investigators scored between the 80th and 90th percentiles for compliance across all GCPs
reviewed (average score, including outliers = 0.81) (Figure 29). Two outlier studies had
individual scores in the 57th and 61st percentiles. Both of these studies were conducted by the
same principal investigator.
Scores of individual studies conducted by inexperienced investigators were less tightly
grouped than those of the experienced investigators and ranged from the 79th to the 94th
percentile (average score = 0.85). Average scores of studies conducted by experienced and
inexperienced investigators differed by 0.05 (EI = 0.81; NEI = 0.86), with inexperienced
investigators scoring better. However, when the two outlier scores are not included in the
calculation, the scores are nearly the same for both groups of investigators (EI = 0.85; NEI =
0.86).
5: DISCUSSION
This research explored the state of control of clinical trials in which responsibility for
compliance with Good Clinical Practices rested with the principal investigator and was not a
shared function with an industry sponsor. A review of the literature in Chapter 2 suggested that
the conduct of such trials might be challenged by many factors, including limited availability of
money, expertise and monitoring options when no industry sponsor was available to provide
necessary resources to the trial site. Further, investigator-sponsored trials may be initiated by
investigators with limited expertise in the conduct of trials, individuals who often lack the
in-depth training they might have acquired had they been involved previously in clinical trials
under the tutelage of an industry sponsor or other partner. Absent the resources
of industry sponsorship, it seems important to question whether investigator-sponsor trials can
comply fully with Good Clinical Practices, and at what stage in the trial lifecycle problems occur
most commonly. It is further of interest to understand whether differences in
compliance with GCPs might be related to the level of experience of the investigator with
industry-sponsored trials.
Consideration of Methods
1) The Studied Population
A number of delimitations and trade-offs were made to focus this study around a defined set of
questions and a defined population. First, I collected data from a sample of investigator-
sponsored clinical trials at a single top-tier research university. This approach was chosen
deliberately to ensure that the investigators all came from a similar academic culture, controlling
for some of the variables that might arise from differences in the level of institutional
support and policy from one university to another. It also was chosen
because I was exploring the use of a new tool. To gain experience with the tool it was felt wise to
use it first in a pilot setting that had a sufficient collection of research investigators doing
industry-sponsored and investigator-sponsored research. However, these decisions necessarily
restrict the external validity of the study because the analysis does not represent the state of
compliance of investigators at other universities or in private practice. To address this challenge,
the tool might be adapted and validated to explore issues of concern to “local institutional
culture[s], personalities and environment[s]” (Califf, Morse et al. 2003). In this manner, the tool
could become one of several mechanisms by which the state of operational control of
investigator-led trials (or even sponsor-led trials) might be assessed.
At the outset of this study, I was concerned that the selected university might not have a
sufficient number of investigators who were conducting investigator-sponsor clinical trials.
Fewer than 5% of the 800+ studies submitted to this university’s IRB were classified as
investigator-sponsor studies (University IRB Director, personal communication). I was also
concerned that the recruitment of investigators might require overcoming the sensitivity of
investigator-sponsors to allow audit of their studies in an institution that has historically not
provided this type of oversight. The fact that only one investigator declined the invitation to
participate was encouraging and was felt to increase internal validity by assuring reasonable
representativeness of the sample. It was also critical to assure sufficient numbers of investigators
in the two target groups: those with experience as principal investigators in industry-sponsored
studies and those without such experience, respectively.
based on a nonprobability snowball approach because of its usefulness in studying hard-to-reach
populations. This approach was of some concern, but other methods considered proved infeasible
with the relatively small population under study (Heckathorn 2011). Many universities, including
the one in which this work was conducted, do not have a searchable database of investigator
initiated trials, but efforts to create more flexible resources through IRB submission materials are
underway (Executive Director, University Office for the Protection of Research Subjects,
personal communication). Such a database would not only facilitate research like that undertaken
here but would allow the institution to identify and monitor the compliance of these trials.
The yield of 10 experienced PIs across 13 studies and 7 inexperienced PIs across 8
studies was in my opinion a sufficient number for the type of work conducted here. I was able to
meet the aims of a pilot study, which typically include testing of the tool to determine its ability to
collect data reliably, identify and refine items that lack clarity, detect flaws in the logic of the
instrument and allow for testing of analysis procedures (Burgess 2001, Rattray and Jones 2007).
It was interesting that it was much easier to find investigators with industry experience
than investigators without. In 2011, the Association for the Accreditation of Human Research
Protection Programs compiled data that included, among other metrics, a general description of
the clinical research conducted or overseen by 193 research organizations in the United States.
Amongst university respondents, 31.5% of research was sponsored federally, 11.3% by industry,
and 35.6% from “other” sources including investigator sponsors (AAHRPP 2011). However, in
the university studied here, the proportion of research supported by industry was higher: 61% of
studies were sponsored by non-industry sources (including federal sources), 34% were sponsored
by industry and 5% were sponsored by the academic department (Program Director, University OPRS,
personal communication).
2) The Audit Tool
At the core of this research were questions regarding those aspects of management that assure the
quality and integrity of clinical trials. I was particularly interested in the challenges faced by
investigator-sponsors in vulnerable areas commonly cited as deficiencies by FDA and OHRP. To
delimit and systematize the study around a manageable set of GCP benchmarks, a decision was
made to develop an audit tool that explored the six most common deficiencies of principal
investigators of investigator-sponsored trials as identified in warning letters from the
FDA/OHRP. This analysis therefore did not examine all aspects of GCPs but rather those
elements that I considered to be key potential deficiencies that might challenge investigator-
sponsors. This approach is in line with FDA’s current risk-based approach, which suggests
that investigators focus on the most critical elements, those “more likely to ensure subject
protections and overall study quality” (FDA(a) 2011).
The design of the audit instrument in this study differs in focus and level of detail from
audit tools used by the FDA and other leading research universities. The FDA’s protocol for the
audit (that they call inspection) of human drug, biologic or device studies spans the full range of
practices and procedures necessary to determine compliance with applicable regulations (FDA
2012). Their audit tool provides detailed instructions to inspectors to ensure a thorough and
systematic site review. The audits conducted of trials at Dana-Farber/Harvard Cancer Center are
also relatively broad. The Clinical Trials Audit Manual of the Dana-Farber/Harvard Cancer
Center, often used as a model for this type of activity, defines potential violations across eleven
elements of review including randomization schemes, toxicity and data quality (DanaFarber
2009). The University of Rochester provides clinical researchers with a self-study audit tool.
This tool is more similar in scope to the tool used in this study and focuses on documentation,
informed consent and test article management (UniversityofRochester 2012).
In the present study, the relatively smaller number of elements in the tool allowed for a
deeper analysis of each element and this proved effective when judging compliance within and
across studies. However, the results are to some extent limited by the fact that the selected
elements may not be sufficiently inclusive to characterize compliance fully. The use of a focus
group of professionals experienced in the design, operation and administration of clinical trials
appeared to be a useful way to enhance confidence in the face and content validity of the audit
tool and to reduce the risk of bias due to the preconceived notions of the survey designer.
Nevertheless, there may be value in the future in exploring other topics of interest related to
investigator-sponsor compliance with GCPs, such as clinical trial protocol development and
investigator responsibilities and oversight.
A qualitative, on-site audit approach was selected because of the varied nature of the data
that I planned to collect. A good on-site audit relies on cooperation between the auditor, who is
generally more knowledgeable about the audit criteria, and the investigator, who is normally more
familiar with the processes at the study site. The auditor contributes by providing an independent
analysis of process strengths and weaknesses and the investigator knows best how to capitalize on
the audit findings (Karapetrovic and Willborn 2002). In this study, on-site audits allowed the
researcher to access documents of interest and provided an opportunity to ask questions and
clarify unclear elements with principal investigators and study coordinators. In some instances,
this approach also helped to validate what was documented in records and to observe directly
any discordance between what was done and what was reported to the researcher. Face-to-face
explanation of the purpose and methods of my study, along with a
written promise of confidentiality and anonymity appeared to create an environment of trust and
allowed for more open discussions and access to documents.
Although the principal approach was qualitative, findings were “scored” as “compliant”,
“partially compliant”, “noncompliant” or “not applicable” using numerical values so that the
results could be compared between investigators. However, such a grading system should not be
considered as a good quantitative tool because the scores on individual items were not (and could
not be) weighted according the relative seriousness of non-compliance. In some cases, such as
the inability to locate documentation of a waiver of an IND, a low score does not necessarily
indicate poor performance, but perhaps just poor record keeping. In others, such as failure to
notify the IRB of changes in protocol, the non-compliance was more serious. As a consequence,
the cumulative score has relatively little meaning in terms of the seriousness of poorly compliant
or noncompliant items. Because the cumulative scores were a summed value based on a number
of individual scores, two different studies could have the same overall score and yet be different
in terms of the seriousness of their noncompliance (Reber, Allen et al. 2009). Further, the scores
represent a snapshot of circumstances at the time of the audit, based on examinations of the
records of only a small number of participants. All of these concerns underline the fact that the
scoring is more effective as a coarse measure to provide general insights rather than a
quantitatively valid evaluation of individual trials.
In this study, all of the criteria within each of the six GCP domains were considered to
have an equal level of concern. A different and interesting approach used by auditors at Dana-
Farber might in future be used to improve the tool used in this study. The Dana-Farber approach
includes scoring of “major” and “minor” violations and scores the final audit as “exceptional”,
“satisfactory” or “acceptable, needs follow up” (DanaFarber 2009). This type of scoring system
is still influenced by the judgment of the auditor, but allows the principal investigator and their
team the opportunity to prioritize and correct more serious deficiencies.
Consideration of the PDCA Cycle Framework
In this research, the study instrument was designed around a PDCA approach that
captures aspects of the trial lifecycle in ways that have not been previously explored. PDCA was
chosen because it facilitates characterization of systems used to ensure quality by isolating
iterative activities in a consistent way. Although the PDCA cycle was not typically used in
evaluations of clinical trials at the outset of this study, at least two publications in the last two
years have advocated its use in this way (Kleppinger and Ball 2010, Bhatt 2011). By
pairing PDCA with a framework of commonly cited deficiencies, I postulated that it may be
possible to identify particular gaps that prevent adherence to GCPs, so that more targeted
educational or other solutions to improve compliance could be suggested. Results showed that
PIs had particular difficulty with planning and checking. However, as this study illustrated,
difficulty in any one PDCA phase has implications for the other phases.
Results here would seem to suggest that problems with GCP compliance can be
recognized as early as the planning phase. Defining at the outset all of the tasks that need to be
accomplished may be a significant obstacle faced by the project team (Glancszpiegel 2009).
Early consultation with the IRB may improve the chances of developing an investigational plan
that meets regulatory requirements because their feedback and “insightful comments can be quite
helpful” (Guyatt 2006). An effective IRB can greatly aid the PI with the planning phase by
walking the investigator step by step through a series of preparatory questions and required
responses. Many IRBs, including the IRB of the university under study, are implementing pre-
review processes whereby an IRB staff member reviews initial submissions with particular focus
on the protocol and consent form and works with the PI to ensure that all questions on the
application are clearly answered in sufficient detail to meet regulatory requirements and facilitate
approval at committee (Anonymous 2012). These observations point to the critical role of the
IRB application process to identify issues of concern. If for some reason, an item is not contained
in that checklist, it is more likely that it will be missed or viewed as less important by the
investigator. We might suggest, for example, in reference to this study, that if the IRB submission
instructions required that investigators discuss what is done with patients at the end of the study
period, it might be more likely that the investigator would think through this activity and perhaps
put into place more effective systems to refer patients to the healthcare option most suited to
them. This leads to the question: What happens if an IRB is not so careful or vigilant in its submission requirements and oversight of the protocol? Would the investigators be as well
prepared? In a future study, it may be interesting to determine how much assistance is given by
the IRB and how many and what type of contingencies the PI must address prior to approval.
An area that has received perhaps the most attention with regard to potential challenges of
investigator-initiated trials is that of quality assurance and monitoring, that is, “checking”.
Effective quality assurance/quality improvement programs can be used to protect the well-being
of clinical trial participants and are typically recognized as essential components of high quality
research (Wolf and O'Rourke 2002). Proper monitoring is necessary to “assure adequate
protection of the rights of human subjects and the safety of all subjects involved in clinical
investigation and the quality and integrity of the resulting data” (FDA 1998). FDA guidance for
monitoring of clinical investigations describes an approach to monitoring that includes:
1. Selection of a monitor with education, training or expertise in the nature of the
disease/condition and product under study.
2. Pre-investigation visits to assure that the investigator clearly understands the nature of
the test article, investigational plan, operational requirements and her/his ethical
obligations.
3. Periodic visits that are frequent enough to assure the protocol/investigational plan is
being followed.
4. Review of participant records to assure that they are complete, accurate and legible; that all visits and examinations are recorded; that reasons for a participant's failure to complete the study are noted; and that informed consent is documented.
5. Findings, conclusions and actions taken to correct deficiencies are recorded such that it is apparent that the sponsor's obligations for monitoring the progress of the study are being fulfilled.
For the investigator-sponsor, these requirements may seem daunting. In the present study,
only 12 of 21 trials were found to be monitored, and only three of these were monitored more
often than annually. Moreover, monitoring was most often accomplished through internal
mechanisms, either internal to the study team itself or internal to the sponsoring academic or
clinical department. For obvious reasons of conflict of interest, monitoring may be more
effective when conducted by an independent body, but some evidence in the literature suggests
that internal monitoring programs can be effective. For example, in 2006 the Society for Clinical
Trials determined that little guidance was available for organizing monitoring plans for early
phase trials and subsequently commissioned a committee to “propose corresponding guidelines
for exploratory clinical trials (i.e. most phase I and II trials and others not requiring a fully
independent data monitoring committee)”. The committee concluded that monitoring
implementation did not need to be the same in all locations, was dependent on the nature of the
trial and could remain decentralized and under the purview of the local IRB if appropriate
policies were in place (Dixon, Freedman et al. 2006). To illustrate this point, a team of
researchers at The Hospital for Sick Children in Toronto has developed a template for an
objective internal monitoring plan based on Good Clinical Practices that meets the requirements
of Health Canada and FDA. They developed a set of standard operating procedures that included
sampling methodology and frequency, requirements of monitors including consideration of
conflict-of-interest challenges, checklists outlining essential documents to be reviewed and
checklists outlining data elements to be monitored. The team found that a well established
dissemination plan was essential to ensure that appropriate team members were made aware of
problem areas. When creating action plans, their template calls for appointment of responsible
parties and time frames for correction. The study team found that their internal monitoring
program allowed for identification of deficiencies, recognition of opportunities for improvement
and objective measures of success (Mehta and Devine 2012).
The studies examined here employed various methods of monitoring. Because no
consensus currently exists regarding optimal models for monitoring an investigator-initiated trial,
these trials would seem to meet the letter of the law. However, we must consider whether they
are sufficiently effective to ensure participant safety and data integrity in the unlikely situation
that an investigator has a trial that is dangerously out of control. One approach that might
improve the audit process in a way that is not too onerous may be a Quality by Design (QbD)
model such as the one proposed by FDA and Duke University. In this model “those responsible
for the overall conduct of the trial would identify the critical aspects that, if not performed
correctly, would threaten the protection of patients or the integrity of results”. Vulnerabilities and
areas where tolerance for errors is unacceptable could be identified during protocol development.
Monitoring is then targeted to the protocol and encompasses the full range of identified risks;
findings then become part of an improvement feedback loop. The Clinical Trials Transformation
Initiative does not advocate additional layers of regulation but rather encourages a “thoughtful
restructuring of existing practices, emphasizing careful planning and streamlined execution”
(Landray, Grandinetti et al. 2012). In future studies, a closer look at what is being monitored and
the reasons for selection of those measures might provide better insight into whether investigators
are taking appropriate actions to prevent and/or correct failures of process that may affect the
safety of participants and data integrity.
Consideration of the Results
A central aim of this study was to further the understanding of gaps that might be present in
the practices used to protect research participants. Knowledge of such gaps can be useful in
guiding policy development and educational initiatives that could improve practices and reduce
risks of non-compliance. With this in mind, several areas of challenge stand out as significant; three of these, 1) documentation systems, 2) staffing and 3) care transition, will be discussed further here.
1) Documentation Systems:
a) Record-keeping: Record keeping was observed to be an area in which several
nonconformities were found. For example, study staff often appeared unable to locate
documents such as IND/IDE correspondence and source documents to support recorded
data. Gaining access to these records was further complicated because records from a
single study were often kept in multiple locations, on and off site, and in multiple
formats, including a mix of paper and electronic records. If such documents are difficult
to organize and locate even during ongoing trials like those studied here, we might
predict that they will be even more difficult to reconstruct months or years after the study
has terminated. Nevertheless, regulatory audits are often conducted after the trial is
concluded (FDA(a) 2011). For this reason, regulatory agencies publish guidelines on
record retention, which can vary from periods as short as 5 years after trial completion to
as long as 30 years for some oncology and blood transfusion records (Ronald, Dinnett et
al. 2011). It would be interesting to examine in future research the degree to which
investigators in investigator-sponsor trials are able to establish storage systems that can
meet the expectations of FDA/ICH which state that the investigator is responsible as
follows:
Essential documents should be retained until at least 2 years after the last
approval of a marketing application in an ICH region and until there are no
pending or contemplated marketing applications in an ICH region or at least 2
years have elapsed since the formal discontinuation of clinical development of
the investigational product (ICH(a) 1996).
Part of the problem that may underlie the difficulties of document retrieval and storage
may stem from the use of paper-based systems. Clinical trials have historically relied
heavily on paper-based systems. Even in this era of electronic capabilities and internet-based products, the clinical trials industry appears to be behind most other industries in
the adoption of electronic technology (Marks 2004). One way to improve the
standardization and organization of clinical trial data would be to use electronic
documentation systems of the type that have become very common in multicenter clinical
trials sponsored by larger pharmaceutical companies (Allison 2012). However, it can be
challenging to implement electronic data management systems that are able to satisfy the
regulatory requirements for software validation, audit trails, and assured electronic
signatures (FDA 1999). These products are costly to purchase and customize, are
difficult to select, and require substantial investments in training and ongoing technical
support of the system. Electronic systems customized for individual trials and with
sophisticated interfaces and ongoing licensing and updating fees can cost hundreds of
thousands of dollars, potentially beyond the reach of all but the largest academic
institutions (Pless 2009).
An alternative approach that is attracting much support by universities is the use
of a tool called Research Electronic Data Capture (REDCap), originally developed at
Vanderbilt University and now in use at more than 195 universities, for example at the
University of Michigan and the University of Florida (UniversityofFlorida 2012,
UniversityofMichigan 2012). This standardized system for data capture and adverse
event reporting could make electronic systems more accessible to small, investigator
sponsored studies throughout an institution. However, not all centers are wedded to the
use of the REDCap system. Universities have the freedom to adopt systems that best
satisfy their needs, and this may reduce the consistency between documentation across
sites. Selection of the product is critical. The REDCap system is designed to “provide
research teams intuitive and reusable tools for collecting, storing and disseminating
project-specific clinical and translational research data” (Harris, Taylor et al. 2008).
Other more costly systems, such as Oncore®, include additional capabilities such as
biospecimen management, registry management, billing compliance, and interfaces with
electronic health records (OnCore 2012).
Once the product is selected, training of staff and attention to workflow changes
required for its use are vital, but often plagued by problems. In one study, novice and
expert users of an electronic clinical trials management system (CTMS) were studied
during the development of a participant-activities calendar to assess the potential for
human-computer interaction problems. The study found that the electronic workflow had
a similar level of complexity to that based on more traditional word processing methods
but the CTMS system challenged users with a larger number of screen transactions and
potential usability problems. The investigators concluded that for both novice and expert
users, non-intuitive user interfaces such as unfamiliar icons also caused ambiguous or
incorrect interpretations by users that could potentially contribute to adverse outcomes
(Starren, Payne et al. 2006).
Currently, electronic research data repositories are not standardized, making such data inflexible and difficult to navigate and interpret. However, the establishment of standards for the use and archiving of electronic research data and metadata is underway by
FDA through its Janus Clinical Repository Project. FDA’s goal is to create open access
software and tools to promote the maximum amount of data sharing with industry
(CDISC 2012, FDA(a) 2012). Its repository is designed to receive data in “CDISC”
format. The Clinical Data Interchange Standards Consortium (CDISC) is a “global, open,
multidisciplinary, non-profit organization” whose broad membership includes
pharmaceutical and biotech companies, clinical research organizations and government
organizations including the Veterans Administration and NCI. It is developing and
supporting “platform independent, vendor neutral data standards that enable information
system interoperability to improve medical research and related areas of healthcare”.
At the university in which this study was carried out, it has recently been decided that the benefits of investing in an electronic clinical trials management system outweigh its cost. The University is in the process of product selection and is focusing on more robust
systems with the goal in mind of implementing a system that not only captures data from
the clinical trials but also includes the capabilities to handle billing transactions and
compliance specifications (University Chief Information Officer, personal
communication). It remains to be seen whether this system will result in better
documentation practices at the sites. It is also possible, however, that by paying so much
attention to selection and implementation of the new tool, initiatives to train clinical
coordinators in better documentation practices during the interim may be downgraded in
priority.
b) Protocol Deviations: Protocol deviations are a common problem in all clinical trials. In
their analysis of 80 clinical trials published in four major medical journals, Sweetman and Doig concluded that deviations were underreported: 33% of trials (26/80, encompassing 701 study participants) did not report any type of protocol deviation. Among the 38 trials that did explicitly report deviations, the number of deviations was typically low, on average 7% (Sweetman and Doig 2011). In this study, protocol deviations were found to
have been reported to the IRB by less than one-third of experienced investigators and by
about half of inexperienced investigators. It is not clear whether such a figure
underestimates the failure to report all appropriate incidents because even those who
reported deviations may have captured only a subset of those deviations. Thus it would
be surprising if the complete absence of deviations in some of the studies examined here
reflected the true extent of protocol adherence. Part of the explanation for the failure to
report deviations may stem from the way that reporting requirements are understood. In
discussions with study coordinators and some PIs, it was apparent that what they
perceived to constitute a protocol deviation was often not aligned with ICH guidelines
which state:
i. The investigator should conduct the trial in compliance with the protocol
….which was given approval by the IRB….
ii. The investigator should not implement any deviation from, or changes of the
protocol without…..prior review and documented approval from the IRB….
iii. The investigator….should document and explain any deviation from the approved
protocol.
iv. The investigator may implement a deviation….to eliminate an immediate
hazard….
The ICH guidelines do not distinguish between major and minor deviations; however,
many study coordinators in this study were found to make this arbitrary distinction and
determined that “minor” deviations did not need to be reported. Is this a serious problem?
Protocol deviations can have implications for study outcomes, even when they do not seem to
be substantive or have effects on participant safety (Alemayehu, Alvir et al. 2012). A short
course for PIs and coordinators on this topic may help to improve compliance with respect to reporting protocol deviations at this University.
c) Acknowledgement of Adverse Events by PIs: The PI of a trial has a significant
responsibility to understand what is happening “on the ground” for a trial to which he or
she has committed. Good Clinical Practices require the investigator to be “responsible
for all trial-related medical decisions” (ICH(a) 1996). The PI is often in a unique position
to understand the complete context of the investigation and to be able to determine the
significance of overt or subtle findings in safety data that could eventually result in a
serious adverse event. However, in a number of studies, PIs did not routinely
memorialize their review of clinical safety testing by some form of documentation. In
most instances, study coordinators stated they reviewed the results but only when there
were findings of concern did they bring those findings to the attention of the PI. Such an
approach is problematic because the investigator may only become aware of isolated
findings or may miss important findings because they had been filtered by subordinates
who lacked the capabilities to make appropriate decisions. Such practices could put the
trial at risk. Even if a PI delegates the collection of study-related data to another team
member, that PI must retain responsibility for validation of the data (Fortwengel 2011).
The fact that an investigator’s signature does not appear on a data sheet does not
necessarily mean that the investigator was unaware of the results and adverse events that
were collected. However, without documentation, it is impossible to provide evidence to
prove that the PI in fact participated in data management should the study be audited by
regulatory authorities. Good documentation helps to ensure that study results are built on
credible and valid data and helps to protect the participant’s rights and safety (Bargaje
2011). Previous observations that such documentation was often missing at the
University of Tennessee led their Medical Group to create documentation checklists for
investigator-sponsors. They noted that even documents supplied by industry did not
include all elements. One example used was particularly germane; they were concerned
that documentation “might not show the principal investigator’s oversight”. Source
documents created by the UT Medical Group include spaces for the investigator’s
signature on each document (Anonymous(a) 2012). The practice of signing each data
sheet is advocated as well in most training programs that are developed by reputable
clinical research training and consulting groups such as Barnett Educational Services
(BarnettEducationalServices 2012).
2) Staffing:
The role played by study coordinators is given much less attention in the literature to date than that played by the PI. This may reflect a prevalent view that the role of study coordinator
is that of an assistant to the PI, with little or no authority or autonomy with respect to the
conduct of the study. Yet, in an analysis of FDA warning letters in 2000, almost half of the
investigators who were cited for not supervising their staff blamed study coordinators for
their citations (Davis, Chandros Hall et al. 2002).
The position of study coordinator requires substantial knowledge and skill across a
variety of domains. A model manual of standard operating procedures for study coordinators,
developed by the Hamot Research Center at the University of Pittsburgh, defines 14 potential
roles and responsibilities, including negotiating budgets, facilitating peer review, developing
study processes and documents, educating staff, recruiting, screening and enrolling
participants, accounting for study funds and reporting adverse events (Fries 2002). In
addition to administrative functions, coordinators are often the key team member charged
with supporting and monitoring the physical and psychosocial wellbeing of participants as
they progress through the trial (Arford, Knowles et al. 2008). Given the critical importance
of study coordinators, it is surprising that the coordinators associated with the studies
reviewed here had educational and experiential backgrounds that were highly variable.
However, such variability appears consistent with previous observations. For example, the
Association of Clinical Research Professionals defines eligibility criteria for coordinator
certification as anywhere from a high school diploma and 4,500 hours of experience to a
Bachelor’s degree and 3,000 hours of experience with no clinical background required
(ACRP 2012). Without standardization of coordinator preparation, even within the certifying
body of the profession, it is easier to understand why so much variability is found in clinical
studies.
Amongst the studies examined here, coordinators almost universally felt that they were competent and confident to perform their role. Nevertheless, several instances
were found where these individuals appeared to misunderstand the protocol or the regulatory
or legal ramifications of their activities. Of significance, for example, was the finding that a
low-risk, non-invasive medical device was applied to study participants by a newly recruited
and unlicensed member of the study team. In other universities, it has been recognized that
coordinators vary tremendously in their skills and educational backgrounds. Some
universities have taken aggressive steps to improve the capabilities of their clinical
coordinators. At the University of South Carolina, this additional intervention took the form
of a mandatory research coordinator development program. The program focused on human
subjects’ protections such as elements of informed consent and recruitment, administrative
issues such as billing compliance, and regulatory requirements. Three years after the
introduction of the program, the University observed a substantial decrease in the number of
audit findings of concern (Arford 2008). Training programs for coordinators have also been developed across universities and by government agencies to provide better options for coordinators to improve their skills (Rosa et al. 2009). In other universities, attention has been paid to standardizing job descriptions for clinical coordinators so that a greater degree of homogeneity, increased responsibility and autonomy, and a higher level of preparation for the role of coordinator could be assured (Merry 2010).
3) Care Transition at End of Study:
A new area of focus in clinical trials research is the concern about what, if anything, researchers
owe volunteers at the end of their study participation. Current regulations do not provide
definitive guidance in this regard (Sofaer, Thiessen et al. 2009). For example, the 2008 revision
to the Declaration of Helsinki states:
At the conclusion of the study, patients entered into the study are entitled to be informed
about the outcome of the study and to share any benefits that result from it, for example,
access to interventions identified as beneficial in the study or to other appropriate care
or benefits (WorldMedicalAssociation 1964).
In 2009 FDA issued guidance for investigators with regard to protecting the rights, safety
and welfare of study participants. Although the guidance is not specific about what should be
done to ensure ongoing care for subjects at the end of the study, it does recommend that the
investigator ensure care for conditions that result as a consequence of the study intervention,
whether those conditions are discovered during or after the course of the study. It also instructs
the investigator to take action related to non-study-related conditions when such conditions
become apparent. The guidance states that if the participant does not have a treating physician,
the investigator should assist the participant in obtaining needed medical care (FDA 2009). Thus
it was interesting in this study to find that few study teams thought about the fate of subjects once
they had completed the trial, assuming instead that the subject would return to their original
primary care team.
What do study participants consider that they are owed after their participation ends? In
a previous study to understand patients' views, focus groups were held with 93 individuals who
had prior experience as enrolled participants in U.S. studies of experimental drugs for one of
three chronic illnesses (diabetes, depression or arthritis). The comments of the participants
regarding expectations focused on three post-trial issues: access to study drug, short-term care
transition and treatment for adverse events. Participants expressed the view that they should be
offered continued access to study drug, if the drug showed benefit to them. They mentioned
several options for obtaining the drug including participation in later phases of the same trial,
participation in follow-up studies or different studies, or receiving access to the drug through
conventional health care systems. Participants felt that researchers and sponsors should be
obligated to bridge the gap between trial and post-trial care to prevent post-trial deterioration in
health. They suggested that options might include helping the participant to be aware of all of
their options, providing short term supply of study drug as necessary, transferring trial-related
medical records to the participant’s non-study physician, and performing limited post-study
surveillance. Groups agreed that former participants should be provided information about
whether they received active drug or placebo and about the adverse events discovered during the
trial even years after the end of their participation. Some believed that the sponsor should be
responsible for the cost of care for long term effects and post-trial health problems. Participants
had many reasons why they felt that these obligations should exist. Most frequently cited was the
potential for post-trial deterioration in health. Others believed the special relationship that they
developed with the PI as a care provider should not be abruptly severed and still others believed
that their exposure to risk generated a reciprocal obligation on the part of the sponsor and PI
(Sofaer, Thiessen et al. 2009).
In this study, I observed confusion in the minds of the study team about even the
possibility of an aftercare obligation toward subjects at the end of study. It was striking that when
asked about the transition of the participant to primary care at the end of the study, coordinators
and PIs were generally found to misinterpret the question, describing instead the criteria for
prematurely ending an individual’s participation in the study. Such confusion is perhaps not
unexpected because when pressed most coordinators and PIs appeared unaware of any
responsibility to ensure care for participants after they ended the trial.
As discussed in Chapter 2, the Belmont Report drew a clear distinction between clinical
care and research; research was undertaken for the purpose of developing generalizable
knowledge to benefit future patients whereas clinical care had a therapeutic intent separated from
the research objective. However, much has changed since that report was written more than thirty years ago. More recently, a 2011 Hastings Center Report has suggested that this distinction
between clinical care and research has become “increasingly blurred”. The Report further
advances a new vision that ethically integrates research and care in an effort to support the
development of evidence as an outgrowth of clinical care (Largent, Joffe et al. 2011).
The conclusions of the Hastings Center Report of 2011 build on an earlier report of the
same organization in 2004 that describes an ethical framework for thinking about clinical care
owed to study participants. That report concludes that ancillary care responsibilities need to be
considered in detail every time a protocol is developed. The obligation for ancillary care may be
defined ethically as a limited and partial set of responsibilities that are “well described by saying
that aspects of their subject’s health has been entrusted to them”. Depending on the nature of the
study (simple blood draw versus long term natural history study) the study may uncover needs for
ancillary care. The report further describes an imperative for acting compassionately and being
reasonably responsive to an individual’s needs and perspectives, engaging with participants as
whole people and beyond the constraints of a research protocol. Finally, it identifies a need to
express gratitude for participants' willingness to enter into a vulnerable position from which they
may never benefit (Richardson and Belsky 2004).
There are at least two reasons why it may be important in future to have a closer
relationship between research and clinical care. First, it may enhance the attractiveness of
participating in clinical trials and thus foster better and faster recruitment. It is well-understood
that most clinical trials find it difficult to enroll subjects in a timely manner. Such delays can
greatly increase the length and cost of the clinical trials, and ultimately may delay the marketing
of the new drug. In 2010, the National Heart, Lung and Blood Institute (NHLBI) held meetings
to assess the significance of delayed recruitment and to determine strategies to overcome the
shortage of study volunteers. The importance of primary care providers as “gatekeepers” for
clinical trials recruitment was emphasized. Two strategies considered as options to reduce the
problem were to increase the participation of primary care providers, first by data mining
electronic medical records of primary care providers for potential study participants and second,
by involving and arming the primary care providers with recruitment tools that are “practical,
specific, unobtrusive and easily implementable by the primary care clinician’s staff” (Probstfield
and Frye 2011).
In most of the studies considered in this work, recruitment was accomplished by enrolling subjects from principal investigators' private practices. However, it is not clear that PIs would be
motivated to consider the way that subjects were transitioned to primary care even if studies such
as those examined here were to rely on referrals. Instead, it may suggest that investigators structure their trials in the way that regulatory agencies or institutional review boards expect. Normally, no questions are asked of investigators regarding the management of patients
beyond the termination of the study, so such considerations may not even come to the mind of the
investigator during planning phases. What is does suggest is that the planning phase for the study
is critically important and the detailed submission that is required by the IRB may play an
important role in cuing the investigator about aspects of the trial that must be managed from the
perspective of Good Clinical Practices.
The termination of a study can be difficult, particularly in long-term studies where a bond
is developed between the participant and the investigator and staff. Providing participants with
specific recommendations for follow up care is ideal but can be difficult until such time as the
trial is over and the participants can be informed of study results (Friedman, Furberg et al.
2010). Given these circumstances, possible ways of addressing post-trial access to care should be
discussed and negotiated prior to beginning a study and researchers and sponsors should take
responsibility for certain short-term solutions when appropriate (Grady 2005). Next steps for this
University might include education of investigators and staff about the need to consider post-trial
management, and possibly adding a field to the IRB application that requires investigators to
describe their post study transition plans.
Considerations of the Experience of Principal Investigators
At the beginning of this work I postulated that investigators with industry experience
might be more capable of conducting a good trial from the point of view of GCPs because of the
learning that may have resulted from the education and oversight that was provided by industry in
the course of previous research. Thus it was surprising to find no better performance, and in
some aspects, weaker performance, amongst experienced as compared to inexperienced
investigators. This study defined “inexperience” as the lack of experience as a principal
investigator in industry sponsored clinical trials. The study did not consider other characteristics
of investigators such as formal education and training in GCPs. However, there is some support
in the literature for a relationship between the level of preparation of an investigator and the state
of compliance of their trials. A study supported by the Hospital Corporation of America sought
to determine if physician PIs certified by the Academy of Clinical Research Professionals had as
many “for-cause” audits by FDA as those who were not certified. They found that the certified
PIs had just as many for-cause inspections as those not certified but that overall the certified PIs
had better inspection outcomes with fewer official actions taken by FDA. They also found that
many non-certified investigators performed very well on FDA inspections (Vulcano 2012). In the
present study, the specific observations were not rated according to level of severity, so it is
difficult to judge from the research reported here whether such a relationship also exists for
less serious deviations.
Could it be that principal investigators really do not learn as much from industry
experience as might be supposed? Part of the failure to achieve the kinds of learning that might
be envisioned could relate to the way that investigators are trained in industry-sponsored studies.
For reasons of efficiency and cost-effectiveness, large, multicenter, industry sponsored studies
typically host a centralized meeting where investigators and their teams are introduced to the
study. In a 2006 CenterWatch survey of 102 investigative sites regarding the effectiveness of
traditional pre-study investigator meetings, 64% of respondents preferred hands-on or interactive
learning over the traditional large-group lecture style. Investigators and coordinators found the
large group sessions to be informative but they wanted sessions tailored to their level of
experience and the unique needs of their study site (Space 2007). Another drawback of front-loading
protocol-specific training without any serious educational follow-up is that some elements
important to GCPs may not be internalized, or are later forgotten by the investigator and staff as
the study progresses (Lake 2008).
Historically, the major responsibility for the study has been considered to be squarely in
the hands of the investigator. Could it be that principal investigators in the academic setting have
more support than might be recognized? Examples of education, training and mentoring
programs offered by government, professional associations and academic centers are evident in
the literature. For example, FDA offers a three day Clinical Investigator Training Course that is
“intended to provide clinical investigators with expertise in the design, conduct and analysis of
clinical trials; improve the quality of clinical trials; and enhance the safety of participants”. The
course covers scientific, regulatory and ethical considerations including pre-clinical information
that is required to support a new drug application. Another objective of the program is to
improve communication between FDA and the investigator (FederalRegister 2012).
The Royal College of Physicians and Surgeons of Canada launched a Clinical
Investigator Program (CIP) in 1995 to assist in the career development of MD-PhD program
graduates. The CIP was designed to be a rigorous and structured approach to developing clinical
investigators. Ten years after the inception of the program, the Royal College studied current and
former trainees to determine the impacts of the program. They found that the program has been
successful in supporting investigators who pursue independent and team projects with many
supported by external awards, publishing papers and attaining academic appointments (Hayward,
Danoff et al. 2011).
This type of effort has also been observed in the Clinical and Translational Science Awards
(CTSA) created by NIH for academic health sciences campuses, which included a call for the
development of the next generation of clinical research leaders. These awards typically support
in-classroom and online education offerings as well as written materials and consultation. The CTSI
at the University of California San Francisco (UCSF) has gone beyond the base requirements for
the program by establishing a Mentor Development Program designed to train mid-career
academic researchers to be “more effective as clinical and translational research mentors”. In the
UCSF model, mentors guide and support developing investigators not only in scientific
methodology but also in team building and project management (Johnson, Subak et al. 2010).
This study did not consider the credentials of the PIs studied. An interesting next step might be to
correlate the academic and continuing education backgrounds of PIs with audit outcomes, in order
to see whether formal education and training in clinical research methods makes a difference in the
ability of the PI to comply with GCPs.
In all of these initiatives, clinical trial staff members are given little recognition as central to
the study, judging by how much less training is directed at them than at the principal
investigator. Nevertheless, numerous publications have highlighted how
much the success and compliance of the trial now rests in the hands of study staff, who often play
a wide-ranging role in all aspects of study conduct (Fries 2002). For example, the role and
impact of study coordinators was evaluated over the course of one particular trial, the Diabetes
Control and Complications Trial/Epidemiology of Diabetes Interventions and Complications
(DCCT/EDIC), now in its 28th year. At the outset of the trial, the medical model of the 1980s
limited the role of the coordinator to screening for patient recruitment and coordinating protocol-
mandated interventions. Present day DCCT/EDIC study coordinators describe their roles not
only as coordinators, but as “educators, clinicians and researchers” with responsibilities that
include education of participants, data management, protocol implementation, laboratory testing,
regulatory submissions and leadership activities such as manuscript preparation, fund raising and
committee leadership (Larkin, Lorenzi et al. 2012). Perhaps the initial assumption of a study such
as this, that the principal investigator is the key person to assure compliance, is flawed.
Much may instead depend on the experience of the study team as a whole, including its study
coordinators or managers. No data were collected on the training or industry
experience of these individuals, but future research might focus on a better understanding of the
level of involvement of the whole study team rather than just the PI when trying to understand
whether experience with industry-sponsored trials is useful in improving the compliance of a trial
in which the investigator is also the sponsor.
REFERENCES
21CFR50 (2013). Protection of Human Subjects
http://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfcfr/CFRSearch.cfm?CFRPart=50.
21CFR56 (2011). Institutional Review Boards
http://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfCFR/CFRSearch.cfm?fr=56.106.
21CFR312 (2013). Investigational New Drug Application
http://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfCFR/CFRSearch.cfm?fr=312.84.
21CFR801 (2013). Labeling
http://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfCFR/CFRSearch.cfm?fr=801.109.
21CFR812 (2013). Investigational Device Exemptions
http://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfcfr/CFRSearch.cfm?CFRPart=812.
AAHRPP (2011). Selected Types of Research Conducted or Overseen by Organizations
http://www.aahrpp.org/. www.aahrpp.org.
ACRP (2012). Association of Clinical Research Professionals: CRC Certification Handbook
http://www.acrpnet.org/.
Alemayehu, D., J. Alvir, P. B. Chappell and C. A. Knirsh (2012). "Risk Assessment and
Mitigation." Applied Clinical Trials(April): 33-37.
Allison, M. (2012). "Reinventing Clinical Trials." Nature Biotechnology 30(1): 41-49.
AMA (1847). American Medical Association Code of Medical Ethics, http://www.ama-
assn.org/resources/doc/ethics/1847code.pdf.
Anonymous (2012). "Best Practices: Pre-review Process Results in Faster IRB Review Process."
IRB Advisor August.
Anonymous(a) (2012). "Create Tools to Facilitate Efficient Documentation." Clinical Trial
Administrator Web. 2(November).
Arford, P. H., M. B. Knowles and N. V. Sneed (2008). "Competent Human Research
Professionals." The Journal of Continuing Education in Nursing 39(12): 565-567.
ASQ (2011). Quality Tools:Plan-Do-Check-Act Cycle, http://asq.org/learn-about-quality/project-
planning-tools/overview/pdca-cycle.html.
Bargaje, C. (2011). "Good Documentation Practice in Clinical Research." Perspectives in Clinical
Research 2(2): 59-63.
BarnettEducationalServices (2012). Clinical Research Training and Consulting
http://www.barnettinternational.com/.
Beecher, H. (1966). "Ethics and Clinical Research." The New England Journal of Medicine
274(24): 1354-1360.
Bellomo, R., S. Warrillow and M. Reade (2009). "Why We Should be Wary of Single Center
Trials." Critical Care Medicine 37(12): 3114-3119.
BelmontReport (1978). Ethical Principles and Guidelines for the Protection of Human Subjects of
Research (Belmont Report) http://www.hhs.gov/ohrp/humansubjects/guidance/belmont.html.
Berro, M., B. Burnett, G. Fromell, K. Hartman, E. Rubenstein and K. Schuff (2011). "Support for
Investigator-Initiated Clinical Research Involving Investigational Drugs or Devices: The Clinical
and Translational Science Award Experience." Academic Medicine 86(2): 1-7.
Bhatt, A. (2011). "Quality of Clinical Trials: A Moving Target." Perspectives in Clinical Research
2(4): 124-128.
Bigby, B. (2002). "A Continuous Quality Improvement Plan for Monitoring Clinical Trials."
Clinical Researcher 2(8): 20-22.
Burgess, T. F. (2001). "A General Introduction to the Design of Questionnaires for Survey
Research." University of Leeds(11).
Burman, W., P. Breese, S. Weis, N. Bock, J. Bernardo and A. Vernon (2003). "The Effects of
Local Review on Informed Consent Documents from a Multicenter Clinical Trials Consortium."
Controlled Clinical Trials 24: 245-253.
Califf, R. (2006). "Clinical Trials Bureaucracy: Unintended Consequences of Well-Intentioned
Policy." Clinical Trials 3(6): 496-502.
Califf, R. M., M. A. Morse, J. Wittes, S. N. Goodman, D. K. Nelson, D. L. DeMets, R. P. Iafrate
and J. Sugarman (2003). "Toward Protecting the Safety of Participants in Clinical Trials."
Controlled Clinical Trials 24: 256-271.
CBER (2000). "Warning Letter Re: Human Gene Therapy Institute."
http://www.fda.gov/downloads/ICECI/EnforcementActions/WarningLetters/2000/UCM068181.p
df.
CBER (2003). "Warning Letter To Alkis Togias."
http://www.fda.gov/ICECI/EnforcementActions/WarningLetters/2003/ucm147403.htm.
CDISC (2012). Mission & Principles http://www.cdisc.org/.
Christian, M., J. Goldberg, J. Killen and J. Abrams (2002). "A Central Institutional Review Board
for Multi-institutional Trials." The New England Journal of Medicine 346(18): 1405-1408.
Corbie-Smith, G. (1999). "The Continuing Legacy of the Tuskegee Syphilis Study:
Considerations for Clinical Investigation." The American Journal of the Medical Sciences
317(1): 5-8.
DanaFarber (2009). Dana Farber/Harvard Clinical Trials Audit Manual www.dfhcc.harvard.edu.
Davis, A. M., S. Chandros Hall, C. Grady, B. S. Wilfond and G. E. Henderson (2002). "The
Invisible Hand in Clinical Research: The Study Coordinator's Role in Human Subjects
Protection." Journal of Law, Medicine & Ethics 30: 411-419.
Dixon, D. O., R. S. Freedman, J. Herson, M. Hughes, K. Kim, M. H. Silverman and C. M.
Tangen (2006). "Guidelines for Data and Safety Monitoring for Clinical Trials Not Requiring
Traditional Data Monitoring Committees." Clinical Trials(3): 314-319.
DukeUniversity (2011). https://www.dtmi.duke.edu/for-researchers/regulatory-support/.
Emanuel, E., A. Wood, A. Fleischman, A. Bowen, K. Getz, C. Grady, C. Levine, D.
Hammerschmidt, R. Faden, L. Eckenweller, C. Muse and J. Sugarman (2004). "Oversight of
Human Participants Research: Identifying Problems To Evaluate Reform Proposals." Annals of
Internal Medicine 141(4): 282-291.
FDA (1938). Food, Drug and Cosmetic Act 21 U.S.C 301 et seq.
FDA (1987). Prescription Drug Marketing Act Pub. L. No.100-293,102 Stat. 95.
FDA (1992). Prescription Drug User Fee Act and Amendments Pub. L. No. 102-571, 106 Stat.
4491.
FDA (1997). Food and Drug Administration Modernization Act Pub. L. No. 105-115, 111 Stat.
2296.
FDA (1998). Monitoring Clinical Investigations Guidance for Industry - Guideline for the
Monitoring of Clinical Investigations
http://www.fda.gov/ScienceResearch/SpecialTopics/RunningClinicalTrials/GuidancesInformatio
nSheetsandNotices/ucm219433.htm. www.fda.gov.
FDA (1999). Guidance for Industry: Computerized Systems Used in Clinical Trials
http://www.fda.gov/RegulatoryInformation/Guidances/ucm122046.htm.
FDA (2003). Pediatric Research Equity Act Pub. L. No. 108-155, 117 Stat. 1936.
FDA (2005-2010). "Inspections, Citations, Enforcement and Criminal Investigations."
http://www.fda.gov/ICECI/EnforcementActions/WarningLetters/2005/default.htm.
FDA (2007). Food and Drug Administration Amendments Act Pub. L. No. 110-185, 121 Stat.
823.
FDA (2009). Guidance for Industry; Investigator Responsibilities-Protecting the Rights, Safety
and Welfare of Study Subjects
http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/U
CM187772.pdf.
FDA (2010). "Milestones in US Food and Drug Law History."
http://www.fda.gov/AboutFDA/WhatWeDo/History/default.htm.
FDA (2011). "Understanding Clinical Trials." http://clinicaltrials.gov/ct2/info/understand.
FDA (2012). Inspections, Compliance, Enforcement and Criminal Investigations; Part III
Inspectional (Surveyor Guidance) http://www.fda.gov/ICECI/default.htm. www.fda.gov.
FDA(a) (2011). Guidance for Industry: Oversight of Clinical Investigations - a Risk-Based
Approach to Monitoring
http://www.fda.gov/downloads/ScienceResearch/SpecialTopics/CriticalPathInitiative/UCM27752
9.pdf.
FDA(a) (2012). Janus Clinical Trials Repository Project
http://www.fda.gov/ForIndustry/DataStandards/StudyDataStandards/ucm155327.htm.
www.fda.gov/forindustry.
FederalRegister (2012). Clinical Investigator Training Course 77 FR 60439
https://www.federalregister.gov/articles/2011/07/29/2011-19149/clinical-investigator-training-
course.
Fortwengel, G., Ed. (2011). Guide for Investigator Initiated Trials. ebrary.com, Karger
Publishers.
Friedman, L. M., C. D. Furberg and D. L. DeMets (2010). Fundamentals of Clinical Trials.
New York, Springer.
Fries, R. A. (2002). "Standard Operating Procedures for Clinical Research Coordinators." Drug
Information Journal 36: 369-377.
Getz, K. and R. Zuckerman (2010). "Today's Global Landscape." Applied Clinical Trials 19(6):
34-46.
Glancszpiegel, D. (2009). "Key Strategies for Planning and Executing Global Clinical Trials."
Pharmaceutical Executive 10: 4-11.
Grady, C. (2005). "The Challenge of Assuring Continued Post-Trial Access to Beneficial
Treatment." Yale Journal of Health Policy and Ethics: 425-436.
Guyatt, G. (2006). "Preparing a Research Protocol to Improve Chances for Success." Journal of
Clinical Epidemiology 59: 893-899.
Harris, P. A., R. Taylor, R. Thielke, J. Payne, N. Gonzalez and J. Conde (2008). "Research
Electronic Data Capture (REDCap) - A Metadata-driven Methodology and Workflow Process for
Providing Translational Research Informatics Support." Journal of Biomedical Informatics 42.
HastingsCenter (1996). Advisory Committee on Human Radiation Experiments, Subject
Interview Study: Study Subject 333208-7.
Hayward, C. P., D. Danoff, M. Kennedy, A. C. Lee, S. Brzezina and U. Bond (2011). "Clinician
Investigator Training in Canada: A Review." Clinical Investigative Medicine 34(6): E192-E201.
Heckathorn, D. D. (2011). "Comment: Snowball Versus Respondent-Driven Sampling."
Sociological Methodology 41: 355-358.
Hoffman, S. (2002). "Regulating Clinical Research: Informed Consent, Privacy and IRBs."
Capital University Law Review: 72-91.
ICH (2011). "International Conference on Harmonisation: About ICH."
http://www.ich.org/about/faqs.html.
ICH(a) (1996). "International Conference on Harmonization: Good Clinical Practices."
http://www.ich.org/products/guidelines/efficacy/efficacy-single/article/good-clinical-
practice.html.
InstituteofMedicine (2001). "Preserving Public Trust: Accreditation and Human Research
Participant Protection Programs http://www.iom.edu/Reports/2001/Preserving-Public-Trust-
Accreditation-and-Human-Research-Participant-Protection-Programs.aspx."
Jenner, E. (1798). "An Inquiry into the Causes and Effects of the Variolae Vaccinae, or Cow-
Pox." http://www.bartleby.com/38/4/1.html.
Jenner, E. (1799). "Further Observations on the Variolae Vaccinae, or Cow-Pox."
http://www.bartleby.com/38/4/2.html.
Jenner, E. (1800). "A Continuation of Facts and Observations Relative to the Variolae Vaccinae,
or Cow-Pox." http://www.bartleby.com/38/4/3.html.
Johnson, M. O., L. L. Subak, J. S. Brown, K. A. Lee, M. D. Feldman and M. Phil (2010). "An
Innovative Program to Train Health Sciences Researchers to Be Effective Clinical and
Translational Research Mentors " Academic Medicine 85(3): 484-489.
Karapetrovic, S. and W. Willborn (2002). "Self-Audit of Process Performance." International
Journal of Quality and Reliability Management 19(1): 24-25.
Kleppinger, C. F. and L. K. Ball (2010). "Building Quality in Clinical Trials With Use of a
Quality Systems Approach." Clinical Infectious Diseases 51(Supplement 1): S111-S116.
Kuehn, B. (2009). "FDA Steps Up Efforts to Find, Remove Violators of Clinical Trial
Regulations." Journal of the American Medical Association 302(16): 1739-1741.
Lake, E. (2008). "Inside Investigator Meetings." Applied Clinical Trials June: 88-96.
Landray, M. J., C. Grandinetti, J. M. Kramer, B. W. Morrison, L. Ball and R. E. Sherman (2012).
"Clinical Trials: Rethinking How We Ensure Quality." Drug Information Journal 46(6): 657-660.
Largent, E. A., S. Joffe and F. G. Miller (2011). "Can Research and Care Be Ethically
Integrated?" Asian Bioethics Review 3(2): 100-117.
Larkin, M. E., G. M. Lorenzi, M. Bayless, P. A. Cleary, A. Barnie, E. Golden, S. Hitt and S.
Genuth (2012). "Evolution of the Study Coordinator Role: The 28-year Experience in Diabetes
Control and Complications Trial/Epidemiology of Diabetes Interventions and Complications
(DCCT/EDIC)." Clinical Trials 9: 418-425.
Lebacqz, K. (1980). "Clinical Trials: Some Ethical Issues." Controlled Clinical Trials 1: 29-36.
Marcarelli, M. (2008). "Gleaning Good Practices from Warning Letters." MDDI
Magazine http://www.mddionline.com November 11.
Marks, R. (2004). "Validating Electronic Source Data in Clinical Trials." Controlled Clinical
Trials 25: 437-446.
Martin, J. and D. Kasper (2000). "In Whose Best Interest? Breaching the Academic-Industrial
Wall." The New England Journal of Medicine 343(22): 1646-1649.
MayoClinic (2011). "http://ctsa.mayo.edu/resources/regulatory-support.html."
McCarthy, C. (1994). "Historical Background of Clinical Trials Involving Women and
Minorities." Academic Medicine 69(9).
Meeker-O'Connell, A. and L. Ball (2011). "Current Trends in FDA Inspections Assessing
Clinical Trial Quality: An Analysis of CDER's Experience." FDLI Update March/April(2).
Mehta, A. and S. Devine (2012). "Objective Internal Monitoring." Applied Clinical Trials April:
43-45.
Menikoff, J. (2010). "The Paradoxical Problem with Multiple-IRB Review." The New England
Journal of Medicine 363(17): 1591-1593.
Moen, R. and C. Norman (2010) "Circling Back: Clearing Up Myths about the Deming Cycle and
Seeing How it Keeps Evolving." Quality Progress, 22-28.
NationalHeartInstitute (1988). "Organization, Review and Administration of Cooperative Studies
(Greenberg Report): A Report from the Heart Special Project Committee to the National
Advisory Heart Council, May 1967." Controlled Clinical Trials 9: 137-148.
Nightingale, S. (1983). "The Food and Drug Administration's Role in the Protection of Human
Subjects." IRB: Ethics and Human Research 5(1): 6-9.
NIH (1998). Policy for Data and Safety Monitoring, http://grants.nih.gov/grants/guide/notice-
files/not98-084.html.
OHRP (2001). "Determination Letter (MPA) M-1025
http://www.hhs.gov/ohrp/detrm_letrs/sept01b.pdf." Office for Human Subjects Protections.
OHRP(a) (2001). "Determination Letter (MPA) M-1011
http://www.hhs.gov/ohrp/detrm_letrs/oct01a.pdf." Office for Human Research Protections.
OHRP(b) (2011). Office for Human Subjects Protections http://www.hhs.gov/ohrp/.
OnCore. (2012). "OnCore eClinical Solution:Improving Clinical Trial and Research
Management." Retrieved November 15, 2012.
Peltzman, S. (1973) "An Evaluation of Consumer Protection Legislation: The 1962 Drug
Amendments." The Journal of Political Economy 81, 1049-1091.
Pless, R. F. (2009). "EDC Project Success." Applied Clinical Trials April: 56-64.
Plsek, P. (1993). "Tutorial: Quality Improvement Project Models." Quality Management in
Healthcare 1(2): 69-81.
Probstfield, J. L. and R. L. Frye (2011). "Strategies for Recruitment and Retention of Participants
in Clinical Trials." JAMA 306(16): 1798-1799.
Rajkumar, V. (2004). "Thalidomide: Tragic Past and Promising Future." Mayo Clinic
Proceedings 79(7): 899-903.
Randal, J. (2001). "Examining IRBs: Are Review Boards Fulfilling Their Duties?" JNCI Journal
of the National Cancer Institute 93(19): 1440-1441.
Rattray, J. and M. C. Jones (2007). "Essential Elements of Questionnaire Design and
Development." Journal of Clinical Nursing 16: 234-243.
Reber, A. S., R. Allen and E. S. Reber, Eds. (2009). Penguin Dictionary of Psychology.
www.credoreference.com.
Richardson, H. S. and L. Belsky (2004). "The Ancillary-Care Responsibilities of Medical
Researchers: An Ethical Framework for Thinking About the Clinical Care that Researchers Owe
Their Subjects." The Hastings Center Report 34(1): 25-33.
Rickman, P. (1964). "Human Experimentation Code of Ethics World Medical Association."
British Medical Journal 18(2): 177.
Ronald, E., E. Dinnett, E. Tolmie and A. Gaw (2011). "Archiving Approach in the United
Kingdom." Applied Clinical Trials June: 46-50.
Seto, B. P. (2001). "History of Medical Ethics and Perspectives on Disparities in Minority
Recruitment and Involvement in Health Research." The American Journal of the Medical Sciences
322(5): 246-250.
Shalala, D. (2000) "Protecting Research Subjects - What Must Be Done." The New England
Journal of Medicine 343, 808-810.
Sofaer, N., C. Thiessen, S. Dorr Goold, J. Ballou, K. A. Getz, G. Koski, R. A. Krueger and J. S.
Weissman (2009). "Subjects' Views of Obligations to Ensure Post-Trial Access to Drugs, Care
and Information: Qualitative Results from the Experiences of Participants in Clinical Trials
(EPIC) Study." Journal of Medical Ethics 35(3): 183-188.
Space, S. (2007). "Changing Investigator Meeting Formats." Research Practitioner 8(3): 110-121.
Starren, J. B., P. R. O. Payne and D. R. Kaufman (2006). "Human Computer Interaction Issues in
Clinical Trials Management Systems." AMIA 2006 Symposium Proceedings: 1109.
Sweetman, E. A. and G. S. Doig (2011). "Failure to Report Protocol Violations in Clinical Trials:
A Threat to Internal Validity?" Trials 12(214).
Switula, D. (2000). "Principles of Good Clinical Practice (GCP) in Clinical Research." Journal of
Science and Engineering Ethics 2000(6): 71-77.
Tan, S. and A. Ahana (2010). "Walter Reed (1851-1902)." Singapore Medical Journal 51(5): 360-
361.
Temin, P. (1979). "The Origin of Compulsory Drug Prescription." Journal of Law and Economics
22(1): 91-105.
Torok, S. (1998). "Maker of the Miracle Mould." http://www.abc.net.a.
UniversityofFlorida (2012). REDCap www.ctsi.ufl.edu. www.ctsi.ufl.edu.
UniversityofMichigan (2012). REDCap www.michr.umich.edu. www.michr.umich.edu.
UniversityofPennsylvania (2011).
http://www.ctsaweb.org/uploadedfiles/06_04_Fromell_RegSuccess_2011-01-12.pdf.
UniversityofRochester (2012). Self Study Audit Tools
http://www.rochester.edu/ohsp/education/coordinators/studySelfAuditTools.html.
Vulcano, D. M. (2012). "CPI Certification as Predictor of Clinical Investigators' Regulatory
Compliance." Drug Information Journal 46(1): 84-87.
Wagner-Bohn, A., A. Ripkens-Reinhard, G. Benninger-Döring and J. Boos (2007).
"Implementing Good Clinical Practice in Two Noncommercial Phase II Studies in Children with
Cancer." Onkologie 30: 21-26.
Weiss, R. and S. Tuttle (2006) "Preparing for Clinical Trial Data Audits." Journal of Oncology
Practice 2, 157-159.
Wolf, D., S. Katz, M. Safdi, R. Sandler and J. Lewis (2005). "Site Organization and
Management." Inflammatory Bowel Diseases 11(Supplement 1): S29-S33.
Wolf, D. and P. O'Rourke (2002). "Ensuring Investigator Compliance and Improving Study Site
Performance: Implementing a Quality-Assurance Program in Academic Health Centers." Clinical
Researcher 2(5): 26-30.
WorldMedicalAssociation (1964). Declaration of Helsinki
http://www.wma.net/en/30publications/10policies/b3/.
APPENDIX A: SUMMARY OF FDA WARNING LETTERS
PIs 2005-2010. Deficiency categories: informed consent; IRB communication; inclusion/exclusion
criteria; protocol deviations; accounting for test article; CRF does not match source; failure to
supervise; IND not obtained; records retention; study team not identified on 1572; conflict of
interest; communication with sponsor.
a-07 X X X
b-09 X X X X X
b-09a X
b-09b X X X
b-09c X X X X X
b-05 X X X
c-08 X X X X
c-09 X X X
c-06 X X
c-07 X X X X X
c-09a X X X X X X X
c-10 X X
d-09 X X X
d-10 X X X X X X
d-09a X X X X
g-08 X X X
g-09 X X X
g-08a X X X X X X
g-06 X X X X X
g-09a X X
h-09 X X X X
h-09 X X X
h-08 X X X X X X
h-08a X X X X X X
j-05 X X X X X
l-06 X X X X X X
l-10 X X X
l-07 X X X X
l-09 X X X X X X X
l-10a X
m-09 X X X X
m-08 X X X X
n-10 X X
o-10 X X X
o-06 X X
p-09 X X X
p-10 X X X X
p-08 X X X
q-09 X X X X
r-07 X X X
s-06 X
s-10 X X X X
s-08 X X X X
s-10a X X X
s-09 X X
s-09a X X X
s-10b X X X X X
s-08a X X X
t-05 X X
v-07 X X X X X
w-09 X X
w-08 X X X
w-09a X X X X
z-09 X X X
APPENDIX B: AUDIT TOOL
139
140
141
142
143
144
145
APPENDIX C: IRB APPROVALS
APPENDIX D: INFORMATION LETTERS TO PI AND COORDINATOR
Dear Principal Investigator,
You are being asked to participate in a research study titled “Investigator-Sponsor Versus
Industry Sponsor Driven Clinical Trials: Differences in Management and Implications for
Compliance”. This study is being conducted to partially fulfill the requirements for my Doctoral
Degree in Regulatory Science. My faculty advisor is Dr. Frances Richmond; she can be reached
by calling (323) 442-3531.
You are being asked to participate in this study because you are the principal investigator of an
investigator-sponsored clinical trial in an academic medical center or teaching hospital. The aim
of this study is to explore the nature of the planning for and oversight of trials where the
responsibility for compliance rests with the single-site, faculty-investigator. The objectives of
the study are to provide information needed to guide policy development by the IRB, granting
and regulatory agencies, so that investigator-sponsor protocols have sufficient safeguards to
assure the safety of enrolled human subjects. The information collected by this study may also
help to guide educational initiatives to ensure individuals conducting clinical trials have the
resources they need to ensure compliance with Good Clinical Practices (GCPs).
No identifiable information will be collected about you, your study staff, the participants enrolled
in your study or the study itself. Findings from this study will be published in summary form in
my doctoral thesis and may be published in a professional journal. You will be provided a copy
of my completed thesis upon request. Individual findings from my review of your study will not
be shared with anyone or any entity except you, as the principal investigator, upon your
request.
If you choose to participate, I will make one visit of approximately four hours to your study site, where
I will review documents to which you grant access. I may be accompanied by my supervisor,
Frances Richmond. Allowing me access to your study documents will serve as your consent to
participate in this study.
Your participation in my study is truly appreciated. I can be reached on my cell phone (562) 413-
8957.
Ellen Whalen
Doctor of Regulatory Science Candidate
USC School of Pharmacy
Dear Study Coordinator,
You are being asked to participate in a research study titled “Investigator-Sponsor Versus
Industry Sponsor Driven Clinical Trials: Differences in Management and Implications for
Compliance”. This study is being conducted to partially fulfill the requirements for my Doctoral
Degree in Regulatory Science. My faculty advisor is Dr. Frances Richmond; she can be reached
by calling (323) 442-3531.
You are being asked to participate in this study because you are the coordinator of an
investigator-sponsored clinical trial in an academic medical center or teaching hospital, where the
principal investigator has agreed to participate in this study. The aim of my study is to explore
the nature of the planning for and oversight of trials where the responsibility for compliance rests
with the single-site, faculty-investigator. The objectives of the study are to provide information
needed to guide policy development by the IRB, granting and regulatory agencies, so that
investigator-sponsor protocols have sufficient safeguards to assure the safety of enrolled human
subjects. The information collected by this study may also help to guide educational initiatives to
ensure individuals conducting clinical trials have the resources they need to ensure compliance
with Good Clinical Practices (GCPs).
No identifiable information will be collected about you, the principal investigator, the participants
enrolled in your study or the study itself. Findings from this study will be published in summary
form in my doctoral thesis and may be published in a professional journal. Individual findings
from my review of your study will not be shared with anyone or any entity except the principal
investigator upon his/her request.
If you choose to participate, I will ask you a series of questions related to your role in the study
and I may ask you some questions to clarify some of the regulatory components of the study that
you coordinate. Choosing to answer my questions will serve as your consent to participate in this
study.
Your participation in my study is truly appreciated. I can be reached on my cell phone (562) 413-
8957.
Ellen Whalen
Doctor of Regulatory Science Candidate
USC School of Pharmacy
APPENDIX E: RAW DATA TABLES
When 1 file was complete and another not complete = absent
DOCUMENTATION (All Studies)
Phase | Criterion | N | Compliant | Partial | Non | n/a | %
Plan | Data sources are defined | 21 | 18 | 1 | 2 | 0 | 86%
Plan | CRF contains space for each required data element | 20 | 18 | 0 | 2 | 1 | 90%
Plan | Source documents are available | 21 | 18 | 3 | 0 | 0 | 86%
Plan | Method for audit of study documents is defined | 21 | 13 | 2 | 6 | 0 | 62%
Plan | (subtotal) | 83 | 67 | 6 | 10 | 1 | 81%
Do | Data recorded on CRF is consistent with source documents | 19 | 16 | 0 | 3 | 2 | 84%
Do | CRF is completed for each study visit | 20 | 15 | 0 | 5 | 1 | 75%
Do | Source documents are protected (kept confidential) | 21 | 21 | 0 | 0 | 0 | 100%
Do | Audits are documented | 21 | 3 | 9 | 9 | 0 | 14%
Do | (subtotal) | 81 | 55 | 9 | 17 | 3 | 68%
Check | Treating or PI signature is present on CRFs | 18 | 7 | 0 | 11 | 3 | 39%
Check | Staff articulate audit process consistent with protocol requirements | 21 | 21 | 0 | 0 | 0 | 100%
Check | Treating/PI signature is present on safety data results (labs, imaging, other) | 18 | 4 | 0 | 14 | 3 | 22%
Check | Data Safety Monitoring Board meetings are held | 30 | 6 | 10 | 14 | 0 | 20%
Check | (subtotal) | 87 | 38 | 10 | 39 | 6 | 44%
Act | Processes reviewed and revised to achieve compliance when out of compliance | 20 | 18 | 2 | 0 | 1 | 90%
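Across these tables, the percentage column appears to equal the Compliant count divided by N (the number of files to which the criterion applied, i.e., excluding those scored n/a), rounded to the nearest whole percent. A minimal sketch of that arithmetic follows; the function name `compliance_rate` is illustrative and does not appear in the thesis.

```python
def compliance_rate(n, compliant):
    """Percent of applicable files scored compliant, rounded to a whole percent.

    n is the number of files to which the criterion applied (n/a files excluded);
    compliant is the number of those files scored fully compliant.
    """
    if n == 0:
        return None  # criterion applied to no files
    return round(100 * compliant / n)

# Spot checks against rows of the DOCUMENTATION (All Studies) table above:
print(compliance_rate(21, 18))  # 86, matching "Data sources are defined"
print(compliance_rate(21, 13))  # 62, matching "Method for audit of study documents is defined"
```

Note that partial scores do not count toward the numerator under this reading, which is consistent with rows that have many partial scores (for example, "Audits are documented") showing low percentages.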
DOCUMENTATION, by prior industry experience
Phase | Criterion | Experienced (N, Compliant, Partial, Non, n/a, %) | Inexperienced (N, Compliant, Partial, Non, n/a, %)
Plan | Data sources are defined | 8, 7, 0, 1, 0, 88% | 13, 11, 1, 1, 0, 85%
Plan | CRF contains space for each required data element | 7, 7, 0, 0, 1, 100% | 13, 11, 0, 2, 0, 85%
Plan | Source documents are available | 8, 7, 1, 0, 0, 88% | 13, 11, 2, 0, 0, 85%
Plan | Method for audit of study documents is defined | 8, 5, 0, 3, 0, 63% | 13, 8, 2, 3, 0, 62%
Plan | (subtotal) | 31, 26, 1, 4, 1, 84% | 52, 41, 5, 6, 0, 79%
Do | Data recorded on CRF is consistent with source documents | 6, 6, 0, 0, 2, 100% | 13, 10, 0, 3, 0, 77%
Do | CRF is completed for each study visit | 7, 6, 0, 1, 1, 86% | 13, 9, 0, 4, 0, 69%
Do | Source documents are protected (kept confidential) | 8, 8, 0, 0, 0, 100% | 13, 13, 0, 0, 0, 100%
Do | Audits are documented | 8, 3, 1, 4, 0, 38% | 13, 0, 8, 5, 0, 0%
Do | (subtotal) | 29, 23, 1, 5, 3, 79% | 52, 32, 8, 12, 0, 62%
Check | Treating or PI signature is present on CRFs | 7, 4, 0, 3, 1, 57% | 11, 3, 0, 8, 2, 27%
Check | Staff articulate audit process consistent with protocol requirements | 8, 8, 0, 0, 0, 100% | 13, 13, 0, 0, 0, 100%
Check | Treating/PI signature is present on safety data results (labs, imaging, other) | 5, 1, 0, 4, 3, 20% | 13, 3, 0, 10, 0, 23%
Check | Data Safety Monitoring Board meetings are held | 8, 3, 1, 4, 0, 38% | 22, 3, 9, 10, 0, 14%
Check | (subtotal) | 28, 16, 1, 11, 4, 57% | 59, 22, 9, 28, 2, 37%
Act | Processes reviewed and revised to achieve compliance when out of compliance | 7, 7, 0, 0, 1, 100% | 13, 11, 2, 0, 0, 85%
INFORMED CONSENT (All Studies)
Phase | Criterion | N | Compliant | Partial | Non | n/a | %
Plan | Contents of the informed consent document are defined | 21 | 9 | 12 | 0 | 0 | 43%
Plan | Who, what, when, where of informed consent process is defined | 21 | 3 | 8 | 10 | 0 | 14%
Plan | Plan to report significant new findings to IRB is delineated | 21 | 19 | 2 | 0 | 0 | 90%
Plan | Provisions are made for vulnerable populations, if to be included | 21 | 20 | 0 | 1 | 0 | 95%
Plan | (subtotal) | 84 | 51 | 22 | 11 | 0 | 61%
Do | Consent document is approved by IRB (current consent in use) | 21 | 21 | 0 | 0 | 0 | 100%
Do | Informed consent is obtained prior to the first study activity | 20 | 20 | 0 | 0 | 1 | 100%
Do | Consent document is signed and dated by participant and PI (or authorized person) | 20 | 15 | 0 | 5 | 1 | 75%
Do | Consent process considers the special needs of vulnerable populations | 21 | 21 | 0 | 0 | 0 | 100%
Do | (subtotal) | 82 | 77 | 0 | 5 | 2 | 94%
Check | Consent document has been assessed by others to match protocol requirements | 21 | 21 | 0 | 0 | 0 | 100%
Check | Staff describe the consent process consistent with protocol requirements | 21 | 21 | 0 | 0 | 0 | 100%
Check | Current consent document is in use | 21 | 21 | 0 | 0 | 0 | 100%
Check | Consent process observed by others and found to be compliant | 21 | 13 | 0 | 8 | 0 | 62%
Check | (subtotal) | 84 | 76 | 0 | 8 | 0 | 90%
Act | Processes reviewed and revised to achieve compliance when out of compliance | 21 | 19 | 2 | 0 | 0 | 90%
INFORMED CONSENT, by prior industry experience
Phase | Criterion | Experienced (N, Compliant, Partial, Non, n/a, %) | Inexperienced (N, Compliant, Partial, Non, n/a, %)
Plan | Contents of the informed consent document are defined | 8, 6, 2, 0, 0, 75% | 13, 3, 10, 0, 0, 23%
Plan | Who, what, when, where of informed consent process is defined | 8, 3, 3, 2, 0, 38% | 13, 0, 5, 8, 0, 0%
Plan | Plan to report significant new findings to IRB is delineated | 8, 8, 0, 0, 0, 100% | 13, 11, 2, 0, 0, 85%
Plan | Provisions are made for vulnerable populations, if to be included | 8, 7, 0, 1, 0, 88% | 13, 13, 0, 0, 0, 100%
Plan | (subtotal) | 32, 24, 5, 3, 0, 75% | 52, 27, 17, 8, 0, 52%
Do | Consent document is approved by IRB (current consent in use) | 8, 8, 0, 0, 0, 100% | 13, 13, 0, 0, 0, 100%
Do | Informed consent is obtained prior to the first study activity | 7, 7, 0, 0, 1, 100% | 13, 13, 0, 0, 0, 100%
Do | Consent document is signed and dated by participant and PI (or authorized person) | 7, 6, 0, 1, 1, 86% | 13, 9, 0, 4, 0, 69%
Do | Consent process considers the special needs of vulnerable populations | 8, 8, 0, 0, 0, 100% | 13, 13, 0, 0, 0, 100%
Do | (subtotal) | 30, 29, 0, 1, 2, 97% | 52, 48, 0, 4, 0, 92%
Check | Consent document has been assessed by others to match protocol requirements | 8, 8, 0, 0, 0, 100% | 13, 13, 0, 0, 0, 100%
Check | Staff describe the consent process consistent with protocol requirements | 8, 8, 0, 0, 0, 100% | 13, 13, 0, 0, 0, 100%
Check | Current consent document is in use | 8, 8, 0, 0, 0, 100% | 13, 13, 0, 0, 0, 100%
Check | Consent process observed by others and found to be compliant | 8, 4, 0, 4, 0, 50% | 13, 9, 0, 4, 0, 69%
Check | (subtotal) | 32, 28, 0, 4, 0, 88% | 52, 48, 0, 4, 0, 92%
Act | Processes reviewed and revised to achieve compliance when out of compliance | 8, 8, 0, 0, 0, 100% | 13, 11, 2, 0, 0, 85%
IRB/FDA (All Studies)
Phase | Criterion | N | Compliant | Partial | Non | n/a | %
Plan | Protocol defines required interactions with the IRB/FDA | 21 | 19 | 2 | 0 | 0 | 90%
Plan | Protocol defines timing of required interactions with IRB/FDA | 21 | 19 | 2 | 0 | 0 | 90%
Plan | Consideration of need for an IND/IDE is documented | 18 | 10 | 0 | 8 | 3 | 56%
Plan | Consistency exists between the sponsor contract and consent document | 11 | 6 | 0 | 5 | 10 | 55%
Plan | (subtotal) | 71 | 54 | 4 | 13 | 13 | 76%
Do | IRB approval is documented | 21 | 21 | 0 | 0 | 0 | 100%
Do | IND/IDE approval is documented (when required) | 3 | 3 | 0 | 0 | 18 | 100%
Do | Serious adverse events are reported to IRB/FDA | 17 | 17 | 0 | 0 | 4 | 100%
Do | Protocol deviations are reported to IRB/FDA | 19 | 6 | 6 | 7 | 2 | 32%
Do | Protocol revisions are approved by IRB/FDA | 20 | 17 | 0 | 3 | 1 | 85%
Do | (subtotal) | 80 | 64 | 6 | 10 | 25 | 80%
Check | Staff describes processes consistent with protocol requirements | 21 | 21 | 0 | 0 | 0 | 100%
Check | Monitoring has occurred | 20 | 13 | 2 | 5 | 1 | 65%
Check | Annual reports have been submitted to IRB/FDA | 16 | 16 | 0 | 0 | 5 | 100%
Check | IND/IDE safety reports have been submitted (when required) | 4 | 2 | 2 | 0 | 17 | 50%
Check | (subtotal) | 61 | 52 | 4 | 5 | 23 | 85%
Act | Processes reviewed and revised to achieve compliance when out of compliance | 21 | 21 | 0 | 0 | 0 | 100%
IRB/FDA, by prior industry experience
Phase | Criterion | Experienced (N, Compliant, Partial, Non, n/a, %) | Inexperienced (N, Compliant, Partial, Non, n/a, %)
Plan | Protocol defines required interactions with the IRB/FDA | 8, 8, 0, 0, 0, 100% | 13, 11, 2, 0, 0, 85%
Plan | Protocol defines timing of required interactions with IRB/FDA | 8, 8, 0, 0, 0, 100% | 13, 11, 2, 0, 0, 85%
Plan | Consideration of need for an IND/IDE is documented | 5, 4, 0, 1, 3, 80% | 13, 6, 0, 7, 0, 46%
Plan | Consistency exists between the sponsor contract and consent document | 4, 3, 0, 1, 4, 75% | 7, 3, 0, 4, 6, 43%
Plan | (subtotal) | 25, 23, 0, 2, 7, 92% | 46, 31, 4, 11, 6, 67%
Do | IRB approval is documented | 8, 8, 0, 0, 0, 100% | 13, 13, 0, 0, 0, 100%
Do | IND/IDE approval is documented (when required) | 2, 1, 0, 0, 7, 50% | 1, 3, 0, 0, 10, 300%
Do | Serious adverse events are reported to IRB/FDA | 6, 6, 0, 0, 2, 100% | 11, 11, 0, 0, 2, 100%
Do | Protocol deviations are reported to IRB/FDA | 6, 3, 1, 2, 2, 50% | 13, 3, 5, 5, 0, 23%
Do | Protocol revisions are approved by IRB/FDA | 7, 6, 0, 1, 1, 86% | 13, 11, 0, 2, 0, 85%
Do | (subtotal) | 29, 24, 1, 3, 12, 83% | 51, 41, 5, 7, 12, 80%
Check | Staff describes processes consistent with protocol requirements | 8, 8, 0, 0, 0, 100% | 13, 13, 0, 0, 0, 100%
Check | Monitoring has occurred | 7, 5, 0, 2, 1, 71% | 13, 8, 2, 3, 0, 62%
Check | Annual reports have been submitted to IRB/FDA | 3, 3, 0, 0, 5, 100% | 13, 13, 0, 0, 0, 100%
Check | IND/IDE safety reports have been submitted (when required) | 1, 1, 0, 0, 7, 100% | 3, 1, 2, 0, 10, 33%
Check | (subtotal) | 19, 17, 0, 2, 13, 89% | 42, 35, 4, 3, 10, 83%
Act | Processes reviewed and revised to achieve compliance when out of compliance | 8, 8, 0, 0, 0, 100% | 13, 13, 0, 0, 0, 100%
TEST ARTICLE (All Studies)
Phase | Criterion | N | Compliant | Partial | Non | n/a | %
Plan | Protocol (SOP) defines documentation required related to test articles | 18 | 5 | 10 | 3 | 3 | 28%
Plan | Protocol (SOP) defines the source and disposition (including waste) of test articles | 18 | 5 | 10 | 3 | 3 | 28%
Plan | Training for persons who will administer the test articles is documented | 18 | 15 | 3 | 0 | 3 | 83%
Plan | (subtotal) | 54 | 25 | 23 | 6 | 9 | 46%
Do | Forms/logs have been developed to record tracking of test articles | 18 | 14 | 4 | 0 | 3 | 78%
Do | Test articles are stored in locked and/or limited access area | 17 | 17 | 0 | 0 | 4 | 100%
Do | Qualified/trained staff administer the test articles | 18 | 16 | 2 | 0 | 3 | 89%
Do | (subtotal) | 53 | 47 | 6 | 0 | 10 | 89%
Check | Forms/logs are complete with evidence of second party audit | 17 | 13 | 4 | 0 | 4 | 76%
Check | All test articles have been tracked | 18 | 14 | 2 | 2 | 3 | 78%
Check | Administration of test articles by qualified staff is documented | 17 | 15 | 2 | 0 | 4 | 88%
Check | Destruction of drug/device to be wasted is witnessed | 14 | 12 | 1 | 1 | 7 | 86%
Check | (subtotal) | 66 | 54 | 9 | 3 | 18 | 82%
Act | Processes reviewed and revised to achieve compliance when out of compliance | 17 | 15 | 2 | 0 | 4 | 88%
TEST ARTICLE, by prior industry experience
Phase | Criterion | Experienced (N, Compliant, Partial, Non, n/a, %) | Inexperienced (N, Compliant, Partial, Non, n/a, %)
Plan | Protocol (SOP) defines documentation required related to test articles | 5, 3, 1, 1, 3, 60% | 13, 2, 9, 2, 0, 15%
Plan | Protocol (SOP) defines the source and disposition (including waste) of test articles | 5, 3, 1, 1, 3, 60% | 13, 2, 9, 2, 0, 15%
Plan | Training for persons who will administer the test articles is documented | 5, 5, 0, 0, 3, 100% | 13, 10, 3, 0, 0, 77%
Plan | (subtotal) | 15, 11, 2, 2, 9, 73% | 39, 14, 21, 4, 0, 36%
Do | Forms/logs have been developed to record tracking of test articles | 5, 5, 0, 0, 3, 100% | 13, 9, 4, 0, 0, 69%
Do | Test articles are stored in locked and/or limited access area | 5, 5, 0, 0, 3, 100% | 12, 12, 0, 0, 1, 100%
Do | Qualified/trained staff administer the test articles | 5, 5, 0, 0, 3, 100% | 13, 11, 2, 0, 0, 85%
Do | (subtotal) | 15, 15, 0, 0, 9, 100% | 38, 32, 6, 0, 1, 84%
Check | Forms/logs are complete with evidence of second party audit | 4, 4, 0, 0, 4, 100% | 13, 9, 4, 0, 0, 69%
Check | All test articles have been tracked | 5, 5, 0, 0, 3, 100% | 13, 9, 2, 2, 0, 69%
Check | Administration of test articles by qualified staff is documented | 4, 4, 0, 0, 4, 100% | 13, 11, 2, 0, 0, 85%
Check | Destruction of drug/device to be wasted is witnessed | 4, 3, 0, 1, 4, 75% | 10, 9, 1, 0, 3, 90%
Check | (subtotal) | 17, 16, 0, 1, 15, 94% | 49, 38, 9, 2, 3, 78%
Act | Processes reviewed and revised to achieve compliance when out of compliance | 4, 4, 0, 0, 4, 100% | 13, 11, 2, 0, 0, 85%
RECRUITMENT (All Studies)
Phase | Criterion | N | Compliant | Partial | Non | n/a | %
Plan | Protocol defines inclusion/exclusion criteria | 21 | 21 | 0 | 0 | 0 | 100%
Plan | Protocol defines methods of recruitment | 21 | 21 | 0 | 0 | 0 | 100%
Plan | Care transition at end of study is planned | 18 | 3 | 4 | 11 | 3 | 17%
Plan | (subtotal) | 60 | 45 | 4 | 11 | 3 | 75%
Do | Subjects recruited meet inclusion/exclusion criteria | 20 | 20 | 0 | 0 | 1 | 100%
Do | Recruitment methods used are consistent with defined plan | 21 | 21 | 0 | 0 | 0 | 100%
Do | Evidence that care transition occurred at end of study | 18 | 7 | 0 | 11 | 3 | 39%
Do | (subtotal) | 59 | 48 | 0 | 11 | 4 | 81%
Check | Compliance with inclusion/exclusion criteria is verified | 20 | 13 | 0 | 7 | 1 | 65%
Check | Screening log that documents all individuals contacted is used | 20 | 18 | 0 | 2 | 1 | 90%
Check | Telephonic system was used for verification of eligibility | 21 | 0 | 0 | 21 | 0 | 0%
Check | (subtotal) | 61 | 31 | 0 | 30 | 2 | 51%
Act | Processes reviewed and revised to achieve compliance when out of compliance | 21 | 19 | 2 | 0 | 0 | 90%
RECRUITMENT, by prior industry experience
Phase | Criterion | Experienced (N, Compliant, Partial, Non, n/a, %) | Inexperienced (N, Compliant, Partial, Non, n/a, %)
Plan | Protocol defines inclusion/exclusion criteria | 8, 8, 0, 0, 0, 100% | 13, 13, 0, 0, 0, 100%
Plan | Protocol defines methods of recruitment | 8, 8, 0, 0, 0, 100% | 13, 13, 0, 0, 0, 100%
Plan | Care transition at end of study is planned | 5, 0, 2, 3, 3, 0% | 13, 3, 2, 8, 0, 23%
Plan | (subtotal) | 21, 16, 2, 3, 3, 76% | 39, 29, 2, 8, 0, 74%
Do | Subjects recruited meet inclusion/exclusion criteria | 7, 7, 0, 0, 1, 100% | 13, 13, 0, 0, 0, 100%
Do | Recruitment methods used are consistent with defined plan | 8, 8, 0, 0, 0, 100% | 13, 13, 0, 0, 0, 100%
Do | Evidence that care transition occurred at end of study | 5, 2, 0, 3, 3, 40% | 13, 5, 0, 8, 0, 38%
Do | (subtotal) | 20, 17, 0, 3, 4, 85% | 39, 31, 0, 8, 0, 79%
Check | Compliance with inclusion/exclusion criteria is verified | 7, 4, 0, 3, 1, 57% | 13, 9, 0, 4, 0, 69%
Check | Screening log that documents all individuals contacted is used | 7, 7, 0, 0, 1, 100% | 13, 11, 0, 2, 0, 85%
Check | Telephonic system was used for verification of eligibility | 8, 0, 0, 8, 0, 0% | 13, 0, 0, 13, 0, 0%
Check | (subtotal) | 22, 11, 0, 11, 2, 50% | 39, 20, 0, 19, 0, 51%
Act | Processes reviewed and revised to achieve compliance when out of compliance | 8, 8, 0, 0, 0, 100% | 13, 11, 2, 0, 0, 85%
STAFFING (All Studies)
Phase | Criterion | N | Compliant | Partial | Absent | n/a | %
Plan | Defines qualification/credentials required | 21 | 4 | 17 | 0 | 0 | 19%
Plan | Defines required training | 19 | 2 | 15 | 2 | 2 | 11%
Plan | Defines specific processes to be delegated by PI to others | 20 | 4 | 11 | 5 | 1 | 20%
Plan | (subtotal) | 60 | 10 | 43 | 7 | 3 | 17%
Do | PI, sub-investigators and study staff identified to IRB/FDA | 21 | 19 | 2 | 0 | 0 | 90%
Do | Participants are educated by qualified personnel | 21 | 19 | 2 | 0 | 0 | 90%
Do | Study staff state they perform duties which they have been delegated and trained | 21 | 21 | 0 | 0 | 0 | 100%
Do | (subtotal) | 63 | 59 | 4 | 0 | 0 | 94%
Check | Evidence of training as required by protocol | 21 | 16 | 5 | 0 | 0 | 76%
Check | Staff verbalize process consistent with protocol requirements | 21 | 21 | 0 | 0 | 0 | 100%
Check | Staff verbalize their duties consistent with delegation | 21 | 21 | 0 | 0 | 0 | 100%
Check | (subtotal) | 63 | 58 | 5 | 0 | 0 | 92%
Act | Processes reviewed and revised to achieve compliance when out of compliance | 21 | 19 | 2 | 0 | 0 | 90%
STAFFING, by prior industry experience
Phase | Criterion | Experienced (N, Compliant, Partial, Absent, n/a, %) | Inexperienced (N, Compliant, Partial, Absent, n/a, %)
Plan | Defines qualification/credentials required | 8, 3, 5, 0, 0, 38% | 13, 1, 12, 0, 0, 8%
Plan | Defines required training | 6, 2, 4, 0, 2, 33% | 13, 0, 11, 2, 0, 0%
Plan | Defines specific processes to be delegated by PI to others | 7, 2, 4, 1, 1, 29% | 13, 2, 7, 4, 0, 15%
Plan | (subtotal) | 21, 7, 13, 1, 3, 33% | 39, 3, 30, 6, 0, 8%
Do | PI, sub-investigators and study staff identified to IRB/FDA | 8, 8, 0, 0, 0, 100% | 13, 11, 2, 0, 0, 85%
Do | Participants are educated by qualified personnel | 8, 8, 0, 0, 0, 100% | 13, 11, 2, 0, 0, 85%
Do | Study staff state they perform duties which they have been delegated and trained | 8, 8, 0, 0, 0, 100% | 13, 13, 0, 0, 0, 100%
Do | (subtotal) | 24, 24, 0, 0, 0, 100% | 39, 35, 4, 0, 0, 90%
Check | Evidence of training as required by protocol | 8, 6, 2, 0, 0, 75% | 13, 10, 3, 0, 0, 77%
Check | Staff verbalize process consistent with protocol requirements | 8, 8, 0, 0, 0, 100% | 13, 13, 0, 0, 0, 100%
Check | Staff verbalize their duties consistent with delegation | 8, 8, 0, 0, 0, 100% | 13, 13, 0, 0, 0, 100%
Check | (subtotal) | 24, 22, 2, 0, 0, 92% | 39, 36, 3, 0, 0, 92%
Act | Processes reviewed and revised to achieve compliance when out of compliance | 8, 8, 0, 0, 0, 100% | 13, 11, 2, 0, 0, 85%
Asset Metadata
Creator: Whalen, Ellen R. (author)
Core Title: Clinical trials driven by investigator-sponsors: GCP compliance with or without previous industry sponsorship
School: School of Pharmacy
Degree: Doctor of Regulatory Science
Degree Program: Regulatory Science
Publication Date: 02/07/2013
Defense Date: 01/23/2013
Publisher: University of Southern California (original); University of Southern California. Libraries (digital)
Tags: clinical research quality control, clinical trials, GCP compliance, good clinical practices, OAI-PMH Harvest, PDCA cycle, sponsor-investigators
Language: English
Contributor: Electronically uploaded by the author (provenance)
Advisor: Richmond, Frances J. (committee chair); Baron, Melvin F. (committee member); Pacifici, Eunjoo (committee member); Welsh, Mickie (committee member)
Creator Email: ellen.whalen@med.usc.edu, ewhalen@alumni.usc.edu
Permanent Link (DOI): https://doi.org/10.25549/usctheses-c3-218930
Unique Identifier: UC11293956
Identifier: usctheses-c3-218930 (legacy record id)
Legacy Identifier: etd-WhalenElle-1429.pdf
Dmrecord: 218930
Document Type: Dissertation
Rights: Whalen, Ellen R.
Type: texts
Source: University of Southern California (contributing entity); University of Southern California Dissertations and Theses (collection)
Access Conditions: The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the a...
Repository Name: University of Southern California Digital Library
Repository Location: USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA