COMPUTERIZED SIMULATION IN CLINICAL TRIALS: A SURVEY ANALYSIS OF
INDUSTRY PROGRESS
by
John Hartigan
A Dissertation Presented to the
FACULTY OF THE USC SCHOOL OF PHARMACY
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF REGULATORY SCIENCE
December 2018
Copyright 2018 John Hartigan
ACKNOWLEDGEMENTS
I would like to thank all of those individuals whose support and encouragement guided
me throughout my long educational journey.
I would like to thank my dissertation supervisor, Michael Jamieson, DRSc, for his
encouragement and guidance throughout the dissertation process. Many thanks to Dr.
Frances Richmond, PhD, department chair, for all of her direction and patience in
reviewing the many iterations of this dissertation. Also, special thanks to Dr. Klaus
Romero, PhD, and the Critical Path Institute for insights and access to the CAMD
Alzheimer’s Disease Simulator that was reviewed as part of this study. I would also like
to thank my dissertation committee members, Eunjoo Pacifici, Pharm D, PhD, Nancy
Smerkanich, DRSc, and Stan Louie, PhD for the valuable feedback they have provided
throughout the process. Finally, I would like to thank the other students in the 2014
Doctoral Cohort and the staff of the Regulatory Science program for their camaraderie
and support over the last four years.
TABLE OF CONTENTS

TABLE OF CONTENTS
LIST OF TABLES
LIST OF FIGURES
ABSTRACT
CHAPTER 1. OVERVIEW
  1.1 Introduction
  1.2 Statement of the problem
  1.3 Purpose of the Study
  1.4 Importance of the Study
  1.5 Limitations, Delimitations, Assumptions
    1.5.1 Limitations
    1.5.2 Delimitations
  1.6 Organization of Thesis
  1.7 Definitions
CHAPTER 2. LITERATURE REVIEW
  2.1 Methodological Approach to the Literature Review
  2.2 Evolution of Data Analysis Supporting Clinical Trials
    2.2.1 Challenges of Clinical Trial Execution
    2.2.2 The Impact of Increasing Trial Sophistication
    2.2.3 Data-Driven Metrics: Cost-Driver or Cost-Reduction Agent?
    2.2.4 Logistical Challenges Impeding Effective Simulation
  2.3 Evolution of Regulatory Approaches in the US
    2.3.1 FDA’s Challenges and Opportunities Report - March 2004
    2.3.2 FDA’s Early Electronic Database Tools: Medwatch
    2.3.3 Investing in Clinical Trial Simulation
    2.3.4 Simulation Tools Development – Clinical Disease Simulator
  2.4 Other Initiatives - CDRH
  2.5 Comparative Regulatory Approaches in Europe
  2.6 Industry Perspectives
  2.7 Assessment Frameworks
    2.7.1 Kingdon's Three-Streams Model
    2.7.2 Redington’s Issue Framing Approach
    2.7.3 Fixsen’s Core Implementation Components
  2.8 Approach
CHAPTER 3. METHODOLOGY
  3.1 Introduction
  3.2 Survey Development
  3.3 Focus Group and Survey Finalization
  3.4 Survey Administration and Analysis
CHAPTER 4. RESULTS
  4.1 Focus Group Results
  4.2 Survey Results
    4.2.1 General Information About Survey Respondents
    4.2.2 Policy Framing Questions - Exploration
    4.2.3 Policy Framing Questions - Installation
    4.2.4 Policy Framing Questions - Implementation
    4.2.5 Policy Framing Questions - Sustainability
CHAPTER 5. DISCUSSION
  5.1 Methodological Considerations
  5.2 Consideration of Results
    5.2.1 Exploring the Use of Clinical Trial Simulation
    5.2.2 Installation
    5.2.3 Implementation
    5.2.4 Sustainability
    5.2.5 Specific Fit-for-Purpose Tools
  5.3 Conclusions and Future Considerations
APPENDIX A. SURVEY QUESTIONS – FINAL VERSION
APPENDIX B. SURVEY QUESTIONS – DRAFT VERSION
APPENDIX C. SURVEY RESULTS AND REPORTS
APPENDIX D. VIRTUAL FAMILY MODELS
APPENDIX E. 2010 CRITICAL PATH FUNDING
APPENDIX F. EMA FRAMEWORK
REFERENCES
LIST OF TABLES

Table 1: Example of Clinical Study Reports for a Single Drug Submission
Table 2: CDISC – Standards for Clinical Data
Table 3: Three Dimensions of the Critical Path
Table 4: 2006 Critical Path Initiative Opportunities for Streamlining Clinical Trials (Topic 2)
Table 5: 2006 CPI Opportunities for Harnessing Bioinformatics (Topic 3)
Table 6: Demographic fields for CAMD’s Clinical Trial Simulation Tool for Alzheimer’s Disease
Table 7: Comparison of Drug Development Tool (DDT) Definitions and Terms vs Medical Device Development Tool (MDDT)
Table 8: Recommendations to meet regulator's expectations
Table 9: Approximate Breakdown of Survey Questions
Table 10: Focus Group Participants
Table 11: Virtual Family Attributes
Table 12: Summary of FY 2010 Critical Path Initiative Obligations
Table 13: Center for Drug Evaluation and Research (CDER) FY 2010 Funding Summary
LIST OF FIGURES

Figure 1: Timeline of Clinical Data Requirements
Figure 2: Drug Research and Development Costs versus Submissions
Figure 3: The Drug Discovery, Development, and Review Process
Figure 4: Overall Trend in R&D Efficiency
Figure 5: Plot of Moore’s Law
Figure 6: Timeline of FDA Regulatory Milestones related to Clinical Simulation
Figure 7: The Critical Path for Medical Product Development
Figure 8: Three Dimensions of the Critical Path
Figure 9: Research Support for Product Development
Figure 10: Respondents in Different Product Space / Industry Sectors (Q6)
Figure 11: Size of Organizations Represented by Respondents (Q5)
Figure 12: Job Level of Respondents (Q2)
Figure 13: Departmental Area of Primary Activity (Q3)
Figure 14: Prevalence of Computer Simulations in Phase 2 and/or 3 Clinical Studies in the Surveyed Organizations (Q8)
Figure 15: Cross-tabulation: Use of computer simulations (Q8) according to size of the organization (Q5)
Figure 16: Cross-tabulation: Nature of resources assigned to CTS (Q13) by companies using or not using simulations at present (Q8)
Figure 17: Types of Computer Simulations Considered in Clinical Trials (Q9)
Figure 18: Resources Used when Preparing to Incorporate Computer Simulations in Clinical Trials (Q10)
Figure 19: Challenges When Considering the Use of Computer Simulations in Clinical Trials (Q11)
Figure 20: Types of Resources Assigned to the Use of Computer Simulations in Clinical Trials (Q13)
Figure 21: Use of the Alzheimer's Disease Simulator amongst Those Familiar with the Tool (Q15)
Figure 22: Cross-tabulation: (Q14) related to (Q19)
Figure 23: Cross-tabulation: (Q5) related to (Q14 and Q19)
Figure 24: Categories of Simulation Tools Used in Clinical Programs (Q21)
Figure 25: Important Factors When Considering Simulation in Clinical Programs (Q22)
Figure 26: Satisfaction with Activities during Exploration or Initial Implementation of Simulation in Clinical Programs (Q23)
Figure 27: Use of Computer Simulations in Regulatory Submissions (Q24)
Figure 28: Cross-tabulation: (Q24) related to (Q6)
Figure 29: FDA Receptivity toward Computerized Simulation (Q25)
Figure 30: Cross-tabulation: The number of submissions included as part of NDA/BLA (Q24) related to FDA’s receptivity toward simulated results (Q25)
Figure 31: Ease and Value of Implementing Computerized Simulation (Q26)
Figure 32: FDA’s Helpfulness (Q27)
Figure 33: Ability to Gauge the Time to Complete Simulations (Q28)
Figure 34: The Most Difficult Elements when Incorporating Simulation into Clinical Programs (Q29)
Figure 35: Ease of Implementing Computerized Simulations (Q31)
Figure 36: Regulatory reporting of Computerized Simulation Results (Q32)
Figure 37: Cross-tabulation: (Q32) related to (Q29)
Figure 38: Regulatory Dissonance in Reviewing Computerized Simulations (Q33)
Figure 39: Virtual Family
Figure 40: EMA Framework for M&S in Regulatory Review
Figure 41: EMA – Continuum of Learn/Confirm/Predict for using M&S
Figure 42: EMA – Types of M&S documentation reviewed
ABSTRACT
Computer simulations are emerging as a powerful tool to augment the data collected in
traditional clinical trials. However, it is not clear to what extent industry is using such
approaches to supplement the evidence used in regulatory submissions to support safety
and efficacy claims. It is further not clear whether regulatory policies and other forms
of guidance limit the adoption of such methods. This study uses survey methods
grounded in an implementation framework to explore the current uses and challenges
associated with clinical trial simulations (CTS). Areas of focus included, but were not
limited to, the uses of CTS, types of simulation tools used, whether CTS results had been
included in regulatory submissions, and challenges cited by practitioners. Data analysis
was conducted on responses from large and small companies.
Survey findings indicate that neither the availability of regulatory policies and guidance
documents related to the use of computerized clinical trial simulations nor the level of
agency expertise and advice was the greatest obstacle to wider adoption. Rather, difficulty
in assembling teams with the expertise needed to execute clinical simulations along with
differences in regulatory review process across branches or regions were most often cited
as challenges. Results suggest that the adoption of novel simulation tools will depend not
only on the power of the tools themselves but on the availability of trained personnel, and
point to the need for better educational opportunities related to pharmacometrics and
computer simulations.
CHAPTER 1. OVERVIEW
1.1 Introduction
Much has been written about the challenges faced by regulatory agencies when trying to adapt to the
rapid technological advances typical of science today. Nowhere is this struggle more apparent
than in the way that computerized modeling, simulation, and data-mining techniques are used
to develop medical products. Today, most recommended testing strategies for drugs and
devices still rely on in-situ and in-vivo testing strategies introduced decades ago. Although
these approaches have been effective, they can also be expensive, inefficient, and sometimes
even wrong. Thus, many have looked to alternative computer-based methods that promise to
streamline and even improve the difficult task of assessing the safety and efficacy of new
biomedical products intended for human use (Viceconti et al., 2015).
It is not easy, however, to implement new methodologies in a highly regulated industry.
Newer methods can be difficult to align with traditional regulatory policies that rely on
qualified methods because the results that they produce have few precedents for comparison or
acceptance. Thus, regulatory reviewers may not understand the strengths and weaknesses of
such approaches or even the data derived from them. If regulators are reluctant to accept such
results, companies may also be reluctant to use unqualified methods that can delay product
approvals; this may introduce risk into marketing projections that depend on an ability to
estimate the success and timing of a new product launch. Thus, the rate at which novel
modeling and simulation methods are adopted has been low compared to that in other risk-averse
sectors such as the automotive, aerospace, and nuclear industries (Manolis et al., 2013).
Nevertheless, biomedical regulatory agencies are beginning to recognize the power of
modeling and simulation methods to add value in several different areas of preclinical and
clinical product development. Simulations can, for example, reduce the need for certain types
of more expensive and difficult testing that previously could only be conducted on animals and
humans. Nonclinical assessment models are a diverse set of approaches that predict certain
aspects of product function or in vivo performance, and thus can potentially substitute for
another generally accepted test or measurement. They can include in vitro models that replace
or minimize the need for animal testing, tissue and other material phantoms to evaluate
imaging devices, validated computational models for estimating dosage recommendations, and
data-mining methods to reduce the size and increase the validity of clinical trials (FDA,
2017d).
One area of particular focus has been the use of simulation methods to augment clinical data
and to examine that data for its validity. Although clinical trials will remain essential for drug
approval in most cases, specific situations exist in which a reliable predictive model could
conceivably replace a routine clinical assessment or the need for a control group (Viceconti et
al., 2015). For example, “virtual patients” or “virtual cohorts” can be constructed as
simulated patients with specific anatomical or physiological attributes. One such model is
the Virtual Family (VF) developed by the IT’IS Foundation. The model set, in this instance,
has four virtual members with physical characteristics that have been extracted from high-
resolution magnetic resonance imaging (MRI) data of healthy volunteers. These VF models are
used to simulate reactions to potentially hazardous diagnostic and curative techniques such as
those involving X-Ray, MRI, Ultrasound, or blood-flow and are accessible on FDA’s Virtual
Family website (FDA, 2017h), (Appendix D). The VF models facilitate data-based conclusions
through the use of electromagnetic, thermal, acoustic, and computational fluid dynamics (CFD)
simulations that can reduce the risks to human or animal test-subjects. These types of
simulations greatly aid the development of biomedical products. By the end of 2014, the VF
was used in more than 120 medical device submissions to FDA and was cited more than 180
times in peer-reviewed literature (Gosselin et al., 2014).
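To make the mechanics of such a virtual-cohort computation concrete, the sketch below runs a toy Monte Carlo estimate of worst-case tissue heating across four hypothetical virtual patients. It is only a minimal illustration: the cohort attributes, the SAR figures, and the simplified adiabatic heating model are all assumptions made for this sketch, not the IT’IS Virtual Family models or any FDA-qualified toolchain.

```python
# A minimal illustrative sketch, NOT the IT'IS Virtual Family models or any
# FDA-qualified tool. All attribute values and SAR figures below are invented;
# the adiabatic model ignores perfusion and conduction, so it yields a
# deliberately conservative (worst-case) temperature rise.
import random

# Hypothetical virtual cohort: tissue specific heat (J/(kg*K)) and the peak
# specific absorption rate (W/kg) an electromagnetic solver might report.
virtual_cohort = [
    {"name": "adult_male",   "c_tissue": 3600.0, "peak_sar": 1.6},
    {"name": "adult_female", "c_tissue": 3500.0, "peak_sar": 1.8},
    {"name": "child_11y",    "c_tissue": 3550.0, "peak_sar": 2.1},
    {"name": "child_6y",     "c_tissue": 3500.0, "peak_sar": 2.4},
]

def adiabatic_temp_rise(sar, exposure_s, c_tissue):
    """Worst-case temperature rise (K) assuming no heat is carried away."""
    return sar * exposure_s / c_tissue

random.seed(0)
EXPOSURE_S = 360.0  # a six-minute exposure, a common SAR averaging window
for member in virtual_cohort:
    # Perturb SAR by +/-20% to mimic positioning and anatomical uncertainty.
    rises = [adiabatic_temp_rise(member["peak_sar"] * random.uniform(0.8, 1.2),
                                 EXPOSURE_S, member["c_tissue"])
             for _ in range(10_000)]
    print(f'{member["name"]}: worst-case dT = {max(rises):.3f} K')
```

In an actual device submission, the SAR field would come from a full electromagnetic solver run against the anatomical model rather than a fixed scalar, but the downstream risk logic has this same shape.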
Another example is the Alzheimer’s Disease Simulator, developed by the Critical
Path Institute’s (CPI) Coalition Against Major Diseases (CAMD). This model uses data
collected from previous clinical trials to construct “control” groups of Alzheimer’s patients. It
further uses the clinical results of interventions with already marketed drugs to estimate the
effects that might be expected in a real clinical trial on such patients. This simulator is seen as
a resource for sponsors that could provide a quantitative rationale for selecting particular study
designs and inclusion criteria in Alzheimer’s studies (EMA, 2013). More generally, such an
approach may provide insights into the validity of control groups or the comparability of a
potential intervention with other similar products in development or on the market. This in
turn can prevent a development program from proceeding down an inappropriate path, for
example, by cross-checking seemingly successful clinical trial results against models of other
known groups with the same characteristics or treatment plans. In some cases, the modeling
can help to prevent an incorrect go/no go decision on a potentially valuable product whose
initial investigations might have been underpowered or misleading due to errors in design or
selection of atypical control or experimental subject samples (Manolis et al., 2013).
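The following sketch, loosely in the spirit of that approach but emphatically not the CAMD simulator itself, shows how a simulated control arm built from assumed placebo-decline parameters could be used to sanity-check an observed control group. The decline rate, variances, cohort size, and the “observed” value are all invented for illustration.

```python
# Illustrative sketch only; the CAMD Alzheimer's Disease Simulator is built
# from pooled patient-level trial data, whereas the parameters below are
# invented placeholders for a placebo arm's 18-month ADAS-cog worsening.
import numpy as np

rng = np.random.default_rng(42)

N_PATIENTS   = 200    # placebo subjects per simulated trial
MEAN_DECLINE = 5.5    # assumed mean 18-month worsening (ADAS-cog points)
BETWEEN_SD   = 6.0    # between-patient variability
MEASURE_SD   = 2.5    # measurement noise at the final visit

def simulate_control_means(n, n_trials=10_000):
    """Mean observed change of n placebo patients, across simulated trials."""
    true_change = rng.normal(MEAN_DECLINE, BETWEEN_SD, size=(n_trials, n))
    observed = true_change + rng.normal(0.0, MEASURE_SD, size=(n_trials, n))
    return observed.mean(axis=1)

sim_means = simulate_control_means(N_PATIENTS)

# Cross-check a trial's actual control arm against the simulated reference
# distribution: a mean in the extreme tails would flag an atypical control
# group and a possibly misleading efficacy comparison.
observed_control_mean = 3.1  # hypothetical value from a completed trial
pct = (sim_means < observed_control_mean).mean() * 100
print(f"Observed control mean falls at the {pct:.1f}th percentile of "
      f"{sim_means.size} simulated control arms.")
```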
As drug development evolves, regulatory bodies now acknowledge that they must remain open
to the use of sophisticated computer-based and statistical tools (Olson & Downey, 2013).
However, regulatory agencies, manufacturers, and academic institutions have struggled to
develop ways in which such methods can be embedded into the regulatory-science tool-box. A
complex set of activities is needed to verify that a particular model is valid for specific uses.
In many instances, those activities rely on partnerships between regulatory bodies and
programs. The Critical Path Institute (CPI), for example, brings scientists from the FDA,
industry, and academia together in order to undertake projects such as the development of the
Alzheimer’s Disease Simulator, which then can be made freely available. In the European
Community, the EMA Modelling and Simulation Workshops (2011) have catalyzed the
development of globally harmonized Modeling and Simulation (M&S) standards and
collaborations such as that to construct a Virtual Physiological Human (VPH) that models the
human biological system.
Even after computerized simulation methods are developed they still face hurdles if they are to
be incorporated into mainstream drug-development programs. An important way in which
regulatory bodies can actively promote the use and standardization of simulation techniques is
by providing guidance about what they recommend for use in regulatory submissions. Globally,
agencies have drafted a number of guidance documents to define regulatory expectations when
submitting evidence that incorporates simulated data. This thinking has been described in the
FDA’s Center for Devices and Radiological Health (CDRH) position papers and guidance as
communicated through its Medical Device Development Tools (MDDT) Program (FDA,
2014b) and through FDA’s Center for Drug Evaluation and Research (CDER), “Guidance for
Industry and FDA Staff, Qualification Process for Drug Development Tools” (DDT) (FDA,
2014a). These efforts promote the use of simulation methods by defining concepts and
standardizing processes to qualify tools, so that sponsors can use them with more
confidence that they will be acceptable as part of their regulatory submissions.
At the same time, other regulatory agencies have also taken up the challenge of integrating in-
silico tools into testing strategies. For example, the Japanese Pharmaceuticals and Medical
Devices Agency (PMDA) has released the “Current Position and Expectation for Use of M&S
in Drug Development and Regulatory Decision Making: The PMDA Viewpoint - The PMDA
Viewpoint” (PMDA, 2011). This paper outlines the use of simulation in planning and
optimizing new trials, and it suggests that simulation might prove to be an efficient way to plan
future clinical trials, particularly when dealing with special populations such as children, the
elderly, patients with renal/liver impairment, or subjects with ethnically diverse attributes.
1.2 Statement of the problem
Computerized clinical trial simulation (CTS) techniques have been suggested to shorten
development times and reduce development costs while at the same time improving the quality
of the data on which regulatory decisions are based. However, the challenges inherent in
gaining regulatory qualification and acceptance of simulation methods in clinical trials have
been viewed as impediments that limit their use in clinical trial programs. Further, the impact
of recent regulatory initiatives to promote clinical simulation methods remains uncertain.
While modeling, simulation, and data-mining techniques have been very successful in other
risk-averse sectors such as the aerospace and nuclear industries, initiatives to incorporate these
techniques into the biomedical industry seem to have been only marginally successful.
The difficulty of qualifying even a single simulation method can be overwhelming. The
qualification review team may find it necessary not only to gain convergence of opinions
amongst the diverse stakeholders in the consortium undertaking the qualification activities, but
also amongst the wider audience, by holding public discussions through workshops, conferences, or
other public forums (FDA, 2014a). Even after a method is qualified for use, it is not clear
whether the recent regulatory policies and guidance regarding the acceptance of simulated data
in product submissions actually meet the needs of industry, and whether the reviewers in the
“trenches” have bought into their use as respected tools.
Additionally, biomedical innovators may not be aware of the efforts by the global regulatory
community to accept and promote simulation techniques as applied to clinical development.
We know that greater acceptance of these techniques might help to prevent late stage failures
and provide greater clarity with respect to the validity of results in small target populations
typical of pediatrics and rare diseases, for example. We also know that investigators may be
able to find new or overlooked indications for already marketed products by data-mining
decades of existing clinical results (FDA, 2004), (Viceconti et al., 2015). However, what
remains unclear is whether industry is even aware of recent changes in policy and guidance
regarding the use of simulation techniques in clinical trials, and whether regulatory policy and
guidance regarding the acceptance of synthetic, simulated data in product submissions are
actually meeting the goals that they were intended to achieve.
1.3 Purpose of the Study
The purpose of this research was to explore the views and actions of the biomedical industry
with regard to the use of computerized simulation during the phase II and III clinical trial
stages of drug development. In this study, I explored the current views and approaches used by
industry with regard to the development and use of such methods by disseminating a focused
survey instrument to individuals engaged in clinical research and development activities and
regulatory interactions.
The survey probed the biomedical industry on specific views related to the adequacy of FDA
policies for the regulation of simulation used in clinical trials. It also explored the nature of
their activities related to the development, qualification and implementation of simulation
methods. Of particular interest were the experiences of the industry related to the ability to use
such tools as part of regulatory applications. The survey was constructed to give respondents
the opportunity to identify impediments that they have experienced and the influences that
might transform, contribute to, or hinder the development of effective use of simulation
methods. Initially, the high-level policy analysis framework described by Kingdon (1995) was
considered in order to assess whether there existed a policy window of opportunity for
simulation in clinical trials (Kingdon, 1995). However, it was the nomenclature for
implementation maturity and core implementation components introduced by Fixsen (2009),
and his structured approach to implementation science, that provided the more critical
foundation to examine broad areas of diffusion, dissemination, and implementation of clinical
trial simulation policy and programs (Fixsen, Blase, Naoom, & Wallace, 2009).
To critique the survey prior to its dissemination, a focus group of individuals with experience
in the biomedical industry was convened to advise on its approach and questions. The survey
was then administered electronically to a selected group of participants possessing significant
experience with either biomedical product development or regulatory submissions. These
individuals were identified not only through personal contacts and membership lists from
relevant professional meetings and advisory groups, but also by searching FDA’s listing of
issued letters of support for given DDTs likely to make use of simulations (FDA, 2017c).
1.4 Importance of the Study
The importance of this study centers on the ability of the biomedical community, policy
makers (regulatory bodies), industry, and academia to remove the bottle-necks that prevent
new technologies from reaching the public domain due to limitations in the current critical path
(FDA, 2004). That restriction is generally viewed to be most severe as industry tries to provide
evidence of safety and efficacy through reliable, safe, and cost-effective testing. The findings
of this study thus contribute to our understanding of ways in which to foster innovations
that improve the availability of medicines that are more effective, safer, and more affordable.
The study is intended to benefit biomedical pioneers committed to bringing novel products to
market.
Patients may also benefit from this work. Too many novel curative candidates intended to treat
pediatric or orphan populations must be shelved due to limited test populations (EMA, 2011b).
As personalized medicine develops, many cures could ultimately fall into FDA’s orphan
category - as each individual patient is unique, the population for many of these individualized
cures may ultimately be reduced to a sample size too small to be analyzed efficiently with
traditional clinical trials.
Universities and academic coalitions often forgo the rigorous clinical and manufacturing
standards that biomedical regulatory bodies demand of industry, and in this regard, many
discoveries that are made are often not developed effectively for commercialization or
translation (Jamieson, 2011). This study may benefit universities and academic coalitions by
helping them to understand some of the opportunities and roadblocks inherent in using
simulated data as part of clinical development to increase confidence and manage costs.
Finally, policy makers may benefit from the feedback provided by respondents as they attempt
to provide consistent messaging about policies and to identify areas in which more guidance or
education may be needed.
1.5 Limitations, Delimitations, Assumptions
1.5.1 Limitations
The use of simulations in clinical development is still in a formative stage. Thus, the research
is inherently limited by the paucity of written material regarding industry views and current
status of simulation in clinical development. Technological developments often seem to
outpace the ability of both industry and government to provide timely and cogent published
insights on quickly evolving technological issues.
It may further be limited by a number of factors common to the use of survey
methodologies - the ability to find sufficient suitable respondents, to encourage those busy
respondents to participate, and to develop questions for them that provide sufficient depth of
insight to assure meaningful, unbiased results. I must acknowledge the difficulty of recruiting,
in sufficient numbers, representative industry subjects who may be involved in the use of clinical
simulations. The field is relatively new, the population of experts in industry is
relatively small, and the exploratory nature of the results may only provide a base for later
studies. As a result, it is likely that this limitation may prevent rigorous statistical analysis at
this stage.
1.5.2 Delimitations
This work will not examine the use of modelling or simulations in pre-clinical development. It
will also generally not attempt to examine the use of more traditional Pharmacokinetic
modelling used in clinical development. The broader subject of computerized modelling and
simulation currently includes a spectrum of technical disciplines that can be applied across
various phases of clinical development. However, this work will be delimited to the
examination of regulatory policies regarding computerized simulations used in clinical
development. In particular, the study will focus on the application of computerized simulations
in phase II and III clinical trials, and the subsequent inclusion of the simulated data in
regulatory submissions.
Also, the present survey has a US focus and will not be administered to participants outside of
the US, unless those participants have become stakeholders in the CAMD’s Clinical Trial
Simulation Tool for Alzheimer’s Disease. Although other countries have similar standards
regarding clinical trials and the data necessary to prove the safety and efficacy of medical
products, those views and rules may vary in a way that could make it difficult to see the
particular views that are developing in the US on this issue.
The proposed study is delimited to the views of industry representatives who are actively
involved with clinical trial simulations. It will not attempt to survey the
opinions of government regulatory bodies, because these types of official insights are
communicated through formal publications, many of which are reviewed as part of this work.
The study is time-limited in that it will look at industry views of clinical trial simulation
activities as they exist in the year 2017-2018.
1.6 Organization of Thesis
Chapter 1 provides an overview of the topic, identifies the target audience, and presents a case
for the importance of the issues.
Chapter 2 summarizes the extant work from the three principal stakeholders in biomedical
product development: (i) Government, (ii) Industry and (iii) Academia. An emphasis is placed
on literature defining existing and planned regulatory policies governing innovations in the
development of biomedical products with particular emphasis on how clinical trial simulation
plays into these policies and innovations.
Chapter 3 defines the analytical methods used for designing the survey, the complete list of
survey questions used, and the method for gathering data.
Chapter 4 provides a detailed analysis of the survey results in narrative, tabular, and graphical
format.
Chapter 5 provides a summary of results, conclusions, and implications associated with the
research, and includes conclusions and recommendations for further research.
Appendices to this thesis include: A) the survey that was distributed; B) the draft survey prior
to focus group comments, C) results and reports for responses received; D) virtual family
models, E) the 2010 Critical Path budget, and F) the EMA framework.
1.7 Definitions
Acronym / Term Definition
AD Alzheimer’s Disease
ASME American Society of Mechanical Engineers
Biomedical product We will use the term biomedical product to indicate any product intended
to prevent, alleviate, or cure any human disease. This includes
pharmaceutical and biological products, as well as medical devices.
CAMD Coalition Against Major Diseases
CDER Center for Drug Evaluation and Research
CDISC Clinical Data Interchange Standards Consortium
CDRH Center for Devices and Radiological Health
CFD Computational Fluid Dynamics
CM&S Computerized Modeling and Simulation
CPI Critical Path Initiative (2004), or Critical Path Institute
CTS Clinical Trial Simulation
DDT Drug Development Tool
EMA European Medicines Agency
FDA Food and Drug Administration
FFP Fit-For-Purpose - A DDT is deemed FFP based on the acceptance of the
proposed tool following a thorough evaluation of the information
provided. The FFP determination is made publicly available in an effort
to facilitate greater utilization of these tools in drug development
programs.
ICCVAM The Interagency Coordinating Committee on the Validation of Alternative
Methods; also the ICCVAM Authorization Act of 2000
In-silico An expression used to mean "performed on computer or via computer
simulation."
In Vitro In vitro studies are performed with microorganisms, cells or biological
molecules outside their normal biological context. Colloquially called
"test tube experiments".
In Vivo Latin for "within the living". Studies in which the effects of
various biological entities are tested on whole, living organisms,
usually animals (including humans) or plants.
IT’IS The Foundation for Research on Information Technologies in Society
M&S Modeling and Simulation
NAM Non-clinical Assessment Model
NIH National Institutes of Health
PMDA (Japan) Pharmaceuticals and Medical Devices Agency
PD Pharmacodynamics
PK Pharmacokinetics
REMS Risk Evaluation and Mitigation Strategy - a drug safety program that the
U.S. Food and Drug Administration can require for certain medications
with serious safety concerns
SAS Statistical Analysis System - a software system for data analysis and
report writing
Synthetic data Any data applicable to a given situation that are not obtained by direct
measurement
US United States
VF Virtual Family
V&V Verification and Validation
CHAPTER 2. LITERATURE REVIEW
2.1 Methodological Approach to the Literature Review
A literature search was conducted for publications dealing with the use and acceptance of CTS
techniques in the biomedical industry, as a starting point for identifying literature associated
specifically with the use of simulated data in clinical trials. The initial search was conducted
using resources in the USC Norris Medical Library, PubMed database, and the Wiley Online
Library. Filtering an abundant literature that was largely unrelated to the present topic was a
significant challenge. An initial keyword search for ‘simulation’ across public internet sites as
well as technical databases and on-line libraries resulted in over 500,000 citations. As the
review process matured, other keywords were identified and included, for example, ‘In-silico’,
‘Data mining’, ‘Simulation’, and ‘Virtual patient/cohort’. Additional regulatory keywords,
such as ‘Critical Path’, ‘Pipeline Problem’, ‘Qualification’ and ‘Context of use’ were
incorporated into the review process. Also, publicly available internet-based search engines
(e.g. Google, Bing) were employed to look for other materials and presentations that have
contributed to this topic. For those works that had been cited in other articles, a search of
citation indices was particularly effective and provided greater relational evidence that could
not be revealed by relying solely on keywords. By narrowing the search terms, the electronic
results still provided over 28,000 citations. Ultimately, for the purposes of the study,
approximately 170 citations, presentations and papers were determined to be most relevant to
the study. Seminal publications that were identified included, for example, the works by
Manolis and by Viceconti (Manolis et al., 2013) (Viceconti et al., 2015). The collections of
reference works, regulations, and guidance documents produced and curated by regulatory
agencies such as EMA, FDA, and PMDA were critical sources for identifying regulatory
policy and guidance regarding the use of simulation techniques for biomedical product
development in general and clinical development in particular. FDA and EMA websites
provided additional content and guidance documents on the projects that they are undertaking
to develop simulation assessment methodologies, and these also provide rich sources of insight
and primary reference. Thus, a significant amount of the relevant literature was drawn from
governmental publications, about which much is written in trade journals rather than peer-
reviewed publications.
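As an aside on reproducibility, this kind of keyword narrowing can be scripted against NCBI’s public E-utilities interface. The sketch below is only a rough approximation: the query strings are assumptions, since the exact search syntax used for this review was not recorded in machine-readable form.

```python
# A rough sketch of the keyword-narrowing step using NCBI's public
# E-utilities (esearch) endpoint. The query strings are assumptions for
# illustration and do not reproduce this review's exact search strategy.
import json
import urllib.parse
import urllib.request

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_count(term: str) -> int:
    """Return the number of PubMed records matching a query."""
    params = urllib.parse.urlencode(
        {"db": "pubmed", "term": term, "retmode": "json", "retmax": 0})
    with urllib.request.urlopen(f"{ESEARCH}?{params}") as resp:
        return int(json.load(resp)["esearchresult"]["count"])

broad = "simulation"
narrowed = ('"clinical trial simulation" OR "in silico trial" '
            'OR ("virtual patient" AND regulatory)')

print("broad query:   ", pubmed_count(broad))     # an unmanageable count
print("narrowed query:", pubmed_count(narrowed))  # a reviewable result set
```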
2.2 Evolution of Data Analysis Supporting Clinical Trials
The use of data-driven methods to support clinical decision-making has only recently become
an important part of clinical trial design. In the two centuries before the mid-1900s, most work
related to the analysis of data was directed toward the development of statistical methods as a
“pure” science that barely influenced the conduct of clinical testing. Most elements of basic
clinical design, including patient selection, comparison populations, and observation and
recording of the effects of various treatments (ranging from sea-water to oranges and lemons),
were nonetheless recognizable in the landmark scurvy trial of 1747 conducted by James Lind.
Much later, in 1943, the UK Medical Research Council's (MRC) trial of patulin for the
common cold pioneered the use of a double-blind, controlled trial design and subsequently
paved the way for the first randomized, controlled trial of streptomycin in pulmonary
tuberculosis carried out in 1946 by the same body (Bhatt, 2010). Nevertheless, the careful
attention to design shown in these landmark trials was hardly the norm prior to the middle of
the twentieth century. Relatively little guidance was available to drug developers at that time.
Further, the original Food and Drugs Act, written in 1906 to prohibit the interstate commerce
of misbranded and adulterated foods and medical products, did not define clearly the evidence
needed to demonstrate that a product complied with this law ("Food Drug and Cosmetic Act,"
1938).
It was not until passage of the Federal Food, Drug, and Cosmetic (FDC) Act of 1938 that the
foundation for clinical testing was established by requiring pharmaceutical products to prove
that their drugs were safe ("Food Drug and Cosmetic Act," 1938). Evidence for that safety had
to be submitted to the FDA as part of a New Drug Application (NDA). The NDA was then
reviewed and approved by the regulatory body before the product could go onto the market.
However, the level of proof required by the 1938 Act was relatively modest ("Food Drug and
Cosmetic Act," 1938).
The need for more stringent requirements became apparent late in the 1950s, after
thalidomide began to be marketed in Europe as a sleeping aid and as a sedative for
pregnant women who were experiencing “morning sickness”. When birth defects appeared in
thousands of babies born in western Europe after their mothers took thalidomide, it became
apparent that systems to understand drug efficacy and safety remained inadequate (Kim &
Scialli, 2011). Thalidomide never reached the mainstream market in the US, largely because
an FDA medical officer, Dr. Frances Kelsey, had concerns about the level of safety
information provided by its manufacturer (Kim & Scialli, 2011). Despite FDA restrictions,
however, a small quantity of thalidomide had been introduced into the US in order to conduct
informal “seeding trials”. In such trials, physicians were encouraged to distribute unapproved
drug in the premarket period as part of “clinical investigations” to encourage familiarity with
its use. Many public health experts feared that the inadequate oversight of this common
practice would continue to have negative public health consequences such as those seen with
thalidomide. Thus, the Kefauver-Harris Drug Amendment (K-H Amendment) of 1962
attempted to strengthen the evidentiary requirements to demonstrate drug efficacy and safety.
The amendments required that the FDA specifically approve the marketing application before
the drug could be sold and imposed new, more rigorous rules for clinical trials. These included
requirements for the informed consent of study subjects, formalized good manufacturing
practices, and adverse event reporting. It further transferred the regulation of prescription drug
advertising from the Federal Trade Commission to the FDA (FDA, 2006e).
The impact of the Kefauver-Harris Amendment was significant because it set the stage for a
new era in drug safety and efficacy testing. It served as a critical turning point for the
biomedical industry because for the first time, drug manufacturers were required to prove the
effectiveness of their products before marketing them, and it greatly increased the level of
expectation with regard to the quality of clinical trial conduct. FDA began to view strict in-
vivo testing in a clinical setting as the accepted method for providing proof of drug safety and
efficacy. It also represented an important inflection point from a business standpoint because it
increased the costs and challenges associated with biomedical product development.
The second half of the twentieth century became a time when the importance of high-quality
clinical data became more and more central to the drug development process. The Kefauver-
Harris Amendment was part of a more general wave of legislative attention directed not only at
new drugs but also at drugs already on the market. In 1966, FDA contracted with the National
Academy of Sciences/National Research Council to evaluate the effectiveness of 4,000 drugs
that had previously been approved between 1938 and 1962. The work of that demanding set of
drug evaluations was assigned to 180 specialists assembled into thirty panels; each panel took
responsibility for a certain disease category. The panels sought evidence of drug
safety and efficacy from four main sources: 1) Briefs submitted by the sponsor of the drug; 2)
additional evidence directly solicited from the sponsor; 3) the files of the FDA; and 4) pertinent
medical literature brought in by the panelists. Each claim for a drug was subjected to a separate
evaluation, to be sure that the totality of the evidence sufficiently assured the efficacy of the
drug for its labeled indication. In this daunting exercise, panels made tens of thousands of
decisions, mostly between 1966 and 1972 (NAS, 1968). One outcome of this exhaustive
exercise was the crystallization of benchmarks that the FDA would use in subsequent analyses
of safety and efficacy data. Manufacturers who submitted new drug applications after the
NAS/NRC’s efforts would be expected to provide evidence of similar rigor to that acceptable
to the NAS/NRC committees.
The initiatives in the 1960s also underscored an additional important area of immaturity in the
clinical trial system. Until that time a medical product was typically considered to be safe and
effective if it was successful in the market without much publicity to draw attention to
problems. Seven of the 1,050 drugs that the NAS/NRC panel review found to be ineffective
were manufactured by Upjohn, which filed suit as the plaintiff against Robert H. Finch,
Secretary of Health, Education and Welfare, and Herbert L. Ley, Jr., Commissioner of Food
and Drugs, the defendants. With the thalidomide tragedy still in mind, the Court of Appeals
upheld enforcement of the 1962 K-H drug effectiveness amendment in Upjohn v. Finch
("Upjohn v. Finch,"), by ruling that commercial success alone does not constitute substantial
evidence of drug safety and efficacy. This further reinforced the responsibility of drug
manufacturers to provide clinical evidence to support any claims of safety and efficacy.
Historically, some additional examples of popular medical remedies that were neither safe nor
effective include lobotomies, drinking urine, the use of mercury, and bloodletting, none of
which had solid clinical evidence to support their use.
Attention then turned to the way that clinical trials were conducted to establish drug safety and
efficacy. One significant step took place in 1996 when the International Conference on
Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use
(ICH) issued guidelines for Good Clinical Practice (GCP), (ICH, 1996). This standard,
harmonized between the three major drug producing constituencies of the US, EU and Japan,
was intended to protect the rights, safety and welfare of human subjects in clinical trials and to
improve the quality and usefulness of derived data. In the US, new principles for clinical trial
design and management were also implemented as national requirements, in 45 CFR Part 46
and 21 CFR Parts 11, 50, 54, 56, 58.
The implementation of GCPs was resource intensive and increased further the cost and time
needed to meet the higher expectations of the regulators. Of particular concern were the
effects of increasingly rigorous, and some might say, onerous, requirements for certain disease
subgroups in which the new clinical trial expectations were cost prohibitive. This was
particularly the case for “orphan diseases”, defined by the FDA as:
“…any disease or condition which (A) affects less than 200,000 persons in the
United States, or (B) affects more than 200,000 in the United States and for which
there is no reasonable expectation that the cost of developing and making available
in the United States a drug for such disease or condition will be recovered from
sales in the United States of such drug.” ("The Orphan Drug Act," 1983).
Clinical trials for orphan diseases such as Huntington's Disease, Myoclonus, Amyotrophic
Lateral Sclerosis (ALS), Tourette Syndrome, and Muscular Dystrophy are difficult to carry out
because so few patients are available to participate in such trials. As an incentive to drug
makers, the US passed the Orphan Drug Act in 1983 ("The Orphan Drug Act," 1983), and
European Union (EU) enacted similar legislation under Regulation (EC) No 141/2000 in 1999.
These laws attempted to encourage clinical trials of orphan diseases by relaxing the
expectations for statistically significant subject numbers in their trials and by reducing some of
the economic barriers by offering tax incentives and market exclusivity. These included the
provision of research grants, a 50% tax credit for expenditures incurred during clinical testing,
and a 7-year market exclusivity for the drug in its orphan market. The incentives had a
positive effect; FDA listed 1,793 orphan designations and 322 approvals between 1983 and
2007 (Seoane-Vazquez, Rodriguez-Monguio, Szeinbach, & Visaria, 2008).
The advantages offered by orphan product exemptions rapidly became obvious. For example, a
retrospective analysis of orphan medicinal products (OMP) approved by the EMEA from the
time the new OMP legislation came into force in August 2000 up to December 2004 reported that 10
of 18 (55 %) orphan-designated treatments approved in Europe had been authorized ‘under
exceptional circumstances’ (Joppi, Bertele, & Garattini, 2006). This designation identified that
the clinical dossiers were incomplete and that further studies would be needed to maintain the
marketing authorization in that there was insufficient clinical evidence to meet standard OMP
requirements for market approval. Specifically, only nine of the eighteen approved treatments
(50 %) were supported by data from randomized Phase III clinical trials, five were supported
only by uncontrolled Phase II studies, two by uncontrolled open-label studies, and one by a
literature analysis alone. Other limitations of the submitted data included the use of surrogate
endpoints and inadequate durations of follow-up in relation to the natural history of the disease
(Winstone, Chadda, Ralston, & Sajosi, 2015).
A similar problem was faced when attempting to provide drug products to pediatric
populations. Because pediatric trials are particularly difficult to carry out, most companies
chose to conduct trials only in adult populations. To treat children, then, physicians had to
administer the adult drugs “off-label” and guess at the correction factor for
pediatric dosages. In order to assure better pediatric treatments, the FDA promulgated the
Pediatric Rule (1998), a regulation that required manufacturers of most drug and biological
products to conduct studies to assess the safety and efficacy of those products in children – a
requirement later struck down by the courts in 2002 (Arshagouni, 2002). Those rules created
additional burdens for companies whose products could be marketed for pediatric indications.
The ordering of these and other significant drug approval modifications is shown in Figure 1.
What seems significant from these developments is not only that the amount and nature of
clinical evidence was changing over time, but that the requirements were so demanding that
relief or even legal action was required to ensure that drug development was possible for some
populations.
Figure 1: Timeline of Clinical Data Requirements
Elements reproduced from the FDA’s website “Significant Dates in U.S. Food and Drug Law
History”, (FDA, 2014e).
2.2.1 Challenges of Clinical Trial Execution
The milestones in Figure 1 underline the evolution of a sophisticated and demanding clinical
trial path. Before the K-H Amendment, the path to market a new drug was relatively primitive
by today’s standards. The much-needed legislation to ensure the safety and efficacy of medical
products drove the progressive development of clinical trial machinery that introduced greater
complexity and formidable hurdles into the development process. Clinical trials became the
primary tool by which to demonstrate safety and efficacy. They also became the most
burdensome and restrictive element in the drug development path.
The burden of meeting the increasing number of regulations, rising costs, expanding
subpopulations that must be tested, and the addition of new technologies such as genetic screening
has made clinical testing very cumbersome – to the point that many feel it stifles the discovery
process itself (Viceconti et al., 2015). Figure 2 illustrates that development costs for new drugs
more than doubled in the twenty-year time span, but the number of New Drug Applications
(NDAs) and New Molecular Entities (NMEs) produced by that research has stayed largely
constant (GAO, 2006).
Figure 2: Drug Research and Development Costs versus Submissions
Shown are data for NDAs, and NDAs for NME Submissions, 1993-2004, (Constant 2004
Dollars). Adapted from “New drug development: Science, business, regulatory, and
intellectual property issues cited as hampering drug development efforts”, (GAO, 2006).
Why are conventional clinical trials so costly? When a trial is conducted under ICH-GCP
guidelines, work is apportioned amongst at least four sets of players. These include the trial
sponsor, the patients, the investigators/sites, and the regulators. The activities that must be
performed by all of these players contribute to the relatively long timelines and costs observed
in clinical trials.
2.2.1.1 Sponsor
The sponsor, typically the drug manufacturer, pays for the study, and is responsible for its
conduct and quality. Some of these responsibilities, described in great detail elsewhere,
include trial design, investigator selection, financing, notifications and submissions to
regulatory authorities, supply of investigational products and their associated documentation,
record management and retention, site monitoring, and clinical trial/study reports (ICH, 1996).
Today, many of these tasks are shared with a Contract Research Organization (CRO), paid to
carry out some aspects of the study management.
In 2014, Sertkaya and colleagues estimated that costs range between $161 million and $2
billion to bring a new drug to market. Differences within that range depend on a number of
factors. For example, the therapeutic area itself can be responsible for some of the cost
variance; the highest average per-study costs are typical for trials for pain and anesthesia
($71.3 million), ophthalmology ($49.9 million) and infectious disease ($41.3 million).
Conversely, trials in dermatology, endocrinology, and gastroenterology have the lowest overall
costs (Sertkaya, Birkenbach, Berlind, & Eyraud, 2014). The factors that contribute to these
costs include clinical procedure costs (15 to 22 percent), administrative staff costs (11 to 29
percent), site monitoring costs (9 to 14 percent), site retention costs (9 to 16 percent), and
central laboratory costs (4 to 12 percent) (Sertkaya et al., 2014).
Sponsors, more than any other player, recognize the critical role of good clinical data when
seeking product approval. The need to produce data of high quality can cause sponsors to
avoid risk in ways that can contribute to higher costs. In multicenter trials, for example,
uncertainty about enrollment targets and outcomes across sites can drive sponsors to plan trials
“defensively”. They may set enrollment targets higher than necessary and over-screen subjects
by using ultra-rigorous eligibility criteria (excluding, for example, subjects on other
medications or with comorbidities). Such actions make it difficult to find a sufficient number
of participants and thus protract the recruiting process. Industry sponsors generally do not
involve site investigators in the design of the clinical protocol, so the required procedures
may not be easily integrated into clinical practice at the sites. Clinical trial protocols are
increasingly laden with multiple assessments, exploratory endpoints, biomarkers, biopsies, and
imaging requirements that add complexity to the staffing requirements and administrative
burden of trials (Sertkaya et al., 2014).
2.2.1.2 Patients
Patients increasingly play an active rather than passive role as clinical trial participants, which
may improve the value of healthcare research (Domecq et al., 2014). Additional resource
allocation is needed to put into place systems that will protect patient rights, ensure privacy of
protected health information and improve communications related to patient recruitment,
education and retention. Some insight into the challenge associated with the management of
patients is perhaps reflected even in the definition of a human subject, published in the Code of
Federal Regulations, Title 45, Part 46 ("Code of Federal Regulations," 2009):
Human subject means a living individual about whom an investigator (whether
professional or student) conducting research obtains (1) data through intervention
or interaction with the individual, or (2) identifiable private information.
Intervention includes both physical procedures by which data are gathered (for
example, venipuncture) and manipulations of the subject or the subject’s
environment that are performed for research purposes. Interaction includes
communication or interpersonal contact between investigator and subject. Private
information includes information about behavior that occurs in a context in which
an individual can reasonably expect that no observation or recording is taking
place, and information which has been provided for specific purposes by an
individual and which the individual can reasonably expect will not be made public
(for example, a medical record). Private information must be individually
identifiable (i.e., the identity of the subject is or may readily be ascertained by the
investigator or associated with the information) in order for obtaining the
information to constitute research involving human subjects (see also the decision
charts provided by the Office of Human Research Protection). In addition, legal
requirements to protect human subjects apply to a much broader range of research
than many investigators realize, and researchers using human tissue specimens are
often unsure about how regulations apply to their research. Legal obligations to
protect human subjects apply, for example, to research that uses:
• Bodily materials, such as cells, blood or urine, tissues, organs, hair or nail
clippings, even if you did not collect these materials
• Residual diagnostic specimens, including specimens obtained for routine patient
care that would have been discarded if not used for research
• Private information, such as medical information, that can be readily identified
with individuals, even if the information was not specifically collected for the
study in question. Research on cell lines or DNA samples that can be associated
with individuals falls into this category.
Costs of trials escalate when trials are unduly lengthened. One of the most common reasons
for an increase in trial duration has been the difficulties associated with recruiting enough
patients; such problems of recruitment can not only cause costly delays but also cancellation of
some trials (Weisfeld, English, & Claiborne, 2011). For disease states in which many trials are
ongoing (areas such as asthma, multiple sclerosis, and chronic obstructive pulmonary disease),
sponsors must compete for a relatively scarce population of potential subjects. Similar
challenges are presented by orphan diseases, where the potential pool of patients is, by
definition, small. Patient drop-outs are a recurring extra financial burden, and this burden
grows when studies must increase the number of long-term endpoints and follow-ups or the
duration of treatment (English, Lebovitz, & Giffin, 2010). As trials go overseas to buttress
patient numbers, additional problems and costs are incurred in order to manage language,
culture and literacy issues.
2.2.1.3 Clinical Investigators/Sites
The site investigator of a study is responsible for assuring that adequate staffing is in place to
ensure that the trial can be carried out properly and that trial subjects receive adequate
associated medical care. The investigator also has regulatory responsibilities including
communication with IRB/IEC, administration of investigational products, executing
randomization and un-blinding procedures, informed consent of trial subjects, maintenance of
records and reports, safety reporting to sponsor and agencies, and provision of final report(s) to
the sponsor. Carrying out these responsibilities can be very challenging, so the ability to find
appropriately trained and committed investigators can limit the success of a trial and drive up
its costs. Of particular concern is the fact that many investigators do not have the ability
to deliver on commitments when trial design is complex. Further, the current climate for
clinical trial management may discourage otherwise good clinicians from acting as trial
investigators. Estimates suggest that 45 percent of clinical investigators quit the field after their
first clinical trial (Califf, Filerman, Murray, & Rosenblatt, 2012). Many veteran sites have also
been lost because they fail financially or find that they cannot sustain the cash flow shortfalls
caused by the lag in reimbursement for trial expenses by some sponsors (Getz, 2010). On
average, it takes approximately 120 days for sites to receive payment from sponsors and CROs
for completed work. Those sites may then have to borrow money; the average U.S.-based site
carries a debt of $400,000 (Getz, 2010). It is estimated that administrative staff costs
combined with site monitoring costs may run as high as 20-30%, on average, over the four
phases of a clinical trial (Sertkaya et al., 2014), and may limit the ability of any but the largest
and most sophisticated companies to carry out a traditional package of phased studies under
existing regulations. Additionally, protocols have become so complex that they are on the
edge of becoming unmanageable. Some individuals who have studied this challenge have
concluded that more sites can be expected to leave clinical research if these factors remain
unaddressed (Getz, 2010).
2.2.1.4 Regulators and Administrators
The strong interest in better control over the clinical trial process has had the inevitable result
that more regulations and requirements are now in place for the conduct of clinical trials.
Many policy critics recognize that the impact of regulations is often not evaluated to determine
whether the regulations actually achieve their intended purposes or simply create additional costly
obstacles (Kramer, Smith, & Califf, 2012). Certainly, the regulatory path introduces delays. A
review of 72 studies that underwent full-board IRB review found the mean review time to be 31 days.
During the IRB review, the mean time added by the sponsor, for example in responding to
questions or providing clarification, was 5 days (UAMS, 2014). FDA also requires a minimum
of 30 days to review a submission, although that process can be much longer depending on the
number of questions, concerns and contingencies that FDA requires to be resolved prior to
approval. Once the clinical trial data have been collected, much longer delays occur in the
approval process. The 2002 amendments to PDUFA set a 10-month goal for a standard
review, although again, this can be extended by the need for clarifications or responses to
questions (FDA, 2017b).
2.2.2 The Impact of Increasing Trial Sophistication
In summary, clinical activities are very expensive. Compounding the challenges presented by
the individual studies has been the proliferation in the number of required trials to support a
marketing application for a single drug. Table 1 illustrates the collection of trials that might be
seen for a single drug submission (GAO, 2006).
Table 1: Example of Clinical Study Reports for a Single Drug Submission
Dashes represent reports that may be required depending on circumstances. Adapted from
“New drug development: Science, business, regulatory, and intellectual property issues cited as
hampering drug development efforts”, (GAO, 2006).
The further demands for better trial practices, management and statistical design also come at a
cost. They have increased the number of patients required for clinical trials and ballooned the
costs associated with development without a concomitant increase in the number of curative
innovations (Figure 2). These costs multiply if the few successful drugs must bear the burden
of many other drug development programs that fail to achieve their predicted outcomes, a
situation illustrated in the commonly cited figure below (Figure 3).
Figure 3: The Drug Discovery, Development, and Review Process
Adapted from “New drug development: Science, business, regulatory, and intellectual property
issues cited as hampering drug development efforts”, (GAO, 2006).
Recent development efforts for certain drugs to treat Alzheimer’s disease provide instructive
examples of the challenges that make the clinical development process so costly. For example,
in November 2016 the investigational drug solanezumab, from Eli Lilly, failed to slow disease
progression in a late-stage trial involving more than 2,100 patients. The company spent hundreds of
millions of dollars on the development of Solanezumab, testing and retesting it on ever
narrower populations of Alzheimer’s patients in hopes of seeing a benefit (Garde, 2016). It
was the third Lilly-sponsored product to fail in a late-stage trial, joining the roughly
99 percent of experimental drugs for Alzheimer’s disease that have failed in clinical trials over
the past decade. To date, only four cholinesterase inhibitors have shown sufficient safety and
efficacy to justify marketing approval for the more than 5 million Americans with that disease.
No drug has been approved since 2002 in Europe and 2003 in the USA. The Pharmaceutical
Research and Manufacturers of America, PhRMA, an industry trade group, identified 101
failures and three successes since 1998 (Schneider et al., 2014).
The experiences in the Alzheimer’s arena are perhaps more negative than experiences in other
disease states, but nevertheless point to a more general problem enunciated in FDA’s 2004
“Challenges and Opportunities Report” (FDA, 2004):
The medical product development process is no longer able to keep pace with basic
scientific innovation….the current capacity for technological innovation has
outstripped the ability to assess performance in patients, resulting in prolonged
delays between design and use.
The likelihood of advancing a drug through late-stage clinical evaluation is highly uncertain,
and the risks are perceived as high. Figure 4, below, illustrates graphically what has become
known as Eroom’s Law in Pharmaceutical R&D (Eroom is Moore spelled backwards), where
the number of new drugs approved by the US Food and Drug Administration (FDA) per billion
US dollars (inflation-adjusted) spent on research and development (R&D) has halved roughly
every 9 years.
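This halving trend is simple exponential decay, and its cumulative effect is easy to quantify. The short Python sketch below is illustrative only; the baseline year and starting value are assumed placeholders rather than figures taken from Scannell and colleagues.

# Eroom's law: approvals per inflation-adjusted $1B of R&D spend
# halve roughly every 9 years (Scannell et al., 2012).
HALF_LIFE_YEARS = 9.0

def drugs_per_billion(year, baseline_year=1950, baseline_value=30.0):
    # baseline_value is a hypothetical placeholder used only for illustration
    return baseline_value * 0.5 ** ((year - baseline_year) / HALF_LIFE_YEARS)

for year in (1950, 1980, 2010):
    print(year, round(drugs_per_billion(year), 2))
# Over 60 years, the modeled efficiency falls by a factor of 2**(60/9), roughly 100.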
Figure 4: Overall Trend in R&D Efficiency
Number of drugs per inflation-adjusted billion USD of R&D spending, plotted on a log scale
against year. Reprinted with permission from Macmillan Publishers Ltd (Scannell, Blanckley,
Boldon, & Warrington, 2012).
Negative data from clinical trials and failures of drug approval are late-stage risks of great
concern to investors. Sertkaya and colleagues estimated that an average Phase 1 trial will cost
$30 million, require 100 participants over one year and have a 67 percent likelihood of success.
Phase 2 is expected to cost $45 million, to require 250 participants over two years, and to have
a 41 percent likelihood of success. Phase 3 is expected to cost $210 million, to require 4,000
patients over four years and to have a 55 percent likelihood of success. As of FY 2017, the
subsequent submission of an NDA to FDA requires a user fee for an application requiring
clinical data ($2,038,100), for an application not requiring clinical data or a supplement
requiring clinical data ($1,019,050), for an establishment ($512,200), and for a product
($97,750) (FDA, 2016c), and a wait of one year before approval with an estimated 83 percent
likelihood of success. Overall, the cost of that NME is estimated at $973 million over 15 years,
with an added cost of capital for the sponsor of approximately 15 percent (Sertkaya et al.,
2014).
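Taken together, the stage-by-stage likelihoods cited above imply a low overall probability of success, which can be confirmed with a few lines of Python. This is a back-of-the-envelope sketch built only from the figures quoted in this paragraph, not a reconstruction of Sertkaya and colleagues’ model.

# Stage costs (USD) and success probabilities quoted above (Sertkaya et al., 2014);
# the NDA stage uses the FY 2017 user fee for an application requiring clinical data.
stages = [
    ("Phase 1",    30e6,   0.67),
    ("Phase 2",    45e6,   0.41),
    ("Phase 3",    210e6,  0.55),
    ("NDA review", 2.04e6, 0.83),
]

p_overall = 1.0
for name, cost, p_success in stages:
    p_overall *= p_success

print(f"Chance that a Phase 1 entrant reaches approval: {p_overall:.1%}")  # about 12.5%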
It is perhaps not surprising then that investment into early-stage pharmaceutical start-ups has
diminished. This loss of investment interest can starve an industry already under pressure.
The few novel drugs that do reach the market are often sold at high, and in some cases
unacceptably high, prices in order to recoup the investments required to bring the product to
market. As examples, in 2012, Sanofi’s colorectal cancer drug received criticism when doctors
refused to prescribe it at its price point of $11,000 per month (Mullin, 2017). In 2014, Gilead
came under fire for the unacceptably high prices of its hepatitis C treatments: $80,000 for a
course of Sovaldi and $94,500 for Harvoni. The newly approved personalized chimeric
antigen receptor T-cell (CAR-T) biologic, Kymriah, is estimated to cost $475,000 for a single
treatment (Mukherjee, 2017). Mullin observes, however, that future drug discovery can come
only on the coattails of these skyrocketing innovation costs, and that the costs must be passed on
to patients. In attempts to resuscitate flagging pipelines, many companies have turned to
acquisitions, tax maneuvers, and price increases in order to fund their growth, rather than
relying on drug development (Mullin, 2017). Scannell has predicted that R&D spending may
remain flat among the top ten large pharmaceutical companies, which may presage a
decline in novel curative products (Scannell et al., 2012).
2.2.3 Data-Driven Metrics: Cost-Driver or Cost-Reduction Agent?
The information presented above points to clinical trials as a key area in which to improve
efficiency. Regulators and scientists want to leverage the lessons learned in past trials, and in
particular the data collected from those trials, to reduce the burden of clinical testing in the
future. Using clinical data as a resource casts its use in a new light, reversing a previous history
in which the data was viewed as a cost driver. The interesting evolution of this change can be
seen by tracing the history of data management over the last century.
2.2.3.1 Historical Background
The disciplined treatment of data is a relatively recent feature of clinical trials. Rigorous
statistical methods were certainly not used in the 1747 landmark scurvy trial conducted by
James Lind aboard the HMS Salisbury. However, by 1887, the beginnings of systematic data
management could be inferred from the establishment of the National Institutes of Health
(NIH), which began to fund research on disease detection, prevention and treatment. The NIH
goals and associated funding opportunities began to influence clinical research methodology
including the ability to tabulate data and apply analytic statistical techniques (Jenkins, 1991).
By the early 1900s, fifty years before the first real computers became available commercially,
medical practitioners had begun to examine data from clinical experience using quantitative
computational methods. For example, in 1923, Pearl published his first edition of Introduction
to Medical Biometry and Statistics (Pearl, 1923) and in 1931 Woods and Russell published the
landmark first edition of An Introduction to Medical Statistics (Woods & Russell, 1931). In
1937, the first pharmacokinetic model, a Physiologically Based Pharmacokinetic (PBPK)
model appeared in the scientific literature (Teorell, 1937). In 1948, the Medical Research
Council (UK) Streptomycin trial for Tuberculosis appeared to be the first to use a randomized
control. The statistician on that trial, Austin Bradford Hill, later introduced the first recognizably
modern set of criteria for establishing causation in a clinical trial setting, now known as Hill’s
criteria for causation (A. Hill, 1965).
In 1954, the field trials of the Salk Polio Vaccine enlisted 1.8 million children. The two-arm
study incorporated by accident a comparison between a randomized controlled double-blind
clinical trial and a non-randomized open trial, when some but not all doctors came to know
which children had received the vaccine and which had not. This landmark study revealed the
superiority of randomized trials which are now regarded as essential to the definitive
comparison and evaluation of medical treatments (Fieller, 1996). Since that time, study
blinding and randomization to reduce bias have become significant areas of emphasis in
clinical trial design, and they are among the areas in which early computational simulations
were first applied.
Later in the 20th century, the FDA began to scrutinize the ways that statistical methods were
used to analyze clinical data. These included requirements for sample sizes large enough to
support statistically meaningful conclusions. For example, Phase I trials typically required
~20-80 people in order to identify a safe
dose range and possible side effects. Phase II trials required ~100-300 participants and Phase
III patient studies often involved ~1000-3000 participants. These large numbers of subjects
added substantial costs. Regulators began to look at novel ways to reduce this burden. One
approach that offered modest potential savings was to use adaptive trial designs that could
reduce the number of subjects imposed by the lock-step progression of phases (FDA, 2010b).
A second, that of simulating trials using computerized methods rather than doing the trials in
live participants, offered even more possibilities for cost reductions but also introduced novel
challenges.
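The size of these cohorts follows directly from conventional power calculations. The following minimal sketch shows the standard two-proportion approximation; the 40% and 50% response rates are hypothetical values chosen only to make the arithmetic concrete.

from math import ceil
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    # Approximate subjects per arm needed to distinguish response rates p1 and p2
    # in a two-sided, two-proportion comparison.
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Detecting a 10-point improvement over a 40% control response rate:
print(n_per_arm(0.40, 0.50))  # roughly 388 subjects per arm

With two arms, that single comparison already demands nearly 800 subjects, which is why Phase III programs routinely enroll thousands.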
2.2.3.2 Evolution of Computer-based Modeling and Simulation
Computers were not used commonly as a tool for pharmaceutical development until the second
half of the 20th century. It was not until 1955, when Solomon and Gold published A three
compartments model of potassium transport in human erythrocytes (Solomon & Gold, 1955), that a
paper was identified by Index Medicus (now PubMed) with the combined keywords,
‘physiology’ and ‘computer’. Until the late 1980s, most mathematically based computer
models aimed to capture the basic mechanisms underlying physiological or pathological
processes without intending to make quantitatively accurate predictions (Viceconti et al.,
2015). However, a wider role has emerged in the last three decades, as the computational
power of silicon-based microprocessors has increased exponentially. This has resulted in
profound changes in the computational options available to researchers. Often used to
illustrate this expansion is the graph of Moore’s law shown in Figure 5. This now famous
figure identifies that the number of transistors in a dense integrated circuit doubles
approximately every two years. These additions permit more operations to be processed
in parallel and therefore more quickly (mooreslaw.org, 2017).
Figure 5: Plot of Moore’s Law
The graph shows how the number of transistors in a dense integrated circuit doubled
approximately every two years between 1971 and 2011, (mooreslaw.org, 2017).
The increase in both the number of operations that can be processed and the speed with which
they can be performed represents an important opportunity for the application of computational
methods in clinical development. These advances have introduced new options to save, organize, filter
and transport data. However, they also allow researchers to go one step further, by using data
from previous trials in order to conduct predictive operations, for what is now called Clinical
Trial Simulation.
According to Bedding and colleagues, the definition of Clinical Trial Simulation (CTS)
…is the study of the effects of a drug in virtual patient populations using
mathematical models that incorporate information on physiological systems. The
virtual patient population is selected to fit certain characteristics (eg. age, disease
status, ethnicity) that are relevant for the study of a particular drug–patient
combination that is under investigation. The purpose of a clinical trial simulation is
to help understand the likely impact of some of the unknown factors (eg. endpoint
variability, recruitment and drop out, treatment effects) that might occur in the
actual clinical trial (Bedding et al., 2013).
This type of simulation allows a researcher to predict the effect of changes in various design
features of a trial on trial outcome prior to its actual conduct. Through hypothesis testing, and
by simulating variations in the study elements, CTS can help investigators to identify and
select clinical trial designs that have a higher probability of success in showing that a drug is
safe and effective (McMahon et al., 2016). For example, biostatisticians now routinely use
simulations to analyze the behavior of various randomization schemes. These simulations can
give the investigative teams more confidence that the chosen algorithms will yield useful
results. Further, the ability to make quantitatively accurate clinical predictions by using
computational methods also opens the door to new testing strategies in which modeled data can
be used alongside actual clinical interventions to reduce the number of needed subjects or to
increase confidence in the results of the real-time trial. This can be very helpful when trials
must draw from a limited study population. For example, pediatric drug development is often
hampered by the small size of an affected pediatric population. Availability of subjects from
this already small population is further compromised because parents are hesitant to involve
their children in trials. Further, a single drug may have varying effects on children of different
ages so that patients may have to be substratified by age, reducing the power for each
subgroup. Thus, those developing pediatric trials have looked to simulations to augment their
traditional approaches (McMahon et al., 2016).
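As one concrete illustration of the randomization analyses mentioned above, the sketch below compares treatment-arm imbalance under simple coin-flip assignment with imbalance under permuted-block assignment. The trial size and block size are arbitrary values chosen for illustration.

import random

def max_imbalance_simple(n, sims=5000):
    # Largest |treated - control| gap observed under coin-flip assignment.
    worst = 0
    for _ in range(sims):
        treated = sum(random.getrandbits(1) for _ in range(n))
        worst = max(worst, abs(2 * treated - n))
    return worst

def max_imbalance_blocked(n, block=4, sims=5000):
    # The same measure under permuted-block randomization.
    worst = 0
    for _ in range(sims):
        assignments = []
        while len(assignments) < n:
            b = [1] * (block // 2) + [0] * (block // 2)
            random.shuffle(b)                # each block is balanced internally
            assignments.extend(b)
        treated = sum(assignments[:n])
        worst = max(worst, abs(2 * treated - n))
    return worst

print(max_imbalance_simple(62))   # typically 20-30 across 5,000 simulated trials
print(max_imbalance_blocked(62))  # at most 2, from the final partial block

Runs of this kind give investigative teams quantitative grounds for choosing one allocation scheme over another before a single patient is enrolled.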
To demonstrate the value of clinical trial simulation (CTS), McMahon and colleagues
conducted a study to determine whether age stratification within the pediatric population could
help (1) to assess response to a pharmacologic intervention and (2) to design future trials based
upon published stratified disease data using CTS. The method relied on modeling data
available from the literature for Kawasaki disease (KD), in which age-stratified CTS for a
theoretical new drug was conducted (McMahon et al., 2016). The results predicted that
differences due to age might affect trial success if age was not taken into account. For
example, body weight–adjusted dosing at a given range was found to be acceptable for children
older than 1 year but could lead to a more than 3-fold overexposure in children younger than 1
year. Thus, dose adjustments by age would be required. The simulated data suggested that
poor decisions might be made with regard to age-appropriateness if assumptions regarding the
progress of the pediatric disease and the age of the patient were not taken into consideration
(McMahon et al., 2016).
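A toy version of such an age-stratified exposure simulation appears below. The maturation curve and all parameter values are hypothetical stand-ins rather than the published Kawasaki disease model, but they reproduce the qualitative finding that weight-based dosing alone can overexpose the youngest patients.

# Toy age-stratified exposure model; every parameter is hypothetical.
def clearance(weight_kg, age_years, cl_adult_per_kg=0.5):
    # Assumed maturation curve: infants eliminate drug more slowly than children.
    maturation = min(1.0, 0.3 + 0.7 * age_years)
    return cl_adult_per_kg * weight_kg * maturation

def exposure(dose_mg_per_kg, weight_kg, age_years):
    # Steady-state exposure proxy: dose rate divided by clearance.
    return dose_mg_per_kg * weight_kg / clearance(weight_kg, age_years)

infant = exposure(10, weight_kg=8, age_years=0.25)   # a 3-month-old
child  = exposure(10, weight_kg=20, age_years=5.0)   # a 5-year-old
print(f"infant/child exposure ratio: {infant / child:.1f}")  # about 2.1 here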
Simulations are helpful to complement real-life data not only for pediatric trials but for trials in
a number of disease states and populations. The collection of approaches often used to support
drug development is now described under the rubric of Quantitative Systems Pharmacology
(QSP) by the National Institutes of Health’s Quantitative Systems Pharmacology workshop
group. More specifically, QSP has been defined as…
… an integrated approach to translational medicine that combines computational
and experimental methods to define, validate, and apply new pharmacological
concepts to the development and use of small molecules and biologic drugs.
QSP models have been promoted as useful for guiding clinical decisions as varied as dose
selections for various trial phases; statistical approaches to study design; go/no-go decisions;
project prioritization; clinical development plans; preclinical candidate identification; proof of
concept; dosing and formulation; pediatric study design; and regulatory interactions and filings
(Crawford, 2016).
Given the potential power of simulations to increase the efficiency of clinical assessment, it is
not surprising that the FDA and other regulatory agencies are beginning to examine how new
simulation technologies can be used to simplify the increasingly burdensome nature of clinical
requirements. As QSP and other computational techniques are understood better, FDA has
taken steps to relax the traditionally strict view that clinical safety and efficacy can only be
proven through in-vivo clinical studies. At the same time, the agency acknowledges that a large
number of computational approaches useful for product development or regulatory review are
not being employed optimally. In 2016, the FDA identified that an estimated 354
potentially useful models or methods were not yet covered by standards or guidance, all of
which could potentially serve as tools to improve efficiency in the drug discovery,
development, and review process (FDA, 2008), (FDA, 2014c). The opportunities that they
promise could not come at a better time. Drug failures in late-stage trials are commonplace and
difficult to predict. The use of clinical trial simulation promises to reduce those failures.
2.2.4 Logistical Challenges Impeding Effective Simulation
Rapid advances in computational capability coupled with new scientific discoveries in fields
like genomics and imaging offer new options to predict the appropriateness, safety and efficacy
of potential medical products. However, computer analysis is only effective if it has a strong
data substructure, including access to accurate and plentiful numbers to feed the programs used
for simulations, harmonization of language used to describe variables in different datasets, and
availability of trained personnel to code and run the simulations. A key requirement for
effective simulation programs has been the concurrent development of data repositories for
clinical datasets. Some of these datasets can be used to illuminate patterns of disease
progression in untreated patients; others capture clinical outcomes from drugs that have already
entered the market or even that failed their trials. Even if trials on certain drugs are
unsuccessful, the failed results can be instructive in ways that might help a company to avoid
repeating past mistakes (FDA, 2006c). By mining these data, researchers can conduct
meta-analyses that combine datasets to increase the size of studied populations and the
predictive power of the conclusions (FDA, 2006c), (Marshall, 2016). However, two areas, data
accessibility and data consistency, have posed challenging problems to solve. Other administrative
considerations, as well as existing regulatory requirements pertaining to clinical data, include
data authenticity and protection, privacy, and access controls ("Food and Drug Administration
Modernization Act," 1997).
2.2.4.1 Gaining Access to Clinical Data
Access to clinical data can benefit research of many types. It can be particularly important for
smaller companies or companies involved in the development of biomedical solutions for small
or special populations (e.g. children and rare diseases), because they have limited data to serve
as a foundation for strategizing a development plan. Prior data can also support the study of
diseases whose etiology and course are particularly complicated, lengthy, and uncertain, such as those
associated with neurological disorders with an overall failure rate estimated to be higher than
95% (Schneider et al., 2014). Companies targeting problematic therapeutic indications such as
Alzheimer’s Disease are recognizing the advantages of pooling their clinical data sets and
participating in public-private partnerships (FDA, 2006c), such as the Critical Path Institute’s
Coalition against Major Diseases (CAMD), described in more detail below.
It is a challenge, however, to gain access to clinical data that is owned by a different entity.
The owners of certain datasets may wish to restrict access for many reasons. Privacy
regulations and concerns limit the ways in which personal health information can be shared.
Further, some organizations and academic units keep data private so that their research
teams can conduct analyses and publish on interesting data to which they have
exclusive access (Olson & Downey, 2013). This private access to data can open the door to
particular types of grant funding or enhancements in professional reputation.
It is not difficult to understand why for-profit organizations would want to protect the clinical
data that cost so much to generate. However, data sharing is important to advance the research
environment. The pooled data from multiple trials can extend clinical discoveries beyond
those derivable from any single study (Olson & Downey, 2013). This can improve the
accuracy of research and the risk/benefit analysis of treatment options. Moral arguments for
data sharing are also made. These center on fulfilling obligations to research participants,
minimizing safety risks, and honoring the nature of medical research as a public good (Olson
& Downey, 2013). Thus, considerable effort is currently invested to establish rules and
pathways for clinical data-sharing. In Europe, for example, recent regulatory steps have been
taken to require the publication of all clinical data for human medicinal products. The
objectives of EMA’s Policy/0070, adopted in January of 2015, rest on the belief that broader
dissemination of clinical data will foster greater transparency, permit public scrutiny, and
facilitate future research, all in the interest of public health (EMA, 2015). To this end, under
the Eudravigilance access policy, the EMA is creating a database of clinical data submitted for
regulatory review that can be accessed at a general level by any member of the public.
Additionally, this data repository may also be accessible at a more granular level by
authorized research groups (EMA, 2016). In the US, companies are also sharing clinical
datasets associated with approved drugs, but this sharing is often carried out through a
third-party intermediary or a consortium. Work done by Griffin offers additional insights
into industry’s views, which vary from positive to guarded, on disclosing clinical data from
privately sponsored clinical research (Griffin, 2017).
2.2.4.2 Developing Consistent Standards for Data Coding and Interchange
A related source of concern has been the poor consistency of data in different data
repositories, if indeed an electronic repository has even been created. Not all data are archived
in a repository. Thus, an important first step to modernizing the shared use of data is a move
from paper records, as noted in the 2009 report of CPI titled, “Leaving the Paper-Based World
Behind — Creating an All-Electronic Environment for Managing Data on FDA-Regulated
Products” (FDA, 2009a). The CPI report pointed out that the FDA had depended on a paper-
based infrastructure for nearly a century. However, FDA expressed concern that its legacy
methods could no longer support the demands of a globalized economy or the increasingly
sophisticated analysis required for innovative therapies. FDA recognized that a more
automated approach to harnessing information technologies is now needed to manage the “huge
amounts of data” it receives for regulated products (FDA, 2009a).
However, it is not enough that data be captured in electronic form. All data in a simulation or
meta-analysis must use the same coding convention regardless of the database from which it
originates. In the past, data collected for clinical trials often was coded using home-grown
coding methods. Even the code for an attribute as simple as male and female could vary from
one database to another. In some databases, for example, a male might be coded as 1 and a
female as 2. In another database, the male is coded as M and the female as F. Something as
simple as a date format can become problematic when attempting to aggregate data from
different regions where standards vary (Zuazo, Sjogren, & Hurley, 2012). Unless data are
captured in a common coding language, they cannot be meshed and then sub-stratified (FDA,
2010c).
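The practical remedy is a translation layer that maps each legacy coding onto a single convention before datasets are pooled. The minimal sketch below uses CDISC-style target values for sex and ISO 8601 for dates; the legacy codings are hypothetical examples of the kind described above.

from datetime import datetime

# Map legacy sex codings (1/2 in one database, M/F in another) to one standard.
SEX_MAP = {"1": "M", "2": "F", "M": "M", "F": "F"}

def harmonize_sex(raw):
    return SEX_MAP[str(raw).strip().upper()]

def harmonize_date(raw, source_format):
    # A string such as "03/04/2012" is ambiguous on its own, so each source
    # must declare its regional format before conversion to ISO 8601.
    return datetime.strptime(raw, source_format).date().isoformat()

print(harmonize_sex(1), harmonize_sex("f"))       # M F
print(harmonize_date("03/04/2012", "%d/%m/%Y"))   # 2012-04-03 (European source)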
A recent important step in assuring a common language for shared data repositories has been
the introduction of clinical data standards. This effort was led by a multi-partner organization
called the Clinical Data Interchange Standards Consortium (CDISC), established in 1997 as a
global, non-profit consortium by member organizations from academia, biopharmaceutical and
device companies, technology and service providers and other companies. It formalized a
mission “to develop and support global, platform-independent data standards that enable
information system interoperability to improve medical research and related areas of
healthcare” (Kush, 2012). These standards for acquiring, exchanging, submitting, and
archiving clinical research data and metadata are freely available at the CDISC website
(CDISC, 2017), and constitute a data language that is vendor-neutral and platform-independent.
Perhaps the most well-known and accepted of the CDISC standards is the Study Data
Tabulation Model (SDTM) that defines a standard structure for study data tabulations
submitted as part of a marketing application to a regulatory authority (CDISC, 2006). The
FDA does require the use of several CDISC data standards in clinical submissions (FDA,
2017f). Some of these and other standards under development or currently in use are included
in Table 2, below.
Table 2: CDISC – Standards for Clinical Data
Elements reproduced from the CDISC website, (CDISC, 2017).
CDISC Standard and Purpose

Analysis Data Model (ADaM): ADaM defines dataset and metadata standards that support:
• efficient generation, replication, and review of clinical trial statistical analyses, and
• traceability between analysis results, analysis data, and data represented in the
Study Data Tabulation Model (SDTM).

Clinical Data Acquisition Standards Harmonization (CDASH): CDASH establishes a standard
way to collect data in a similar way across studies and sponsors so that data collection
formats and structures provide clear traceability of submission data into the Study Data
Tabulation Model (SDTM), delivering more transparency to regulators and others who
conduct data review.

Define-XML: Define-XML transmits metadata for SDTM, SEND and ADaM datasets; it
is the metadata file sent with every study in each submission, which tells
the FDA what datasets, variables, controlled terms, and other specified
metadata were used.

Standard for Exchange of Nonclinical Data (SEND): SEND is an implementation of the SDTM
standard for nonclinical studies. SEND specifies a way to collect and present nonclinical
data in a consistent format.

Study Data Tabulation Model (SDTM): SDTM provides a standard for organizing and formatting
data to streamline processes in collection, management, analysis and reporting. Implementing
SDTM supports data aggregation and warehousing; fosters mining and reuse; facilitates
sharing; helps perform due diligence and other important data review activities; and improves
the regulatory review and approval process. SDTM is also used in non-clinical data (SEND),
medical devices and pharmacogenomics/genetics studies.

Controlled Terminology: CDISC Controlled Terminology is the set of CDISC-developed or
CDISC-adopted standard expressions (values) used with data items within CDISC-defined
datasets. The CDISC Terminology Team, in collaboration with the National Cancer Institute's
Enterprise Vocabulary Services (EVS), supports the controlled terminology needs of all CDISC
Foundational Standards (SDTM (Drugs and Devices), CDASH, ADaM, SEND) and all CFAST
disease/therapeutic area standards.
The use of CDISC standards has expanded through partnerships with other organizations that
have adopted the CDISC tools for specific applications. For example, the CDISC PGx
(Pharmacogenomics and Pharmacogenetics) standard was developed as a joint effort between
CDISC and the HL7 Clinical Genomics Work Group, to code genetic information in a
standardized format. The Critical Path Institute participates actively with CDISC to develop
other types of data standards such as those for Alzheimer’s disease.
2.3 Evolution of Regulatory Approaches in the US
In the US, the primary regulator of medical products is the FDA. FDA’s mandate is
“to advance overall public health by helping to speed innovations that make
medicines more effective, safer, and more affordable; and thereby helping the
public get the accurate, science-based information they need to use medicines to
improve their health” ("Food Drug and Cosmetic Act," 1938).
However, FDA has often been criticized because of the slow pace with which it has
incorporated new tools and techniques that might advance these objectives more efficiently
(Manolis et al., 2013). FDA had typically been cautious in its review of clinical trial dossiers
that include not only data from traditional in-human trials but also data from CM&S (Viceconti
et al., 2015).
However, that view is changing. The potential advantages of data-sharing are now promoted by
the regulatory agencies. The FDA has expressed the view that combined datasets provide a
rich scientific resource (FDA, 2014f). At the same time it recognizes that the process of
gathering, pooling, and curating datasets is extremely resource intensive, so that the limited
public and private resources to advance their use should be focused on the most pressing
regulatory science questions. FDA acknowledged that it has historically attempted to apply
knowledge gained from analysis of pooled data to improve drug development and review, but
this analysis could benefit from additional external expertise such as that contained in specialized
consortia (FDA, 2014f).
Historically, one of the first steps to facilitate the broader acceptance and use of simulation
techniques appears to have come from an unexpected driver: the passage of the “Interagency
Coordinating Committee on the Validation of Alternative Methods (ICCVAM) Authorization
Act of 2000” ("ICCVAM Authorization Act," 2000) (See Figure 6, below). This act was
inspired by concerns about the overuse of animals in research but responded more generally to
a perceived need to modernize test methods and review processes. The act created a high-level
group, the ICCVAM, composed of the heads of more than 15 different federal agencies,
including the Environmental Protection Agency, Food and Drug Administration, National
Institutes of Health, and the National Cancer Institute. The ICCVAM was charged with
finding ways to promote the 3Rs of toxicity testing, i.e., to “reduce, refine, or replace the use of
animals in testing, where feasible” while also increasing “the efficiency and effectiveness of
Federal agency test method review” ("ICCVAM Authorization Act," 2000). It further
instructed the Director of the National Institute of Environmental Health Sciences to establish a
Scientific Advisory Committee (SAC) that could advise the ICCVAM and the National
Toxicology Program Interagency Center for the Evaluation of Alternative Toxicological
Methods.
This act had limitations. One clause of the Act, Sec. 4(e), was of particular concern because it
set into place a potential hurdle to the implementation of new test methods. Under that section, a
test recommendation could be rejected if it did not at least match the capabilities of an existing
method. Specifically, it stated that:
“(1) the ICCVAM test recommendation is not adequate in terms of biological
relevance for the regulatory goal authorized by that agency, or mandated by
Congress;
(2) the ICCVAM test recommendation does not generate data, in an amount and of
a scientific value that is at least equivalent to the data generated prior to such
recommendation, for the appropriate hazard identification, dose-response
assessment, or risk assessment purposes as the current test method recommended
or required by that agency;” ("ICCVAM Authorization Act," 2000)
The Act goes on to allow agencies to reject a new method if:
“(4) the ICCVAM test recommendation is unacceptable for satisfactorily fulfilling
the test needs for that particular agency and its respective congressional mandate.”
("ICCVAM Authorization Act," 2000)
Thus, a test that merely matched, rather than improved upon, existing capabilities to assess
safety and efficacy might be rejected. The legislation nevertheless opened a door for new
computer-based M&S to play an expanded role in reducing the use of animals and humans.
Figure 6: Timeline of FDA Regulatory Milestones related to Clinical Simulation
2.3.1 FDA’s Challenges and Opportunities Report - March 2004
A first substantive step toward exploring the use of new computer-based tools was captured in
a 2004 position paper titled, “Innovation or Stagnation: Challenge and Opportunity on the
Critical Path to New Medical Products” (FDA, 2004). This landmark analysis systematically
identified the problems associated with FDA’s mission to speed innovations (Figure 8). It
conceptualized two areas of focus particularly salient to this study: 1) it defined a new
approach, called the Critical Path, to enhance the biomedical development process, and 2) it
expanded on the Pipeline Problem that stems from impediments in the preapproval regulatory
path.
Figure 7 illustrates a simplified developmental path followed by medical products (FDA,
2004). Novel biomedical solutions undergo a series of successively more rigorous evaluations
first in animals, and then in humans, to increase confidence in the safety and efficacy of the
new product.
Figure 7: The Critical Path for Medical Product Development
Reproduced from “Challenges and opportunities report - March 2004, innovation or stagnation:
Challenge and opportunity on the critical path to new medical products”, (FDA, 2004).
In the 2004 report, the FDA acknowledged that drug development is plagued by inefficiencies
and failures. It noted that a disappointingly low proportion, about 8%, of candidates entering
preclinical development survived preclinical and clinical testing. A drug entering Phase 1
trials in 2000 appeared no more likely to reach the market than one entering in 1985 (FDA, 2004).
Furthermore, the FDA acknowledged that the technological advances made by biomedical
research had not been able to improve the ability to predict successful candidates. FDA
outlined three dimensions where bottlenecks and challenges might be faced (Figure 8) and
provided further definitions and examples of activities associated with each of the three
dimensions (Table 3).
Figure 8: Three Dimensions of the Critical Path
Reproduced from “Innovation or Stagnation: Challenge and Opportunity on the Critical Path to
New Medical Products”, (FDA, 2004).
Table 3: Three Dimensions of the Critical Path
Elements reproduced from “Innovation or Stagnation: Challenge and Opportunity on the
Critical Path to New Medical Products”, (FDA, 2004).
Dimension / Definition / Examples of Activities

Assessing Safety: show that the product is adequately safe for each stage of development.
• Preclinical: show that product is safe enough for early human testing
• Eliminate products with safety problems early
• Clinical: show that product is safe enough for commercial distribution

Demonstrating Medical Utility: show that the product benefits people.
• Preclinical: select appropriate design (devices) or candidate (drugs) with high
probability of effectiveness
• Clinical: show effectiveness in people

Industrialization: go from lab concept or prototype to a manufacturable product.
• Design a high-quality product (physical design, characterization, specifications)
• Develop mass production capacity (manufacturing scale-up, quality control)
The first dimension, ensuring product safety, is a mandatory prerequisite for which much
animal and human evidence is needed. The second, demonstrating medical utility, requires
evidence of effectiveness that can only be obtained by conducting trials in human populations.
The third dimension, industrialization, requires that discovery is translated into design
requirements and specifications, and quality is instilled into manufacturing processes. All
three of the critical path dimensions present unique challenges that ultimately contribute to
delays in the biomedical pipeline.
The 2004 report called for a greater focus on critical path research. In its view, shown in
Figure 9, critical path research can be differentiated from basic research (the fundamental
understanding of biology and disease processes) and translational research (the work needed to
move basic discoveries into clinical evaluation). Critical path research attempts to
improve the product-development process across a wider continuum, with a focus on new tools
and approaches for evaluating safety and efficacy. New tools and approaches might include,
for example, assays, standards, computer modeling techniques, biomarkers, and clinical trial
endpoints that can make the development process more efficient and more likely to result in
safe, efficacious products that benefit patients.
Figure 9: Research Support for Product Development
Reproduced from “Innovation or Stagnation: Challenge and Opportunity on the Critical Path to
New Medical Products”, (FDA, 2004).
The 2004 Challenges and Opportunities Report attempted to identify approaches that could
reduce the pipeline problem. Six areas of priority appeared to offer significant opportunities
for critical path research: 1) assuring better evaluation tools; 2) streamlining clinical trials; 3)
harnessing bioinformatics; 4) modernizing manufacturing; 5) developing approaches to address
urgent public health needs; and 6) assisting specific at-risk populations. It recognized
explicitly that bioinformatics and computer modeling could help to predict issues of safety and
efficacy, a conclusion supported by research from external entities. For example, a study by
PricewaterhouseCoopers estimated that such methods had the potential to reduce the cost of
drug development by as much as 50 percent (PricewaterhouseCoopers, 1999).
As a part of the Critical Path Initiative (CPI), FDA was required to inform Congress about the
direction and progress of its efforts to streamline biomedical product development. In 2006, it
published the first of its annual reports, titled “Critical Path Opportunities Initiated During
2006”, that listed more than 40 Critical Path collaborations and research activities in which it
participated (FDA, 2006b). It also released its “Critical Path Opportunities List” (FDA,
2006c) that identified 76 examples of areas such as genomics, imaging, and informatics that
might strengthen areas of perceived weakness related to the objectives of FDA. The examples
spanned the critical path spectrum to include tools such as new biomarkers for animal
toxicology, disease models to improve clinical trials and new imaging biomarkers in
cardiovascular and neurocognitive diseases. Amongst the 76 opportunities listed in the 2006
report were some that were germane to the increased use of simulation in biomedical product
development. Most were listed under two of the areas of focus, Topics 2 and 3, that addressed
streamlining clinical trials and harnessing bioinformatics, respectively, as shown in Tables 4
and 5 below.
Table 4: 2006 Critical Path Initiative Opportunities for Streamlining Clinical Trials
(Topic 2)
Elements reproduced from “Critical Path Opportunities List”, (FDA, 2006c).
(#44) Development of Data Standards: CDISC is paving the way by developing its Study Data
Tabulation Model for describing observations in drug trials. That model could someday
encompass observations needed for other types of trials. Standardizing data archiving
conventions would also enable the creation of shared data repositories, facilitating
meta-analyses, data mining, and modeling to improve clinical trial design and analysis.

(#46) Identification and Qualification of Safety Biomarkers: Collaborative efforts to pool
and mine existing safety and toxicology data would create new sources for identification and
qualification of safety biomarkers. For example, a robust database of preclinical and clinical
data on cardiac arrhythmic risk could help us understand the clinical significance of QT
interval prolongation, reduce the need for clinical studies, and, possibly, help identify
individuals who are at risk for this side effect.

(#48) Adverse Event Data Mining: Combining adverse event data related to a product, a class
of products, or a disease could enable identification of previously undetected patterns of
safety events and/or comorbidities and could elucidate drug-drug interactions.

(#53) Natural History Databases for Rare Diseases: Many rare diseases are hard to study due
to both the difficulty in enrolling subjects and the long duration of clinical trials.
Databases recording the natural history of patients with rare diseases, incorporating
observations on clinical progression and biomarkers, could assist in creating disease models
and better designing clinical programs and, possibly, contribute virtual historical control
groups.
Table 5: 2006 CPI Opportunities for Harnessing Bioinformatics (Topic 3)
Elements reproduced from “Critical Path Opportunities List”, (FDA, 2006c).
(#47) Virtual Control Groups in Clinical Trials: Databases, models, and/or imaging
collections could be used by multiple sponsors across different product types as historical
controls to reduce the necessary size of control groups in clinical trials… These techniques
would also be of special benefit in instances when use of placebos is infeasible or
unethical. Trusted third parties could be used to hold data or images and create an open
source library. For example, today it is impossible to test a new drug as monotherapy in
epilepsy. Patients need to maintain existing therapies, so new therapies can only be studied
in combination with existing drugs. Use of historical controls might enable sponsors to
demonstrate effectiveness of a new drug as monotherapy if the data could be assembled and
rigorously analyzed.

(#51) Clinical Trial Simulation: Clinical trial simulation, using in-silico modeling, can
predict efficient designs for development programs that reduce the number of trials and
patients, improve decisions on dosing, and increase informativeness. Clinical trial
simulation requires the development of a disease model, with subsequent integration of
information on the investigational product. Such models could also help refine some of the
innovative trial designs described in Topic #2, above. (FDA, 2006c)
2.3.2 FDA’s Early Electronic Database Tools: Medwatch
An early effort by FDA to promote the innovative computer-based management of data was
its initiative to modernize the Medwatch program. Introduced in the 1990s, the Medwatch program
required manufacturers and health care organizations to report serious adverse events
associated with drugs and devices. These adverse event reports were uploaded into a
searchable database from which patterns could be recognized and conclusions could be drawn
(FDA, 2008). This database housed records of about 2 million injuries and 100,000 deaths of
U.S. residents attributed to drug adverse events (Lazarou, Pomeranz, & Corey, 1998); the
comparable rates of adverse events associated with the use of medical devices were also
captured (FDA, 2006d).
However, the Medwatch system had its critics. Low rates of reporting were compounded by
concerns that problems with specific products could only be identified retrospectively after an
unacceptably long lag. To counter these criticisms, FDA proposed to introduce a new
electronic system, the Sentinel Initiative, designed to take advantage of database mining and
analysis. The FDA Amendments Act (FDAAA) of 2007 strengthened the resolve to create a
new program by providing FDA with additional requirements, authorities, and resources. The
FDA launched the Sentinel Initiative in May 2008. Its ambitious strategy was to create a
national safety system capable of tracking the performance of a medical product throughout its
life-cycle by using data obtained from databases held privately by companies, insurers, health
care groups and government sources such as Medicare. An improved, real-time system was
predicted to detect problems earlier, and then test hypotheses about possible causal factors
responsible for safety problems in the populations using the products (FDA, 2008). The
Sentinel Initiative comprises several large-scale projects that have been phased in since its
launch in 2008, many of which had not yet been completed at the time of writing. FDA believes
the effort to be a success, exceeding the legislation’s goal of achieving secure access to data
from 100 million patients by July 1, 2012. In fact, as of December 2011, FDA reported secure
access to data concerning approximately 126 million patients nationwide derived from 17
different data partners (Nguyen, 2014).
The success of the Sentinel Program changed the nature of pharmacovigilance from a
“passive” to an “active” surveillance system. Passive systems depend on industry, consumers,
patients, and healthcare professionals to be able to recognize and report suspected adverse
events to an FDA website, such as the Vaccine Adverse Event Reporting System (VAERS).
Passive reporting systems can delay the recognition of potential problems related to a licensed
product by months. Sentinel’s active surveillance lets FDA initiate its own studies much more
quickly by gaining access to existing electronic healthcare data from multiple sources. It also
lets FDA evaluate safety issues in targeted groups, such as children, or to evaluate specific
conditions, such as cardiac infarction, that are not usually reported as possible adverse events
of medical products through passive reporting systems. The Post-licensure Rapid
Immunization Safety Monitoring System (PRISM) is one component of FDA’s Sentinel
Initiative, which monitors the safety of a variety of FDA-regulated medical products by
analyzing information in electronic healthcare databases. One PRISM study, for example,
examined clinical outcomes subsequent to the administration of more than 1.4 million doses of
Gardasil and found no evidence of venous thromboembolism among females 9 to 26 years old,
thus dispelling a publicly expressed concern that could have derailed the vaccination program.
FDA has also used PRISM to identify a link between a rotavirus vaccine (RotaTeq) and an
increased risk of intussusception in infants (Shoaibi, 2017).
However, the Sentinel effort proved to be a more formidable task than initially anticipated.
The FDA quickly recognized the dual challenges of establishing access and mating disparate
coding schemes when it attempted to use the databases of insurers and other players to identify
adverse events. Some form of translation to a common language had to be performed before
the meshed data could be used (FDA, 2010c). However, it was useful in helping the FDA to
recognize how expensive and demanding such an undertaking can be and to understand better
the infrastructural requirements.
71
2.3.3 Investing in Clinical Trial Simulation
Much of the promise that FDA saw in data mining and simulation was limited by its modest
resource base. Unlike a granting agency such as NIH, FDA had relatively little capability to
fund projects directly. Instead, it saw its role as a single but important stakeholder, “uniquely
positioned to provide national leadership in this effort”. It therefore directed much of its
energy toward identifying key areas of importance and establishing multi-stakeholder
partnerships. Accordingly, lawmakers recommended that $6M of the $18M earmarked for the
Critical Path Initiative (CPI) should be directed at Critical Path partnerships. The CPI’s 2010
report (FDA, 2010a) to Congress identified certain programs that then received funding under
this envelope (Appendix E).
Highlights of the 2010 budget included funding for more than 170 CPI projects. Of these
funds, $2,934,323 supported six innovative projects on tuberculosis, and another $10,720,065
funded CPI partnerships that, in accordance with section 566 of the FD&C Act, included
approved classes of institutions including educational organizations and tax-exempt
organizations operated for religious, charitable, scientific, literary, or educational purposes
("Food Drug and Cosmetic Act," 1938). Additionally, FDA managed a modest extramural
program directed at funding targeted projects in areas of regulatory science priority as
discussed earlier. A full listing of supported centers and projects are available on the FDA
website (FDA, 2017e).
2.3.4 Simulation Tools Development – Clinical Disease Simulator
One particularly significant achievement to come out of an FDA-linked partnership was that
with the Critical Path Institute (C-Path) (CPI, 2017) that became engaged in the development
72
of tools (listed on its website, c-path.org) to strengthen quantitative approaches to
pharmacology and clinical trials. Its Coalition Agaist Major Diseases (CAMD), for example,
developed the first tool to be qualified by FDA. This tool, the Clinical Trial Simulation Tool
for Alzheimer’s Disease, was designed to support clinical trial design for mild and moderate
AD, using ADAS-cog as the primary cognitive endpoint. Based on a drug-disease-trial model,
it describes disease progression, drug effects, dropout rates, placebo effect, and relevant
sources of variability (Table 6).
73
Table 6: Demographic fields for CAMD’s Clinical Trial Simulation Tool for
Alzheimer’s Disease
Reproduced from “CAMD’s Clinical Trial Simulation Tool for Alzheimer’s Disease”, (CPI,
2017) with permission.
The tool was not intended as a stand-alone approach for the approval of medical products but
rather was designed to work in concert with well conducted trials in real patients. By
combining these approaches, the size and cost of the trials in real patients could be reduced
(Romero, Corrigan, Neville, Kopko, & Cantillon, 2011). The goal of the proposed simulation
tool is to serve as a public resource for sponsors designing clinical trials for Alzheimer’s
Disease (AD), and thus is available publicly on its website (CPI, 2017).
The Alzheimer’s disease simulator is important because it presages a new approach not only to
the development but also the validation of a simulation tool that can provide a prototype
process for other follow-on tools. The Alzheimer’s Disease Simulator represents the first case
74
in which a simulation tool has been “qualified”. The process of qualification by a regulatory
agency is designed to assure reviewers and scientists who are considering the application of a
simulation tool that it is fit for use in a certain situation, or context of use (COU). In its
guidance documents, the FDA states that “qualification is a conclusion that within the stated
COU, the DDT can be relied on to have a specific interpretation and application in drug
development and regulatory review” (FDA, 2014a) (FDA, 2017d). It helps to avoid the
problem that a drug reviewer can face when a submission contains a simulation or model with
which they are not familiar.
Until June of 2013, no simulation tools had been qualified by the FDA. Thus, the Alzheimer’s
Disease Simulator now might serve as a model for the subsequent development of other
qualified methods (Corrigan, 2013). Experience with this qualification process presumably
helped to lay a foundation for the new guidance on tool qualification from the FDA, “Guidance
for Industry and FDA Staff Qualification Process for Drug Development Tools” (FDA,
2014a), released in 2014. This guidance was later updated in 2017 (FDA, 2017g) to defined
Drug Development Tools (DDTs) as tools that include, but are not limited to, biomarkers,
clinical outcome assessments (COAs), and animal models for drug development under the
Animal Rule (FDA, 2015).
CDER took a further step in December 2016, by publishing a more specific guidance,
“Physiologically Based PK Analyses - Format and Content, Guidance for Industry” (FDA,
2016b). This guidance outlines the recommended format and content for a sponsor to submit
physiologically based pharmacokinetic (PBPK) models to the FDA to support regulatory
applications. It addresses what is expected in terms of an overview of Modeling Strategy,
75
Modeling Parameters, Simulation Design, Electronic Files and Other Documentation,
Software, Results, Model Verification and Modification, as well as Model Application (FDA,
2016b).
Additionally, in December 2016, Congress passed the 21st Century Cures Act which amended
the Federal Food, Drug, and Cosmetic Act to add section 507, Qualification of Drug
Development Tools (DDTs), and to establish a multistage process for DDT qualification, as
follows:
Subtitle B, Qualification and Use of Drug Development Tools
(Sec. 2021), the FDA must establish a process to qualify drug development tools
(methods, materials, or measures that aid drug development and regulatory review)
as reliable for use in supporting approval or investigational use of a drug.
(Sec. 2022) The sponsor of a drug for a serious condition may request that the
FDA agree to an accelerated approval development plan. The plan must include the
design of the drug study. ("21st Century Cures Act," 2016)
2.4 Other Initiatives - CDRH
FDA’s Centers responsible for drug development were not the only ones developing guidance
for computer-based tools. The Center for Device Evaluation and Radiological Health (CDRH)
was an early proponent for tools that modeled physical structures such as anatomical parts as
well as electronic database and simulation tools. Its Medical Device Development Tools
(MDDT) program was promoted as a way for the FDA to qualify tools that medical device
sponsors could use to develop and test devices (FDA, 2014b). Initially the FDA’s intent was to
limit the MDDT Pilot Program to no more than 15 candidates, and defined three categories: 1)
Clinical outcome assessments: subjective measures of how a patient feels or functions, such as
patient-reported or clinician-reported rating scales such as the NIH stroke scale; 2) Biomarker
76
tests: laboratory, imaging and other objective tools to detect or measure an indicator of
biologic processes or pharmacologic responses to a treatment (biomarker); and 3) Nonclinical
assessment models: nonclinical test methods or models that simulate device function or
performance in a living organism.
To support the MDDT program, FDA initially released the draft guidance titled “Medical
Device Development Tools - Draft Guidance for Industry, Tool Developers, and Food and
Drug Administration Staff” (FDA, 2014c). This guidance has since been reviewed, updated
and released as a final draft entitled “Qualification of Medical Device Development Tools -
Guidance for Industry” (FDA, 2017d). In addition to redefining the types of MDDTs in which
the FDA was interested, it also differentiated various contexts in which an MDDT is likely to
be used, for example to aid diagnosis, patient selection, choice of clinical endpoints or
nonclinical device assessment (FDA, 2017d).
Some of the initiatives of CDRH, like those of CDER, were aimed at “qualifying” the use of a
particular tool. CDRH defined qualification as:
…a conclusion that within a specified context of use, CDRH expects that the results
of an assessment that uses an MDDT can be relied upon to support device
development and regulatory decision-making (CDRH, 2013).
Context of use defined the boundaries within which the available data would support use of the
MDDT. Context of use is defined in part by 1) the device or product area for which the
MDDT is qualified, 2) the stage(s) of device development (e.g., early feasibility study, pivotal
study, etc.), and 3) the specific role of the MDDT (FDA, 2017d). Once qualified, results based
on using an MDDT would be considered reliable to support regulatory decision-making within
the qualified context of use. This would lift the burden on CDRH review staff to reconsider
77
and reconfirm the suitability of that MDDT. In some instances, an MDDT may have already
been addressed in an FDA-recognized consensus standard, meaning that it had already been
assessed in a manner similar to the MDDT qualification process and might not need
requalification. The guidance document also gives industry a detailed framework for
submitting a qualification package in order to facilitate the qualification process. That
framework includes a description of the MDDT, its context of use, the strength of the evidence,
the advantages and disadvantages of its use and a consent for the MDDT to be disclosed and
used by others. The guidance goes on to outline the CDRH qualification process, procedures
for submitting MDDTs and its related documents, and a preferred template and contents for the
cover letter.
A comparison between the initiatives of CDER and CDRH (Table 7) shows a strongly parallel
approach that is colored by the different natures of their respective product classes. CDER’s
“Guidance for Industry and FDA Staff Qualification Process for Drug Development Tools”
had a similar goal to that of the MDDT program, in that both include tools such as biomarkers,
Clinical Outcome Assessments (COAs), and animal models for drug development under the
Animal Rule (FDA, 2015).
78
Table 7: Comparison of Drug Development Tool (DDT) Definitions and Terms vs
Medical Device Development Tool (MDDT)
Elements reproduced from “Guidance for Industry and FDA Staff Qualification Process for
Drug Development Tools”, (FDA, 2014a), and “Qualification of Medical Device Development
Tools - Guidance for Industry”, (FDA, 2017d).
DDT MDDT
Definition
Drug development tools (DDTs) are
methods, materials, or measures that
aid drug development.
A Medical Device Development Tool
(MDDT) is a scientifically validated tool –
a clinical outcome assessment (e.g. patient-
reported or clinician-reported rating scales),
a test used to detect or measure a biomarker
(e.g. assay for a chemical analyte or medical
imaging method), or non-clinical
assessment method or model (e.g. in vitro,
animal or computational model) - that aids
device development and regulatory
evaluation.
Types of Tools
1. Biomarkers
2. Clinical Outcome Assessments
3. Animal Models
1. Clinical outcome assessment
2. Biomarker Test (BT)
3. Nonclinical Assessment Model (NAM)
Qualification
(Definition)
Qualification is a conclusion that
within the stated COU, the DDT can be
relied on to have a specific
interpretation and application in drug
development and regulatory review.
“a conclusion that within a specified context
of use, CDRH expects that the results of an
assessment that uses an MDDT can be
relied upon to support device development
and regulatory decision-making.”
The qualification
process
consists of three stages:
(1) an initiation stage,
(2) a consultation and advice stage, and
(3) a review stage for the qualification
determination.
A qualification decision involves a
consideration of:
(1) the specified context of use;
(2) the strength of available evidence
supporting the MDDT (including tool
validity, plausibility, etc); and
(3) an assessment of the advantages and
disadvantages of relying on assessments
using the MDDT within the specified
context of use.
Context of Use
(COU)
(Definition)
The COU is a complete and precise
statement that describes the appropriate
use of the DDT and how the qualified
DDT is applied in drug development
and regulatory review. The COU
statement would describe all important
criteria regarding the circumstances
under which the DDT is qualified.
Context of use refers to a key aspect of
qualification. This use is defined in part by
the device or product area for which the
MDDT is qualified, the stage of device
development, and the specific role of the
MDDT (for clinical uses, this includes the
study population or disease characteristics,
as well as specific use – diagnosis, patient
selection, clinical endpoints). The context of
use defines the boundaries within which the
MDDT is qualified.
contexts
The COU is a complete and precise
statement that describes the appropriate
use of the DDT and how the qualified
DDT is applied in drug development
and regulatory review.
1. Aid in Diagnosis
2. Patient Selection
3. Clinical Endpoints
4. Non-clinical Device Assessment
79
2.5 Comparative Regulatory Approaches in Europe
The US was not the only regulatory organization to understand that better tools were needed to
facilitate the product development path. Other regulatory agencies, and particularly the
European Medicines Agency (EMA), have also been influential in shaping the thinking about
clinical trial simulation in a global environment where most products are marketed in the EU as
well as US. One of its seminal efforts was the 2010 “Reflection Paper on the Expectations for
Electronic Source Data and Data Transcribed into Electronic Data Collection Tools in Clinical
Trials” (EMA, 2010). In this work, the EMA began to identify the infrastructure that would be
required for leveraging simulation techniques and noted the importance of standardizing the
source data collected during clinical trials, a concern described earlier in the discussion of
consistent standards for data coding and interchange. It outlined the types of electronic data
and systems that are typically necessary to collect appropriate data from clinical trials-
Electronic Case Report Forms (eCRFs); electronic patient data regarding Patient Reported
Outcomes (PRO); instrumentation-derived data such as ECGs, tests, scans, and images; and
Electronic Health Records (EHRs) (EMA, 2010).
It is important to note that, unlike the FDA, the EMA does not currently use data in analytics.
However, in 2011 the EMA broadened its considerations of simulation by releasing the
harmonized views of experts across a number of European regulatory agencies as well as those
of the EMA in a document titled the “Role of Modelling and Simulation in Regulatory
Decision Making in Europe” (EMA, 2011b). The report suggested several methods to
decrease failures in clinical trials, maximize the information obtained from limited patient
80
numbers in areas where testing was unusually difficult (e.g. pediatrics, orphan drugs) and
establish mechanistic models for drug-drug interactions (DDIs), pharmacogenetic effects, PK,
PD and safety analyses. The experts encouraged EMA to establish a framework for using
simulation in regulatory reviews, centered around a risk-benefit approach in which simulation
results were judged to have high, medium, or low impact on the regulatory decision (EMA,
2011b) (see Appendix F). This approach is similar in some ways to that advocated later by the
DDT (FDA, 2014a) and MDDT (FDA, 2017d) guidance documents that the FDA subsequently
released.
In 2012, the European Medicines Agency organized a modeling and simulation workshop
attended by about 190 participants, including 50 from the regulatory agencies such as the US
FDA and the Japanese PMDA. Many other participants belonged to the European Federation
of Pharmaceutical Industries and Associations (EFPIA) Clinical Development Committee
(CDC). The organizers subsequently released the “EFPIA-EMA Modelling and Simulation
Workshop Report” (EMA, 2012). Particularly relevant to the research to be conducted in this
dissertation was its content related to CTS. It suggested that M&S (particularly CTS) is
underused for regulatory submissions, in part perhaps because industry has misperceptions
about the interest and competence of regulatory reviewers with such methods, or because they
view mechanistic PD and safety models as immature compared to PK models. Participants felt
that the powerful tools of CTS should be integrated across the drug development and
regulatory continuum and not just confined to the exploratory setting. They further expressed
the view that regulators should not set requirements for high impact CTS greater than those for
standard statistical testing. Rather, the level of regulatory scrutiny applied to a particular M&S
81
exercise should reflect the impact of that exercise on the gravity of the regulatory decision and
its effect on product labeling. The report also highlighted the need to harmonize on good M&S
practices, and the importance of sharing data, models, and concepts in the precompetitive
space.
EMA followed its 2011 M&S workshop (EMA, 2012) with workshops to explore both the
advantages and challenges associated with the use of M&S tools. The first, EMA-EFPIA
Modelling and Simulation Workshop, 30 Nov – 1 Dec 2011, held in London based its work on
two examples of studies in which M&S was used to adjust drug dosages for patients within a
target study range. In both examples, the trial simulations identified that the dosing schemes
assured good efficacy and manageable safety risk. Ultimately, those new dosing schemes and
dose adjustments were approved by both EMA and FDA (EMA, 2012).
The second follow-on workshop explored situations in which the dossiers containing simulated
results that were presented to the regulators did not produce such positive outcomes. Its report,
titled “Modelling and Simulation Examples That Failed to Meet Regulator's Expectations”
(EMA, 2011a), evaluated four studies that used population PK models. Typically, the studies
were not seen to be inappropriate or inadequate, but rather were poorly documented or
analyzed. The quality of the reports was judged to have insufficient detail, to be missing
information or to be poorly balanced by reporting irrelevant information but neglecting
important information. The report enumerated a number of final recommendations regarding
the need for sufficient attention to detail, and for critical analysis of the results in several
domains, such as the limitations, assumptions and fit of the model (Table 8).
82
Table 8: Recommendations to meet regulator's expectations
Elements reproduced from “Modelling and Simulation Examples that Failed to Meet
Regulator's Expectations”, (EMA, 2011a).
Final recommendations included:
Reports should be
sufficiently detailed
- include relevant components
- model qualification very important
- enable a secondary evaluation by a regulatory assessor
Companies should
critically assess the
analyses
- How well does the final model describe the data?
- What are the limitations of the analysis?
- Are assumptions discussed and justified?
- What is the clinical relevance of covariate influences?
- How well do the results agree with previously obtained information?
- How will the results of the analysis be used (e.g support labelling, for dose
individualization, for optimizing future studies)?
The work of both the FDA and the EMA prior to about 2012 appeared to promote more
welcoming policies toward simulation. Both agencies identified that simulation held value in
replacing some elements of the clinical trial package needed for product approval. However,
no policy, however well-conceived, will be successful if it is not implemented effectively.
Implementation science is an art in itself, often divided into stages that include exploration,
installation, initial implementation, full implementation, innovation and sustainability (Fixsen
et al., 2009). The regulatory agencies appeared by about 2010 to be transitioning from the
exploration phase, where the advantages and problems of the new policy were evaluated, to the
phase of installation and initial implementation.
It seems clear that the use of simulation in clinical trials is currently at a cross-road. The first
qualified method for drug development in the US has already been described. A second
CAMD effort, in conjunction with CDISC, has received a Letter of Support for Alzheimer’s
disease biomarkers from FDA. Typically, an FDA Letter of Support does not endorse a
specific biomarker test or device but is rather meant to enhance the visibility of the biomarker,
83
in order to encourage data sharing, and stimulate additional studies. The FDA has most
recently published guidance regarding the 21 Century Cures Act enacted on December 13,
2016, whereby the new section 507 Qualification of Drug Development Tools (DDTs) was
added to the Federal Food, Drug, and Cosmetic Act (FDA, 2017g). Thus, potential pathways
for the qualification of new methods and for the subsequent use of those methods more broadly
are setting the stage for expansion of such approaches.
2.6 Industry Perspectives
The literature review to this point has been dominated by the views of regulators and experts
related to the potential use of clinical simulation tools as part of the regulatory science
armamentarium. However, what is not so clear is the value placed by industry on such tools.
We know that several large companies, non-profit organizations, and academic institutions
belong to groups that are developing simulation methods, such as those participating in the
CAMD consortium which include, for example, The Cleveland Clinic, Johns Hopkins
University, The Alzheimer’s Association, Astra Zeneca, Eli Lilly, and Merck (visit CPI’s
CAMD collaborators site at: https://c-path.org/camd-collaborators/ for full listing) (CPI, 2017).
Further, we know that many of these companies also partner with regulatory agencies in order
to formulate policies and best practices. However, what is not so clear is how companies are
more generally viewing such methods. We do not have a clear picture of whether, or what part
of, these initiatives are valuable to them, whether they are implementing clinical trial
simulation approaches in their new regulatory strategies, and whether they view their role as
adopters rather than creators of qualified tools. Further, we could profit greatly by gaining a
better insight into their experiences to date with attempts to incorporate simulation tools.
84
Given that most regulatory submissions will be directed at both the FDA and EMA, the
potential presence of regulatory dissonance (Storm, 2013) between review feedback and
policies regarding acceptable methodologies may also vary from one jurisdiction to another.
Examples of this may include differences in reviewing experience between reviewers as well
as differences in methodologies between regions. For example, training and instruction on the
evaluation criteria used within a given reviewing body may differ greatly based on education,
background, and tenure. If such differences exist, they might diminish the enthusiasm of
companies to adopt clinical trial simulation strategies.
2.7 Assessment Frameworks
The goal of this research is to assess the conditions necessary for successful policy
implementation and identify actionable items that may facilitate improvements. In order to
approach this goal systematically, I examined a series of frameworks and models that have
been used previously to explore the views of industry with respect to policy development and
implementation. Three of these appeared to offer different ways of pursuing such an
exploration: 1) Kingdon’s three streams policy window model approach to assess whether a
window of opportunity exists within the policy making process, 2) Redington’s three-element
issue-framing approach to assess effectiveness of regulatory policy in a) protecting the public,
b) facilitating efficiencies, and c) providing a level-playing field for industry competitors and
players, and 3) Fixsen’s implementation framework to assess progress in policy execution.
Each model is described briefly and then the reason for choosing Fixsen’s implementation
framework is elaborated.
85
2.7.1 Kingdon's Three-Streams Model
The “policy window” model of Kingdon is one of the most established models to evaluate
policy adoption. John Wells Kingdon, a professor of Political Science at the University of
Michigan, may best be known for his book, Agendas, Alternatives and Public Policies
(Kingdon, 1995), in which he proposes his ‘streams’ model as an integral part of an overall
agenda-setting theory. The model posits that policies can only be enacted if they can be placed
on the policy agenda of those who make policy decisions, and then can be moved up in priority
on that agenda by aligning three process streams that Kingdon short-formed as problems,
proposals, and politics. In each of these streams, a series of activities must occur to set up the
foundation for efficient policy implementation once a “window” opens that exposes an
opportunity to accelerate the policy adoption.
The problem stream, the first step in the Kingdon’s model, requires that an issue or problem be
identified. It then requires activities from those advocating a particular policy to influence or
persuade policy decision makers to pay attention to one problem over others.
The proposals stream requires that the policy advocates identify a short list of proposals that
appear to be the most prominent options for implementation. The major driving forces here
come from the interest groups who are committed to a particular policy change and who
ultimately represent the process by which policy proposals are generated, debated, revised, and
ultimately adopted for further consideration.
Finally, the politics stream requires attention to the political factors that influence agendas and
include for example, changes in elected officials, political climate and the voices of advocacy
or opposition groups.
86
Although these three elements may operate independently, the actors in each can overlap.
Kingdon proposes that successful agenda setting requires that at least two elements come
together at a critical time. According to Kingdon, these three elements may be largely
independent of each other although the factors affecting each stream can overlap. When all of
these streams are aligned, the conditions are set into place to capitalize on an opportunity that
might open a “policy window”. It is at this critical point in time that policy setting is more
likely to be successful (Kingdon, 1995).
2.7.2 Redington’s Issue Framing Approach
Another useful approach to policy evaluation is one that emerged from an analysis by Lynn
Redington, while at the University of North Carolina at Chapel Hill. She created an issue
framing model to analyze the effectiveness of the policies developed in conjunction with the
Orphan Drug Act of 1983 ("The Orphan Drug Act," 1983). According to Redington, a good
regulatory policy can be judged by its ability to: 1) protect the public by assuring that policy
assures or increases the safety and effectiveness of medical products, 2) facilitate efficiencies
in the research, commercialization, and life-cycle management of biomedical products, and 3)
equalize requirements and standards for different industry competitors and players (Redington,
2009). A successful policy must balance these requirements. This model focuses on outcomes
but does not consider how those outcomes are achieved in an evolving system.
2.7.3 Fixsen’s Core Implementation Components
The third approach is the Implementation Framework of Fixsen (Fixsen et al., 2009). At the
time of writing, Dean Fixsen was a Senior Implementation Specialist at the Frank Porter
Graham Child Development Institute, University of North Carolina at Chapel Hill. He was
87
also Co-Founder of the National Implementation Research Network. Their particular research
focused on how innovations in the services industry are implemented and sustained. However,
the approaches that it uses are applicable more generally, and contribute to a growing field of
“implementation science”, a field that examines broad areas of diffusion, dissemination, and
implementation of policy and programs (Fixsen et al., 2009). These techniques have become
particularly useful for policy-makers. For example, the National Institute of Health (NIH) uses
the methods of implementation science to promote the integration of research findings and
evidence into healthcare policy and practice (NIH, 2012).
Fixsen and colleagues offer a structured approach by which implementation can be segmented
into a series of stages: exploration and adoption, program installation, initial implementation,
full operation, and sustainability. They describe Core Implementation Components that work
together when new or changed programs must be implemented and can affect the effectiveness
of that implementation. These are defined to be: staff selection, preservice and in-service
training, ongoing coaching and consultation, staff evaluation, decision support data systems,
facilitative administrative support, and systems interventions (Fixsen et al., 2009).
Any one of the three frameworks described above could be helpful in exploring the
development and use of modeling and simulation by industry. However, they would direct the
investigation to focus on different elements of the problem. Kingdon’s model is best suited to
explore the state of various factors important to encourage the development and use of CTS.
That of Redington would be most helpful to understand whether the use of CTS achieves the
policy objectives of reducing the costs and times as well as improving the outcomes of clinical
trials. Finally, that of Fixsen appears best suited to understanding the degree to which CTS
88
methods are being developed and adopted by industry and the challenges that limit that
adoption. Because the main questions that are of particular interest revolve around the
adoption of CTS, the framework of Fixsen and colleagues appears best aligned with the needs
of the study. By using such a framework, it would be possible to dissect the adoption process
into a series of sequential stages, and to query the respondents about the concerns and views
associated with that phase.
We are interested in the perceptions of industry regarding the use of CTS techniques to assist
in commercializing new medical products. For benchmarking purposes, it is interesting to
know at which stages of implementation most companies currently sit with regard to CTS
deployment, and whether the degree of implementation maturity depends on the company size,
structure, or product specialization. By comparing their current practices with the information
obtained from the industry sector in other ways, policy makers might be able to perform a
better gap analysis to determine what, if any, impediments need to be addressed in order for
them to attract more involvement from the industry. We will examine the activities and
attitudes toward using clinical trial simulation techniques, and any insights that those working
for pharmaceutical and device companies might have gained in their interactions with
regulatory agencies when attempting to use clinical trial simulations. We will also focus on the
implementation as well as communication with regulatory authorities, and survey questions are
designed to address these key components of deploying clinical trial simulation in product
development.
89
2.8 Approach
In this study, I explore the current views and approaches used by industry with regard to the
development and use of such methods by disseminating a novel survey instrument to senior
individuals employed in activities related to clinical trial simulation and regulatory submissions
containing simulated data in US pharmaceutical companies. In one sector, developing
treatments for Alzheimer’s disease, a qualified simulation tool already exists. Thus, it is
interesting to gauge the understanding and use of this and other Drug Development Tools
(DDTs) available to aid in AD research.
The survey will probe the biomedical industry on specific views related to the adequacy of
FDA policies for the regulation of simulation used in phase II and III clinical trials. It will also
explore the nature of their activities related to the development, qualification and
implementation of simulation methods. Of particular interest are the experiences of the
industry related to the ability to use such tools as part of regulatory applications. The survey
will be constructed to give respondents the opportunity to identify impediments that they have
experienced and influences that might transform, contribute to, or hinder the development of
effective policies or guidance.
90
CHAPTER 3. METHODOLOGY
3.1 Introduction
The study centered on the basic elements of Fixsen’s approach to policy implementation in an
attempt to measure the success of current regulations regarding the use of clinical trial
simulation tools. Particularly, the study leveraged the growing field of “implementation
science” and examined broad areas of diffusion, dissemination, and implementation of policies
and programs for the use of simulation in clinical trials. These framing techniques have
become particularly useful for policy-makers.
The study attempted to gather insights from the industry participants involved in the clinical
trial process – particularly those biomedical professionals involved with providing evidence of
safety and efficacy from phase II and III clinical trials. This group consisted primarily of
researchers and regulatory professionals within the drug and biologic industry responsible for
some clinical trial submission data.
3.2 Survey Development
The initial survey was prepared using the web-based survey tool, Qualtrics
(http://www.qualtrics.com/). The questions were grouped into three broad topic areas, shown
in Table 9.
91
Table 9: Approximate Breakdown of Survey Questions
Survey Sections
No. of
Questions
I General information about the respondent & industry
7
II Exploration: Questions regarding any use of modelling and simulation
4
III * Policy Framing Questions regarding Installation (12),
Implementation (3), and Sustainability (10) of modelling and
simulation use
25
Total No. of Questions = 36
* Fixsen’s Core Implementation Components / Drivers (Fixsen et al., 2009)
The survey was uploaded to the on-line location using the Qualtrics web-based survey design
and delivery system (http://www.qualtrics.com/). The Qualtrics software provided a platform
for designing, distributing, and evaluating survey results. The survey included a combination
of “Yes/No”, “Choose One”, “Scaled” and “Open Ended” questions.
3.3 Focus Group and Survey Finalization
The focus group included participants from industry and academia. Participants were selected
on the basis of their knowledge of biomedical development regarding safety and efficacy of
medical products, and/or their experience with survey development. The focus group members
are described in the table below, Dr. Frances Richmond served as Co-chair of the focus group
(Table 10). Focus group members were asked to consider the purpose of the thesis research
when commenting on the survey instrument and to provide input on the overall organization of
the survey and the relevance and clarity of each question. In addition, they were asked to
validate the survey from a user perspective and to identify any gaps in topics. Participants were
92
permitted to attend from remote sites via video conferencing in order to facilitate the
interactive experience of remote participants. The proceedings were documented electronically
on video media with participant consent. Written feedback from the focus group was collected
at the end of the Focus Group session.
Table 10: Focus Group Participants
NAME TITLE
Frances J. Richmond, BNSc, MSc, PhD
(Focus Group Co-chair)
Director USC Regulatory Science
Program, and DIA Board of Directors
Member
Michael Jamieson, DRSc Assistant Professor, USC Regulatory
Science Program
Eunjoo Pacifici, PharmD, PhD Assistant Professor of Clinical Pharmacy,
USC Regulatory Science Program
Nancy Smerkanich, DRSc Assistant Professor at University of
Southern California School of Pharmacy
Grant Griffin, DRSc Regulatory Affairs Manager, Global
Regulatory Strategy, Virology at AbbVie
Klaus Romero, MD Klaus Romero, MD, MS, FCP, Critical Path
Institute
Michael Neely, MD Associate Professor of Pediatrics, USC
Keck School of Medicine
Lawrence Lesko, PhD Director, Office of Clinical Pharmacology
and Biopharmaceuticals, CDER
The group was provided with an electronic copy of the survey prior to the meeting (Appendix
A) and with a paper copy at the time of the focus group. The review was initiated by a short
presentation regarding the purpose of the research and the survey, along with a brief discussion
regarding the appropriateness of the targeted respondent pool and methods of survey
distribution. Each of the proposed questions were reviewed in sequence and analyzed for
possible improvements regarding topic and clarity. The survey was then updated to
93
incorporate comments and inputs from the focus group participants. The final survey of about
30-35 questions was then reconfigured on the web-based survey platform, Qualtrics
(http://www.qualtrics.com/). The effective operation of the system was validated by sending
the survey to several individuals from the focus group to ensure that the emails arrive properly,
and the answers can be entered and analyzed appropriately.
3.4 Survey Administration and Analysis
The initial survey was deployed from December 5th to March 5th. The target population was
intended to include a representative number of participants, all of whom were professionals
from the biomedical industry. Although, at the time of writing, the use of computerized
simulation was still relatively new, and many companies considered details associated with
clinical trials to be proprietary and were often reluctant to disclose operational details regarding
clinical activities, a response number of 30-40 was considered to be representative. As part of
the vetting process, individuals within the potential respondent pool were contacted to solicit
their participation in the survey. Recipients were also given the option of providing the contact
information of other personnel in their organization or network who may also meet these
selection criteria and who may have been deemed qualified to address the topic. These
individuals were then included in the panel for distribution and the survey was distributed to
the distribution panel. This process was iterative, and several panels were assembled over the
three-month period. No financial compensation or incentive was offered to encourage
participation. However, several staged reminders were used to encourage the return of the
surveys after 30 days. Respondents who request it were provided a summary of the results
obtained after the survey had been analyzed.
94
Results of the surveys were collected and stored electronically. Open ended survey questions
were examined for their information content and analyzed to see if any trends or common
elements appeared. The Qualtrics software was used to calculate percentages, counts,
minimums, maximums, standard deviations, variances or means for all of the “Yes/No”,
“Choose One”, and “Scaled” questions. Study data were collated, graphed, and reported using
percentages and/or actual counts.
95
CHAPTER 4. RESULTS
4.1 Focus Group Results
The focus group that consisted of panel experts from academia, government, and industry, was
invaluable in providing insights into the content and structure of survey questions. The focus
group assisted the investigator in optimizing the design and deployment of the survey tool with
emphasis on probing difficult questions surrounding the use and acceptance of simulated data
used for clinical trial decision-making that may ultimately be submitted for agency approval.
Feedback from the focus group also served to clarify and standardize terminology and to
identify the clinical stages where computerized simulation is often used with the greatest
effect, and where simulation is often problematic from both an implementation and regulatory
perspective. The changes made in response to their questions can be recognized by comparing
the draft and final survey tools in Appendices A and B, respectively.
4.2 Survey Results
Electronic links to the on-line survey were disseminated between December 5
th
, 2017 to April
5
th
, 2018. Initially, a questionnaire to determine any interest in participating in the survey was
disseminated to a set of 667 email addresses that represented attendees at an FDA meeting
where clinical trial reporting tools were the topic of discussion. From this list, only 31
attendees expressed interest in participating in the survey (5%), however another 50 responded
as being out-of-office and were eventually included. The initial dissemination of the survey to
this group of 81 (Group 1) resulted in 16 responses (20%). Subsequently, the survey was sent
electronically to a more focused list of 44 individuals considered by their background to be
knowledgeable about clinical trial design, or regulatory science, in their respective companies
96
and recommended by professional colleagues. From these focused sources, 38 responses were
received (Group 2), yielding a focused response rate of 86%. The overall response rate was 54
responses out of a total 125 mailings (43%). Although these two sources were generally
unrelated, a comparison of the responses from these two groups found them to be broadly
similar. For example, 1 of the 16 respondents (6%) from Group 1 indicated they were not yet
familiar with control-group simulations in clinical trials, where 4 of the 38 respondents from
Group 2 (11%) responded in this way. Also, 2 of the 16 respondents (12%) from Group 1
indicated that they had researched but not adopted control-group simulations in clinical trials,
where 5 of the 38 respondents from Group 2 (13%) responded similarly. The similarity in
responses serves to further validate the overall results.
A total of 14 respondents began the survey but did not finish. Four of these indicated after
reviewing the survey questions that they did not have sufficient experience in the subject areas
addressed by the survey. The others gave no feedback about the reasons for stopping the
survey early. Many of these participants chose to stop at different points in the survey. Other
participants answered only those questions that were germane to their circumstances, which
offers an explanation for some of the different response totals and denominators recorded
throughout the survey.
4.2.1 General Information About Survey Respondents
Almost all (51/54, 94%) of the respondents were from industry. Two respondents worked for a
non-profit organization, and one respondent was part of a research consortium. Most
respondents (69/89, 78%) were employed by companies that manufactured pharmaceutical
and/or biologic products, and some respondents (8/89, 8%) operated in the medical device
97
space (Figure 10). For this question, respondents were asked to select all product types that
apply so the sum of numbers in Figure 10 is greater than the number of respondents, reflecting
the fact that some respondents worked in companies that operated in two or more sectors.
Note: For multiple choice questions, respondents were asked to select all categories that
applied, thus, the total number of responses may exceed the total number of survey
respondents.
Figure 10: Respondents in Different Product Space / Industry Sectors (Q6)
N=89; The x-axis reflects the number of responses.
Of the 54 respondents, 23 (43%) worked in very large companies with 50,000 or more
employees, 8 (15%) worked in small companies with 1 to 200 employees and the others were
distributed in between (Figure 11).
98
Figure 11: Size of Organizations Represented by Respondents (Q5)
N=54; The x-axis reflects the number of responses.
The job profiles of respondents varied from specialist to vice-president, but the majority
(33/54, 61%) were employed at a director level (Figure 12). Many respondents had primary
responsibilities either in Research and Development (22/54, 41%) or Regulatory Affairs
(21/54, 39%) (Figure 13).
Figure 12: Job Level of Respondents (Q2)
N=54; The x-axis reflects the number of responses.
99
Figure 13: Departmental Area of Primary Activity (Q3)
N=54; The x-axis reflects the number of responses.
4.2.2 Policy Framing Questions - Exploration
Computer simulations were known to be used in clinical studies by 32 of 53 (60%) of
respondents; another 4 (8%) were considering their use (Figure 14). A smaller number (9/53,
17%) identified that they were not used, and 8 respondents (about 15%) did not know.
Figure 14: Prevalence of Computer Simulations in Phase 2 and/or 3 Clinical Studies in
the Surveyed Organizations. (Q8)
N=53; The x-axis reflects the number of responses.
100
A cross-tabulation was conducted to evaluate whether the size of the company (Q5) had any
effect on resources committed to the installation of computerized simulations in clinical trials.
Responses suggest that larger companies most commonly employ simulations. In the largest
companies, more than 90% of those who knew whether simulations were used identified that
they were in fact employed (14 yes versus 1 no for companies with over 50,000 employees).
In contrast, only two of eight companies with fewer than 200 employees were using
simulations and the remaining six were either still considering or not using simulations (Figure
15).
Figure 15: Cross-tabulation: Use of computer simulations (Q8) according to size of
the organization (Q5)
N=53
For those companies that were using simulations, almost all assigned resources that were split
between the use of consultants and the use of staff as shown in Figure 16. There were no clear
patterns of difference according to company size.
101
Figure 16: Cross-tabulation: Nature of resources assigned to CTS (Q13) by
companies using or not using simulations at present (Q8)
N=29;
For what purpose were simulations used when researched and then adopted? Most respondents
(24/28, 86%) whose companies had researched and adopted the use of computerized
simulations were using the tools to predict drug efficacy. The majority of this group also used
simulations to identify drug interactions (19/27, 70%), for the simulation of metabolic effects
(13/24, 54%), and/or to compare performance against other drugs (13/25, 52%). About half
also simulated virtual patient cohorts (12/25, 48%) or control groups (12/24, 50%). Other uses
identified by the respondents included: clinical trial design, exposure response, pre-clinical
applications, study setup simulations, and PPK studies (Figure 17).
Which tools were researched but not adopted? A somewhat smaller group of respondents
(depending on application) indicated that they had explored but did not adopt certain tools. Of
these, 11 of 25 respondents (44%) had researched but not adopted the use of computerized
simulations for running virtual patient cohorts through clinical trials, and 9/25 (36%) had
102
explored but not adopted the use of simulations for performance comparisons with other drugs.
Fewer had researched but not adopted CTS for control group simulations (7/24, 29%), the
simulation of metabolic effects (6/24, 25%), or the prediction of drug/drug interactions (6/27,
22%). One respondent identified that he/she had explored but not adopted tools for simulating
drug efficacy (Figure 17).
Who were not yet familiar with different CTS tools? Only a small number of respondents
(depending on application) identified that they were not familiar with one or more tools. The
most common tool that was unfamiliar to respondents was the use of simulations for control
group modeling or simulation of metabolic effects (5/24, 21% for each, respectively). The use
of simulations for comparisons of drug performance was unfamiliar to 3 of 25 (12%) of
respondents, and simulation use for predictions of drug efficacy was unfamiliar to 3 of 28
(11%) of respondents. The use of computerized simulations for predicting drug/drug
interactions or developing virtual patient cohorts was unfamiliar to only 2 of 27 (7%), and 2 of
25, (8%) of respondents, respectively (Figure 17).
103
Figure 17: Types of Computer Simulations Considered in Clinical Trials (Q9)
N=28; The x-axis reflects the number of responses.
104
Most commonly, respondents who were able to identify the resources used when preparing to
incorporate simulations into their clinical programs indicated that in-house staff/expertise
(10/27, 37%), followed by consultants (5/26, 19%) and journal articles (5/28, 18%), were the
most useful resources. Only 3 of 28 respondents (11%) identified FDA guidance documents as
the most useful resource (Figure 18). Also chosen infrequently were meetings (2/28, 7%),
Critical Path simulation tools (2/28, 7%,) or information on the FDA’s Fit-for-purpose website
(2/28, 7%).
105
Figure 18: Resources Used when Preparing to Incorporate Computer Simulations in
Clinical Trials (Q10)
N=28; The x-axis reflects the number of responses.
106
Respondents were asked to consider the importance of several challenges when attempting to
incorporate computer simulations previously suggested by the literature.
Extremely important: Most respondents (15/28, 54%) identified that access to good data to
base the simulations was an extremely important challenge. Almost half (13/28, 46%)
identified that having sufficient expertise was extremely important (Figure 19). Somewhat
fewer identified acceptance by regulators (11/27, 41%), the need for in-house simulations
(7/26, 27%), and the availability of guidance documents (3/27, 11%) as extremely important
issues.
Moderately important: Most commonly, the availability of guidance documents (9/27, 33%)
was regarded as a moderately important challenge. Less than one-quarter of respondents (6/26,
23%) viewed the challenge of having to develop in-house simulations was moderately
important. Five (of 27, 19%) indicated that acceptance by regulators was a moderately
important challenge, and only 2 (of 28, 7%) indicated that access to good data was a moderate
challenge (Figure 19).
Not important at all: Only one respondent indicated either that sufficient expertise (1/28,
3.5%) or having to develop in-house simulations (1/26, 3.8%) was not an important challenge
(Figure 19).
107
Figure 19: Challenges When Considering the Use of Computer Simulations in Clinical
Trials (Q11)
N=28; The x-axis reflects the number of responses.
108
4.2.3 Policy Framing Questions - Installation
In the previous section, respondents ranked some the challenges encountered when exploring
the possibility of even using computerized simulations in their clinical development – these
included preparatory challenges such as access to good data and sufficient expertise. In this
section, we look to rank the importance of some of the issues associated with the installation of
simulations within clinical development programs.
Respondents were asked whether financial resources had been assigned to any clinical
computer simulations. Of the 29 respondents who answered this question, 11 (38%) indicated
that they had assigned financial resources, whereas (7/29, 24%) had not. More than a third of
the respondents (11/29, 38%) could not comment.
When asked about the nature of the resources that had been assigned to the installation of any
type of computer simulation in the clinical programs, nearly half (18/45, 40%) of respondents
indicated that staff resources had been assigned, about a quarter (11/ 45, 24%) had assigned
consultant resources, and about a fifth (10/45, 22%) had formed teams to support their clinical
simulation projects (Figure 20). Four respondents could not comment and only two reported
that no resources at all had been committed.
109
Figure 20: Types of Resources Assigned to the Use of Computer Simulations in
Clinical Trials (Q13)
N=45; The x-axis reflects the number of responses.
When asked about their familiarity with the Alzheimer's disease simulator developed through
the Critical Path Institute, 14 of 46 respondents (30%) indicated that they were familiar,
whereas 32 respondents (70%) were not. Only 3 of the 14 respondents (21%) who were
familiar with the Alzheimer's disease simulator had ever used the simulation tool (Figure 21).
Figure 21: Use of the Alzheimer's Disease Simulator amongst Those Familiar with the
Tool (Q15)
N=14; The x-axis reflects the number of responses.
When the 14 respondents who were familiar with the Alzheimer's disease simulator were asked
whether a 'Fit-for-Purpose' (FFP) simulation tool such as the Critical Path Institute's
110
Alzheimer's disease simulator would be useful for drug development programs, 9 of the 14
(64%) believed that the FFP designation was very useful, whereas 4 (29%) respondents
believed it to be moderately useful. None had less supportive opinions.
When asked whether their company had cooperated in the development of a simulation tool by
participating in a consortium, 11 of 45 (24%) respondents indicated that they had, whereas 17
(38%) had not, and 17 (38%) could not comment.
Whether respondents were familiar with the Alzheimer’s disease simulator tool was cross-
tabulated to see if there was a relationship with their participation in a consortium (Figure 22).
About half of those familiar with the simulator were part of a consortium. Most others who
either did not know or were not part of a consortium were unfamiliar with this tool.
Figure 22: Cross-tabulation: (Q14) related to (Q19)
N=45
Additionally, a cross-tabulation was performed to determine if the size of an organization had
any impact on whether respondents were either knowledgeable of for the Alzheimer’s disease
simulator, or whether they participated in a consortium (Figure 23). Typically, large
111
companies participated in consortia, but no clear relationship was evident between the size of
the company and its familiarity with the Alzheimer’s disease simulator.
Figure 23: Cross-tabulation: (Q5) related to (Q14 and Q19)
N= variable depending on question
Respondents were asked whether they used commercially available, proprietary and/or fit-for-
purpose simulation tools in clinical programs. Most commonly, the tools were commercial
(18/58, 31%). Less than 20% of companies were using proprietary tools (10/58, 17%) or tools
designated as fit-for-purpose (7/58, 12%). Six respondents chose not to comment (6/58, 10%)
and 11/58 (19%) did not know what types of tools were used (Figure 24). Respondents who
offered ‘Other’ tools identified those tools as: in house custom solutions, R script created in
house, in-house pharmacokinetic simulations, NONMEM (a software program for population
pharmacokinetic modeling). Finally, one respondent indicated the use of virtual twin controls
for analysis of open-label extension data had been used in clinical their programs.
112
Figure 24: Categories of Simulation Tools Used in Clinical Programs (Q21)
N=58; The x-axis reflects the number of responses.
Respondents were asked to rate the importance of various factors in the development and use
of computer simulation tools in clinical development.
When asked to rank the importance of having commercially available simulation tools, 10 of
39 (26%) indicated that factor as very important; it was moderately important to 14 (36%), and
slightly important to 5 (13%) of respondents. One (3%) considered that commercial
availability was not important at all, and 9 (23%) could not comment (Figure 25).
About one-third of respondents (13/39, 33%) ranked the FDA FFP designation as very
important, 9/39 (23%) as moderately important, and 4 (10%) as slightly important. Three
respondents (8%) ranked FFP designation as not important at all and 10 (26%) could not
comment.
Six respondents of 38 (16%) ranked the proprietary/confidential nature of simulation tools and
data as very important, 11 (29%) as moderately important, and 5 (13%) as slightly important.
Five respondents (13%) ranked the proprietary/confidential nature of simulation tools and data
as not important at all and 11 (29%) could not comment (Figure 25).
When asked to rank the importance of simulations for informational purposes only (not
included in a submission), 6 of 36 (17%) respondents indicated this factor to be very important,
12 (33%) as moderately important, and 4 (11%) as slightly important. Four respondents (11%)
ranked the use of simulations for informational purposes only as not important at all, and 10
(28%) could not comment.
Over half of the respondents (21/39, 54%) ranked the use of simulations for regulatory
submissions as very important. Eight (21%) ranked it as moderately important, and none ranked
it as only slightly important or not important at all. Six respondents (15%) could not comment
(Figure 25).
Over half of the respondents (20/39, 51%) ranked the use of simulations for pediatric studies as
very important, 8/39 (21%) as moderately important, 1/39 (3%) as slightly important, and 3/39
(8%) as not important at all. Seven of thirty-nine respondents (18%) could not comment
(Figure 25).
Figure 25: Important Factors When Considering Simulation in Clinical Programs
(Q22)
N=39; The x-axis reflects the number of responses.
To identify the challenges experienced by the subset of respondents who had used
computerized simulation in clinical development, respondents were asked about their
satisfaction with various activities or resources associated with the exploration and initial
implementation phase of simulation tools. Responses to the six suggested areas clustered in
the somewhat satisfied category. At the exploration phase, most respondents were somewhat
satisfied (12/39, 31%) or extremely satisfied (4/39, 10%) with the ease of locating FDA
guidance documents and were somewhat (14/39, 36%) or extremely (4/39, 10%) satisfied with
finding adequate simulation tools. Fewer were dissatisfied, but the pattern of dissatisfaction
differed somewhat between the availability of FDA documents and the availability of simulation
tools. About 5% of respondents (2/39) expressed some dissatisfaction with the availability of
FDA documents, and an equal number (2/39, 5%) were extremely dissatisfied. Twice that
proportion (4/39, 10%) of respondents were dissatisfied with the ability to find adequate
simulation tools, but none of these were extremely dissatisfied (Figure 26).
Gaining support for initial implementation also posed a varying level of challenge to
respondents. About one-third were somewhat (8/39, 21%) or extremely satisfied (3/39, 8%) with
their ability to procure financing for simulation programs, and only one respondent (1/39, 3%)
was dissatisfied. More respondents were somewhat (13/39, 33%) or extremely (8/39, 21%)
satisfied with their ability to garner management support. Only four respondents were
somewhat dissatisfied (4/39, 10%) about the level of management support and three were
neutral (3/39, 8%). When asked about developing their own tools, most were somewhat (9/39,
23%) or extremely (7/39, 18%) satisfied, and only one was somewhat dissatisfied.
A notable finding for this question was the relatively large number of respondents who either
could not comment (28%-46%, depending on the item) or were neutral in their opinions
(8%-26%) (Figure 26).
Figure 26: Satisfaction with Activities during Exploration or Initial Implementation of
Simulation in Clinical Programs (Q23)
N=39; The x-axis reflects the number of responses.
4.2.4 Policy Framing Questions - Implementation
In order to probe the extent to which industry may have implemented computerized
simulations, respondents were asked how many of their clinical submissions contained
simulated data.
Investigational new drug applications (IND): When respondents were asked how many times
they had included a simulation as part of an IND submission, 15 of 39 (38%) had not submitted
any simulated results as part of an IND. Of the others, most had submitted simulated results
between 1 to 5 times (18/39, 46%). A few submitted these simulations between 6-10 times (2,
5%) or 10 or more times (4, 10%) (Figure 27).
New drug applications (NDAs) / biological license applications (BLAs): Of the respondents
who could answer this question, many had not included a simulation as part of an NDA or
BLA (17/39, 44%). Those who had submitted simulations put such simulations into the
submissions between 1 to 5 times (13/39, 33%), but a few introduced simulations more often
(6-10 times: 6/39, 15%; 10 or more times: 3/39, 8%) (Figure 27).
Investigational device exemption (IDE): The large majority of respondents had not included
simulated data as part of an IDE (32/35, 91%). Of the three whose companies did submit
simulated results for an IDE, these were in fewer than 6 submissions (1 to 5 times; 3/35, 9%)
(Figure 27).
Premarket approval (PMA): Most respondents had not included simulated data as part of
a PMA (29/36, 81%). Six of 36 (17%) had submitted simulated results for a PMA between 1
to 5 times and one (3%) between 6-10 times (Figure 27).
Figure 27: Use of Computer Simulations in Regulatory Submissions (Q24)
N=39; The x-axis reflects the number of responses.
The patterns of use of computer simulations in regulatory submissions were cross-tabulated to
explore any possible relationship between use and product sector, e.g., medical devices,
combination products, biologics, or pharmaceuticals (Figure 28). The patterns seem to
illustrate a lower use of simulations in the device sector than in the pharmaceutical/biologics
sectors.
Figure 28: Cross-tabulation: (Q24) related to (Q6)
N is variable depending on submission type.
Respondents who had experience with submitting simulated data to the FDA to support safety
and efficacy at the end of phase 2 and phase 3, respectively, were asked to rank the FDA's
receptivity to that data. Most identified that FDA seemed very receptive toward CTS-based
evidence at later stages (15/21, 71% and 12/20, 60%, respectively). However, at the end of
phase 3, 3 of 20 respondents (15%) found FDA to be slightly unreceptive, and 1/20 (5%)
indicated FDA was very unreceptive (Figure 29). Notably, all respondents found that FDA had
been either very (8/14, 57%) or slightly (6/14, 43%) receptive at pre-IND meetings.
Figure 29: FDA Receptivity toward Computerized Simulation (Q25)
N=20; The x-axis reflects the number of responses.
* Respondents were asked to select all categories that applied
The number of times an organization had submitted simulated data as part of a submission was
cross-tabulated against FDA's perceived receptivity toward that data; perceptions of
receptivity were broadly positive regardless of submission experience (Figure 30).
Figure 30: Cross-tabulation: The number of submissions included as part of
NDA/BLA (Q24) related to FDA’s receptivity toward simulated results (Q25)
N=21
Most respondents had a positive view of the value of simulations. Nearly half (13/30, 43%)
strongly agreed that the time and effort to implement simulations were worth the investment;
an even higher number (18/31, 58%) strongly agreed that they would use simulations to a
greater extent in the future, and many of those same respondents (17/31, 55%) also strongly
agreed that simulations added value to clinical data (Figure 31).
Of the remaining respondents, most also agreed, albeit slightly, that the time and effort to
implement simulations were worth the investment (8/30, 27%), that they would continue to use
simulations (13/31, 43%), and that the simulations added value (10/31, 32%). Only three
disagreed slightly with the view that the simulations were worth the investment, and only one
felt that the company would not be inclined to use simulations to a greater extent in the
future. None disagreed that the simulations added value (Figure 31).
Responses to the two remaining statements in this question were mixed. When asked about the
ease of implementing such programs, only 4 of 30 (13%) strongly agreed that this was easy,
whereas 13/30 (43%) only slightly agreed and 9/30 (30%) slightly disagreed. When asked
whether simulations tools reduced the size of clinical trial commitments, similar numbers
strongly agreed (10/29, 34%) or slightly agreed (9/29, 31%) that the simulations reduced the
size of clinical trial commitments, with 2/29 slightly (7%) and 1/29 (3%) strongly disagreeing
(Figure 31). Between 4-7 respondents could not comment on these statements as also
illustrated in Figure 31.
Figure 31: Ease and Value of Implementing Computerized Simulation (Q26)
N=31; The x-axis reflects the number of responses.
4.2.5 Policy Framing Questions - Sustainability
The sustainability of a particular approach to clinical data development often depends on the
nature of regulatory interactions and acceptance. When asked to rate FDA’s knowledge and
support in applying computerized simulations in clinical trials, 18 of 39 (46%) indicated that
FDA’s knowledge regarding simulations during meetings and discussions equaled
expectations. Only two of 39 respondents (5%) reported that FDA’s knowledge in meetings
and discussions fell short of expectations, while 8/39 (21%) could not comment, and 11/39
(28%) reported this to be not applicable. None reported that meetings and discussions
exceeded expectations (Figure 32).
The consistency of FDA’s advice regarding the use of simulations exceeded expectations for
one of 39 respondents (3%). For 20/39 (51%), FDA’s advice equaled expectations, and none
noted that it fell short of expectations. Several respondents (8/39, 21%) could not comment,
and several (10/39, 26%) reported this to be not applicable.
When asked specifically if FDA personnel supported the use of simulation tools that had been
developed in-house, 2 of 39 respondents (5%) reported FDA’s support exceeded expectations,
12/39 (31%) that it equaled expectations, and 5/39 (13%) that it fell short of expectations.
Additionally, 8/39 (21%) could not comment, and 12/39 (31%) reported this to be not
applicable.
When asked to rate how FDA reviewers considered the results from simulations when giving
advice, 3 of 39 (8%) respondents indicated that FDA’s consideration of simulated results
exceeded expectations, 16/39 (41%) that it equaled expectations, and 3/39 (8%) that it fell
short of expectations. Additionally, 7/39 (18%) could not comment, and 10/39 (26%) reported
this to be not applicable.
When asked if simulations were perceived to be an established part of clinical trial planning
now, 6 of 39 respondents (15%) reported that the FDA’s approaches exceeded expectations,
15/39 (38%) that they equaled expectations, and 4/39 (10%) that they fell short of expectations.
Additionally, 5/39 (13%) could not comment, and 9/39 (23%) reported this to be not applicable
(Figure 32).
Figure 32: FDA’s Helpfulness (Q27)
N=39; The x-axis reflects the number of responses.
Respondents were asked to judge the match between the company’s initial expectations and
eventual experience related to the time to complete simulation projects. Eighteen of 26 (69%)
indicated that the time to complete simulation projects was generally equal to initial
expectations (Figure 33). Only two respondents (8%) identified that the time required
exceeded, and a further two (8%) that it fell short of, initial estimates; four (15%) suggested
that they had no idea of the amount of time needed at the outset of the project.
Figure 33: Ability to Gauge the Time to Complete Simulations (Q28)
N=26; The x-axis reflects the number of responses.
For most respondents, the top three most difficult elements associated with incorporating
simulations were, in descending order: inexperience of clinical teams in using simulations
(20/69, 29%), difficulty in obtaining data from competitors (17/69, 25%) and the lack of
qualified simulation tools (10/69, 14%) (Figure 34). Some respondents also identified
regulatory issues related to unclear government regulations or guidelines (8/69, 12%) or
reluctance to engage regulators with new tools (5/69, 7%).
Figure 34: The Most Difficult Elements when Incorporating Simulation into Clinical
Programs (Q29)
N=69; The x-axis reflects the number of responses.
When asked to assess the difficulty/ease of incorporating simulations into clinical programs, 15
of 30 (50%) respondents reported the overall experience to be somewhat easy, whereas 12 of
30 (40%) indicated that it had been somewhat difficult (Figure 35). Only a few respondents
found the experience very difficult (2/30, 7%) or very easy (1/30, 3%).
Figure 35: Ease of Implementing Computerized Simulations (Q31)
N=30; The x-axis reflects the number of responses.
A minority of respondents had so far presented simulation results to regulatory agencies,
including FDA (20/54, 37%), EMA (17/54, 31%), and PMDA (11/54, 20%). Text responses
specified for the ‘Other’ category included Australia’s Therapeutic Goods Administration
(TGA), Health Canada, and Swissmedic (Figure 36).
Figure 36: Regulatory reporting of Computerized Simulation Results (Q32)
N=54; The x-axis reflects the number of responses.
To assess whether specific challenges were impacting the incorporation of simulations into
clinical trials, responses to (Q32), whether simulated results had been reported to regulatory
agencies, were cross-tabulated with question (Q29) regarding difficulties encountered when
incorporating computerized simulations (Figure 37). In general, a lack of simulation tools,
inexperience with simulations, and difficulty gaining data from competitors were listed as the
top three pressure points for most respondents regardless of the country of submission.
Figure 37: Cross-tabulation: (Q32) related to (Q29)
N=25
When asked if the responses of different agencies and reviewers were consistent in their
approach to simulations, 19 of 38 (50%) could not comment, while 2 of 38 (5%) agreed, and
11 of 38 (29%) somewhat agreed, that regulatory reviewers did exhibit differences when
reviewing computerized simulations (Figure 38). Four of 38 (11%) neither agreed nor
disagreed, and two (5%) somewhat disagreed.
Figure 38: Regulatory Dissonance in Reviewing Computerized Simulations (Q33)
N=38; The x-axis reflects the number of responses.
Only 7 of the 19 respondents (37%) who were able to comment on the previous question
(Q33), regarding whether regulatory reviewers differed in their approaches to the use of
simulations, responded with additional commentary. Using an open-text comments box, these
respondents provided additional details about the differences that they encountered when
submitting clinical data incorporating computerized simulations, and examples are included in
Table 11 below. These comments cited differences between EMA and FDA, as well as
differences between how regulators viewed simulated data for given diseases and even
populations. The full text of these comments is included in Appendix C – Survey Results and
Reports.
Table 11: Examples of comments on how reviewers differed in their approaches (Q34)
Regarding how regulatory reviewers differed in their approaches to the use of simulations, can
you comment on the nature of the disagreement?
Some agencies require more simulations than others, especially for rare
populations
General overreliance on randomized, controlled clinical trial data, at
least in the therapeutic areas I support (i.e., non-oncology).
EMA feedback is more erratic
FDA more open to simulation and modeling.
EU has been proactive compared to FDA. So there is not necessarily a
disagreement just a lag time.
Survey participants were given the opportunity to elaborate on their concerns regarding the use
of simulations in clinical development. Some of these included: the time and effort required,
validation, and acceptance by regulators of simulated data for pivotal trials (Table 12).
Additionally, concerns were expressed regarding the modeling tools themselves, including
the expense of licensing fees and access to source code. The full text of
these comments is included in Appendix C – Survey Results and Reports.
Table 12: Examples of concerns regarding the use of simulations in clinical development
(Q35)
Can you share any concerns that you have had about the use of simulations in clinical
development?
Industry and clinical teams are not aware of the time and effort needed to
perform simulations. Expectations are that this exercise is easily
implemented.
It needs validation and endorsement from a greater community, as well
as qualified experts to be able to examine the scenarios so that the
simulations make sense and are manageable when it comes to
implementation.
Although I see their potential value, my concerns would be how
acceptable simulations would be to regulatory agencies for providing
pivotal data in a BLA/NDA. The FDA, and EMA in particular, seem
hesitant to allow new methods of data collection without a high degree of
scrutiny that ultimately holds up approval, or worse, provides conditional
approval with additional PACs. Validation would be key here. That's
been my experience anyways.
Regulators appear much less willing to consider novel methodology
outside of oncology therapeutic area.
Expenses, especially licensing fees for a commercially available tool.
Provide the model and simulation code or explain the details of the
simulation process.
Finally, using an open-text comments box, survey participants were given the opportunity to
provide their general views of the use of computerized simulations in clinical development.
Several respondents provided parting comments regarding computerized simulations, which
included observations around the usefulness and likelihood of continued use and growth
within clinical programs. Other comments seemed to express reservations around the
boundaries of usefulness of simulations in clinical trials. Still others cited the regulatory
considerations associated with simulations in clinical trials (Table 13). The full text of these
comments is included in Appendix C – Survey Results and Reports.
Table 13: Examples of Final views on the use of computerized simulations in clinical
development (Q36)
Can you provide us with more information on your views of the use of simulations in clinical
development?
(This is a) powerful tool to aid in drug development
I guess it all depends on what the simulations would be used for. It might
be a stretch to use simulations for pivotal data.
(I am) highly supportive, but do not see clear regulatory pathways.
The use of simulations will continue to grow
This is an area that the health agencies will need to consider as the
expense of clinical trials increase, the competition for patients, delays in
bringing drugs to market due to the items mentioned above.
A lot of thought has to go into performing simulations, and the conduct
of the simulation and results, given the knowns and unknowns in the
model, have to be communicated.
Additional text commentaries in response to questions on the use of simulations are listed in
Appendix C.
CHAPTER 5. DISCUSSION
5.1 Methodological Considerations
The results in chapter 4 provide new and interesting insights into the experience with, and use
of, computerized simulations in phase II and III clinical trials, and the subsequent inclusion
of that simulated data in regulatory submissions. The validity and applicability of those
findings rest on the appropriateness and strength of the primary instrument, the survey tool, to
gather relevant information. Surveys offer methodological advantages when trying to describe
or understand the characteristics of a population that cannot be easily approached through other
means (Jones, 2011). Although interviews or written requests could have been used to gather
similar types of data, the use of a survey also provides an advantage in that a larger target
audience can be reached, and greater amounts of information can be gathered more efficiently
and in a shorter time.
Validity is one of the primary challenges associated with experimental design of all types
including surveys (Patton, 2002). Two aspects of validity that are known to be particularly
important when conducting survey studies include: (i) the representational or face validity of
the questions of the survey, to assure that the questions are appropriate for their intended
purpose and (ii) the external validity of the survey – whether the results obtained from the
sampled professionals sufficiently represent the population that they attempt to characterize
(Dellinger, 2005).
In order to assure that the survey questions were sufficiently appropriate and broad for the
purpose intended, the face validity of the survey questions was assessed prior to distribution
by having a focus group of experts with strong domain knowledge related to the use of
computer simulations in trials critique the survey and its questions. Focus groups have been used to
develop and improve surveys in the past e.g., (Jamieson, 2011; Nassar-McMillan & Borders,
2002; Smerkanich, 2016; Storm, 2013). A valuable aspect of the focus group approach is its
ability to use the synergies of the group to deepen the discussion (Vaughn, 1996), which in this
case reviewed the relevance and depth of each question and the design of the survey and
questions. Such a critical analysis was deemed essential because no surveys or even systematic
questions had ever been developed to explore the use of simulations by this group of industry
professionals.
Although focus groups have been used in the past to improve a survey tool, this type of use is
somewhat unusual. Focus groups are usually employed as the primary target group of a
research study rather than to prepare the tools for that research, as was done here. This kind of
use is similar to some forms of usability studies in which groups of individuals are brought
together to evaluate the attributes of a medical or consumer product, such as the clarity of its
labeling information (FDA, 2016a). Also, as individuals share their views, they build on the
comments of others in the group to generate a particularly rich range of suggestions for
revision (Nassar-McMillan & Borders, 2002). Both of these recognized advantages were
observed in the present study and helped to improve the survey tool. In addition, the focus
group also provided particularly good insights regarding the inclusion of questions related to
specialized aspects of simulation with which I was less familiar, such as their use in
pediatric and orphan populations, as can be seen by comparing the draft and final versions of
the survey.
The external validity of the survey is also important and rests on the selection of a suitable
respondent pool (Nassar-McMillan & Borders, 2002). Two aspects of this
pool, its appropriate expertise and its size, were considered to be particularly important to
assure validity. In this study, the respondent group was deliberately delimited to one particular
topic, simulations related to phase II-III clinical trials, and to an associated population of
individuals with domain knowledge in this area. The delimitation was purposefully chosen to
exclude those who may be more involved with pre-clinical modeling, as well as those who may
be establishing or enforcing regulatory policies. This allowed me to focus the survey on the
specific topic in more depth. However, it does mean that the views of groups using simulations,
for example, as part of pre-clinical or engineering testing, or of regulators who review
submissions containing simulated data, will not be included as part of this work. It also means
that the population of individuals appropriate to participate effectively in this survey will be
difficult to identify. For example, in initially identifying qualified respondents for this study,
only 31 (5%) of 667 attendees at an FDA meeting at which such tools were discussed were
ultimately willing to provide insights regarding the use of computerized simulations. This may in
part reflect the fact that most individuals in this field are not yet using those tools; they may
still be exploring the use of simulations or are using them for purposes unrelated to clinical
trial design. For example, “virtual” patients simulated using software can be used not only for
clinical trial simulation but also for other purposes such as clinical reasoning, team training
related to procedural and basic clinical skills, and patient education or communication
(Roterman-Konieczna, 2015).
A commonly identified concern for the validity of studies such as these is the response rate to
the survey. As Orcher has noted, obtaining a representative sample is more important than
using a large sample (Orcher, 2007). Nevertheless, a low response rate has the potential to
influence the representativeness of the data, so I was concerned about the relative response rate
in this study compared to others using similar methods. Research has shown that
considerations such as the time of year when the survey was taken, the number of questions in
the survey, the number of prenotification contacts, the number of follow-up contacts and
relevance of the survey topic to participants can influence the response rate (Sheehan, 2001).
The lack of response to the questionnaire by potential respondents in a sample or population is
referred to as nonresponse bias, which can negatively affect both the reliability and validity of
survey study findings (Fincham, 2008). However, regardless of these effects, response rates
for electronic surveys are typically low. From a review of published research using e-mail
surveys (see, for example, Schaefer & Dillman, 1998; Sheehan & Hoy, 1999; Weible &
Wallace, 1998), Sheehan identified that the average response rate for an e-mail survey in 2000
was 24%. More recent studies suggest that the average response rate to on-line surveys can
vary quite widely. Many published studies are in the range of 20-33%
(Lindemann, 2018; Medway & Fulton, 2012; Sivo, Saunders, Chang, & Jiang, 2006) but it is
not unusual for response rates for some surveys to dip below 10% (Lindemann, 2016). For this
survey, the overall response rate of 54 responses out of a total 125 mailings (43%) was roughly
in line with the ranges presented by other published surveys, if not slightly higher due to pre-
screening. Considering the difficulty of assuring the engagement of busy professionals, I was
quite confident that this response rate was adequate to provide at least beginning insights in
this exploratory work. Nonetheless, it is important to limit the tendency to overinterpret the
results of this relatively small group. Further, the types of data collected, which depend on a
significant number of categorical rather than continuous datasets, provide a coarse measure of
views and experiences but are unsuitable for many types of more sophisticated analyses that
might illuminate relationships between variables.
As indicated in earlier chapters, the use of computerized simulation in clinical development is
still in a formative stage. In a September 2017 speech to the Regulatory Affairs Professionals
Society, FDA Commissioner Scott Gottlieb stated how FDA is taking steps to modernize how
sponsors can evaluate clinical information, and how FDA can review this data through better
use of more advanced computing tools and more sophisticated statistical and computational
methodologies, as part of the drug development and the drug review process. He went on to
state that “almost 100 percent of all new drug applications for new molecular entities have
components of modeling and simulation”. Typically, these include modeling of preclinical
aspects such as PK/PD estimations from animal data and modeling of dose response
relationships as a way to evaluate the safety and efficacy of different doses, and to help select
the optimal dose for the general population or subgroups. They also include methods to
estimate a new drug’s effect size to develop the appropriate sample size for pivotal trials (FDA,
2017a). Thus, the number of clinical researchers currently planning or using computerized
simulations in clinical trials as opposed to preclinical activities can only be estimated. This
seems illustrated by the results in this study showing that, for example, only half of the 54
highly distilled survey candidates who responded to the survey were able to provide any
response regarding their use, or planned use, in clinical trials (Figure 17).
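As an illustration of the sample-size use case described above, the sketch below shows how a
trial team might estimate the number of subjects needed for a pivotal trial by simulating many
virtual two-arm trials. It is a minimal sketch, not drawn from any FDA or survey-identified
tool; the effect size, standard deviation, and significance threshold are hypothetical
placeholders.

    import numpy as np
    from scipy import stats

    def simulated_power(n_per_arm, effect=0.3, sd=1.0, alpha=0.05, n_sims=2000):
        """Estimate the power of a two-arm trial by simulating n_sims virtual trials."""
        rng = np.random.default_rng(42)
        hits = 0
        for _ in range(n_sims):
            control = rng.normal(0.0, sd, n_per_arm)     # virtual control arm
            treated = rng.normal(effect, sd, n_per_arm)  # virtual treatment arm
            _, p_value = stats.ttest_ind(treated, control)
            hits += p_value < alpha
        return hits / n_sims

    # Increase the sample size until the simulated power reaches 80%
    for n in range(50, 501, 50):
        power = simulated_power(n)
        print(f"n per arm = {n:3d}: simulated power = {power:.2f}")
        if power >= 0.80:
            break

Real clinical trial simulators layer disease-progression, dropout, and dose-response models on
top of this basic Monte Carlo structure, but the underlying logic of repeatedly analyzing
virtual trials is the same.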
Even though the study was primarily delimited to US medical product companies, a boundary
that might be considered as quite specific, it nevertheless includes subpopulations that might
have different experiences. Simulation tools are agnostic to the fact that patients might be
located in different geographic regions. Nevertheless, regional differences exist in the
acceptance of such novel methods (Storm, 2013). For instance, EMA’s workshop report
entitled, “Modelling and Simulation Examples That Failed to Meet Regulator's Expectations”
(EMA, 2011a), highlights the considerations that EU regulators emphasize when reviewing
computerized simulations used in clinical development – these can be different when compared
to FDA’s guidance on the use of DDTs (FDA, 2014a). Thus, simulation experts in different
regions may have widely different experiences. In this study, the focus was on US industry
professionals. Caution should be used in viewing these results as representative of
practitioners with different profiles, for example in different product sectors or geographic
regions outside of the US.
To limit bias from any one subpopulation, the mix of respondents was drawn widely from
product sectors where simulations are used more or less commonly, and whose experiences
might differ, in order to reduce product-specific bias. For example, the survey included a few
individuals who were participants or users of the CAMD’s Clinical Trial Simulation Tool for
Alzheimer’s Disease (See Figure 21), which is one of the most well-developed simulation tools
available today for clinical trial simulations and is discussed extensively in chapter two. These
participants were few in number and did not appear to have views that differed greatly from
those of the rest of the sample at least as judged from the coarse filter of a cross-tabulation. In
future, it would be useful to repeat this survey with individuals in the Alzheimer’s field to
explore their comparative views in more detail, given that they have access to a highly
developed simulation tool that is not so relevant for those in other drug product areas.
5.2 Consideration of Results
Significant time has passed since the FDA issued its Challenges and Opportunities Report
(FDA, 2004), which identified the bottleneck formed by what it termed the Restricted Pipeline.
The agency characterized the “critical path” challenges associated with this bottleneck as key
problems preventing new technologies from reaching the public domain, as identified earlier
in section 2.3.1. In this research, we have tried to understand how simulation methods, viewed as
one way to accelerate the development of new products, have been embedded in clinical
development to help remediate this problem. Insights into potential policy issues were
analyzed within a well-known analysis framework, namely Fixsen’s structured approach to
“implementation science”. The nomenclature for implementation maturity and core
implementation components introduced by Fixsen (2009), and his structured approach to
implementation science (Fixsen et al., 2009), provided the critical foundation to examine broad
areas of diffusion, dissemination, and implementation of clinical trial simulation policy and
programs.
5.2.1 Exploring the Use of Clinical Trial Simulation
The first stage faced by companies when trying to make a decision to use simulations in a new
application is exploring the nature of the available tools and determining if those tools will be
useful and appropriate for regulatory purposes. Responses to questions designed to understand
the experience in this early stage suggest two areas relevant to this discussion. First is the
degree to which the methodologies associated with clinical trial simulation (CTS) have
penetrated the thinking of those organizations conducting clinical trials. Second is the degree
to which this exploratory stage has been assisted by guidance and materials provided by
regulatory authorities to industry.
The term, “technology penetration”, has been defined quite generally as the rate at which a
specific technical innovation becomes adopted into the everyday life of individuals within a
social group (IGI-Global, 2018). With regard to this research, it might be defined more
narrowly, as the degree to which simulations are adopted by the individuals or teams
responsible for seeking regulatory submission for their drugs. Although the medical products
industry is often viewed as lagging behind other risk-averse sectors in adopting simulation
technology (Manolis et al., 2013), most survey respondents identified that their companies had
at least explored, and commonly even adopted, CTS technology for one or more purposes in
the clinical trial phase (Figures 14 and 19). The penetration of CTS technology appears to be
greater in larger companies. After reanalyzing responses by removing those respondents that
were unable to comment or did not know whether CTS was used within their organization, the
relative percentages of CTS use when stratified by organization size were as follows: 1-200
employees (25%), 201-1000 (67%), 1001-10,000 (75%), 10,001-50,000 (75%), and 50,000+
(93%). These results suggest that the largest companies are typically using CTS technology,
but that the proportion of users decreases progressively with decreasing company size.
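The reanalysis just described can be expressed compactly in code. The following is a minimal
sketch, again with hypothetical column names and example data rather than the actual survey
export: respondents who could not comment or did not know are dropped, and the remaining
responses are stratified by organization size.

    import pandas as pd

    # Hypothetical survey export; the actual data are provided in Appendix C
    df = pd.DataFrame({
        "org_size": ["1-200", "201-1000", "50,000+", "1-200", "50,000+"],
        "uses_cts": ["No", "Yes", "Yes", "Cannot comment", "Yes"],
    })

    # Drop respondents who could not comment or did not know
    answered = df[df["uses_cts"].isin(["Yes", "No"])]

    # Percentage of the remaining respondents using CTS, by organization size
    use_rate = (
        answered.groupby("org_size")["uses_cts"]
        .apply(lambda s: 100 * (s == "Yes").mean())
        .round(0)
    )
    print(use_rate)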
When pharmaceutical companies question whether they should adopt an expensive and
challenging tool to support safety and efficacy evaluations, they must be convinced that the
regulatory agencies will accept the information generated by the new approach. Guidance
documents have traditionally been the way in which FDA communicates its thinking about a
new methodology. Guidance documents are valuable at the early stages of implementation
before many direct conversations have been held with the agency, as confirmed by respondents
who ranked them between very and slightly useful at this stage. However, FDA guidance
documents were not regarded as the most preferred source when preparing to adopt
simulations, ranking behind the human resources of in-house staff, consultants, or even
meetings. Additionally, the availability of guidance documents was also seen to be an
extremely or very important challenge by many respondents, which may suggest that published
regulatory guidance does not currently meet all needs.
It is perhaps not surprising that guidance documents seem to lag when a new methodology is
developing rapidly. The first agency guidance to deal at all with simulations appears to be one
published in 2009 (FDA, 2009b). This guidance did address at a high level the
potential usefulness of models and simulations in pharmaceutical trial planning but fell short of
discussing in any detail their specific use for the current types of applications most commonly
identified by survey respondents here. On the medical device side, a recent guidance titled
“Reporting of Computational Modeling Studies in Medical Device Submissions” provided a
framework for reporting results from modeling but did not comment extensively on why a
certain approach might be chosen (FDA, 2014d). However, neither guidance is very helpful
regarding how and when certain tools might be recommended or implemented. This may not
be surprising given the relatively short history of CTS. Others have noted that regulatory
guidance is by its nature destined to follow the science. Davis points out that policy makers
may never have enough data to perfect policies, and that establishing policy based on ‘bad’
science may often be worse than having no policy at all (Davis, 1979). The question remains
whether these FDA guidance documents are sufficiently granular and specific to keep up with
the many ways in which simulations are being used. For example, although the Drug
Development Tool (DDT) guidance offers three broad types of DDTs (biomarkers, clinical
outcome assessments, and animal models) for which qualification programs have been
established, there is little real ‘guidance’ regarding computerized CTS (FDA, 2014c).
A regulatory agency has a fine line to walk when it is challenged to provide detailed
information about how to approach and implement clinical simulations. It must provide as
much information as possible, yet much of that information comes from its interactions with
companies that have submitted proprietary material and tools during the advisory or
submission process. The involved regulators must then be careful about sharing insights
gained from their access to confidential information when those insights are more specific
than current guidance documents. Nevertheless, it is clearly the intent of FDA to move in the
direction of enhanced transparency. In January of 2018, FDA Commissioner Scott Gottlieb
announced new steps that FDA is taking to enhance the transparency of clinical trial
information to support innovation and scientific inquiry (FDA, 2018a). It will be interesting to
see whether this will include more specific information on the use of novel tools such as
simulations.
5.2.2 Installation
Installation is a phase during which preparations are made to implement a policy or system. Its
success depends on a number of factors: whether adequate decision-support systems are in
place, whether facilitative and administrative support systems have been installed, and whether
sufficient preservice and in-service training has been provided, for example. To be effective,
practitioners need to learn when, where, how, and with whom to use new approaches and new
skills (Fixsen et al., 2009). In this study, questions about the importance of various potential
challenges during the installation phase produced some interesting findings. Staffing emerged
as a particularly important area. For some companies, the most difficult part of installing
simulations appeared to be dealing with the inexperience of clinical teams. For example,
survey responses included comments such as:
Industry and clinical teams are not aware of the time and effort needed to perform
simulations.
and
It (CTS) needs…qualified experts to be able to examine the scenarios so that the
simulations make sense and are manageable…
Attempts to add simulations into a clinical trial program further complicate a system that is
already demanding and sophisticated. To date, clinical trial programs have relied primarily on
randomized clinical trials. FDA Commissioner, Scott Gottlieb, has stated that the agency is
seeing more use of combined-phase studies, sometimes called “seamless trials” (FDA, 2017a),
many of which are supported by computational screening and computerized simulation.
However, even with seamless trials, in-human studies will continue to remain a prominent
aspect of informed decision-making.
The specific needs for CTS implementation are new and will be difficult for traditional clinical
teams. Most of the training of those involved in the clinical research enterprise has been
directed at ensuring an appropriate infrastructure to support and monitor the investigators who
conduct research in the institutions that carry out the trials (English et al., 2010). The clinical
teams have become skilled at managing clinical protocol design and site logistics. However,
relatively few members of these teams have strong computational and engineering skills. As
early as 2000, Holford and colleagues identified the need for trained personnel if CTS was to
be used optimally as a drug development tool. As stated in that paper,
Because the trained personnel…are already scarce, consideration should be given
to expansion of existing, and establishment of new, educational programs to
develop the large number (hundreds) of experts in clinical trial simulation that will
be needed. (N. Holford, Kimko, Monteleone, & Peck, 2000)
The results here reinforce an already-identified call to train more regulatory
pharmacometricians (essentially clinicians with pharmacometrics expertise) who can
implement simulation tools and assess M&S documentation (Romero et al., 2011). Holford
and Karlsson also suggest that penetration of new CTS policy and methods into clinical
development may require the development of a cadre of regulatory pharmacometricians (N.
Holford & Karlsson, 2007): those who can critically assess CTS documentation or tools,
actively review new simulation policies, or search for newly accepted methods. These
assessors should have both technical skills and the ability to recognize when models are
deficient, data are inadequate, or endpoints are badly chosen (N. Holford & Karlsson, 2007).
These multidisciplinary aspects of pharmacometrics demand frequent communication between
clinical and pharmacometric assessors (Barrett, Fossler, Cadieu, & Gastonguay, 2008) as well
as industry and regulators (Romero et al., 2014). Given that the quantitative tools can be quite
sophisticated, those discussions can be intimidating for all but the most experienced
practitioners.
The shortage in CTS expertise is not limited to qualified simulation tool users or data
managers. It goes to the additional level of ensuring that the larger team is able to understand
the advantages and uses of simulation. Because clinical departments in companies draw
heavily from medical and nursing programs, they may find that their staff members are
uncomfortable with the use of highly computational methods, which they may find difficult to
understand (Viceconti et al., 2015). Most clinical coordinators, for example, have nursing
backgrounds. LeFlore has pointed out that the current model of clinical education in advanced
practice nursing continues to be much the same as it was 45 years ago (LeFlore & Thomas,
2016), and has advocated the use of computerized models and simulations as valuable aids
when training APRNs for the contemporary clinical environment (LeFlore & Thomas, 2016).
The current shortages in qualified personnel will ultimately affect the ability of companies to
use computerized simulations in clinical development. Given the severe shortage of SAS
programmers with clinical knowledge (Gupta, 2009), pharmaceutical companies and CROs
may have to reevaluate their in-house training programs to develop these new capabilities in
their own personnel. The deficiencies in CTS expertise are further complicated by the need to
anticipate multiple uses for clinical trial simulations - to assess drug-drug interactions; to
evaluate the safety and efficacy of dosage regimes; or to develop virtual cohorts, for example -
all of which require different analytical methods, different software tools, and different
modeling expertise (Kimko & Peck, 2011). This topic could be explored more fully in future
research.
In contrast to the clear limitations posed by staffing challenges, impediments in other areas
related to installation may be less severe than might have been anticipated. For example, most
respondents did not identify funding as a very important issue. The lower priority of money as
a critical limiting factor is unusual for an implementation project. More often, concerns around
financial limitations will dominate the implementation of a novel project (Andreassen,
Kjekshus, & Tjora, 2015; Kerschner, 2016; Lepori et al., 2007). Bertram and colleagues
(2015) note that financial limitations often impact successful implementation, because the
elements that drive implementation often require difficult decisions regarding the ways in
which money will be spent (Bertram, Blase, & Fixsen, 2015). Thus, resource restrictions are
important to identify and understand early because they can have consequences for the success
of later phases (Fixsen et al., 2009). Simulation methods hold promise of reducing clinical trial
costs. Currently, only 10 percent of trials appear to finish on schedule, and enrollment
activities account for 40 percent of trial costs (Schiller, 2015). Thus, companies may be
willing to invest in novel CTS methods if they promise to reduce trial time and costs.
Another area of concern that has been identified in the literature is that related to the
acceptance of CTS methods by regulatory agencies. Viceconti and colleagues (Viceconti et al.,
2015), for example, pointed to the possibility that regulatory hurdles could significantly reduce
the uptake of simulations in a clinical trial program. They encouraged regulatory bodies across
the world to avoid becoming the bottleneck for CTS adoption. The possible challenges
associated with FDA pushback might also be of concern because regulatory agencies have had
serious funding issues that have reduced their ability to hire reviewers and experts who might
be comfortable with CTS approaches (Gottlieb, 2018). If industry is having trouble building
knowledgeable teams, as is suggested in the research here, we might anticipate that FDA will
have an even greater challenge. The difficulties faced by FDA in hiring sufficient well-trained
personnel are well recognized. For example, a 2015 review of FDA’s Centers of Excellence in
Regulatory Science and Innovation (CERSI) program drew attention to the seriously limited
funding available to FDA for regulatory science research and training. It noted that such
shortfalls in resource allocation could limit FDA’s ability to satisfy its research and training
agenda (CERSI, 2016). More recently, in its 2017 document titled “Initial Assessment of FDA
Hiring and Retention-A path Forward”, FDA authors note that “the percentage of CDER and
CBER hiring managers who self-reported being “very satisfied” with the quality of new hires
is only 31%”. In a status report to Congress, FDA identified that its growing needs in highly
specialized fields such as statistics and bioinformatics were difficult to satisfy because these
individuals are relatively rare and can command much higher salaries in the private sector than
currently can be paid under governmental pay structures (Gottlieb, 2018).
In the face of these concerns, it was surprising that industry seemed to be reasonably satisfied
with the receptivity and knowledge of regulatory reviewers regarding the use of CTS (Figures
29, 30, 32, and 34). The supportive approach of advisors and reviewers may reflect the fact
that the advanced tools are promoted so strongly by FDA at the highest levels. In September
2017, in a keynote address to the Regulatory Affairs Professionals Society, FDA Commissioner
Scott Gottlieb highlighted the importance of advanced statistical and computational
methodologies in the drug development and review process, in which he made
the following statement:
“Our work with high performance computing, and simulation, is a good example of
an area where we need to make sure our methods match the sophistication and
resources of the tools and approaches being adopted by sponsors,” (FDA, 2017a)
Further, in his “FDA 21st Century Cures Workforce Planning Report to Congress”, FDA
Commissioner, Dr. Gottlieb states in his opening letter:
As the science that we regulate becomes more complex and specialized, so too must
the skills and expertise of our staff. (Gottlieb, 2018)
5.2.3 Implementation
A goal of the study reported here was to understand the extent to which pharmaceutical
companies have fully implemented CTS and to identify the challenges that were faced during
their implementation. As part of this process, I was interested in the types of uses to which
those methods have been put. It seemed clear from this survey that CTS implementations were
varied in nature. Some of these uses aligned with FDA’s reports discussing the use of
simulations in areas such as estimating an appropriate sample size for pivotal trials and
evaluating dose response characteristics and ranges for the general population or for specific
population subgroups (FDA, 2017a). In this survey, the most common implementations
appeared to be those for predicting drug efficacy or drug-drug interactions. However, other
types of use were reported quite frequently as well, and the evidence here seems insufficient to
conclude that some uses are more popular than others. Perhaps the bigger lesson emerging
from the data is that simulations are now used in several different ways, all of which have
potentially important contributions.
What were the challenges identified when implementing CTS approaches? The uses of
simulations are well-known to depend on a sophisticated knowledge of statistics, a good data
management team, and access to well-structured datasets (N Holford, 2010; Kimko & Peck,
2011; Manolis et al., 2013). All of these requirements were identified as sources of difficulty
in the present study. At the same time, however, many respondents characterized their overall
experience when implementing simulations as somewhat easy. This result seems
counterintuitive. However, the definition of “easy” is relative. It may be that working with
automated tools and data applications is relatively simple compared to the efforts needed to
gather informed consent and conduct complicated interventions in a large clinical study whose
sites are distributed globally (English et al., 2010). The use of computerized simulations, even
if demanding, may be preferred by some clinical program developers to the more burdensome
regulations and interventions that are involved with the conduct of human clinical trials.
As discussed above, regulatory interactions with FDA were commonly identified as positive
during the installation phase, and a positive experience was not unusual during the
implementation phase as well. A majority of respondents reported that FDA’s knowledge
about the use of simulations generally met their expectations during meetings and discussions.
Further, the reviewers appeared to consider the use of simulations when giving advice, and the
advice was generally consistent with agency messaging. That companies and regulators are
gaining confidence with the use of CTS for submissions is also suggested by the fact that many
companies were adding CTS data to more than one submission. In this regard, study results
suggest that FDA policies for the regulation of computerized simulations have translated into
broad support in the Agency’s review divisions.
At the same time, however, some divisions of FDA were reported to approach simulations
differently than others (Figure 38). These differences may not be surprising given that the
advice and reviews can be affected by the varying scientific competencies required of
reviewers, the nature of the clinical problem to be studied, and the availability of the resources
needed to review the extensive submission dossiers (L. Hill & Johnson, 2004). FDA is aware
of these issues and is attempting to improve the situation. In May 2018, in a keynote address,
FDA Commissioner Scott Gottlieb made the following statement:
Our goal is to make drug review even more team based, so that the skills of FDA
staff who have expertise in discrete areas like statistics and modeling and
simulation and advanced manufacturing can be more easily leveraged when it
comes to trial designs - and more novel but promising development approaches -
are brought to us. (FDA, 2018a)
Less emphasis was placed in this survey on experiences with regulatory agencies from other
economies. However, the questions that were asked about the relative experiences overseas
suggested some level of disparity between regional regulators such as EMA, FDA, and PMDA
(See Table 11). Given the newness of these initiatives, it might not be surprising that different
levels of scientific competency, staffing, funding and procedural standards may act to increase
regulatory dissonance (Storm, 2013). As indicated in Table 11, the anecdotal comments of
respondents reinforced the message that some regulatory agencies required more simulations
than others, or that feedback was erratic between agencies, or that one agency seemed more
open to CTS than another. The fact that nearly one-third of respondents identified differences
in the approaches of reviewers suggests that regulators will be challenged to ensure that
different reviewers “speak with one voice”, especially between agencies. In this regard, the
Avicenna Consortium has recommended a “virtuous circle” that should be supported by all
industrialized countries to coordinate their regulatory frameworks more effectively (Viceconti
et al., 2015).
The majority of respondents in this survey were associated with the pharmaceutical or
biologics sectors but about one in ten had experience in the medical device space and slightly
more (13%) were experienced with combination products. The responses of this smaller
grouping suggested that simulations are also used in the medical device sector. This result was
not unexpected. Many types of computerized approaches are being used to study medical
devices, as identified in Chapter 2. For example, the Virtual Family (VF) developed by the
IT’IS Foundation was found to have been used in more than 120 medical device submissions to
FDA before the year 2014 (Gosselin et al., 2014). However, the design of studies needed to
support clinical development for devices is somewhat different from that for pharmaceutical or
biologics products. Such studies often rely on many fewer patients, and this may affect
whether CTS makes sense for most device submissions (FDA, 2006a). Thus, more work
would be helpful to understand the other ways in which simulation tools are currently
being used.
5.2.4 Sustainability
The final stage of Fixsen’s model of implementation is sustainability. Although it may be too
early to assess this stage, the initial results presented here seem to suggest that current policies
governing computerized simulations in clinical trials are not the gating item in the
sustainability of the technology’s use across industry. Several respondents are using
simulation methods in multiple applications, a result that would not be expected if their first
experiences had been negative. The newness of these approaches, however, makes
sustainability difficult to study at this juncture. This may be a good topic to revisit in several
years after simulation methods have become more established as part of the clinical research
armamentarium. It is only then that it will be possible to assess whether simulation methods
become an expected part of the clinical development toolbox or whether they fall “out of
fashion” as the result of implementation challenges. Many have expressed concern that the
more challenging pharmacometric approaches, such as computerized simulations, may be
difficult to implement given the shortage of trained pharmacometricians. Companies may
attempt clinical trial simulation methods initially but may revert to more traditional
approaches if the methods prove too demanding.
5.2.5 Specific Fit-for-Purpose Tools
In the pharmaceutical sector, the Alzheimer’s Disease (AD) simulator will be an important
first-mover to establish the role of computerized simulation tools. As discussed in Chapter 2,
the CAMD AD-DDT is the first such method to receive a favorable ‘Fit-for-Purpose’
determination by FDA’s Center for Drug Evaluation and Research (CDER). The tool is of
course valuable for those companies developing drugs for Alzheimer’s disease, but it is more
generally important as a model to show others what can be done with CTS tools. The study
finding that only about a third of companies surveyed are carrying out simulations in their
clinical program may disappoint developers who invested so much time and effort in the
undertaking. However, this is an exploratory study with relatively few contributors actually
developing drugs for Alzheimer’s disease, so the findings would need to be explored in more
detail before any conclusions could be reached.
A first step to understanding the extent to which the Alzheimer’s tool has been disseminated
and used would be to investigate the extent of uptake amongst companies that specifically
manufacture Alzheimer’s drugs. We might predict that these companies should all be familiar
with the Alzheimer’s Disease Simulator and most should be using it. However, it appears that
such new approaches have not been communicated effectively to the rank and file of regulatory
and quality departments. Given the perceived value of such tools, it would seem important that
developers and regulators be more aggressive in educating industry. However, it is known that
such communication and education programs can be challenging and require systematic
planning and effort. To ensure that education about techniques such as these is effective, a
large number of participating functions must be included in the conversation, and high levels of
resource commitment and leadership are required for successful implementation of such
information (Coiera, 2006). At the present time, however, communication about the
qualification of CTS tools appears to be a work in progress. The Alzheimer's disease
simulator, having received FDA’s Fit-for-Purpose designation, would seem to be of great use
to these companies if only because of its instructive value regarding interactions with the
regulators and approaches to the creation and testing of virtual patient groups. This value is
supported by survey results in that 13 of 14 respondents familiar with the tool felt that
development of the Alzheimer’s tool was important and would have great value to drug
development programs going forward.
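
To make concrete what a disease-progression simulator of this kind does, the following is a
minimal sketch in Python that generates virtual ADAS-cog trajectories from a simple linear
model with between-patient variability. It is only an illustration: the function name and all
parameter values are assumptions chosen for readability, and the qualified CAMD model is far
more sophisticated (incorporating, for example, covariates and dropout, as described by
Romero et al., 2014).

    import numpy as np

    # Illustrative sketch only -- not the CAMD model. Parameter values are
    # assumptions: mean ADAS-cog baseline of 25 points, mean worsening of
    # 0.5 points/month, with between-patient variability and residual noise.
    rng = np.random.default_rng(42)

    def simulate_adas_cog(n_patients, months, mean_rate=0.5, baseline_mean=25.0,
                          baseline_sd=5.0, rate_sd=0.2, residual_sd=2.0):
        """Simulate virtual-patient ADAS-cog trajectories: linear progression
        with random baselines and rates, plus residual measurement noise."""
        baseline = rng.normal(baseline_mean, baseline_sd, n_patients)
        rate = rng.normal(mean_rate, rate_sd, n_patients)       # points/month
        t = np.arange(0, months + 1)                            # monthly visits
        scores = baseline[:, None] + rate[:, None] * t[None, :]
        scores += rng.normal(0.0, residual_sd, scores.shape)
        return np.clip(scores, 0, 70)                           # ADAS-cog is 0-70

    virtual_cohort = simulate_adas_cog(n_patients=200, months=18)
    print(virtual_cohort.mean(axis=0))   # mean trajectory of the virtual cohort

A sponsor could then place simulated treatment arms alongside such virtual placebo
trajectories to explore design questions before enrolling a single patient.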
The study conducted here is topical because regulatory agencies are still
developing the methods to qualify certain CTS tools in ways that give them greater regulatory
acceptance. At the time of writing, eight DDTs have been qualified, and two such tools have
been granted a Fit-for-Purpose designation (FDA, 2013). However, this small number of
DDTs may presage more to come, as groups such as the Critical Path Institute pursue the use
of simulations for other disease states such as Parkinson’s disease and tuberculosis (CPI,
2017). It may be these other consortia to which certain companies represented in this study
belong; cross-tabulations suggested that most of the companies reported to be participating in
consortia were not working in the areas that would be appropriate for membership in the
Alzheimer’s consortium, but no further information was gathered about their affiliations.
In this study, respondents identified access to the data necessary for simulation purposes as
being a key concern. This result may suggest why companies are motivated to participate in
the development of consortia to build qualified tools, even though the tools will eventually
enter the public domain. As stated by FDA, the Drug Development Tool (DDT) Qualification
Programs allow “collaborative groups to undertake DDT development programs to increase the
efficiency and lessen the individual resource burden” (FDA, 2018b). The members of these
consortia pool datasets that individually might be too small to support tool development, as
the sketch following this paragraph illustrates.
Additional motivation to participate in these programs may come from the opportunities that
they present to interact with agency reviewers and to see their thinking. However, it is likely
that public funding will still be needed for the development and coordination of such activities.
Industry by itself might be unwilling to develop DDT simulation tools at great expense, only to
have them become public property. Results suggest that many companies are also using
commercial or proprietary tools, some of which are presumably inaccessible to competitors.
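
The statistical payoff of pooling can be shown with a small, entirely hypothetical sketch:
three member companies each contribute a placebo-arm dataset too small on its own to estimate
a disease-progression rate precisely, while the pooled dataset yields a visibly tighter
standard error. The company labels, column names, and parameter values below are invented for
illustration.

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(7)

    def company_placebo_data(company, n):
        """Hypothetical per-company placebo data: one progression-rate
        observation (points per month) per patient."""
        return pd.DataFrame({"company": company,
                             "slope": rng.normal(0.5, 0.2, n)})

    # Three consortium members, each with a small dataset
    members = [company_placebo_data(c, n)
               for c, n in (("A", 15), ("B", 20), ("C", 12))]
    pooled = pd.concat(members, ignore_index=True)

    for df in members:   # each company alone: a wide standard error
        se = df["slope"].std(ddof=1) / np.sqrt(len(df))
        print(f"{df['company'].iloc[0]}: mean={df['slope'].mean():.3f}, SE={se:.3f}")

    # Pooled across the consortium: a tighter estimate of the same quantity
    pooled_se = pooled["slope"].std(ddof=1) / np.sqrt(len(pooled))
    print(f"pooled: mean={pooled['slope'].mean():.3f}, SE={pooled_se:.3f}")
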
However, the proprietary nature of these company-specific tools may be threatened. In
January of 2018, FDA Commissioner Scott Gottlieb announced new steps that FDA is taking
to enhance transparency of clinical trial information related to new drugs (FDA, 2018a). This
policy may well affect how proprietary software used in clinical simulations is implemented if
the design and coding of such proprietary systems become vulnerable to disclosure.
5.3 Conclusions and Future Considerations
The medical products industry, and particularly the pharmaceutical industry, has gone through
major changes in the last decade. The pressures to control healthcare costs and to find new
products that can fill a diminishing pipeline have forced regulators, tasked with ensuring public
health, to support ways to decrease the high costs and long timelines associated with clinical
trials. New simulation tools hold promise to reduce patient numbers in clinical trials based on
data from previous development programs. However, adoption of these tools may be delayed,
particularly for risk-averse companies whose staff are poorly educated about simulation tools
and who may not be willing to include and defend simulated data in their submissions to
regulatory reviewers.
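
As a concrete, hedged illustration of how simulation can inform patient numbers, the sketch
below runs many virtual two-arm trials at several candidate sample sizes and estimates the
power of each design. The effect size and variability here are invented assumptions; in an
actual CTS exercise, the virtual patients would be generated from models built on prior
development data rather than from the simple distributions used here.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def simulated_power(n_per_arm, effect=2.0, sd=6.0, n_trials=2000, alpha=0.05):
        """Fraction of simulated trials in which a two-sample t-test on the
        primary endpoint detects the assumed treatment effect."""
        hits = 0
        for _ in range(n_trials):
            placebo = rng.normal(0.0, sd, n_per_arm)
            active = rng.normal(effect, sd, n_per_arm)
            _, p = stats.ttest_ind(active, placebo)
            if p < alpha:
                hits += 1
        return hits / n_trials

    # Scan candidate sample sizes; the smallest N with adequate power is the
    # design the simulation supports.
    for n in (50, 100, 150, 200):
        print(f"N={n}/arm: estimated power {simulated_power(n):.2f}")
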
The problem is in part one of trust. Traditionally, the industry has used randomized clinical
trials to assess safety and efficacy of medical products. As part of these trials, confidence was
built in a certain set of statistical models. More recently, the trust has been extended to
pharmacokinetic and in vitro models that have been verified over time. CTS tools will face the
same challenge of proving their worth. The FDA’s ‘Fit-for-Purpose’ designation is a good
start in providing some assurance that a computerized simulation tool can be accepted as a
“trusted” methodology for a defined context of use, but more needs to be done.
Two issues seem particularly important for future progress in this regard. First is the need to
build competencies in simulation tools so that they can be integrated more effectively into
clinical development programs. This may include the development of university programs and
courses as well as intensive workshops for those already in the industry. Second, the
availability and usefulness of such tools should be communicated more widely. The
challenges of such communication suggest that a multipronged approach may be needed that
could include workshops and presentations by regulators but also by experts and industry
groups as part of professional meetings and internet presentations.
APPENDIX A. SURVEY QUESTIONS – FINAL VERSION
APPENDIX B. SURVEY QUESTIONS – DRAFT VERSION
APPENDIX C. SURVEY RESULTS AND REPORTS
Q2: What title is most closely aligned with your current responsibilities?
Results Data Table
Other (please specify)
Q3: Which statement best describes your department's activities?
Results Data Table
Other (please specify)
Q4 Which statement best describes your sector?
Results Data Table
Q5: Which statement best describes the size of your overall organization?
Results Data Table
Q6: How would you describe your product space? (Choose all that apply)
Results Data Table
Q7: What class(es) of medical products does the organization sell for the US
market?
Results Data Table
Exploration: (Q8-11)
Q8: Are computer simulations used in your organization’s phase 2 and/or 3
clinical studies?
Results Data Table
Q9: What types of computer simulations did you consider in your
exploration of clinical simulation tools?
Results Data Table
Other (1) (please specify)
Other (2) (please specify)
Q10: How useful were the following resources in preparing to incorporate
simulations into clinical studies?
Results Data Table
Other (1) (please specify)
Other (2) (please specify)
Q11: How important were the following challenges when considering the use
of simulations for your clinical projects?
Results Data Table
Installation: (Q12-23)
Q12: At this point, have you assigned financial resources to any clinical
computer simulations?
Results Data Table
Q13: At this point, what types of resources have you assigned to the
installation of any type of computer simulation in your clinical programs?
(List all that apply)
Results Data Table
Q14: Are you familiar with the Alzheimer's disease simulator developed
through the Critical Path Institute?
Results Data Table
Q15: Have you used the Critical Path Institute's Alzheimer's disease
simulator?
Results Data Table
Q16: How important to you was the fact that the Critical Path Institute's
Alzheimer's simulation tool has received the 'Fit-for-Purpose' (FFP)
designation by the FDA?
Results Data Table
Q17: What is your view on the usefulness that a 'Fit-for-Purpose' (FFP)
simulation tool such as the Critical Path Institute's Alzheimer's disease
simulator will have on drug development programs?
Results Data Table
Q18: Would you like to comment on any aspects of the impact that a 'Fit-
for-Purpose' (FFP) designated simulation tool such as the Alzheimer's
disease simulator will have on drug development programs?
Text responses:
Q19: Has your company cooperated in the development of a simulation tool
through a consortium?
Results Data Table
Q20: With which consortia are you working, or do you belong?
Text responses:
Q21: What category of simulation tools have you used in your clinical
programs? (select all that apply)
Results Data Table
Other (please specify)
Other (please specify)
Q22: Please rank the importance of the following statements regarding the
development and use of computer simulation tools in clinical development?
Results Data Table
Q23: What was your experience with the following areas during the initial
implementation of simulation tools?
Results Data Table
Implementation: (Q24-26)
Q24: How many times have you included a simulation as part of your
submissions?
Results Data Table
Q25: Please rank the FDA's receptivity toward the simulated data within
your submission.
Results Data Table
Q26: Please give your opinion on the following statements:
Results Data Table
Sustainability: (Q27-36)
Q27: Please share your opinion on the following statements:
Results Data Table
Q28: Did the time to complete simulation projects from start to finish:
Results Data Table
Q29: What do you think was the most difficult part of incorporating
simulations into your clinical program? (select all that apply)
Results Data Table
Other (1)
Q30: Would you like to comment on aspects of the implementation that were
more difficult than you expected?
Text responses:
Q31: Overall, what was your experience when implementing simulations?
Results Data Table
Q32: Have you reported the results of the same simulations to different
regulatory bodies? (select all that apply)
Results Data Table
Q33: Did you find that the regulatory reviewers differed in their approaches
to the use of simulations?
Results Data Table
Q34: Can you comment on the nature of the disagreement?
Text responses:
Q35: Can you share any concerns that you have had about the use of
simulations in clinical development?
Text responses:
Q36: Can you provide us with more information on your views of the
use of simulations in clinical development?
Text responses:
APPENDIX D. VIRTUAL FAMILY MODELS
Table 11: Virtual Family Attributes
Elements reproduced from FDA’s “Virtual Family” website (FDA, 2017h).
Figure 39: Virtual Family
Elements reproduced from FDA’s “Virtual Family” website (FDA, 2017h).
APPENDIX E. 2010 CRITICAL PATH FUNDING
Table 12: Summary of FY 2010 Critical Path Initiative Obligations
Table reproduced from FDA’s “Critical Path Initiative, report on projects receiving Critical
Path support, fiscal year 2010 Report” (FDA, 2010a).
Table 13: Center for Drug Evaluation and Research (CDER) FY 2010 Funding
Summary
Table reproduced from FDA’s “Critical Path Initiative, report on projects receiving Critical
Path support, fiscal year 2010 Report” (FDA, 2010a).
APPENDIX F. EMA FRAMEWORK
Figure 40: EMA Framework for M&S in Regulatory Review
Figure reproduced from EMA’s “Role of modelling and simulation in regulatory decision
making in Europe” (EMA, 2011b).
Figure 41: EMA – Continuum of Learn/Confirm/Predict for using M&S
Figure reproduced from EMA’s “Role of modelling and simulation in regulatory decision
making in Europe” (EMA, 2011b).
Figure 42: EMA – Types of M&S documentation reviewed
Figure reproduced from EMA’s “Role of modelling and simulation in regulatory decision
making in Europe” (EMA, 2011b).
REFERENCES
21st Century Cures Act (2016).
Andreassen, H., Kjekshus, L., & Tjora, A. (2015). Survival of the project: A case study of ICT
innovation in health care. Social Science and Medicine, 132, 62.
Arshagouni, P. (2002). Federal court invalidates the FDA pediatric rule: AAPS v. FDA.
Retrieved from
https://www.law.uh.edu/healthlaw/perspectives/Children/021223Federal.html
Barrett, J., Fossler, M., Cadieu, K., & Gastonguay, M. (2008). Pharmacometrics: A
multidisciplinary field to facilitate critical thinking in drug development and
translational research settings. Clin Pharmacol Ther., 48, 632-649.
Bedding, A., Scott, G., Brayshaw, N., Leong, L., Herrero-Martinez, E., Looby, M., & Lloyd, P.
(2013). Clinical trial simulations – an essential tool in drug development. Retrieved
from https://www.ncbi.nlm.nih.gov/pubmed/15559184
Bertram, R., Blase, K., & Fixsen, D. (2015). Improving programs and outcomes:
Implementation frameworks and organization change.
Bhatt, A. (2010). Evolution of clinical research: A history before and beyond James Lind.
Perspectives in Clinical Research, Jan-Mar (1(1)), 6-10.
Califf, R., Filerman, G., Murray, R., & Rosenblatt, M. (2012). Appendix D discussion paper:
The clinical trials enterprise in the United States: A call for disruptive innovation
Envisioning a Transforming Clinical Trials Enterprise in the United States:
Establishing An Agenda for 2020: Workshop Summary. Washington, D.C.
CDISC. (2006). Study Data Tabulation Model - Version 1.4. Retrieved from
http://www.cdisc.org/sdtm
CDISC. (2017). CDISC. Retrieved from https://www.cdisc.org/
CERSI. (2016). Centers of Excellence in Regulatory Science and Innovation (CERSI) ---
Program evaluation. Retrieved from
https://www.fda.gov/downloads/AdvisoryCommittees/CommitteesMeetingMaterials/Sc
ienceBoardtotheFoodandDrugAdministration/UCM488115.pdf
Code of Federal Regulations, 45 CFR Part 46 (2009).
Coiera, E. (2006). Communication Systems in Healthcare. Clinical Biochemist, 27(May 2006),
89-98.
Corrigan, B. (2013). A comprehensive clinical trial simulation tool for Alzheimer’s disease:
Lessons for model collaboration CAMD M&S Workgroup. Washington, DC: CAMD.
CPI. (2017). Critical Path Institute. Retrieved from https://c-path.org/
Crawford, M. (2016). The when and why of quantitative systems pharmacology. AAPS News
Magazine, Dec 2016, 30-31.
Davis, D., & Comar, C. (1979). Science and regulatory policy. American Association for the
Advancement of Science, 203(4375).
Dellinger, A. (2005). Validity and the review of literature. Research in the schools, 12(2), 41-
54.
Domecq, J., Prutsky, G., Elraiyah, T., Wang, Z., Nabhan, M., Shippee, N., . . . Murad, M.
(2014). Patient engagement in research: A systematic review. BMC Health Services
Research, 14:89, 1-9.
EMA. (2010). Reflection paper on expectations for electronic source data and data transcribed
to electronic data collection tools in clinical trials (Vol. EMA/INS/GCP/454280/2010).
London, United Kingdom: EMA.
EMA. (2011a). Modelling and simulation examples that failed to meet regulator's expectations.
Retrieved from
http://www.ema.europa.eu/docs/en_GB/document_library/Presentation/2011/11/WC50
0118266.pdf
EMA. (2011b). Role of modelling and simulation in regulatory decision making in Europe.
Retrieved from
http://www.ema.europa.eu/docs/en_GB/document_library/Presentation/2011/11/WC50
0118262.pdf
EMA. (2012). EFPIA-EMA modelling and simulation workshop report. Retrieved from
http://www.ema.europa.eu/docs/en_GB/document_library/Report/2012/05/WC5001271
18.pdf
EMA. (2013). Concept paper on extrapolation of efficacy and safety in medicine development.
EMA/129698/2012. Retrieved from
http://www.ema.europa.eu/docs/en_GB/document_library/Scientific_guideline/2013/04
/WC500142358.pdf
EMA. (2015). European Medicines Agency policy on publication of clinical data for medicinal
products for human use. EMA/240810/2013. Retrieved from
http://www.ema.europa.eu/docs/en_GB/document_library/Other/2014/10/WC50017479
6.pdf
EMA. (2016). European Medicines Agency policy on access to EudraVigilance data for
medicinal products for human use. EMA/759287/2009. Retrieved from
http://www.ema.europa.eu/docs/en_GB/document_library/Other/2015/12/WC50019904
8.pdf
English, R., Lebovitz, Y., & Giffin, R. (2010). Transforming clinical research in the United
States: Challenges and opportunities, workshop summary Forum on Drug Discovery,
Development, and Translation. Washington, DC.
FDA. (2004). Challenges and opportunities report - March 2004, innovation or stagnation:
Challenge and opportunity on the critical path to new medical products. Retrieved
from
http://www.fda.gov/ScienceResearch/SpecialTopics/CriticalPathInitiative/CriticalPath
OpportunitiesReports/ucm077262.htm
FDA. (2006a). Clinical trials for medical devices: FDA and the IDE process. Retrieved from
https://www.fda.gov/downloads/training/clinicalinvestigatortrainingcourse/ucm378265.
pdf
FDA. (2006b). Critical Path opportunities initiated during 2006. Retrieved from
http://www.fda.gov/ScienceResearch/SpecialTopics/CriticalPathInitiative/CriticalPath
OpportunitiesReports/ucm077251.htm
FDA. (2006c). Critical Path opportunities list. Retrieved from
http://www.fda.gov/downloads/ScienceResearch/SpecialTopics/CriticalPathInitiative/C
riticalPathOpportunitiesReports/UCM077258.pdf
FDA. (2006d). Ensuring the safety of marketed medical devices: CDRH's medical device
postmarket safety program. Retrieved from
https://www.fda.gov/ohrms/dockets/dockets/06n0292/06n-0292-bkg0001-08-Tab-07-
vol2.pdf
FDA. (2006e). Information Sheet Guidance for IRBs, Clinical Investigators, and Sponsors -
Significant Risk and Nonsignificant Risk Medical Device Studies. Retrieved from
http://www.fda.gov/downloads/RegulatoryInformation/Guidances/UCM126418.pdf
FDA. (2008). The Sentinel Initiative: A national strategy for monitoring medical product
safety. Retrieved from
http://www.fda.gov/Safety/FDAsSentinelInitiative/ucm089474.htm
FDA. (2009a). The Critical Path Initiative, report on key achievements in 2009. Retrieved
from
http://www.fda.gov/downloads/ScienceResearch/SpecialTopics/CriticalPathInitiative/U
CM221651.pdf
FDA. (2009b). Guidance for industry: End-of-phase 2A meetings. Retrieved from
https://www.fda.gov/downloads/Drugs/.../Guidances/ucm079690.pdf
FDA. (2010a). Critical Path Initiative, report on projects receiving Critical Path support, fiscal
year 2010 Report. Retrieved from
http://www.fda.gov/downloads/ScienceResearch/SpecialTopics/CriticalPathInitiative/U
CM249262.pdf
FDA. (2010b). Guidance for industry - Adaptive design clinical trials for drugs and biologics.
Retrieved from https://www.fda.gov/downloads/drugs/guidances/ucm201790.pdf
FDA. (2010c). The Sentinel Initiative (update 2010). In CDER (Ed.). Washington, DC: US
DHHS.
FDA. (2013). Drug development tools: Fit-for-purpose initiative. Retrieved from
https://www.fda.gov/drugs/developmentapprovalprocess/ucm505485.htm
FDA. (2014a). Guidance for Industry and FDA Staff Qualification Process for Drug
Development Tools. Retrieved from
http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Gui
dances/UCM230597.pdf
FDA. (2014b). Medical device development tools (MDDT) program. Retrieved from
http://www.fda.gov/MedicalDevices/ScienceandResearch/MedicalDeviceDevelopment
ToolsMDDT/
FDA. (2014c). Medical device development tools draft guidance for industry, tool developers,
and Food and Drug Administration staff. Retrieved from
http://www.fda.gov/downloads/MedicalDevices/DeviceRegulationandGuidance/Guidan
ceDocuments/UCM374432.pdf
FDA. (2014d). Reporting of computational modeling studies in medical device submissions.
Retrieved from
https://www.fda.gov/downloads/MedicalDevices/DeviceRegulationandGuidance/Guida
nceDocuments/UCM381813.pdf
FDA. (2014e). Significant dates in U.S. food and drug law history. Retrieved from
https://www.fda.gov/AboutFDA/WhatWeDo/History/Milestones/ucm128305.htm
FDA. (2014f). Study data standards for submission to CDER. Retrieved from
http://www.fda.gov/Drugs/DevelopmentApprovalProcess/FormsSubmissionRequireme
nts/ElectronicSubmissions/ucm248635.htm
FDA. (2015). Product development under the animal rule, guidance for industry. Retrieved
from
http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Gui
dances/UCM399217.pdf
FDA. (2016a). Applying human factors and usability engineering to medical devices. Retrieved
from https://www.fda.gov/downloads/MedicalDevices/.../UCM259760.pdf
FDA. (2016b). Physiologically based pharmacokinetic analyses —Format and content
guidance for industry - draft guidance. Retrieved from
https://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Gu
idances/UCM531207.pdf
FDA. (2016c). Prescription drug user fee rates for fiscal year 2017. Retrieved from
https://www.federalregister.gov/documents/2016/07/28/2016-17870/prescription-drug-
user-fee-rates-for-fiscal-year-2017
FDA. (2017a). Dr. Gottlieb's speech to the Regulatory Affairs Professionals Society (RAPS)
2017 regulatory conference. Retrieved from
https://www.fda.gov/NewsEvents/Speeches/ucm575400.htm
FDA. (2017b). Frequently asked questions about the FDA drug approval process. Retrieved
from https://www.fda.gov/drugs/resourcesforyou/specialfeatures/ucm279676.htm#4
FDA. (2017c). Letter of support initiative. Retrieved from
https://www.fda.gov/drugs/developmentapprovalprocess/ucm434382.htm
FDA. (2017d). Qualification of medical device development tools - Guidance for industry, tool
developers, and Food and Drug Administration staff. Retrieved from
https://www.fda.gov/downloads/MedicalDevices/DeviceRegulationandGuidance/Guida
nceDocuments/UCM374432.pdf?source=govdelivery&utm_medium=email&utm_sour
ce=govdelivery
FDA. (2017e). Regulatory science extramural research and development projects. Retrieved
from
https://www.fda.gov/ScienceResearch/SpecialTopics/RegulatoryScience/ucm227223.ht
m
FDA. (2017f). Study Data Standards: What You Need To Know. Retrieved from
https://www.fda.gov/downloads/Drugs/DevelopmentApprovalProcess/FormsSubmissio
nRequirements/ElectronicSubmissions/UCM511237.pdf
FDA. (2017g). Updated process for qualification of drug development tools under new FD&C
Act section 507. Retrieved from
https://www.fda.gov/Drugs/DevelopmentApprovalProcess/DrugDevelopmentToolsQua
lificationProgram/ucm561587.htm
FDA. (2017h). Virtual Family. Retrieved from
https://www.fda.gov/aboutfda/centersoffices/officeofmedicalproductsandtobacco/cdrh/c
drhoffices/ucm302074.htm
FDA. (2018a). Keynote address by commissioner Gottlieb to the 2018 FDLI annual
conference. Retrieved from
https://www.fda.gov/NewsEvents/Speeches/ucm606541.htm
FDA. (2018b). Statement from FDA Commissioner Scott Gottlieb, M.D. on advancing the
development of novel treatments for neurological conditions; part of broader effort on
modernizing FDA’s new drug review programs. Retrieved from
https://www.fda.gov/NewsEvents/Newsroom/PressAnnouncements/ucm596897.htm
Fieller, N. (1996). Medical Statistics: Clinical Trials. University of Sheffield: NRJF.
Fincham, J. (2008). Response rates and responsiveness for surveys, standards, and the journal.
American Journal of Pharmaceutical Education, 72(2).
Fixsen, D., Blase, K., Naoom, S., & Wallace, F. (2009). Core implementation components.
Research on Social Work Practice, 19(September), 531-540.
Food and Drug Administration Modernization Act, 21 USC 301 (1997).
Food Drug and Cosmetic Act, 21 USC 301 (1938).
GAO. (2006). New drug development: Science, business, regulatory, and intellectual property
issues cited as hampering drug development efforts. Retrieved from
http://www.gao.gov/products/GAO-07-49
Garde, D. (2016). Eli Lilly’s Alzheimer’s drug fails in late-stage trial, dashing hopes.
Retrieved from https://www.statnews.com/2016/11/23/alzheimers-eli-lilly-drug-trial/
Getz, K. (2010). Chasing veteran US sites out of the enterprise. Retrieved from
http://www.appliedclinicaltrialsonline.com/chasing-veteran-us-sites-out-enterprise
Gosselin, M., Neufeld, E., Moser, H., Huber, E., Farcito, S., Gerber, L., . . . Kuster, N. (2014).
Development of a new generation of high-resolution anatomical models for medical
device evaluation: The virtual population 3.0. Physics in Medicine and Biology, 59(18),
5287–5303.
Gottlieb, S. (2018). FDA 21st Century Cures workforce planning report to congress.
Retrieved from
https://www.fda.gov/downloads/regulatoryinformation/lawsenforcedbyfda/significanta
mendmentstothefdcact/21stcenturycuresact/ucm612023.pdf
Griffin, G. (2017). Sharing the results of clinical trials: Industry views on disclosure of data
from industry-sponsored clinical research. Los Angeles, CA: University of Southern
California.
Gupta, S. (2009). Practice makes perfect: Training and performing in the pharmaceutical
industry.
Hill, A. (1965). The environment and disease: Association or causation. Proceedings of the
Royal Society of Medicine, 58(5), 295-300.
Hill, L., & Johnson, H. (2004). Fertility changes among immigrants: Generations,
neighborhoods, and personal characteristics. Social Science Quarterly, 85(3), 811-827.
Holford, N. (2010). Clinical trial simulation: A review. Clinical Pharmacology and
Therapeutics, 88(2).
Holford, N., & Karlsson, M. (2007). Time for quantitative clinical pharmacology: A proposal
for a pharmacometrics curriculum. Clinical Pharmacology and Therapeutics, 82(1),
103-105.
Holford, N., Kimko, H., Monteleone, J., & Peck, C. (2000). Simulation of clinical trials.
Annual Review of Pharmacology and Toxicology, 40, 209-234.
ICCVAM Authorization Act, 42 USC 201 (2000).
ICH. (1996). Guidance for Industry, E6 Good Clinical Practice: Consolidated Guidance.
Retrieved from http://www.ich.org/products/guidelines/efficacy/efficacy-
single/article/good-clinical-practice.html
IGI-Global. (2018). What is technology penetration. Retrieved from https://www.igi-
global.com/dictionary/technology-penetration/38557
Jamieson, M. (2011). The role of universities in the commercialization of medical products: A
survey of industry views. Los Angeles, CA: University of Southern California.
Jenkins, J., & Hubbard, S. (1991). History of clinical trials. Seminars in Oncology Nursing,
7(4), 228-234.
Jones, T. (2011). A quick guide to survey research. RCS(November), 5-7.
Joppi, R., Bertele, V., & Garattini, S. (2006). Orphan drug development is progressing too
slowly. British Journal of Clinical Pharmacology, 2006, 1365-2125.
Kerschner, J. (2016). The importance of NIH funding to spur biomedical research. WMJ :
Official Publication of the State Medical Society of Wisconsin, 115(1), 54-55.
Kim, J., & Scialli, A. (2011). Thalidomide: The tragedy of birth defects and the effective
treatment of disease. Toxicological Sciences, 122(1), 1–6.
Kimko, H., & Peck, C. (2011). Clinical Trial Simulations - Applications and Trends. New
York: Springer.
Kingdon, J. (1995). Agendas, Alternatives, and Public Policies (2nd ed.). New York:
Longman.
Kramer, J., Smith, P., & Califf, R. (2012). Impediments to clinical research in the United
States. Retrieved from https://www.ncbi.nlm.nih.gov/pubmed/22318614
Kush, R. (2012). Current status and future scope of CDISC standards. CDISC Journal, October
2012.
Lazarou, J., Pomeranz, B., & Corey, P. (1998). Incidence of adverse drug reactions in
hospitalized patients: a meta-analysis of prospective studies. JAMA, 279(15), 1200-
1205.
LeFlore, J., & Thomas, P. (2016). Educational changes to support advanced practice nursing
education. J Perinat Neonat Nurs, 30(3), 187-190.
Lepori, B., Van den Besselaar, P., Dinges, M., Van der Meulen, B., Potì, B., Reale, E., &
Theves, J. (2007). Indicators for comparative analysis of public project funding:
Concepts, implementation and evaluation. Research Evaluation, 16(4), 143-255.
Lindemann, N. (2016). 34 Ways to improve your survey response rate. Retrieved from
https://surveyanyplace.com/improve-survey-response-rate/
Lindemann, N. (2018). What’s the average survey response rate? [2018 benchmark].
Retrieved from https://surveyanyplace.com/average-survey-response-rate/
Manolis, E., Rohou, S., Hemmings, R., Salmonson, T., Karlsson, M., & Milligan, P. (2013).
The role of modeling and simulation in development and registration of medicinal
products: Output from the EFPIA/EMA modeling and simulation workshop (Vol. 2, pp.
1-4).
Marshall, S., Burghaus, R., Cosson, V., Cheung, S., Chenel, M., Pasqua, O., Frey, N., Hamr,
B., Harnisch, L., Ivanow, F., Kerbusch, T., Lippert, J., Milligan, P., Rohou, S., Staab,
A., Steimer, J., Tornøe, C., Visser, S. (2016). Good practices in model-informed drug
discovery and development: Practice, application, and documentation. CPT
Pharmacometrics Syst. Pharmacol., 5, 93-122.
McMahon, A., Watt, K., Wang, J., Green, D., Tiwari, R., & Burckart, G. (2016). Stratification,
hypothesis testing, and clinical trial simulation in pediatric drug development.
Retrieved from
https://www.fda.gov/downloads/ScienceResearch/SpecialTopics/PediatricTherapeutics
Research/UCM525110.pdf
Medway, R., & Fulton, J. (2012). When more gets you less: A meta-analysis of the effect of
concurrent web options on mail survey response rates. Public Opinion Quarterly, 76(4),
733-746.
mooreslaw.org. (2017). Moore's Law. Retrieved from http://www.mooreslaw.org/
Mukherjee, S. (2017). The way we treat cancer will be revolutionized as gene therapy comes
to the U.S. Fortune. Retrieved from http://fortune.com/2017/08/30/fda-novartis-car-t-
kymriah/
Mullin, R. (2017). Costly drugs. C&EN, 95, 30-34.
Nassar-McMillan, S., & Borders, D. (2002). Use of focus groups in survey item development.
The Qualitative Report, 7.
Nguyen, M. (2014). Sentinel: Harnessing the power of databases to evaluate medical products.
FDA Voice, March 18, 2014.
NIH. (2012). Implementation science information and resources. Retrieved from
https://www.fic.nih.gov/ResearchTopics/Pages/ImplementationScience.aspx
Olson, S., & Downey, A. (2013). Sharing Clinical Research Data. Washington, D.C.: Institute
of Medicine.
Orcher, T. (2007). Conducting a Survey: Techniques for a Term Project. Glendale, CA:
Pyrczak Pub.
The Orphan Drug Act, 21 CFR Part 316 (1983).
Patton, M. (2002). Qualitative Research and Evaluation Methods (3rd ed.). Thousand Oaks,
CA: Sage Publications.
Pearl, R. (1923). Introduction to Medical Biometry and Statistics. Philadelphia: W. B. Saunders
Company.
PMDA. (2011). Current Position and Expectation for Use of M&S in Drug Development and
Regulatory Decision Making: The PMDA Viewpoint. Retrieved from
https://ss.pmda.go.jp/en_all/search.x?q=use+of+modeling+and+simulation+in+drug+de
velopment&ie=UTF-8&page=1&x=40&y=7
PricewaterhouseCoopers. (1999). Silicon rally: The race to e-R&D. Retrieved from
http://www.pwc.com/gx/en/pharma-life-sciences/pdf/silicon_rally.pdf
Redington, L. (2009). The Orphan Drug Act of 1983: A case study of issue framing and the
failure to effect policy change from 1990-1994. Chapel Hill, NC: The University of
North Carolina at Chapel Hill
Romero, K., Corrigan, B., Neville, J., Kopko, S., & Cantillon, M. (2011). Striving for an
integrated drug-development process for neurodegeneration: The coalition against
major diseases. Future Medicine, 1(5), 379–385.
Romero, K., Rogers, A., Polhamus, D., Qiu, R., Stephenson, D., Mohs, R., . . . Corrigan, B.
(2014). The future is now: Model-based clinical trial design for Alzheimer’s disease.
Clinical Pharmacology and Therapeutics, 38(November).
Roterman-Konieczna, I. (2015). Simulations in Medicine: Pre-clinical and Clinical
Applications. Berlin/Boston: De Gruyter.
Scannell, J., Blanckley, A., Boldon, H., & Warrington, B. (2012). Diagnosing the decline in
pharmaceutical R&D efficiency. Nature Reviews Drug Discovery, 11(March
2012), 101-109.
Schaefer, D., & Dillman, D. (1998). Development of a standard e-mail methodology: Results
of an experiment. Public Opinion Quarterly, 62(3), 378-397.
Schiller, B. (2015). The future of clinical trials: How bringing patients to the centre can cut
costs and deliver better outcomes. eyeforpharma(2016), 1-8.
Schneider, L., Mangialasche, F., Andreasen, N., Feldman, H., Giacobini, E., Jones, R., . . .
Kivipelto, M. (2014). Clinical trials and late-stage drug development for Alzheimer’s
disease: An appraisal from 1984 to 2014. Journal of Internal Medicine, 275(3),
251–283.
Seoane-Vazquez, E., Rodriguez-Monguio, R., Szeinbach, S., & Visaria, J. (2008). Incentives for
orphan drug research and development in the United States. Orphanet Journal of Rare
Diseases, 3(33).
Sertkaya, A., Birkenbach, A., Berlind, A., & Eyraud, J. (2014). Examination of Clinical Trial
Costs and Barriers for Drug Development. Washington, DC: ERG.
Sheehan, K. (2001). E-mail survey response rates: A review. Journal of Computer-Mediated
Communication, 6(2).
Sheehan, K., & Hoy, M. (1999). Flaming, complaining, abstaining: How online users respond
to privacy concerns. Journal of Advertising, 28(3), 37-51.
Shoaibi, A. (2017). PRISM identifies vaccine safety issues. FDA Voice, April 7, 2017.
Sivo, S., Saunders, C., Chang, Q., & Jiang, J. (2006). How low should you go? Low response
rates and the validity of inference in IS questionnaire research. Journal of the
Association for Information Systems, 7(6), 351-356,359-360,362-372,374-414.
Smerkanich, N. (2016). Benefits-risk frameworks: Implementation by industry. Los Angeles,
CA: University of Southern California.
Solomon, A., & Gold, G. (1955). Potassium transport in human erythrocytes: Evidence for a three
compartment system. The Journal of General Physiology, 38(3), 371-388.
Storm, N. (2013). Regulatory dissonance in the global development of drug therapies: A case
study of drug development in postmenopausal osteoporosis. Los Angeles, CA:
University of Southern California.
Teorell, T. (1937). Kinetics of distribution of substances administered to body, I: The
extravascular modes of administration. Archives internationales de pharmacodynamie
et de therapie, 57, 205–225.
UAMS. (2014). How long does it take to get IRB approval. Retrieved from
http://irb.uams.edu/2014/06/how-long-does-it-take-to-get-irb-approval-2/
Upjohn v. Finch, 422 F.2d 944 C.F.R. (1970).
Vaughn, S., Schumm, J. S., & Sinagub, J. (1996). Focus Group Interviews in Education and
Psychology. Thousand Oaks, CA: Sage.
Viceconti, M., Morley-Fletcher, E., Henney, A., Contin, M., El-Arifi, K., McGregor, C., . . .
Wilkinson, E. (2015). In Silico Clinical Trials: How Computer Simulation Will
Transform The Biomedical Industry. Brussels, Belgium: Avicenna Consortium.
Weible, R., & Wallace, J. (1998). Cyber research: The impact of the internet on data collection.
Marketing Research, 10(3), 19-24.
Weisfeld, V., English, R., & Claiborne, A. (2011). Public engagement and clinical trials: New
models and disruptive technologies: Workshop summary. Washington, DC.
Winstone, J., Chadda, S., Ralston, S., & Sajosi, P. (2015). Review and comparison of clinical
evidence submitted to support European Medicines Agency market authorization of
orphan-designated oncological treatments. Orphanet Journal of Rare Diseases,
10(139), 1-7.
Woods, H., & Russell, W. (1931). An Introduction to Medical Statistics. Suffolk, Great Britain:
Richard Clay & Sons Limited.
Zuazo, J., Sjogren, J., & Hurley, C. (2012). SDTM datasets: Unlocking CDER's 7 most
commonly found problems. Retrieved from http://www.mmsholdings.com/mms-
blog/sdtm-datasets-unlocking-cders-7-most-commonly-found-problems/