COMPETENCY BASED EDUCATION IN LOW RESOURCE SETTINGS:
DESIGN AND IMPLEMENTATION OF A NOVEL SURGICAL TRAINING PROGRAM
by
Meghan McCullough, MD
A Thesis Presented to the
FACULTY OF THE USC GRADUATE SCHOOL
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree of
MASTER OF SCIENCE
CLINICAL, BIOMEDICAL AND TRANSLATIONAL INVESTIGATIONS
May 2018
© 2018 Meghan McCullough
Table of Contents:
Introduction
Part 1: Review of the Existing Literature
Part 2: Diagonal Development in the Short-term Mission Model
Part 3: Competency-Based Surgical Education
Part 4: Program Design
    Creation of the Evaluation Instrument
    Participant Selection
    Program Structure
    Credentialing Pathway
Part 5: Pilot Experience
    Results
    Limitations and Challenges
    Future Directions
Conclusion
References
Appendix 1: Cleft Surgeon Training Program Mission Workbook
Appendix 2: Program Guide
INTRODUCTION:
The need to increase surgical capacity in low- and middle-income countries (LMICs) has gained
significant attention in the global health community over the past two decades. Surgical disease
is an essential component of global public health as it accounts for nearly 30% of the global
burden of disease (Meara, 2015). The Lancet Commission on Global Surgery (LCoGS) estimates
that five billion people, over half of the world’s population, lack access to safe, timely surgical
care (Mock, 2015). The burden is greatest in the poorest third of the world, with these regions
receiving only 3% of the total procedures performed each year while also bearing nearly half of
all surgical deaths (Meara, 2015). A critical shortage of trained surgeons, predominantly in
LMICs, is a significant contributor to this unmet need. The LCoGS conducted a qualitative
investigation into the contextual challenges limiting surgical care in low-resource settings and
found that in addition to deficiencies in basic infrastructure, equipment and supplies, both rural
and urban teams lacked the trained workforce and clinical support to meet patient need (Raykar,
2016).
Among the core indicators of national surgical system strength proposed by the LCoGS is
surgical workforce density. This metric represents the number of surgeons, anesthesiologists and
obstetricians (SAO) per 100,000 people. Countries with an SAO density of less than 20 have
been shown to have worse health outcomes (Meara, 2015). SAO density varies widely across
countries; for example, in parts of sub-Saharan Africa it may be as low as 0.5 (Petroze, 2013;
Holmer, 2015). This health workforce crisis is recognized by the World Health Organization
(WHO) in 57 LMICs, where the organization calls current surgical capacity “critically
inadequate and grossly inequitably distributed” (Mock, 2015). It is estimated that an additional
1.5 million providers would need to be trained to meet surgical demand worldwide, with the
greatest scale-up required in LMICs (Rose, 2015).
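The SAO density calculation itself is simple arithmetic; a minimal sketch with hypothetical figures (not actual country data):

```python
def sao_density(surgeons, anesthesiologists, obstetricians, population):
    """Surgeons, anesthesiologists and obstetricians (SAO) per 100,000 people."""
    providers = surgeons + anesthesiologists + obstetricians
    return providers / population * 100_000

# Hypothetical example: 1,200 SAO providers serving a population of 40 million.
density = sao_density(800, 250, 150, 40_000_000)
print(f"SAO density: {density:.1f} per 100,000")  # 3.0, well below the threshold of 20
```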
Training more providers and advancing the skills of existing providers is, of course, essential to
strengthening human resources for surgery; however, with increasing need matched against
constrained resources, educational efforts must be as effective and efficient as possible. The
majority of surgical training in LMICs is undertaken by local health systems, but international
collaborations, including both non-governmental organizations and universities, can play an
important role in augmenting traditional training mechanisms. Collaborations are able to
leverage additional resources, including teachers, technology and funding, as well as to diversify
the training experience by introducing additional perspectives and approaches.
Given the complexities of the global setting, it becomes essential that efforts be undertaken in an
evidence-based fashion so that well-meaning intention translates to positive impact. The
existing literature on international educational collaborations is fragmented, and there is,
therefore, a need to analyze efforts across models, rather than individual programs, in order to
understand common strengths and limitations. Existing reports have also had limited analysis of
relevant quality indicators or outcome measures, both for the trainees themselves and for the
broader healthcare systems, and evaluative frameworks are needed to critically assess the impact
of ongoing collaborations.
The most common platform for international collaboration is the short-term service trip,
hereafter referred to as a mission. As used in this context, the term mission does not imply any
religious association but rather is used to describe platforms wherein teams of healthcare
providers including surgeons, nurses and anesthesiologists travel to sites and temporarily
augment the human resource capacity in a coordinated effort to deliver surgery, typically for a
particular condition. The traditional surgical mission model rests partially on the premise that
one-time interventions can effectively treat many surgical conditions and that they produce a
high return for time and resources invested while allowing for feasible commitments from
visiting providers. This model has often been viewed as an example of vertical development,
designed to deliver services parallel to, but not necessarily within, the local healthcare system
(Unger, 2003; Raviglione, 2002).
By contrast, a horizontal model for healthcare delivery tends to focus on long-term investments
in public health infrastructure and human capital, with an emphasis on primary care and broadly
applicable health interventions, and this model has less often been implemented by surgical
organizations (Unger, 2003; Victora, 2004; Atun, 2010). A hybrid construct that has emerged is
that of diagonal development, through which traditionally vertical mechanisms of aid, such as
missions, can enrich surgical capacity through specific efforts towards education and capacity
building (Patel, 2012). In an effort towards this diagonal development, some surgical
organizations have adopted the training, promotion, and support of local surgeons as a primary
objective (Magee, 2010; Pollock, 2011). However, this aim often consists of spontaneous intra-
operative lectures by the visiting surgeon and consequently lacks structure regarding teaching
objectives, impairing the trainee’s ability to systematically master skills and knowledge (Pollock,
2011).
Operation Smile is a 501(c)(3) organization that provides cleft lip and palate surgery in over 60
LMICs. It has utilized the mission model for 35 years, making the platform a cornerstone of its
care delivery efforts. In recognition of the importance of training local providers toward
building sustainable surgical capacity, Operation Smile, in collaboration with the University of
Southern California, has now expanded its services to include a surgical education program to train
local surgeons in cleft specialty procedures within the existing mission infrastructure.
This “Fellowship without Borders” seeks to make education central to the organization’s
mission while simultaneously continuing to provide much-needed surgical care. In response to
the need for increased structure in training efforts, the program centers
around the use of competency-based evaluation principles. The movement towards competency-
based training, rather than time-based training, has dominated medical education in high-income
countries over the last decade, and initial results have been promising, documenting increased
skill with decreased training time. However, it remains unknown how these principles can be
applied to low-resource settings or to the mission environment.
In this manuscript, the existing literature on educational collaborations is reviewed and current
mechanisms for monitoring and evaluation are outlined in order to understand the current
landscape of initiatives and identify gaps in the literature. The concept of diagonal development
and its impact on the provision of surgical education within international collaborations is then
discussed. The transition from time-based to competency-based evaluation within medical
education more broadly is described, and the application of these principles to help maximize the
training opportunities afforded in the mission setting is proposed. The specific development of
Operation Smile’s surgical training program and the creation and contextualization of the
evaluation instrument are then discussed. Finally, the results of the program’s 12-month pilot
implementation, including challenges and lessons learned, are described, and future directions
and planned expansion are explored.
PART 1: REVIEW OF EXISTING LITERATURE
Education is increasingly recognized as a sustainable solution to the human resource
crisis in LMICs, but in this setting of constrained resources and ever-increasing need, efforts
must be implemented effectively and efficiently to maximize benefit. Coordination of existing
initiatives is important to limit duplication, and critical evaluation is essential to understand
limitations and respond to changing needs. Characterization of initiatives has been undertaken
with respect to care delivery by humanitarian organizations (Shrime, 2016) and has also been
attempted with respect to education within subspecialties of surgery, including pediatric surgery
(Butler, 2016). However, the constantly changing landscape of global surgery, coupled with the
increasing demand for evidence-based solutions, calls for more comprehensive and critical
evaluation of educational initiatives within the broader surgical field. Here, the literature on
existing models of surgical education in low resource settings is reviewed in order to delineate
the strengths and limitations of each and to identify gaps in the literature with respect to
evaluation mechanisms.
In a systematic review of the literature utilizing PubMed and Medline databases following
PRISMA guidelines, combinations of the following search terms were used: (global surgery)
AND (education OR training OR consortium OR partnership). The PRISMA diagram is shown
in Figure 1. A “snowball” technique was adopted in which citations within articles were searched
if they appeared relevant to the review, through which 28 additional studies were identified.
Exclusion criteria included non-English language, lack of full text availability, non-LMIC
setting, and educational outcomes not relevant to program country participants (i.e. dedicated to
impact on high-income country (HIC) volunteers/residents).
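The boolean search strategy described above can also be composed programmatically; a minimal sketch that builds the query string only (no database request is executed, and the variable names are illustrative):

```python
# Compose the review's boolean search strategy as a PubMed-style query string.
base = "global surgery"
expansions = ["education", "training", "consortium", "partnership"]

query = f"({base}) AND ({' OR '.join(expansions)})"
print(query)
# (global surgery) AND (education OR training OR consortium OR partnership)
```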
Figure 1: PRISMA diagram for Systematic Literature Review
Programs described in the manuscripts were categorized by the following models: short-term
surgical trips; long-term partnerships and academic collaborations; training in high-income
countries (HICs)/ observerships; “train the trainer” workshops or surgery camps; online learning
platforms; and telemedicine/augmented reality, categorizations adapted and expanded from
initial work by Butler in pediatric surgery. Each model was evaluated for strengths and
limitations, as outlined in Table 1.
Table 1: Strengths and Limitations by Educational Model

Short-term service trip
Strengths: in-person training; high case volume and repetition; context-specific training; shorter time commitment for volunteers
Limitations: short duration; limited capacity building; poor continuity; significant time and travel requirement for HIC volunteers

Long-term partnership
Strengths: repeated exposures/improved continuity; in-person training; often capacity building and research
Limitations: significant time and travel requirement for HIC volunteers

HIC training/observership
Strengths: provides opportunities not otherwise available in country; eliminates travel and time requirement for HIC volunteers
Limitations: significant cost and travel requirement for LMIC trainees; limited participation; low applicability to LMIC context; potential for brain drain

Train the trainer workshop
Strengths: focused skill transfer; in-person training; context-specific training; often capacity building
Limitations: significant time and travel requirement for HIC volunteers

Online learning platforms
Strengths: autonomous and self-paced learning; avoids geographic constraints
Limitations: dependent on internet connection; didactic only

Telemedicine/augmented reality
Strengths: real-time, visual instruction; avoids geographic constraints
Limitations: dependent on internet connection; requires expensive equipment
While the initiatives reviewed may undertake overlapping components of several models,
categorization is still of value when assessing overall strengths and limitations. Much of the
literature to this point has been fragmented by single organization reports or has been classified
by disease condition treated. This attempt at categorization allows for a summary of existing
initiatives in order to better understand the current landscape and provides a basis for critical
evaluation of ongoing efforts.
With the global momentum to invest in educational programs, it has become increasingly
important to provide accountability to stakeholders through effective monitoring and evaluation
mechanisms. Therefore, each individual initiative was also evaluated for educational focus,
country of implementation, and evaluation metric, with the metric further categorized as
descriptive/subjective, objective, or none. Evaluation metrics were also rated as either direct
(focused on the trainees’ development) or indirect (based on increases in surgical systems
functioning). Table 2 summarizes the findings of the systematic review.
Table 2: Systematic Review of Surgical Educational Initiatives by Model

Authors | Year | Country | Educational Intervention | Outcome Measures

Short-Term Service Trips
Merrell et al | 2007 | Vietnam | Introduction of microsurgery techniques during existing service missions | No educational outcomes described
Magee | 2010 | Multiple | Integration of local practitioners into mission setting | No educational outcomes described
Levy et al | 2012 | Ukraine | Graduated surgical responsibility over three one-week trips | Indirect, Objective (Increase in surgery rates)
Duenas et al | 2012 | Peru | Graduated surgical responsibility over three one-week trips | Indirect, Objective (Increase in surgery rates)
McCullough et al | 2017 | Multiple | Formal pairings of trainees and mentors on service delivery trips | Direct, Objective (Competency-based evaluation of technical skills)

Long-Term Partnerships
Swanson et al | 2016 | Poland | 30-year craniofacial surgery partnership with training missions | Indirect, Descriptive (Increase in surgery rates)
Haglund et al | 2011 | Uganda | Two-year neurosurgery partnership with supply donation and training missions | Indirect, Objective (Increase in surgery rates)
Lipnick et al | 2012 | Uganda | 10-year partnership focused on workforce expansion, research and collaboration support | Indirect, Descriptive (Improvement in scholarships, postgraduate job opportunities and international collaboration)
Qureshi et al | 2010 | Malawi | Two-year placement of senior surgical resident with ongoing visits from faculty | No educational outcomes described
Pollock et al | 2010 | Gabon, Ethiopia, Cameroon, Kenya | Five-year accredited surgery residency with permanent HIC program directors | Direct, Objective (Yearly written and oral exams)
Mitchell et al | 2012 | Tanzania | Eight-week didactic curriculum, electronic database and surgical skills workshops | Direct, Objective (Essential surgical skills pre- and post-workshop test)
Lakhoo and Msuya | 2015 | Tanzania | Twelve-year partnership with annual visits for pediatric surgery training | Indirect, Descriptive (Improvement in infrastructure, scholarships and research)
Harris et al | 2016 | Malawi | Three-year partnerships with training courses on burn management | Indirect, Descriptive (Interviews on barriers and facilitators)
Weil et al | 2015 | Haiti | Three-year neurosurgical fellowship supported by rotating, visiting faculty | No educational outcomes described
Khambaty et al | 2010 | Eritrea | Two-year general surgical residency supported by rotating, visiting faculty | Indirect, Objective (Increase in surgery rates, decrease in length of stay, antibiotic use)
O'Hara et al | 2015 | Uganda | Eight-year partnership with biannual surgical skills training visits | Indirect, Descriptive (Improvement in research and workforce expansion)
Binagwaho et al | 2013 | Rwanda | One-year placement of HIC surgery faculty | No educational outcomes described
Kaptigau et al | 2016 | Papua New Guinea | Three-year neurosurgical training program supported by rotating, visiting faculty | Indirect, Objective (Increased surgery rates and decreased complications/mortality)
Coberger et al | 2014 | Tanzania | Neurosurgery training program supported by rotating, visiting faculty | No educational outcomes described
Arabi et al | 2015 | Haiti | One-year placement of HIC pediatric surgery faculty | Indirect, Descriptive (Increase in number of teaching cases)

HIC Observerships
Galardi et al | 2015 | Multiple | One- to three-month program with observation and mentored research | No educational outcomes described
Baird et al | 2016 | Kenya | Eight-week pediatric surgery training exchange | Direct, Descriptive (Participant satisfaction surveys)
Wu et al | 2016 | Multiple | Multiday course with didactic lectures, discussions and skills workshops | Direct, Descriptive (Participant satisfaction survey)

Train the Trainer Workshops
Carey et al | 2015 | Nepal | Yearly course with didactic lectures, discussions and skills workshops over five-year period | Direct, Descriptive (Participant satisfaction survey); Direct, Objective (Case log of completed flaps)
Jalloh et al | 2015 | Senegal | Urological surgical workshops one to three times per year | No educational outcomes described
Carlson et al | 2012 | Haiti | Orthopedic trauma courses over two-year period | No educational outcomes described
Wesson et al | 2016 | Jamaica | One-day pediatric trauma course | Direct, Descriptive (Participant pre- and post-self-evaluation)
Blair et al | 2014 | Uganda | Yearly one- to two-week pediatric surgical camps | Indirect, Descriptive (Increased research and collaboration)
Butler | 2014 | Ghana | Training sessions in colorectal procedures | No educational outcomes described
Vargas et al | 2012 | Mongolia | Two-week laparoscopic cholecystectomy training course | Indirect, Objective (Increase in surgery rates)
Bedaba et al | 2015 | Botswana | Yearly three-day laparoscopic skills workshop | Indirect, Objective (Increase in surgery rates, autonomous procedures)

Online Learning Platforms
Goldstein et al | 2014 | Multiple | Online competency-based general surgical curricula | Direct, Descriptive (Participant satisfaction survey)
Beveridge et al | 2003 | East Africa | Open-access online database of surgical literature | Direct, Descriptive (Participant satisfaction survey)
Butler et al | 2015 | Multiple | Online platform for international surgeon communication and exchange | No educational outcomes described
Corlew and Fan | 2011 | Multiple | Grand Rounds website to collaborate and share difficult cases | No educational outcomes described
Blankstein et al | 2011 | Ghana | Twelve-week structured online neurosurgery training course | Direct, Descriptive (Participant satisfaction survey)
Hadley and Mars | 2011 | Malawi, Uganda and Kenya | DVD recordings of weekly didactic teaching and discussion sessions on pediatric surgery | Direct, Descriptive (Participant satisfaction survey)
Hadley and Mars | 2008 | South Africa | Videoconference lectures and seminars | Direct, Descriptive (Participant satisfaction survey)
Mains | 2011 | Malawi | Online learning curriculum with modules, library access and discussion boards | Direct, Descriptive (Participant satisfaction survey)

Telemedicine
Okrainec et al | 2009 | Botswana | Three-day simulation fundamentals of laparoscopic surgery (FLS) course | Direct, Objective (Written exam, pre- and post technical skill scores)
Mikrogianakis et al | 2011 | Botswana | Three-session simulation of intraosseous insertion technique | Direct, Objective (Written exam, pre- and post technical skill scores)
Datta et al | 2015 | Paraguay and Brazil | Telementoring during hernia surgery | Direct, Objective (Competency-based pre- and post-skills evaluation)
Lee et al | 1998 | Thailand | Telementoring of laparoscopic operations | No educational outcomes described
Rosser et al | 1999 | Ecuador | Telementoring for mobile surgery support | No educational outcomes described
Davis et al | 2016 | Vietnam | VIPAR platform used during neurological surgery | Direct, Descriptive (Participant satisfaction survey)
Heyes et al | 2017 | Peru | PROXITEM platform used during cleft lip surgery | No educational outcomes described
As demonstrated by this review, significant gaps remain in the literature. Many collaborations
cite descriptive, indirect benefits to the health system, yet very few undertake objective
measurement of improvements in trainee knowledge and skill. With the variability of training, it
becomes difficult to evaluate and compare competencies; nonetheless, efforts should be taken
both to measure and to document the direct impact of educational collaborations on trainees as
well as indirect impact on health systems.
PART 2: DIAGONAL DEVELOPMENT IN THE SHORT-TERM MISSION MODEL
The mission model is the most ubiquitous model of international collaboration in surgical care.
As it was initially conceived and implemented, it was a purely vertical approach. Vertical
programs tend to be narrowly focused and disease-specific and generally operate outside of
existing healthcare structures, supplying their own delivery mechanisms (Patel, 2012). This
approach has proved to be extremely cost-efficient for surgical care delivery (Speigel, 2008;
Moon, 2012). However, the vertical approach may also lead to uncoordinated interventions,
compromise countries' autonomy, and neglect patients whose needs exceed the narrow range of
services provided (Unger, 2003; Biesma, 2009). Additionally, vertical approaches fail to develop
the infrastructure and workforce necessary to support broader healthcare demands (Fulton, 2011;
Price, 2010).
In contrast, the horizontal approach to healthcare delivery emphasizes long-term investments in
healthcare infrastructure, for example through WHO efforts to strengthen primary care systems
and World Bank-guided reforms of district-level health administrations (Atun, 2010). Horizontal
development can additionally support disease-specific therapies through the improved
functioning of the broader healthcare system (Patel, 2012). However, projects depend heavily on
local government functionality and cooperation in order to be effective. In surgery as in other
fields, the long timeframe of interventions can also be a major concern. The domestic practice
commitments of many surgeon volunteers can make the long-term requirements of horizontal
initiatives unfeasible.
A hybrid approach is that of “diagonal” development, which refers to programs that are neither
purely vertical nor purely horizontal. These programs combine the immediate advantages of the
vertical approach with the long-term benefits of the horizontal approach, increasing access while
simultaneously enriching capacity (Figure 2). One such example of diagonal development is the
formalization of educational initiatives on current mission platforms. Informal knowledge
sharing has long been a part of missions through the participation of local providers on mission
teams, yielding a positive “spill-over” effect on local health systems. The diagonal approach,
however, would seek to embrace training of local providers as a primary aim, rather than as a
welcome byproduct.
Figure 2: Diagonal Development Model
While integration of education into existing mission platforms holds promise for accomplishing
dual goals of care delivery and capacity building, how best to structure and formalize training
curricula and learning objectives in this setting remains a challenge. As demonstrated by the
review of the literature, few groups have undertaken any formalized approach to training in the
mission setting, and fewer have applied metrics to measuring trainee skill and knowledge
acquisition. However, advances in medical education in high-resource countries may hold the
potential for application not only to low-resource settings but also to the mission environment
specifically.
PART 3: COMPETENCY-BASED SURGICAL EDUCATION
Traditionally, surgical trainees have been evaluated by the global impression of supervising staff,
using number of years of training as a benchmark of competency. However, evaluations in this
time-based approach, with their non-criterion-based ratings, have been found to be largely
subjective and unreliable (Reznick, 1993; Dafnis, 2001). Moreover, while adequate surgical
volume is necessary to achieve competence, performance of a set number of procedures or
completion of a certain number of years of training alone does not guarantee ability (Holmboe,
2010). This realization has prompted interest in alternative educational approaches. Competency-
based training (CBT) is built around the evaluation of specific knowledge, skills and attitudes
agreed upon as necessary for successful practice. Frequent, formative feedback provides
opportunity to quickly identify deficiencies and address them in a specific and timely manner
(Carraccio, 2002). Structured evaluation also emphasizes a learner-centered approach, creating a
sense of ownership and self-efficacy, which in turn contributes to the promotion of life-long
learning. Table 3 compares the traditional time-based approach to the competency-based
approach.
Table 3: Time- vs. Competency-Based Educational Models*

Dimension | Time-based | Competency-based
Driving force of curriculum | Content (knowledge acquisition) | Outcome (knowledge application)
Driving force of process | Teacher | Learner
Setting of evaluation | Removed (gestalt) | Direct observation
Basis of evaluation | Norm-based | Criterion-based
Program completion | Fixed time | Variable time

*Adapted from Carraccio, 2002
Over the last decade, HICs have increasingly moved toward the competency-based model in
medical education. Initiatives through the Accreditation Council for Graduate Medical
Education (ACGME) in the United States, the Royal College of Physicians and Surgeons of
Canada, and a number of European and Australian counterparts have sought to increase
documented competency as a way of improving public accountability (Macy Foundation, 2011;
Institute of Medicine, 2014). Most training programs have maintained a hybrid of time and
competency-based approaches, but several have trialed purely competency-based models with
promising results (Long, 2000; Ferguson, 2013; Nguyen, 2016). The University of Toronto pilot
program in orthopedic surgery found sustained improvement in resident technical ability and
comparable, if not higher, In-Training Examination scores (Ferguson, 2013). Residents in the
experimental cohort also completed the residency program a full year earlier, on average, than
the residents in a traditional model (Ferguson, 2013).
The educational theory of deliberate practice describes the essential components of acquisition of
technical skills as intense, repetitive performance of an intended skill coupled with rigorous
assessment (Ericsson, 2015; McGaghie, 2006). However, such consistent repetition may not
always be feasible in clinical practice, especially for less common conditions such as cleft lip and
palate. The mission environment, in contrast, provides high volumes of identical cases in a
compressed time period. When coupled with frequent, targeted feedback, this environment holds
significant potential to expedite the learning curve (Bosse, 2015). Based on this theory,
Operation Smile saw a unique opportunity to create a “Fellowship without Borders” combining
intensive, focused training on missions with competency-based evaluation to accelerate the
training of in-country surgeons in cleft specialty techniques.
PART 4: PROGRAM DESIGN
CREATION OF THE EVALUATION INSTRUMENT
A core component of competency-based education is the utilization of specific evaluation tools
to measure progress and to ensure that competency has, in fact, been achieved. Valid and reliable
forms of assessment are paramount to CBT programs; however, obtaining comprehensive and
accurate assessment of learners is a challenge. A single method of assessing surgical trainees in
every respect has yet to be established, although multiple assessment tools, each with a focus on
specific and particular skills, have been proposed and utilized in the surgical context. Equally
important to surgical practice as technical skills are non-technical skills such as decision-making,
communication, teamwork, and leadership. Nearly half of all errors in the operating room can be
traced to surgeon behavior and intraoperative decision-making (Gawande, 2003). These skills
become even more relevant and valuable in LMICs, where managing inconsistent technology,
infrastructure, and workforce is essential to providing safe surgery. Central to the creation of
Operation Smile’s “Fellowship without Borders” was the development of a contextually
appropriate evaluation instrument incorporating both technical and non-technical elements. The
instrument defines the scope of the program and the goals of training, and sets standards for
competency to ensure that the highest quality of care continues to be delivered to all of
Operation Smile’s patients.
The first part of the evaluation comprised a technical skills evaluation, with a combination of
general technical and procedure-specific skills. The general technical skills section was adapted
directly from the Objective Structured Assessment of Technical Skills (OSATS), an extensively
validated instrument in multiple specialties of surgery (Faulkner, 1996; Bodle, 2008; Gallagher,
2014; Argun, 2015). This instrument is often considered the gold standard for technical skills
evaluation (van Hove, 2010) and has been translated and validated in multiple languages
including Spanish (Anaya-Prado, 2012), as well as utilized in low-resource settings (Tansley,
2016). The skills encompassed in this section are non-specific to plastic surgery, but rather
cover good practice in surgical technique generally. The procedure-specific skills section was
derived through a Delphi method, querying existing Operation Smile senior surgeons to reach
consensus on what constituted the essential steps of unilateral lip, bilateral lip and palate
procedures. While there are no current evaluation instruments for cleft surgery, similar validated
procedure-specific evaluations in other specialties exist. The Gastrointestinal Endoscopy
Competency Assessment Tool (GiECAT) for colonoscopy (Walsh, 2015) and the Global
Operative Assessment of Laparoscopic Skills (GOALS) for laparoscopic cholecystectomy
(Kramp, 2015) were used as models. A category for pre- and post-operative care skills was also
included in the Operation Smile evaluation and was similarly modeled on the GiECAT and
GOALS evaluations.
All technical skills sections were rated on a five-point Likert scale. While both the
GiECAT and GOALS assessments are scored dichotomously, with 1 (yes) and 0 (no) notation
similar to a checklist, it was felt that rating the degree of facility with each item would allow a
deeper understanding of the trainees’ skills and areas for improvement to
emerge. Therefore, highly specific anchoring descriptors helped standardize rater responses: in
the general technical skills section, exact descriptors from the OSATS were used to define
20
ratings for scores of one, three and five, and for the procedure-specific skills, a scale with
descriptors related to degree of supervision required were detailed for each of the scores.
The second portion of the evaluation covered non-technical skills and was based on the Non-
Technical Skills for Surgeons (NOTSS) framework, which has been validated for use in the
surgical context (Dedy, 2016; Jung, 2018) and in an adapted version for use in low-resource
settings (Lin, 2017). The NOTSS taxonomy is broken down into four main categories:
situational awareness, decision making, leadership, and communication/teamwork with specific
sub-elements within each category (Yule, 2012). Utilizing this template, the NOTSS categories
and sub-elements were contextualized to represent the unique cultural competency required on
international missions as well as the values of Operation Smile as an organization. An iterative
process with Operation Smile senior management, senior surgeon volunteers and local medical
foundations resulted in the new categories of cultural awareness and sensitivity, communication,
team member skills, and professionalism/leadership. Each item was again rated on a five-point
Likert scale, here reflecting performance ranging from “not meeting expectations” to
“exemplary.” At the end of each section, in both technical and non-technical sections, a free text
space was provided for additional comments.
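As a concrete illustration, the instrument's structure can be sketched as a simple data model. The item names and anchor wordings below are hypothetical stand-ins rather than the actual form's text, and the conversion to a section percentage mirrors the scoring approach used in the analysis:

```python
# Hypothetical sketch of the evaluation instrument: each section holds
# Likert items rated 1-5, with anchoring descriptors for selected scores.
# Item names and anchor wordings are illustrative, not the form's text.
EVALUATION_SECTIONS = {
    "general_technical": {
        "items": ["respect for tissue", "instrument handling", "flow of operation"],
        "anchors": {1: "frequent unnecessary force",
                    3: "careful handling, occasional inadvertent damage",
                    5: "consistently appropriate handling"},
    },
    "lip_specific": {
        "items": ["surgical markings", "mucosal/skin/nasal floor approximation"],
        "anchors": {1: "unable to perform, even with supervision",
                    5: "performs independently"},
    },
}

def section_score(ratings, max_rating=5):
    """Convert one section's Likert ratings to a percentage of the
    maximum possible score, allowing comparison across sections with
    different numbers of items."""
    return 100.0 * sum(ratings) / (max_rating * len(ratings))

print(section_score([4, 3, 5, 4]))  # → 80.0
```

The percentage form is what makes sections with different item counts comparable in the later analysis.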
An evaluation packet was created for each mission experience, with a baseline assessment to be
completed on the first day of surgery, and an end-of-week assessment to be completed on the
final day of surgery. Included in this packet were also self-evaluation forms for the trainee to
complete, with ratings for level of confidence in knowledge and surgical technique in each of the
procedures typically encountered on the mission (primary unilateral lip, bilateral lip, and cleft
palate repairs, as well as revision lip and revision palate procedures). Each item was rated on a
one through five Likert scale and a space was provided below for additional comments. In order to
contextualize the trainees’ prior experience, the pre-mission evaluation form provided space to
indicate the number of cases the trainee had previously performed for each of the procedures.
Designation was made as to whether the trainee served as an assistant on the case (i.e. performed
less than 50% of the case) or as primary surgeon (i.e. performed more than 50% of the case).
This pre-mission form also included relevant demographic information on the participant
including level of training, prior fellowships, and number of past mission experiences with
Operation Smile.
At the end of the week, a similar self-evaluation form was completed by the trainee, again
indicating their confidence in their knowledge and surgical technique for each of the procedures.
The number of cases performed during the mission for each of the procedures was recorded,
again with designation of whether the trainee served as the assistant or the primary surgeon.
Compilation of the cases performed during the week was assisted through a case log included in
the packet, completed by the trainee at the end of each day of surgery. Finally, a five-question
survey of satisfaction with the program was included, asking the trainee to rate their degree of
agreement with statements regarding the value of the training on a one through four Likert scale
(ranging from “not at all” to “significantly”). The evaluation packet is included as Appendix 1.
PARTICIPANT SELECTION
Educators, hereafter referred to as mentor surgeons, were first identified by Operation Smile
senior leadership as long-time volunteers with prior teaching experience. These individuals were
contacted and invited to participate in the program, and upon agreement, assisted in each
iterative revision of the evaluation form. Of note, these individuals were selected both from HIC
volunteers as well as from senior surgeons from the countries within which the missions took
place in order to promote local ownership of the program. Program sites were selected based on
a number of factors including organizational relationships with local medical foundations, local
interest in and acceptance of the program, and availability of suitable local trainees. At each site
between one and three local trainees were identified in collaboration with existing in-country
senior surgeons and local medical foundations. Each individual had completed subspecialty
surgical training in their respective country, including plastic surgery, pediatric surgery,
maxillofacial surgery and otolaryngology, but with minimal to no experience in cleft lip and
palate surgery. Trainees were selected for their expressed commitment to deliver cleft surgical
care within their countries and their ability to participate in the entirety of the training program,
including three to four mission experiences over the course of one to two years.
PROGRAM STRUCTURE
Another core component of the program design was direct one-to-one mentorship between
trainees and senior surgeons during the mission experience. Each trainee was paired with one or
two mentor surgeons to work with directly over the entirety of the mission, beginning with
“screening day,” in which prospective patients are evaluated in a clinic setting for their
suitability for surgery, and continuing through all five operative days. The program guide
distributed to participants, which outlines roles and expectations for both trainees and mentor
surgeons, is attached as Appendix 2. The expectation was that trainees would predominantly
assist on the first day as they familiarized themselves with the mentor surgeon’s technique for
different procedures and then assume increased responsibility in cases as the week progressed, as
case severity and complexity allowed. Initial virtual introductions were made between the
trainee and mentor surgeon prior to the mission via email or Skype, and communication was
encouraged to delineate goals and expectations for the interaction. Trainees were additionally
provided with access to online resources through Operation Smile, including a textbook of cleft
lip and palate surgery to assist with preparation.
As previously described, the evaluation packet was completed by the trainee and mentor over the
course of the week and submitted to headquarters for review upon completion of the mission. A
summary form of the trainee’s scores in each of the categories was then created, along with a
compilation of all free text comments, and returned to the trainee. When possible, efforts were
made to maintain trainee-mentor pairings across multiple missions, but as this practice was often
not logistically feasible, the prior summary evaluation form was also sent to the new mentor in
advance of a mission when a new pairing became necessary. The trainees were tracked by
Operation Smile’s education department to monitor progress, determine availability of mission
sites for each training experience and facilitate regional exchanges when necessary, with the goal
of having no longer than four to six months between training experiences.
CREDENTIALING PATHWAY
While the primary goal of the program was increasing surgical skill for the trainees to practice in
their local environment, credentialing with the organization was also offered as an incentive.
Trainees were generally considered eligible for completion of the program, and in turn for
credentialing, when they had achieved an average score of 4 on all competencies as assessed by
two separate mentors. Trainees were then entered into the general pool of volunteer applicants.
They were invited to participate in a mission in which they operated independently and were
assessed by a separate senior surgeon, in accordance with the standard evaluation system for
credentialing any new volunteer with the organization.
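A minimal sketch of this completion rule, under one reading of it (that category averages taken across at least two mentors must all reach 4), might look like the following; the data shape is hypothetical, not the program's actual record format:

```python
from statistics import mean

def eligible_for_credentialing(evaluations):
    """Check the program's completion rule under one reading: averages of
    at least 4 (on the five-point scale) in every competency category,
    assessed by at least two separate mentors. `evaluations` maps a
    mentor identifier to {category: average score}; this data shape is
    hypothetical, not the program's actual record format."""
    if len(evaluations) < 2:
        return False  # the rule requires assessment by two separate mentors
    categories = set().union(*(e.keys() for e in evaluations.values()))
    # each category's score, averaged across mentors, must reach 4
    return all(
        mean(e[cat] for e in evaluations.values() if cat in e) >= 4
        for cat in categories
    )
```

The rule could equally be read as requiring each mentor individually to award averages of 4; the sketch shows only one interpretation.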
PART 5: PILOT EXPERIENCE
RESULTS
The program was piloted at ten sites in Nicaragua, Paraguay, Mexico, Guatemala, Malawi,
Madagascar, Bolivia, Peru, and Ecuador between November 2016 and December 2017. A total
of sixteen trainees participated in the program over the ten pilot sites. Demographic information
for all participants including specialty, number of years since the completion of their training,
previous surgical fellowships, and number of prior missions with Operation Smile is shown in
Table 4.
Table 4: Trainee Demographics

Specialty:
    Plastic Surgery              8
    Pediatric Surgery            1
    OMFS                         3
    ENT                          2
    Pediatric Plastic Surgery    2
Years since completion of surgical residency, mean (range)    6.8 (1-17)
Number of prior missions, mean (range)                        3.8 (1-15)
Years of engagement with Operation Smile, mean (range)        4.1 (1-10)
The majority of trainees participated in more than one mission over the pilot phase: seven
participants completed two missions, two completed three missions, and one completed four
missions. Two trainees have graduated from the program: one after two missions and the other
after three missions of participation.
As described, evaluations were completed by the mentor surgeon after the first day of surgery to
characterize baseline ability and again after the last day of the program to track incremental
improvement. Overall, compliance with completion of these evaluation forms was low: of the
16 candidates, 5 (31%) were missing evaluations from one or more missions in which they
participated. In total, 24 evaluation packets were collected across all the missions. The trainee
pre-evaluation and the technical skills evaluation at the end of the mission were completed with
the greatest consistency (92% and 88%, respectively), while the non-technical evaluation and
baseline technical skills evaluation were completed less frequently (63% and 67%, respectively).
Table 5 shows the completion rates for various components of the evaluation packets.
Table 5: Form Completion Rates

Form                            Number completed (n=24)    Percentage
Pre-mission self-evaluation     22                         92%
Post-mission self-evaluation    17                         71%
Baseline technical              16                         67%
Post-mission technical          21                         88%
Non-technical                   15                         63%
Mentor evaluation               18                         75%
Case log                        18                         75%
From the available forms, raw scores were converted into percentages based on the total possible
score for each section. Percentages were used to account for the variable number of total points
in different sections and to allow for comparison across competencies. Pre- and post-training
scores for each trainee were analyzed using a paired t-test with SPSS statistical software (IBM
Corp., Version 21.0, Armonk, NY). Averages were reported across the first training and second
training. Thirteen trainees had sufficient evaluations to analyze for the first training, and five
had sufficient forms to analyze a second. Although some participants completed up to four
trainings, there was insufficient data to analyze across more than two missions. Statistical
significance was set at p<0.05, indicating rejection of the null hypothesis of no improvement
from pre- to post-training scores.
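The paired analysis can be sketched with the standard library alone. The scores below are illustrative, not the study's data, and significance is judged against the t critical value rather than via SPSS:

```python
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired t statistic for pre- vs post-training percentage scores.
    With n score pairs there are n-1 degrees of freedom; for df=4 the
    two-tailed critical value at p=0.05 is 2.776."""
    diffs = [b - a for a, b in zip(pre, post)]
    return mean(diffs) / (stdev(diffs) / len(diffs) ** 0.5)

# illustrative percentage scores for one competency (not the study's data)
pre = [52.0, 48.0, 60.0, 55.0, 50.0]
post = [66.0, 63.0, 72.0, 70.0, 64.0]
print(round(paired_t(pre, post), 2))  # → 25.56, well above the 2.776 cutoff
```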
Across the first training mission, average general technical competency scores improved from
53.1% to 67.2% (p=0.01), as did procedure-specific competency scores for both cleft lip repairs,
from 50.6% to 64.6% (p<0.01), and cleft palate repairs, from 45.1% to 65.1% (p<0.01).
Preoperative care competencies improved from 58.5% to 73.3% (p=0.01), while postoperative
care competencies improved from 53.8% to 70.7% (p<0.01). The greatest average percentage
improvement was found in palate-specific competencies (30.7%) (p<0.001), followed by
postoperative care (23.7%) (p<0.01) and lip-specific competencies (21.6%) (p<0.01). Average
percent scores with standard deviation, percentage improvement and p-values for preoperative,
postoperative, general technical, procedure-specific, and non-technical competencies across the
first and second training missions are shown in Table 6.
Table 6: Average Pre- and Post-training Scores, Percentage Improvement and p-values for First
and Second Training Missions

First mission (n=13)
                  Pre-evaluation     Post-evaluation    %             p value   95% Confidence
                  average (±SD)      average (±SD)      improvement             interval
Preoperative      58.46 (±16.25)     73.33 (±14.35)     20.3%         0.01      4.33, 25.66
Postoperative     53.84 (±17.03)     70.66 (±13.74)     23.7%         <0.01     6.32, 28.33
General           53.15 (±18.77)     67.23 (±10.48)     20.9%         0.01      3.85, 24.29
Lip-specific      50.61 (±18.08)     64.61 (±15.02)     21.6%         <0.01     5.92, 21.91
Palate-specific   45.07 (±15.81)     65.07 (±12.76)     30.7%         <0.001    10.97, 29.02
Non-technical                        73.25 (±18.43)

Second mission (n=5)
                  Pre-evaluation     Post-evaluation    %             p value   95% Confidence
                  average (±SD)      average (±SD)      improvement             interval
Preoperative      68 (±21.67)        80 (±18.70)        15.0%         0.03      1.61, 22.38
Postoperative     67.66 (±25.42)     76.40 (±26.76)     11.5%         0.01      3.69, 12.30
General           65.80 (±17.07)     72.00 (±19.50)     8.6%          0.02      1.35, 11.04
Lip-specific      61.5 (±19.09)      78.33 (±3.53)      21.4%         0.05      1.8, 21.9
Palate-specific   56 (±24.99)        73.25 (±17.72)     23.4%         0.06      1.5, 36.01
Non-technical                        80.33 (±20.55)
Figure 2 demonstrates the average improvement by competency across all trainees for the first
training mission.
Figure 2: Average Improvement in Competencies from Pre- to Post-training with Standard
Deviation
*indicates statistical significance
Figure 3 A-C graphically represents pre- and post-training improvement for each of the trainees
for general and procedure-specific competencies, with trainees listed by number. The majority
of trainees achieved higher baseline percent scores in unilateral lip procedures but showed a
greater percentage improvement in palate procedures.
[Figure 2 chart: pre- and post-training average percentage scores, with standard deviation, for Preop, Postop, General, Unilateral Lip and Palate competencies.]
Figure 3: Pre- and Post-training Competency Scores by Trainee
A: General Technical Competencies
[Figure 3A chart: pre- and post-training percentage scores for candidates 1-13.]
B: Lip Specific Competencies
[Figure 3B chart: pre- and post-training percentage scores for candidates 1-13.]
C: Palate Specific Competencies
[Figure 3C chart: pre- and post-training percentage scores for candidates 1-13.]
Across the second training mission, pre-training average scores were higher, as would be
expected after having completed a previous training, but percent increases were slightly lower.
This was most marked in postoperative competencies and general technical skills, at 11.5%
(p=0.01) and 8.6% (p=0.02), respectively. Procedure-specific skills continued to show
considerable improvement, with a 21.4% (p=0.05) and 23.4% (p=0.06) improvement for lip and
palate respectively. Although the smaller percentage improvement may suggest a possible
plateau to the learning curve, a low number of trainees in this second training group limits the
ability to draw conclusions. Figure 4 demonstrates the improvement in score across the first
mission compared to the second mission for the five trainees who completed two missions.
Figure 4: Pre- and Post-Training Scores Across First versus Second Mission by Trainee and
Competency
Within procedure-specific criteria, individual technical components were analyzed for average
percentage improvement in unilateral lip and palate procedures, but again there was insufficient
data to analyze bilateral lip procedures. Among the procedural competencies assessed within
unilateral lip, surgical markings showed the greatest improvement (19.0%) (p=0.02) followed by
mucosal/skin/nasal floor approximation (15.0%) (p=0.04). For the palate, markings again
showed the greatest improvement (22.8%) (p<0.01), followed by retractor placement (18.5%)
(p=0.01) and quality of dissection of the hard palate/nasal floor (17.1%) (p=0.02). Percentage
improvement for each component of the procedures is detailed in Figure 5.
[Figure 4 chart: pre- and post-training percentage scores for general, lip-specific and palate-specific competencies across the first and second training missions, by trainee.]
Figure 5: Average Percentage Improvement by Procedure Component
Trainee self-evaluation forms were also analyzed to determine self-perceived improvement in
confidence for different procedures. Pre-training, participants rated their confidence in their
knowledge of primary unilateral cleft lip repair as an average of 3.64 (±1.16), and primary palate
repair as an average of 3.68 (±1.13). Confidence in technical skill was slightly lower, rated as an
average of 3.23 (±0.97) and 3.25 (±1.06), for lip and palate respectively. Post-training scores
increased in self-perceived knowledge for both procedures, as well as in self-perceived skill.
Unilateral lips were rated as an average confidence of 3.76 (±0.82) for knowledge, and 3.46
(±0.66) for skill, while palates were rated as an average of 3.76 (±0.72) for knowledge and 3.46
(±0.77) for skill. While overall average scores increased in both knowledge and technical skill,
as in previous studies (Okosanya, 2012; Acosta, 2017), a decrease in confidence was noted
among some trainees, which may have been due to an increased appreciation of limitations and
areas for improvement after the period of intensive oversight and instruction.
When examined using Pearson’s correlation coefficient, there was no significant correlation
between the number of cases completed over the mission and the change in self-perceived
confidence in lip or palate procedures, for either knowledge (r=-0.062, p=0.856, n=11 for lip;
r=-0.057, p=0.867, n=11 for palate) or technical skill (r=-0.091, p=0.802, n=11 for lip;
r=-0.059, p=0.863, n=11 for palate). There was also no correlation between improved
procedure-specific competency scores and the number of cases performed for either lip or palate
procedures, although inconsistency in case log completion may have contributed to the lack of
apparent association. A positive correlation was seen between improvement in lip-specific and
palate-specific competencies (Table 7).
Table 7: Correlation between Change in Competency and Number of Cases Completed

Pearson correlation (n)      Lip competency   Lip cases     Palate competency   Palate cases
                             change           completed     change              completed
Lip competency change        1 (18)           0.039 (11)    0.774* (13)         0.175 (11)
Lip cases completed          0.039 (11)       1 (14)        0.199 (9)           0.391 (13)
Palate competency change     0.774* (13)      0.199 (9)     1 (15)              0.003 (9)
Palate cases completed       0.175 (11)       0.391 (13)    0.003 (9)           1 (13)
*indicates statistical significance
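For reference, the Pearson coefficient used throughout this analysis can be computed directly with the standard library; the paired values below are invented for illustration, not the study's data:

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation coefficient between paired observations,
    as used here to relate case counts to competency change."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# invented pairing of lip and palate competency change (not the study's data)
lip_change = [10, 14, 20, 8, 25]
palate_change = [12, 18, 24, 10, 30]
print(round(pearson_r(lip_change, palate_change), 3))  # → 0.998
```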
There was also no correlation found between mentor evaluations and increases in competency in
either lip or palate procedures (r=0.032, p=0.917, n=11 for lip and r=0.046, p=0.886, n=11 for
palate); however, mentors were universally rated positively, with an average score of 93.6% on
mentor evaluations.
While the five-question program evaluation regarding the value of the program to participants
was not added until later iterations of the evaluation form and therefore had fewer responses,
among participants from whom it was collected, 100% of respondents strongly agreed with the
statement: “I feel that this training has improved my knowledge and skills in cleft surgery.”
Seventy-five percent of respondents agreed with the statements: “The training was appropriate to
my skill level” and “I will use the skills I gained this week in my own practice,” while 100%
agreed with the statements: “I would like to continue training in the program” and “Once
credentialed, I would like to participate in future Operation Smile missions.”
LIMITATIONS AND CHALLENGES
While the evaluation forms attempted to comprehensively cover a range of important skills, a
relatively myopic focus on easily quantifiable skills limits overall performance assessment.
Moreover, in emphasizing performance as opposed to time, the program may miss teaching and
evaluating many of the skills needed in clinical practice, such as clinical judgement and the
accumulation of experience that comes naturally with time. Additionally, while the evaluation
was created through the compilation and contextualization of previously validated tools, the
finalized instrument itself was not formally validated. Content validity for both technical and
non-technical skills was achieved through Delphi iterations with experts from Operation Smile as
well as with local foundation members; however, the instrument’s external validity and reliability
will need to be further investigated. For example, despite attempts at standardization through
anchoring descriptors, in circumstances where a trainee had more than one mentor within the
same mission, raw scores were seen to vary across evaluators, although percentage
improvements were similar.
Another issue identified over the pilot phase was accommodating a range of case complexities in
the evaluation. The nomenclature around cleft lip and palate currently grades cases as either
unilateral or bilateral and as either primary or revision procedures. The simplicity of this current
grading system belies the significant range of severities, and therefore the surgical complexity,
that can be present within a particular classification. The width of the cleft, degree of
involvement of the nasal floor and rotation of the pre-maxilla all impact the difficulty of repair.
While the evaluation of each procedure was intended to be a combined impression from multiple
cases, the potential for distribution of either more or less severe cases within a mission makes
consistent evaluation difficult. This is not to suggest that only less severe cases should be used
for the training program. The trainee must become comfortable with the different nuances in
technique required to accommodate varying degrees of severity, but some indication of case
complexity may be important to help contextualize scores.
The distribution of cases within the mission to facilitate ideal learning was also a challenge,
especially when weighed against the practical considerations of patient need and integration
within the broader service delivery platform. Ideally, a range of cases including both lip and
palate, unilateral and bilateral, and primary and revision cases would be included within the case
schedule for the training. However, in accordance with Operation Smile policy, priority is given
to primary cleft lips. In certain settings where there remains a significant backlog of untreated
patients, such as in Madagascar, these may represent the majority of the cases performed. Other
practicalities also make ideal case distribution difficult. For example, bilateral cleft lip has a
lower incidence than unilateral, and even in the high-volume mission setting achieving adequate
numbers is difficult, as evidenced by the lack of sufficient data to analyze bilateral procedures
for any of the candidates.
A major limitation to analyzing trainee progression, even when adequate case composition
existed, was poor compliance with evaluation completion. In semi-structured interviews with
program participants, the length of the evaluation was cited as “overly burdensome” given the
busy setting of the mission environment. Initially the form was designed to be completed at the
end of each day, but given this feedback, the requirement was changed to once at the beginning
of the mission and again at the end. Because the evaluation packet underwent multiple revisions
over the course of the pilot phase, not all components were consistently included, further limiting
the amount of data available to be analyzed. The non-technical skills evaluation and case log
were added in later iterations as was the program satisfaction survey, accounting for the apparent
lower percentage of completion. Compliance with evaluation completion did increase over the
course of the pilot phase, especially among participants returning for a third and fourth mission.
Additionally, over the course of the pilot phase, the responsibility for evaluation completion and
submission was transferred from a coordinator to the trainee themselves, increasing the sense of
ownership, and in turn, improving form completion rates.
Originally, the intention was for participants to be scheduled to work with two different mentors
on alternate days; while this was considered a positive experience in the missions where
sufficient personnel allowed for it, the goal was rarely feasible. The same mentor-trainee pairings
were also attempted between different training missions, but due to logistical constraints, were
often not accommodated. Engagement in multiple training sessions over the course of several
missions required time away from existing practice responsibilities for both trainees and
mentors, making consistent pairings difficult. As a result, trainees reported difficulty re-learning
technical particularities of each new surgeon with whom they worked. The time commitment to
attend training missions also proved to be a barrier in participant selection, and two candidates
have already dropped out of the program owing to an inability to attend multiple missions.
Given the requirement for multiple missions to complete training, every effort was made to
minimize time between training sessions. Typically, program countries host two international
missions per year where the training program could be facilitated. Where feasible, candidates
were financially supported in traveling to regional missions to continue their training on an
accelerated schedule to accommodate three to four training sessions within a year. However, in some
instances, regional transfers were not possible, and environmental and political factors also
delayed training. For example, three candidates from Madagascar were delayed several months
in their training due to an outbreak of plague canceling operations in the country.
FUTURE DIRECTIONS
Future steps for the program include expansion to other peri-operative specialties in recognition
of the multidisciplinary nature of providing surgical care, and initial curriculum and evaluation
development is already underway in anesthesia and peri-operative nursing. Expansion to include
additional surgical trainees is also underway, and 59 candidates have been identified by local
medical foundations for consideration for the program. While initially limited to plastic,
maxillofacial, pediatric and otolaryngology surgeons, future program participants will
encompass other specialties including general surgery. Given the paucity of sub-specialty
providers in many countries, the training of general surgeons in specialty procedures becomes
essential to addressing the backlog of cases. Finally, collaboration with national credentialing
bodies to allow credentialing from the program to be recognized locally will be essential. This
process will be necessarily country-specific and will require additional data before formal
agreements can be reached.
Additional data collection and analysis are ongoing, and more longitudinal follow-up is needed to
better define the trajectory of the learning curve between missions and establish the ideal length
of time between trainings. While the case log was not completed with enough consistency to
yield sufficient data for the present analysis, in future analyses it will be used to estimate the
number of cases necessary, on average, to reach an objectively evaluated level of competency in
different procedures. One of the advantages of competency-based evaluation is that it allows for
variable timing of skill progression, and it is expected that trainees will move through the
program at different rates. Nonetheless, correlations with average case numbers may be helpful
in setting expectations for trainees and mentors, aiding in organizational oversight and logistical
planning and adapting the program to fit trainee needs. Finally, as the program progresses,
information on case numbers completed outside of the formal education setting provided during
the mission will be examined to demonstrate the impact of training on patients’ access to care.
CONCLUSION
As surgical disease is brought into the global health spotlight, there is increasing
recognition that education is key to addressing this vast burden of disease. Designing and
evaluating educational initiatives in a complex field such as surgery is highly challenging, and
the complexities of training only become further amplified in the global arena. Multiple
educational initiatives have been previously implemented, each with their own strengths and
limitations, but consensus is growing that these efforts must continue to improve to keep pace
with the increasing number of patients and the challenges of delivering care.
There has been a call in the literature to link mission-based work with surgical training, but this
is the first proposed program that utilizes competency-based evaluation principles to maximize
the unique training opportunities presented by the surgical mission setting. The “Fellowship
without Borders” serves as an example of diagonal development, utilizing existing models of
care delivery to train in-country providers and to improve education and sustainability while
simultaneously continuing to provide much needed services. Moreover, this program shows
potential to expedite the learning curve in cleft surgical training by capitalizing on the high case
volume and procedure repetition inherent to the mission setting. The competency-based
approach maximizes efficiency through objective, structured evaluation and gathers valuable
data, lacking in many other educational initiatives, with which to monitor skill acquisition and
maintenance and to adapt appropriately to changing needs.
The initial one-year pilot phase of the program demonstrates the significant potential of the
model, in addition to identifying challenges and areas for future investigation. The planned expansion and
continued analysis hold promise for continuing to merge service and education to build stronger
health systems in low- and middle-income countries.
REFERENCES
1. Meara JG & Greenberg SL. (2015) The Lancet Commission on Global Surgery: Global surgery
2030: Evidence and solutions for achieving health, welfare and economic development. Surgery,
157(5), 834-835.
2. Mock CN, Donkor P, Gawande A, Jamison DT, Kruk ME, Debas HT. (2015) DCP3 Essential
Surgery Author Group. Essential surgery: key messages from Disease Control Priorities, 3rd
edition. Lancet, 385(9983), 2209-2219.
3. Raykar NP, Yorlets RR, Liu C, Goldman R, Greenberg SLM, Kotagal M, Farmer PE, Meara
JG, Roy N, Gillies RD. (2016) The How Project: understanding contextual challenges to global
surgical care provision in low-resource settings. BMJ Glob Health, 1(4), e000075.
4. Petroze R, Mody G, Ntaganda E. (2013) International academic collaboration in surgical
development: the inaugural meeting of the strengthening Rwanda surgery initiative. World J
Surg, 7(7), 1500-1505.
5. Holmer H, Lantz A, Kunjumen T. (2015) Global distribution of surgeons, anaesthesiologists
and obstetricians. Lancet Glob Health, 3(2), S9-S11.
6. Rose J, Weiser TG, Hider P, Wilson L, Gruen RL, Bickler, S. (2015) Estimated need for
surgery worldwide based on prevalence of diseases: A modeling strategy for the WHO Global
Health Estimate. Lancet Glob Health, 3(2), S13-S20.
7. Unger JP, De Paepe P, Green A. (2003) A code of best practice for disease control
programmes to avoid damaging health care services in developing countries. Int J Health Plann
Manage, 18(1),S27-39.
8. Raviglione MC & Pio A. (2002) Evolution of WHO policies for tuberculosis control, 1948–
2001. Lancet, 359(9308), 775–780.
9. Victora C, Hanson K, Bryce J, Vaughan JP. (2004) Achieving universal coverage with health
interventions. Lancet, 364(9444), 1541–1548.
10. Atun R, De Jongh T, Secci F, Ohiri K, Adeyi O. (2010) A systematic review of the evidence
on integration of targeted health interventions into health systems. Health Policy and
Planning, 25(1), 1-14.
11. Patel P, Hoyler M, Maine R, Hughes C, Hagander L, Meara JG. (2012) An Opportunity for
Diagonal Development in Global Surgery: Cleft Lip and Palate Care in Resource-Limited
Settings. Plast Surg Int, 2012, 892437.
12. Magee WP. (2010) Evolution of a sustainable surgical delivery model. Journal of
Craniofacial Surgery, 21(5), 321–326.
13. Pollock JD, Love TP, Steffes BC, Thompson DC, Mellinger J, Haisch C (2011) Is it possible
to train surgeons for rural Africa? A report of a successful international program. World Journal
of Surgery, 35 (3), 493–499.
14. Shrime MG, Sekidde S, Linden A, Cohen JL, Weinstein MC, Salomon JA. (2016)
Sustainable Development in Surgery: The Health, Poverty, and Equity Impacts of Charitable
Surgery in Uganda. PLoS One, 11(12), e0168867.
15. Butler M. (2016) Developing pediatric surgery in low- and middle-income countries: An
evaluation of contemporary education and care delivery models. Semin Pediatr Surg, 25(1), 43-
50.
16. Merrell JC, Tien NV, Son NT, An LN, Sellers D, Russell R, Manktelow R, Wei FC, Orgill
DP. (2007) Introduction of microsurgery in Vietnam by a charitable organization: a 15-year
experience. Plast Reconstr Surg, 119 (4), 1267–1273.
17. Velebit V, Montessuit M, Bednarkiewicz M, Khatchatourian G, Mueller X, Neidhart P.
(2008) The development of cardiac surgery in an emerging country: a completed project. Tex
Heart Inst J, 35(3), 301-306.
18. Novick WM, Stidham GL, Karl TR, Arnold R, Anić D, Rao SO, Baum VC, Fenton KE, Di
Sessa TG. (2008) Paediatric cardiac assistance in developing and transitional countries:
the impact of a fourteen-year effort. Cardiol Young, 18(3), 316-323.
19. Levy ML, Duenas VJ, Hambrecht AC, Hahn EJ, Aryan HE, Jandial R. (2012) Pediatric
neurosurgery outreach: sustainability appraisal of a targeted teaching model in Kiev, Ukraine. J
Surg Educ, 69(5), 611-616.
20. Duenas VJ, Hahn EJ, Aryan HE, Levy MV, Jandial R. (2012) Targeted neurosurgical
outreach: 5-year follow-up of operative skill transfer and sustainable care in Lima, Peru. Childs
Nerv Syst, 28(8), 1227-1231.
21. Vargas G, Price RR, Sergelen O, Lkhagvabayar B, Batcholuun P, Enkhamagalan T. (2012)
A successful model for laparoscopic training in Mongolia. Int Surg, 97 (4), 363-371.
22. McCullough M, Campbell A, Siu A, Durnwald L, Kumar S, Magee WP 3rd, Swanson J.
(2018) Competency-Based Education in Low-Resource Settings. World J Surg, 42(3), 646-651.
23. Qureshi JS, Samuel J, Lee C, Cairns B, Shores C, Charles AG. (2011) Surgery and global
public health: the UNC-Malawi surgical initiative as a model for sustainable collaboration.
World J Surg, 35(1), 17-21.
24. Khambaty FM, Ayas HM, Mezghebe HM. (2010) Surgery in the Horn of Africa: a 1-year
experience of an American-sponsored surgical residency in Eritrea. Arch Surg, 145 (8), 749–752.
25. Haglund MM, Kiryabwire J, Parker S, Zomorodi A, MacLeod D, Schroeder R, Muhumuza
M, Merson M. (2011) Surgical capacity building in Uganda through twinning, technology, and
training camps. World J Surg, 35(6), 1175–1182.
26. Mitchell KB, Giiti G, Kotecha V, Chandika A, Pryor KO, Härtl R, Gilyoma J. (2013)
Surgical education at Weill Bugando Medical Centre: supplementing surgical training and
investing in local health care providers. Can J Surg, 56(3), 199-203.
27. Lipnick M, Mijumbi C, Dubowitz G, Kaggwa S, Goetz L, Mabweijano J, Jayaraman S,
Kwizera A, Tindimwebwa J, Ozgediz D. (2013) Surgery and anesthesia capacity-building in
resource-poor settings: description of an ongoing academic partnership in Uganda. World J Surg,
37(3), 488-97.
28. Binagwaho A, Kyamanywa P, Farmer PE, Nuthulaganti T, Umubyeyi B, …Goosby E. (2013)
The human resources for health program in Rwanda--new partnership. N Engl J Med, 369(21),
2054-2059.
29. Coburger J, Leng LZ, Rubin DG, Mayaya G, Medel R, Ngayomela I, Ellegala D, Durieux
ME, Nicholas J, Härtl R. (2014) Multi-institutional neurosurgical training initiative at a tertiary
referral center in Mwanza, Tanzania: where we are after 2 years. World Neurosurg, 82(1-2), e1-
8.
30. Aarabi S, Smithers C, Fils MM, Godson JL, Pierre JH, Mukherjee J, Meara J, Farmer P.
(2015) Global Surgery Fellowship: A model for surgical care and education in resource-poor
countries. J Pediatr Surg, 50(10), 1772-1775.
31. Lakhoo K & Msuya D. (2015) Global health: A lasting partnership in paediatric surgery. Afr
J Paediatr Surg, 12(2), 114-118.
32. Cook M, Howard BM, Yu A, Grey D, Hofmann PB, Moren AM…Schecter WP. (2015) A
Consortium Approach to Surgical Education in a Developing Country: Educational Needs
Assessment. JAMA Surg, 150(11), 1074-1078.
33. OʼHara NN, OʼBrien PJ, Blachut PA. (2015) Developing Orthopaedic Trauma Capacity in
Uganda: Considerations From the Uganda Sustainable Trauma Orthopaedic Program. J Orthop
Trauma, 29(10), S20-22.
34. Harris L, Evridiki F & Broadis E. (2016) Paediatric burns in LMICs: An evaluation of the
barriers and facilitators faced by staff involved in burns education training programmes in
Blantyre, Malawi. Burns, 42(5), 1074-1081.
35. Swanson JW, Skirpan J, Stanek B, Kowalczyk M, Bartlett SP. (2016) 30-year International
Pediatric Craniofacial Surgery Partnership: Evolution from the "Third World" Forward. Plast
Reconstr Surg Glob Open, 4(4), e671.
36. Kaptigau WM, Rosenfeld JV, Kevau I, Watters DA. (2016) The Establishment and
Development of Neurosurgery Services in Papua New Guinea. World J Surg, 40 (2), 251–257.
37. Galardi N, Ciminero M, Thaller S, Salgado C. (2015) Visiting Educational Scholarship
Training Program at the University of Miami, Miller School of Medicine: A Global Opportunity
to Learn. J Craniofac Surg, 26(4), 1048-1049.
38. Baird R, Ganey G, Poenaru D, Hansen E, Emil S. (2016) Partnership through fellowship: The
Bethany Kids-McGill University pediatric surgery fellowship exchange. J Pediatr Surg, 51(10),
1704-1710.
39. Wu HH, Patel KR, Caldwell AM, Coughlin RR, Hansen SL, Carey JN. (2016) Surgical
Management and Reconstruction Training (SMART) Course for International Orthopedic
Surgeons. Ann Glob Health, 82(4), 652-658.
40. Carlson LC, Slobogean GP, Pollak AN. (2012) Orthopaedic trauma care in Haiti: a cost-
effectiveness analysis of an innovative surgical residency program. Value Health, 15(6), 887-
893.
41. Selim NM. (2014) Teaching the teacher: An ethical model for international surgical missions.
Bull Am Coll Surg, 99(6), 17-23.
42. Blair GK, Duffy D, Birabwa-Male D, Koyle M, Hudson GR… Lidstone K. (2014) Pediatric
surgical camps as one model of global surgical partnership: a way forward. J Pediatr Surg, 49(5),
786-790.
43. Bedada AG, Hsiao M, Bakanisi B, Motsumi M, Azzie G. (2015) Establishing a contextually
appropriate laparoscopic program in resource-restricted environments: experience in Botswana.
Ann Surg, 261(4), 807-811.
44. Carey JN, Caldwell AM, Coughlin RR, Hansen S (2015) Building Orthopaedic Trauma
Capacity: IGOT International SMART Course. J Orthop Trauma, 29 (10), S17-19.
45. Jalloh M, Wood JP, Fredley M, deVries CR. (2015) IVUmed: a nonprofit model for surgical
training in low-resource countries. Ann Glob Health, 81(2), 260-264.
46. Wesson HK, Plant V, Helou M, Wharton K, Fray D, Haynes J, Bagwell C. (2017) Piloting a
pediatric trauma course in Western Jamaica: Lessons learned and future directions. J Pediatr
Surg, 52(7), 1173-1176.
47. Beveridge M, Howard A, Burton K, Holder W. (2003) The Ptolemy project: a scalable model
for delivering health information in Africa. BMJ, 327(7418), 790-793.
48. Hadley GP & Mars M. (2008) Postgraduate medical education in paediatric surgery:
videoconferencing—a possible solution for Africa? Pediatr Surg Int, 24(2), 223–226.
49. Hadley GP & Mars M (2011) e-Education in paediatric surgery: a role for recorded seminars
in areas of low bandwidth in sub-Saharan Africa. Pediatr Surg Int, 27(4), 407–410.
50. Corlew S & Fan VY. (2011) A model for building capacity in international plastic surgery:
ReSurge International. Ann Plast Surg, 67(6), 568-570.
51. Blankstein U, Dakurah T, Bagan M, Hodaie M. (2011) Structured online neurosurgical
education as a novel method of education delivery in the developing world. World Neurosurg,
76(3-4), 224-230.
52. Mains EA, Blackmur JP, Dewhurst D, Ward RM, Garden OJ, Wigmore SJ. (2011) Study
on the feasibility of provision of distance learning programmes in surgery to Malawi. Surgeon,
9(6), 322-325.
53. Goldstein SD, Papandria D, Linden A, Azzie G, Borgstein E, Calland JF,…Abdullah F.
(2014) A pilot comparison of standardized online surgical curricula for use in low- and middle-
income countries. JAMA Surg, 149(4), 341-346.
54. Butler M, Ozgediz D, Poenaru D, Ameh E, Andrawes S, Azzie G, Borgstein E,…Sekabira J.
(2015) The Global Paediatric Surgery Network: A Model of Subspecialty Collaboration Within
Global Surgery. World J Surg, 39(2), 335-342.
55. Lee BR, Bishoff JT, Janetschek G, Bunyaratevej P…Kavoussi LR. (1998) A novel method of
surgical instruction: international telementoring. World J Urol, 16(6), 367-370.
56. Rosser JC Jr, Bell RL, Harnett B, Rodas E, Murayama M, Merrell R. (1999) Use of mobile
low-bandwidth telemedical techniques for extreme telemedicine applications. J Am Coll Surg,
189(4), 397-404.
57. Okrainec A, Smith L, Azzie G. (2009) Surgical simulation in Africa: the feasibility and
impact of a 3-day fundamentals of laparoscopic surgery course. Surg Endosc, 23(11), 2493-2498.
58. Mikrogianakis A, Kam A, Silver S, Bakanisi B, Henao O, Okrainec A, Azzie G. (2011)
Telesimulation: an innovative and effective tool for teaching novel intraosseous insertion
techniques in developing countries. Acad Emerg Med, 18(4), 420-427.
59. Datta N, MacQueen IT, Schroeder AD, Wilson JJ…Chen DC. (2015) Wearable Technology
for Global Surgical Teleproctoring. J Surg Educ, 72(6), 1290-1295.
60. Davis MC, Can DD, Pindrik J, Rocque BG, Johnston JM. (2016) Virtual Interactive Presence
in Global Surgical Education: International Collaboration Through Augmented Reality. World
Neurosurg, 86, 103-111.
61. Heyes R, Hachach-Haram N, Luck JE, Billingsley ML, Greenfield MJ. (2017) The Role of
Augmented Reality Telesurgery in Promoting Equity in Global Surgery. Diversity and Equity in
Health and Care. Retrieved from http://diversityhealthcare.imedpub.com/the-role-of-augmented-
reality-telesurgery-in-promoting-equity-in-global-surgery.php?aid=18974
62. Spiegel DA, Gosselin RA, Coughlin RR, Kushner AL, Bickler SB. (2008) Topics in
global public health. Clin Orthop Relat Res, 466(10), 2377-2384.
63. Moon W, Perry H, Baek RM. (2012) Is international volunteer surgery for cleft lip and cleft
palate a cost-effective and justifiable intervention? A case study from East Asia. World J Surg,
36(12), 2819-2830.
64. Biesma R, Brugha R, Harmer A, Walsh A, Spicer N, Walt G. (2009) The effects of global
health initiatives on country health systems: a review of the evidence from HIV/AIDS
control. Health Policy and Planning, 24(4), 239–252.
65. Fulton BD, Scheffler RM, Sparkes SP, Auh EY, Vujicic M, Soucat A. (2011) Health
workforce skill mix and task shifting in low income countries: a review of recent evidence.
Human Resources for Health, 9(1), 11.
66. Price J & Binagwaho A. (2010) From medical rationing to rationalizing the use of human
resources for AIDS care and treatment in Africa: a case for task shifting. Developing World
Bioethics, 10(2), 99-103.
67. Reznick RK. (1993) Teaching and testing technical skills. Am J Surg, 165(3), 358-361.
68. Dafnis G, Granath F, Påhlman L, Hannuksela H, Ekbom A, Blomqvist P. (2001) The impact
of endoscopists' experience and learning curves and interendoscopist variation on colonoscopy
completion rates. Endoscopy, 33(6), 511-517.
69. Holmboe ES, Sherbino J, Long DM, Swing SR, Frank JR. (2010) The role of assessment in
competency-based medical education. Med Teach, 32(8), 676-682.
70. Carraccio C, Wolfsthal SD, Englander R, Ferentz K, Martin C. (2002) Shifting paradigms:
from Flexner to competencies. Acad Med, 77(5), 361-367.
71. The Josiah Macy Jr. Foundation (2011) Annual Report: Creating an Accountable Graduate
Medical Education Program. New York: Josiah Macy Jr. Foundation. Retrieved from
http://macyfoundation.org/publications/publication/2011-annual-report
72. Institute of Medicine (2014) Graduate Medical Education That Meets the Nation’s Health
Needs. Washington DC: National Academies Press.
73. Long DM. (2000) Competency-based residency training: The next advance in graduate
medical education. Acad Med, 75(12), 1178-1183.
74. Ferguson PC, Kraemer W, Nousiainen M, Safir O, Reznick R. (2013) Three-year experience
with an innovative, modular competency-based curriculum for orthopaedic training. J Bone Joint
Surg Am, 95(21), e166.
75. Nguyen VT, Losee JE. (2016) Time- versus Competency-Based Residency Training. Plast
Reconstr Surg, 138(2), 527-531.
76. Ericsson KA. (2015) Acquisition and maintenance of medical expertise: a perspective from
the expert-performance approach with deliberate practice. Acad Med, 90(11), 1471-1486.
77. McGaghie WC, Issenberg SB, Petrusa ER, Scalese RJ. (2006) Effect of practice on
standardised learning outcomes in simulation-based medical education. Med Educ, 40(8), 792–
797.
78. Bosse HM, Mohr J, Buss B, Krautter M, et al. (2015) The benefit of repetitive skills training
and frequency of expert feedback in the early acquisition of procedural skills. BMC Med Educ,
15, 22.
79. Gawande A, Zinner M, Studdert D, Brennan TA. (2003). Analysis of errors reported by
surgeons at three teaching hospitals. Surgery, 133(6), 614-621.
80. Faulkner H, Regehr G, Martin J, Reznick R. (1996) Validation of an objective structured
assessment of technical skill for surgical residents. Acad Med, 71(12), 1363-1365.
81. Bodle JF, Kaufmann SJ, Bisson D, Nathanson B, Binney DM. (2008) Value and face validity
of objective structured assessment of technical skills (OSATS) for work-based assessment of
surgical skills in obstetrics and gynaecology. Med Teach, 30(2), 212-216.
82. Gallagher AG, O'Sullivan GC, Leonard G, Bunting BP, McGlade KJ. (2014) Objective
structured assessment of technical skills and checklist scales reliability compared for high stakes
assessments. ANZ J Surg, 84(7-8), 568-573.
83. Argun OB, Chrouser K, Chauhan S, Monga M, Knudsen B, Box GN…Sweet RM. (2015)
Multi-Institutional Validation of an OSATS for the Assessment of Cystoscopic and
Ureteroscopic Skills. J Urol, 194(4), 1098-1105.
84. van Hove PD, Tuijthof GJ, Verdaasdonk EG, Stassen LP, Dankelman J. (2010) Objective
assessment of technical surgical skills. Br J Surg, 97(7), 972-987.
85. Anaya-Prado R, Ortega-León LH, Ramirez-Solis ME, Vázquez-García JA…Ayala-López
EA. (2012) Assessment of surgical competence. A Mexican pilot study. Cir Cir, 80(3), 261-269.
86. Tansley G, Bailey JG, Gu Y, Murray M, Livingston P, Georges N, Hoogerboord M. (2016)
Efficacy of Surgical Simulation Training in a Low-Income Country. World J Surg, 40(11), 2643-
2649.
87. Walsh CM, Ling SC, Khanna N, Grover SC, Yu JJ….Carnahan H. (2015) Gastrointestinal
Endoscopy Competency Assessment Tool: reliability and validity evidence. Gastrointest Endosc,
81(6), 1417-1424.
88. Kramp KH, van Det MJ, Hoff C, Lamme B, Veeger NJ, Pierie JP. (2015) Validity and
reliability of global operative assessment of laparoscopic skills (GOALS) in novice trainees
performing a laparoscopic cholecystectomy. J Surg Educ, 72(2), 351-358.
89. Dedy NJ, Fecso AB, Szasz P, Bonrath EM, Grantcharov TP. (2016) Implementation of an
Effective Strategy for Teaching Nontechnical Skills in the Operating Room: A Single-blinded
Nonrandomized Trial. Ann Surg, 263(5), 937-941.
90. Jung JJ, Borkhoff CM, Jüni P, Grantcharov TP. (2018) Non-Technical Skills for Surgeons
(NOTSS): Critical appraisal of its measurement properties. Am J Surg, S0002-9610(17), 31460-
31465.
91. Lin Y, Scott JW, Yi S, Taylor KK, Ntakiyiruta G, Ntirenganya F…Riviello R. (2017)
Improving Surgical Safety and Nontechnical Skills in Variable-Resource Contexts: A Novel
Educational Curriculum. J Surg Educ, S1931-7204(17), 30483.
92. Yule S & Paterson-Brown S. (2012) Surgeons' non-technical skills. Surg Clin North Am,
92(1), 47-50.
93. Acosta D, Castillo-Angeles M, Garces-Descovich A…Kent TS. (2017) Surgical Practical
Skills Learning Curriculum: Implementation and Interns' Confidence Perceptions. J Surg Educ,
S1931-7204(17), 30180.
94. Okusanya OT, Kornfield ZN, Reinke CE, Morris JB, Kelz RR. (2012) The effect and
durability of a pre-graduation boot camp on the confidence of senior medical students entering
surgical residencies. J Surg Educ, 69(4), 536-543.
Appendix 1: Cleft Surgeon Training Program Mission Workbook
Welcome to the CSTP!
Background
In order to improve the consistency and effectiveness of training candidate surgeons through
Operation Smile missions, this program facilitates competency-based cleft surgical training.
Through mentored surgical experiences on cleft missions and at cleft centers, candidate surgeons
from program countries can complete a “cleft fellowship” for the transfer of knowledge and
skills, culminating in Operation Smile credentialing.
This workbook will guide you through the necessary forms over the course of your mission. As
the trainee candidate, it is your responsibility to assist your mentor in completing all forms and
to submit your booklet to the mission PC at the completion of the mission.
CSTP Checklist of Forms:
Before the Mission:
_____ Trainee Pre-Mission Self-Assessment form – trainee to complete
Day 1 of OR (or first day)
_____ CSTP technical skills evaluation of candidate – mentor to complete
_____ CSTP Procedure specific skills evaluation - mentor to complete
complete relevant procedure after each case
Each Day
_____ Case log - trainee to complete
a single case log has been provided at the back of the book
Day 5 of OR (or final day):
_____ CSTP technical skills evaluation of candidate – mentor to complete
_____ CSTP Procedure specific skills evaluation – mentor to complete
complete relevant procedure after each case
_____ CSTP non-technical skills evaluation – mentor to complete
_____ Trainee evaluation of mentor – trainee to complete
_____ Trainee Post-Mission Self-Assessment form - trainee to complete
Surgeon Trainee PRE-Mission/Fellowship Competency Self-Evaluation
Candidate Surgeon: ___________________________ Date: ______________
Mission or Fellowship Site/Dates: _____________________
Year of Completion of Plastic Surgery Residency: _____________________
Any additional fellowships: _____________________
Previous Operation Smile Mission Experience: _____________________
Year of First Mission Participation: _____________________
Number of Missions of Previous Participation: _____________________
At what hospitals/with which organizations have you gained previous cleft experience:
_____________________________________________________________
Please describe your prior experience with cleft lip and cleft palate and your current comfort level with
the following procedures:
1. Unilateral Cleft Lip:
Number of cases done prior to mission: Majority of case _________
Assisted on case _________
Confidence Level with Unilateral Cleft Lip:
Knowledge: 1 2 3 4 5
Surgical Technique: 1 2 3 4 5
Comments:
_____________________________________________________________________________
_____________________________________________________________________________
_____________________________________________________________________________
2. Bilateral Cleft Lip:
Number of cases done prior to mission: Majority of case _________
Assisted on case _________
Confidence Level with Bilateral Cleft Lip:
Knowledge: 1 2 3 4 5
Surgical Technique: 1 2 3 4 5
Comments:
_____________________________________________________________________________
_____________________________________________________________________________
_____________________________________________________________________________
3. Cleft Palate:
Number of cases done prior to mission: Majority of case _________
Assisted on case _________
Confidence Level with Cleft Palate:
Knowledge: 1 2 3 4 5
Surgical Technique: 1 2 3 4 5
Comments:
_____________________________________________________________________________
_____________________________________________________________________________
_____________________________________________________________________________
4. Secondary Cleft Lip:
Number of cases done prior to mission: Majority of case _________
Assisted on case _________
Confidence Level with Secondary Cleft Lip:
Knowledge: 1 2 3 4 5
Surgical Technique: 1 2 3 4 5
Comments:
_____________________________________________________________________________________
_____________________________________________________________________________
_____________________________________________________________________________
5. Secondary Cleft Palate:
Number of cases done prior to mission: Majority of case _________
Assisted on case _________
Confidence Level with Secondary Cleft Palate:
Knowledge: 1 2 3 4 5
Surgical Technique: 1 2 3 4 5
Comments:
____________________________________________________________________________
DAY 1 - Trainee Technical Skills Competency Evaluation
The purpose of this evaluation is to help our volunteers work to their greatest potential, help Operation Smile better involve volunteers, and provide high-quality, patient-centered
care. Please assess the below named volunteer by considering the following competencies and key behaviors. Please rate each item using the appropriate system and feel free to
include your comments in the appropriate section.
Surgeon/Trainee: _______________________________ Mission Site/Country:___________________________
Surgeon Evaluator: ______________________________ Evaluation Date: _______________________________
II. General Cleft Surgical Competency (Please circle one descriptor for each category)
Flow of procedure: (1) Frequently stopped, unsure of next move; (2) Occasionally stopped; (3) Forward planning with reasonable progression; (4) Smooth progression; (5) Obviously planned, effortless flow
Instrument knowledge: (1) Frequently asked for wrong instruments; (2) Knew some instruments; (3) Knew instruments and used appropriate one most of time; (4) Used appropriate instrument most of time; (5) Familiar with all instrument names and proper use
Instrument handling: (1) Repeatedly tentative or awkward moves; (2) Intermittent awkward moves; (3) Competent, occasionally stiff or awkward; (4) Rarely stiff or awkward; (5) Fluid moves, no awkwardness
Time and motion: (1) Many unnecessary moves; (2) Periods of efficient motion, some unnecessary moves; (3) Overall efficient motion, some unnecessary moves; (4) Minimal unnecessary moves; (5) Clear economy of motion
Soft tissue handling: (1) Rough, frequent tissue damage; (2) Incautious, occasional tissue damage; (3) Careful but occasional tissue damage; (4) Careful, minimal tissue damage; (5) Consistent gentle handling, no damage
Use of assistants: (1) Poorly or failed to use assistants; (2) Occasional use of assistants; (3) Good use of assistants most of the time; (4) Good use of assistants majority of time; (5) Strategically used assistants to best advantage at all times
Knowledge of surgical sequence: (1) Deficient knowledge; (2) Knew some steps; (3) Knew all important steps; (4) Familiarity with most aspects of the case; (5) Familiarity with all aspects of the case
Evaluation Scale: N/A = Unable to evaluate; 1 = Not meeting expectations; 2 = Needs improvement; 3 = Satisfactory; 4 = Excellent; 5 = Exemplary, positive example for others
Preoperative Care
Establishes relationship with family and communicates clearly regarding the operative plan.
Demonstrates sound knowledge of OSI policies and procedures
Postoperative Care
Post-operative incident recognition, decision-making and follow up
Involves and communicates with team in postoperative planning
Communicates clearly with family regarding postoperative care
Procedure-Specific Competency (Please check boxes based on the appropriate surgeon or trainee scale)
Surgeon/Trainee: _______________________________ Mission Site/Country:___________________________
Surgeon Evaluator: ______________________________ Evaluation Date: _______________________________
Evaluation Scale (Cleft Surgeon Trainee): N/A = Unable to evaluate; 1 = Learning under full supervision; 2 = Performing component under supervision; 3 = Demonstrates reliably with autonomy in basic cases, benefits from guidance in more complex cases; 4 = Demonstrates reliably with autonomy in majority of cases; 5 = Mastery for variety of case presentations, able to teach others
Unilateral Cleft Lip
Procedure selection/planning
Marking, Local Anesthesia Injection
Dissection – Lip
Dissection – Nose, Nasal floor
Muscle approximation
Mucosal, skin, nasal floor approximation
Final lip symmetry/on-table result
Final nose/alar base symmetry
Comments:
Bilateral Cleft Lip
Procedure selection/planning
Marking, Local Anesthesia Injection
Dissection – Lip, Prolabium
Dissection – Nose, Nasal floor
Muscle approximation
Mucosal, skin, nasal floor approximation
Final lip symmetry/on-table result
Final nose/alar base symmetry
Comments:
Cleft Palate
Procedure selection/planning
Retractor placement and use
Marking, Local Anesthesia Injection
Dissection – Soft Palate
Dissection – Hard Palate, nasal floor
Nasal mucosa approximation
Muscle release/approximation/intravelar veloplasty
Oral mucosa approximation
Overall on-table result
Comments:
DAY 5 - Trainee Technical Skills Competency Evaluation
The purpose of this evaluation is to help our volunteers work to their greatest potential, help Operation Smile better involve volunteers, and provide high-quality, patient-centered
care. Please assess the below named volunteer by considering the following competencies and key behaviors. Please rate each item using the appropriate system and feel free to
include your comments in the appropriate section.
Surgeon/Trainee: _______________________________ Mission Site/Country:___________________________
Surgeon Evaluator: ______________________________ Evaluation Date: _______________________________
I. Pre/Post-Operative Competency (Please check appropriate column)
II. General Cleft Surgical Competency (Please circle one descriptor for each category)
Flow of procedure: (1) Frequently stopped, unsure of next move; (2) Occasionally stopped; (3) Forward planning with reasonable progression; (4) Smooth progression; (5) Obviously planned, effortless flow
Instrument knowledge: (1) Frequently asked for wrong instruments; (2) Knew some instruments; (3) Knew instruments and used appropriate one most of time; (4) Used appropriate instrument most of time; (5) Familiar with all instrument names and proper use
Instrument handling: (1) Repeatedly tentative or awkward moves; (2) Intermittent awkward moves; (3) Competent, occasionally stiff or awkward; (4) Rarely stiff or awkward; (5) Fluid moves, no awkwardness
Time and motion: (1) Many unnecessary moves; (2) Periods of efficient motion, some unnecessary moves; (3) Overall efficient motion, some unnecessary moves; (4) Minimal unnecessary moves; (5) Clear economy of motion
Soft tissue handling: (1) Rough, frequent tissue damage; (2) Incautious, occasional tissue damage; (3) Careful but occasional tissue damage; (4) Careful, minimal tissue damage; (5) Consistent gentle handling, no damage
Use of assistants: (1) Poorly or failed to use assistants; (2) Occasional use of assistants; (3) Good use of assistants most of the time; (4) Good use of assistants majority of time; (5) Strategically used assistants to best advantage at all times
Knowledge of surgical sequence: (1) Deficient knowledge; (2) Knew some steps; (3) Knew all important steps; (4) Familiarity with most aspects of the case; (5) Familiarity with all aspects of the case
Evaluation Scale: N/A = Unable to evaluate; 1 = Not meeting expectations; 2 = Needs improvement; 3 = Satisfactory; 4 = Excellent; 5 = Exemplary, positive example for others
Preoperative Care
Establishes relationship with family and communicates clearly regarding the operative plan.
Demonstrates sound knowledge of OSI policies and procedures
Postoperative Care
Post-operative incident recognition, decision-making and follow up
Involves and communicates with team in postoperative planning
Communicates clearly with family regarding postoperative care
Procedure-Specific Competency (Please check boxes based on the appropriate surgeon or trainee scale)
Surgeon/Trainee: _______________________________ Mission Site/Country:___________________________
Surgeon Evaluator: ______________________________ Evaluation Date: _______________________________
Evaluation Scale (Cleft Surgeon Trainee): N/A = Unable to evaluate; 1 = Learning under full supervision; 2 = Performing component under supervision; 3 = Demonstrates reliably with autonomy in basic cases, benefits from guidance in more complex cases; 4 = Demonstrates reliably with autonomy in majority of cases; 5 = Mastery for variety of case presentations, able to teach others
Unilateral Cleft Lip
Procedure selection/planning
Marking, Local Anesthesia Injection
Dissection – Lip
Dissection – Nose, Nasal floor
Muscle approximation
Mucosal, skin, nasal floor approximation
Final lip symmetry/on-table result
Final nose/alar base symmetry
Comments:
Bilateral Cleft Lip
Procedure selection/planning
Marking, Local Anesthesia Injection
Dissection – Lip, Prolabium
Dissection – Nose, Nasal floor
Muscle approximation
Mucosal, skin, nasal floor approximation
Final lip symmetry/on-table result
Final nose/alar base symmetry
Comments:
Cleft Palate
Procedure selection/planning
Retractor placement and use
Marking, Local Anesthesia Injection
Dissection – Soft Palate
Dissection – Hard Palate, nasal floor
Nasal mucosa approximation
Muscle release/approximation/intravelar veloplasty
Oral mucosa approximation
Overall on-table result
Comments:
Trainee Non-Technical Skills Evaluation
The purpose of this evaluation is to help our volunteers work to their greatest potential, help Operation Smile better involve volunteers, and provide high-
quality, patient-centered care. Please assess the below named volunteer by considering the following competencies and key behaviors. Please rate each item
using the appropriate system and feel free to include your comments in the appropriate section.
Surgeon/Trainee: _____________________ Mission Site/Country: ___________________________
Mentor:_____________________ Date of Evaluation:_________________________
I. Non-Technical Skill Competency
Evaluation Scale: N/A = Unable to evaluate; 1 = Not meeting expectations; 2 = Needs improvement; 3 = Satisfactory; 4 = Excellent; 5 = Exemplary, positive example for others
Cultural Awareness and Sensitivity
Demonstrates cultural awareness and concern to patients, volunteers, and staff
Understands differences in health systems, and demonstrates flexibility/adaptability to new health care and political environments
Shares Operation Smile’s family-patient-centered culture
Communication
Communicates effectively with team members and staff
Recognizes and articulates problems respectfully
Gives and takes feedback well
Team Member Skills and Professionalism
Supportive to Operation Smile staff and local hospital staff
Handles difficult situations calmly and diplomatically
Easy to get along with and willing to help where needed
Displays integrity, humility, and conducts themselves in a professional manner
Manages time efficiently
Leadership
Demonstrates sound knowledge of OSI policies and procedures
Advocates for the importance of reporting, the mission, and patient-centered care
Demonstrates desire to grow and learn
Leads when appropriate and able to follow direction/instruction
Comments:
CANDIDATE SURGEON EVALUATION OF OPERATION SMILE SURGICAL MENTOR
Surgeon/Trainee: _____________________ Mission Site/Country: ___________________________
Mentor:_____________________ Date of Evaluation:_________________________
0 = Unable to evaluate   1 = Strongly disagree   2 = Disagree   3 = Neutral   4 = Agree   5 = Strongly agree
Score Rationale/Comments
Communicates with Candidate Surgeon prior to the mission
to discuss learning objectives and previous cleft surgery
experience.
Serves as a role model in patient care activities.
Treats colleagues, other health care providers and patients
with respect
Includes residents as valuable members of the team
Explains pre-operative diagnostics and decision making
Is accessible to residents for discussion of patient problems
Makes expectations clear
Gives specific and constructive feedback
Probes Candidate Surgeon with questions to improve critical
thinking skills
Makes rounds a valuable learning experience
Provides adequate opportunities to perform and/or assist in
appropriate procedures
Teaches effectively in the OR
Remains calm and supportive during the operation
Responds to emergency situations efficiently and helps to
guide the team’s actions
Is dedicated to education
COMMENTS:
Surgeon Trainee POST-Mission/Fellowship Competency Self-Evaluation
Candidate Surgeon: ___________________________ Date: ______________
Please describe your experience during the mission and your current comfort level with the following
procedures:
1. Unilateral Cleft Lip:
Number of cases done on the mission: Performed majority of case _________
Assisted on case _________
Confidence Level with Unilateral Cleft Lip:
Knowledge: 1 2 3 4 5
Surgical Technique: 1 2 3 4 5
Comments:
_____________________________________________________________________________
_____________________________________________________________________________
2. Bilateral Cleft Lip:
Number of cases done on the mission: Performed majority of case _________
Assisted on case _________
Confidence Level with Bilateral Cleft Lip:
Knowledge: 1 2 3 4 5
Surgical Technique: 1 2 3 4 5
Comments:
_____________________________________________________________________________
_____________________________________________________________________________
3. Cleft Palate:
Number of cases done on the mission: Performed majority of case _________
Assisted on case _________
Confidence Level with Cleft Palate:
Knowledge: 1 2 3 4 5
Surgical Technique: 1 2 3 4 5
Comments:
_____________________________________________________________________________
____________________________________________________________________________
4. Secondary Cleft Lip:
Number of cases done on the mission: Performed majority of case _________
Assisted on case _________
Confidence Level with Secondary Cleft Lip:
Knowledge: 1 2 3 4 5
Surgical Technique: 1 2 3 4 5
Comments:
_____________________________________________________________________________________
_____________________________________________________________________________
5. Secondary Cleft Palate:
Number of cases done on the mission: Performed majority of case _________
Assisted on case _________
Confidence Level with Secondary Cleft Palate:
Knowledge: 1 2 3 4 5
Surgical Technique: 1 2 3 4 5
Comments:
_____________________________________________________________________________
_____________________________________________________________________________
Having participated in this training, please use the following scale to respond to the statements below:
1 = Not at all   2 = Slightly   3 = Moderately   4 = Significantly
I feel that this training has improved my knowledge and skills in cleft surgery
1 2 3 4
The training was appropriate to my skill level
1 2 3 4
I will use the skills I gained this week in my own practice
1 2 3 4
I would like to continue training with the CTSP (cleft surgeon training program)
1 2 3 4
Once credentialed, I would like to participate in future Operation Smile missions
1 2 3 4
CASE LOG
Surgeon/Trainee: __________________ Mission Site/Country: ___________________________
Mentor: _____________________ Date of Evaluation:_________________________
Please indicate the NUMBER of cases and whether you performed <50% of the case (mostly assisted) or ≥50% of the case (performed most of the case). For Other,
please indicate the procedure (rhinoplasty, alveolar bone graft, fistula repair, etc.)
Monday

Procedure                                Cases <50%     Cases ≥50%
Cleft Lip, Unilateral (primary)          _________      _________
Cleft Lip, Unilateral (secondary)        _________      _________
Cleft Lip, Bilateral (primary)           _________      _________
Cleft Lip, Bilateral (secondary)         _________      _________
Cleft Palate, Unilateral (primary)       _________      _________
Cleft Palate, Unilateral (secondary)     _________      _________
Cleft Palate, Bilateral (primary)        _________      _________
Cleft Palate, Bilateral (secondary)      _________      _________
Other (specify procedure): ___________   _________      _________

Tuesday

Procedure                                Cases <50%     Cases ≥50%
Cleft Lip, Unilateral (primary)          _________      _________
Cleft Lip, Unilateral (secondary)        _________      _________
Cleft Lip, Bilateral (primary)           _________      _________
Cleft Lip, Bilateral (secondary)         _________      _________
Cleft Palate, Unilateral (primary)       _________      _________
Cleft Palate, Unilateral (secondary)     _________      _________
Cleft Palate, Bilateral (primary)        _________      _________
Cleft Palate, Bilateral (secondary)      _________      _________
Other (specify procedure): ___________   _________      _________

Wednesday

Procedure                                Cases <50%     Cases ≥50%
Cleft Lip, Unilateral (primary)          _________      _________
Cleft Lip, Unilateral (secondary)        _________      _________
Cleft Lip, Bilateral (primary)           _________      _________
Cleft Lip, Bilateral (secondary)         _________      _________
Cleft Palate, Unilateral (primary)       _________      _________
Cleft Palate, Unilateral (secondary)     _________      _________
Cleft Palate, Bilateral (primary)        _________      _________
Cleft Palate, Bilateral (secondary)      _________      _________
Other (specify procedure): ___________   _________      _________

Thursday

Procedure                                Cases <50%     Cases ≥50%
Cleft Lip, Unilateral (primary)          _________      _________
Cleft Lip, Unilateral (secondary)        _________      _________
Cleft Lip, Bilateral (primary)           _________      _________
Cleft Lip, Bilateral (secondary)         _________      _________
Cleft Palate, Unilateral (primary)       _________      _________
Cleft Palate, Unilateral (secondary)     _________      _________
Cleft Palate, Bilateral (primary)        _________      _________
Cleft Palate, Bilateral (secondary)      _________      _________
Other (specify procedure): ___________   _________      _________

Friday

Procedure                                Cases <50%     Cases ≥50%
Cleft Lip, Unilateral (primary)          _________      _________
Cleft Lip, Unilateral (secondary)        _________      _________
Cleft Lip, Bilateral (primary)           _________      _________
Cleft Lip, Bilateral (secondary)         _________      _________
Cleft Palate, Unilateral (primary)       _________      _________
Cleft Palate, Unilateral (secondary)     _________      _________
Cleft Palate, Bilateral (primary)        _________      _________
Cleft Palate, Bilateral (secondary)      _________      _________
Other (specify procedure): ___________   _________      _________
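The paper case log maps each procedure subtype to two counts: cases in which the candidate performed less than half of the case (mostly assisted) and cases in which they performed half or more. For programs that later digitize these logs, the same structure can be represented as a small nested mapping. The sketch below is a hypothetical illustration only (it is not part of the Operation Smile workflow), showing how a candidate's weekly totals by participation level could be tallied.

```python
# Hypothetical electronic version of the paper case log.
# Each day maps (procedure, subtype) to counts of cases in which the
# candidate performed <50% (mostly assisted) or >=50% of the case.

from collections import Counter

case_log = {
    "Monday": {
        ("Cleft Lip", "Unilateral (primary)"): {"<50%": 1, ">=50%": 2},
        ("Cleft Palate", "Unilateral (primary)"): {"<50%": 1, ">=50%": 0},
    },
    "Tuesday": {
        ("Cleft Lip", "Bilateral (primary)"): {"<50%": 0, ">=50%": 1},
    },
}

def weekly_totals(log):
    """Sum <50% and >=50% counts across all days and procedures."""
    totals = Counter()
    for day in log.values():
        for counts in day.values():
            totals.update(counts)
    return dict(totals)

print(weekly_totals(case_log))  # prints {'<50%': 2, '>=50%': 3}
```

A per-procedure breakdown (e.g., to check progress toward case-number thresholds for credentialing) could be produced the same way by keying the Counter on the (procedure, subtype) tuples instead.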
Appendix 2: Program Guide
Cleft Surgeon Training Program
Program Guide
Background
In order to improve the consistency and effectiveness of educating candidate surgeons to
become competent cleft surgeons through Operation Smile missions, this program facilitates
competency-based cleft surgery training. Through the course of several mentored surgical
experiences on cleft missions and at cleft centers, candidate surgeons from program countries
can complete a "cleft fellowship" for the transfer of knowledge and skills, culminating in
Operation Smile credentialing.
Roles and Responsibilities
Candidate surgeon (CS) – Eligible surgeon (surgeon observer with patient privileges, or
resident), not yet credentialed
Surgeon educator – Designated Operation Smile-credentialed surgeon who acts as a surgeon
mentor to candidate surgeons in the cleft surgeon training program during the mission, and
facilitates general surgical educational activities during the mission
• The surgeon educator (SE) will be responsible for making contact with the candidate
surgeon prior to the mission to discuss objectives, facilitating mentorship throughout the
mission (screening, operative days), completing the evaluations, and discussing next steps
in progression with the candidate and Operation Smile medical leadership.
• The surgical education coordinator (EC) will match surgeon educators and candidate
surgeons to missions in conjunction with the programs team, introduce candidate
surgeons and surgeon educators to each other, and synthesize evaluations and feedback
after the mission.
• The program coordinator (PC) will provide electronic materials to candidate surgeons
and surgeon educators prior to the mission and selected printed materials during the
mission, and will act as on-the-ground liaison between the candidate surgeon/surgeon
educator and Operation Smile during the mission for any issues that arise.
Timeline and Guidelines
Pre-mission
1-4 months prior to mission:
1. Communication between the EC, PC, and foundation medical director to confirm that the CSTP
will be part of the mission and the number of surgical tables designated for the CSTP.
Of note, other “observer” or “trainee” surgeons can be scheduled to the mission as per
usual. They will typically rotate and assist among the other surgical tables, and typically
not the CSTP table.
2. Surgeon educator(s) and candidate surgeon are identified and verified in their roles in TML.
Surgeon educators will be sent a summary of their assigned candidates' prior
evaluations.
T-minus 4 weeks to mission:
1. Initial introduction should be made between surgeon educator and candidate surgeon
via email. The candidate surgeon will be emailed a copy of the CSTP workbook
(Appendix 1) to complete the pre-mission background sheet.
2. Candidate surgeon and surgeon educator should try to speak by phone or email, and
review together the pre-mission Background sheet. It can be useful at this time for the
surgeon educator to discuss the techniques he/she uses and send any supplementary
articles/guides to the candidate surgeon.
3. Based on the above conversation, the PC should introduce the clinical coordinator (CC) to the
surgeon educator and review roles and responsibilities, and the surgeon educator should
inform the CC and PC of any goal case types for the CSTP table (primary patients, primary
lips, primary palates, etc.)
On the mission
1. Pre-screening meeting. During the pre-screening meeting, the surgeon educator should
introduce the candidate surgeon to all surgeons and briefly describe their role and the
CSTP. Before or after the pre-screening meeting, the surgeon educator should meet
personally with the candidate to discuss objectives during screening and to again review
together the pre-mission candidate Background sheet.
In most cases, the candidate should plan to act autonomously during screening, at least
at first, under the direct supervision of the surgeon educator. This role should not be
assumed, so it should be explicitly discussed. This allows the surgeon educator to observe
the candidate's baseline abilities for the purpose of tailoring mentoring and assessment.
Charts should be completed by the candidate and co-signed by the surgeon educator.
2. Scheduling meeting. The surgeon educator should attend the scheduling meeting to
facilitate scheduling of appropriate cases to the CSTP table. It is generally best for the
CSTP table to be permanent (e.g., "Table 5") throughout the mission. While other
surgeons typically rotate among the other tables, only the surgeon educator(s) rotate on
the CSTP table. If two surgeon educators are sharing responsibility for one CSTP table and
alternating in that role daily, the "other" surgeon each day can fill a role that rotates
among the other tables.
It will help the CSTP keep pace with the other tables if a more experienced
anesthesiologist is assigned for the week to the CSTP table. This can be facilitated by the
anesthesiology team leader during the scheduling meeting.
Generally, the surgeon educator works alongside the plastic surgery team leader (PSTL)
to assign cases to the CSTP table as they are placed on the scheduling board. In pilot studies,
we found that it worked well to assign one fewer case to the CSTP table than to the other
tables, or to assign a short last case if assigning the same number of cases to each.
Practically, we found that the CSTP table often finished earlier each day than the other tables
when one fewer case was assigned; in this situation, the table could take on an extra case
from a slower table as its last case, and if time was tight, the surgeon educator could perform
more of that last case than the candidate surgeon.
Cases assigned to the CSTP table should be thoughtfully balanced during the week and
should reflect the overall mix of case types in the mission (i.e., so as not to displace an
inappropriately high number of primary lips from the other tables). Lip revisions
(particularly those requiring recreation of the defect) can be excellent learning
experiences for candidate surgeons. An appropriate schedule for the CSTP table, for a
candidate surgeon on their first CSTP rotation, may be:
        Monday          Tuesday         Wednesday       Thursday        Friday
Case 1  1° incomp uCL   1° comp uCL     1° comp uCL     1° comp bCL     1° comp uCL
Case 2  1° comp uCL     1° comp bCL     Pharyngoplasty  Fistula repair  Macrostomia
Case 3  2° uCL          1° CP           1° CP           1° incomp bCL   1° comp uCL
Case 4  1° incomp uCL   2° bCL          1° CP           1° CP           2° bCL
Case 5  2° uCL          1° incomp uCL   2° uCL          2° uCL          1° incomp uCL

u = unilateral, b = bilateral; CL = cleft lip, CP = cleft palate;
incomp = incomplete, comp = complete; 1° = primary, 2° = secondary
It is important to verify that the procedure listed on the card going on the scheduling
board matches the planned procedure and patient type in the chart. For example, a
“Priority 2 Cleft Palate Repair” could indicate a straightforward primary palate, or a
complicated secondary surgery for a dehisced palate.
3. Pre-surgery team meeting. Surgeon educator shares case schedule with candidate to
ensure readiness for first day’s cases. Surgeon educator ensures that the candidate
surgeon fully understands all of the discussion led by the PSTL.
Again, in most cases the candidate should plan to act autonomously during the first
surgical day under the direct supervision of the surgeon educator, without prompting
(e.g., the candidate surgeon acts as the "surgeon in charge of the table" under direct
supervision: introducing himself or herself to the parents, examining the patient
preoperatively, and leading the "time out"). However, this role should not be assumed,
so it should be explicitly discussed. This allows the surgeon educator to observe the
candidate's baseline abilities for the purpose of tailoring mentoring and assessment.
Charts should be completed by the candidate and co-signed by the surgeon educator.
At the pre-surgery team meeting, the PC should give a physical copy of the CSTP
workbook to the candidate surgeon. The candidate surgeon will keep the workbook
throughout the week and will be in charge of prompting the surgical educator to
complete evaluations at the appropriate times. That said, the surgeon educator is also
encouraged to actively seek time to complete the evaluations.
4. After the first day of surgery, the surgeon educator should fill out the "Day 1
evaluation" in the workbook. This acts as a baseline assessment. The evaluation should
be discussed together with the candidate surgeon, and any goals or areas for
improvement identified.
If there are two surgeon educators sharing responsibility for a single candidate,
the second surgeon educator should complete the same baseline evaluation at the end
of the second day.
After the last day of surgery, the surgeon educator should fill out the “Day 5 evaluation”
in the workbook. This acts as a final assessment by the surgeon educator. This again
should be discussed together with the candidate and should be compared to the
baseline assessment.
Please note that the workbook contains full evaluation forms for each day of surgery.
Given the volume of cases and the amount of learning that takes place over the course of
the week, the surgeon educator and candidate may find it helpful to complete an
evaluation at the end of each day of surgery to facilitate constructive and formative
feedback. However, we understand that the days during the surgery week are very long
and very busy; the minimum requirement is therefore that the first and last days of
surgery have completed evaluation forms.
On the last day of surgery, the surgeon educator will also complete an additional “non-
technical skills” evaluation form, also included in the workbook. At this time the
candidate will complete a post-mission self-assessment as well as an evaluation of the
surgeon educator.
5. All completed forms should be returned to the PC. The final party is a good opportunity
to recognize the accomplishments of the candidate surgeon, and to bring attention to
the cleft surgeon training program.
6. At the end of the mission, surgeon educators should make a consensus decision, in
conjunction with the PSTL and/or medical director as appropriate, about the next step
for the candidate. Options include (1) continue with CSTP (many candidates would be
expected to need 3-4 missions until ready for credentialing), (2) credentialing, or, in
unusual situations, (3) cease further progression.
Post-mission
1. Generally, the PC should submit the evaluation forms electronically, sending them as a
PDF file to the surgical EC early in the week after the mission. Hard copies can be used
as a backup and hand-carried to OSI HQ.
2. One to two weeks after the mission, the EC should schedule a brief phone call with the
surgeon educator(s) to discuss the recommended progression for the candidate, feedback
from the mission, feedback for the CSTP program, etc. These comments will be
incorporated into the summary evaluation that is given to the candidate's next educator.
Surgeon Educator/Candidate Expectations:
o Mentor and candidate are responsible for attending all components of the mission
including didactic educational days, screening day, and all operative days.
o Mentor and candidate will be paired together during screening day to allow for
education and evaluation around appropriate patient selection, clinical decision-making
regarding operative plan and communication skills with patient families.
o There may be more than one mentor assigned to a candidate, but all cases should be
primary educational cases for the candidate.
o Mentor and candidate will fill out all required evaluation forms, including initial and
post-mission assessments by the mentor, and pre- and post-mission self-assessments by
the candidate.
o Assessment forms should be reviewed in person with the candidate immediately after
(or during) completion in order to increase the formative role of the evaluation.
o All forms should be turned in to the PC when completed, or by the final day.
Every educational pairing will be slightly individualized, but generally the candidate
surgeon is expected to take on the role of the main physician on the team. He/she should lead
preoperative discussions with families, interactions with anesthesia and nursing staff, and
postoperative rounds. The mentor is expected to provide guidance when necessary, but these
non-technical skills are also considered competencies for credentialing and therefore require
the candidate's active participation.