A PREDICTIVE VALIDITY STUDY:  CORRELATION OF ADMISSION
VARIABLES WITH PROGRAM COMPLETION AND STUDENT
PERFORMANCE ON THE NATIONAL CERTIFICATION EXAMINATION IN A
PHYSICIAN ASSISTANT PROGRAM

by

Delores E. Middleton

__________________________________________________________

A Dissertation Presented to the
FACULTY OF THE ROSSIER SCHOOL OF EDUCATION
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the
Requirements for the Degree
DOCTOR OF EDUCATION


December 2008



Copyright 2008                                                                           Delores E. Middleton
DEDICATION
This manuscript is dedicated to my mother Juanita LaVerne Jackson-Miles who
made her transition when I began my doctoral studies.  She was my life-long mentor
and my greatest inspiration. Her strong family values taught me what it means to
take charge of my destiny and to pursue my goals.  I will forever embrace her
courage, her wisdom, and her strength.

ACKNOWLEDGEMENTS
I would like to acknowledge my gratitude and appreciation to Dr. Ilda Jimenez West
for the academic and moral support she provided while guiding me through this
process.  Her faith in me gave me the strength and courage to complete this task.

I would like to express my thanks to Dr. Darnell Cole for his educational guidance
and moral support in helping me achieve this goal.  

Thanks to Dr. Dennis Hocevar for the many hours he spent teaching and guiding me
through the data analysis process.  His one-on-one instruction provided me with
lifelong skills and understanding of data analysis.  I am truly grateful.

Overall, I would like to thank my committee for their commitment and the
enthusiasm they demonstrated while directing me through this process.

Special thanks to Lisa Landry-Taylor and Janice Tramel for their friendship and their
commitment to see me through the completion of this project. Their participation as
raters in this process enhanced the outcomes of the study.  They were instrumental in
designing the  analytic and holistic rubrics used in this study.  

TABLE OF CONTENTS
DEDICATION …………………………………………………………….. ii
ACKNOWLEDGMENTS ………………………………………………… iii
LIST OF TABLES ………………………………………………………...  v
ABSTRACT ………………………………………………………………. viii
Chapter I: INTRODUCTION ……………………………………………... 1
Chapter II: LITERATURE REVIEW …………………………………….. 17
Chapter III: METHODOLOGY …………………………………………... 59
Chapter IV: RESULTS ……………………………………………………. 74
Chapter V: DISCUSSION ………………………………………………… 107
REFERENCES ……………………………………………………………. 117
APPENDICES …………………………………………………………….. 123
LIST OF TABLES
Table 1. National Certification Examination Performance Report from
2001 to 2007 ……………………………………………………………………….. 7

Table 2. Comparison of Entry-Level Skills with Exit-Level Skills ……………….. 46
Table 3. Summary of Prerequisite Requirements …………………………………. 63
Table 4. Categories of Clinical Hours …………………………………………….. 65
Table 5. Rating Scale for Personal Statement …………………………………….. 67
Table 6. Ordinal Scores for Reference Form ……………………………………... 69
Table 7. Correlation between Predictor Measures and Criterion Measures  
in the Study ……………………………………………………………………….. 71

Table 8. List of Variable Abbreviations and Definitions ………………………… 76
Table 9. Descriptive Statistics of Science Grade Point Average and  
Cumulative Grade Point Average ………………………………………………… 77

Table 10. Descriptive Statistics for Types of Clinical Experience ……………….. 78
Table 11. Mean and Standard Deviation of Clinical Hours ………………………. 78
Table 12. Descriptive Statistics for Clinical Hours ………………………………. 79
Table 13. Descriptive Statistics of the Entry-Level Skills ……………………….. 80
Table 14. Rater Reliability of Letter of Reference “a” …………………………... 81
Table 15. Rater Reliability of Letter of Reference “b” …………………………... 81
Table 16. Descriptive Statistics for Reference Letters …………………………… 81
Table 17. Frequency of Reference Letter Scores ………………………………… 82
Table 18. Reliability Statistics for Rating of Professional Goal …………………. 83
Table 19. Reliability Statistics for Rating of Commitment to the Profession ……. 83
Table 20. Reliability Statistics for Rating of Written Communication Skills …… 83
Table 21. Reliability Statistics of the Personal Statement ……………………….. 84
Table 22. Item-Total Statistics of Rater Reliability of the Six Ratings ………….. 84
Table 23. Descriptive Statistics of Written Documents ………………………….. 84
Table 24. Descriptive Statistics for Program Completion ……………………….. 85
Table 25. Descriptive Statistics for National Certification Examination
Performance ……………………………………………………………………… 86

Table 26. Descriptive Statistics of Student Performance on Exit-Skill on the  
National Certification Examination ……………………………………………… 87

Table 27. t-Test Group Statistics for Predictor Measures and Program  
Completion ………………………………………………………………………. 90

Table 28. Independent Sample t-Test Analyses for Predictor Measures and  
Program Completion …………………………………………………………….. 91

Table 29. Chi-Square Test for Type of Clinical Experience and Program  
Completion ………………………………………………………………………. 92

Table 30. Chi-Square Test for Clinical Hours and Program Completion ……….. 93

Table 31. t-Test Group Statistics for GPAsci, GPAcum and NCEP ……………. 95

Table 32. Independent Samples t-Test Analyses for GPAsci, GPAcum, and  
Performance on the National Certification Examination ……………………….. 96

Table 33. Chi-Square Test for Type of Clinical Experience (EXPTYP) with  
National Certification Exam Performance (NCEP) …………………………….. 97

Table 34. Chi-Square Test for Clinical Hours and Performance on the National
Certification Examination ………………………………………………………. 97

Table 35. Correlations between GPAsci and PANCE 7 ………………………... 99

Table 36. Correlation between Skill 1 and PANCE 1 ………………………….. 100

Table 37. Correlation between Skill 2 and PANCE 2 ………………………….. 101
Table 38. Correlation between Skill 3 and PANCE 3 …………………………. 101

Table 39. Correlation between Skill 4 and PANCE 4 …………………………. 102

Table 40. Correlation between Skill 5 and PANCE 5 …………………………. 102

Table 41. Correlation between Skill 6 and PANCE 6 …………………………. 103

Table 42. Correlation between Skill 7 and PANCE 7 …………………………. 103
ABSTRACT
The purpose of this investigation was to examine the reliability and predictive
validity of the admission data in predicting student success in completing a
community college-based physician assistant program and their performance on the
National Certification Examination (NCE). The files of 170 graduates were reviewed
and the following data were compiled: 1) science grade point average (GPAsci), 2)
cumulative grade point average (GPAcum), 3) reference letter ratings, 4) personal
statement ratings, and 5) work experience – each identified as a predictor measure in
this study. The criterion measures identified in the study were 1) program
completion, 2) performance on the NCE, and 3) skills. Findings demonstrate
variations in the degree of relationship among predictor measures and criterion
measures.   The GPAsci demonstrated the greatest degree of correlation with student
outcome in comparison with other predictor measures, which is consistent with
previous research.  Overall, the research demonstrated that there was practical
significance or potentially significant correlations between the majority of the
predictor measures and the criterion measures.
CHAPTER I
INTRODUCTION
The physician assistant (PA) profession was first recognized in the mid-
1960s.  Over the past forty years the evolution of the PA profession has resulted in
an increase in the utilization of physician assistants in health care.  This increase in
demand for more physician assistants in health care has propelled the rapid increase
of physician assistant programs throughout the nation.  Currently, there are 139
physician assistant training programs in the United States (http://www.arc-pa.org).
The demand for admission to these programs is extremely high, and seating is
limited. It is therefore crucial that the selection process utilized by PA programs
be efficient in identifying applicants who have the academic and personal qualities
that are requisite to success in these programs.  This study evaluated the admission
selection process of a community college-based physician assistant program.  The
admission variables used in the selection process were traditional variables
commonly used in other allied health professional programs, i.e., science grade point
average, cumulative grade point average, personal statement, letters of reference, and
work experience.  Data gathered from these measurements served as indicators to
predict the success or failure of students in completing the program and their
performance on the National Certification Examination (NCE).
The program's national board pass rate had dropped by 5%, 3%, 9%, and
3% in 2003, 2004, 2005, and 2006, respectively. A simultaneous
increase in attrition was also noted.  From 2001 to 2006, the attrition rate gradually
increased to 2%, 4%, 9%, 20%, and finally to 20%.  The faculty alleged that the
decline in student performance resulted from the inability of the admission selection
process to adequately predict the success and failure of students in the program.
Background of the Problem
The American Academy of Physician Assistants (AAPA), the National
Commission on Certification of Physician Assistants (NCCPA), the Physician
Assistant Education Association (PAEA) and the Accreditation Review Commission
on Education for the Physician Assistant (ARC-PA), Inc. worked together
collaboratively to define and outline professional competencies and skills that
physician assistant graduates must demonstrate to obtain entry into the profession.
The ARC-PA (which is responsible for granting accreditation status to physician
assistant programs in the United States) established accrediting guidelines that
echo these values, and the NCCPA (author of the certification examination)
designed the certification examination to assess graduates’ competency in these
skills.
The certification examination is based on a practice analysis report that
reflects the knowledge and skills graduates need as entry level practitioners.  The
NCE assesses graduates in seven basic skill areas identified as history taking and
physical examination, interpretation of laboratory and diagnostic tests, formulating
diagnoses, health maintenance, clinical intervention, pharmaceutical therapeutics, and
the application of basic science concepts (www.nccpa.net). A practice analysis is
performed every four years to ensure that the skills and competencies tested on the
NCE are consistent with the knowledge and skill competencies appropriate for
clinical practice.
In addition, public demand for quality health care and increased
accountability of health care providers also drives the profession toward changes in
professional competencies.  Increased public demand for qualified health care
providers requires that PA programs select individuals with the cognitive skills and
personal qualities necessary to complete the program and to obtain knowledge and
skills that demonstrate competence for clinical practice.  Selection criteria among the
139 accredited physician assistant programs vary from program to program and are
defensible by different entry-level requirements or degree options offered upon
completion. Despite the differences in admission selection criteria, health
professional programs use similar admission variables to identify applicants
with the cognitive skills and the ability to succeed.
Statement of the Problem
The selection criteria for admission to physician assistant
programs must be reliable and valid in discriminating among individuals who have
the knowledge, abilities, and experience to succeed.  The admission selection process at
the community college based physician assistant program is governed by the State
Matriculation Regulations.  In the case of the physician assistant program, the
admission selection process is also governed by the Physician Assistant Committee
of the Medical Board of the state in which the college is located.  The open-door
philosophy of the community college guarantees equity in access and therefore
prohibits limitation on enrollment to programs or courses unless such limitations are
justified through research, mandated by the state licensing agency or stipulated by
the accrediting agency of the profession.
State regulations require that the community college provide predictive
validity evidence of test-score-based inferences and course prerequisites if test scores
or course prerequisites are used to limit enrollment into a course or program
(Armstrong, 2000).  Standardized test scores (e.g., GRE, ACT, MCAT) were not
adopted as variables in the admission selection process because data did not exist to
perform the required study to support their significance in the admission process.  For
the purpose of student placement, the state allows companion measures (e.g., grade
point average (GPA), personal statement, letter of reference, and work experience) as a
supplement to placement test scores.  Companion measures were adopted as
predictor variables in the admission selection process and were the primary data used
as indicators to predict student success or failure in completing the program and their
performance on the National Certification Examination (NCE).
Science grade point average (GPAsci) and cumulative grade point average
(GPAcum) were determined from a list of prerequisite courses that must be
completed prior to application submission.  The grade point averages (GPAs) were
established based on admission criteria set by other relevant allied health programs
in the District.  The prerequisite courses used for the GPAs were basic education
core courses defined by the state’s Physician Assistant Committee (PAC).  Section
1399.531 of the PAC Laws and Regulations identifies prerequisite basic education
core courses that candidates must complete prior to admission to an approved
program or must complete during their tenure as a student in a physician assistant
training program. The PAC identifies the following college level basic core courses:
anatomy and physiology, chemistry, microbiology, English, sociology or cultural
anthropology, psychology and college level algebra.  These prerequisite courses
were selected in compliance with State Regulations, which allow the use of
prerequisites as a limitation on enrollment if the prerequisites are required by the state
licensing agency.
The problem with this list of courses is that over the past forty years the
profession has evolved from an informal hospital-based curriculum to a two-year
college curriculum.  Despite this evolution in the profession, the basic education core
courses published in the PAC Laws and Regulations in 1965 have remained
unchanged.  The rationale for keeping these courses the same is unknown.  
Regardless of the reason, the role and responsibilities of physician assistants in
clinical practice have evolved considerably since 1965, moving PA education toward
more complex training in order to prepare graduates for clinical practice.
The demand for more highly skilled clinicians warrants more stringent basic
education core courses as entry requirements to the program or additional training
while in the program.  The two-year curriculum does not allow for additional courses
to be taught during training; therefore, individuals who enroll in the program must
have the requisite academic credentials to achieve.  Many PA programs offer higher
degree options and therefore require the completion of more upper-division science
coursework as prerequisites for admission.
The community college-based physician assistant program requires no upper-
division science courses as prerequisites or a prior undergraduate degree for
admission.  This disparity in academic prerequisite requirements among programs
does not preclude programs with fewer requirements from providing the quality of
educational training necessary to prepare graduates with clinical competencies for
entry into the profession.
Since 1999, the community college-based program has graduated 170
students.  The National Certifying Examination pass rates from 2001 to 2007 are
outlined in the table below along with the national pass rates for comparison. The
first-time takers’ pass rate for the cohort of students graduating in 2001 was 100%.
The program’s performance for this cohort of students was 8% above the national
first-time takers’ average.  From 2002 to 2004, the program’s performance for first-
time takers remained above the national first-time takers’ average by 2%, 6%, and
5%, respectively.  The program’s graduate pass rate for first-time takers dropped
below the national first-time takers’ average pass rate for the first time in 2005 and
has remained below the national average for three consecutive years: by 8%, 10%,
and 1% for 2005, 2006, and 2007 (NCCPA PANCE Report 2005, 2006, and 2007).
Table 1
National Certification Examination Performance Report from 2001 to 2007

Class of   Program first-time     NCCPA national first-time
           takers pass rate       takers average pass rate
2001       100%                   92%
2002        92%                   90%
2003        95%                   89%
2004        95%                   90%
2005        83%                   91%
2006        75%                   90%
2007        90%                   91%
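The program-versus-national comparison drawn from Table 1 can be recomputed directly; a minimal Python sketch (the figures are taken from the table above, and the signs show the gap in percentage points):

```python
# Pass rates from Table 1 (percent), classes of 2001-2007.
years = [2001, 2002, 2003, 2004, 2005, 2006, 2007]
program = [100, 92, 95, 95, 83, 75, 90]
national = [92, 90, 89, 90, 91, 90, 91]

# Gap in percentage points: positive means the program exceeded
# the national first-time takers' average pass rate.
gaps = [p - n for p, n in zip(program, national)]
for year, gap in zip(years, gaps):
    print(f"{year}: {gap:+d} points")
```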


The attrition rates for the 1999 through 2005 cohorts of students illustrated a
similar drop in student performance.  Since the 2003 cohort, there had been a
persistently high attrition rate of 20% for 2003, 2004, and 2005. Individuals selected
for enrollment during this time frame had a high drop out rate and a low pass rate on
the NCE. The overall performance of these cohorts of students was below the
previously demonstrated performance of the program graduates.  Disparities in
student performance among cohorts of students are believed to reflect the overall
quality of students enrolled from 2003 to 2005.
The faculty believed that the academic abilities of students in these cohorts
were not comparable to those of their predecessors.  The rationale for
their belief was clear: the number of students needing tutorial support or
remediation increased by 50%; the number of students who failed out of the program
after the first semester increased by 15%; and the number of exam failures for the
cohort of students increased.  As a result of these findings, the faculty believed that
the data gathered in the admission selection process did not discriminate among the
candidates well enough to identify which candidates had the potential to
complete the program and pass the NCE.
Data gathered as part of the admission selection process should serve as
indicators to predict the success or failure of students in the program and their
performance on the NCE.  The admission criteria of the program utilized traditional
admission variables (i.e. GPAs, work experience, letters of reference, and personal
statement) as predictor measures to subsequently predict student success. These
predictor measures were the primary sources of data used to predict student success.
Entry-level skills (reported on the application) used as predictor variables were
analogous to the exit-skills assessed on the NCE.  Review of the literature for this
study identified few studies that had used preadmission skills as an indicator of the
applicant’s ability to perform in a field of study. Tang and Lee (1989) administered a
preadmission mobility test to predict the candidates’ ability to function as a
physiotherapist. The study showed that the mobility test did not have significant
predictive power for the overall grade point average.  Gansky et al. (2003)
investigated the use of a preadmission manual dexterity test as an indicator of the
dental hygienist applicants’ ability to perform the skills required to function as a
dental hygienist and found inconclusive evidence that the test predicted student
success in completing the program.  Similarly, this investigation demonstrated that
entry-level skills had no significant predictive power to predict student performance
on the exit-level skills on the NCE.
Theoretical Framework
The theoretical framework for this study was designed as a predictive validity
model.  A review of prior predictive validity studies positions and contextualizes this
study within research that has investigated relationships of admission variables
with student outcomes.  Correlations between predictor measures and criterion
measures in the study were expected to show statistically significant correlation
coefficients in order to validate the use of measures used in the admission process.  
Findings in this study demonstrated both statistically significant correlations and
practical indications that the predictor variables used as indicators of student
success were valid.  The GPAsci was the most significant predictor variable in the
admission selection process; GPAsci demonstrated a near-significant correlation (p =
0.052) with program completion, a statistically significant correlation (p = 0.009)
with NCE performance, and a statistically significant correlation (p = 0.000) with
graduates’ performance in the basic science concepts (PANCE7) on the NCE.
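The correlational framework described above pairs a continuous predictor (such as GPAsci) with a dichotomous outcome (such as program completion). As a sketch on synthetic data (none of the study's actual records appear here; the sample size of 170 is borrowed from the text, and the relationship between the variables is fabricated for illustration), `scipy.stats.pointbiserialr` computes the relevant correlation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic illustration: 170 hypothetical applicants (matching the
# study's sample size), not the study's actual data.
n = 170
gpa_sci = rng.uniform(2.0, 4.0, size=n)
# Completion is made loosely dependent on GPAsci for the example.
completed = (gpa_sci + rng.normal(0.0, 0.5, size=n) > 2.8).astype(int)

# Point-biserial correlation: Pearson r with a dichotomous variable.
r, p = stats.pointbiserialr(completed, gpa_sci)
print(f"r = {r:.3f}, p = {p:.4f}")
```

Note that a correlation coefficient r ranges over [-1, 1] and is reported alongside a p-value; the two play different roles in a predictive validity argument (effect size versus statistical significance).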
Purpose of the Study
The primary purpose of the study was threefold.  First, to evaluate the
reliability and predictive validity of the admission variables in being able to project
student success in a community college-based physician assistant program.  State
regulations define predictive validity principles that must be adhered to when
establishing admission criteria for a community college-based program.  The current
admission criteria had not been investigated for compliance with these principles
because the program did not have sufficient data for such research.  A sample size of
170 was acquired, and the investigation was performed with the goal of fulfilling the
state’s requirement.
The second purpose of the study was to determine whether data gathered during the
admission selection process served as indicators to predict the success or failure of
students in passing the National Certification Examination (NCE).
The third and final purpose of the study was to identify statistically significant
variables that are reliable and valid indicators of the performance of students in the
program and student performance on the NCE. The open-access
philosophy of the community college requires that the admission criteria have few
requirements while attempting to reduce high attrition rates and achieve high pass
rates on the NCE.  Prerequisite requirements that restrict admission to programs in
community colleges must show a statistically significant correlation with student
success, and the prerequisite skills and knowledge must serve as a foundation for
courses offered in the program.
Research Questions
This study concentrated on three broad research questions that focused on the
effectiveness of the admission variables at predicting student outcomes.  
Supplemental questions specific to aspects of the broader questions were created to
focus on each predictor measure separately. The research questions were as follows:
1. Do data gathered as part of the admission selection process into the PA
program serve as indicators to predict the success or failure of students in
completing the PA program?
a. To what extent does the cumulative grade point average (GPAcum)
predict student success in completing the physician assistant
program?
b. To what extent does the science grade point average (GPAsci) predict
student success in completing the program when used as an indicator
of success?
c. To what extent do the quality of work experience and hours of work
experience correlate with student success in completing the program
when used as indicators of student success?
d. To what extent does the reference letter predict student success in the
program when used as an indicator of student success?
e. To what extent does the personal statement predict student success in
the program when used as an indicator of success?
2. Do data gathered as part of the admission selection process into a PA
program serve as indicators to predict the success or failure of students in
passing the certification examination?
a. To what extent do GPAsci and GPAcum predict student performance
on the NCE?
b. To what extent does the GPAsci predict student success in the basic
science concept and pharmaceutical therapeutic task areas on the
National Certification Examination when used as indicators of
success?
c. To what extent do the quality of work experience and the hours of
work experience correlate with student performance on the National
Certification Examination?
3. Is there a statistically significant relationship between the applicant’s entry-
level skills and the exit-level skills identified on the National Certification
Examination?
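The List of Tables in the front matter indicates that questions of this form were addressed with independent-samples t-tests (a continuous predictor compared across a binary outcome) and chi-square tests (a categorical predictor against the outcome). A hedged sketch on synthetic data, assuming scipy; the group sizes and contingency counts below are invented for illustration, not drawn from the study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical GPAsci values for completers vs. non-completers.
gpa_completers = rng.normal(3.4, 0.3, size=140)
gpa_noncompleters = rng.normal(3.1, 0.3, size=30)

# Independent-samples t-test: do the group mean GPAs differ?
t_stat, p_t = stats.ttest_ind(gpa_completers, gpa_noncompleters)

# Chi-square test for a categorical predictor: a 2x2 table of
# clinical-experience type against program completion (invented counts).
table = np.array([[80, 60],   # completed program
                  [10, 20]])  # did not complete
chi2, p_chi, dof, expected = stats.chi2_contingency(table)

print(f"t = {t_stat:.2f} (p = {p_t:.4f}); chi2 = {chi2:.2f} (p = {p_chi:.4f})")
```

The t-test fits the continuous predictors (GPAs, rating scores), while the chi-square test fits the categorical ones (type of clinical experience, binned clinical hours), mirroring the split visible in Tables 27 through 34.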
Delimitations
The delimitations identified in this study were as follows:
1. The grade point average (GPA) is a traditional predictor of student outcome.  
The entry-level GPAs used in the study came from a wide range of
institutions; the study did not differentiate GPAs from private versus state
institutions, nor between 2-year and 4-year institutions; therefore, the
GPAs may not truly predict students’ abilities to succeed.
2. The study correlated entry-level skills with skill sets assessed on the NCE
without taking into account learning that took place during the course of
training.
Limitations
Data used in the study were limited to data collected from students who
enrolled in a community college-based PA program from the fall of 1999 to the fall of
2005.  The physician assistant profession is a multi-entry-level profession, with
programs requiring candidates to have academic prerequisites that range from
undergraduate course work to graduate course work.  The majority of the accredited
PA programs offer baccalaureate degrees or master’s degrees; only ten (out of 139)
PA programs in the country are located at community colleges.  This study was
limited to the investigation of admission criteria at one community college-based PA
program and therefore cannot be generalized beyond this demographic.
Definition of Terms
The definitions of the following key terms are for the purpose of enhancing
the reader’s understanding of the concepts to follow:
Accreditation Review Commission on Education for the Physician Assistant,
Inc. (ARC-PA):  The Accreditation Review Commission on Education for the
Physician Assistant, Inc., is an independent accrediting agency authorized to accredit
qualified PA educational programs (http://www.arc-pa.org).  ARC-PA protects the
public’s interest and the PA profession by defining the standards for PA education in
the United States and ensuring programs’ compliance with those standards.
American Academy of Physician Assistants (AAPA):  The national
professional organization that represents practicing physician assistants and
physician assistant students in training.
Admission variables: The variables used during the admission selection
process to gather data that identify predictors of student performance.  Independent
variables identified in the study are as follows:  cumulative GPA, science GPA, work
experience (quality and duration), letters of reference, essay and entry-level skills.  
The dependent variables are program completion and score on the NCE.
Science Grade Point Average (GPAsci):  The GPAsci is calculated from the
anatomy, physiology, and microbiology courses.
Cumulative Grade Point Average (GPAcum): The GPAcum is calculated
based on prerequisite course work only (chemistry, physics, English 1A, sociology
or anthropology, psychology), with the exclusion of anatomy, physiology and
microbiology.
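The two GPA definitions above amount to averaging grade points over different subsets of the prerequisite courses. A minimal sketch on a 4.0 scale; the course names, letter grades, and grading scale here are illustrative assumptions, not the program's actual records:

```python
# Hypothetical 4.0-scale grade points (an assumed scale).
GRADE_POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}

# GPAsci covers anatomy, physiology, and microbiology; GPAcum covers
# the remaining prerequisites, excluding those three courses.
SCIENCE_COURSES = {"anatomy", "physiology", "microbiology"}

def mean_gpa(grades, courses):
    """Average grade points over the given subset of courses."""
    points = [GRADE_POINTS[grades[c]] for c in courses if c in grades]
    return sum(points) / len(points)

# Example applicant record with invented grades.
grades = {
    "anatomy": "A", "physiology": "B", "microbiology": "A",
    "chemistry": "B", "physics": "C", "english": "A",
    "sociology": "A", "psychology": "B",
}

gpa_sci = mean_gpa(grades, SCIENCE_COURSES)
gpa_cum = mean_gpa(grades, set(grades) - SCIENCE_COURSES)
print(f"GPAsci = {gpa_sci:.2f}, GPAcum = {gpa_cum:.2f}")
```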
Physician Assistant Education Association (PAEA): The Physician
Assistant Education Association, established in 1972, is the national organization
representing physician assistant education programs in the United States.  The
PAEA’s mission is to assist PA education programs in the instruction of highly
educated physician assistants.  The association offers an array of services to PA
programs, faculty, students, and the general public aimed at fulfilling this mission.
National Commission on the Certification of  Physician Assistants
(NCCPA):  The NCCPA is the only certifying agency for the physician assistant
profession.  The organization was established in 1975 as a non-profit organization.
Physician Assistant National Certifying Examination (PANCE):  The
National Certifying Examination (NCE) for the physician assistant profession. The
examination is designed to assess entry-level competency of graduate physician
assistants to practice medicine.  The exam is based on a practice analysis performed
every four years and is practice-based in format.
Conclusion
The physician assistant profession is a fairly new health profession compared
with longer-established health professions such as nursing and medicine.
profession has evolved from an informal hospital-based curriculum in the mid 1960s
to a formal college/university curriculum that exists in physician assistant education
today.  The roles and responsibilities of physician assistants in clinical practice have
expanded, and public demand for highly qualified health care providers has
compelled gatekeepers of the profession (i.e., ARC-PA, PAEA, NCCPA, and AAPA)
to ensure that physician assistant graduates have the skills and competencies
necessary to practice quality medicine.  The admission selection process plays a
crucial role in ensuring that individuals who enter the profession have the academic
abilities and personal attributes requisite to achieve this goal.
The current admission criteria for the community college-based physician
assistant program are driven by two independent sources: the state guidelines
regulating the community college admission process and the Physician Assistant
Committee Laws and Regulations guidelines established in the early 1970s. The
admission criteria for the program were implemented eight years ago and the
reliability and validity of the admission process had never been established through
research. The increased attrition rates of students in the program and the decrease in
performance of graduates on the National Certification Examination were believed to
be related to poor reliability of the admission variables and insignificant correlations
between the admission variables and student outcomes.
The purpose of this study was to examine the predictive reliability and
validity of the admission data in predicting student success in program completion
and student success on the NCE.  The goal was to establish predictive validity
criteria that would select students with a relatively high probability of successfully
completing the program and passing the National Certifying Examination.  The
admission criteria established were aimed at improving student success while
adhering to the predictive validity principles defined by the state regulations.
CHAPTER II
LITERATURE REVIEW
The physician assistant profession is relatively new compared to other health
professions such as nursing, physical therapy, chiropractic, pharmacy, and medicine;
it has existed only since 1965.  Because the profession is so new, educational
research on physician assistant education is sparse; consequently, this literature
review primarily explores admission practices employed by related health
professional programs in chiropractic, dental hygiene, nursing, pharmacy, physical
therapy, and medicine.  Research studies in these related health fields created an
appropriate foundation for this study because the traditional admission criteria used
as discriminators in the selection process for these programs are similar, and the
intellectual and personal characteristics valued in related health professions are
consistent throughout.
To support this investigation and provide guidance in selecting admission
variables that will identify qualified candidates, this literature review focuses on
relevant studies of admission variables used by health professional programs that
may guide physician assistant programs toward a more discriminating selection
process.  The review provides an overview of reliable and valid predictive validity
studies that correlated admission variables with student academic performance.
These studies were used to support and guide the investigation in this study.
To appreciate the significance of this research, the reader must have a clear
understanding of the history of the profession, the role and responsibilities of the
physician assistant in clinical practice, and the significance of the certification
process in validating the competency of the physician assistant.  Establishing a
foundation of knowledge about the profession will help the reader understand the
importance of the admission criteria in supporting the profession.
History of Physician Assistant Education
The physician assistant profession was created in the mid-1960s in response
to a health care provider shortage in the United States (http://aapa.org).  The initial
hospital-based curriculum was started at Duke University in the mid-1960s, utilizing
medical corpsmen from the Vietnam War. Since 1965 there has been substantial
growth in the PA profession along with a simultaneous change in the roles and
responsibilities of PAs in health care.  It is one of the fastest growing health care
professions in the United States because of the shortage of health care providers and
the cost savings that can be achieved by hiring physician assistants as health care
practitioners: physician assistants provide competent health care at half the salary of
traditional physicians. The U.S. Department of Labor Bureau of Labor Statistics
2006 Report projected a 49% increase in the number of physician assistant jobs
from 2004 to 2014 (http://www.bls.gov/oco/ocos081.htm, accessed 6/29/08).
Physician assistants are trained as generalists, but the responsibilities of a
physician assistant in health care depend on the practice setting and on the education
and experience of the PA.  Physician assistants are trained to work in an array of
clinical settings, including hospitals, physicians' private offices, HMOs, correctional
facilities, branches of the military, nursing homes, public health and community
clinics, and industrial clinics.  Physician assistants are also employed by the
government to work in foreign embassies and in the White House.
Physician assistant graduates must obtain certification in order to gain the
license required to practice medicine as a physician assistant.  Achieving a passing
score on the National Certification Examination (NCE) is required for certification.
A brief description of the certification process follows.
Physician Assistant Certification
The NCCPA is the only national credentialing organization for physician
assistants in the United States. It was established in 1975 at the recommendation of
the National Board of Medical Examiners and the American Medical Association.  
The purpose of this organization is to certify that graduate physician assistants and
practicing PAs have the knowledge and skills necessary to practice medicine.   All
states, U.S. territories and the District of Columbia rely on the NCCPA certification
criteria for initial licensure of physician assistants in their state or territory  
(http://nccpa.net/AboutUs.aspx).
The first certification examination was administered in 1975 and was
designed as a two-part examination.  Part I, a 300-question multiple-choice test, was
designed to cover content knowledge; therapeutic, diagnostic, and laboratory
interpretation; and clinical management.  Part II was a demonstration component
that assessed physical assessment skills and clinical reasoning.  In 1995, Part II
was discontinued because research had shown that multiple-choice questions could
assess candidates' competency as well as the demonstration component could.  The
current certification examination is a 360-question multiple-choice examination
assessing seven basic task areas: taking histories and performing physical
examinations; using laboratory and diagnostic studies; formulating the most likely
diagnosis; health maintenance; clinical intervention; pharmaceutical therapeutics;
and applying basic science concepts (http://nccpa.net/AboutUs.aspx, accessed
4/13/08).  For the purpose of this study the basic task areas identified in the NCE
blueprint will be referred to as exit skills. The study will correlate the entry-level
skills to exit skills on the NCE.
To attain PA certification, PAs must graduate from an accredited PA program
and pass the Physician Assistant National Certifying Exam (PANCE).  After passing
the PANCE, the PA becomes NCCPA certified, which qualifies him or her for
licensure to practice medicine as a physician assistant in his or her state.  A six-year
certification maintenance cycle follows the initial certification; it includes
completing 100 continuing medical education (CME) hours every two years, and
during the fifth or sixth year of the cycle the PA must pass a recertification
examination to maintain certification (http://nccpa.net/AboutUs.aspx, accessed
4/13/08).  Selecting applicants who have the cognitive abilities and personal
attributes to succeed in completing the program and passing the NCE is one aspect
of the admission selection process, but another important consideration is selecting
individuals who will maintain certification through the recertification process,
which takes place every six years.
The Admission Process
This detailed description of the admission selection process at the
community college-based physician assistant program is provided to give the reader
a clearer understanding of the overall process that shapes the selection of students
for the PA program. The utility of admission variables as predictors of student
success is an important component of the admission selection process.  Because the
selection of students is done by lottery, the variables used to predict student success
must be reliable and valid: individuals who qualify for the lottery pool should have
the academic abilities and personal qualities necessary to succeed.  Variables used
in the admission selection process must be effective predictors of student success or
failure so that applicants who are placed in the lottery are comparable in the
academic abilities and skills necessary to succeed.
The admission prerequisite requirements that must be completed prior to
admission to the PA program are as follows: (1) completion of all prerequisite
courses (anatomy and physiology, microbiology, English 1A, physics 10 and 11,
chemistry 2A, sociology I or anthropology I); (2) a science GPA of 2.7 based on
grades received in anatomy, physiology, and microbiology; and (3) a cumulative
GPA of 2.5 based on all prerequisite courses excluding anatomy, physiology, and
microbiology.
Courses used as prerequisites serve as a foundation for courses taught in the
PA program.  For example, a student must have knowledge of anatomy,
physiology, and microbiology to understand the pathophysiology and etiology of
disease processes.  Another example is the physics course, which serves as a
foundation for the orthopedic courses by giving students a basis for understanding
the mechanism of injury in musculoskeletal disorders.
Non-cognitive data are also gathered in the admission process.  Non-cognitive
variables include: 2000 hours of paid "hands-on" health care experience (e.g., as a
nurse, emergency medical technician, paramedic, certified nursing assistant, or
medical assistant); a personal statement that assesses written communication skills,
the applicant's commitment to the physician assistant profession, and the applicant's
desire to become a physician assistant; and two letters of reference from individuals
who are familiar with the applicant's clinical performance.   The community college-
based PA program is one of two PA programs in the country that do not utilize
interviews to assess applicants' potential for success; faculty and preceptors for the
program consider this a limitation in the selection process.  State regulations prohibit
the use of interviews as a variable in the admission process, which is why interviews
had not been incorporated. All information gathered in the admission process was
thoroughly evaluated for compliance with the admission criteria, and a step-by-step
process was implemented to ensure that all candidates in the lottery are qualified.
The application process is divided into three phases.  Phase I entails transcript
evaluation to ensure that all applicants have met the academic qualifications for the
program.  Transcript evaluation was done by the District's transcript evaluator, who
determined whether all prerequisite course work had been completed and validated
course equivalencies for courses taken outside the college. Additionally, the
transcript evaluator calculated the cumulative and science GPAs.  Applicants who
passed the Phase I criteria progressed to an advanced screening process, Phase II.
Individuals who did not pass the transcript screening were disqualified, and their
files were removed from the selection pool.
Phase II of the admission process entails screening the application for the
following: verification of occupational experience (type, quality, and hours of
work experience), evaluation of the personal statement (to determine commitment
to the profession, the goal of becoming a PA, and written communication skills),
and evaluation of the reference letters.  Each file, including all written documents,
was reviewed by two members of the faculty.  The personal statement was evaluated
for content and had to contain statements expressing the goal of becoming a PA and
a commitment to the PA profession.  Ratings of personal statements and letters of
reference were assigned subjectively, without a rating scale; rater reliability had not
been established, which was a limitation in the admission process as currently
practiced.  This study incorporated analytic and holistic rubrics as tools for rating
the personal statement and the letters of reference.  Rubrics were used in this study
to improve the reliability and validity of these variables in predicting student
performance in the program and student scores on the NCE.
Phase III is the final stage of the selection process.  Members of the Selection
Committee (the program director, the dean of the department, the medical director, a
physician assistant from the community, and two faculty members from the
program) perform a final review of each file to verify compliance with the admission
requirements and to determine whether the applicant is qualified for the lottery.
Qualified applicants are those individuals who, based on the selection criteria, the
committee believes have the academic and personal characteristics to successfully
complete the program and pass the licensing examination to become competent,
contributing, self-fulfilled members of the health care team (Downey et al., 2002).
Qualified applicants are selected for enrollment using a lottery system. Because the
number of qualified applicants exceeds the number of available slots, the Selection
Committee selects the class by lottery based on Board Policy 5000.  The policy
reads as follows (www.rcc.edu):
Admission priority to designated over-subscribed programs shall be
determined according to the legal residence of applicants in the order listed:
1. Residents of the Riverside Community College District
2. Residents of other community college districts within Riverside County
which do not present similar courses or programs.
3. Residents of California community college districts outside Riverside
County.
4. Residents of areas outside of California
The committee selected 30 applicants for enrollment, and the remaining
applicants in the pool were assigned alternate positions via the lottery process until
all applicants had been assigned a number.  The lottery results in a random selection
of students and does not discriminate between the least qualified and the most
qualified of the group: an individual with the minimum GPAcum of 2.5 or GPAsci
of 2.7 has the same opportunity for selection as an individual with a 4.0 GPA in
both science and cumulative course work.
Limitations on the admission selection process create barriers that interfere
with the selection of qualified applicants. Data gathered in the admission selection
process must be reliable and valid in identifying predictors of student success or
failure, because the admission variables determine which applicants qualify for the
lottery pool.  Applicants must complete all course prerequisites, achieve a GPAsci
of 2.7 and a GPAcum of 2.5, complete 2000 hours of paid "hands-on" health care
experience, submit a personal statement that reflects interest in and commitment to
the PA profession, and present two letters of reference that support their candidacy.
The selection process was designed to thoroughly evaluate all data gathered with the
intent of qualifying or disqualifying the applicant for the lottery pool.  The Selection
Committee must ensure that all applicants in the lottery pool have the academic and
personal qualities to succeed, because once the names are placed in the pool the
committee has no control over the outcome of the selection.  Efforts to select the
best candidates for enrollment must therefore be made before the lottery.  Although
the lottery process limits control over the outcome, the admission selection process
can be effective in predicting student success or failure if the data gathered from the
admission variables are strong discriminators and reliable and valid predictors of
student success.
Theoretical Framework
State regulations clearly stipulate the policies governing the use of
prerequisite courses in community college curricula and as requirements for
admission to programs. Regulations approved by the Community College Board of
Governors (BOG) and published by the Chancellor's Office of the California
Community Colleges (COCCC, 1997) and the State Academic Senate (Scroggins et
al., 1997) acknowledge that prerequisites are an integral part of the community
college curriculum, but insist that prerequisites for courses or for admission to
programs be implemented only to ensure that students have the entering abilities
and background required to succeed in the course (Phillips et al., 2002).
Additionally, prerequisites should be used to create a more homogeneous cohort of
students with compatible aptitudes and abilities; they are intended to improve
student retention and success rates by ensuring that the cohort shares a common
background of knowledge, abilities, experiences, and aptitudes (Armstrong, 1997).
The admission criteria for the physician assistant program were drafted with this
intention.  However, the reliability and validity of the criteria were never established
as part of the program review process; the program is fairly new, and until now it
did not have enough data to perform the appropriate research.
The State Matriculation Regulations define two criteria for validating
prerequisites for courses or program admission.  The first criterion is a content
review, whereby the content of the prerequisite course serves as foundation for
another course.  The second criterion requires that an empirical or statistical
relationship be demonstrated between a course and its respective prerequisite before
a mandatory prerequisite requirement is implemented (Phillips et al., 2002).  This
second criterion is the guiding force of the theoretical framework for this study.
The regulations state that the "prerequisite for a course shall be clearly related to
course content and must be validated as being necessary for success in such course"
[Section 58106(c)(2)].  The regulations further state that:

In order to show that a prerequisite is necessary for success in a particular
course, the validation procedures must ensure that a student who has not met
the prerequisite is highly unlikely to obtain a satisfactory grade in the
course. [Section 58106(e)]

The current admission process for the physician assistant program has not
been validated through research.  The academic prerequisites for admission to the
program were adopted based on the mandates published in the Physician Assistant
Committee's Laws and Regulations.  The purpose of this investigation was to assess
the reliability and validity of the data currently gathered in the admission process
and to determine whether these data provide criterion-related validity evidence for
the prerequisites used for admission to the PA program.  The burden of proof for
validating the current admission criteria must be met using a predictive validity
process that maintains conformity with the state and federal statutes regarding
access (Phillips et al., 2002).
The theoretical framework for this study relied on predictive validity research
that places this study in context with other correlational studies that examined
predictor variables (i.e., course prerequisite GPA, science course GPA, entry-level
skills) used in the PA program admission criteria.  Review of prior predictive studies
enabled this investigation to anticipate various sources of measurement error so that
attempts to control the error could be made at the outset of the investigation (Phillips
et al., 2002).
Predictive validity is an attempt to approximate the future in the present
(Armstrong, 2000). This study aimed to identify variables that forecast academic,
vocational, and personal success (Gall et al., 2007, p. 342).  Prediction studies
provide three types of information: (1) the extent to which the criterion behavior
pattern can be predicted, (2) data for developing the criterion behavior pattern, and
(3) evidence about the predictive validity of scores that were correlated with the
criterion behavior pattern.  The criterion for this study was student success,
demonstrated by completion of the program and a passing score on the NCE.  The
predictor variables were GPAs, skills, the personal statement, and letters of
reference.  If a significant correlation exists between the predictive measures and
the criterion measures, criterion-related validity evidence exists.  Review of the
following predictive studies provided the context for this study and built a
foundation that supported this research.
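The correlational machinery behind such prediction studies is straightforward. The sketch below computes a Pearson product-moment correlation between a predictor series and a criterion series; the GPA and NCE values are fabricated solely for illustration and are not drawn from this study's data.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation between a predictor series
    (e.g. science GPA) and a criterion series (e.g. NCE score)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Fabricated illustration: science GPA vs. NCE score for six students
gpa_sci = [2.7, 3.0, 3.2, 3.5, 3.8, 4.0]
nce = [350, 380, 400, 430, 470, 490]
r = pearson_r(gpa_sci, nce)  # strongly positive for this made-up data
```

A value of r near 1 (or -1) indicates a strong linear relationship between predictor and criterion; values near 0 indicate the predictor carries little criterion-related validity evidence.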
Identifying Predictors of Success
Review of the literature indicated that admission criteria used in health
professional programs rely on a variety of traditional variables as measures that
predict student success.  Cognitive and non-cognitive data are collected for the
purpose of selecting the candidates most likely to succeed.  Common admission
criteria among allied health programs consist of cumulative GPA, science GPA,
personal statements, interviews, letters of reference, and standardized test scores
(i.e., GRE, ACT, SAT).  Previous scholastic achievement was by far one of the most
commonly used predictors of success (Fontanella and Cooke, 1992; Thieman, 2003;
Zhang, 1999).   Thieman (2003) administered surveys on admission criteria used in
allied health professions, which confirmed a strong reliance on previous scholastic
achievement, both GPAcum and GPAsci, as discriminators for effectively
identifying candidates capable of success.  Fontanella and Cooke (1992)
demonstrated in their study the role of grades in gaining access to medical school.
A retrospective study by Beeson and Kissling (2000) of nursing students over a
fifteen-year period demonstrated a high correlation between GPA in the program
and pass rate on the NCE (Espen, 2006).   Similarly, Espen (2006) demonstrated
that a student's science GPA showed a significantly high correlation with the pass
rate on the licensure examination and with program completion for students in a
radiography program. For some chiropractic colleges, GPA is the only
academic criterion for admission (Zhang, 1999).  Zhang claims that GPA is widely
used as a predictor of success because it is calculated from an individual's past
performance, which is considered a predictor of future academic performance.  The
assumption is that a student who achieved a high GPA at an institution once will be
able to do so again (Zhang, 1999).  However, Zhang postulated several limitations
of GPA as a predictor of future success, noting that a student's learning strategies,
behavior, and attitude are not factored into the equation. Abedi (1991) contends
that GPA may not be a reliable indicator because it does not take into consideration
the college or university the individual attended, the bases for grading, or the
number of college units completed.  The number of college units completed prior
to enrollment was not a consideration in this study, but examining this variable as a
predictor of student success is a recommendation for future studies.
Zhang (1999) investigated correlations among students' entry-level grade
point average, academic performance in the program, and National Board
Examination (NBE) scores in all basic science subjects for students at Sherman
College of Straight Chiropractic.  The results showed moderate to good correlations
between entry-level grade point average and performance in the basic science
subjects in the program: general anatomy (r = 0.670), spinal anatomy (r = 0.620),
physiology (r = 0.570), biochemistry (r = 0.548), pathology (r = 0.583), and
microbiology (r = 0.640).  The study also showed a good correlation (r = 0.720)
between overall GPA in the sciences and class performance. The present study
echoes Zhang's findings by demonstrating a statistically significant correlation
between science grade point average and performance on basic science concepts
on the NCE.
Part I of the NBE is divided into general anatomy, spinal anatomy,
physiology, biochemistry, pathology, and microbiology.  The correlation of entry-
level GPA with performance on the NBE science content was not as impressive.
The correlations of entry-level GPA with NBE scores were moderate, except in
pathology and microbiology, where they were low: general anatomy (r = 0.514),
spinal anatomy (r = 0.521), physiology (r = 0.512), biochemistry (r = 0.552),
pathology (r = 0.205), and microbiology (r = 0.292).  Zhang's evidence indicates
that GPA predicted class performance better than it predicted NBE scores, which he
explained with three observations: (1) GPA correlates with learning ability, (2)
questions on the NBE cover a vast amount of core content, and (3) the stress of
taking the examination affects performance.   Zhang also reported a moderate
correlation of students' overall GPA with their overall NBE scores (r = 0.515).
Overall, Zhang's study demonstrated that GPA was a better predictor of class
performance than of NBE scores. The foundation established by Zhang's
comparison of academic performance with outcomes was used here to investigate
the correlation of GPAsci and GPAcum with program completion and performance
on the National Certification Examination for physician assistants.  In contrast to
Zhang's findings, the results of this study demonstrated a strong correlation
between GPAsci and student performance on the National Certification
Examination, while GPAcum did not demonstrate a statistically significant
correlation with program completion or performance on the physician assistant
NCE.  The prior prediction of an insignificant correlation between GPAsci and
performance on basic science concepts on the NCE was unfounded; the results
demonstrated a strong correlation between GPAsci and performance on basic
science concepts on the NCE.
A limitation of Zhang's study is that it did not compare other predictors, such
as the personal statement or entry-level skills. He recommended a comparative
study investigating the predictive value of GPA against other predictors (Zhang,
1999).  This study expanded on Zhang's by investigating the predictive value of
GPAs compared to other predictors (i.e., personal statement, letters of reference,
entry-level skills).  Sandow et al. (1999) concluded that two or more admission
criteria, used in combination, provide a more reliable means of predicting academic
success.
Thieman (2003) performed a retrospective multiple regression analysis of 121
students who enrolled in an entry-level master's degree program in physical therapy
at the College of Saint Catherine.  He collected 121 NBE scores from graduates; the
scores for the first two classes were self-reported.  The study examined GPA as a
predictor of clinical performance.  The only admission criterion that yielded an
above-chance correlation with the licensure examination was the undergraduate
GPA (r = .243).  The focus of the study was to evaluate the ability of the admission
data to predict clinical performance. The problem with the study was that the data
gathered in the admission process could not be used to predict clinical performance
because the forms used to collect preadmission data did not mirror the form used in
the program.  Specifically, the skills listed on the Clinical Reference Form used in
the application process did not mirror the skills assessed on the Clinical
Performance Instrument used to assess the clinical competency of students in the
program; therefore, data gathered in the admission process could not be used to
predict clinical performance in the program.  The entry-level skills listed on the PA
application form mirror the exit skills identified on the NCE, so this should not be a
problem in the present study.  Thieman acknowledged this weakness and suggested
that future research redesign the Clinical Experience Reference Form used in the
application process to reflect the Clinical Performance Instrument used in the
program.  The form used in the physician assistant program application to report
clinical skills in this study mirrors the seven skill areas identified on the NCE.
Thieman also noted an additional limitation: his study relied on self-reported NCE
scores from graduates of the first two classes, which may affect the reliability of
those scores.  The National Commission on Certification of Physician Assistants
reports all board scores directly to the program; therefore, the NCE information
used in this study was reliable.
Evans and Wen (2005) examined the value of Medical College Admission
Test (MCAT) subscores in predicting global academic performance in osteopathic
medical school (defined by GPA in basic science), cumulative undergraduate GPA
(UGPA), clinical GPA, and national licensing examination scores.  The
undergraduate GPA was the most significant predictor (β = .13-.33) of the five
variables, and GPAsci also showed significant predictive value, whereas MCAT
subscores were of limited value in predicting academic performance.  In contrast,
some studies have suggested that MCAT scores are a better predictor (r = 0.615-.07)
of academic performance than UGPA (r = 0.54-0.58) (Koenig, 1999), and the
findings of Veloski et al. (1996) indicate that the science MCAT score was a better
predictor of performance on the NBE (Evans and Wen, 2005).
Besinque et al. (2000) performed a self-reported mail study to determine
predictors of performance on the California State Board of Pharmacy Licensure
Examination (CSB Exam).  The study evaluated graduates' success in passing the
CSB and also considered demographics, work experience, current position, study
habits, and National Association of Boards of Pharmacy Licensure Examination
results.  A Student's t-test was performed to examine the relationship of pre-
pharmacy GPA and pharmacy school GPA to performance on the CSB Exam.  The
analysis of pre-pharmacy GPA against CSB Exam performance showed little
statistical difference between individuals who passed the exam and those who
failed: the mean GPA was 3.39 ± 0.27 for individuals passing the exam and
3.31 ± 0.23 for those not passing (P < 0.08). However, the study did indicate that
academic performance in pharmacy school was a stronger predictor of performance
on the CSB Exam.  Besinque et al. (2000) postulate that the reason for the difference
is that educational background in pre-pharmacy and the number of years in pre-
pharmacy school were not taken into consideration. The admission criteria for the
PA program likewise do not consider differences in educational background;
moreover, the open-access mission of the California Community Colleges ensures
that the students served come from diverse educational backgrounds (Phillips et al.,
2002).  Therefore the limitation recognized in Besinque's study applies to this study
as well.
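A group-mean comparison like Besinque's can be illustrated with a t statistic. The sketch below uses Welch's formulation (which does not assume equal group variances); that choice, and the GPA values, are assumptions of this illustration, not a reproduction of Besinque et al.'s analysis.

```python
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances),
    e.g. mean prerequisite GPA of examinees who passed vs. failed."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2  # sample variances
    return (mean(a) - mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)

# Fabricated GPAs for the pass and fail groups
passed = [3.1, 3.4, 3.5, 3.6, 3.7]
failed = [3.0, 3.2, 3.3, 3.4]
t = welch_t(passed, failed)  # positive: the passing group's mean GPA is higher
```

A small t value relative to its degrees of freedom, as in Besinque's P < 0.08 result, means the observed difference in group means could plausibly arise by chance.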
Phillips et al. (2002) investigated the selection and modeling of variables
used for admission to Associate Degree Nursing (ADN) programs.  The Center for
Student Success at City College of San Francisco worked closely with the ADN
consortium of community college nursing directors to develop, conduct, analyze,
and disseminate findings related to prerequisite standards for community college
nursing programs (Phillips et al., 2002).  Community college ADN programs face
admission-criteria constraints similar to those confronting the community college-
based physician assistant program.  The attrition rate in ADN programs was high,
and the ADN faculty believed this was related to the lack of required prerequisite
courses and skill-level competencies for entry into ADN programs, a concern that
resonates with the physician assistant program faculty.  The RCC PA program
faculty postulated that the prerequisite courses are minimal compared to most
physician assistant program admission criteria and that the prerequisite courses
required for admission to the program have not changed since the 1970s despite the
evolutionary changes that have occurred within the profession.  The prerequisite
courses mandated by the Physician Assistant Committee have not been modified to
reflect changes in the profession, changes in PA curricula, or changes in the roles
and responsibilities of PAs in clinical practice.
The problem of nurse shortage coupled with high attrition rates of students in ADN programs refueled a 20-year debate over access versus quality (Phillips et al., 2002). The open-door philosophy of community colleges ensures equity in access, but does not ensure quality in the selection of students for community college programs. The attrition rate for ADN programs in California increased from 18% in 1994-95 to 27% in 1998-99 (Phillips et al., 2002). With the rise in attrition rates and the need for more nurses in the state, the purpose of the research was to create ways to provide open access with few requirements for admission while attempting to reduce high attrition rates in ADN programs, which is the same objective of the PA program. The study used a predictive validity model to test fifty variables to determine their relationship with program completion. The study identified four top variables: overall college GPA, English GPA, core biology GPA (anatomy, physiology and microbiology), and core biology repetitions (the number of times a student repeats one of the core biology courses). Correlation between prerequisite GPA and program completion showed a statistically significant increase in program completion as the prerequisite GPA increased. Similar correlations existed for core biology and English: students who spoke English as their primary language had higher completion rates. Core biology had the highest success rate and English had the third highest. Students who earned a 3.0 core biology GPA had a 79% successful completion rate, whereas students with an overall GPA of 3.0 had a 73% completion rate (Phillips et al., 2002).
A composite formula was created using logistic regression, and application of the formula in the admission process decreased the attrition rate from 27% in 1998-99 to 18% in 1999-2000. Use of the composite formula to predict student success improved completion rates across ethnic groups rather than diminishing them. The findings of the study were significant in that researchers were able to create a formula to improve success rates without negatively influencing the success rate of any ethnic group. The predictive validity principles defined by the state regulations were met and the results were higher success rates for all students (Phillips et al., 2002).
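A logistic-regression composite of this kind can be sketched as follows. The coefficient values below are illustrative placeholders chosen for this sketch, not the values estimated by Phillips et al. (2002), which are not reproduced in this review.

```python
import math

# Illustrative coefficients only; NOT the values fitted by Phillips et al. (2002).
COEF = {
    "intercept": -6.0,
    "overall_gpa": 0.9,        # overall college GPA
    "english_gpa": 0.5,        # English GPA
    "core_bio_gpa": 1.4,       # anatomy/physiology/microbiology GPA
    "core_bio_repeats": -0.7,  # repetitions of core biology courses
}

def completion_probability(overall_gpa, english_gpa, core_bio_gpa, repeats):
    """Logistic composite: P(completion) = 1 / (1 + e^(-z))."""
    z = (COEF["intercept"]
         + COEF["overall_gpa"] * overall_gpa
         + COEF["english_gpa"] * english_gpa
         + COEF["core_bio_gpa"] * core_bio_gpa
         + COEF["core_bio_repeats"] * repeats)
    return 1.0 / (1.0 + math.exp(-z))

p_strong = completion_probability(3.8, 3.5, 3.8, 0)
p_weak = completion_probability(2.5, 2.5, 2.5, 2)
```

Applicants can then be ranked by predicted probability; a program would estimate the actual coefficients by fitting a logistic regression to historical completion data.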
Phillips' study served as a strong foundation for this study because of the similarity of issues related to state mandates governing admission selection criteria for community college-based programs. Phillips' study served as a model for validating the prerequisite knowledge and skills required for the physician assistant program. The physician assistant program faces the same problems identified in that study: high attrition rates, a shortage of PA practitioners, and difficulty identifying the prerequisite skills and knowledge needed to select students with a relatively high probability of successfully completing the program and passing the National Certifying Examination.
Chisholm (1995; 1997) approached the subject somewhat differently from the previous researchers. He used admission variables to predict academic failure (students who fell below the 25th percentile) instead of predicting academic success. He studied four factors (math GPA, science GPA, prepharmacy GPA, and the Pharmacy College Admission Test) in a pharmacy admission process to determine whether any one of the factors was a predictor of student ranking; none proved to be a significant predictor of students' failure in the program.
Holglum et al. (2005) investigated predictors of academic success and failure in a pharmacy professional program. Academic performance measures in this study were cumulative GPA, science GPA, American College Test (ACT) score and average organic chemistry grade. Findings of the study demonstrated a significant correlation between student success and higher science grades, a prior degree, higher organic chemistry grades, and higher ACT scores, particularly among students who transferred to a four-year college. Curtis et al. (2007) correlated admissions criteria with the academic performance of dental students. The research was carried out by tracking ten underachieving students from each of five classes (50 students in total). The following admission variables were used to measure success: cumulative GPA, science GPA, Perceptual Ability Test, college rigor, and academic load in college. Descriptive statistics, correlation, and regression analysis were used to correlate the admission variables with first-year GPA and graduating GPA. First-year GPA was a moderate indicator (r² = 0.58) for predicting graduating GPA.
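The r² = 0.58 reported by Curtis et al. is the squared Pearson correlation between first-year GPA and graduating GPA. A minimal stdlib computation, using made-up sample GPAs rather than the study's data:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical first-year vs. graduating GPAs for five students
first_year = [2.8, 3.1, 3.4, 3.6, 3.9]
graduating = [3.0, 3.2, 3.3, 3.7, 3.8]
r_squared = pearson_r(first_year, graduating) ** 2
```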
Tang and Lee (1989) completed a retrospective study to evaluate admission criteria in predicting the academic achievement of Hong Kong Polytechnic students: pre-admission English grades and science scores, along with other independent variables, were evaluated for how well they predicted the overall performance grade. The study demonstrated that science subjects alone had no significant correlation with overall performance, indicating that these subjects did not contribute significantly to the prediction of success.
Downey et al. (2002) conducted a predictive reliability study using five variables (incoming GPA, math/science GPA, total SAT score, verbal SAT score and math SAT score) to predict success in the dental hygiene program at the Medical College of Georgia for the classes of 1996-2001. Since the present study did not consider standardized test scores as a determinant of student success, only those findings related to cumulative and math/science GPA are discussed from Downey's study. Success was defined in the study as GPA at graduation and score on the Dental Hygiene National Board. Findings indicated that the most efficient admission model included incoming GPA (p < .001) and total SAT scores as predictors of dental hygiene GPA. Non-math/science GPA and math/science GPA combined (incoming GPA) added significantly to the ability to predict student performance on the Dental Hygiene National Board. The incoming GPA was a more reliable predictor of success than science and math GPA. Berchulc et al. (1987) reported similar findings, showing that overall GPA is a more efficient variable than subject-component GPA in predicting student success.
Non-cognitive Variables
Academic merit alone should not be used as the sole basis for admission to health care professional programs (Mavis et al., 2006). Research has demonstrated that a better prediction of student performance in a health care professional program is obtained when both cognitive variables (reflecting academic abilities) and non-cognitive variables (personal attributes) are used to predict student success or failure. Non-cognitive variables such as personal interviews, personal statements expressing commitment to the profession, written communication and prior work experience are commonly used in admission processes to augment information gathered from cognitive variables. Non-cognitive characteristics have been valuable for selecting students whose interests are harmonious with the ideals expressed in the institutional mission statement (Mavis et al., 2006). This is true of the evaluation of personal statements in the physician assistant application packet: personal statements that reflected values in the mission statement were given greater consideration in the selection process.
The significance of using subjective data to augment academic data in the admission selection process was demonstrated in the research study done by Confer et al. (1995). Confer (1995) completed a study that correlated objective and subjective admission criteria with first-year academic performance at the College of Veterinary Medicine at Oklahoma State University. Findings indicated that a combination of several academic scores augmented with subjective admission criteria resulted in a better prediction of student academic performance in the program. Although the literature does not claim that this would result in a better prediction of student performance on the NCE, the assumption is that students who do better in the program will do better on the NCE. GRE scores, cumulative GPA, GPA for prerequisite requirements, grades in science courses, and MCAT scores were collectively used as objective criteria for admission. Subjective criteria such as reference letters, professional and other work experience, and selectivity of colleges and universities were assessed and assigned a numerical score labeled FILE. Numerical scores assigned by the interviewer at the conclusion of the interview were labeled INTERV. A linear regression analysis was done for each subjective and objective criterion to determine its correlation with first-year academic performance.
Confer's study demonstrated that subjective admission criteria had the highest correlations with Year 1 academic performance in four of the five classes studied. Multiple regression analyses were done to determine which combination of variables created the best predictor model for selecting the applicants most likely to succeed. Although findings demonstrated that academic variables correlated with Year 1 academic performance, a combination of subjective variables (rather than individual subjective variables) correlated better with first-year academic performance in four out of five classes. This study emphasized the importance of using subjective data to augment academic achievement in the selection process, but it did not isolate which variables were most effective as predictors.
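Confer's finding that a combination of subjective variables outperformed any single one can be illustrated with a crude unit-weight composite (summing standardized predictors). The FILE/INTERV scores and GPAs below are invented for this sketch; the study's multiple regression would estimate weights rather than fixing them at one.

```python
import math

def zscores(xs):
    """Standardize a list to mean 0, population SD 1."""
    n = len(xs)
    m = sum(xs) / n
    sd = math.sqrt(sum((x - m) ** 2 for x in xs) / n)
    return [(x - m) / sd for x in xs]

def pearson_r(xs, ys):
    zx, zy = zscores(xs), zscores(ys)
    return sum(a * b for a, b in zip(zx, zy)) / len(xs)

# Invented applicant data: FILE score, INTERV score, Year 1 performance
file_scores   = [6.0, 7.5, 5.0, 8.0, 6.5, 7.0]
interv_scores = [7.0, 6.5, 5.5, 8.5, 6.0, 7.5]
year1_gpa     = [3.0, 3.3, 2.7, 3.8, 3.1, 3.4]

# Unit-weight composite: sum of standardized predictors
composite = [a + b for a, b in zip(zscores(file_scores), zscores(interv_scores))]

r_file      = pearson_r(file_scores, year1_gpa)
r_interv    = pearson_r(interv_scores, year1_gpa)
r_composite = pearson_r(composite, year1_gpa)
```

With these invented data the composite correlates more strongly with first-year performance than either predictor alone, mirroring the pattern Confer reported.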
Turnbull et al. (2003) conducted a study at the University of Adelaide in South Australia on the undergraduate medical school's 1997 transition from an admission system based solely on high school academic scores to a broader system incorporating multiple variables (i.e., a national written examination of reasoning and interaction skills, a structured oral assessment, and a threshold matriculation score). Introduction of the new admission process improved the quality of students selected for admission. More positive psychosocial characteristics were noted among candidates selected using the broader system than among those previously selected by matriculation score alone. Research findings suggested that a large portion of students admitted prior to 1997 had inappropriate reasons for studying medicine, lacked knowledge about it and were poorly prepared for the career (Turnbull et al., 2003). A purpose-designed survey was used to collect data from Year 3 and Year 6 students. The Year 6 students had been selected using the academic merit system and the Year 3 students had been selected using the broader system, which incorporated both objective and subjective data. A chi-square test showed a statistically significant difference between the two groups (Turnbull et al., 2003). Findings indicated that individuals selected under the new system were less likely to withdraw from the course due to a change in academic direction arising from lack of interest, lack of commitment to the profession or lack of knowledge about the profession. Survey results showed that Year 3 students were more knowledgeable about a career in medicine, fewer reported family pressure as a reason for studying medicine, and more were aware of the length of time it takes to study medicine. Findings of the study were hampered by relatively low response rates, but they were statistically significant and warranted future study (Turnbull, 2003).
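Turnbull et al.'s chi-square comparison of the two cohorts can be reproduced in miniature. The counts below are invented for illustration; the statistic is the standard Pearson chi-square for a 2x2 contingency table, compared here against the df = 1, alpha = .05 critical value.

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]
    (no continuity correction)."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Invented counts: withdrew vs. retained under the old and new systems
chi2 = chi_square_2x2(18, 82,   # old merit-only system: 18 of 100 withdrew
                      6, 94)    # new broader system: 6 of 100 withdrew
significant = chi2 > 3.841      # critical value for df = 1, alpha = .05
```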
Confer (1995) and Turnbull (2003) demonstrated the importance of using subjective data to augment academic data in the admission criteria for health professional programs. This investigation used subjective data to augment objective data in order to examine the reliability and validity of such data in predicting student success or failure. Work experience, letters of reference and personal statements are the non-cognitive variables used in the physician assistant admission criteria. The next section of this chapter reviews literature that investigated the reliability and validity of these variables in the admission criteria of other health professional programs.
Work Experience
The nature of the medical profession requires that physician assistant graduates have certain basic skills and abilities to practice medicine. Assessment of the basic skills and abilities of potential students should be included as a variable in the admission selection process of health professional programs (Salahdeen, 2004). Prior work experience in health care, as a prerequisite to the PA program, provides a means of assessing an applicant's basic skills and abilities that will contribute to the applicant's performance in the program and on the Physician Assistant National Certifying Examination (PANCE).
A review of the literature on admission practices for health professional programs acknowledged the utility of work experience as a consideration in the selection process, but few studies actually investigated correlations between work experience and academic performance, and no study was found that correlated work experience with student performance on a licensure examination. The majority of research correlating work experience with academic performance has centered on identifying key predictors of student success in masters-level business schools (Sulaiman et al., 2006). Work experience is a prerequisite at major business schools around the world, such as Harvard Business School, the UCLA Anderson School of Management, and the University of Pennsylvania's Wharton School (Schellhardt, 1998). Ainin et al. (2006) claim that prior work experience in a related field of study gives the individual a broader view of the information being studied, an advantage over individuals with little or no experience. Dreher and Ryan (2000) contended that students with prior work experience may more readily see the relevance and potential applications of material learned in a course.
Studies correlating work experience and academic performance in MBA programs have reached conflicting conclusions, but the majority of the research demonstrates a positive correlation between work experience and academic performance. McClure, Wells and Bowerman (1986) demonstrated a positive relationship between work experience and academic performance in MBA studies. Dreher and Ryan (2000) found a similar correlation between work experience and academic performance. Adams (2000) expanded on the topic by demonstrating a correlation between years of experience and academic performance: findings showed a significant correlation between years of experience in a field and academic performance in the program. However, Adams' study did not take into account the quality of prior experience as a contributing factor to student performance. In contrast, other researchers (Dreher & Ryan, 2000, 2002, 2004; Grady et al., 1996) concluded in their studies that work experience was not related to GPA in MBA programs (Sulaiman et al., 2006). This investigation of the community college-based PA program expands on existing research by correlating student performance with the quality of prior work experience (e.g. medical assisting, LVN, registered nurse, respiratory therapist) and the length of time in the field. Additionally, the relationship between entry-level skills reported from previous work experience and exit-skill performance on the NCE was investigated. Although this section of the literature review focused on studies correlating work experience with academic performance in MBA programs, the assumption that prior experience gives relevance to new material learned, thereby improving the potential for applying that information, mirrors the philosophy of the physician assistant program faculty. The faculty postulates that work experience serves as a foundation for material learned during training and for skills needed in clinical practice.
The entry-level skills identified in the admission application served as an independent variable in this study. The entry-level skills identified on the application mirror the exit skills identified on the NCE. The NCE Blueprint identifies seven exit skills that correspond to the entry-level skills identified in the application: 1) history taking and performing physical examinations, 2) using laboratory and diagnostic studies, 3) formulating the most likely diagnosis, 4) health maintenance, 5) clinical intervention, 6) pharmaceutical therapeutics, and 7) applying basic science concepts. A correlation between entry-level skills practiced in prior work experience and student performance on the exit-level skills identified on the NCE was expected; however, findings in this study did not support this hypothesis. Table 2 aligns entry-level skills on the left with exit-level skills on the right for comparison. Applicants who have prior experience in an entry-level skill should perform better on the corresponding exit skill.


Table 2
Comparison of Entry-Level Skills with Exit-Level Skills

Entry Skills | Exit Skills
Skill 1: History taking and physical examination | PANCE1: History taking and performing physical examination
Skill 2: Vital signs/EKG interpretation/bacterial culture interpretation | PANCE2: Using laboratory and diagnostic studies
Skill 3: EKG interpretation/culture interpretation | PANCE3: Formulating most likely diagnosis
Skill 4: Patient education | PANCE4: Health maintenance
Skill 5: | PANCE5: Clinical intervention
Skill 6: Injections/medications/respiratory therapy | PANCE6: Pharmaceutical therapeutics
Skill 7: Microscopic evaluation of blood, urine, gram-stained specimens | PANCE7: Basic science concepts
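The alignment in Table 2 suggests a simple check: do students who reported a given entry-level skill score higher, on average, in the mapped PANCE domain? The mapping keys, student records, and scores below are hypothetical; PANCE5 (clinical intervention) is omitted because Table 2 lists no corresponding entry skill.

```python
# Hypothetical entry-skill -> PANCE-domain mapping, following Table 2.
ENTRY_TO_EXIT = {
    "history_physical": "PANCE1",
    "vitals_ekg_cultures": "PANCE2",
    "ekg_culture_interpretation": "PANCE3",
    "patient_education": "PANCE4",
    "injections_medications_resp": "PANCE6",
    "microscopic_evaluation": "PANCE7",
}

def mean(xs):
    return sum(xs) / len(xs)

def domain_gap(students, entry_skill):
    """Mean exit-domain score of students reporting an entry skill minus
    the mean of those who did not; a positive gap supports the alignment."""
    domain = ENTRY_TO_EXIT[entry_skill]
    with_skill = [s["pance"][domain] for s in students if entry_skill in s["entry"]]
    without    = [s["pance"][domain] for s in students if entry_skill not in s["entry"]]
    return mean(with_skill) - mean(without)

# Invented records: two students with prior history/physical experience, two without
students = [
    {"entry": {"history_physical"}, "pance": {"PANCE1": 78}},
    {"entry": {"history_physical"}, "pance": {"PANCE1": 74}},
    {"entry": set(),                "pance": {"PANCE1": 70}},
    {"entry": set(),                "pance": {"PANCE1": 66}},
]
gap = domain_gap(students, "history_physical")
```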
47
An extensive search of the literature found very few studies that dealt with the validity and effectiveness of pre-entry skills in predicting performance on a health professional licensure examination. A retrospective study by Tang and Lee (1989) investigated the use of a mobility test as a variable in the admission selection process. The mobility test was designed to evaluate candidates' physical fitness, coordination and imaginative powers, qualities considered essential to perform as a competent physiotherapist. The results of the study indicated that the mobility test did not have significant predictive power for overall grade point average. The lack of significant correlation was explained in two ways: 1) cultural behavior that demands a focus on academics rather than physical fitness, and 2) most applicants were not familiar with the exercises they were asked to perform, contributing to apprehension that caused them not to perform to their potential. The study noted that although physical fitness was relevant to job performance, applicants who performed poorly were not excluded because the level of fitness did not have to be of top athletic quality.
The manual dexterity test is part of the admission criteria in dental schools and dental hygiene programs. Such a test is intended to serve as a predictor of student success in the program and on the licensure examination. Gansky et al. (2003) investigated the reliability and validity of the manual dexterity test (MDT) in predicting preclinical grades and faculty perception of satisfactory skill performance, which is a requirement for students to advance to the clinic. The MDT consists of a two-hour block carving test. The carving is judged by three preclinical faculty members, three clinical faculty, and two basic science faculty. Intra-rater reliability varied greatly, ranging from 0.34 to 1.00. The MDT did not correlate significantly (p = 0.342) with placement in the bottom 10 percent of the class. The MDT identified about 10 percent of the students as not passing the test (scored as No Passes by two out of three raters), yet only four of the students who received a No Pass score were in the lower 10 percent of the class, and these students did not have significantly lower grade point averages. Additionally, students who failed the MDT also had low scores on the Perceptual Ability Test (PAT), and the number of failures was significantly related to the PAT. Gansky concluded that the MDT did not appear to add information to the admission criteria or to predict student success in completing the program.
Review of the literature supports using skill assessment as a predictor of student performance in health professional programs; however, Tang and Lee (1989) and Gansky et al. (2003) did not find that skill assessment tests correlated significantly with student performance in the program. Their studies guided this study in determining whether a significant correlation exists between entry skills and exit skills on the NCE.
Letter of Reference
Research on the utility of reference letters in the admission process is meager. The predictive validity of personal references in the U.S. has fallen to almost zero (Hughes, 2002), and reference letters show little value as predictors of student success. Stronck (1979) and Stieren (1981), in their investigations of nursing program admission criteria, found that qualitative data such as interviews and letters of reference had no value as predictors of program completion or GPA. Downey et al. (2002) also expressed doubts about the usefulness of references in the admission process, giving the rationale that most respondents provide only positive recommendations. Letters of reference contribute little meaningful information about the applicant because people dislike writing undesirable information about an individual; as a result, only positive and vague information is written. The Family Educational Rights and Privacy Act (FERPA), also known as the Buckley Amendment, contributes greatly to this behavior. The law gives students the legal right to review their educational records; therefore, writers are less likely to include undesirable information in a letter of reference. If a student signs a waiver (giving up the right to review the letter of reference) in accordance with paragraph (C) of FERPA, this reluctance may not persist. Although letters of reference are of little value in most circumstances, they are probably most valuable for evaluating marginal candidates (Mavis et al., 2006).
Despite negative perceptions of letters of reference, they remain one of the most commonly requested sources of information used by professional programs in their admission processes (Mavis et al., 2006). The letter of reference serves to gather significant non-cognitive information about the personal qualities of the applicant that are assumed to correlate with academic achievement and clinical performance (Mavis et al., 2006). Information commonly requested on reference forms relates to critical thinking, problem solving, communication skills, integrity, interpersonal skills and empathy. The major pitfall in using letters of reference to predict student success is the similarity among them; they do not allow discrimination among applicants because they focus only on positive attributes. Mavis et al.'s (2006) descriptive study of the intentions of letter writers for applicants to MD programs explains this similarity by describing how individuals who write reference letters approach the task: Mavis et al. (2006) contend that the letter writer perceives the task as one of supporting the applicant rather than evaluating the applicant.
Mavis conducted the study using a survey focused on the letter writers' perception of the task and the strategies they used to handle the request. Surveys were mailed to 106 individuals who wrote letters of recommendation for students accepted to medical school through the Medical Scholars program at the College of Human Medicine at Michigan State University. Twenty attributes were derived from the literature and from the authors' experience of reading letters of reference. Participants were asked to rate, on a five-point scale, the likelihood that each attribute would appear in a letter of reference. The letters were reviewed and analyzed by two raters using attributes agreed upon in advance. The results indicated that most individuals saw their role as supportive rather than evaluative.
Mavis's study reflected the previously described reluctance of writers to judge or report negative information. Many reported that they would provide a letter for an applicant even if they did not know the applicant well, although some reported that they would attempt to gather information about the applicant before writing rather than write a general letter. At best, Mavis et al. suggest, such strategies on the part of the writer result in a vague, nonspecific, generic letter. Academic performance was the most frequent attribute found in both the content analysis and the survey, which dilutes the significance of obtaining non-academic attributes (the intent of the reference letter). There was little evidence to suggest that the information provided in the letters affected admission-related outcomes. This study used aspects of Mavis's research as a foundation for establishing a technique for content analysis of the reference letters because the attributes used in the survey were common to attributes on the PA application. Using two raters to read the references, as Mavis did, improved the reliability of the scores given to the reference letters. A limitation of Mavis's study is that it could not compare letters from rejected applicants' files.
Hughes (2002) published an article in the Journal of the Royal Society of Medicine which addressed how to improve the selection of medical students so as to choose people with the potential to be good doctors. Hughes asserts that character references from tutors or previous employers have the potential to add predictive value, but recognizes their limitations. The reliability of references, the motivation of the referee and data protection legislation all raise concerns: the referee's motivation is uncertain, with loyalty to the student and fear of being compromised by recent data protection legislation as contributing factors. To improve the reliability of reference letters, a medical school in New Zealand adopted a reference system of writing to head teachers with specific questions and requesting a rating of the candidate's qualities. Although no long-term predictive validity study has been done to assess the outcome of this change, the school believes that the system provides valid information that correlates with other non-cognitive information.
Despite efforts to improve this assessment tool, the literature suggests that this form of assessment needs further standardization to be more effective and reliable in predicting student performance. This study expanded on existing work by utilizing methods described in the literature to improve the reliability and validity of the letter of reference as a predictor to be correlated with student performance.
Personal Statement
There is conflicting evidence regarding the reliability and validity of writing samples as predictors of student performance. Salvatori's (2001) review of the literature on the reliability and validity of admission tools used in the health professions addresses written submissions in terms of inter-rater reliability and predictive validity. Youdas et al.'s (1992) investigation demonstrated that personal statements have low to moderate inter-rater reliability: in a retrospective study of 52 physical therapy students, the reliability coefficients for applicants' essays ranged from 0.15 to 0.43. In a study of medical school applicants at McMaster University, the inter-rater reliability coefficient for applicants' autobiographies was 0.45 (Moruzi, 1998). Higher inter-rater reliability coefficients of 0.71 to 0.80 were reported by Brown et al. (1991), and Heale et al. (1989) also reported higher inter-rater reliability coefficients when assessing letters rather than essays.
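Inter-rater agreement of the kind these studies report can be computed in several ways; the sketch below uses Cohen's kappa for two raters assigning categorical rubric scores. The cited studies may have used other coefficients (such as Pearson or intraclass correlations), and the rating data here are invented.

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    expected = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / (n * n)
    return (observed - expected) / (1 - expected)

# Invented 1-5 rubric scores from two raters on eight personal statements
rater_a = [3, 4, 2, 5, 3, 4, 1, 5]
rater_b = [3, 4, 3, 5, 3, 4, 2, 5]
kappa = cohens_kappa(rater_a, rater_b)
```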
The ambivalence about using writing samples as a criterion variable for admission relates to three concerns: the lack of inter-rater reliability among evaluators, which contributes significantly to the low reliability coefficients of applicants' essays (Youdas et al., 1992); the lack of clear guidance to both the writer and the assessor, which leads to poor reliability and validity of essays as predictors of student performance; and uncertainty about the true authorship of personal statements submitted with application forms. Roehrig (1990) demonstrated that an essay written on site and graded on format, neatness, spelling, writing style and quality of content proved to be a good discriminator of student performance in physiotherapy. Schmalz et al. (1990) reached the same conclusion: essays on an assigned topic, rated using similar standards, were predictive of student performance in an occupational therapy program.
In terms of predictive validity, Berchulc et al. (1987) explained 34% of the variance in first-semester GPA of 72 students in an occupational therapy program based on essay scores. Balogun et al. (1986) explained 11% of the variance in the GPA of 83 physiotherapy graduates using the same parameters.
The literature on evaluating written submissions, both personal statements and letters, calls for a more standardized method to enhance reliability. Slater and Boulet's (2001) investigation compared holistic rating and analytic scoring of written performance to determine which technique results in a more accurate assessment of written submissions. Goulden (1994) found analytic scores and holistic ratings to have comparable levels of reliability and validity, with analytic scoring having slightly higher reliability and holistic rating better capturing the essence of an expert judgment. Investigations of the scoring of written submissions were used to support and guide the rating of the personal statements and letters of reference obtained in the physician assistant program admission process. Currently the process does not use a standardized method to score written submissions. The study replicated the use of holistic and analytic rubrics to achieve reliable and valid scores that may serve as predictors of student outcomes.
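The difference between the two scoring techniques can be made concrete: in an analytic rubric each criterion is scored separately and combined by weights, while a holistic rating is a single overall judgment. The criteria and weights below are illustrative placeholders, not the program's actual rubric.

```python
# Illustrative analytic rubric: criterion weights sum to 1.0.
ANALYTIC_RUBRIC = {
    "content": 0.40,
    "organization": 0.25,
    "style": 0.20,
    "mechanics": 0.15,
}

def analytic_score(criterion_scores):
    """Weighted sum of per-criterion scores (each on a 1-5 scale)."""
    return sum(ANALYTIC_RUBRIC[c] * s for c, s in criterion_scores.items())

# One hypothetical personal statement, scored both ways
essay = {"content": 4, "organization": 5, "style": 3, "mechanics": 4}
analytic = analytic_score(essay)  # 4*0.40 + 5*0.25 + 3*0.20 + 4*0.15
holistic = 4                      # single overall 1-5 judgment by the rater
```

Goulden's finding suggests either form can be defensible; the analytic form makes the basis of the score auditable, which aids inter-rater reliability.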
Conclusion
Admission to physician assistant training programs is competitive, with limited space available for a large number of applicants. There is a shortage of PA practitioners; the U.S. Bureau of Labor Statistics predicts a 49% increase in PA jobs by the year 2014.  This demand for PA health practitioners in clinical practice, along with limited space allocated for training, makes the admission selection process a crucial element in selecting qualified applicants who have the academic abilities and personal attributes to succeed.  An increase in the attrition rate and decreases in the pass rate on the NCE further underscore the need for a reliable and valid admission process.
The fact that the physician assistant program resides at a community college complicates the process even more.  Regulations approved by the CCCBOG and published by the COCCC govern the use of prerequisites for courses or admission to programs at community colleges. Criterion-related validity evidence for a prerequisite or set of prerequisite courses used in the admission criteria must exist before those prerequisites can be used (Armstrong, 2000; Phillips, 2002).
The current admission process for the physician assistant program had not
been validated through research.  This study investigated the admission selection
process of a community college-based physician assistant program.  The goals of the
study were as follows:
1. To evaluate the reliability and validity of variables in the admission
criteria as predictors of student performance in the program and on the
NCE.
2. To establish a statistical selection model that will improve program
completion rates and student performance on the NCE.
3. To establish criterion-related validity evidence for the set of prerequisite courses required for admission to the program while maintaining fidelity with state and federal statutes regarding access.
Variables used in the admission selection process consisted of the following: science grade point average, cumulative grade point average, prior work experience, personal statement, and letters of reference; data gathered from these variables were used as predictors of student success.  Review of the literature focused on predictive studies that investigated these variables.  Predictive reliability studies were the focus of this literature review and were useful in placing this study in context with prior research on the reliability and validity of admission variables to predict student success or failure (Phillips, 2002).
GPA was undoubtedly the most common variable used to predict success in completing the program and passing the NCE, which supports the fact that there is a strong reliance on previous scholastic achievement as a predictor of student success. Espen (2006), Zhang (1999), Thieman (2003), Phillips et al. (2002), and Downey et al. (2002) demonstrated correlations ranging from low to high between entry-level GPAcum and student performance in the program and on the NCE.  Evans and Wen (2005) demonstrated a positive correlation between GPA and program performance and pass rate on the NCE as well. Zhang (1999) demonstrated that GPA was a better predictor of class performance than of NCE scores.  Besinque et al.'s (2000) investigation did not show a significant correlation between pre-pharmacy GPA and student performance on the licensure exam, but the study did indicate that academic performance in the program was a stronger predictor of student performance on the licensure exam.  In some studies, science GPA was a better predictor of student performance on the certification examination than the cumulative GPA.
Academic merit alone should not be the sole basis for admission to health professional programs.  The use of non-cognitive data such as interviews, personal statements, and letters of reference results in the selection of better-quality students (Confer, 1995; Turnbull et al., 2003).  The review of literature related to admission practices for health professional programs acknowledged the utility of work experience as a consideration in the selection process, but most studies relating work experience to student performance were found in the MBA admission literature.  Studies correlating work experience with academic performance demonstrated conflicting results.  McClure, Wells, and Bowerman (1986) demonstrated a positive correlation between work experience and academic performance in MBA studies.  In contrast, Dreher and Ryan (2000, 2002, 2004) and Grady et al. (1996) concluded that work experience did not relate to GPA in MBA programs.
Research investigating the utility of reference letters in admission indicated that the predictive validity of personal references was zero (Hughes, 2002) and showed that letters of reference have had little value as predictors of student success.  The rationale for these findings is that such letters are vague, written with the wrong intent, and usually positive because of fear of litigation.  Although a letter of reference is believed to be meaningless, it is still one of the most commonly requested sources of information used by professional programs (Marvis, 2006). Letters of reference were used in the admission selection process for the PA program.  Based on information discovered in the literature review, the tool used to rate the letter of reference in this study must be a device that is reliable and valid.
There is conflicting evidence to support the reliability and validity of writing samples as predictors of student performance.  The problem associated with the utility of personal statements as predictors of student success is low inter-rater reliability.  Youdas et al. (1992) demonstrated low to moderate inter-rater reliability coefficients between raters evaluating personal statements.  To improve rater reliability on scores assigned to personal statements, analytic rubrics can be used.
This literature review has been instrumental in contextualizing this study among relevant studies that will support and guide the investigation of the reliability and validity of predictor measures as effective tools for predicting student success in completing the PA program, as well as student success on the NCE.
CHAPTER III
METHODOLOGY
The purpose of this study is to evaluate the reliability and validity of the
current admission measurements as indicators to predict student performance in
completing the physician assistant program and passing the National Certifying
Examination.
In order to fulfill the purpose of this study the following research questions
were addressed:
1. Do data gathered as part of the admission selection process into the PA program serve as indicators to predict the success or failure of students in completing the PA program?
a. To what extent does the cumulative grade point average (GPAcum)
predict student success in completing the physician assistant
program?
b. To what extent does the science grade point average (GPAsci) predict
student success in completing the program when used as indicators of
success?
c. To what extent do the quality of work experience and hours of work
experience correlate with student success in completing the program
when used as an indicator of student success?
d. To what extent does the reference letter predict student success in the
program when used as an indicator of student success?
e. To what extent does the personal statement predict student success in
the program when used as an indicator of success?
2. Do data gathered as part of the admission selection process into a PA
program serve as an indicator to predict the success or failure of students
in passing the certification examination?
a. To what extent do GPAsci and GPAcum predict student performance
on the NCE?
b. To what extent does the GPAsci predict student success in the basic
science concept and pharmaceutical therapeutic task areas on the
National Certification Examination when used as indicators of
success?
c. To what extent do the quality of work experience and the hours of
work experience correlate with student performance on the National
Certification Examination?
3. Is there a statistically significant relationship between the applicant’s
entry-level skills and the exit-level skills identified on the National
Certification Examination?
Independent variables identified in the study are as follows:  cumulative
GPA, science GPA, work experience, letters of reference, and personal statement.  
Dependent variables are program completion and pass/fail on the NCE.  
Correlational statistics were the best fit for this study because they allow the analysis of relationships among multiple variables.  Descriptive statistics will be used to describe the sample.  Reliability statistics, t-test analyses, and cross-tabulations of variables will be used to evaluate the extent to which criterion behavior patterns can be predicted.  The goal was to discover the correlates of variability in students who succeed, with the intention of designing admission criteria that are reliable and valid.
Participants
A total of 187 students enrolled in the Physician Assistant Program from
1999 to 2005 participated in the study.  This study focused on existing data of the
187 students who were admitted in 1999 (n = 25), 2000 (n = 25), 2001 (n=28), 2002
(n=25), 2003 (n=28), 2004 (n=28), and 2005 (n=28).  Data did not include applicants
not selected for enrollment into the program because program completion data and
data on National Certification Examination performance only exist on individuals
who enrolled into the program.
Data Collection
Archived data from the admission files of students admitted to the program were used to measure the predictor variables in the study.  The entry-level GPAs were obtained from the students' files and verified by the admission office during the admission process.  Non-cognitive variables (communication skills, reference letters, and work experience) will each be assigned a nominal category or ordinal scale.
The National Commission on the Certification of Physician Assistants releases quantitative scores on students' performance in an annual program report.  The sum total of each student's score is released to the PA program along with a quantitative breakdown of student performance in each of the six skill areas outlined on the exam blueprint. The six skill areas are identified as follows: 1) history taking and performing physical examinations; 2) using laboratory and diagnostic studies; 3) formulating the most likely diagnosis; 4) health maintenance; 5) pharmaceutical therapeutics; and 6) basic science concepts.  Candidates must receive a score of 350 to pass the NCE; achieving a score of 350 demonstrates that the candidate has entry-level proficiency for practicing as a physician assistant. Information given in the report will be used to measure the dependent variables in the study.  Success is defined as passing the examination on the first attempt; repeat test-takers' scores will not be considered in the study.
Instrumentation and Procedure for the Independent Variables
This section will describe the instrumentation and procedure that will be used
to establish predictor measures for independent variables utilized in the study.  
Cognitive and non-cognitive independent variables are used in the admission
process.  Cognitive variables are used to assess the academic abilities of the
candidate and are augmented with non-cognitive variables to select students of the
highest quality.  The cumulative grade point average (GPAcum), science grade point
average (GPAsci), work experience, number of clinical hours, entry-level skills,
personal statement and reference letter are the independent variables used in this
research design.
The cognitive independent variables are science and cumulative GPAs.  
Applicants must have a GPAsci of 2.7 and a GPAcum of 2.5. GPAsci was based on
semester credits in anatomy (5 semester credits), physiology (5 semester credits) and
microbiology (4 semester credits).  GPAcum was based on all other prerequisite course work, including English 1A (3 semester credits), sociology (3 semester credits), chemistry (3 semester credits), psychology (3 semester credits), physics (4 semester credits), and college algebra (3 semester credits).

Table 3
Summary of Prerequisite Requirements

Prerequisite Courses     Semester Units     Required grade point average (GPA)
Anatomy                  5
Physiology               5
Microbiology             4                  GPAsci 2.7
English 1A               3
Chemistry 2A             4
College Algebra          3
Physics 10 and 11        3                  GPAcum 2.5


The grade point average was calculated by converting each letter grade to a number (i.e., A = 4.0, B = 3.0, C = 2.0).  The community college grading criteria do not recognize plus or minus grades and do not accept a grade of C- as passing.  Once letter grades were converted to numbers, each number was multiplied by the credit hours earned for the course.  The products were summed, and the total was divided by the total number of credit hours earned.  If a course was repeated, the highest grade obtained for the course was used in calculating the grade point average.
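The calculation described above can be sketched in a few lines of Python.  This is an illustrative helper only, not the program's actual software; the point values for D and F grades are assumptions added to make the example complete.

```python
# Illustrative GPA calculation: letter grades are converted to points,
# weighted by credit hours, and repeated courses keep only the highest
# grade. D and F values are assumptions; the text defines only A, B, C.
GRADE_POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}

def gpa(courses):
    """courses: list of (course_name, letter_grade, credit_hours) tuples."""
    best = {}  # highest (points, hours) seen for each course name
    for name, grade, hours in courses:
        pts = GRADE_POINTS[grade]
        if name not in best or pts > best[name][0]:
            best[name] = (pts, hours)
    total_points = sum(p * h for p, h in best.values())
    total_hours = sum(h for _, h in best.values())
    return total_points / total_hours

science = [("Anatomy", "A", 5), ("Physiology", "B", 5), ("Microbiology", "B", 4)]
print(round(gpa(science), 2))  # (20 + 15 + 12) / 14 = 3.36
```

Note that the repeat rule is implemented by discarding all but the highest grade for a course before the weighted average is taken.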
Non-cognitive data gathered in the admission process included a personal statement (written communication skills and professional commitment and goals), work experience (quality of work experience and number of clinical hours), and reference letters from two individuals who have supervised the applicant in the clinical setting.
Work experience
Work experience is a valuable admission variable which provides
information on the candidates’ entry-level skills and clinical background.  Prior
research showed a strong correlation between years of experience and academic
performance, and that individuals with prior work experience see the relevance and
potential application of materials learned (Adams, 2000; Dreher & Ryan, 2000).
Data were collected on the number of clinical hours, entry-level skills, and the quality of work experience.  Information reported by the applicant was authenticated by the Verification of Occupation Form completed by the employer.  The scores for clinical hours are continuous and range from 2,000 to greater than 72,000 hours.
Table 4
Categories of Clinical Hours
Number of Clinical Hours Code
<5000 1
5001 to 10000 2
10001 to 15000 3
15001 to 20000 4
20001 to 35000 5
>35001 6
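The coding in Table 4 amounts to a simple binning function.  A hypothetical sketch follows; the treatment of exact boundary values (e.g., 5,000 hours itself) is an assumption, since the table lists open intervals.

```python
# Hypothetical binning helper mirroring Table 4: map clinical hours to
# the ordinal codes used in the data set. Boundary handling (e.g. for
# exactly 5000 hours) is assumed; the table itself leaves gaps.
def clinical_hours_code(hours):
    if hours <= 5000:
        return 1
    elif hours <= 10000:
        return 2
    elif hours <= 15000:
        return 3
    elif hours <= 20000:
        return 4
    elif hours <= 35000:
        return 5
    return 6

print(clinical_hours_code(11808))  # the sample mean falls in code 3
```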


The quality of work experience was reported by the applicant and was
verified by the Verification of Occupation Form in the application and the
background check of the candidate that was obtained prior to admission. Each type
of job title was coded; in some instances clinical experiences requiring similar
educational background or similar skills to perform job duties were grouped together
to increase the cell size.  Each code put into the data system referenced a specific job
title (e.g. registered nurse 001; licensed vocational nurse 002, etc.).
Technical skills reported by the applicant reflected the quality of the job experience from previous job responsibilities.  A list of twenty-two skills was found on the application. The twenty-two skills are divided into seven skill sets and classified as entry-level skills.  The entry-level skills were aligned with the seven task areas assessed on the Physician Assistant National Certifying Examination (PANCE) blueprint.  A correlation of entry-level skills with exit-level skills on the PANCE should predict student performance in these task areas on the PANCE.
Table 2 (from page 46) is presented again for the reader's reference:
Table 2
Correlation between Entry-Level Skills and Exit-Level Skills
Entry-Level Skills Exit Skills
Skill 1: Taking Medical History/Physical
Examination
PANCE 1: History Taking and
Performing Physical Examinations
Skill 2: EKG interpretation, bacterial
culture interpretation
PANCE 2: Using Laboratory and
Diagnostic Studies
Skill 3: Diagnosis PANCE 3: Formulating Most Likely
Diagnosis
Skill 4: Patient Education PANCE 4: Health Maintenance
Skill 5: Vital signs, first aid,
cardiopulmonary resuscitation, suturing,
suture removal, Splinting and/or casting,
physical therapy, gastric lavage
PANCE 5: Clinical Intervention
Skill 6: Injections/Respiratory therapy PANCE 6: Pharmaceutical
Therapeutics
Skill 7: Microscopic evaluation of:
blood, urine, gram-stained specimens
PANCE 7: Basic Science Concepts


Personal statement
The purpose of the personal statement as a variable in the admission process
was to assess the applicant’s written communication skills, the applicant’s
professional goal and the applicant’s commitment to the profession.  An analytic
rubric scoring system was used to evaluate the quality of the essay submitted in the application using the above criteria (Appendix A).  Each criterion was scored on a scale of zero to three, the raters' scores for each criterion were averaged, and the criterion averages were summed; the maximum score for the essay rubric is nine.  Rater reliability of the readers was determined.
Three categories for rating the essay were as follows: fair, good, excellent. The category values were measured on an ordinal scale, with ordinal scores used to represent the categories (Table 5).

Table 5
Rating Scale for Personal Statement
Category Ordinal scale Ordinal score
Fair 0 - 3 1
Good 4 - 6 2
Excellent 7 - 9 3
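Read together with Table 5, the rubric arithmetic can be sketched as follows.  This is a hypothetical illustration: it assumes the two raters' scores per criterion are averaged and the three criterion averages summed (which yields the stated maximum of nine), and the handling of totals falling between the Table 5 bands is an assumption.

```python
# Hypothetical essay-rubric scoring: three criteria scored 0-3 by each
# of two raters, raters averaged per criterion, averages summed (max 9),
# then mapped onto the Table 5 categories. Band boundaries are assumed.
def essay_score(rater1, rater2):
    """rater1, rater2: per-criterion scores, e.g. (writing, goals, commitment)."""
    total = sum((a + b) / 2 for a, b in zip(rater1, rater2))
    if total <= 3:
        category = "fair"        # Table 5: 0 - 3
    elif total <= 6:
        category = "good"        # Table 5: 4 - 6
    else:
        category = "excellent"   # Table 5: 7 - 9
    return total, category

print(essay_score((3, 2, 2), (3, 3, 2)))  # (7.5, 'excellent')
```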


Letters of Reference
Two letters of reference from individuals who have supervised the applicant
in the clinical setting are required of each applicant.   The Letter of Reference is
designed to evaluate the applicant’s strengths and weaknesses in the following
domains: maturity; emotional stability; learning ability; interpersonal skills; and
clinical skills. A holistic rubric was utilized to rate each letter of reference.  The
rubric was designed as a descriptive rating scale in which one score was given for
each letter of reference.  Two PA educators identified as experts (each with ten or more years of experience evaluating letters of reference) participated as evaluators.  The two independent ratings of each letter were averaged to get a “true” expert holistic rating (Slater, 2001).  The “true” expert holistic ratings of the letters in the file were then summed and averaged again to achieve a score for the letters of reference.
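The averaging just described can be sketched as below; the scores are hypothetical and the helper is illustrative, not the instrument itself.

```python
# Sketch of the reference-letter score: each letter's two independent
# ratings are averaged into a "true" expert holistic rating, and the
# per-letter ratings are then averaged into the applicant's score.
def reference_score(ratings_by_letter):
    """ratings_by_letter: [(rater1, rater2), ...], one tuple per letter."""
    true_ratings = [(r1 + r2) / 2 for r1, r2 in ratings_by_letter]
    return sum(true_ratings) / len(true_ratings)

# applicant whose letter a was rated (3, 2) and letter b was rated (2, 2)
print(reference_score([(3, 2), (2, 2)]))  # (2.5 + 2.0) / 2 = 2.25
```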
A holistic rubric was designed to score each letter of reference in the
applicant’s file (Appendix B).  Scores range from 3 to 1, corresponding to the strength of the individual’s personal characteristics as described in the letters of reference. A score of “3” represents an individual whose character is described as outstanding in all domains addressed in the letter; descriptors such as excellent, exceptional, great, outstanding, superior, or wonderful are used to describe the applicant’s abilities in the five domains. A score of “2” represents an individual whose overall character is described as good; characteristics considered good are referenced from words such as skillful, enjoyable, good quality, and fine.  A score of “1” is the lowest score on the rating form and reflects individuals who receive an overall rating of fair; descriptors considered fair include adequate, average, reasonable, decent, and moderately good.
The score achieved from the rating will be referenced to a category as defined
in the table below.
Table 6
Ordinal Scores for Reference Form
Category Ordinal Score
Fair 1
Good  2
Outstanding 3


Instrumentation and Procedure for Dependent Variables
There are two major dependent variables in the study:  program completion
and scores on the PANCE.  Program completion was measured as a yes or no; a
score of 1 indicated yes for completing the program and a score of 0 indicated a no
for not completing the program. The NCCPA releases PANCE scores of all program
graduates at the time the examination is scored.  Data reported by NCCPA include
the student’s overall score on the licensure examination, the student’s national
ranking on the exam, and a score distribution in the six task categories referred to as
exit skills in this study.
The scores produced by the NCCPA examinations are validated and the
content of the exam is regularly updated to appropriately reflect the current scope of
practice for the profession.  The scores are calculated using multiple statistical programs and multiple stages.  The exam is scored using two independent and different scoring systems.  The first scoring system compares each response to the answer key, assigning a 1 to correct responses and a 0 to incorrect responses; the sum of all 1's becomes the raw score.  The exam is scored a second time through a different scoring system, generating a second raw score.  The two raw scores are then compared, and if they match they are sent to the next phase of scoring.  The second phase uses a statistical scoring program based on the Rasch model.  This program uses the difficulties of the test items and the number of correct responses to determine the examinee's proficiency measure.  The rationale for this process is that it gives more credit to individuals who took a harder version of the exam and prevents an unfair advantage to individuals who took the easier version.
Once the proficiency measure is obtained, the scores are re-scaled so that they can be more easily interpreted.  A base reference group serves as a benchmark for re-scaling; the reference group has a mean score of 500 with a standard deviation of 100.  Scores usually range from 200 to 800; occasionally individuals score above 800.   The proficiency level required to pass the test is determined by the “cutscore”.  The minimum passing score on the examination is 350.  Individuals need 55% - 65% correct responses to pass the test, depending on the difficulty of the exam version.
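The two-step process described above can be sketched as follows.  The formulas and data are assumptions for illustration, not NCCPA's actual scoring software; the sketch shows only the duplicate raw scoring and a linear re-scaling onto the mean-500, SD-100 reporting scale.

```python
# Illustrative sketch of the scoring steps described above; not NCCPA's
# actual software. Two independent raw-scoring passes must agree, and a
# Rasch proficiency measure is then re-scaled against the base
# reference group (mean 500, standard deviation 100).
def raw_score_a(responses, key):
    return sum(1 for r, k in zip(responses, key) if r == k)

def raw_score_b(responses, key):  # deliberately different implementation
    return sum(int(r == k) for r, k in zip(responses, key))

def scaled_score(proficiency, ref_mean, ref_sd):
    # linear transform onto the reporting scale; the exact transform
    # used by NCCPA is an assumption here
    return 500 + 100 * (proficiency - ref_mean) / ref_sd

key = ["A", "C", "B", "D"]
answers = ["A", "C", "D", "D"]
assert raw_score_a(answers, key) == raw_score_b(answers, key)  # passes must match
print(raw_score_a(answers, key))     # 3 correct responses
print(scaled_score(-1.2, 0.0, 0.8))  # 1.5 SD below the reference mean: 350.0
```

Under this transform, a proficiency 1.5 reference standard deviations below the mean lands exactly on the 350 cutscore.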
Data Analysis
Data will be coded into the SPSS computer program, version 16.0.  Data analysis will occur in two phases.  The initial phase of data analysis will be a univariate correlation between predictor variable scores and criterion scores (Table 7) to determine correlation coefficients.
Table 7
Correlation between Predictor Measures and Criterion Measures in the study
Outcomes (dependent variables)
 Completion Pance1 Pance2 Pance3 Pance4 Pance5 Pance6 P/F
Predictors
(IV)
       
GPAcum          
GPAsci          
Experience          
#hours          
Essay          
Reference          
Skills          


Correlational researchers recommend a cross-validation check to determine the extent of shrinkage in the initial set of correlation coefficients before using them in prediction situations; making predictions on the basis of correlation coefficients derived from a single sample is uncertain (Gall et al., pp. 346-347).  If a correlation drops to a non-significant level, the variable should be dropped.  Because cross-validation was not performed in this study, this is a limitation of the study.
To determine the magnitude of the relationship between two variables, bivariate correlational statistics were incorporated into the study.  The variables used in this study are primarily continuous; therefore, the most appropriate bivariate technique is the product-moment correlation.  The correlation between the predictor variable and the prediction criterion is expected to be linear.  If the study shows a non-linear correlation, then a correlation ratio, which fits a curved line, may better describe the relationship between the two variables.
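For reference, the product-moment correlation named above can be computed from first principles.  The study itself used SPSS for this step; the data below are purely illustrative, not study data.

```python
# First-principles Pearson product-moment correlation between one
# predictor and one criterion; illustrative values, not study data.
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

gpa_sci = [3.1, 3.4, 3.8, 2.9, 3.6]   # hypothetical GPAsci values
pance   = [420, 480, 560, 390, 510]   # hypothetical PANCE scores
print(round(pearson_r(gpa_sci, pance), 3))  # 0.997 for these made-up values
```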
Multiple regression statistics may be used to further refine the search for predictor variables in the admission process.  Multiple linear regression, a type of multivariate correlational statistic, is suggested for this phase of the study because it is best for determining the correlation between a criterion variable and a set of predictor variables when the correlation is expected to be linear.  If the correlation is not linear, then nonlinear regression will be used.
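A multiple linear regression of the kind suggested above can be sketched in pure Python by solving the normal equations.  This is a didactic illustration with made-up data, not the study's SPSS analysis.

```python
# Minimal sketch of multiple linear regression via the normal equations
# X'X b = X'y, with Gaussian elimination; illustrative data only.
def fit_linear(X, y):
    """X: rows of predictor values; returns [intercept, b1, b2, ...]."""
    rows = [[1.0] + list(r) for r in X]          # prepend intercept column
    k = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    for col in range(k):                          # elimination with pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * k                              # back substitution
    for i in reversed(range(k)):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, k))) / A[i][i]
    return coef

# criterion built as y = 100 + 50*x1 + 2*x2 exactly, so the fit
# should recover those coefficients
X = [(3.0, 1.0), (3.5, 2.0), (2.8, 0.5), (3.9, 3.0), (3.2, 1.5)]
y = [100 + 50 * x1 + 2 * x2 for x1, x2 in X]
print([round(c, 6) for c in fit_linear(X, y)])  # approximately [100.0, 50.0, 2.0]
```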
Conclusion
The purpose of this research is to determine whether data gathered in the admission selection process of a community college-based physician assistant program serve as indicators to predict the success or failure of students in completing the PA program and passing the National Certification Examination.  Existing data from applicants' files and data reported by the National Commission on the Certification of Physician Assistants on student performance on the NCE were analyzed in this study.  Expert raters utilized holistic and analytic rubrics to rate the letters of reference and the personal statement to improve the reliability of data drawn from written assessments. Correlations between predictor variables and prediction criteria were computed to determine the correlation coefficients indicating the significance of the relationships between the variables and the criteria. Bivariate statistics were used to show the magnitude of the relationship between two variables, and multiple regression statistics were used to further refine the search for predictor variables.  Findings in the study provided reliable and valid predictors of student success that will guide the admission selection process for the physician assistant program. The USC UPIRB approval number is UP-08-00114.
CHAPTER IV
RESULTS
This chapter provides the results of the data analyzed for this study.  The
purpose of the study was three-fold: to evaluate the reliability and predictive validity of the admission variables in projecting student success in a community college-based physician assistant program; to determine whether data gathered during the admission selection process served as indicators to predict the success or failure of students in passing the National Certification Examination (NCE); and to identify statistically significant variables that reliably and validly predict the performance of students in the program and on the NCE.  The
research involved the study of existing data, documents and records of 170 students
who enrolled in the physician assistant program from 1999 to 2005.  Information was
recorded in such a manner that subjects could not be identified directly or through
identifiers linked to the subjects.  Information was recorded using codes that the
investigator could not link back to the individual applicant.  The Institutional Review
Board (IRB) determined that the project met the requirements outlined in 45 CFR
46.101 Category (4) and qualified the project for exemption from IRB review.  The
following UPIRB# was assigned: UP-08-00114.
Independent variables were defined as science grade point average (GPAsci),
cumulative grade point average (GPAcum), work experience (i.e. hours, skills),
letters of reference and personal statement.  Dependent variables were defined as
program completion (COMP) and student performance on the NCE.  Statistical
analyses were performed using SPSS software, version 16.0.
The research was performed in two phases.  The first phase of the research
involved descriptive statistics that provided quantitative descriptions of the data used
in this study.  The second phase involved the use of correlational statistics to study
the relationship between predictor measures and criterion measures defined in the
study.  Findings are presented for each research question, along with a discussion of the significance of the findings for that question.
General Descriptive Statistics
Data gathered from the admission files included the science grade point
average (anatomy, physiology and microbiology), the cumulative grade point
average (English, physics, psychology, sociology, chemistry), and work experience
(number of clinical hours, type of experience and entry-level skills), letters of
reference and personal statement. The transcript evaluator at the college evaluated all
files for course equivalencies and calculated the grade point averages for all
admission files. The letters of reference and the personal statement were read by two raters; a reliability coefficient was established to ensure inter-rater reliability of the scoring, which was found to be statistically significant for both written documents.  To guide the reader through the descriptive statistics that follow, the table below lists the variables used in the study, their abbreviations, and their definitions.
Table 8
List of Variables Abbreviations and Definitions
Abbreviation Definition
Predictor Measures  
AVGPER Personal Statement Mean Score
CLINHRS Clinical Hours Worked Prior to Admission
GPAsci Science Grade Point Average
GPAcum Cumulative Grade Point Average
EXPTYP Type of Clinical Experience
REFSUM Letter of Reference Mean Score
SKILLS Entry-Level Skills
Criterion Measures  
COMP Program Completion
PANCE Exit-Level Skills
NCEP National Certification Examination Performance


Descriptive Statistics for the Predictor Measures (Independent Variables)
The independent variables used in the research included GPAsci, GPAcum, CLINHRS, EXPTYP, SKILLS, REFSUM, and AVGPER (Table 8).  The quantitative statistics for each of these measures are outlined below.  Descriptive statistics indicated that GPAsci ranged from 2.44 to 4.0 with a mean of 3.44 and a standard deviation of 0.406.  The cumulative GPA ranged from 2.12 to 4.0 with a mean of 3.21 and a standard deviation of 0.399.  The mean GPAsci and GPAcum are well above the required prerequisite grade point averages of 2.70 and 2.5, respectively.
Table 9
Descriptive Statistics of Science Grade Point Average and Cumulative Grade Point
Average
N Mean Std. Deviation
GPAsci 170 3.44 .406
GPAcum 170 3.21 .399


Sixteen different job categories were identified among the 170 students who
enrolled in the program from 1999 to 2005.  Each type of clinical experience was
coded; in some instances clinical experiences requiring similar educational
background or similar skills to perform job duties were grouped together.  Table 10
provides quantitative descriptive statistics for clinical experience for this sample
population. Emergency medical technicians and medical assistants/certified nursing assistants enrolled at the greatest frequency, 21.9% and 17.8%, respectively.
Table 12 outlines the descriptive statistics for clinical hours.  Clinical hours ranged from 1,630 to 72,000; the mean was 11,808 clinical hours with a standard deviation of 10,810 hours (Table 11).  The majority of the students completed fewer than 5,000 clinical hours prior to enrolling in the program. The hypothesis related to this predictor measure was that students who completed more clinical hours would be more likely to complete the program and to perform better on the National Certification Examination.
Table 10
Descriptive Statistics for Types of Clinical Experience

Type of Clinical Experience Frequency
Valid
Percent
1 Medical Assistant/Certified Nursing Assistant 30 17.8
2 Psychiatric Technician 8 4.7
3 Licensed Vocational Nurse 6 3.6
4 Registered Nurse 7 4.1
5 Corpsman 8 4.7
6 Emergency Medical Technician 37 21.9
7 Paramedic 11 6.5
8 Respiratory Therapist 12 7.1
9 Physical Therapy Assistant/Chiropractor Assistant/Rehabilitation
Technician/ Occupational Therapy Assistant
11 6.5
10 Foreign Medical Doctor 3 1.8
11 Optometry Assistant/Dental Assistant 3 1.8
12 Laboratory Technician/Surgical Technician 9 5.3
13 Phlebotomy 6 3.6
14 Radiology Technician 13 7.7
15 Other: Massage Therapy, Clinical Health Educator/EKG Technician 2 1.2
16 Chiropractor/Acupuncture 3 1.8
Total  169 100.0


Table 11
Mean and Standard Deviation of Clinical Hours
N Mean Std. Deviation
CLINHRS 169 11,808 10,810

Table 12
Descriptive Statistics for Clinical Hours
Category Clinical Hours Frequency Valid Percent
1 <5000 57 33.7
2 5001 - 10000 39 23.1
3 10001 - 15000 27 16.0
4 15001 – 20000 23 13.6
5 20001 – 35000 17 10.1
6 >35001 6 3.6
Total  169 100.0


The descriptive statistics for entry-level skills are summarized in Table 13.
Skills that the applicant identified on the application were verified against the
Verification of Occupation form, which was completed by the employer or a
representative thereof.  Skills 1 through 7 are described in Table 13, and the
percentage distribution of each skill is indicated.  Skill 1 (history taking and
physical exam), Skill 4 (health maintenance), and Skill 5 (clinical interventions)
were identified with the highest frequency. Since experience with “hands-on” patient
care is a prerequisite for admission, it is not surprising that these three skills were
the most often identified.
Table 13
Descriptive Statistics of the Entry-Level Skills
 N Percentage
SKILL1 History Taking/Physical Exam 145 85.3%
SKILL2 Using Laboratory & Diagnostic 51 30%
SKILL3 Formulating Most Likely Diagnosis 9 5.3%
SKILL4 Health Maintenance 147 86.5%
SKILL5 Clinical Intervention 168 99.4%
SKILL6 Pharmaceutical Therapeutics 46 27%
SKILL7 Basic Science Concepts 14 8.2%


The tables below outline descriptive statistics for letter of reference scores
(LETTER) and personal statement scores (PERSONAL).  The ratings for the letters of
reference were determined by two raters; therefore, inter-rater reliability was
computed to determine whether the rating score was dependable.  The following steps
were taken to validate the rating score:  First, one letter of reference was labeled
“letter a” and the second letter was labeled “letter b”; each letter was read by
rater 1 and rater 2.  Then, to assess rater reliability, the reliability coefficient for
each letter was computed and found to be low, but satisfactory for research purposes.
The reliability coefficient was 0.641 for “letter a” and 0.641 for “letter b” as well.
Therefore, the mean score of the reference letters (refsum) was calculated and used
as the reference letter rating score.  The scale for the reference letter ranged from
zero to three: one meant the candidate’s recommendation rating was fair, two meant
it was good, and three meant it was outstanding. The reference scores for this
population ranged from 1.25 to 3.0, with a mean of 2.77, a standard deviation of
0.306, and a mode of 3.0. The majority of the candidates (57.5%) were rated as
outstanding.
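The consistency check described above can be sketched in code. The following is a minimal Python reimplementation of Cronbach's alpha (the statistic SPSS reports in Tables 14 and 15); the two-rater score matrix shown is hypothetical illustration data, not the study's actual ratings.

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a score matrix: rows = letters rated, columns = raters."""
    k = len(scores[0])  # number of raters (treated as "items")

    def sample_var(xs):
        # Sample variance with ddof = 1, matching SPSS
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    rater_vars = [sample_var([row[j] for row in scores]) for j in range(k)]
    total_var = sample_var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(rater_vars) / total_var)

# Hypothetical rater 1 / rater 2 scores for five "letter a" documents (0-3 scale)
letter_a = [[3, 3], [2, 3], [3, 2], [1, 2], [3, 3]]
alpha = cronbach_alpha(letter_a)
```

Run on the program's full set of paired ratings, this computation would correspond to the 0.641 coefficients reported in Tables 14 and 15.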

Table 14
Rater Reliability of Letter of Reference “a”
Cronbach's Alpha N of Items
.641 2


Table 15
Rater Reliability of Letter of Reference “b”
Cronbach's Alpha N of Items
.641 2


Table 16
Descriptive Statistics for Reference Letters
N Mean Median Mode Std. Deviation
Valid 167 2.77 3.0 3.0 .306
Missing 3    
Table 17
Frequency of Reference Letter Scores
Score Frequency Valid Percent
1.75 1 .6
2 7 4.2
2.25 11 6.6
2.5 32 19.2
2.75 20 12.0
3 96 57.5
Total (valid)   167   100.0
Total           170


A reliability coefficient for the personal statement was determined prior to
computing the descriptive statistics for the personal statement.  As a reminder, the
personal statement was used to explore the applicant’s professional goal and the
applicant’s commitment to the profession, and to assess the applicant’s written
communication skills.  Each criterion used to assess the personal statement was
assigned a letter, a through c: “a” referred to the professional goal, “b” referred to
the applicant’s commitment to the profession, and “c” referred to the applicant’s
written communication skills.  Each criterion was rated on a scale of zero to three,
with a maximum total score of nine.  Each statement was read by two raters and
assigned a rating.  The reliability coefficients for the ratings were then determined.
First, the rater reliability coefficient for each criterion rating was established
independently; the reliability coefficients for criteria a, b, and c were 0.753, 0.753,
and 0.656, respectively, which were statistically significant.  Then the reliability
coefficients of the sum and the mean of the six ratings were established.  The
reliability coefficient for the mean of the ratings was 0.776 (Table 21), which
indicated the ratings were moderately reliable.  Table 22 shows the item-total
correlations.

Table 18
Reliability Statistics for Rating of Professional Goal
Cronbach's Alpha N of Items
.753 2


Table 19
Reliability Statistics for Rating of Commitment to the Profession
Cronbach's Alpha N of Items
.753 2


Table 20
Reliability Statistics for Rating of Written Communication Skills
Cronbach's Alpha N of Items
.656 2
Table 21
Reliability Statistics of the Personal Statement
Cronbach's Alpha
Cronbach's Alpha Based on
Standardized Items
N of Items
.775 .776 3

Table 22
Item-Total Statistics of Rater Reliability of the Six Ratings

        Scale Mean if   Scale Variance    Corrected Item-     Squared Multiple   Cronbach's Alpha
        Item Deleted    if Item Deleted   Total Correlation   Correlation        if Item Deleted
comm    3.1476          1.392             .641                .439               .669
goal    2.9488          1.508             .687                .478               .608
write   2.3614          1.981             .529                .286               .783


Once rater reliability was established, descriptive statistics were computed
for the ratings.  The mean rating score was 1.41 with a standard deviation of
.605.  The median was 1.33 and the mode was 1.0.

Table 23
Descriptive Statistics of Written Documents
N Mean Std. Deviation
REFSUM 166 2.77 .306
AVGPER 166 1.41 .605
Descriptive Statistics for the Criterion Measures (Dependent Variables)
The criterion measures used for this research included program completion,
National Certification Examination performance, and performance on the Physician
Assistant National Certification Examination in the seven task areas (PANCE 1–7).
Table 24 summarizes descriptive statistics for program completion (COMP).
One hundred and seventy students enrolled in the program from 1999 to 2005;
eighteen did not complete the program.  The overall program attrition rate for
students who enrolled in the program from 1999 to 2005 was 10.7%.

Table 24
Descriptive Statistics for Program Completion
Completion      Frequency   Valid Percent
Fail            18          10.7
Pass            151         89.3
Total (valid)   169         100.0
Missing         1
Total           170


Table 25 outlines descriptive statistics for National Certification
Examination performance (NCEP) for this study population. Scores on the NCE
ranged from 200 to 864, with a mean score of 485 and a standard deviation of
115.652.  The national mean for all first-time takers was 498.   Ninety-one percent of
the 151 graduates passed the NCE.

Table 25
Descriptive Statistics for National Certification Examination Performance
N Mean Std. Deviation Valid Percent
NCEP 151   89.3%
Passed 137 485 115.652 90.7%
Failed 14   9.3%
Missing 59    


Table 26 outlines descriptive statistics of student performance on the exit-
level skill task areas (PANCE 1–7) of the NCE.  The mean percentage scores on
PANCE 1–7 ranged from 70.62% to 73.95%.   The National Commission on
Certification of Physician Assistants, Inc. (NCCPA) did not report exit-skill results
for students graduating in 2001, which explains the majority of missing data for
PANCE 1 through PANCE 5 and PANCE 7; the remaining missing data reflected
individuals who did not complete the program. The NCCPA reported PANCE 6
(clinical therapy) for 2002 and 2003 only.
Table 26
Descriptive Statistics of Student Performance on Exit-Skill on the National
Certification Examination
Valid Missing Mean Standard Deviation
PANCE 1 128 42 71.91% 1.033
PANCE 2 128 42 71.21% 1.232
PANCE 3 128 42 73.95% 1.112
PANCE 4 128 42 71.60% 1.161
PANCE 5 128 42 73.67% 1.054
PANCE 6 44 126 71.40% 9.08
PANCE 7 128 42 70.62% 1.258


Correlational Statistics
Phase I of the study focused on correlational statistics specific to the
proposed research questions.  An observed probability of p < .05 was considered
statistically significant for this study.  Various correlational statistical techniques
were incorporated into the study to assess the relationship between predictor
variables and criterion variables: t-test analyses, cross-tabulations, and bivariate
statistics were used for this purpose.
Research Question One
Do data gathered as part of the admission selection process into the PA
program serve as an indicator to predict the success or failure of students in the
completion of the PA program?
Correlation between predictor measures and program completion
Predictor measures specific to this question were science grade point average
(GPAsci), cumulative grade point average (GPAcum), letters of reference (Refsum),
personal statement (AVGPER), work experience (EXPTYP and CLINHRS), and
entry-level skills (SKILLS). Independent sample t-test analyses were used to
examine the relationships between the continuous variables (GPAsci, GPAcum,
Refsum, and AVGPER) and program completion.
Supplemental Questions to Question One
To what extent does the cumulative grade point average predict student
success in completing the program?
Independent sample t-test analysis was performed to determine if there was a
significant correlation between GPAcum and program completion. An observed
probability of p = .248 was determined, which indicated that no statistically
significant correlation exists between cumulative grade point average and program
completion.  However, the cumulative grade point average was lower for students
who dropped out of the program.
To what extent does science grade point average predict student success in
completing the program?
Independent sample t-test analysis was performed to determine if there was a
significant correlation between GPAsci and program completion. The independent
sample t-test analysis revealed a nearly significant (2-tailed) observed probability of
p = .052.  “Because an analysis of statistical significance depends on sample size to
a great degree” (Hocevar, 2007), an analysis of practical significance (effect size)
was conducted. In correlational research the group difference can be used to
calculate the effect size, which was done for this analysis.  The difference between
the means of program completers and program non-completers was used to calculate
the index of effect size, which was 0.46. Using Cohen’s (1988) categorization of
effect sizes, this effect size was categorized as moderate.  An effect size of .5 is
considered “moderate” in most scientific circles, and this is a common standard for
assigning practical significance.
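The effect-size calculation can be illustrated from the group statistics reported in Table 27. The sketch below uses the pooled-standard-deviation form of Cohen's d; because the exact value depends on which standard deviation is used in the denominator, the result here (roughly 0.42) will not match the reported 0.46 exactly.

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d: standardized mean difference using the pooled standard deviation."""
    pooled_var = ((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(pooled_var)

# GPAsci group statistics from Table 27: completers (1) vs. non-completers (0)
d = cohens_d(3.46, 0.412, 151, 3.29, 0.324, 18)
```

By Cohen's (1988) categories, a value in this range falls between a small (0.2) and a moderate (0.5) effect.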
To what extent does the personal statement predict student success in
completing the program?
Independent sample t-test analysis was performed to determine if there was a
significant correlation between personal statement ratings and program completion.
An observed probability of p = .337 was determined, which indicated that no
statistically significant correlation exists between the personal statements and
program completion.  However, the direction of the means for completers versus
non-completers was as expected.
To what extent do the letters of reference predict student success in
completing the program?
Independent sample t-test analysis was performed to determine if there was a
significant correlation between letters of reference ratings and program completion.
The independent sample t-test analysis revealed a nearly significant (2-tailed)
observed probability of p = .072. Similar consideration was given when interpreting
these findings as was given to the interpretation of the GPAsci findings.  The index
of effect size was calculated at 0.57, which was categorized as “moderate”.  As
indicated above, an effect size of .5 is considered “moderate” in most scientific
circles and is a common standard for practical significance (Cohen, 1988).
Therefore, it can be determined that the relationship between the letters of reference
and program completion has practical significance.

Table 27
t-Test Group Statistics for Predictor Measures and Program Completion
         COMP   N     Mean     Std. Deviation   Std. Error Mean
GPAsci   0      18    3.29     .324             .076
         1      151   3.46     .412             .034
GPAcum   0      18    3.1089   .41412           .09761
         1      151   3.2312   .39740           .03234
Refsum   0      16    2.5938   .40697           .10174
         1      151   2.7947   .28880           .02350
AVGPER   0      16    1.2500   .68853           .17213
         1      150   1.4267   .59500           .04858
0 = program non-completers, 1 = program completers.
Table 28
Independent Sample t-Test Analyses for Predictor Measures and Program
Completion
                                      Levene's Test for
                                      Equality of Variances   t-test for Equality of Means
                                      F       Sig.            t        df       Sig. (2-tailed)
GPAsci   Equal variances assumed      3.588   .060            -1.695   167      .092
         Equal variances not assumed                          -2.046   24.074   .052
GPAcum   Equal variances assumed      .012    .912            -1.229   167      .221
         Equal variances not assumed                          -1.189   20.909   .248
AVGPER   Equal variances assumed      1.341   .248            -1.112   164      .268
         Equal variances not assumed                          -.988    17.474   .337
refsum   Equal variances assumed      5.764   .017            -2.535   165      .012
         Equal variances not assumed                          -1.924   16.639   .072


To what extent do the type of work experience and hours of work experience
correlate with student success in completing the program?
Cross-tabulation analyses of type of work experience (EXPTYP) and hours
of work experience (CLINHRS) with program completion (COMP) were performed
to determine the relationship between these variables and program completion.
First, the chi-square test was used to determine if a significant correlation exists
between work experience and program completion (Table 29).  A Pearson’s chi-
square of 19.83 (p = .178) was nonsignificant.  Chi-square tests the underlying
probabilities in each cell, and when the expected cell frequencies fall below five the
probabilities cannot be estimated with sufficient precision.  Such was the case for
this chi-square test: the minimum expected cell count was .21, and 62.5% of the
cells had expected counts of less than 5, which affects the significance of the results.
Therefore, the finding was not estimated with sufficient precision.
Second, the chi-square test was used to determine if a significant
correlation exists between clinical hours and program completion.  Clinical hours
was an ordinal variable, so the chi-square linear-by-linear association was used to
determine if a significant correlation exists.  The chi-square linear-by-linear
association demonstrated a nonsignificant correlation, with an observed probability
of .701 (Table 30); but, as in the previous chi-square test, a substantial percentage
(41.7%) of cells had expected cell counts of less than 5; therefore, the probabilities
cannot be estimated with sufficient precision.

Table 29
Chi-Square Test for Type of Clinical Experience and Program Completion

                              Value     df   Asymp. Sig. (2-sided)
Pearson Chi-Square            19.832a   15   .178
Likelihood Ratio              21.735    15   .115
Linear-by-Linear Association  1.780     1    .182
N of Valid Cases              169
a. 20 cells (62.5%) have expected count less than 5. The minimum expected count is .21.

Table 30
Chi-Square Test for Clinical Hours and Program Completion

                              Value    df   Asymp. Sig. (2-sided)
Pearson Chi-Square            1.804a   5    .876
Likelihood Ratio              2.023    5    .846
Linear-by-Linear Association  .148     1    .701
N of Valid Cases              169
a. 5 cells (41.7%) have expected count less than 5. The minimum expected count is .64.


Additional Information
Further cross-tabulations revealed additional information about the
relationship between these predictor measures and program completion.  Several
types of clinical experience demonstrated markedly higher fail rates than others:
medical assistant/certified nursing assistant, chiropractor/acupuncturist, optometry
assistant/dental assistant, and laboratory technician/surgical technician had fail rates
of 26.7%, 33.3%, 33.3%, and 22.2%, respectively (see tables in Appendix C).
Groups (with N > 6) that had 100% completion rates were registered nurse,
corpsman, paramedic, physical therapy assistant/chiropractor assistant/rehabilitation
technician/occupational therapy assistant, and phlebotomy (see Appendix C for
details).
Discussion of Question One
I anticipated that the predictor measures gathered as part of the admission
selection process would not serve as indicators to predict the success and failure
of students in completing the program.  However, analyses of the data related to
research question one and its supplemental questions indicated that science grade
point average, letters of reference, and work experience showed indications of
serving as valid predictors of student success and/or failure in completing the
program.  Of the predictor measures evaluated in this question, science grade point
average was the best predictor measure used in the admission selection process.
Surprisingly, findings indicated that letters of reference were a potentially useful
indicator for predicting student success and failure in completing the program.
Research Question Two
Do data gathered as part of the admission selection process into a PA program
serve as an indicator to predict the success or failure of students in passing the
National Certification Examination?
Statistical analyses similar to those used to explore the predictor measures in
question one were performed to determine the relationship between the same
predictor measures and performance on the National Certification Examination.
Independent sample t-tests, cross-tabulations, and bivariate correlations were used
to determine the relationship between predictor measures and performance on the
NCE.  Each supplemental question to research question two will be investigated
independently to establish a relationship with the criterion measure.
Supplemental Questions to Question Two
To what extent do GPAsci and GPAcum predict student performance on the
NCE?
Independent sample t-test analyses were performed to determine the
significance of GPAsci and GPAcum in predicting student performance on the NCE.
The correlation between NCEP and GPAsci demonstrated a statistically significant
observed probability of p = .009, but the correlation between NCEP and GPAcum
was not significant (p = .332) (Table 31). The difference between the number of
individuals who passed the NCE (N = 142) and the number of individuals who
failed the NCE (N = 9) was very large; therefore, the effect size was calculated to
determine practical significance.  An effect size of 0.80 was computed for GPAsci,
which is considered large using Cohen’s categories.  The effect size calculated for
GPAcum was only 0.24, a small effect size which demonstrated little practical
significance.
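The "equal variances not assumed" row of the t-test output can be reproduced from the summary statistics in Table 31. The sketch below computes Welch's t statistic and its adjusted degrees of freedom; small discrepancies from the reported values reflect rounding of the means and standard deviations in the table.

```python
import math

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Welch's t statistic and degrees of freedom (equal variances not assumed)."""
    v1, v2 = sd1 ** 2 / n1, sd2 ** 2 / n2          # squared standard errors
    t = (mean1 - mean2) / math.sqrt(v1 + v2)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df

# GPAsci summary statistics from Table 31: NCE failers (0) vs. passers (1)
t, df = welch_t(3.21, 0.229, 9, 3.47, 0.416, 142)
```

The result closely matches the t = -3.107, df = 11.660 reported for GPAsci in Table 32.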

Table 31
t-Test Group Statistics for GPAsci, GPAcum and NCEP
         NCEP   N     Mean     Std. Deviation   Std. Error Mean
GPAsci   0      9     3.21     .229             .076
         1      142   3.47     .416             .035
GPAcum   0      9     3.3289   .28825           .09608
         1      142   3.2250   .40329           .03384
Table 32
Independent Samples t-Test Analyses for GPAsci, GPAcum and Performance on the
National Certification Examination
                                      Levene's Test for
                                      Equality of Variances   t-test for Equality of Means
                                      F       Sig.            t        df       Sig. (2-tailed)   Mean Difference
GPAsci   Equal variances assumed      5.323   .022            -1.861   149      .065              -.261
         Equal variances not assumed                          -3.107   11.660   .009              -.261
GPAcum   Equal variances assumed      2.019   .157            .759     149      .449              .10389
         Equal variances not assumed                          1.020    10.099   .332              .10389


To what extent do the quality of work experience and the hours of work
experience correlate with student performance on the NCE?
The Pearson’s chi-square test was performed for the type of clinical
experience and student performance on the NCE.  Findings demonstrated a
nonsignificant association (p = .755), but 65.6% of the cells had expected
frequencies of less than 5 (Table 33). The chi-square linear-by-linear association
was used to assess the relationship between clinical work hours and performance on
the NCE.  The linear-by-linear association statistic of 0.460 demonstrated a
nonsignificant association (p = .498).
Table 33
Chi-Square Test for Type of Clinical Experience (EXPTYP) with National
Certification Exam Performance (NCEP)

                              Value     df   Asymp. Sig. (2-sided)
Pearson Chi-Square            10.966a   15   .755
Likelihood Ratio              11.196    15   .739
Linear-by-Linear Association  .422      1    .516
N of Valid Cases              151
a. 21 cells (65.6%) have expected count less than 5. The minimum expected count is .12.



Table 34
Chi-Square Test for Clinical Hours Performance on the National Certification
Examination

                              Value    df   Asymp. Sig. (2-sided)
Pearson Chi-Square            3.787a   6    .705
Likelihood Ratio              3.788    6    .705
Linear-by-Linear Association  .460     1    .498
N of Valid Cases              151
a. 9 cells (64.3%) have expected count less than 5. The minimum expected count is .06.
Additional analysis drawn from the cross-tabulation tables was undertaken by
identifying types of clinical experience (with cell frequencies of five or greater)
with a 100% pass rate on the NCE.  Licensed vocational nurses, registered nurses,
surgical technicians/laboratory technicians, and respiratory therapists fell into this
class of experiences (see Appendix C).  Cross-tabulation between clinical hours and
performance on the NCE indicated that individuals who completed greater than
35,000 clinical hours (N = 5) prior to enrollment had a perfect pass rate on the NCE
(see Appendix D).  This finding implies that individuals with more clinical hours are
more likely to succeed.
To what extent does the GPAsci predict student performance on the basic
science concept task areas (PANCE 7) on the NCE when used as an indicator of
success?
The Spearman’s rho correlation coefficient was used to determine the
relationship between GPAsci and basic science concepts on the National
Certification Examination (PANCE 7).  The correlation coefficient (rho = .390,
p < .001) indicates a significant correlation between GPAsci and PANCE 7
(Table 35).
Table 35
Correlations between GPAsci and PANCE7

 
Spearman's rho                     GPAsci    Pance7
GPAsci   Correlation Coefficient   1.000     .390**
         Sig. (2-tailed)           .         .000
         N                         169       128
Pance7   Correlation Coefficient   .390**    1.000
         Sig. (2-tailed)           .000      .
         N                         128       128
**. Correlation is significant at the 0.01 level (2-tailed).


Discussion of Question Two
The findings demonstrated that the science grade point average had a
statistically significant correlation with performance on the NCE and a stronger
correlation with student performance on the basic science concepts on the
examination, but the cumulative grade point average demonstrated no practical or
statistical significance.  Correlations of clinical hours and type of clinical
experience with performance on the NCE could not be determined with sufficient
statistical precision because greater than 64% of the cells had cell counts of less
than five.  However, additional analysis drawn from the cross-tabulation tables (see
Appendix C) identified types of experience with a 100% pass rate on the NCE.
Cross-tabulation between clinical hours and performance on the NCE indicated that
individuals who completed greater than 35,000 hours (N = 5) prior to enrollment
had a perfect pass rate on the NCE.
Research Question Three
Is there a statistically significant relationship between the student entry-level
skills and exit skills on the NCE?
Bivariate correlations were performed to determine the relationships between
entry-level skills one through seven (Skill 1–7) and exit-level skills (Tables 36–42).
The correlation coefficients for these data ranged from -0.128 to 0.103.


Table 36
Correlation between Skill 1 and PANCE 1
Spearman's rho                     Skills1   Pance1
Skills1  Correlation Coefficient   1.000     -.008
         Sig. (2-tailed)           .         .925
         N                         170       128
Pance1   Correlation Coefficient   -.008     1.000
         Sig. (2-tailed)           .925      .
         N                         128       128


Table 37
Correlation between Skill 2 and PANCE 2
Spearman's rho                     Skills2   Pance2
Skills2  Correlation Coefficient   1.000     .103
         Sig. (2-tailed)           .         .245
         N                         170       128
Pance2   Correlation Coefficient   .103      1.000
         Sig. (2-tailed)           .245      .
         N                         128       128


Table 38
Correlation between Skill 3 and PANCE 3
Spearman's rho                     Skills2   Pance2
Skills2  Correlation Coefficient   1.000     .103
         Sig. (2-tailed)           .         .245
         N                         170       128
Pance2   Correlation Coefficient   .103      1.000
         Sig. (2-tailed)           .245      .
         N                         128       128


Table 39
Correlation between Skill 4 and PANCE 4
Spearman's rho                     Skills4   Pance4
Skills4  Correlation Coefficient   1.000     -.054
         Sig. (2-tailed)           .         .548
         N                         170       128
Pance4   Correlation Coefficient   -.054     1.000
         Sig. (2-tailed)           .548      .
         N                         128       128


Table 40
Correlation between Skill 5 and PANCE 5
Spearman's rho                     Skills5   Pance5
Skills5  Correlation Coefficient   1.000     -.071
         Sig. (2-tailed)           .         .426
         N                         170       128
Pance5   Correlation Coefficient   -.071     1.000
         Sig. (2-tailed)           .426      .
         N                         128       128


Table 41
Correlation between Skill 6 and PANCE 6
Spearman's rho                     Skills6   Pance6
Skills6  Correlation Coefficient   1.000     -.016
         Sig. (2-tailed)           .         .916
         N                         170       44
Pance6   Correlation Coefficient   -.016     1.000
         Sig. (2-tailed)           .916      .
         N                         44        44


Table 42
Correlation between Skill 7 and PANCE 7
Spearman's rho                     Skills7   Pance7
Skills7  Correlation Coefficient   1.000     -.128
         Sig. (2-tailed)           .         .149
         N                         170       128
Pance7   Correlation Coefficient   -.128     1.000
         Sig. (2-tailed)           .149      .
         N                         128       128
Discussion of Question Three  
I anticipated that there would be a statistically significant relationship
between the student entry-level skills and exit-level skills on the National
Certification Exam. However, this study found no relationship between entry-level
skills and exit-level skills on the NCE.  The findings suggested that student
performance on the NCE did not relate to the skills individuals possessed prior to
enrollment, and that skills learned through the curriculum served as the source of
knowledge for performance on the NCE.  Therefore, using entry-level skills as a
predictor of student success or failure on the NCE is not supported by the findings.
Summary of Findings
The following is a summary of the findings identified in this study:
1.  There was no statistically significant correlation between cumulative
grade point average and completing the program.  However, completers had higher
cumulative grade point averages than non-completers.
2.  There was a nearly significant correlation between science grade point
average and program completion, and completers had higher science GPAs than
non-completers.
3.  There was no statistically significant correlation between the personal
statement and program completion.  However, completers had higher personal
statement ratings than non-completers.
4.  There was a practically significant relationship between the letters of
reference and program completion.  In addition, the direction of the means suggested
program completers had better letters of reference.
5.  There was a nonsignificant correlation between clinical hours and program
completion.
6.  There was a correlation between the type of work experience and program
completion. Registered nurse, corpsman, paramedic, and phlebotomy were identified
as the types of work experience with the best completion rates.
7.  There was a strong, statistically significant correlation between science
grade point average and performance on the NCE.
8.  There was no statistically significant correlation between cumulative
grade point average and performance on the NCE.
9.  There was a strong, significant correlation between science grade point
average and performance on the basic science concepts on the NCE.
10.  There was no statistically significant correlation between the entry-level
skills and the exit-level skills on the NCE.
Conclusion
The summary of the findings demonstrated variations in the degree of
relationship among predictor measures and criterion measures.  Results demonstrated
that science grade point average had the greatest degree of correlation with student
outcomes in comparison with the other predictor measures.  These findings are
consistent with previous research that recognizes prior academic achievement as an
indicator of future academic performance.  However, the relationships of the
remaining predictor measures with the criterion measures were not as significant.
The research demonstrated practically significant or potentially significant
correlations between the majority of the predictor measures and the criterion
measures.
CHAPTER V
DISCUSSION
The major implication of this study was the need to develop reliable
admission criteria with predictive validity for selecting qualified students (with a
relatively high probability of success) for admission to a community college-based
physician assistant program. The high cost and extensive time involved in training
competent physician assistants, along with limited resources, are reasons that
discriminators in the admission criteria must distinguish individuals more likely to
succeed from those who are less likely to succeed.  In addition, the increased
demand for physician assistants in the workforce and the increased public demand
for competent health care providers require that individuals selected for admission
be discriminated effectively.  Existing data from the one hundred and seventy
students who enrolled in the program from 1999 to 2005 were evaluated to
determine the significance of the predictor measures used in the admission process
as indicators of student success.
The theoretical framework for this study relied on the review of literature
related to other predictive validity studies. The review of literature related to
admission practices of health professional programs identified traditional admission
variables commonly used in admission criteria (Zhang, 1999; Gansky et al., 2003;
Tang & Lee, 1989; Downey et al., 2002; Salahdeen, 2004; Confer, 1995; Turnbull,
2003).  The statistically significant correlation of these variables with student
outcomes varied among health professional programs.  The review of prior
predictive studies enabled this investigation and guided the discussion in this
chapter.
Summary of Findings
The study examined the correlation between predictor measures and criterion
measures used in the admission selection process.  The literature review indicated
that previous scholastic achievement was by far one of the most significant
predictors of student success (Fontanelle & Cooke, 1992; Thieman, 2003; Zhang,
1999).  The findings in this study demonstrated similar results in that the most
significant predictor identified in this study was prior academic performance.
Studies by Zhang (1999) and Fontanelle and Cooke (1992) demonstrated that
GPAsci, used as a discriminator, effectively identified candidates capable of success.
Espen (2006) demonstrated a significantly high correlation between GPAsci and
both pass rate on the NCE and program completion. Evans and Wen (2005)
demonstrated the predictive value of GPAcum and GPAsci in relation to the NCE.
In addition, Zhang (1999) demonstrated a moderate correlation between GPAcum
and performance on the NCE.
Consistent with prior research related to this topic, prior academic
performance was the most significant predictor of student outcome, specifically
GPAsci.  The correlations between GPAsci and student performance on the NCE,
and between GPAsci and student performance on the basic science concepts, were
significantly high.  In addition, there was a practically significant correlation
between GPAsci and program completion. Conversely, there was no statistically
significant correlation between GPAcum and either program completion or
performance on the NCE.  However, a practical indication of GPAcum as a
predictor of students’ success was implied by the direction of the means presented
in the independent sample t-test analysis.
The literature review emphasized the importance of using subjective data to
augment academic data in health professional programs. The significance of using
subjective data was demonstrated in the research done by Confer (1995) and
Turnbull et al. (2003).  The research indicated that a combination of academic
scores with subjective admission criteria resulted in a better prediction of student
academic performance (Confer, 1995).  Turnbull et al. (2003) demonstrated that
utilization of subjective data brought about improvements in the quality of students
selected for admission. Letters of reference and personal statements fall into the
category of subjective data.  The review of the literature indicated that letters of
reference and personal statements showed little meaning as predictors of student
success; the predictive validity of personal statements had fallen to almost zero
(Hughes, 2002).  Negative views of letters of reference prevailed throughout the
literature.   Stronck (1979) and Steiren (1981) found letters of reference unrelated
as predictors of program completion or GPA.  Marvis et al. (2006) considered
letters of reference to be meaningless in most cases.
This investigation of the community college-based PA program unexpectedly demonstrated a practically significant relationship between letters of reference and program completion; the practical indication of significance was implied by the direction of the means.  In contrast, there was no statistically significant correlation between the personal statement and program completion, although, as with the letters of reference, a practical indication of significance was implied by the direction of the means.  The lack of reliable and valid ratings from evaluators was a cause for concern in past research.  In this investigation, however, the evaluators demonstrated a moderate inter-rater reliability coefficient, indicating acceptable consistency in scoring.
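The inter-rater reliability coefficient reported above can be illustrated with Cohen's kappa, one common agreement statistic for categorical rubric scores. The rating data below are hypothetical, and the dissertation's exact coefficient is not reproduced; this is a minimal sketch of the idea of chance-corrected agreement:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters, corrected for chance agreement."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal category proportions
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two raters scoring ten reference letters on a 1-3 rubric (hypothetical)
a = [3, 2, 2, 1, 3, 2, 1, 3, 2, 2]
b = [3, 2, 1, 1, 3, 2, 2, 3, 2, 2]
print(round(cohens_kappa(a, b), 2))  # about 0.68, a moderate coefficient
```

Values near 0.4-0.6 are conventionally read as moderate agreement, which is why a moderate coefficient can still be taken as acceptable for rubric-based screening.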
The nature of the health professions requires that future physician assistants have basic skills and abilities to practice medicine.  Salahdeen (2004) believed that methods of assessing the basic skills and abilities of potential students should be measured as a variable in the admission process.  For this investigation, the type of work experience and the number of clinical hours worked prior to admission were identified as the measured variables for assessing the applicant's basic skills and abilities.  Studies correlating work experience and academic performance were found primarily in research related to Master of Business Administration (MBA) programs, and their findings varied.  McClure, Wells, & Bowerman (1986), Dreher and Ryan (2000), and Adams (2000) demonstrated positive correlations between work experience and academic performance.  In contrast, Dreher and Ryan (2004), Grady et al. (1996), and Sulaiman et al. (2006) concluded that work experience was not related to student performance.
Findings related to work experience in this investigation did not provide overwhelming evidence to support using work experience in the admission criteria to predict program completion.  Nevertheless, the chi-square linear-by-linear association indicated a moderate correlation between work experience (EXPTYP and CLINHRS) and student performance on the NCE.
Gansky et al. (2003) and Tang & Lee (1989) investigated the use of skill tests as part of the admission criteria in health professional programs.  Their investigations demonstrated that skill tests did not have significant power in predicting academic performance or success in completing the program.  Despite these findings, it was predicted that entry-level skills and exit-level skills would demonstrate a statistically significant correlation.  Contrary to the expected outcome, all correlations between skills and the PANCE were insignificant.  Therefore, using entry-level skills as a predictor of student success in program completion and performance on the PANCE was not supported.
Overall, the findings indicated both statistically significant relationships and practical indications of a relationship between the predictor measures and the criterion measures.  The following conclusions are drawn from the results of this study:
1.  The study demonstrated practically significant or potentially significant correlations between the majority of the predictor measures and the criterion measures.
2.  The science grade point average demonstrated a strong, statistically significant correlation with performance on the NCE and a near-significant correlation with program completion.
3.  No significant correlation existed between entry-level skills and exit-level skills, which implies that skill performance on the NCE reflected information learned during training.
Implications for Practice
The following implications for practice can be drawn from this study:
1.  Findings demonstrated variations in the degree of relationship among predictor measures and criterion measures, with practically significant or potentially significant correlations for the majority of the predictor variables.  These findings indicated that the current admission variables have some potential for predicting student outcomes; therefore, the PA program should not make major changes to the admission criteria based on this study.  The evidence implied that the majority of predictor measures demonstrate potentially significant correlations with student outcomes, but it was not strong enough to dictate changes in the admission criteria or to declare that the criteria were reliable and valid.
2.  The study was aimed at identifying variables that forecast academic success for students in a community college-based physician assistant program.  To show that a prerequisite is necessary for success, the validation procedure must ensure that a student who has not met the prerequisite is highly unlikely to succeed; criterion-related validity must be established for this purpose.  Criterion-related validity would exist if a significant correlation existed between the predictor measures and the criterion measures.  Unfortunately, the weak relationships observed in most of the correlation tests performed in this study indicated that criterion-related validity was not demonstrated.  This evidence further supports the decision not to recommend changes to the current admission criteria based on this study.  In other words, the statistical relationship between the predictor measures and the criterion measures was not demonstrated with sufficient strength and precision to establish criterion-related validity.  On the other hand, the findings did demonstrate that the admission variables have the potential for predicting student outcomes.
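When the criterion is dichotomous (e.g., program completion as pass/fail), one standard way to quantify criterion-related validity is the point-biserial correlation, which is mathematically equivalent to Pearson's r. The GPA values and outcomes below are hypothetical, and the study's own statistical procedures are not reproduced; this is a sketch of the technique only:

```python
import statistics

def point_biserial(x, y):
    """Correlation between a continuous predictor x and a dichotomous
    criterion y (0 = fail, 1 = pass); equivalent to Pearson's r."""
    n = len(x)
    mean_pass = statistics.mean(v for v, g in zip(x, y) if g == 1)
    mean_fail = statistics.mean(v for v, g in zip(x, y) if g == 0)
    p = sum(y) / n                 # proportion passing
    sd = statistics.pstdev(x)      # population SD of the predictor
    return (mean_pass - mean_fail) / sd * (p * (1 - p)) ** 0.5

# Hypothetical GPAsci values paired with pass/fail outcomes
gpas = [3.8, 3.5, 2.9, 3.2, 2.7, 3.6]
outcomes = [1, 1, 0, 1, 0, 1]
print(round(point_biserial(gpas, outcomes), 2))
```

A strong positive coefficient on an adequately sized sample is the kind of evidence criterion-related validation requires; the weak coefficients observed in this study are what prevented that conclusion.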
3.  In past research, letters of reference and personal statements have shown little value as predictors of student outcomes.  One major pitfall associated with their use is the lack of inter-rater reliability among raters.  In this study, rater reliability on the scores assigned to reference letters and personal statements was low and moderate, respectively.  The analytic and holistic rubrics designed to evaluate the written documents, together with the training of raters prior to rating, contributed to the reliability of the raters' scoring.   Based on these findings, the physician assistant program will incorporate these rubrics in evaluating reference letters and personal statements in future admission processes.  The rubrics can be adopted by other health professional programs and undergo further testing for reliability and validity.
Limitations of the Study
There were several noteworthy limitations to consider when interpreting the results of this study:
1.  Previous scholastic achievement was the most significant predictor of student success identified in this study; however, several limitations in how scholastic achievement was measured were noted.  First, the cumulative grade point average was not calculated from all college course work completed; only the prerequisite courses were used, so a true cumulative GPA was not determined.  Second, the science grade point average was calculated using anatomy, physiology, and microbiology; chemistry and physics prerequisite grades were not included in the GPAsci, although they were included in the cumulative grade point average. Consequently, a true science grade point average was not calculated for this study.  Third, the number of units taken at any one time was not considered when assessing student academic ability; students who take one course at a time will likely earn better grades than students who take multiple or complex courses concurrently.  Finally, the number of times a course was repeated was not factored into the GPAsci or GPAcum and was not considered in the admission process; the number of times an applicant repeats a course is valuable information when determining a true grade point average.
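The kind of calculation these limitations call for could be sketched as follows. The course names, unit values, and the choice to count every attempt of a repeated course are illustrative assumptions, not the program's actual policy:

```python
# Illustrative GPA calculation that keeps every attempt of a repeated
# course, weighting each attempt by its unit value.
GRADE_POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}
SCIENCE = {"anatomy", "physiology", "microbiology", "chemistry", "physics"}

def gpa(transcript, courses=None):
    """transcript: list of (course, units, letter_grade) tuples,
    one entry per attempt. courses=None computes the cumulative GPA."""
    rows = [(c, u, g) for c, u, g in transcript
            if courses is None or c in courses]
    units = sum(u for _, u, _ in rows)
    points = sum(u * GRADE_POINTS[g] for _, u, g in rows)
    return points / units if units else 0.0

transcript = [
    ("anatomy", 4, "C"), ("anatomy", 4, "B"),   # repeated course: both attempts count
    ("chemistry", 3, "A"), ("english", 3, "B"),
]
print(round(gpa(transcript, SCIENCE), 2))  # science GPA over all attempts
print(round(gpa(transcript), 2))           # cumulative GPA over all course work
```

Counting every attempt (rather than only the highest grade) lowers the GPA of applicants who repeat courses, which is exactly the information the admission process currently discards.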
2.  Findings related to the correlation between entry-level skills and exit-level skills were disappointing.  A statistically significant correlation between the two was expected.  The unexpected finding can be attributed to the fact that skills learned during training served as a source of knowledge, which was not taken into consideration during the investigation.
3.  The sample size was also a limitation of this study.  The chi-square tests frequently had expected cell counts of less than five, which indicated that the estimated probabilities were not sufficiently precise.  Differences in sample sizes also affected the significance of the t-test analyses.
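The expected-cell-count concern can be made concrete with the standard formula (row total x column total / grand total) applied to a small crosstabulation. The observed counts below are hypothetical, not the study's data:

```python
# Expected cell counts for a chi-square test of independence; the
# common rule of thumb flags any expected count below 5.
observed = [
    [2, 48],   # rows: clinical-hour bands; columns: NCE fail / pass
    [1, 36],
    [3, 21],
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)
expected = [[r * c / n for c in col_totals] for r in row_totals]

flagged = [cell for row in expected for cell in row if cell < 5]
print(flagged)  # every "fail" cell is flagged: few failures, many bands
```

Because failures were rare relative to the number of categories, every expected count in the fail column falls below five, which is why the estimated probabilities were imprecise.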
Future Research
Future research is warranted to further investigate the reliability and predictive validity of predictor measures in identifying individuals with the ability to succeed in the physician assistant program.  This investigation will serve as a prelude to a more in-depth study that will hopefully demonstrate a more reliable and valid outcome.
1.  To ensure the reliability and predictive validity of the science and cumulative grade point averages, future research should establish more accurate calculations: a cumulative grade point average that includes all course work completed; a science grade point average that includes chemistry, physics, anatomy, physiology, and microbiology; the number of units completed prior to admission; the highest number of college units taken at any one time; and the number of times a course was repeated.
2.  The study did not determine why individuals failed to complete the program.  Multiple factors come into play when assessing the elements of student success.  Academic and non-academic causes contribute to student success in the physician assistant program and should be evaluated in future studies to help identify other attributes that could be considered in the selection process to predict student success or failure.
3.  A longitudinal study should be done to compare student performance in
each cohort of students from 2001 to 2008.
REFERENCES
Adams, A. (2000). Work experience as a predictor of MBA performance. College
Student Journal, 30, 361-366.

Armstrong, W.B. (2000). The Association Among Student Success in Courses,
Placement Test Scores, Student Background Data, and Instruction Grading
Practices. Community College Journal of Research and Practice, 24: 681-695

Adel, C., Daniel, B. & Berkovits, M. (2000). Education Statistics Quarterly. National
Center for Education Statistics, 5.(3).

Bello, A, Haber, J. & King, V.(1977).  Factors which predict success or failure in an
Associate Degree Nursing program. Hartford, CT: Connecticut State
Department of Education: Division of Vocation Education, Research and
Planning Unit.

Besinque, K.H., Wong, W.Y. & Rho, J.P. (2000). Predictors of Success Rate in the
California State Board of Pharmacy Licensure Examination. American
Journal of Pharmaceutical Education. Vol.64: 50-53.

Slater, S.D. & Boulet, J.R, (2001).  Predicting Holistic Rating of Written
Performance Assessments from Analytic Scoring. Advances in Health
Science Education 6: 103-119.

Catherine, K.C. & Raymond, Y.W. (1989). The Use of the Admission Requirements
in Predicting the Academic Performance of the Physiotherapy Students at the
Hong Kong Polytechnic. The Journal of The Hong Kong Physiotherapy
Association. vol. 11:8-13.

Catherine, K.C. & Tang, M. (1989).  The Use of the Admission Requirements in
Predicting the Academic Performance of the Physiotherapy Students at the
Hong Kong Polytechnic. The Journal of the Hong Kong Physiotherapy
Association. vol. 11:8-13.

Chaisson G.M. (1976). Student selection: logic or lottery. Journal of Allied Health.
Vol. 5: 7-16.

Chancellor’s Office of the California Community College (1991). Assessment
validation project: Local research options.  Sacramento, CA: Chancellor’s
Office of the California Community Colleges, Student Services and Special
Programs Division

Chester, M.D. (2005). Introduction to the Special Issue: Test Scores and State
Accountability – Measuring the Impact of State Accountability Programs.  
Educational Measurement:  Issues and Practice. vol. 24 (4): 3-4.

Chestnut, R.J. & Phillips, C.R. (2000).  Current practices and anticipated changes in
academic and non-academic admission sources for entry-level PharmD
programs. American Journal of Pharmaceutical Education, vol. 64 (3): 251-
259.

Confer, A.W. Turnwald, G.H. & Wollenberg, D.E. (1995). Correlation of Objective
and Subjective Admission Criteria with First-year Academic Performance.
Journal of Veterinary Medical Education. Vol. 20 (3).

Cohn, Elchanan, Sharon Cohn, Donald C. Balch and James Bradley Jr. (2004)
“Determinants of Undergraduate GPAs: SAT Scores, High-School GPA and
High-School Rank”. Economics of Education Review 23, 6, 577-586.

Cronbach et al. (1997). Generalizability Analysis for Performance Assessment of
Student Achievement. Educational and Psychological Measurement. 57:373-
399

Curtis, D.A., Lind, S.L., Plesh, O., & Finzen, F.C. (2007). Correlation of Admissions
Criteria with Academic Performance in Dental Students. Journal of Dental
Education. 71(10): 1314-1321.

De Ball, S., Sullivan, K. & Horine, J. (2001). The Relationship of Performance on
the Dental Admission Test and Performance on Part I of the National Board
Dental Examinations. Journal of Dental Education. Vol 66. No. 4: 478-484.

Deckro, R. & Woudenberg, H. (1997). MBA admission criteria and academic
success.  Decision Science, 8, 765-769.

Downey, M.C, Collins, M.A., Browning, W.A. (2002). Predictors of Success in
Dental Hygiene Education: A Six-Year Review. Journal of Dental
Education.  Vol. 66 No. 11:1269-1273.

Dreher, G., Ryan, K. (2002). Evaluating MBA-program admission criteria:  The
relationship between pre-MBA work experience and post MBA career
outcomes.  Research in Higher Education, 43, 727-744.

Edmund, K. (2005). Factors Related to Physical Therapist License Examination
Scores.  Journal of Physical Therapy Education.

Elliott, S.N., Compton, E. & Roach, A.T. (2007). Building Validity Evidence for
Scores on a State-Wide Alternate Assessment: A Contrasting Groups,
Multimethod Approach. Educational Measurement: Issues and Practice. vol.
26 (2), 30-43.

Espen, D., Wright, D.L. & Killion, J. (2006).  American requirements for
Radiography Programs. Radiologic Technology .

Eva, P. & Wen, F. (2007). Does the Medical College Admission Test Predict Global
Academic Performance in Osteopathic Medical School? Journal of American
Osteopath Association. Vol .107:157-162

Evans, J.G. & Dirks, S.J. (2001). Relationships of Admissions Data and
Measurements of Psychological Constructs with Psychomotor Performance
of Dental Technology Students. Journal of Dental Education. Vol. 65, no. 9:
874-882.

Family Educational Rights and Privacy Act (Buckley Amendment) 20 USC S.1232g.
Available on line: epic.org/privacy/education/ferpa.html 4/14/2008.

Gray, S. and Deem, L. (2002). Predicting Student Performance in Preclinical
Technique Courses Using the Theory of Ability Determinants of Skilled
Performance. Journal of Dental Education. Vol.66. no. 6 p. 721-727.

Hess, B. (2003). Scoring and Validating Certification Examinations. Phoenix, AZ.,
APAP Education Forum.

Houglum, J.E., Aparasu, R.R. & Delfinis, T.M. (2005). Predictors of Academic
Success and Failure in a Pharmacy Professional Program. American Journal
of Pharmaceutical Education. Vol.69 (3) article 43.

Hughes, P. (2002).  Can we improve on how we select medical students? Journal of
the Royal Society of Medicine. Vol. 95; 18-22.

Hulse, S.F. () Admissions Criteria: Eenie, Meenie, Miney, Moe?
www.asrt.org/media/Pdf

Humphries, L.R. (2006). A comparison of interviewed and non-interviewed student
Cohorts for the PA program of study and national physician assistant
certification exam scores. Wichita State University, Dept. of Physician
Assistant. URI: http://hdl.handle.net/10057/944.

Kavanagh L.K. (1981). Admission criteria for a college-based radiologic technology
program: relationship of entry levels to subsequent course performance. Vol.
53: 113-117.

Kingsley, K., Swell, J., Ditmyer, M., O’Malley, S. & Galbraith, G. (2006). Creating
an Evidence-Based Admissions Formula for a New Dental School:
University of Nevada, Las Vegas, School of Dental Medicine. Journal of
Dental Education. Vol. 71, no. 4. pp 492-500.

Less, K.H. (2005). Improving the Pass Rates of Community-College Students on
Certification Examinations.

Levine, S.B., Knecht, H.G. &Eisen, R.G. (1986). Selection of physical therapy
students: interview methods and academic predictors.  Journal of Allied
Health. Vol. 15: 143-151.

Lumsden, M.A., Bore, M., Millar, K., Jack, R., & Powis, D. (2005). Assessment of
personal qualities in relation to admission to medical school. Medical
Education. 2005;39:258-265.

Lytle, C. (2007). Admission Criteria as Predictors of NCLEX-RN Success in
Associate Degree Nursing Graduates. Unpublished doctoral dissertation
Florida State University College of Nursing.

Marvis, B.E., Shafer, C.L. & Magallanes, B.M. (2006). The Intentions of Letter
Writers for Applications to a Baccalaureate-M.D. Program: Self-Report and
Content Analyses of Letter of Reference. Medical Education Online. 11:6.
Available from http://ww.med-ed-on-line.org.

McCall, K.L., McLaughlin, E.J., Fike, D.S. & Ruiz, B. (2007). Preadmission
Predictors of PharmD Graduates’ Performance on the NAPLEX. American
Journal of Pharmaceutical Education. V.71(1):05.

McNeill, M.H. & Brockmeir, L.L. (2005). Relationships between Academic Program
Variables and Success on the Registered Health Information Administrator
Certification Examination. Perspectives in Health Information Management.
Vol.2:4.  Available on line
http://library.ahima.org/xpedio/groups/public/documents/ahima/bok1_02809
6.html

Middlemas, D., Manning, J., Gazzillo, L. & Young, J. (2001). Predicting
Performance On the National Athletic Trainers’ Association Board of
Certification Examination From Grade Point Average and Number of
Clinical Hours.  Journal of Athletic Training. V.36(2); Apr-Jun 2001: 136-
140.

Naron, R. (1991). Relationship of Academic Variables to National Council for
Licensure Examination for Registered Nurse Performance of Graduates in a
Selected Associate Degree Nursing Program.  Unpublished doctoral
dissertation Nova University.

Park, S.E., Susarla, S.M., & Massey, W.(2005), Do Admissions Data and NBDE Part
I Scores Predict Clinical Performance Among Dental Students? Journal of
Dental Education. Vol.70,5: 518 -524.

Reiter, H.L., Eva, K.W. & Rosenfeld, J. (2007). Multiple mini-interviews predict
clerkship and licensing examination performance. Medical Education 41 (4),
378-384.

Salahdeen, H.M. & Murtala, B.A. (2005). Relationship between Admission Grades
and Performances of Students in the First Professional Examination in a New
Medical School. African Journal of Biomedical Research, Vol. 8 (2005): 51-
57.

Salvatori, P. (2001). Reliability and Validity of Admissions Tools Used to Select
Students for the Health Professions. Advances in Health Sciences Education
6:159-175, 2001.

Smithers, S., Catano, V.M. & Cunningham, D.P. (2004).  What Predicts Performance
in Canadian Dental Schools? Journal of Dental Education. Vol.68 No. 6:598-
613.

Sulaiman, A. & Mohezar, S. (2006). Student Success Factors:  Identifying Key
Predictors. Journal of Education for Business. 2006. pp. 328-333.

Tache, S. & Chapman, S. (2006). The Expanding Roles and Occupational
Characteristics of Medical Assistants: Overview of an Emerging Field in
Allied Health. Vol. 35, No. 4: 233 – 237.

Tamblyn, R., Abrahamowicz, M. & Dauphinee, W.D. (2002).  Association Between
Licensure Examination Scores and Practice in Primary Care. The Journal of
the American Medical Association. Vol. 288 No. 23: 3019-3026.

Tang, C. & Lee, R. (1989) The Use of Admission Requirements in Predicting the
Academic Performance of the Physiotherapy Students at the Hong Kong
Polytechnic. The Journal of Hong Kong Physiotherapy Association. Vol. 11,
p -13.

Thomas, T., Weddle, M.L., Moore, M.A.(2003). Predicting Academic, Clinical and
Licensure Examination Performance in a Professional (Entry-Level) Master’s
Degree Program in Physical Therapy. Journal of Physical Therapy
Education.

Turnbull, D., Buckley, P., Robinson, J., Mather, G. Leahy, C. & Marley, J. (2003).
Increasing the evidence base for selection for undergraduate medicine: four
case studies investigating process and interim outcomes. Medical Education.
2003;37:1115-1120.

Utzman, R.R. & Riddle, D.L, & Jewell, D.V. (2007). Use of Demographic and
Quantitative Admissions Data to Predict Performance on the National
Physical Therapy Examination. Physical Therapy 87:1181-1193.

Zhang, J.Q. (1999). The Correlation of Students’ Entry-Level GPA, Academic
Performance and the National Board Examination in All Basic Science
Subjects. The Journal of Chiropractic Education. Vol. 13, No. 2: 91-99.
APPENDIX A
ANALYTIC RUBRIC RATING FORM FOR THE PERSONAL STATEMENT

Each criterion is scored from 3 (strong) to 0 (none/poor):

Criterion 1. Statement of professional goal to become a PA:
    3 = strong statement; 2 = good expression; 1 = fair statement;
    0 = no expression of a professional goal.                      Score ____

Criterion 2. Commitment to the PA profession:
    3 = strong expression; 2 = good expression; 1 = fair expression;
    0 = no expressed commitment.                                   Score ____

Criterion 3. Written communication skills:
    3 = strong; 2 = good; 1 = fair; 0 = poor.                      Score ____

Total Score ____

Grading scale for “Statement of Professional Goal”
• Goal is clearly stated
• Stated goal is authentic
• Evidence that supports the professional goal of wanting to become a PA
• Understanding or statement of ability to achieve the goal

Scoring: 4 = strong; 3 = good; 2=fair; and <2 = no expression of professional goal.

Grading scale for “Commitment to the PA Profession”
• Knowledge about the PA profession
• Future plans to practice as a PA
• Stated goal meets the mission of the PA profession

Scoring: 3 = strong; 2 = good; 1 = fair; 0 = no expressed commitment to the profession

Grading scale for “Written Communication Skills”.
• Unique introduction
• Speaks in first person
• Followed the format
• Organization (introduction, body, and conclusion)
• No spelling errors
• Minimal to no grammatical errors
• Answered the question
• Coherent and concise

Scoring: 7 -8 = strong; 5-6 = good; 3-4 = fair; 0-2 = poor
APPENDIX B
HOLISTIC RUBRIC FOR RATING LETTER OF REFERENCE

Letter of Reference Scoring Rubric

Score of 3: Author uses descriptors such as excellent, exceptional, great,
outstanding, superior, or wonderful.

A reference in this category has the following qualities:

• Individual is described as extremely mature
• Strong expression of the individual’s emotional stability.
• Strong expression of the individual’s ability to learn quickly
• Expresses that the individual has strong positive interpersonal skills
• Expresses that the individual’s clinical skills are outstanding

Score of 2: Author uses descriptors such as skillful, enjoyable, good quality, or fine.


A reference in this category has the following qualities:

• Individual is described as moderately mature.
• Moderate expression of the individual’s emotional stability
• Moderate expression of the individual’s ability to learn  
• Expresses that the individual has strong positive interpersonal skills
• Expresses that the individual’s clinical skills are good.


Score of 1: Author uses descriptors such as adequate, average, reasonable, decent,
or moderately good.

A reference in this category has the following qualities:

• Individual is described as mature with limited description of mature
characteristics
• Fair expression of the individual’s emotional stability
• Fair expression of the individual’s ability to learn
• Expresses that the individual has interpersonal skills
• Expresses that the individual’s clinical skills are adequate.

APPENDIX C
CROSSTABULATION TABLES FOR WORK EXPERIENCE AND
PROGRAM COMPLETION

EXPTYPE * COMP Crosstabulation (% within EXPTYPE)

Work Experience Type (EXPTYPE)                                Frequency   Fail    Pass
1.  Medical Assistant/Certified Nursing Assistant                30       26.7%   73.3%
2.  Psychiatric Technician                                        8       12.5%   87.5%
3.  Licensed Vocational Nurse                                     6       16.7%   83.3%
4.  Registered Nurse                                              7        0.0%  100.0%
5.  Corpsman                                                      8        0.0%  100.0%
6.  Emergency Medical Technician                                 37        5.4%   94.6%
7.  Paramedic                                                    11        0.0%  100.0%
8.  Respiratory Therapy                                          12        8.3%   91.7%
9.  Physical Therapy Assistant/Chiropractor Assistant/
    Rehabilitation Technician/Occupational Therapy Assistant     11        0.0%  100.0%
10. Foreign Medical Doctor                                        3        0.0%  100.0%
11. Optometry Assistant/Dental Assistant                          3       33.3%   66.7%
12. Laboratory Technician/Surgical Technician                     9       22.2%   77.8%
13. Phlebotomy                                                    6        0.0%  100.0%
14. Radiology Technician                                         13        7.7%   92.3%
15. Other: Massage Therapy, Clinical Health Educator/
    EKG Technician                                                2        0.0%  100.0%
16. Chiropractor/Acupuncturist                                    3       33.3%   66.7%
Total                                                                     10.7%   89.3%
Crosstabulation of Program Completion (COMP) and Clinical Hours

Clinical Hours        Frequency   Valid Percent    Fail    Pass
1. <5,000                 57          33.7         12.3%   87.7%
2. 5,001-10,000           39          23.1          5.1%   94.9%
3. 10,001-15,000          27          16.0         11.1%   88.9%
4. 15,001-20,000          23          13.6         13.0%   87.0%
5. 20,001-35,000          17          10.1         11.8%   88.2%
6. >35,001                 6           3.6         16.7%   83.3%
Valid Total              169         100.0
Missing (System)          41
Total                    210
APPENDIX D
CROSSTABULATION TABLES FOR WORK EXPERIENCE AND
PERFORMANCE ON THE NATIONAL CERTIFICATION EXAMINATION
Crosstabulation between the Type of Clinical Experience and Performance on the
National Certification Exam

EXPTYPE                      Failed (NCEP)      Passed (NCEP)
1                            1  (4.5%)         21  (95.5%)
2                            1 (14.3%)          6  (85.7%)
3                            0  (0.0%)          5 (100.0%)
4                            0  (0.0%)          7 (100.0%)
5                            2 (25.0%)          6  (75.0%)
6                            2  (5.7%)         33  (94.3%)
7                            1  (9.1%)         10  (90.9%)
8                            0  (0.0%)         11 (100.0%)
9                            1  (9.1%)         10  (90.9%)
10                           0  (0.0%)          3 (100.0%)
11                           0  (0.0%)          2 (100.0%)
12                           0  (0.0%)          7 (100.0%)
13                           1 (16.7%)          5  (83.3%)
14                           0  (0.0%)         12 (100.0%)
15                           0  (0.0%)          2 (100.0%)
16                           0  (0.0%)          2 (100.0%)
Total                        9  (6.0%)        142  (94.0%)

Note: Percentages are % within EXPTYPE.
Crosstabulation of Clinical Hours (EXPHRS) with NCE Performance
Clinical Hours (EXPHRS)      Failed (NCEP)      Passed (NCEP)
1. <5,000                    2  (4.0%)         48  (96.0%)
2. 5,001-10,000              1  (2.7%)         36  (97.3%)
3. 10,001-15,000             3 (12.5%)         21  (87.5%)
4. 15,001-20,000             2 (10.0%)         18  (90.0%)
5. 20,001-35,000             1  (6.7%)         14  (93.3%)
6. >35,001                   0  (0.0%)          5 (100.0%)
Total                        9  (6.0%)        142  (94.0%)

Note: Percentages are % within EXPHRS.
Abstract
The purpose of this investigation was to examine the reliability and predictive validity of admission data in predicting student success in completing a community college-based physician assistant program and performance on the National Certification Examination (NCE). The files of 170 graduates were reviewed and the following data were compiled: 1) science grade point average (GPAsci), 2) cumulative grade point average (GPAcum), 3) reference letter ratings, 4) personal statement ratings, and 5) work experience, each identified as a predictor measure in this study. The criterion measures were 1) program completion, 2) performance on the NCE, and 3) skills. Findings demonstrated variations in the degree of relationship among predictor measures and criterion measures. The GPAsci demonstrated the greatest degree of correlation with student outcome in comparison with the other predictor measures, which is consistent with previous research. Overall, the research demonstrated practically significant or potentially significant correlations between the majority of the predictor measures and the criterion measures.
PDF
Detrimental effects of dental encroachment on secondary alveolar bone graft outcomes in the treatment of patients with cleft lip and palate: a cone-beam computed tomography study 
Asset Metadata
Creator Middleton, Delores E. (author) 
Core Title A predictive validity study: correlation of admission variables with program completion and student performance on the National Certification Examination in a physician assistant program 
Contributor Electronically uploaded by the author (provenance) 
School Rossier School of Education 
Degree Doctor of Education 
Degree Program Education 
Publication Date 10/18/2008 
Defense Date 08/29/2008 
Publisher University of Southern California (original), University of Southern California. Libraries (digital) 
Tag admission variable,criterion measures,criterion related evidence,letters of reference,national certification examination,OAI-PMH Harvest,personal statement,prior academic achievement,reliability and predictive validity,work experience 
Language English
Advisor Jimenez y West, Ilda (committee chair), Cole, Darnell (committee member), Hocevar, Dennis J. (committee member) 
Creator Email delores.middleton@rcc.edu,lolokinard@yahoo.com 
Permanent Link (DOI) https://doi.org/10.25549/usctheses-m1678 
Unique identifier UC1221042 
Identifier etd-Middleton-2420 (filename),usctheses-m40 (legacy collection record id),usctheses-c127-119222 (legacy record id),usctheses-m1678 (legacy record id) 
Legacy Identifier etd-Middleton-2420.pdf 
Dmrecord 119222 
Document Type Dissertation 
Rights Middleton, Delores E. 
Type texts
Source University of Southern California (contributing entity), University of Southern California Dissertations and Theses (collection) 
Repository Name Libraries, University of Southern California
Repository Location Los Angeles, California
Repository Email cisadmin@lib.usc.edu