A DESCRIPTIVE ANALYSIS FOCUSING ON SIMILARITIES AND DIFFERENCES AMONG THE U.S. SERVICE ACADEMIES

by Rufus E. Cayetano

A Dissertation Presented to the FACULTY OF THE USC ROSSIER SCHOOL OF EDUCATION, UNIVERSITY OF SOUTHERN CALIFORNIA, in Partial Fulfillment of the Requirements for the Degree DOCTOR OF EDUCATION

May 2015

Copyright 2015 Rufus E. Cayetano

Dedication

Incidentally, I completed and submitted the draft of this study's report to the dissertation committee chairperson and the other members of that distinguished group on the 5th anniversary of my mom's passing. She always believed in me and, throughout this doctoral journey, I have felt her constant presence. Mom, you were always proud of me and insisted that education is important. I believed you then and even more so today. Thank you.

Acknowledgements

The adage that "success has many fathers" aptly describes the accomplishment of this significant milestone. Several individuals contributed to this endeavor. First, I express profound gratitude to my beloved wife, whose steadfast devotion and loyalty sustained me during this journey. Her assumption of numerous household responsibilities that were primarily mine allowed me to focus on completing my education. I will be eternally grateful to her and to my daughters, who were forced to endure the "sweet pain" associated with this journey. It is my hope that their experience of seeing a dad work on his dissertation will provide an incentive for them to pursue similar journeys at a much younger age. Second, I thank my dissertation committee, especially Chairperson Dr. Robert Keim, who listened keenly to my proposed dissertation topic on accreditation in the California State University system and then calmly recommended the idea for this dissertation. The support and recommendations provided by committee members Dr. Patricia Tobey and Dr. Paul Woolston were invaluable. Third, completing this dissertation would have been difficult without the assistance of Librarian Melanee Vicedo, whose initial search for documents on accreditation of the U.S. service academies produced nothing significant. Her realistic assessment following that initial search, that embarking on a dissertation required innovative thinking and that a doctoral student had to adopt the mindset of a trailblazer, was both encouraging and challenging. Fourth, I extend special thanks to the five Accreditation Liaison Officers at the three U.S. service academies who agreed to participate in this study. I am eternally grateful to each of you for the information you willingly shared with me. Finally, I look up to heaven and thank the maker and creator, without whose guidance and blessings this important work would not have been possible. I have been blessed beyond what words can express. From the bottom of my heart, thank you.

Table of Contents

Dedication
Acknowledgements
List of Tables
Abstract
Chapter One: Introduction
  Background of the Problem
  The Accreditation Process
  Types of U.S. Accrediting Organizations
  Regional Accrediting Agencies
  Middle States Commission on Higher Education (MSCHE)
  North Central Association of Higher Learning Commission (NCAHLC)
  Accession Programs
  The United States Service Academies
  Student Demographics
  Nomination
  Appointment
  Service Academy Preparatory Schools
  Accreditation Liaison Officers (ALOs)
  Statement of Problem
  Purpose of the Study
  Importance of the Study
  Limitations and Delimitations
  Definition of the Terms
  Organization of the Study
Chapter Two: Literature Review
  Defining Accreditation
  Early Institutional Accreditation
  Regional Accreditation, 1885 to 1919
  Regional Accreditation, 1920-1950
  Regional Accreditation, 1951 to Present
  Specialized Accreditation
  Current State and Future of Accreditation
  Critical Assessment of Accreditation
  Costs of Accreditation
  Effects of Accreditation
  Trend Toward Learning Assessment
  Framework for Learning Assessment
  Benefits of Accreditation on Learning
  Organizational Effects of Accreditation
  Future Assessment Recommendations
  Challenges to Student Learning Outcomes
  Organization Learning Challenges
  Lack of Faculty Buy-in
  Lack of Institutional Investment
  Difficulty with Integration into Local Practice
  Tension Between Improvement and Accountability
  Transparency Challenges
  Summary
Chapter Three: Methodology
  Overview
  Sample and Population
  Instrumentation
  Data Collection
  Data Analysis
Chapter Four: Results
  Reporting the Results
  Regional Accreditation Standards and Shared Governance
  High Technical Course Content
  Immersion into the Military Culture and Lifestyle
  Continuous Program Improvement Through Internal and External Reviews
  High Degree of Faculty Engagement in Scholarship
  Summary
Chapter Five: Discussion of the Findings
  Discussion of Findings
  Implications for Practice
  Future Research
  Conclusion
Epilogue
References
Appendix A: Interview Guide
Appendix B: Introductory Letter to ALOs

List of Tables

Table 1: Middle States Commission on Higher Education Accreditation Standards and North Central Association of Higher Learning Commission Criteria for Accreditation
Table 2: Student Demographics for the Three Service Academies
Table 3: Academic Majors Offered at Three U.S. Service Academies
Table 4: Service Academies Engineering Programs
Table 5: The Top Three Engineering Programs/Majors Offered at the Service Academies

Abstract

This qualitative study provides a descriptive analysis of the similarities and differences among the U.S. service academies, with an emphasis on their engineering programs. The three academies are accredited by regional organizations. In addition, the engineering programs offered by the institutions are accredited by ABET. This study sought to answer the following research questions: Do U.S. service academies' Accreditation Liaison Officers (ALOs) believe that these institutions are held accountable to a different standard for institutional effectiveness and student outcomes? How do ALOs at the service academies believe they are preparing officers for leadership roles compared to other accession programs? What do ALOs believe are some unique similarities or differences among the service academies, specifically relating to institutional effectiveness and student outcomes? This study utilized a multiple case study design.
A qualitative case study provides the best research method for fully capturing the quality of education offered at the service academies. Each service academy is unique in that it exists to educate cadets and midshipmen to be outstanding leaders in their respective military components. The service academies offer numerous ABET-accredited engineering programs and have high persistence and completion rates.

CHAPTER ONE: INTRODUCTION

Background of the Problem

Accreditation, like tenure and academic freedom, is a term of art widely used in higher education, yet often misunderstood (Areen, 2011). Higher education's success depends on the important recognition role played by the Department of Education (DOE), accreditation oversight by the Council for Higher Education Accreditation (CHEA), and the responsibilities of the respective regional accreditation bodies such as the Middle States Commission on Higher Education (MSCHE) and the North Central Association of Higher Learning Commission (NCAHLC). Additionally, specialized agencies such as the Accreditation Board for Engineering and Technology (ABET) also play a very important role in accrediting specific programs, such as engineering, at most institutions. The three service academies that are the focus of this study, the U.S. Air Force Academy, the U.S. Naval Academy, and the U.S. Military Academy at West Point (Army), are accredited by regional accrediting agencies as well as by ABET for their engineering programs. ABET accreditation of the service academies is very important for graduates because many forms of professional licensure, registration, and certification require graduation from an ABET-accredited program as a minimum qualification. The federal government, as an employer, requires graduation from ABET-accredited programs for employment in certain fields.

The Accreditation Process

Accreditation of institutions and programs typically takes place on a cycle that may range from every few years to as many as every 10 years, depending on whether an institution seeks initial or continuing status. An institution or program seeking accreditation must go through a number of steps stipulated by the accreditation organization (Eaton, 2012). There are multiple tasks associated with the five major steps in the typical accreditation process: preparation of evidence of accomplishment by the institution or program, scrutiny of the evidence, a site visit by faculty and administrative personnel from peer institutions, and action by the accrediting organization to determine accreditation status (Eaton, 2012). Briefly, the five major steps in the accreditation process are:

1. Self-study: During this step, institutions and programs prepare a written summary of performance based on the accrediting organization's standards.
2. Peer review: Accreditation review is conducted primarily by faculty and administrative peers in the profession. These colleagues review the self-study and serve on visiting teams that review institutions and programs after the self-study is complete.
3. Site visit: Accrediting organizations send a visiting team to review an institution or program and to verify that the self-study is consistent with accreditation requirements. All team members are volunteers and are generally not compensated.
4. Judgment by the accrediting organization: Accrediting organizations have decision-making bodies, referred to as commissions, made up of administrators and faculty from institutions and programs as well as public members. Following an extensive review of the steps outlined above, these commissions may confirm accreditation for new institutions and programs, reaffirm accreditation for ongoing institutions and programs, or deny accreditation to institutions and programs.
5. Periodic external review: Following judgment by an accrediting organization, institutions and programs continue to be reviewed over time (Eaton, 2012).

Types of U.S. Accrediting Organizations

Today, there are four types of accrediting organizations in the United States recognized by CHEA: (1) regional accreditors, responsible for accrediting public and private, mainly nonprofit and degree-granting, two- and four-year institutions; (2) national faith-related accreditors, which accredit religiously affiliated and doctrinally based institutions, mainly nonprofit and degree-granting; (3) national career-related accreditors, which accredit mainly for-profit, career-based, single-purpose institutions, both degree and non-degree; and (4) programmatic accreditors, which accredit specific programs, professions, and freestanding schools, including law, engineering, and the health professions (Eaton, 2012). This study concentrates on two of these types, regional and programmatic accreditors, which directly affect accreditation for the service academies.

Regional Accrediting Agencies

This study concentrates on the two regional accrediting agencies responsible for accrediting the three service academies. Both the U.S. Military Academy and the U.S. Naval Academy are located within MSCHE's geographical area of responsibility, while the Air Force Academy is located within NCAHLC's geographical area of responsibility. These two accrediting agencies have different requirements to ensure that the academies meet their corresponding standards of accreditation. Table 1 provides an outline of the MSCHE Accreditation Standards and the HLC Criteria for Accreditation. In the Middle States region, an institution must meet 14 different standards to demonstrate that it meets or continues to meet accreditation requirements. The 14 Standards of Accreditation are divided into two areas, namely institutional context (Standards 1-7) and educational effectiveness (Standards 8-14). In the case of the HLC, there are five criteria that an institution must meet. In addition, each criterion has multiple core components integrated within it, and an institution meets a criterion only if all of its core components are met. The criteria for accreditation are the standards of quality by which the HLC determines whether an institution merits accreditation or reaffirmation of accreditation. The institution must be judged to meet all five criteria to merit accreditation. The HLC will grant or continue accreditation (with or without conditions or sanctions), deny accreditation, or withdraw accreditation based on the outcome of its review (MSCHE, 2015; HLC, 2014).

Middle States Commission on Higher Education (MSCHE)

In the Middle States region, to be eligible for Candidate Status, Initial Accreditation, or Reaccreditation, an institution must demonstrate that it meets or continues to meet the specific requirements of affiliation of the Commission on Higher Education (MSCHE, 2014).
Once eligibility is established, institutions must demonstrate that they meet the standards for accreditation. MSCHE allows institutions going through reaccreditation to prepare a self-study document to show that they meet the standards of accreditation. Though the oldest of the three service academies, the U.S. Military Academy was first accredited in 1949. The institution was reaccredited in 2010 and is currently in the middle of the reaccreditation cycle. The U.S. Naval Academy was first accredited in 1947, two years before the U.S. Military Academy. The institution was reaccredited in 2011 and, like her sister academy, is also in the middle of the reaccreditation cycle (MSCHE, 2014). The U.S. Naval Academy is using the self-study process for its current reaccreditation.

North Central Association of Higher Learning Commission (NCAHLC)

The HLC maintains processes for determining eligibility for accreditation, for achieving candidacy status, for achieving initial accreditation, and for maintaining accreditation. The youngest of the three service academies, the U.S. Air Force Academy, was first accredited in 1959. The academy's last reaffirmation of accreditation occurred in the 2009-2010 academic year, and the institution is also going through the reaccreditation cycle (NCAHLC, 2014). The HLC currently offers two programs for maintaining accreditation: the Program to Evaluate and Advance Quality (PEAQ) and the Academic Quality Improvement Program (AQIP). PEAQ employs a comprehensive evaluation process to determine accreditation status. The program consists of an institutional self-study, an evaluation by a team of trained peer reviewers, and final decision making by the HLC. AQIP provides an alternative evaluation process for institutions already accredited by the HLC; it is structured around quality improvement principles and processes and involves a structured set of goal-setting, networking, and accountability activities (NCAHLC, 2014). In September 2012, the HLC began a 3-year transition during which PEAQ will be replaced by two new pathways, the Standard Pathway and the Open Pathway. At the end of the 3-year transition, the HLC will have three programs for maintaining accreditation: AQIP, the Standard Pathway, and the Open Pathway. Regional accreditation assures quality by verifying that an institution meets threshold standards and is engaged in continuous improvement. The Open Pathway requires an institution to designate one major improvement effort it has undertaken as its Quality Initiative for reaffirmation of accreditation. The initiative must be designed to suit the institution's present concerns or aspirations and takes place between years five and nine of the 10-year Open Pathway cycle. A Quality Initiative may be designed to begin and be completed during this time, or it may continue an initiative already in progress or achieve a key milestone in the work of a longer initiative. The initiative is intended to allow institutions to take risks, aim high, and learn from partial success or even failure (NCAHLC, 2014).
The Quality Initiative can take one of three forms: (1) the institution designs and proposes its own Quality Initiative to suit its present concerns or aspirations; (2) the institution chooses an initiative from a menu of topics, including focusing on self-evaluation and reflection leading to revision or restatement of its mission, vision, and goals; identifying and joining a group of peer institutions to develop a benchmarking process for broad institutional self-evaluation; or undertaking a multi-year process to create systemic, comprehensive assessment and improvement of student learning; or (3) the institution chooses to participate in a commission-facilitated program (NCAHLC, 2014). The HLC determines whether an institution may participate in the Open Pathway. This determination is based upon the institution's present condition, including scheduled monitoring and its past relationship with the commission. The HLC approved the U.S. Air Force Academy's request in 2006 to participate in reaccreditation through the Open Pathway program.

Table 1
Middle States Commission on Higher Education Accreditation Standards and North Central Association of Higher Learning Commission Criteria for Accreditation

Middle States Standards
  Institutional Context
    Standard 1: Mission and Goals
    Standard 2: Planning, Resource Allocation, and Institutional Renewal
    Standard 3: Institutional Resources
    Standard 4: Leadership and Governance
    Standard 5: Administration
    Standard 6: Integrity
    Standard 7: Institutional Assessment
  Educational Effectiveness
    Standard 8: Student Admissions and Retention
    Standard 9: Student Support Services
    Standard 10: Faculty
    Standard 11: Educational Offerings
    Standard 12: General Education
    Standard 13: Related Educational Activities
    Standard 14: Assessment of Student Learning

Higher Learning Commission Criteria
  Criterion 1: Mission
  Criterion 2: Integrity: Ethical and Responsible Conduct
  Criterion 3: Teaching and Learning: Quality, Resources, and Support
  Criterion 4: Teaching and Learning: Evaluation and Improvement
  Criterion 5: Resources, Planning, and Institutional Effectiveness

Note: Each criterion has multiple core components integrated within it.
Source: MSCHE (2014) and HLC (2014)

Accession Programs

Overall, the military services use three types of accession programs that award commissions to officer candidates: (1) the service academies, (2) the Reserve Officers' Training Corps (ROTC), and (3) Officer Candidate School (OCS) for the Army, Navy, and Marine Corps, or Officer Training School (OTS) for the Air Force. Graduates of the service academies make up approximately 18% of the officer corps of the nation's armed services (GAO-03-1000, 2003). The ROTC and OCS/OTS programs are responsible for the accession of the remaining 82% of the officer corps. The services' ROTC units are located at civilian colleges and universities throughout the country, with some academic institutions offering ROTC for more than one service. During fiscal year 2005, Army ROTC was located at 273 academic institutions, Navy ROTC at 71, and Air Force ROTC at 144. That same year, 2,834 officers graduated from the service academies, compared to 6,443 and 3,188 who graduated from the ROTC and OCS programs, respectively (U.S. GAO-07-224, 2007). Officer candidates enrolled in ROTC programs must meet all graduation requirements of their academic institutions and complete required military training to receive commissions as officers.
All Air Force, Navy, and Army officers who receive scholarships and graduate from ROTC must commit to four years of active duty military service after graduation. Officers who do not receive scholarships but attend institutions as part of the ROTC program must serve three years on active duty. The OCS/OTS programs, which also augment the services' other commissioning programs, focus only on military training. The training period is short, ranging from six weeks for the Marine Enlisted Commissioning Education Program to fourteen weeks for Army OCS. Many, but not all, of the graduates of these programs have prior undergraduate degrees, and they are obligated to serve a minimum of two years on active duty as officers. Compared to the other services, the Marine Corps makes more extensive use of its OCS commissioning program. While attending a service academy is free for the students, the federal government spends a significant amount of money to support these institutions.

The United States Service Academies

There are five service academies operated by the federal government: the U.S. Air Force Academy in Colorado Springs, Colorado; the U.S. Naval Academy in Annapolis, Maryland; the U.S. Military Academy at West Point, New York; the U.S. Coast Guard Academy in New London, Connecticut; and the U.S. Merchant Marine Academy in Kings Point, New York (U.S. GAO, 1975; U.S. GAO-03-1000, 2003). The U.S. Coast Guard has a unique role in our nation's defense. As the force primarily responsible for maritime law enforcement, the U.S. Coast Guard operated under the Department of Transportation until shortly after September 11, 2001, when it was transferred into the newly created Department of Homeland Security (The United States Coast Guard Academy, 2014). The president can transfer the U.S. Coast Guard to the Department of the Navy at any time, while Congress can authorize the transfer during times of war. Similar to the U.S. Coast Guard, in time of war or national emergency the U.S. Merchant Marine becomes vital to national security as a "fourth arm of defense" (U.S. Merchant Marine Academy, 2014). Merchant ships operating as auxiliaries to the Navy deliver military troops, supplies, and equipment overseas to our forces and allies. This study focused on the academies of the three major fighting components of the military: the Air Force, the Army, and the Navy. Therefore, the U.S. Coast Guard and U.S. Merchant Marine Academies are excluded from this study. The U.S. Naval Academy, as an institution of higher learning, provides education for both Navy and Marine Corps military personnel.

The purpose of the service academies is "to provide an annual influx of career-officers and future leaders into each service" (U.S. DoD Directive 1322.22 Change 1, 2011, para. 4.1.1). The service academies are structured to provide curricula critical to the development of successful future officers in academic and military areas of achievement (U.S. GAO-03-1000, 2003). United States Department of Defense (DoD) Directive 1322.22 (Change 1, 2011) establishes policy, assigns responsibilities, and prescribes procedures for DoD oversight of the service academies. The organization of each academy, as depicted in Figure 1, comprises a superintendent, a dean of the faculty, a commandant, an athletic director, and a director of admissions.
The superintendents, who are equivalent to presidents of universities, are responsible for the immediate governance of each service academy and also serve as the commanding officers of the academies and their military posts. Additionally, the superintendents are "responsible for the day-to-day operation of the academies and the welfare of cadets or midshipmen, and staff" (U.S. DoD Directive 1322.22 Change 1, 2011, para. 4.2.5). United States DoD Directive 1322.22 (Change 1, 2011) stipulates that the deans of the faculties of the academies are responsible for "directing and managing the development and execution of an undergraduate curriculum that recognizes the requirement for graduates to understand technology while gaining a sound historical perspective and an understanding of different cultures" (para. 4.2.6). The academic deans are civilian DoD employees who oversee the academic programs and faculties. The respective commandants of the U.S. service academies serve as deans of students and supervisors of all military and professional training. Specifically, they are responsible for directing and managing the "military training programs and shall exercise command over cadets or midshipmen" (U.S. DoD Directive 1322.22 Change 1, 2011, para. 4.2.7). The directors of athletics are responsible for directing and managing the "intercollegiate athletic programs and other physical fitness programs" (U.S. DoD Directive 1322.22 Change 1, 2011, para. 4.2.8). The secretaries of the military departments may employ as many civilian faculty members as necessary (U.S. DoD Directive 1322.22, Change 1, 2011).

Review and oversight of these military higher education institutions fall under the purview of the academies' Boards of Visitors, which are responsible for inquiring into the efficiency and effectiveness of academy operations. Title 10, United States Code (2011) stipulates that the Boards of Visitors provide independent advice and recommendations to the President of the United States on matters related to morale and discipline, curriculum, instruction, physical equipment, fiscal affairs, academic methods, and any other matters relating to the academies that a Board decides to consider. Each Board consists of 15 members. The president appoints six members. The vice president or the president pro tempore of the Senate designates three members, two of whom must be members of the Senate Appropriations Committee. The speaker of the House of Representatives designates four members, two of whom must be members of the House Appropriations Committee. The Chairman of the Senate Armed Services Committee, or a designee, and the Chairman of the House Armed Services Committee, or a designee, fill the last two positions. Presidential appointees are designated for a period of three years. Each of the other nine Board members serves a minimum term of one year. Any member whose term of office has expired continues to serve until a successor is appointed. The Boards of Visitors are required to visit the academies annually, and each Board must submit a written report to the president within 60 days after its annual visit to the academy (Title 10, United States Code, 2011).

Figure 1. Organization of the U.S. Service Academies

The mission of the U.S. service academies is to graduate cadets and midshipmen with the leadership knowledge and character, and the motivation, to become career officers (White House, 2014). To be eligible to enter a service academy, an applicant must meet four basic requirements: be a U.S. citizen, have good moral character, be unmarried and have no dependents, and be between the ages of 17 and 22 (U.S. DoD Directive 1322.22, 2011). In addition, there are four basic procedures involved in applying for admission. First, candidates must request a pre-candidate questionnaire from the Admissions Office. Second, the candidate must write to his or her congressional representatives for a nomination. Third, the candidate must take the American College Test or the Scholastic Aptitude Test and schedule a physical aptitude exam in addition to a medical exam. Finally, the candidate must complete all forms and return them to the academy or academies by the deadline (White House, 2014).

The minimum SAT scores for students who apply to the service academies are 500 in the verbal section and 500 in the math section. However, the average SAT scores for those who attend the service academies are between 540 and 620 for the verbal section and between 630 and 710 for the math section. Similarly, the minimum ACT scores for students who apply to the service academies are 21 in English, 19 in social studies, 24 in mathematics, and 24 in natural science. However, the average ACT scores for those who attend the service academies are between 23 and 27 in English, between 24 and 29 in social studies, between 27 and 32 in mathematics, and between 28 and 32 in natural science. The minimum and average scores are slightly higher for the U.S. Naval Academy. Virtually all cadets and midshipmen come from the top 25% of their high school classes (White House, 2014). The National Center for Education Statistics (NCES) shows that, for data collected during the fall of 2013, the U.S. Naval Academy experienced the highest first-to-second-year retention rate, at 98%, followed very closely by the U.S. Military Academy at 96%. The same data show that the U.S. Air Force Academy reported 89% for a similar category of students (Table 2). For the same reporting period, overall graduation rates were relatively the same for all three service academies, with the U.S. Naval Academy, U.S. Air Force Academy, and U.S. Military Academy reporting 88%, 86%, and 84%, respectively (NCES, 2014). These data also show an overall graduation rate of 86% for all students at the three academies who first enrolled in the fall of 2007 (HLC, 2014; MSCHE, 2014; NCES, 2014).

Student Demographics

Table 2
Student Demographics for the Three Service Academies (Student Data for Fall 2013)

Category | U.S. Air Force Academy | U.S. Military Academy | U.S. Naval Academy
Student population | 3,993 | 4,591 | 4,526
Student-to-faculty ratio | 8 to 1 | 7 to 1 | 9 to 1
Instructional faculty (full time / part time) | 168 / 5 | 205 / 0 | 370 / 69
Research and public service faculty (full time / part time) | 9 / 1 | 0 / 0 | 9 / 1
Total faculty (full time / part time) | 177 / 6 | 205 / 0 | 379 / 70
Student gender (male / female) | 78% / 22% | 83% / 17% | 78% / 22%
Race/ethnicity: American Indian or Alaskan | 0% | 1% | 0%
Race/ethnicity: Asian | 5% | 6% | 6%
Race/ethnicity: Black or African American | 6% | 8% | 7%
Race/ethnicity: Hispanic/Latino | 9% | 10% | 11%
Race/ethnicity: Native Hawaiian or other | 1% | 1% | 1%
Race/ethnicity: White | 67% | 69% | 65%
Race/ethnicity: Two or more races | 7% | 4% | 7%
Race/ethnicity: Unknown | 4% | 1% | 1%
Race/ethnicity: Non-resident alien (international student) | 1% | 1% | 1%
Applicants, fall 2013 (total / male / female) | 9,634 / 7,329 / 2,305 | 15,408 / 12,283 / 3,125 | 19,146 / 14,710 / 4,436
Percent admitted (total / male / female) | 15% / 15% / 16% | 9% / 9% / 7% | 7% / 7% / 8%
Percent admitted who enrolled (total / male / female) | 78% / 79% / 74% | 85% / 86% / 81% | 85% / 86% / 82%
SAT Critical Reading (25th / 75th percentile) | 590 / 690 | 580 / 695 | 570 / 680
SAT Math (25th / 75th percentile) | 620 / 710 | 600 / 690 | 610 / 700
SAT Writing (25th / 75th percentile) | 550 / 665 | (not reported) | (not reported)
ACT Composite (25th / 75th percentile) | 29 / 32 | 27 / 30 | 25 / 32
ACT English (25th / 75th percentile) | 27 / 33 | 27 / 31 | 26 / 32
ACT Math (25th / 75th percentile) | 28 / 33 | 26 / 31 | (not reported)
ACT Writing (25th / 75th percentile) | 8 / 9 | (not reported) | (not reported)
First-to-second-year retention rate | 89% | 96% | 98%
Overall graduation rate (began in fall 2007) | 86% | 84% | 88%
6-year graduation rate (male / female) | 87% / 82% | 84% / 83% | 88% / 87%

Sources: NCES, MSCHE, and HLC

Nomination

To be considered for an appointment to a service academy, a prospective candidate must obtain a nomination. Each congressional office with nominating authority develops its own process for service academy nominations (Congressional Research Service, 2012). The lack of uniformity in congressional nominations has caught the attention of many transparency advocates, who have consistently called for more openness and accountability to taxpayers. These advocates argue that the service academies are taxpayer-funded institutions and that the American public deserves to know whom their elected representatives are nominating to these institutions. For their part, many congressional representatives argue that most high school seniors who are nominated to the service academies are minors and, therefore, consider the issue one of privacy preservation (Gannet, 2014). The respective academy's admissions office is required to send written acknowledgement to applicants whose nominations were submitted to the academy. Under special circumstances, foreign students may be allowed to receive instruction at one of the service academies, but with the stipulation that the instruction be reimbursed. However, the undersecretary of defense for policy may waive reimbursement either wholly or partially (U.S. DoD Directive 1322.22, 2011).

Appointment

In his capacity as Commander in Chief, the President of the United States is the appointing authority for all service academy admissions (Congressional Research Service, 2012). The president makes direct appointments in several military-affiliated categories, including children of career military personnel, deceased or disabled veterans, military or civilian personnel in missing status, and Medal of Honor recipients (Congressional Research Service, 2012; U.S. DoD Directive 1322.22, 2011).
Service Academy Preparatory Schools

Each of the military departments operates a postsecondary educational institution known as an academy preparatory school for the purpose of providing enhanced opportunities for selected candidates to be appointed to the service academies. The schools provide an avenue for effective transition to the service academy environment, and each of the three service academies identified in this study operates its own preparatory school. The mission of the academy preparatory schools is to prepare candidates judged to need additional academic preparation for admission (U.S. DoD Directive 1322.22, 2011). The services have been challenged in recruiting officers of racial and ethnic minorities, particularly African Americans and Hispanics. In fact, all services have had problems recruiting newly commissioned minority officers to meet DoD's goal of maintaining a racially and ethnically diverse officer corps (U.S. GAO-07-224 Military Personnel, January 2007). Some pundits, like Fleming (2012), argue that the academies should stop recruiting below-par students who use the academy preparatory schools as back doors into their freshman years. He also contends that these below-par students fill slots for which better qualified applicants are rejected. In an effort to provide an opportunity for minorities to attend the service academies, U.S. DoD Directive 1322.22 (2011) states:

[P]rimary consideration for enrollment in the Academy Preparatory Schools shall be accorded to nominees to fill officer accession objectives for minorities, including women, and for those enlisted applicants who, by their professional performance and demonstrated ability, deserve consideration for appointment to an academy. (para 4.9.2)

Officials at the three service academies screen all applicants and identify those who can succeed at the academies but who would benefit from more preparation. The preparatory schools offer a 10-month course of instruction that combines academics, physical conditioning, and an orientation to military life. Instruction offered at the preparatory schools focuses on fundamentals to maximize students' potential for success. The preparatory schools provide firm foundations in mathematics, science, chemistry, physics, information technology, and English. Specifically, preparatory school programs "provide tailored individual instruction to strengthen candidate abilities and to correct deficiencies in academic areas emphasized by the academies" (U.S. DoD Directive 1322.22, 2011, para 4.9.4). The structure of the preparatory schools is similar to that of the service academies and is designed for a seamless transition between the two institutions (U.S. GAO-1232R, 2012). As an example, the Air Force Academy Preparatory School accepts approximately 240 students between the ages of 17 and 22 each year. The preparatory school program emphasizes the same four areas as the U.S. Air Force Academy: academics, military, athletics, and character (academyadmissions.com, 2014).

Accreditation Liaison Officers (ALOs)

Accreditation Liaison Officers (ALOs) are appointed by the chief executive officers of higher education institutions to serve as liaisons between regional accrediting commissions and their institutions on a variety of matters, particularly during the self-study and evaluation process.
ALOs are usually the preferred source of information on campuses about accreditation and the institutions' accrediting agencies (MSCHE, 2013; New England Association of Schools and Colleges Commission on Institutions of Higher Education, 2001; NCAHLC, 2014). The duties of ALOs include the following:

1. Provide an identified place on campus where information can be disseminated and questions answered about accreditation;
2. Interpret for the institution the policies and procedures of the commission, and call attention to matters that may have application to the institution, particularly when the commission adopts or revises policies;
3. Undertake the institution's accreditation planning and ensure that the logistics of accreditation work are accomplished at the institution;
4. Serve as the chair of or a resource person for the self-study committee, coordinate preparations for evaluation visits, and oversee follow-up studies resulting from the evaluation;
5. Maintain a file of all accreditation material;
6. Attend meetings with the commissions as well as meetings of ALOs in order to receive information and to participate in policy development;
7. Prepare accreditation-related reports as necessary; and
8. Notify the commissions in advance of substantive changes that are planned by the institution.

An active and involved ALO can enhance the relationship between an institution and the commission and can give the institution a more participatory role in accreditation (MSCHE, 2013; New England Association of Schools and Colleges Commission on Institutions of Higher Education, 2001; NCAHLC, 2014).

Statement of Problem

According to the United States Department of Defense, the academies and preparatory schools cost more than $1.6 billion to operate during fiscal year 2010 (U.S. GAO-12-327R, 2012). There has been much debate about the quality of education offered at each of the service academies (Fleming, 2012; U.S. GAO-12-327R, 2012), but little, if any, research compares the service academies' educational outcomes to those of traditional institutions. Some argue that the service academies are subject to more scrutiny by the public and especially by the federal government, which is exclusively responsible for funding the institutions (Fleming, 2012). As a result of this scrutiny, they are held to a higher level of accountability by U.S. taxpayers. Wellman (2001) suggested that "accountability is necessary for preserving the compact between higher education and society" (p. 48). Moreover, accountability focuses on the priorities of constituencies who have a vested interest in institutional performance (Alexander, 2000). Given the downsizing of both the military and the federal budget, Fleming (2012) asked, "Do the service academies deserve to continue?" (p. 1). In light of the high cost of the service academies, and to aid in understanding how they compare to traditional institutions of higher education in terms of academic preparation, this study sought to analyze institutional effectiveness in terms of student learning outcomes. To date, the service academies have not been compared to each other, but the common denominator among them is that they must meet accreditation requirements in the same manner as traditional colleges and universities. Therefore, a comparison may yield evidence of benefits not readily measurable in terms of budgetary considerations. The U.S. Air Force Academy, the U.S. Naval Academy, and the U.S. Military Academy at West Point each operate bachelor's degree programs while also commissioning graduates as military officers. As indicated in Table 3, the service academies offer numerous academic majors in the science, technology, engineering, and mathematics fields. To gauge the academic outcomes of the service academies, this study provides an in-depth overview of the engineering programs offered at the three service academies.

Table 3
Academic Majors Offered at Three U.S. Service Academies

U.S. Air Force Academy: Aeronautical Engineering, Astronautical Engineering, Behavioral Sciences, Biology, Chemistry, Civil Engineering, Computer Engineering, Computer Science, Economics, Electrical Engineering, English, Environmental Engineering, Foreign Area Studies, Geospatial Science, History, Legal Studies, Management, Mathematics, Mechanical Engineering, Meteorology, Military and Strategic Studies, Operations Research, Philosophy, Political Science, Systems Engineering

U.S. Naval Academy: Aerospace Engineering, Arabic, Chemistry, Chinese, Computer Science, Computer Engineering, Cyber Operations, Economics, Electrical Engineering, English, Engineering (General), General Science, History, Information Technology, Mathematics, Mechanical Engineering, Naval Architecture and Marine Engineering, Nuclear Engineering, Ocean Engineering, Oceanography, Operations Research, Physics, Political Science, Quantitative Economics, Systems Engineering

U.S. Military Academy: American Politics; Art, Philosophy and Literature; Chemical Engineering; Civil Engineering (General); Comparative Politics; Computer Science; Defense and Strategic Studies; Economics; Electrical Engineering; Engineering Management; Engineering Psychology; Environmental Engineering; Environmental Geography; Environmental Science; Foreign Area Studies; Foreign Languages; Geospatial Information Science; History; Human Geography; Information Technology; International Relations; Kinesiology; Leader Development Science; Legal Studies; Life Science; Management; Mathematical Science; Mechanical Engineering; Nuclear Engineering; Operations Research; Physics; Psychology; Sociology; Systems Engineering; Systems Management

Approximately 12,000 officer candidates attend the service academies each year. Each service academy cohort includes 1,000 to 1,200 students (GAO, February 2012). During the academic year that started in the fall of 2013, enrollment for the three service academies was 3,993 for the U.S. Air Force Academy (The Higher Learning Commission, 2013), 4,591 for the U.S. Military Academy (NCES, 2013), and 4,526 for the U.S. Naval Academy (MSCHE, 2013). Similar to traditional institutions of higher learning, the service academies must go through the accreditation process. The three service academies provide officer candidates with a rigorous academic program and a complete immersion into military life. Admission to the academies is highly competitive, and cadets receive free tuition, coverage of other expenses, and a stipend of $600 per month. In addition, academy graduates begin their careers with a number of advantages. By the time they begin active duty, they are well steeped in the procedures and expectations of their service. They enter with a sizable cohort of peers, receive regular commissions, and are more likely to be placed in the occupation of their choice.

Purpose of the Study

Accreditation, accountability, and assessment overlap but are distinct processes fundamental to quality assurance (Hendel & Lewis, 2005). This study discusses accreditation within the context of accountability.
Accountability refers to "a focus on the conditions that must exist within any educational system in order for that system to be accountable for the achievement of its goals" (Barbee & Bouck, 1974, p. xiv). Trow (1996) defined accountability as "the obligation to report to others, to explain, to justify, to answer about how resources have been used, and to what effect" (p. 310). Whereas accreditation entails participation by peers, accountability systems exclude the external peer review that has characterized institutional accreditation (Hendel & Lewis, 2005). Quality is determined by whether an institution adheres to standards established by accrediting agencies or appropriate governmental agencies (Johnstone, 2001). Accountability must be part of any solution (Carey, 2010), and this means harnessing institutional self-interest by making what is good for colleges and what is good for society one and the same. With deft, sustained implementation, accountability can preserve institutional independence while helping to close the rift between the academy and the public at large (Carey, 2010). In addition, as costs rise, accountability pressures increase (Hendel & Lewis, 2005). This study focused on accreditation at the three service academies to identify unique similarities or differences among the institutions, concentrating on the various engineering programs offered at each of them. The purpose of this study was to address the following research questions:

1. Do U.S. service academies' Accreditation Liaison Officers (ALOs) believe that these institutions are held accountable to a different standard for institutional effectiveness and student outcomes?
2. How do ALOs at the service academies believe that their institutions are preparing officers compared to other accession programs?
3. What do ALOs believe are some unique similarities or differences among the service academies specifically relating to institutional effectiveness and student outcomes?

Importance of the Study

President Obama's proposal during the summer of 2013 to rate institutions of higher education is the latest effort to hold colleges accountable for cost, value, and quality. It joins the 2006 recommendations of former Secretary of Education Margaret Spellings' Commission on the Future of Higher Education and numerous other efforts calling for increased accountability and transparency at colleges and universities (Cowan, 2014). This study is important because it is the first in-depth analysis focusing on similarities and differences among the three service academies. Since this study focuses primarily on engineering programs at the service academies, the results will highlight the uniqueness of the various programs offered by the three institutions. The results of the study could be useful to university administrators and faculty as well as to prospective engineering students with an interest in military service.

Limitations and Delimitations

This study is restricted to the three service academies, which are located in three different geographic areas. Two of these institutions, the U.S. Naval Academy and the U.S. Military Academy at West Point, are accredited by MSCHE, while the U.S. Air Force Academy is accredited by the HLC. While all engineering programs offered by these institutions are accredited by ABET, the specific programs vary from one institution to another based on the needs of the specific branch of the military.
This study has a number of potential weaknesses or limitations. First, the service academies are unique in the sense that they exist to ensure that the armed forces have institutions whose curricula they control to meet mission requirements. Second, due to the distances involved, visits to the three institutions to observe students were difficult. Instead, the researcher relied on the ALOs at the respective institutions to provide information for this study. Third, given that this study focuses on accountability, it was expected that the ALOs might identify other individuals who would need to be interviewed in an effort to gain better insight into the three service academies.

Definition of the Terms

ABET: The specialized accrediting organization for programs in applied sciences, engineering, and technology; the name was formerly the acronym for the Accreditation Board for Engineering and Technology.

Academy(ies): Refers to the U.S. Military, Naval, or Air Force Academy (see Service Academies).

Accountability: Refers to "a focus on the conditions that must exist within any educational system in order for that system to be accountable for the achievement of its goals" (Barbee & Bouck, 1974, p. xiv). Trow (1996) defined accountability as "the obligation to report to others, to explain, to justify, to answer about how resources have been used, and to what effect" (p. 310).

Accreditation: A process of external quality review created and used by higher education to scrutinize colleges, universities, and programs for quality assurance and quality improvement (Eaton, 2012).

Accreditation Liaison Officer: An individual appointed by the chief executive officer of an affiliated institution to work with appropriate individuals and agencies on matters of accreditation.

Appointment: Applicants who are selected for admission to the academies are appointed by the president as cadets or midshipmen. Those who complete the course of instruction at an academy may be appointed as commissioned officers in the Armed Forces.

Board of Visitors: A group of 15 members appointed by the president, the vice president or president pro tempore of the Senate, the speaker of the House of Representatives, and the chairpersons of the Senate and House Armed Services Committees that provides independent advice and recommendations to the President of the United States on matters related to the service academies.

Cadet: Title assigned to a student enrolled in a service academy who, upon successful degree completion, accepts a commission to serve as an officer in the United States armed forces. Such a student is referred to as a midshipman at the U.S. Naval Academy.

Flag Officer: A commissioned officer in the U.S. armed forces senior enough to be entitled to fly a flag to mark the position from which the officer exercises command. In the U.S. Navy or U.S. Coast Guard, the officer holds a rank higher than captain, such as rear admiral, vice admiral, or admiral. In the U.S. Army, U.S. Air Force, or U.S. Marine Corps, the term applies to an officer holding the rank of brigadier general, major general, lieutenant general, or general.

General Officer: See Flag Officer.

Higher Learning Commission: The accrediting body for institutions of higher learning in Arizona, Arkansas, Colorado, Illinois, Indiana, Iowa, Kansas, Michigan, Minnesota, Missouri, Nebraska, New Mexico, North Dakota, West Virginia, Wisconsin, and Wyoming.
Middle States Commission on Higher Education: The accrediting body for institutions of higher learning in Delaware, the District of Columbia, Maryland, New Jersey, New York, Pennsylvania, Puerto Rico, and the U.S. Virgin Islands.

Midshipman: See Cadet above.

Nomination: The recommendation of candidates for vacancies at the service academies by one holding nominating authority, including the president, the vice president, Members of Congress and the Delegates, certain government officials of U.S. possessions, the secretaries of the military departments, and the superintendents of the academies.

Service Academies: The U.S. Military Academy (USMA), the U.S. Naval Academy (USNA), and the U.S. Air Force Academy (USAFA), each of which runs a four-year program that provides successful candidates with a Bachelor of Science degree and a commission as a military officer.

Organization of the Study

This dissertation is organized into five chapters. The first chapter provides information relevant to the problem under study, the research questions, limitations and delimitations, and operational definitions. Chapter Two provides a comprehensive overview of accreditation, including international accreditation in higher education, critical assessment, cost, and the effects of accreditation in terms of student learning outcomes, as well as the specialized accrediting organizations. Chapter Three describes the methodology used for this study, and Chapter Four reports the results of the study. Finally, Chapter Five offers a discussion of the findings and the implications for practice and suggests topics for future research associated with the service academies.

CHAPTER TWO: LITERATURE REVIEW

The significance of accreditation in higher education is recognized globally. This chapter examines the evolution of accreditation in the United States, with a focus on regional and specialized organizations. Accreditation is widely considered to be a driving force behind advances in both student learning and outcomes assessment. However, several factors, such as the lack of faculty buy-in and the lack of institutional investment, can positively or negatively affect outcomes assessment.

Defining Accreditation

Accreditation is a process of external quality review created and used by higher education to scrutinize colleges, universities, and programs for quality assurance and quality improvement (Eaton, 2012). To earn and maintain accreditation, colleges and universities must demonstrate to colleagues from peer institutions that they meet or surpass mutually agreed-upon standards (MSCHE, 2009). The federal government relies on accreditation to assure the quality of institutions and programs for which it provides federal funds and aid to students (Eaton, 2012).

Early Institutional Accreditation

Accreditation dates back to the self-initiated external review of Harvard in 1642. This external review, done only six years after Harvard's founding, was intended to ascertain the rigor of its courses through review by peers from universities in Great Britain and Europe (Davenport, 2000; Brittingham, 2009). This type of self-study is not only the first example of peer review in America, but it also highlights the need for self- and peer-regulation in the U.S. educational system in the absence of federal governmental regulation. This lack of federal government intervention in the evaluation of educational institutions is a main reason accreditation in the U.S. developed the way it did (Brittingham, 2009).
While the federal government does not directly accredit educational institutions, the first example of an accrediting body arose through a state government. In 1784, the New York Board of Regents was established as the first regionally organized accrediting organization. The Board was set up like a corporate office, with the educational institutions as franchisees. Each college or university had to meet Board-mandated standards in order to receive state financial aid (Blauch, 1959). Not only did Harvard pioneer accreditation in the U.S. with its early external review of its courses, but the president of Harvard University also initiated a national movement in 1892 when he organized and chaired the Committee of Ten, an alliance formed among mostly college and university presidents to seek standardization of educational philosophies and practices through a system of peer approval (Davis, 1945; Shaw, 1993). Around this same time, there were associations and foundations that undertook accreditation reviews of educational institutions based on their own standards. Associations such as the American Association of University Women, the Carnegie Foundation, and the Association of American Universities would, for a variety of different reasons and clientele (e.g., gender equality, professorial benefits), evaluate various institutions and generate lists of approved or accredited schools. These associations responded to their constituents' desire to have accurate information regarding the validity and efficacy of the different colleges and universities (Orlans, 1975; Shaw, 1993).

Regional Accreditation, 1885-1919

When these associations declined to broaden or continue their accrediting practices, individual institutions united to form regional accrediting bodies to assess secondary schools' adequacy in preparing students for college (Brittingham, 2009). Colleges were measured by the quality of the students they admitted, based on standards at the secondary school level assessed by the accrediting agency. The regional accrediting agencies also focused on creating a list of colleges that were good destinations for incoming freshmen. If an institution was a member of a regional accreditation agency, it was considered an accredited college. More precisely, the institutions that belonged to an accrediting agency were considered colleges, while those that did not belong were not (Blauch, 1959; Davis, 1932; Ewell, 2008; Orlans, 1974; Shaw, 1993). Regional accrediting bodies were formed in the following years: the New England Association of Schools and Colleges in 1885; the Middle States Association of Colleges and Secondary Schools (MSCSS) and MSCHE in 1887; the North Central Association of Colleges and Schools (NCA) and the Southern Association of Colleges and Schools in 1895; the Northwest Commission on Colleges and Universities (NWCCU) in 1917; and, finally, the Western Association of Schools and Colleges in 1924 (Brittingham, 2009). Regional accrediting associations created instruments for the purpose of establishing unity and standardization in entrance requirements and college standards (Blauch, 1959). For example, in 1901 MSCHE and MSCSS created the College Entrance Examination Board to standardize college entrance requirements. The NCA also published its first set of standards for its higher education members in 1909 (Brittingham, 2009).
Although there were functioning regional accreditation bodies in most states, in 1910 the DOE created its own national list of recognized (accredited) colleges. As a result of public pressure to keep the federal government from controlling higher education directly, President Taft blocked the publication of the list of colleges, and the DOE discontinued its active pursuit of accrediting schools. Instead, it reestablished itself as a resource for the regional accrediting bodies in regard to data collection and comparison (Blauch, 1959; Ewell, 2008; Orlans, 1975).

Regional Accreditation, 1920-1950

With the regional accrediting bodies in place, ideas of what an accredited college was became more diverse (e.g., vocational colleges, community colleges). Out of the greater differences among schools in regard to school types and institutional purposes arose a need for more qualitative measures and a focus on high rather than minimum outcomes (Brittingham, 2009). Because qualitative standards became the norm, school visits by regional accreditors became necessary once a school demonstrated struggles. The regional organizations began to measure success (and, therefore, grant accredited status) on whether an institution met standards outlined in its own mission rather than a predetermined set of criteria (Brittingham, 2009). In other words, if a school did what it said it would do, it could be accredited. The accreditation process later became a requirement for all member institutions. Self- and peer-reviews, which became a standard part of the accreditation process, were undertaken by volunteers from the member institutions (Ewell, 2008).

Accrediting bodies began to be challenged as to their legitimacy in classifying colleges as accredited or not. The Langer Case in 1938 is a landmark case that established the standing of accrediting bodies in the United States. Governor William Langer of North Dakota lost a legal challenge of the NCA's denial of accreditation to North Dakota Agricultural College. This ruling carried over to other legal cases in which the decision that accreditation was a legitimate and voluntary process was upheld (Fuller & Lugg, 2012; Orlans, 1974).

In addition to the regional accrediting bodies, there arose other associations meant to regulate the accrediting agencies themselves. The Joint Commission on Accrediting was formed in 1938 to validate legitimate accrediting agencies and discredit questionable or redundant ones. After some changes to the mission and the membership of the Joint Commission on Accrediting, the name was changed to the National Commission on Accrediting (Blauch, 1959).

Regional Accreditation, 1951 to Present

The period between 1951 and 1985 has been called the golden age of higher education and was marked by increasing federal regulation. During this period, key developments in the accreditation process included the standardization of the self-study, the execution of the site visit by colleagues from peer institutions, and the regular, cyclical visitation of institutions (Woolston, 2013). With the passage of the Veterans' Readjustment Assistance Act of 1952, the U.S. Commissioner of Education was required to publish a list of recognized accreditation associations (Bloland, 2001).
This act provided education benefits directly to veterans of the Korean War rather than to the educational institutions they attended, increasing the importance of accreditation as a mechanism for recognizing legitimacy (Woolston, 2012). A more “pivotal event” occurred in 1958 with the National Defense Education Act's (NDEA) allocation of funding for NDEA fellowships and college loans (Weissburg, 2008). The NDEA limited participating institutions to those that were accredited (Gaston, 2014). In 1963, the U.S. Congress passed the Higher Education Facilities Act, requiring that higher education institutions receiving federal funds through enrolled students be accredited.

Arguably the most striking expansion of accreditation's mission coincided with the passage of the Higher Education Act (HEA) (Gaston, 2014). Title IV of this legislation expressed the intent of Congress to use federal funding to broaden access to higher education. According to Gaston (2014), having committed to this much larger role in encouraging college attendance, the federal government found it necessary to affirm that institutions benefiting from such funds were worthy of them. Around the same time, the National Committee of Regional Accrediting Agencies (NCRAA) became the Federation of Regional Accrediting Commissions of Higher Education (FRACHE). The Higher Education Act was first signed into law in 1965. That law strengthened the resources available to higher education institutions and provided financial assistance to students enrolled at those institutions. The law was especially important to accreditation because it forced the U.S. Department of Education to determine and list a much larger number of institutions eligible for federal programs (Trivett, 1976). In 1967, the NCA revoked Parsons College's accreditation, citing “administrative weakness” and a $14 million debt. The college appealed, but the courts denied the appeal on the basis that the regional accrediting associations were voluntary bodies (Woolston, 2012). The need to deal with a much larger number of potentially eligible institutions led the U.S. Commissioner of Education to create, within the Bureau of Higher Education, the Accreditation and Institutional Eligibility Staff (AIES) with an advisory committee. The purpose of the AIES, which was created in 1968, was to administer the federal recognition and review process involving the accrediting agencies (Dickey & Miller, 1972).

In 1975, the National Commission on Accrediting and FRACHE merged to form a new organization called the Council on Postsecondary Accreditation (COPA). The newly created national accreditation association encompassed an astonishing array of types of postsecondary education, to include community colleges, liberal arts colleges, proprietary schools, graduate research programs, bible colleges, trade and technical schools, and home-study programs (Chambers, 1983).

Since 1985, accountability has become the issue of paramount importance in the field of education. According to Woolston (2012), key developments in the accreditation process during this period include rising costs in higher education resulting in high student loan default rates, as well as increasing criticism of accreditation for a number of apparent shortcomings, most notably a lack of demonstrable student learning outcomes. At the same time, accreditation has been increasingly and formally defended by various champions of the practice.
For example, congressional hostility reached a crisis stage in 1992 when Congress, in the midst of debates on the reauthorization of the Higher Education Act, threatened to bring to a close the role of the accrediting agencies as gatekeepers for financial aid. During the early 1990s, the federal government grew increasingly intrusive in matters directly affecting accrediting agencies (Bloland, 2001). As a direct consequence, Subpart 1 of Part H of the Higher Education Act amendments involved an increased role for the states in determining the eligibility of institutions to participate in the student financial aid programs of the aforementioned Title IV. For every state, this meant the creation of a State Postsecondary Review Entity (SPRE) that would review institutions that the USDE secretary had identified as having triggered such review criteria as high default rates on student loans (Bloland, 2001). The State Postsecondary Review Entities were short lived and, in 1994, were abandoned largely because of a lack of adequate funding. The 1992 reauthorization also created the National Advisory Committee on Institutional Quality and Integrity (NACIQI) to replace the AIES.

For several years, the regional accrediting agencies entertained the idea of pulling out of COPA and forming their own national association. Based on dissatisfaction with the organization, regional accrediting agencies proposed a resolution to terminate COPA by the end of 1993, and, following a successful vote on the resolution, COPA was effectively terminated (Bloland, 2001). A special committee, generated by the COPA plan of dissolution of April 1993, created the Commission on Recognition of Postsecondary Accreditation (CORPA) to continue the work of recognizing accrediting agencies (Bloland, 2001). However, CORPA was formed primarily as an interim organization to continue national recognition of accreditation. In 1995, national leaders in accreditation formed the National Policy Board (NPB) to shape the creation and legitimation of a national organization overseeing accreditation. These leaders were adamant that the new organization should reflect higher education's needs rather than those of postsecondary education more broadly. Following numerous intensive meetings, a new organization named the Council for Higher Education Accreditation (CHEA) was formed in 1996 as the official successor to CORPA (Bloland, 2001).

In 2006, the Spellings Commission on the Future of Higher Education delivered the verdict that accreditation “has significant shortcomings” (USDE Test, 2006, p. 7) and accused accreditation of being both ineffective and a barrier to innovation. Since the release of the Spellings Commission's report, the next significant event on the subject of accreditation came with President Barack Obama's State of the Union Address on February 12, 2013.
In conjunction with the president's address, the White House released a nine-page document titled “The President's Plan for a Strong Middle Class and a Strong America.” The document stated that the president would call on Congress to consider value, affordability, and student outcomes in making determinations about which colleges and universities receive access to federal student aid, either by incorporating measures of value and affordability into the existing accreditation system or by establishing a new, alternative system of accreditation that would provide pathways for higher education models and colleges to receive federal student aid based on performance and results (White House, 2013).

Specialized Accreditation

Specialized accreditation, also referred to as programmatic accreditation, focuses on the specialized training and knowledge needed for professional degrees and careers. Specialized accreditation bodies include the Accreditation Council for Pharmacy Education, the Accrediting Council on Education in Journalism and Mass Communications, the Council on Accreditation of Nurse Anesthesia Educational Programs, the Council on Social Work Education Office of Social Work Accreditation, and the Teacher Education Accreditation Council, Inc. (CHEA, 2014). Programmatic accreditation is granted and monitored by national organizations, unlike institutional accreditation granted by bodies such as the Western Association of Schools and Colleges, the Southern Association of Colleges and Schools, and the North Central Association of Colleges and Schools, which are organized regionally (Adelman & Silver, 1990; Eaton, 2009; Hagerty & Stark, 1989).

Institutional self-study is the cornerstone of establishing and maintaining programmatic accreditation. It ensures that the institution serves the best interests of the profession while providing the learning environment, leadership, qualified instructors, and facilities necessary to meet the learning goals of the professional degree or career (Bloland, 2001; Gaston & Ochoa, 2013). As noted in the Global University Network for Innovation's (2007) publication, programmatic accreditation focuses on academic programs while supporting overall institutional accreditation goals, the two thus working hand-in-hand for overall institutional success. Gaston and Ochoa (2013) contend that professional guidance through program accreditation protects the profession; standards are thus maintained to benefit practitioners and protect the public. Coordinating institutional accreditation efforts where possible can be cost effective, since overlap exists between the processes of regional and programmatic accreditation (Shibley & Volkwein, 2002). Programmatic accrediting organizations recognized by CHEA affirm that their standards and processes are consistent with the academic quality, improvement, and accountability expectations that CHEA has established (CHEA, 2014). Institutions acknowledge the pressure of meeting not only institutional accreditation but also specialized accreditation of individual programs, upholding the notion of efficacy of students and professions (Bloland, 2001).
Specialized program accreditation carries distinct importance for institutional quality assurance: the credibility of a program accreditation review is strengthened because it focuses on a particular area of study and is carried out by colleagues from peer institutions who are specialists in the discipline (Ratcliff, 1996). Research on program accreditation suffers from the same lack of volume and rigor as research on institutional accreditation, and strong faculty involvement and instruction are linked to individual program accreditation (Cabrera, Colbeck, & Terenzini, 2001; Daoust, Wehmeyer, & Eubank, 2006). While studies on student outcomes in terms of measuring competencies find that program accreditation does not provide enough support for student success (Hagerty & Stark, 1989), program accreditation outlines the parameters of professional education (Ewell, Wellman, & Paulson, 1997; Hagerty & Stark, 1989) and upholds national professional standards (Bardo, 2009; Floden, 1980; Raessler, 1970). This situation calls for further empirical research on specialized accreditation, given its importance to students' educational and professional achievement.

ABET, the programmatic accrediting entity of interest in this study, is recognized by CHEA as the organization responsible for the accreditation of educational programs leading to degrees in applied science, computing, engineering, and engineering technology (ABET, 2013). ABET accreditation is a form of quality assurance, declaring that a program meets the quality standards set by the technical profession. ABET's academic accreditation involves an external quality review by a team of experts from academe or industry who volunteer their professional knowledge and experience to quality assurance and improvement in education. The ABET accreditation review process typically takes 18 months to complete, starting with a program's formal request for an ABET review (ABET, 2013).

Current State and Future of Accreditation

Accreditation in higher education is at a crossroads. Since the 2006 Spellings Report called for more government oversight of accreditation to ensure public accountability, the government and critics have scrutinized a system that had been nongovernmental and autonomous for several decades (Eaton, 2012). The U.S. Congress is currently in the process of reauthorizing the Higher Education Act, and it is expected to address accreditation. All the while, CHEA and other accreditation supporters have been attempting to convince Congress, the academy, and the public at large of accreditation's current and future relevance to quality higher education. In anticipation of the HEA's reauthorization, NACIQI had the charge of providing the U.S. Secretary of Education with recommendations on recognition, accreditation, and student aid eligibility (NACIQI, 2012). The committee advised that accrediting bodies should continue their gatekeeping role for student aid eligibility, but it also recommended some changes to the accreditation process.
These changes included more communication and collaboration among accreditors, states, and the federal government to avoid overlapping responsibilities; moving away from regional accreditation and toward sector- or mission-focused accreditation; creating an expedited review process and developing more gradations in accreditation decisions; developing more cost-effective data collection and consistent definitions and metrics; and making accreditation reports publicly available (NACIQI, 2012).

However, two members of the committee did not agree with the recommendations and submitted a motion to include the Alternative to the NACIQI Draft Final Report, which suggested eliminating accreditors' gatekeeping role; creating a simple, cost-effective system of quality assurance that would revoke financial aid from campuses that are not financially secure; eliminating the current accreditation process altogether as a means of reducing institutional expenditures; breaking the regional accreditation monopoly; and developing a user-friendly, expedited alternative for the reaccreditation process (NACIQI, 2012). The motion did not pass, and the alternative view was not included in NACIQI's final report. As a result, Hank Brown, a former U.S. Senator from Colorado and founding member of the American Council of Trustees and Alumni, drafted a report seeking accreditation reform and reiterating the alternatives suggested above because accreditation had “failed to protect consumers and taxpayers” (Brown, 2013, p. 1).

The same year the final NACIQI report was released, the American Council on Education's Task Force on Accreditation released its own report identifying challenges and potential solutions for accreditation (ACE, 2012). The task force made six recommendations: (a) increase transparency and communication, (b) increase the focus on student success and institutional quality, (c) take immediate and noticeable action against failing institutions, (d) adopt a more expedited process for institutions with a history of good performance, (e) create common definitions and a more collaborative process between accreditors, and (f) increase cost-effectiveness (ACE, 2012). The task force also suggested that higher education “address perceived deficiencies decisively and effectively, not defensively or reluctantly” (ACE, 2012, p. 8).

President Obama has also recently spoken out regarding accountability and accreditation in higher education. In his 2013 State of the Union address, President Obama asked Congress to “change the Higher Education Act, so that affordability and value are included in determining which colleges receive certain types of federal aid” (White House, 2013a, para. 39). The address was followed by The President's Plan for a Strong Middle Class and a Strong America, which suggested achieving the above change to the HEA “either by incorporating measures of value and affordability into the existing accreditation system; or by establishing a new, alternative system of accreditation that would provide pathways for higher education models and colleges to receive federal student aid based on performance and results” (White House, 2013b, p. 5).
Furthermore, in August 2013, President Obama called for a performance-based rating system that would connect institutional performance with financial aid distributions (White House, 2013c). Because accreditation was not specifically mentioned in his plan, it is not clear whether the intention is to replace accreditation with this new rating system or to use both systems simultaneously (Eaton, 2013b). The president's actions over the last year have CHEA and other supporters of nongovernmental accreditation concerned. Calling it the “most fundamental challenge that accreditation has confronted to date,” Eaton (2012) has expressed concern over the standardized and increasingly regulatory nature of the federal government's influence on accreditation. Astin (2014) also stated that, if the U.S. government creates its own process for quality control, the U.S. higher education system is “in for big trouble” (para. 9), likening the result to the government-controlled Chinese higher education system.

Though many agree there will be an inevitable increase in federal oversight after the reauthorization of the HEA, supporters of the accreditation process have offered recommendations for minimizing the effect. Gaston (2014) provides six categories of suggestions for implementation: consensus and alignment, credibility, efficiency, agility and creativity, decisiveness and transparency, and a shared vision. The categories maintain the aspects of accreditation that have worked well and that are aspired to around the world – nongovernmental, peer review – as well as address the areas receiving the most criticism. Eaton (2013a) adds that accreditors and institutions must push for streamlining of the federal review of accreditors as a means to reduce federal oversight; better communicate the accomplishments of accreditation and how quality peer review benefits students; and anticipate any further actions the federal government may take.

While the HEA undergoes the process of reauthorization, the future of accreditation remains uncertain. There have been many reports and opinion pieces on how accreditation should change and/or remain the same, many of them with overlapping themes. Only time will tell whether the accreditors, states, and the federal government reach an acceptable and functional common ground that ensures the quality of U.S. higher education into the future.

Critical Assessment of Accreditation

Accreditation, it seems, evolved from simpler days of semi-informal peer assessment into a burgeoning industry of detailed analysis, student learning outcomes assessment, quality and performance review, financial analysis, public attention, and all-around institutional scrutiny (Bloland, 2001; Burke & Minassians, 2002; McLendon, Hearn, & Deaton, 2006; Zis, Boeke, & Ewell, 2010). Public demand that institutions establish their worth along with their contribution to student learning, together with a progressively regulated demand for institutional proof of success in the forms of evidence and assessment, changed accreditation and created a vacuum of knowledge about how accreditation works in practice (Commission on the Future of Higher Education, 2006; Dougherty, Hare, & Natow, 2009; Leef & Burris, 2002).
Measures of inputs and outputs, local control versus governmental review, performance funding versus institutional choice, rising demands, and institutional costs make it difficult to understand the trends and movement of regional accreditation in the United States but, nevertheless, have a great influence upon the actual application of accreditation standards to real-world institutions (Leef & Burris, 2002). There have been calls for increased public transparency of accreditation findings and actions, including full publication of reports by the commissions and by the institutions in question. For example, some institutions are sanctioned for deficiencies and may be given a detailed list of reporting deadlines to show compliance and ongoing quality review for those areas noted to be lacking. Some correspondence between accreditation commissions and institutions is public, and some is private. This semi-public nature of accreditation has therefore been a point of contention in the literature on accountability and assessment (Eaton, 2010; Ikenberry, 2009; Kuh, 2010). There is much debate on whether student learning outcomes are the best and most appropriate measure of education, whether they violate the purview of faculty members, and whether they are truly in the best interest of students, best practices, and learning (Eaton, 2010).

Furthermore, accreditation has evoked emotional opposition since its inception, and much of it has been expressed in very colorful language. Accreditation has been accused of “[benefiting] the small, weak, and uncertain” (Barzun, 1993, p. 60). It is a “pseudo-evaluative process, set up to give the appearance of self-regulation without having to suffer the inconvenience” (Scriven, 2000, p. 272). It is a “grossly unprofessional evaluation” (p. 271), and “it is scarcely surprising that in large areas of accreditation, the track record on enforcement is a farce” (p. 272). The system of accreditation is “structured in such a way as to subordinate the welfare of the educational institution as an entity and of the general public to the interest of groups representing limited institutional or professional concerns” (American Medical Association, 1971, F-3). It has been stated that “accreditation standards have already fallen to the lowest common denominator” (American Council of Trustees and Alumni, 2007, p. 16), and accreditation is responsible for the “homogenization of education” and the “perseverance in the status quo” (Finkin, 1973, p. 369). It is “a crazy-quilt of activities, processes and structures that is fragmented, arcane, more historical than logical, and has outlived its usefulness” (Dickeson, 2006, p. 1). It “seeks not only to compare apples with grapes, but both with camels and cods” (Wriston, 1960, p. 329). According to Gillen, Bennett, and Vedder (2010), “the inmates are running the asylum” (p. i).

The renewal of the Higher Education Act in 1992 came during a time of heightened government concern over increasing defaults on student loans. Again concerned about the lack of accountability demonstrated by accreditation, Congress established through this legislation a new institution: the State Postsecondary Review Entity (Ewell, 2008). The creation of these agencies was intended to shift the review of institutions for federal aid eligibility purposes from regional accreditors to state governments.
This direct threat to accreditation led to the dissolution of COPA and the proactive involvement of the higher education community, resulting in the creation of CHEA. It was the issue of cost that ultimately led to the abandonment of the SPREs when legislation failed to provide funding for the initiative (Ewell, 2008). Governmental concern did not dissipate, however, and, in 2006, the DOE released the critical report of the Spellings Commission (Eaton, 2012b; Ewell, 2008).

Other concerns are evident. It is problematic that accreditation is considered a chore to be accomplished as quickly and painlessly as possible rather than an opportunity for genuine self-reflection for improvement, and institutional self-assessment is ineffectual when there is faculty resistance and a lack of administrative incentive (Bardo, 2009; Commission on Regional Accrediting Commissions, n.d.; Driscoll & De Norriega, 2006; Rhodes, 2012; Smith & Finney, 2008; Wergin, 2012). One of the greatest stresses on accreditation is the tension between assessment for the purpose of improvement and assessment for the purpose of accountability, two concepts that operate in irresolvable conflict with each other (American Association for Higher Education, 1997; Burke & Associates, 2005; Chernay, 1990; Ewell, 1984, 2008; Harvey, 2004; National Advisory Committee on Institutional Quality and Integrity, 2012; Provezis, 2010; Uehling, 1987b), although some argue that the two can be effectively coordinated for significant positive results (Brittingham, 2012; El-Khawas, 2001; Jackson, Davis, & Jackson, 2010; Walker, 2010; Westerheijden, Stensaker, & Rosa, 2007; Wolff, 1990). Another concern involves the way that being held to external standards undermines institutional autonomy, which is a primary source of strength in the American higher education system (Ewell, 1984).

The Spellings Commission's report detailed a new interest from the DOE in critiquing the status quo of the regional accreditation commissions (Commission on the Future of Higher Education, 2006). Ewell (2008) describes the report as a scathing rebuke of the regional accreditors' inability to innovate and of their hindrance of quality improvement. Others called for an outright “end… to the accreditation monopoly” (Neal, 2008). There have been increasing calls within the last several years to reform or altogether replace accreditation as it is currently known (American Council of Trustees and Alumni, 2007; Gillen, Bennett, & Vedder, 2010; Neal, 2008). The American Council on Education (2012) recently convened a task force of national leaders in accreditation to explore the adequacy of current practices. The task force recognized the difficulty of reaching a consensus on many issues but recommended strengthening and reinforcing the role of self-regulation in improving academic excellence. The Spellings Commission's report signaled federal interest in setting the stage for new accountability measures for higher education, raising the worst fears of some defenders of a more autonomous, peer-regulated industry (Eaton, 2003). Accreditation's emphasis on valuing and enhancing individual institutions against regional standards was pressed into an accountability role for the entire sector of U.S. higher education (Brittingham, 2008).

Costs of Accreditation

Gaston (2014) discusses the various costs associated with accreditation. Institutions are required to pay annual fees to the accrediting body.
When an institution applies for initial accreditation, it is required to pay an application fee as well as additional fees throughout the process. The institution seeking accreditation also pays for on-site reviews. In addition to these “external” costs, there are internal costs that must be calculated as well. These internal costs can include faculty and administrative time invested in the assessment and self-study, volunteer service in accreditation activities, preparation of annual or periodic filings, and attendance at mandatory accreditation meetings (p. 9).

Costs of initial accreditation can vary greatly from region to region; however, regardless of the region, the costs are substantial. It can cost an institution $23,500.00 to pursue initial accreditation through the Higher Learning Commission (HLC), regardless of whether the pursuit is successful. This figure does not include the costs associated with the three required on-site visits, nor does it include the dues that must be paid during the application and candidacy period. For comparison, the applicant and candidacy fees for the Southern Association of Colleges and Schools are $12,500 (HLC Initial, 2012).

Shibley and Volkwein (2002) claim there has been limited research on the costs of accreditation within the literature. Calculating the cost can be very complex, as institutions must evaluate both monetary and non-monetary costs. One of the most complex and difficult items to evaluate is time. Reidlinger and Prager (1993) state there are two reasons thorough cost-based analyses of accreditation have not been pursued. First, there is a belief that voluntary accreditation is preferable to governmental control and that accreditation is worth the cost, despite the price. Second, it is difficult to relate the perceived benefits of accreditation to an actual dollar amount (p. 39).

CHEA first published an almanac in 1997 and continues to release a revised version every two years. This almanac looks at accreditation practices across the United States at the macro level, examining data such as the number of volunteers, the number of employees, and the unit operating budgets of the regional accrediting organizations. Little, if any, information is provided on costs incurred by individual institutions as they go through the accreditation process.

In 1998, the North Central Association of Colleges and Schools completed a self-study on the perception of accreditation costs among the institutions within that region (Lee & Crow, 1998). The study revealed variance in responses by institutional type. Research and doctoral institutions were less apt to claim that benefits outweighed costs, while also responding less positively than did other types of institutions regarding the effectiveness and benefits of accreditation. The study suggested that well-established research and doctoral institutions might already have internal processes in place that serve the traditional function of accreditation, in which case a traditional audit system could serve as an appropriate alternative to the formal process by the regional accreditation organization. Looking at the results across all institutional types, the self-study found that 53% of respondents considered the benefits of accreditation to outweigh the costs, approximately 33% considered the benefits to be equal to the costs, and the remaining 13% believed that the costs of accreditation outweighed the benefits.
Similar case studies were performed by Warner (1977) on the Western Association of Schools and Colleges and by Pigge (1979) on the Council on Postsecondary Accreditation. In both studies, cost was labeled a significant concern of the accreditation process. Budget allocations are also affected by accreditation results. Warner (1977) found that approximately one third of responding institutions changed budget allocations based on accreditation results; however, no further exploration was done. The majority of respondents in both the Warner (1977) and Pigge (1979) studies believed that the benefits outweighed the costs.

There are three stages that institutions go through when preparing for accreditation. Wood (2006) developed the model, which includes the release time required for the various coordinators of the accreditation review; the monetary costs of training, staff support, and materials; and the site visit of the accreditation team. Each of these stages triggers costs for the institution. Willis (1994) also examined these costs but differentiated between direct and indirect costs. Direct costs include accreditation fees, operating expenses (specific to the accreditation process), direct payments to individuals who participate in the process, self-study costs, travel costs, and site visit costs. Indirect costs measure things such as time. Willis (1994) identified indirect costs as “probably many times greater than the direct costs due mainly to the personnel time required at the institution” (p. 40). He suggests that caution be exercised when evaluating these costs and that they not be underestimated. He states that, many times, the normal tasks that cannot be completed by individuals with accreditation responsibilities are distributed to other individuals who are not identified as participants in the accreditation process.

Kennedy, Moore, and Thibadoux (1985) attempted to establish a methodology for how costs are determined, with particular interest in monetizing time spent on the accreditation process. They looked at a time frame of approximately 15 months, from the initial planning for the self-study through the presentation of the study. They used time logs to gather data on time spent by faculty and administrative staff. There was a high return rate for the time logs (79% were fully completed, with a 93% return rate overall). After reviewing the time logs, they discovered that the time spent by faculty and administrative staff accounted for 94% of the total cost of the accreditation review, over two thirds of which was attributed to administrative staff. These figures demonstrate that the time required of both faculty and administrative staff is the most significant cost involved in the accreditation process. The researchers concluded that this cost was not excessive, given the seven-year span between self-study review processes.

Kells and Kirkwood (1979) conducted a study in the Middle States region to examine the direct costs of participating in a self-study. Almost 50% of the respondents reportedly spent under $5,000.00 on the self-study, which was not considered excessive. It was also determined that, at most, between 100 and 125 people were directly involved in the self-study. The majority of participants were faculty (41-50%), followed by staff (21-30%), and very few students.
The size of the institution and the cost of the self-study itself were believed to have the greatest impact on the composition of the self-study committee.

Doerr (1983) used a case study to explore the direct costs and to examine the benefits received when university executives wish to pursue additional programmatic accreditations. Both the financial costs and the opportunity costs of institutional accreditation granted by the Southern Association of Colleges and Schools and of four programmatic accreditations cultivated by the University of West Florida in the 1981-1982 term were examined. Doerr assigned an average wage per hour to faculty and administrative staff while also adding in the cost of material supplies. It was estimated that the total direct costs of accreditation for these reviews totaled $50,030.71. It was also projected that there would be additional costs in the following years, particularly membership costs for the accrediting organizations and costs associated with preparing for additional programmatic reviews. Doerr concluded by looking at the opportunity costs and examining alternative ways this money might have been spent.

Shibley and Volkwein (2002) evaluated the benefits of a joint accreditation by conducting a case study of a public institution in the Middle States region. This institution has multiple accrediting relationships, including both institutional and programmatic reviews. They confirmed what Willis (1994) suggested: “the true sense of burden arose from the time contributed to completing the self-study process rather than from finding the financial resources to support self-study needs” (Shibley & Volkwein, 2002, p. 8). They found that the separate accreditation processes had more benefits for individuals than did the joint effort; however, the joint process was less costly, and the sense of burden for participants was reduced.

There have been several studies on the expense of accreditation and its value to institutions. Britt and Aaron (2008) distributed surveys to radiology programs without specialized accreditation. These programs reported expense as the primary factor in not pursuing accreditation. A secondary consideration was the time required to go through the process. Many respondents indicated that a decrease in the expense would allow them to consider pursuing accreditation in the future. Bitter, Stryker, and Jens (1999) and Kren, Tatum, and Phillips (1993) looked at specialized accreditation for accounting. Both studies found that non-accredited programs believed that costs outweighed benefits. Many programs claim they follow accreditation standards; however, there is no empirical evidence to prove that this is true. Since these programs do not go through the accreditation process, there is no way to verify whether they are actually meeting the established standards.

Cost is frequently cited as a reason institutions have not pursued accreditation. In addition to direct costs, indirect costs such as resources, time, and energy spent are also included. The Florida State Postsecondary Planning Commission (1995) defined costs in a variety of ways and only sometimes included indirect costs as part of its definition. Benefits could potentially affect three groups: students, departments, and the institution. The Commission recommended that institutions seeking accreditation balance the direct and indirect costs with the potential benefits to each group before deciding to pursue it.
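Taken together, these studies imply a simple accounting structure for the accreditation decision. The following is a minimal illustrative formalization of that structure, in the spirit of Willis's (1994) direct/indirect distinction and Doerr's (1983) wage-based costing; the symbols are hypothetical and are not drawn from any of the studies cited above:

\[
C_{\text{total}} \;=\; \underbrace{F + M + T + \sum_{i} h_i w_i}_{\text{direct costs}} \;+\; \underbrace{\sum_{j} \tilde{h}_j \tilde{w}_j}_{\text{indirect costs (uncompensated personnel time)}}
\]

where \(F\) denotes accreditation fees, \(M\) materials, \(T\) travel and site-visit expenses, \(h_i w_i\) compensated participant hours valued at average hourly wages, and \(\tilde{h}_j \tilde{w}_j\) the uncompensated faculty and staff time that Kennedy, Moore, and Thibadoux (1985) found to dominate total cost. Under the Florida commission's recommendation, an institution would pursue accreditation only when the anticipated benefits to students, departments, and the institution, however monetized, meet or exceed \(C_{\text{total}}\).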
As a result of the concerns of the higher education community and the research on costs, both the National Advisory Committee on Institutional Quality and Integrity and ACE published reports in 2012. These reports call for a cost-benefit analysis of the accreditation process in an attempt to reduce excessive and unnecessary costs. NACIQI recommends that data gathering be responsive to standardized expectations and seek only information that is useful and that cannot be found elsewhere (NACIQI Final, 2012, p. 7). The ACE task force calls for an evaluation of required protocols, such as the self-study and the extent and frequency of on-site visits; expanded opportunities for the use of technology; greater reliance on existing data; and an evaluation of the potential duplication of requirements imposed by different agencies and the federal government (ACE, 2012, pp. 26-27).

Due to the limited amount of research on the cost-benefit analysis of the process, Woolston (2012) distributed a survey to the primary regional Accreditation Liaison Officer (ALO) at all baccalaureate-granting, regionally accredited institutions in the United States. The survey targeted four primary areas: demographic information, direct costs, indirect costs, and an open-ended section allowing for possible explanations of the costs. Results showed that one of the most complicated tasks is determining the monetary value of the time associated with going through the accreditation process. Through analysis of the open-ended response questions, ALOs indicated that two of the biggest benefits of going through accreditation were self-evaluation and university improvement. Other themes also emerged, such as campus unity, outside review, the ability to offer federal financial aid, reputation, sharing best practices, celebration, and the fear associated with not being accredited. While it was agreed that costs are significant and excessive, many ALOs believe that the costs are justified and that the benefits of accreditation outweigh both the direct and indirect costs.

Effects of Accreditation

Outcome assessment serves two main purposes: quality improvement and external accountability (Bresciani, 2006; Ewell, 2009). Over the years, institutions of higher education have made considerable strides with regard to learning assessment practices and implementation; yet, despite such progress, key challenges remain.

Trend Toward Learning Assessment

The shift within higher education accreditation toward greater accountability and student learning assessment began in the mid-1980s (Beno, 2004; Ewell, 2001; Wergin, 2005, 2012). During that time, higher education was portrayed in the media as “costly, inefficient, and insufficiently responsive to its public” (Bloland, 2001, p. 34). The impetus behind the public's concern stemmed from two sources: the perception that students were underperforming academically and the demands of the business sector (Ewell, 2001). Employers and business leaders expressed their need for college graduates who could demonstrate high levels of literacy, problem-solving ability, and collaborative skills in order to support the emerging knowledge economy of the 21st century. In response to these concerns, institutions of higher education started emphasizing student learning outcomes as the main process for evaluating effectiveness (Beno, 2004). For the purpose of this study, the U.S.
Air Force Academy focuses on student learning outcomes as it works on reaccreditation through the HLC.

Framework for Learning Assessment

Accreditation is widely considered to be a significant driving force behind advances in both student learning and outcomes assessment. According to Rhodes (2012), in recent years accreditation has contributed to the proliferation of assessment practices, lexicon, and even products such as e-portfolios, which are used to show evidence of student learning. Kuh and Ikenberry (2009) surveyed provosts or chief academic officers at all regionally accredited institutions granting undergraduate degrees and found that student assessment was driven more by accreditation than by external pressures such as government or employers. Another major finding was that most institutions planned to continue their assessment of student learning outcomes despite budgetary constraints. They also found that gaining faculty support and involvement remained a major challenge. Additionally, college and university faculty and student affairs practitioners stressed that students must now acquire proficiency in a wide scope of learning outcomes to adequately address the unique and complex challenges of today's ever-changing, economically competitive, and increasingly globalized society.

In 2007, the Association of American Colleges and Universities published a report focusing on the aims and outcomes of a 21st-century collegiate education, with data gathered through surveys, focus groups, and discussions with postsecondary faculty. Four “essential learning outcomes” emerged from the report: (1) knowledge of human cultures and the physical and natural world, through study in science and mathematics, social sciences, humanities, history, languages, and the arts; (2) intellectual and practical skills, including inquiry and analysis, critical and creative thinking, written and oral communication, quantitative skills, information literacy, and teamwork and problem-solving abilities; (3) personal and social responsibility, including civic knowledge and engagement, multicultural competence, ethics, and foundations and skills for lifelong learning; and (4) integrative learning, including synthesis and advanced understanding across general and specialized studies (Association of American Colleges and Universities, 2007, p. 12). With the adoption of such frameworks or similar tools at institutions, accreditors can be well positioned to connect teaching and learning and, as a result, better engage faculty to improve student learning outcomes (Rhodes, 2012).

Benefits of Accreditation on Learning

Accreditation and student performance assessment have been the focus of various empirical studies, with several pointing to benefits of the accreditation process. Ruppert (1994) conducted case studies in 10 states – Colorado, Florida, Illinois, Kentucky, New York, South Carolina, Tennessee, Texas, Virginia, and Wisconsin – to evaluate different accountability programs based on student performance indicators. The report concluded that “quality indicators appear most useful if integrated in a planning process designed to coordinate institutional efforts to attain state priorities” (p. 155). Furthermore, research has also demonstrated how accreditation helps shape outcomes inside college classrooms.
Specifically, Cabrera, Colbeck, and Terenzini (2001) investigated classroom practices and their relationship to gains in professional competencies among undergraduate engineering students. The study involved 1,250 students from seven universities. It found that the expectations of accrediting agencies may be encouraging more widespread use of effective instructional practices by faculty.

A study by Volkwein, Lattuca, Harper, and Domingo (2007) measured changes in student outcomes in engineering programs following the implementation of new accreditation standards by ABET. Based on data collected from a national sample of engineering programs, the authors noted that the new accreditation standards were indeed a catalyst for change, finding evidence that linked the accreditation changes to improvements in undergraduate education. Students experienced significant gains in the application of knowledge of mathematics, science, and engineering; usage of modern engineering tools; use of experimental skills to analyze and interpret data; designing solutions to engineering problems; teamwork and group work; effective communication; understanding of professional and ethical obligations; understanding of the societal and global context of engineering solutions; and recognition of the need for lifelong learning. The authors also found that accreditation prompted faculty to engage in professional development-related activity. Thus, the study showed the effectiveness of accreditation as a mechanism for quality assurance (Volkwein et al., 2007).

Organizational Effects of Accreditation

Beyond student learning outcomes, accreditation also has considerable effects at the organizational level. Procopio (2010) noted that the process of acquiring accreditation influences perceptions of organizational culture. According to the study, administrators are more satisfied than are staff members – and especially more so than faculty – when rating organizational climate, information flow, involvement in decisions, and the utility of meetings. “These findings suggest institutional role is an important variable to consider in any effort to affect organizational culture through accreditation buy-in” (p. 10). Similarly, a study by Wiedman (1992) describes how the 2-year process of reaffirming accreditation at a public university drove change in the institutional culture.

Meanwhile, Brittingham (2009) explains that accreditation offers organizational-level benefits for colleges and universities. The commonly acknowledged benefits include students' access to federal financial aid funding, legitimacy in the eyes of the public, consideration for foundation grants and employer tuition credits, positive standing among peers, and government accountability. However, Brittingham (2009) points out that there are “not often recognized” benefits as well (p. 18). For example, accreditation is cost effective, particularly when contrasting the number of personnel needed to carry out quality assurance procedures in the U.S. versus internationally, where quality assurance is far more regulated. Second, “participation in accreditation is good professional development” (p. 19) because those who lead a self-study come to know their institution with more breadth and depth. Third, self-regulation by institutions – if done properly – is a better system than government regulation.
Lastly, “regional accreditation gathers a highly diverse set of institutions under a single tent, providing conditions that support student mobility for purposes of transfer and seeking a higher degree” (p. 19).

Future Assessment Recommendations

Many higher education institutions have developed plans and strategies to measure student learning outcomes, and such assessments are already in use to improve institutional quality (Beno, 2004). For future action, CHEA, in its 2012 Final Report, recommends further enhancing the commitment to public accountability:

Working with the academic and accreditation communities, explore the adoption and implementation of a small set of voluntary institutional performance indicators based on mission that can be used to signal acceptable academic effectiveness and to inform students and the public of the value and effectiveness of accreditation and higher education. Such indicators would be determined by individual colleges and universities, not government. (p. 7)

In addition, Brittingham (2012) outlines three developments that have the capacity to influence accreditation and increase its ability to improve educational effectiveness. First, accreditation is growing more focused on data and evidence, which strengthens its value as a means of quality assurance and quality improvement. Second, “technology and open-access education are changing our understanding of higher education” (p. 65). These innovations – such as massive open online courses – hold enormous potential to open up access to higher education. As a result, this trend will heighten the focus on student learning outcomes. Third, “with an increased focus on accountability – quality assurance – accreditation is challenged to keep, and indeed strengthen, its focus on institutional and programmatic improvement” (p. 68). This becomes particularly important amid the current period of rapid change.

Challenges to Student Learning Outcomes

Assessment is critical to the future of higher education. As noted earlier, outcome assessment serves two main purposes: quality improvement and external accountability (Bresciani, 2006; Ewell, 2009). The practice of assessing learning outcomes has been widely adopted by colleges and universities since its introduction in the mid-1980s, and assessment is also a requirement of the accreditation process. However, outcomes assessment in higher education is still a work in progress, and a fair number of challenges remain (Kuh & Ewell, 2010).

Organizational Learning Challenges

Assessment, as clearly stated by the American Association for Higher Education (1992), “is not an end in itself but a vehicle for educational improvement” (p. 28). Rather than being a means unto its own end, the process of assessment provides an opportunity for continuous organizational learning and improvement (Maki, 2010). Too often, institutions assemble and report mountains of data just to comply with federal or state accountability policy or an accreditation agency's requirements. However, after the report is submitted, the evaluation team has left, and the accreditation is confirmed, there are few incentives to act on the findings for further improvement. The root causes of identified deficiencies are rarely followed up on, and real solutions are never sought (Ewell, 2005; Wolff, 2005).
Another concern pointed out by Ewell (2005) is that accreditation agencies tend to emphasize the process, rather than the outcomes, of assessment once the assessment infrastructure is established. The accreditors are satisfied with formal statements and goals of learning outcomes but do not query further about how, how appropriately, and to what degree these learning goals are applied in the teaching and learning process. As a result, the process tends to produce single-loop learning, in which changes reside at a surface level, instead of double-loop learning, in which changes are incorporated into practices, beliefs, and norms (Bensimon, 2005).

Lack of Faculty Buy-in

Lack of faculty buy-in and participation is another hurdle in the adoption of assessment practice (Kuh & Ewell, 2010). In a 2009 survey by the National Institute for Learning Outcomes Assessment, two-thirds of all 2,809 surveyed schools noted that more faculty involvement in learning assessment would be helpful (Kuh & Ikenberry, 2009). According to Ewell (1993, 2002, 2005), there are several reasons that faculty are disinclined to be directly involved in the assessment process. First, faculty view teaching and curriculum development as their domain. Assessment of their teaching performance and student learning outcomes by external groups can be viewed as an intrusion upon their professional authority and academic freedom. Second, faculty are deterred by the extra effort and time required to engage in outcome assessment and by its unconvincing perceived added value. Furthermore, the compliance-oriented assessment requirements are imposed by external bodies, and most faculty members participate in the process only indirectly. They tend to show a lukewarm attitude and leave the assessment work to administrative staff. In addition, faculty might have a different view of the definitions and measures of “quality” than does the institution or the accreditor (Perrault, Gregory, & Carey, 2002, p. 273). Finally, the assessment process incurs a tremendous amount of work and resources. To cut costs, the majority of the work is done by administration at the institution. Faculty consequently perceive assessment as an exercise performed by administrators for external audiences and do not embrace the process.

Lack of Institutional Investment

A shortage of resources and institutional support is another challenge in the implementation of assessment practice. As Beno (2004) commented, “deciding on the most effective strategies for teaching and for assessing learning will require experimentation, careful research, analyses, and time” (p. 67). With continuously dwindling federal and state funding over the last two decades, higher education, particularly at public institutions, has been stripped of resources to support such an endeavor. A case in point is the recession of the early 1990s. Budget cuts forced many states to abandon the state assessment mandates that originated in the mid-1980s and to switch to process-based performance indicators as a way to gain efficiency in large public institutions (Ewell, 2005). The 2009 National Institute for Learning Outcomes Assessment survey showed that the majority of the surveyed institutions undercapitalized the resources, tools, and expertise needed for assessment work. Twenty percent of respondents indicated they had no assessment staff, and 65% had two or fewer (Kuh & Ewell, 2010; Kuh & Ikenberry, 2009).
Difficulty with Integration into Local Practice

Integrating the value of assessment and institutionalizing its practice in daily operations can be a tall order at many institutions. In addition to redirected resources, leadership involvement and commitment, faculty participation, and adequate assessment personnel all contribute to the success of cultivating a sustainable assessment culture and framework on campus (Banta, 1993; Kuh & Ewell, 2010; Lind & McDonald, 2003; Maki, 2010). Furthermore, assessment activities imposed by external authorities tend to be implemented as an addition to, rather than an integral part of, institutional practice (Ewell, 2002). Assessment, like accreditation, is viewed as a special process with its own funding and committee, rather than as part of regular business operations. Finally, the work of assessment, program reviews, self-study, and external accreditation at the institutional and academic program levels tends to be handled by various offices on campus, and coordinating that work can be another challenge (Perrault, Gregory, & Carey, 2002).

Colleges also tend to adopt an institutional isomorphic approach, modeling themselves after peers perceived as more legitimate or successful in dealing with similar situations and adopting widely used practices to gain acceptance (DiMaggio & Powell, 1983). As reported by Ewell (1993), institutions are prone to “second-guess” external agencies and to adopt, as a safe approach, the type of assessment practice those agencies find acceptable, instead of adopting or customizing one appropriate to local needs and circumstances. Institutional isomorphism offers a safer and more predictable route for institutions to deal with uncertainty and competition, to conform to government mandates or accreditation requirements, or to abide by professional practices (Bloland, 2001). However, the strategy of following the crowd might hinder in-depth inquiry into a unique local situation as well as the opportunity for innovation and creativity. Furthermore, decision makers may be unintentionally trapped in a culture of doing what everyone else is doing without carefully examining the unique local situation or the logic, appropriateness, and limitations behind the common practice (Miles, 2012).

Lack of assessment standards and clear terminology presents another challenge in assessment and accreditation practice (Ewell, 2001). With no consensus on vocabulary, methods, and instruments, assessment practice and outcomes can have limited value. As reported by Ewell (2005), the absence of outcome metrics makes it difficult for state authorities to aggregate performance across multiple institutions and to communicate the outcomes to the public; benchmarking is likewise impossible. Bresciani (2006) stressed the importance of developing a conceptual definition, framework, and common language at the institutional level.

Tension Between Improvement and Accountability

The tension between the two equally important goals of outcomes assessment, quality improvement and external accountability, can be another factor affecting assessment practice. According to Ewell (2008, 2009), assessment practice evolved over the years into two contrasting paradigms.
The first paradigm, assessment for improvement, emphasizes constantly evaluating and enhancing the process or outcomes, while the other, assessment for accountability, demands conformity to a set of established standards mandated by the state or accrediting agencies. The strategies, instrumentation, methods of gathering evidence, reference points, and uses of results in these two paradigms tend to sit at opposite ends of the spectrum (Ewell, 2008, 2009). For example, in the improvement paradigm, assessment is mainly used internally to address deficiencies and enhance teaching and learning; it requires periodic evaluation and formative assessment to track progress over time. In the accountability paradigm, by contrast, assessment is designed to demonstrate institutional effectiveness and performance to external constituencies and to comply with predefined standards or expectations, and the process tends to be performed on set schedules as a summative assessment. The competing nature of these two purposes can create tension and conflict within an institution. Consequently, an institution’s assessment program is unlikely to achieve both objectives. Ewell (2009) further pointed out that “when institutions are presented with an intervention that is claimed to embody both accountability and improvement, accountability wins” (p. 8).

Transparency Challenges

Finally, for outcomes assessment to be meaningful and accountable, the process and information need to be shared and open to the public (Ewell, 2005). Accreditation has long been criticized as mysterious or secretive, with little information shared with stakeholders (Ewell, 2010). In a 2006 survey, the Council for Higher Education Accreditation reported that only 18% of the 66 accreditors surveyed publicly provide information about the results of individual reviews; less than 17% of accreditors provide a summary of student academic achievement or program performance; and just over 33% of accreditors offer a descriptive summary of the characteristics of accredited institutions or programs (Council for Higher Education Accreditation, 2006). In the 2014 Inside Higher Ed survey, only 9% of the 846 college presidents indicated that it is very easy to find student outcomes data on their institution’s website, and only half of the respondents agreed that it is appropriate for the federal government to collect and publish data on the outcomes of college graduates (Jaschik & Lederman, 2014). With the public disclosure requirements of the No Child Left Behind Act, there is an impetus for higher education and accreditation agencies to be more open to the public and policymakers. It is expected that further openness will contribute to more effective and accountable business practices as well as the improvement of educational quality.

Summary

This chapter provides an in-depth overview of accreditation in the United States, which has evolved into a burgeoning industry of detailed analysis, student learning outcomes assessment, quality and performance review, financial analysis, public attention, and all-around institutional scrutiny. The costs of undergoing accreditation include both internal and external variables, and calculating them can be very complex, as institutions must be able to evaluate both monetary and non-monetary costs.
Accreditation is widely considered to be a significant driving force behind advances in student learning and outcomes assessment, but the lack of faculty buy-in and of institutional investment are among the hurdles to successful outcomes assessment. Additionally, the tension between the goals of outcomes assessment, quality improvement and external accountability, can be another factor affecting practice. In order for outcomes assessment to be meaningful and accountable, the process and information need to be shared and open to the public.

CHAPTER THREE: METHODOLOGY

Overview

In a qualitative study, framing one’s questions in terms specific to the setting or participants under study has several advantages. One of these is that it helps the researcher focus on the specific beliefs, actions, and events being asked about and the actual contexts within which these are situated, rather than seeing them simply as manifestations of abstract, context-free categories (Maxwell, 2013). The focus of this study was accreditation at the three service academies, with the aim of identifying unique similarities or differences among the institutions.

This study utilized a multiple case study design. Case studies are a strategy of inquiry in which the researcher explores a program, event, activity, process, or one or more individuals in depth (Stake, 1995). Furthermore, Creswell (2013) asserts that a hallmark of a good qualitative case study is that it represents an in-depth understanding of the case (p. 98). Cases are bounded by time and activity, and researchers collect detailed information using a variety of procedures over a sustained period (Stake, 1995). Pertinent to this study, a qualitative case study provides the best research method for fully capturing the essence of the quality of education offered at the service academies; a quantitative study would not yield the same description-rich results. Merriam (1998) argues that the detailed description a case study report provides is necessary to assess the evidence upon which the researcher’s analysis is based. This chapter provides an overview of the methodology used for this study.

Sample and Population

The sample population for this study consists of the ALOs for the three service academies. These three institutions are accredited by two different regional agencies. The U.S. Air Force Academy, located in Colorado Springs, Colorado, is accredited by NCAHLC. The U.S. Naval Academy, located in Annapolis, Maryland, and the U.S. Military Academy, located at West Point, New York, are both accredited by MSCHE. In addition, all three institutions are accredited by ABET, the agency responsible for the accreditation of engineering programs. This population is of particular interest because these institutions are responsible for educating approximately 12,000 officer candidates each year (U.S. GAO-12-327R, 2012). Following successful degree completion, these graduates make up approximately 18% of the officer corps for the nation’s armed services, and officers make up the leadership of the United States military.

Instrumentation

Data for this case study were collected through document analysis and interviews. Case studies in qualitative research are investigations of “bounded systems” with a focus on either the case or an issue illustrated by the case(s) (Stake, 1995). A qualitative case study provides an in-depth study of this “system” based on a diverse array of data collection materials (McCaslin & Wilson, 2003).
Credibility for this study was achieved using the validation strategies of thick, rich description based on data collected from pertinent documents and interviews, followed by intense peer debriefing. The specific data collected for this study increased reliability and internal validity. Respondents provided rich and detailed information, which contributed significantly to achieving a rich, descriptive qualitative study. A draft copy of this report was shared with each ALO for the purpose of verifying the information contained herein. ALOs were given two weeks to review the draft report and provide comments as necessary. One ALO from the U.S. Air Force Academy responded with no significant comments concerning the draft report, while the U.S. Naval Academy ALOs made a point of clarifying one area addressed in Chapter Five.

Data Collection

Following the approval of this study by the Institutional Review Board (IRB), the researcher collected, organized, and analyzed data. Documents gathered were peer-reviewed journal articles, government publications, and pertinent papers published by the respective regional and specialized accrediting agencies. Other useful documents were academic program strategic plans and institutional self-study reports, which are available electronically on the service academies’ websites. Data collection involved searching the service academies’ websites as well as the websites of the applicable regional and specialized accreditation agencies. In addition, some of the interviewees offered information pertinent to this research project. Documents were useful in providing guidelines that assisted the researcher with inquiry during interviews.

In all forms of qualitative research, some, and occasionally all, of the data are collected through interviews (Merriam, 1998). Patton (1990) asserts that conducting interviews is especially important because the process allows a researcher to find out what “is on someone else’s mind” (p. 278). In addition, Patton states that “the purpose of interviewing is to allow us to enter the other person’s perspective” (p. 196). Therefore, in preparation for interviews with the respective ALOs, the researcher developed questions related to accreditation and accountability. The questions were open-ended to allow ALOs to provide valuable information as well as any additional information that might be appropriate to the research. According to Merriam (1998), “the way in which questions are worded is a crucial consideration in extracting the type of information desired” (p. 76). Mindful of this, the proposed questions were piloted with four individuals currently performing ALO duties at four local public higher education institutions: one university and three community colleges. The four ALOs are graduates of the University of Southern California’s Rossier School of Education doctoral program. Comments and recommendations from the four local ALOs were incorporated into the final list of questions used for interviews. Appendix A provides a list of the 14 interview questions. While these questions formed the foundation for answers needed for this study, additional questions emerged and were asked during the interviews. To identify the ALO for each service academy, the researcher searched the websites of each institution. Once the ALOs were identified, an email with an introduction letter (Appendix B) was sent to each.
Simultaneously, the researcher used a military contact assigned to the Pentagon as well as University of Southern California staff working at the institution’s Washington, D.C. office to reach the ALOs. The researcher also used the Navy Marine Corps Intranet (NMCI), which provides email accounts assigned specifically to military personnel and civilian government employees working on military installations. In an effort to establish credibility for the researcher, and conscious of the significance of the potential interviews to this study, the Pentagon contact strongly recommended the researcher use the NMCI email account with his military title when communicating with the service academies’ ALOs. Both approaches proved very effective, as all ALOs responded promptly and positively. The researcher informed the ALOs that he was an officer serving in the Navy Reserves. Needing no assistance to gain access to military installations, the researcher traveled first to the U.S. Naval Academy and later to the U.S. Air Force Academy. These interviews took place during the months of July and August 2014. The researcher interviewed two deans at the U.S. Naval Academy who were appointed co-ALOs for the current accreditation review cycle. These ALOs referred to themselves as “co-chairs of the current Middle States Accreditation Review Process.”

While all ALOs initially agreed to participate in the study, the ALO for the U.S. Military Academy insisted on having approval from higher authority before meeting with the researcher. Following multiple phone calls, emails, and a letter from the dissertation chairperson explaining the significance of the study and noting that the completed work would otherwise indicate the institution had been invited to participate but refused, the ALO agreed to participate. While approval was pending, the primary official responsible for accreditation at the U.S. Military Academy was mobilized to Afghanistan; the acting ALO nevertheless proved very knowledgeable in all matters pertaining to accreditation. In order to maintain consistency in interviewing format, the researcher offered to travel to the institution; however, the ALO asked to be interviewed over the telephone. The interview with the U.S. Military Academy ALO took place during the month of September 2014. The drawback to conducting a phone interview, as explained by Creswell (2013), is that the researcher is unable to see the informal communicative actions and reactions of the interviewee. During the interview, it was difficult to tell if and when the ALO had finished responding to each question, which often prompted the researcher to respond with the word “okay” when it was assumed that the ALO had finished answering.

Initial review of the interview conducted with the ALO from the U.S. Air Force Academy revealed that the data provided were incomplete. Therefore, the researcher followed up with an interview with another ALO official identified by the first ALO during the initial interview in August. The second interview with a U.S. Air Force Academy ALO took place in September. Following in-depth analysis of both interviews with U.S. Air Force Academy ALOs, it was determined the institution followed a different accreditation format than did her sister service academies; the different accreditation formats are detailed in Chapter One. In all cases, the researcher asked the ALOs for, and received, permission to audio-record the interview as part of the data collection.
The interviews were recorded to secure an accurate account of the conversations and to avoid losing data. The recording of each interview was transcribed and coded for use alongside the document analysis. Field notes were also taken during the interviews as a secondary measure; these were useful in ensuring that areas needing clarification were explored. The ALOs served as the initial contact persons at each institution and identified additional academy personnel who were qualified to speak on accreditation issues. ALOs were also helpful in obtaining documents that were not publicly available.

Data Analysis

Documentary evidence was used to cross-validate information gathered from the interviews. The information that resulted from open coding was compiled into themes or categories for triangulation and comparison. The categories were emergent and focused primarily on how best they helped answer the three research questions. The corroboration of multiple qualitative techniques for this study enhanced the validity and reliability of the data. Three research questions, presented below, were the focus of this study. The following section identifies the interview questions whose responses were used to address each research question.

1. Do U.S. service academies’ Accreditation Liaison Officers (ALOs) believe that these institutions are held accountable to a different standard for institutional effectiveness and student outcomes?

2. How do ALOs at the service academies believe that they are preparing officers compared to other accession programs?

3. What do ALOs believe are some unique similarities or differences among the service academies specifically relating to institutional effectiveness and student outcomes?

The first research question was addressed through interview questions 3, 6, 7, 8, 9, and 10 (Appendix A). The second was addressed through interview question 11, and the third was addressed by interview questions 7, 8, 9, 12, and 13 (Appendix A). In addition, interview questions 7, 8, and 9 were used to solicit responses for both research questions 1 and 3. Detailed findings of this study are presented in Chapter Four.

CHAPTER FOUR: RESULTS

The purpose of this study was to focus on accreditation at the three service academies to identify unique similarities or differences among the institutions. This chapter presents the results of the documents obtained and the interviews conducted in support of this study. Specifically, qualitative analysis of the interview transcripts revealed six themes that collectively address the study’s three research questions.

Reporting the Results

Analysis of the four transcripts of interviews with ALOs revealed six themes that effectively address the three research questions: (a) regional accreditation standards, (b) shared governance, (c) high technical course content, (d) immersion in military lifestyle and culture, (e) continuous program improvement through internal and external reviews, and (f) a high degree of faculty engagement in scholarship. These six themes provide a better understanding of the accreditation process the three service academies must go through to ensure their relevance among institutions of higher learning. The six themes are embedded in the responses to each of the research questions. The following three research questions were the focus of this study.
1. Do U.S. service academies’ Accreditation Liaison Officers (ALOs) believe that these institutions are held accountable to a different standard for institutional effectiveness and student outcomes?

2. How do ALOs at the service academies believe that they are preparing officers compared to other accession programs?

3. What do ALOs believe are some unique similarities or differences among the service academies specifically relating to institutional effectiveness and student outcomes?

Regional Accreditation Standards and Shared Governance

With respect to research question one, all ALOs are convinced there is no significant difference in the standards of institutional effectiveness and learning outcomes applied to students attending the service academies compared to those attending other institutions of higher learning. Themes a and b, which center on regional accreditation standards and shared governance, are useful in providing a comprehensive understanding of the ways the service academies address these major areas. The accreditation process requires that institutions of higher learning meet specific requirements related to institutional effectiveness and assessment of learning outcomes, and, for purposes of accreditation, the U.S. service academies are held accountable to the same standards.

Consistent with administrative requirements at non-military institutions, each of the service academies is headed by a superintendent who performs responsibilities similar to those of a university president. Several external and internal boards allow each superintendent to effectively manage the daily administration of the institution. As stipulated by federal law, the Board of Visitors (BOV) serves in an advisory capacity at each service academy. The board “does not have statutory or fiduciary responsibility” (ALO 5, personal communication, September 17, 2014). “The BOV does not have the same kind of authorities or oversight responsibilities that a typical Board of Governors or Board of Directors would have for almost any other college or university” (ALO 1, personal communication, July 31, 2014). As a result, the board cannot serve as a governing board for these institutions. The U.S. Naval Academy and the U.S. Military Academy are both located within the geographical responsibility of MSCHE, which has a standard (Standard 4 – Leadership and Governance) requiring that institutions have a governing body in place. Therefore, both institutions have local governing boards to assure institutional integrity and to fulfill responsibilities of policy and resource development, consistent with their missions. Unlike her sister academies, the U.S. Air Force Academy, which lies within the Higher Learning Commission’s regional accrediting geographical area of responsibility, faces no similar requirement. Therefore, the U.S. Air Force Academy does not have a local governing board; instead, the superintendent reports directly to the chief of staff of the Air Force.

The U.S. Naval Academy is governed by an Advanced Education Review Board, which has oversight responsibility for education policy and programs similar to that of a typical board of directors or board of governors at any college or university (OPNAVINST 15.20.42, May 18, 2009). The review board is chaired by the vice chief of naval operations and includes other key military leaders responsible for personnel, manpower, budgets, and planning.
The review board meets at least twice yearly to discuss all issues pertinent to the U.S. Naval Academy, including student retention, graduation, budgets, faculty size, and the size of the brigade of midshipmen. For example, “the AERB has been very instrumental in the implementation of a new cyber security course which was deemed necessary following the devastating terrorists attacks on September 11, 2001” (ALO 1, personal communication, July 31, 2014). ALOs mentioned that, as part of the reaccreditation process, the liaison officer from MSCHE will meet with the vice chief of naval operations in his capacity as chairperson of the review board. The superintendent of the U.S. Naval Academy, however, reports directly to the chief of naval operations.

Like the U.S. Naval Academy, the U.S. Military Academy has a governing board, referred to as the Executive Steering Group, which performs functions similar to those of the review board. The group’s membership consists of the secretary of the army, the army’s chief of staff, the vice chief of staff, the director of the army staff, the principal staff responsible for operations (G-3), the principal staff responsible for military personnel (G-1), the army war college commandant, and the superintendent (ALO 4, personal communication, September 17, 2014). As stated previously, both MSCHE and HLC require institutions of higher learning to demonstrate excellence in institutional effectiveness and student learning outcomes.

High Technical Course Content

With reference to research question two regarding the preparation of officers, ALOs had difficulty answering. All ALOs took a long pause before providing a response and, in some cases, admitted it was difficult to render a qualitative judgment on the question. However, the U.S. Air Force Academy’s ALOs stated they were convinced that “the overall quality of their officers was superb” (ALO 3, personal communication, August 15, 2014). The U.S. Military Academy’s ALO stated that his institution “produces officers of the highest quality” (ALO 5, personal communication, September 17, 2014). This conclusion is based on the fact that cadets are required to complete a core curriculum of approximately 33 courses to meet graduation requirements. This, the ALO claims, “is about twice as many courses that students at a regular non-military academy are required to take” (ALO 5, personal communication, September 17, 2014). The typical curriculum at the service academies includes courses in the humanities, social sciences, engineering, and basic sciences. ALOs are convinced that this broad academic foundation in the arts and sciences adequately prepares students for future operational environments. As a direct result of the strong technical content of the courses offered, only Bachelor of Science degrees are awarded to graduates of all three service academies.

One experience common to all students at the service academies is that they cannot opt out of certain courses. Compared to non-military institutions, where some students can complete an undergraduate degree with statistics as their only mathematics course, students at the service academies must complete calculus as part of their undergraduate education. For instance, midshipmen at the U.S. Naval Academy are required to complete three semesters of calculus (U.S. Naval Academy, 2014).
Immersion into the Military Culture and Lifestyle

The fourth theme, immersion into the military culture and lifestyle, was a second factor identified by ALOs as being unique to the service academies. Compared to other accession programs, such as the Reserve Officers’ Training Corps (ROTC) at traditional campuses, cadets and midshipmen live an immersive experience while at the service academies. Unlike their ROTC counterparts, cadets and midshipmen are “required to wear their uniforms practically all day and live as military students following strict rules and regulations” (ALO 4, personal communication, September 29, 2014). These students are typically confined to the academic institution except when limited liberty is granted to them. This immersion culture fosters a level of military discipline that students must abide by or face serious disciplinary consequences. The immersion factor is also important in the context of “a synergistic concept where expected successful student outcomes are tied to effective officers who are trained to lead junior personnel in military service” (ALO 4, personal communication, September 29, 2014).

The service academies consider competition, especially in sports, to be very important to the development of potential officers. Besides the health benefits associated with sporting activities, students at the service academies are inculcated with being “comfortable in winning and losing sporting events” (ALO 3, personal communication, August 15, 2014). More importantly, students are taught to understand and appreciate the benefits associated with competition. All three service academies participate in major sports competitions and have facilities to practice and host the various sporting events.

In an effort to ascertain the effectiveness of academy graduates’ preparation compared to that of other accession programs, ALOs at the U.S. Naval Academy stated that the Academy Effectiveness Board conducted several studies focused on the quality of students’ post-graduation performance in follow-on activities. These studies included assessing naval academy graduates’ performance in immediate follow-on schools such as flight school, nuclear power school, and the Marine school. “Surveys have been sent to the schools mentioned as well as supervisors to determine the performance of service academy graduates in comparison to other accession programs” (ALO 1, personal communication, July 31, 2014). Unfortunately, the results of these assessments were not available at the time of this study. Overall, ALOs acknowledged the difficulties associated with assessing the quality of officers produced by the service academies compared to those produced by the ROTC or military Officer Candidate School accession programs.

Continuous Program Improvement Through Internal and External Reviews

The service academies make every effort to ensure they properly prepare officers for military service. To that end, the institutions have engaged in numerous internal and external reviews. The following are two examples of comprehensive reviews undertaken by the service academies. In 2005, the superintendent of the U.S. Naval Academy initiated a comprehensive internal review of all facets of the Naval Academy Academic Program.
That same year, the superintendent also established, in parallel, an external committee, the Academic Program Executive Review Group (AERG), which was charged with identifying what the Navy and Marine Corps would require in the education of their junior officers in the 21st century. The AERG, composed of retired senior military officers and professional educators and appointed by the superintendent, was asked to consider two broad but basic questions: (1) is the U.S. Naval Academy educating its graduates to meet the requirements of the naval service? and (2) is it doing so in the most effective and efficient way? As the AERG compiled its list of primary recommendations, it found that, in many instances, areas it sought to highlight were already the subject of extensive consideration and effort by U.S. Naval Academy faculty and administration committees. After 10 months of talking to alumni and faculty, reviewing existing studies and assessments, and considering input from Fleet commanders, the AERG eventually came to the conclusion that there was “no rigorous, analytically defensible way to answer the two questions” (p. 28). Instead, the group concluded,

[D]etermining how best to educate midshipmen to meet the requirements of the future Fleet is a necessarily cooperative task that must include not only committed U.S. Naval Academy faculty and administrators but also active and sustained dialog with the operational Navy and Marine Corps. (United States Naval Academy, 2014)

Similarly, shortly after assuming command of the U.S. Air Force Academy, current superintendent Lieutenant General Michelle D. Johnson asked three fundamental questions consistent with the intent of this research question: (1) What is the essence of the academy? (2) What is it that we are doing that sets us apart from other commissioning sources? and (3) Why is the academy experience different from the ROTC experience? In an effort to answer these questions, the academy developed the Essence document, which highlights the eight pillars on which the essence of the academy is built: developing character and leadership; focusing on the Air Force mission in air, space, and cyberspace; immersing cadets in a total experience; harmonizing science, technology, engineering, and mathematics with the liberal arts; competing; internalizing the Air Force ethos; exposing cadets to professional Air Force culture; and building upon the foundation of an exemplary installation (usafa.af.mil, 2014). All indications are that these eight pillars are very important to the new superintendent and, as such, are overwhelmingly advocated and adhered to by existing faculty and staff.

In a sober acknowledgement, one ALO admitted that surveys administered to follow-on supervisors concerning “how graduates from a particular academy were doing compared to the products of other commissioning sources in terms of teamwork, leadership skills, those attributes that are important to officers” (ALO 3, personal communication, August 15, 2014) did not yield results indicating academy graduates were much better than those of other accession programs. The results of these surveys indicate that there is no difference in the quality of officers produced by the various accession programs. This forces the question: are the U.S.
service academies worth keeping, especially in light of the significant federal funding needed to maintain them? The result of this study, based on ALOs’ confirmation concerning surveys administered to follow-on supervisors, indicates that the quality of officers produced by the costly service academies is no different from that of the other, less expensive accession programs. This result therefore raises the question of whether the academies in their current forms are necessary.

All ALOs are convinced that the overwhelming consensus among those in uniform is that graduates of the service academies typically serve longer tours in the military, as general or flag officers, than those who did not attend one of the three academies. The ALOs point to this as anecdotal evidence that the high cost of a service academy education is justified compared to other accession programs. However, many flag or general officers, notably General Colin Powell, who rose to the position of chairman of the Joint Chiefs of Staff, did not attend a service academy. In fact, as chairman of the Joint Chiefs of Staff, General Powell served as the chief military advisor to then Secretary of Defense Dick Cheney and President George H. W. Bush and successfully oversaw the 1991 Persian Gulf War. American history is replete with many more successful general or flag officers who did not attend one of the service academies. All three superintendents of the service academies, by contrast, graduated from the institutions they currently lead.

In order to effectively answer the third research question regarding unique similarities or differences among the service academies specifically relating to institutional effectiveness and student outcomes, one must review existing law. The service academies, as established by law, have much in common but also differ in that each has unique requirements with respect to protecting and defending the homeland. Consistent with the mission of each service component, the academies exist to provide qualified officers to serve as leaders in the Air Force, Army, and Navy. These three service components are designed to fight together in order to effectively win wars.

The following are some of the similarities among the three service academies. First, since federal law establishes the service academies, the admissions requirements for all three institutions are the same. This means that the academies compete for the same pool of candidates. ALOs contend that the goal of each service academy is “to produce educated officers of high moral integrity that are capable of making good judgments under pressure and are committed to service in defense of the nation” (ALO 5, personal communication, September 17, 2014). Second, all students are required to complete 47 months of rigorous education and training in residence before graduating. Cadets and midshipmen have military responsibilities throughout the year while attending school, and it is significant to note that they have limited liberty while attending these institutions. All graduates are awarded Bachelor of Science degrees due to the very high number of technical majors offered. All graduates must serve five years on active military duty. Federal law also dictates the organizational structure of the service academies: all have three major departments responsible for academics, athletics, and military education.
Above all, the students who attend the service academies are preparing for the difficult responsibility of leading men and women in military operations in support of the nation. Sadly, given the role of the military, this significant leadership responsibility could result in the loss of the lives of both leaders and followers in times of war. The students at the academies at the time of this study had accepted admission at the height of the Iraq and Afghanistan wars. This commitment, some ALOs argue, is a testament to the extraordinary quality of students who attend these institutions knowing that they could lose their lives defending the Constitution of the United States of America.

Despite the many similarities among the three service academies, there are significant differences that allow each institution to meet the specific needs of its service component. Collectively, these military institutions of higher learning are referred to as service academies. However, because the U.S. Military Academy is the first and oldest of these institutions, the Army is very protective of proper terminology relative to the names of the academies and made every effort to set the record straight. During data collection, the ALO for the U.S. Military Academy corrected the researcher by making it clear that the service academies are not collectively referred to as “military academies” because there is only one military academy (ALO 5, personal communication, September 17, 2014).

Data compiled from NCES, as shown in Table 4, highlight degree completions in the various engineering programs at the three service academies. In 2013, the total number of graduates from the three service academies was 3,145. The U.S. Air Force Academy reported that 420 students graduated with Bachelor of Science degrees in the various engineering programs out of a total graduating class of 1,033 in 2013; these engineering graduates represent 41% of the academy’s graduates. The engineering majors with the highest numbers of graduates were systems engineering; aerospace, aeronautical and astronautical/space engineering; and civil engineering, general, with 142 graduates (34%), 79 graduates (19%), and 67 graduates (16%), respectively.

The data also show that, at the U.S. Military Academy, a total of 247 students graduated from the various engineering programs out of a total graduating class of 1,054 in 2013; these engineering graduates represent 23% of the academy’s graduates. The engineering majors with the highest numbers of graduates were mechanical engineering; systems engineering; and civil engineering, general, with 67 graduates (27%), 55 graduates (22%), and 45 graduates (18%), respectively. Unlike the other service academies, the U.S. Military Academy offers majors in engineering technology and engineering-related fields that NCES counts separately. For the same reporting period, a total of 56 students graduated from these majors, with 53 students, or 95% of graduates in this category, graduating from the engineering/industrial management program. Of the remaining 5%, two students graduated from engineering technologies and engineering-related fields, other, and one student graduated from the mechanical engineering/mechanical technology/technician program. The U.S.
Naval Academy reported that 362 students graduated with degrees in the various engineering programs out of a total graduating class of 1,058 in 2013; these engineering graduates represent 34% of the academy’s graduates. The engineering majors with the highest numbers of graduates were systems engineering; mechanical engineering; and aerospace, aeronautical and astronautical/space engineering, with 92, 82, and 63 graduates, corresponding to 25%, 23%, and 17% of all graduates from the engineering programs, respectively (NCES, 2014).

Table 4

Service Academies Engineering Programs (2013 Bachelor’s Degree Completions)

U.S. Air Force Academy
  Aerospace, Aeronautical and Astronautical/Space Engineering     79
  Civil Engineering, General                                      67
  Computer Engineering, General                                   11
  Electrical and Electronics Engineering                           8
  Engineering Mechanics                                            0
  Engineering Science                                              0
  Engineering, General                                            13
  Environmental/Environmental Health Engineering                   8
  Mechanical Engineering                                          49
  Operations Research                                             43
  Systems Engineering                                            142
  Total engineering                                              420
  Total graduates                                              1,033

U.S. Military Academy
  Engineering
    Chemical Engineering                                          15
    Civil Engineering, General                                    45
    Electrical and Electronics Engineering                        26
    Environmental/Environmental Health Engineering                15
    Mechanical Engineering                                        67
    Nuclear Engineering                                           15
    Operations Research                                            9
    Systems Engineering                                           55
    Total engineering                                            247
  Engineering Technology and Engineering-Related Fields
    Civil Engineering Technology/Technician                        0
    Engineering Technologies and Engineering-Related Fields, Other 2
    Engineering/Industrial Management                             53
    Environmental Engineering Technology/Environmental Technology  0
    Mechanical Engineering/Mechanical Technology/Technician        1
    Nuclear Engineering Technology/Technician                      0
    Total engineering technology                                  56
  Total graduates                                              1,054

U.S. Naval Academy
  Aerospace, Aeronautical and Astronautical/Space Engineering     63
  Computer Engineering, General                                    9
  Electrical and Electronics Engineering                          11
  Engineering, General                                            28
  Mechanical Engineering                                          82
  Naval Architecture and Marine Engineering                       19
  Ocean Engineering                                               58
  Systems Engineering                                             92
  Total engineering                                              362
  Total graduates                                              1,058

Source: National Center for Education Statistics

Table 5 shows that the three service academies graduate most of their engineering students from four major programs. Of the total 1,029 graduates from the various engineering programs during the reporting period, 741 received degrees in these four majors. This number represents 72% of graduates from all engineering programs and 24% of total graduates from the three service academies. Overall, in 2013, 33% of all graduates of the three service academies earned engineering degrees (35% when engineering technology and engineering-related fields are added). All three service academies graduated students in the systems engineering and mechanical engineering majors. A total of 49 students graduated with mechanical engineering degrees from the U.S. Air Force Academy; however, this number was not factored into the preceding analysis because the focus was primarily on the top three engineering programs at each academy. With respect to the four major engineering programs, both the U.S. Air Force Academy and the U.S. Naval Academy offer the aerospace, aeronautical and astronautical/space engineering program. Civil engineering, general, is offered at the U.S. Air Force Academy and the U.S. Military Academy but not at the U.S. Naval Academy (nces.ed.gov, 2014).

Table 5

The Top Three Engineering Programs/Majors Offered at the Service Academies (2013 Bachelor’s Degree Completions)

U.S. Air Force Academy
  Aerospace, Aeronautical and Astronautical/Space Engineering     79
  Civil Engineering, General                                      67
  Systems Engineering                                            142

U.S. Military Academy
  Civil Engineering, General                                      45
  Mechanical Engineering                                          67
  Systems Engineering                                             55

U.S. Naval Academy
  Aerospace, Aeronautical and Astronautical/Space Engineering     63
  Mechanical Engineering                                          82
  Systems Engineering                                             92

Source: National Center for Education Statistics
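The aggregate percentages reported above follow directly from the counts in Tables 4 and 5. As a check, using only the NCES figures shown here and rounding to whole percentages:

\[
\frac{420 + 247 + 362}{1{,}033 + 1{,}054 + 1{,}058} = \frac{1{,}029}{3{,}145} \approx 33\%,
\qquad
\frac{1{,}029 + 56}{3{,}145} = \frac{1{,}085}{3{,}145} \approx 35\%
\]

and, for the four major programs (systems engineering; mechanical engineering; aerospace, aeronautical and astronautical/space engineering; and civil engineering, general):

\[
\frac{(142 + 55 + 92) + (49 + 67 + 82) + (79 + 63) + (67 + 45)}{1{,}029} = \frac{741}{1{,}029} \approx 72\%,
\qquad
\frac{741}{3{,}145} \approx 24\%
\]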
High Degree of Faculty Engagement in Scholarship

A “high degree of scholarship activity exists among the civilian faculty and some military faculty” (ALO 1, personal communication, July 31, 2014). These faculty members are “very engaged in the scholarship of their disciplines by requirement” (ALO 2, personal communication, July 31, 2014). U.S. Naval Academy ALOs contend that the make-up of the faculty at that institution is deliberately different from that at the other service academies: the U.S. Naval Academy strives for an even mix of civilian and military faculty, while the other service academies maintain a consistent 60% military and 40% civilian faculty. The faculty make-up is deliberate because the Navy wants individuals with practical and relevant naval experience available for a balanced approach to student development. U.S. Naval Academy ALOs also highlight that the vast majority of civilian faculty typically spend 30 to 35 years at the institution. The advantage of this civilian faculty longevity is that these faculty provide much-needed institutional history and knowledge related to educational issues that often require long-term oversight. ALOs are convinced that having civilian faculty adds significant value by ensuring the institution avoids repeating actions proven ineffective or unproductive in the past. Civilian workforce longevity is especially significant in light of the short three-year tenure that superintendents serve at the service academies. Another significant difference between the U.S. Naval Academy and the other service academies, according to ALOs at the U.S. Naval Academy, involves a mandated naval operations edict specifying that 65% of all graduates must major in science, technology, engineering, or mathematics. While this mandated requirement is unique to the U.S. Naval Academy, it applies only to graduates entering naval service and does not apply to those accepting commissions in the Marine Corps.

Summary

The main purpose of an interview is to obtain a special kind of information (Merriam, 1998). The interviews conducted with the service academies’ ALOs indicate that all ALOs are convinced there are no significant differences among the service academies. Indeed, the three service academies have much in common, primarily because they follow specific federal guidelines. For purposes of accreditation, the service academies are held accountable to the same standards and criteria, which emphasize institutional effectiveness as well as student learning outcomes. Consistent with the mission of each service component, the academies exist to provide qualified officers who serve as leaders in the Air Force, Army, and Navy.
ALOs are convinced that the service academies produce officers of the highest quality, with an education based on academic programs with strong technical content that lead to Bachelor of Science degrees. Beyond their specific immersion in the military lifestyle, students at the three service academies are expected to demonstrate specific learning outcomes consistent with those at all institutions of higher learning.

CHAPTER FIVE: DISCUSSION OF THE FINDINGS

The purpose of the service academies is “to provide an annual influx of career-officers and future leaders into each service” (U.S. DoD Directive 1322.22 Change 1, 2011, p. 2). The service academies are structured to provide a curriculum critical to the development of successful future officers in academic and military areas of achievement (U.S. GAO-03-1000, 2003). To date, the service academies have not been compared to each other; this research offers the first study focused on similarities and differences among the three service academies. Each service academy is unique and designed to meet the needs of the respective branch of the military. However, the common denominator among these service academies is that all must meet accreditation requirements, much like other institutions of higher education. Graduates of the service academies make up approximately 18% of the officer corps for the nation’s armed services (U.S. GAO-03-1000, 2003). Therefore, it is important to understand the value provided to society in general through the provision of a free education to cadets and midshipmen. The purpose of this study was to address the following research questions:

1. Do U.S. service academies’ Accreditation Liaison Officers (ALOs) believe that these institutions are held accountable to a different standard for institutional effectiveness and student outcomes?

2. How do ALOs at the service academies believe that their institutions are preparing officers compared to other accession programs?

3. What do ALOs believe are some unique similarities or differences among the service academies specifically relating to institutional effectiveness and student outcomes?

Discussion of Findings

The emphasis of this study was to determine the personal and professional beliefs held by the service academies’ ALOs. However, their individual and collective beliefs were based on specific requirements stipulated by the two regional accreditation agencies responsible for accrediting the service academies. The beliefs of the ALOs were corroborated by factual data obtained through various official documents. With the exception of the U.S. Military Academy, accreditation information was readily available on the service academies’ websites. Information pertaining to the U.S. Military Academy’s accreditation activities was very difficult to obtain, which made it difficult to comprehensively compare the three service academies. As a result, this study relied heavily on data reported to NCES and to MSCHE, the regional accreditation body for the U.S. Military Academy.

Implications for Practice

The federal government provides full funding for the three service academies, allowing students to attend the institutions tuition-free. Traditional institutions of higher learning rely heavily on the accreditation process to ensure they qualify for federal government funding.
As discussed previously, the federal government relies on accreditation to assure the quality of the institutions and programs for which it provides federal funds and federal aid to students (Eaton, 2012). Mindful of this reality, it was not surprising to learn from some ALOs that accreditation is important primarily because of its unique emphasis on the peer review component. This demonstrates that administrators and faculty understand the significance of the accreditation process for their respective institution’s academic programs. ALOs stated that most faculty take their commitment to scholarship very seriously. This shows that faculty at the service academies, like those at non-military academic institutions, share the same zeal and commitment to academic excellence and a passionate desire to work for institutions of higher learning that meet and maintain required accreditation standards. This study provides pertinent information on the various engineering programs offered at the three service academies. Most of these programs are accredited by ABET, demonstrating that the service academies offer credible specialized curricula for students interested in pursuing a degree in the engineering field and for qualified faculty who wish to continue working in this area of scholarship.

ALOs shared two significant challenges. First, superintendents are assigned to the institutions for a short three-year tenure. While the superintendents are senior flag (general) officers of two- or three-star rank, they usually do not have careers in higher education administration before appointment to the service academies. For example, the current superintendent of the U.S. Military Academy is the only academy leader who served two tours in an institutional capacity: one as Commandant of Cadets at the U.S. Military Academy and one as Commander of the Combined Arms Center at Fort Leavenworth, Kansas, overseeing the Command and General Staff College and 17 other schools, centers, and training programs located throughout the United States (USMA, 2015). The current superintendent of the U.S. Naval Academy previously served a tour as president of the U.S. Naval War College before assuming his current position (USNA, 2015); the U.S. Naval War College is primarily responsible for providing current, rigorous, and relevant professional military education programs (U.S. Naval War College, 2015). The superintendent of the U.S. Air Force Academy, a Rhodes Scholar, completed one tour in an instructional setting as Assistant Professor of Political Science, T-41 instructor pilot, and Associate Air Officer Commanding at the U.S. Air Force Academy. However, that assignment was completed 21 years before the superintendent was appointed to lead the U.S. Air Force Academy. Academically, all superintendents graduated from their respective service academies. Additionally, the superintendent of the U.S. Military Academy earned master’s degrees from Long Island University and Kansas State University (USMA, 2015). The current superintendent of the U.S. Air Force Academy earned a Master of Arts degree in politics and economics from Brasenose College, Oxford University, England, and a Master of Science degree in national security strategy from the National War College, Fort Lesley J. McNair, Washington, D.C. (U.S. Naval War College, 2015). The biography of the current U.S.
Naval Academy superintendent indicates he graduated from the Navy Fighter Weapons School and the Navy’s Nuclear Power Program (USNA, 2015); however, neither of these programs offers advanced degrees. ALOs contend that the average superintendent does not stay at a service academy long enough to implement any significant academic change. This limited tenure fosters the assumption that most superintendents assume their responsibilities with a myopic view of accomplishing very little, if anything, in the short term before the end of the tour of duty. During fact-checking, U.S. Naval Academy ALOs expressed that they did not feel the preceding depiction was reasonably accurate and offered the following clarification:

For U.S. Naval Academy the tour for superintendents is four years, not three. It used to be three, but roughly around the time that it became a three star position, it was extended to four years precisely to resolve the issues you bring up. So, some ALOs may have said this, but I don’t think we would agree. Additionally, at the U.S. Naval Academy, superintendents have implemented significant changes to academics, some within their tour and others that they began and that have been completed after their retirement. So, while they may not always see their vision through to completion, they often see stages completed and know that it will be carried on by their successors. (personal communication, February 02, 2014)

Second, as stated above, the federal government is responsible for providing full funding for the three service academies. This reliance on federal funds has a significant impact, especially during periods of sequestration. Funding uncertainty, in the form of government shutdowns and lapses in funding for daily operations, affects the civilian government workforce of administrators, faculty, and staff. While full funding by the federal government is generally guaranteed, funding uncertainty can have significant adverse consequences for the effective administration of any institution. There needs to be predictability and certainty when it comes to funding the three service academies. Predictability and certainty are very important because any disruption in the execution of the academic curriculum can have a significant impact on student learning outcomes.

Even without sequestration and its effects, the service academies are worried about recent reductions in overall military spending. The end of extended conflicts in the Middle East resulted in the downsizing of the military, especially within the Army. Consequently, the Army anticipates this downsizing will affect the number of military faculty assigned to the U.S. Military Academy (USMA, 2013). As stated previously, the U.S. Military Academy maintains a mix of 60% military and 40% civilian faculty. The potential impact of this downsizing, therefore, will require the Army to increase the number of civilian faculty, thereby increasing the operating budget for the academy at a time of reduced military spending.

The three service academies offer rigorous educational programs, especially in science, technology, engineering, and mathematics. Indeed, given the nature of the duties and responsibilities of military officers, successful completion of these programs indicates the military force will be educated and prepared for the challenges of leadership opportunities.
Following successful degree completion, graduates of the service academies have excellent opportunities to hone their skills through a mandatory five-year commitment to military service. With a service academy education and subsequent solid practical experience, these graduates have a significant advantage over their non-military counterparts.

Future Research

The results of this descriptive qualitative study revealed that the three service academies share many similarities pertaining to accreditation. Indeed, the service academies adhere to the same accreditation requirements that all institutions of higher learning are required to meet. Given this reality, it is important to learn of the motivations and aspirations of cadets and midshipmen who join the service academies. To address this concern, future research can focus on the question of what motivates students to attend one of the three service academies. Some possible motives may include a free college education, a desire for leadership opportunities in the military and post-military civilian employment, the call of duty and service to country, or legacy for students whose parents are service academy alumni. Students who attend the service academies provide an excellent population for studies focused on motivational theories. The three service academies have numerous applicants and produce thousands of graduates who can provide adequate feedback to determine motives for attending the respective institutions.

This study is the first to compare the three service academies specifically in terms of institutional accreditation. To date, there has been no known study that compares the service academies to non-military institutions of higher learning. To address this information deficiency, future studies can compare the quality of education offered at the service academies to that offered at non-service academies. Potential areas of study could address student interest or desire to pursue studies and careers in science, technology, engineering and mathematics. Other studies can examine the aforementioned short three-year tenure for academy superintendents; such studies can address the rationale or justification for this requirement and determine whether it affects institutional effectiveness.

Conclusion

A dearth of information exists on the various service academies. The purpose of the service academies is "to provide an annual influx of career-officers and future leaders into each service" (U.S. DoD Directive 1322.22 Change 1, 2011, p. 2). The service academies are structured to provide a curriculum critical to the development of successful future officers in academic and military areas of achievement (U.S. GAO-03-1000, 2003). Accreditation is a significant aspect of the service academies' capability to provide excellent educational opportunities for service members as well as to allow faculty to pursue specific areas of scholarship that are unique to military application. Like traditional institutions of higher education, the service academies are required to demonstrate that they meet existing accreditation standards. The three service academies that are the focus of this research consistently meet accreditation standards. All three institutions are accredited by their respective regional agencies as well as by ABET, the specialized agency responsible for engineering programs.
Since the service academies exist to educate leaders of the military, graduates enjoy excellent opportunities to gain practical experience in their respective fields. This study attempts to open a window to the world of this unique subset of higher learning institutions. It is my hope that future studies will build on the foundations laid by this seminal work.

Epilogue

In many ways, the circumstances that led to the birth of this study involved a dissertation chairperson who currently serves at the rank of Auxiliary Commander in the U.S. Coast Guard, so designated for his professional expertise as a dentist. He saw me as the perfect candidate to pursue this study primarily because of my current status as a member of the armed forces serving at the rank of Commander in the U.S. Navy, combined with my being a student in the accreditation dissertation group. I enthusiastically accepted the challenge until initial research revealed a dearth of information on the service academies in the area of accreditation and of in-depth analysis of similarities and differences among the three institutions. In spite of these many challenges, I was the perfect candidate to undertake this study.

Though I sought assistance from University of Southern California staff assigned to the office in Washington, D.C. in reaching out to the respective ALOs at the three service academies, this effort yielded very limited results. My military contact at the Pentagon proved more productive in providing an avenue to communicate effectively with the service academy ALOs. It was the Pentagon contact who strongly recommended that I communicate with the respective ALOs using my military-assigned email through the Navy Marine Corps Intranet and identify myself by my military rank as a Commander in the Navy. All service academy ALOs responded very favorably and agreed to participate in the study. I am convinced that the ease of scheduling on-site interviews was possible primarily because I carry a military identification card, which grants unescorted access to military installations. The service academies, which are responsible for educating more officers of the armed forces than other accession programs such as the Reserve Officer Training Corps and Officer Candidate School or Officer Training School, are located on military installations.

Finally, serving as a senior officer in the military allows me to better understand the structure, culture and language that are unique to this organization. I attempted to translate these characteristics to demonstrate a fundamental understanding that, in the end, the service academies, while unique, adhere to the same accreditation requirements as non-military institutions of higher learning.

References

Adelman, C., & Silver, H. (1990). Accreditation: The American experience. London, England: Council for National Academic Awards.
Alexander, F. K. (2000). The changing face of accountability: Monitoring and assessing institutional performance in higher education. The Journal of Higher Education, 71, 411-431.
American Accounting Association, Committee on Consequences of Accreditation. (1977). Report of the Committee on Consequences of Accreditation, 52, 165, 167-177.
American Association for Higher Education. (1997). Assessing impact: Evidence and action. Washington, DC: Author.
American Association for Higher Education. (1992). 9 principles of good practice for assessing student learning. North Kansas City, MO: Author.
American Council on Education, Task Force on Accreditation. (2012). Assuring academic quality in the 21st century: Self-regulation in a new era. Retrieved from http://www.acenet.edu/news-room/Documents/Accreditation-TaskForce-revised-070512.pdf
American Council of Trustees and Alumni. (2007). Why accreditation doesn't work and what policymakers can do about it. Washington, DC: Author. Retrieved from https://www.goacta.org/publications/downloads/Accreditation2007Final.pdf
American Council on Education. (2012). Assuring academic quality in the 21st century: Self-regulation in a new era: A report of the ACE National Task Force on institutional accreditation. Washington, DC: American Council on Education. Retrieved from http://www.acenet.edu/AM/Template.cfm?Section=Government_Relations_and_Public_Policy&Template=/CM/ContentDisplay.cfm&ContentID=45275
American Medical Association. (1971). Accreditation of health educational programs. Part I: Staff working papers. Washington, DC: American Medical Association.
Areen, J. (2011). Accreditation reconsidered. Iowa Law Review, 96(5), 1471.
Association of American Colleges and Universities. (2007). College learning for the new global century. Washington, DC: Author. Retrieved from http://www.aacu.org/leap/documents/GlobalCentury_final.pdf
Astin, A. W. (2014, February 18). Accreditation and autonomy. Inside Higher Ed. Retrieved from http://www.insidehighered.com/views/2014/02/18/accreditation-helps-limit-government-intrusion-us-higher-education-essay
Baker, R. L. (2002). Evaluating quality and effectiveness: Regional accreditation principles and practices. The Journal of Academic Librarianship, 28(1), 3-7.
Banta, T. W. (1993). Summary and conclusion: Are we making a difference? In T. W. Banta (Ed.), Making a difference: Outcomes of a decade of assessment in higher education (pp. 357-376). San Francisco, CA: Jossey-Bass.
Barbee, D. E., & Bouck, A. J. (1974). Accountability in education. New York, NY: Petrocelli.
Bardo, J. W. (2009). The impact of the changing climate for accreditation on the individual college or university: Five trends and their implications. New Directions for Higher Education, 145, 47-58.
Barzun, J. (1993). The American university: How it runs, where it is going. Chicago, IL: University of Chicago Press.
Beno, B. A. (2004). The role of student learning outcomes in accreditation quality review. New Directions for Community Colleges, 236, 65-72.
Bensimon, E. M. (2005). Closing the achievement gap in higher education: An organizational learning perspective. New Directions for Higher Education, 131, 99-111.
Berger, A. A. (2008). Seeing is believing: An introduction to visual communication. Boston, MA: McGraw-Hill.
Bernhard, A. (2011). Quality assurance in an international higher education area: A case study approach and comparative analysis. Wiesbaden, Germany: VS Verlag für Sozialwissenschaften.
Bitter, M. E., Stryker, J. P., & Jens, W. G. (1999). A preliminary investigation of the choice to obtain AACSB accounting accreditation. Accounting Educators' Journal, XI, 1-15.
Blauch, L. E. (1959). Accreditation in higher education. Washington, DC: United States Government Printing Office. Retrieved from http://babel.hathitrust.org/cgi/pt?id=mdp.39015007036083;view=1up;seq=1
Bloland, H. G. (2001). Creating the Council for Higher Education Accreditation (CHEA). Phoenix, AZ: Oryx Press.
Bradshaw, C. P., Sudhinaraset, M., Mmari, K., & Blum, R. W. (2010). School transitions among military adolescents: A qualitative study of stress and coping. School Psychology Review, 39(1), 84-105.
Bresciani, M. J. (2006). Outcomes-based academic and co-curricular program review: A compilation of institutional good practice. Sterling, VA: Stylus.
Britt, B., & Aaron, L. (2008). Nonprogrammatic accreditation: Programs and attitudes. Radiologic Technology, 80(2), 123-129.
Brittingham, B. (2008, September/October). An uneasy partnership: Accreditation and the federal government. Change, 40(5), 32-38.
Brittingham, B. (2009). Accreditation in the United States: How did we get to where we are? New Directions for Higher Education, 145, 7-27.
Brittingham, B. (2012). Higher education, accreditation, and change, change, change: What's teacher education to do? In M. LaCelle-Peterson & D. Rigden (Eds.), Inquiry, evidence, and excellence: The promise and practice of quality assurance (pp. 59-75). Washington, DC: Teacher Education Accreditation Council. Retrieved from http://www.teac.org/wp-content/uploads/2012/03/Festschrift-Book.pdf
Brown, H. (2013, September). Protecting students and taxpayers: The federal government's failed regulatory approach and steps for reform. American Enterprise Institute, Center on Higher Education Reform. Retrieved from http://www.aei.org/files/2013/09/27/-protecting-students-and-taxpayers_164758132385.pdf
Burke, J. C., & Associates. (2005). Achieving accountability in higher education: Balancing public, academic, and market demands. San Francisco, CA: Jossey-Bass.
Burke, J. C., & Minassians, H. P. (2002). The new accountability: From regulation to results. New Directions for Institutional Research, 2002(116), 5-19.
Cabrera, A. F., Colbeck, C. L., & Terenzini, P. T. (2001). Developing performance indicators for assessing classroom teaching practices and student learning: The case of engineering. Research in Higher Education, 42(3), 327-352.
Carey, K. (2009, September/October). College for $99 a month. Washington Monthly. Retrieved from http://www.washingtonmonthly.com
Carey, K. (2010). Death of a university. In K. Carey & M. Schneider (Eds.), Accountability in American higher education. New York, NY: Palgrave Macmillan.
Chabot, D. R. (2011). Family systems theories of psychotherapy. In J. C. Norcross, G. R. VandenBos, & D. K. Freedheim (Eds.), History of psychotherapy: Continuity and change (2nd ed., pp. 173-202). Washington, DC: American Psychological Association.
Chandra, A., Martin, L. T., Hawkins, S. A., & Richardson, A. (2010). The impact of parental deployment on child social and emotional functioning: Perspectives of school staff. Journal of Adolescent Health, 46(3), 218-223. doi:10.1016/j.jadohealth.2009.10.009
Chernay, G. (1990). Accreditation and the role of the Council on Postsecondary Accreditation. Washington, DC: Council on Postsecondary Accreditation.
Committee on Institutional Quality and Integrity. (2012). Higher education accreditation reauthorization policy recommendations. Retrieved from http://www2.ed.gov/about/bdscomm/list/naciqi-dir/naciqi_draft_final_report.pdf
Congressional Research Service. (2012, November 30). Congressional nominations to U.S. service academies: An overview and resources for outreach and management. Retrieved from http://www.crs.gov
Conn, P. (1999). Teacher education as it relates to national accreditation and state approval. Education, 120, 10-13.
Coombs, C., Garn, C., & Allred, R. (1993). NCATE accreditation: Getting the most from the self-study. Journal of Teacher Education, 44, 165-169.
Council for Higher Education Accreditation. (2006). CHEA survey of recognized accrediting organizations: Providing information to the public. Washington, DC: Author.
Council for Higher Education Accreditation. (2010). Quality review 2009: CHEA almanac of external quality review. Washington, DC: Author.
Council for Higher Education Accreditation. (2012). The CHEA initiative final report. Washington, DC: Author. Retrieved from http://www.chea.org/pdf/TheCHEAInitiative_Final_Report8.pdf
Council for Higher Education Accreditation. (2014). 2013-2014 directory of CHEA-recognized organizations. Retrieved from http://www.chea.org/pdf/20132014_Directory_of_CHEA_Recognized_Organizations.pdf
Cowen, K. (2013). Higher education's higher accountability. American Council on Education. Retrieved from http://www.acenet.edu/the-presidency/columns-and-features/Pages/Higher-Education's-Higher-Accountability
Cremonini, L., Epping, E., Westerheijden, D., & Vogelsang, K. (2012). Impact of quality assurance on cross-border higher education. Enschede, Netherlands: Center for Higher Education Policy Studies.
Creswell, J. W., & Miller, D. L. (2000). Determining validity in qualitative inquiry. Theory into Practice, 39, 124-130.
Creswell, J. W. (2009). Research design: Qualitative, quantitative, and mixed methods approaches. Los Angeles, CA: Sage Publications, Inc.
Creswell, J. W. (2013). Qualitative inquiry and research design: Choosing among five approaches. Sage Publications, Inc.
Daoust, M. P., Wehmeyer, W., & Eubank, E. (2006). Valuing an MBA: Authentic outcome measurement made easy. Unpublished manuscript. Retrieved from http://www.momentumbusinessgroup.com/resourcesValuingMBA.pdf
Davenport, C. A. (2000). Recognition chronology. Retrieved from http://www.aspa-usa.org/documents/Davenport.pdf
Davis, C. O. (1932). The North Central Association of Colleges and Secondary Schools: Aims, organization, activities. Chicago, IL: The Association. Retrieved from http://babel.hathitrust.org/cgi/pt?id=mdp.39015031490645;view=1up;seq=15
Davis, C. O. (1945). A history of the North Central Association of Colleges and Secondary Schools 1895-1945. Ann Arbor, MI: The North Central Association of Colleges and Secondary Schools. Retrieved from http://library.usc.edu/uhtbin/cgisirsi/x/0/0/5?Searchdata1=1175460{CKEY}
Degree mills. (2014). Retrieved from Council for Higher Education Accreditation website: http://www.chea.org/degreemills/default.htm
Department of Defense. (2011). Strengthening our military families: Meeting America's commitment. Retrieved from http://www.defense.gov/home/features/2011/0111_initiative/Strengthening_our_Military_January_2011.pdf
Dewatripont, M., Sapir, A., Van Pottelsberghe, B., & Veugelers, R. (2010). Boosting innovation in Europe. Bruegel Policy Contribution 2010/06.
Dickeson, R. (2009). Recalibrating the accreditation-federal relationship. Washington, DC: University of Northern Colorado.
Dickeson, R. C. (2006). The need for accreditation reform. Issue paper (The Secretary of Education's Commission on the Future of Higher Education). Washington, DC. Retrieved from http://www2.ed.gov/about/bdscomm/list/hiedfuture/reports/dickeson.pdf
Dickey, F., & Miller, J. (1972). Federal involvement in nongovernmental accreditation. Educational Record, 53(2), 139.
Dill, D. D., Massy, W. F., Williams, P. R., & Cook, C. M. (1996, September/October). Accreditation and academic quality assurance: Can we get there from here? Change: The Magazine of Higher Learning, 28(5).
DiMaggio, P. J., & Powell, W. W. (1983). The iron cage revisited: Institutional isomorphism and collective rationality in organizational fields. American Sociological Review, 48(2), 147-160.
Directories. (2014). Retrieved from Council for Higher Education Accreditation website: http://www.chea.org/Directories/special.asp
Doerr, A. H. (1983). Accreditation: Academic boon or bane. Contemporary Education, 55(1), 6-8.
Dougherty, K. J., Hare, R., & Natow, R. S. (2009). Performance accountability systems for community colleges: Lessons for the voluntary framework of accountability for community colleges. New York, NY: Columbia University, Teachers College, Community College Research Center.
Dowd, A. C. (2003). From access to outcome equity: Revitalizing the democratic mission of the community college. Annals of the American Academy of Political and Social Science, 586, 92-119.
Dowd, A. C., & Grant, J. L. (2006). Equity and efficiency of community college appropriations: The role of local financing. The Review of Higher Education, 29(2), 167-194.
Driscoll, A., & De Noriega, D. C. (2006). Taking ownership of accreditation: Assessment processes that promote institutional improvement and faculty engagement. Sterling, VA: Stylus Publishing.
Eaton, J. S. (2003a). Is accreditation accountable? The continuing conversation between accreditation and the federal government. Washington, DC: Council for Higher Education Accreditation.
Eaton, J. S. (2009). Accreditation in the United States. New Directions for Higher Education, 145, 79-86. doi:10.1002/he.337
Eaton, J. S. (2010). Accreditation and the federal future of higher education. Academe, 96(5), 21-24.
Eaton, J. S. (2012, August). An overview of U.S. accreditation. Washington, DC: Council for Higher Education Accreditation.
Eaton, J. S. (2012b). What future for accreditation: The challenge and opportunity of the accreditation-federal government relationship. In M. LaCelle-Peterson & D. Rigden (Eds.), Inquiry, evidence, and excellence: The promise and practice of quality assurance (pp. 77-88). Washington, DC: Teacher Education Accreditation Council. Retrieved from http://www.teac.org/wp-content/uploads/2012/03/Festschrift-Book.pdf
Eaton, J. S. (2012, April-June). The future of accreditation: Can the collegial model flourish in the context of the government's assertiveness and the impact of nationalization and technology? How? Planning for Higher Education, 40(3), 8.
Eaton, J. S. (2013a, June 13). Accreditation and the next reauthorization of the Higher Education Act. Inside Accreditation with the President of CHEA, 9(3). Retrieved from http://www.chea.org/ia/IA_2013.05.31.html
Eaton, J. S. (2013b, November-December). The changing role of accreditation: Should it matter to governing boards? Trusteeship. Retrieved from http://agb.org/trusteeship/2013/11/changing-role-accreditation-should-it-matter-governing-boards
Council of Regional Accrediting Commissions. (n.d.). A guide for institutions and evaluators. Retrieved from http://www.sacscoc.org/pdf/handbooks/GuideForInstitutions.pdf
El-Khawas, E. (2001). Accreditation in the USA: Origins, developments and future prospects. Paris, France: International Institute for Educational Planning.
Esposito-Smythers, C., Wolff, J., Lemmon, K. M., Bodzy, M., Swenson, R. R., & Spirito, A. (2011). Military youth and the deployment cycle: Emotional health consequences and recommendations for intervention. Journal of Family Psychology, 25, 497-507. doi:10.1037/a0024534
Ewell, P. T. (1984). The self-regarding institution: Information for excellence. Boulder, CO: National Center for Higher Education Management Systems.
Ewell, P. T. (1993). The role of states and accreditors in shaping assessment practice. In T. W. Banta (Ed.), Making a difference: Outcomes of a decade of assessment in higher education (pp. 339-356). San Francisco, CA: Jossey-Bass.
Ewell, P. T. (2001). Accreditation and student learning outcomes: A proposed point of departure. Washington, DC: Council for Higher Education Accreditation. Retrieved from http://www.chea.org/award/StudentLearningOutcomes2001.pdf
Ewell, P. T. (2002). An emerging scholarship: A brief history of assessment. In T. W. Banta (Ed.), Building a scholarship of assessment (pp. 3-25). San Francisco, CA: Jossey-Bass.
Ewell, P. T. (2005). Can assessment serve accountability? It depends on the question. In J. C. Burke (Ed.), Achieving accountability in higher education: Balancing public, academic, and market demands (pp. 78-105). San Francisco, CA: Jossey-Bass.
Ewell, P. T. (2008a). Assessment and accountability in America today: Background and context. New Directions for Institutional Research, 2008(S1), 7-17.
Ewell, P. T. (2008b). U.S. accreditation and the future of quality assurance: A tenth anniversary report from the Council for Higher Education Accreditation. Washington, DC: Council for Higher Education Accreditation.
Ewell, P. T. (2009). Assessment, accountability, and improvement: Revisiting the tension. Champaign, IL: National Institute for Learning Outcomes Assessment. Retrieved from http://www.learningoutcomeassessment.org/documents/PeterEwell_005.pdf
Ewell, P. T. (2010). Twenty years of quality assurance in higher education: What's happened and what's different? Quality in Higher Education, 16(2), 173-175.
Ewell, P. T. (2012). Disciplining peer review: Addressing some deficiencies in U.S. accreditation practices. In M. LaCelle-Peterson & D. Rigden (Eds.), Inquiry, evidence, and excellence: The promise and practice of quality assurance (pp. 89-105). Washington, DC: Teacher Education Accreditation Council. Retrieved from http://www.teac.org/wp-content/uploads/2012/03/Festschrift-Book.pdf
Ewell, P. T., Wellman, J. V., & Paulson, K. (1997). Refashioning accountability: Toward a coordinated system of quality assurance for higher education. Denver, CO: Education Commission of the States.
Finkin, M. W. (1973). Federal reliance on voluntary accreditation: The power to recognize as the power to regulate. Journal of Law and Education, 2(3), 339-375.
Finn, C. E., Jr. (1975, Winter). Washington in academe we trust: Federalism and the universities: The balance shifts. Change, 7(10), 24-29, 63.
Fleming, B. (2012, October). The few, the proud, the infantilized. The Chronicle of Higher Education, 59(7). Retrieved from http://chronicle.com/article/The-Few-The-Proud-the/134830
Floden, R. E. (1980). Flexner, accreditation, and evaluation. Educational Evaluation and Policy Analysis, 2(2), 35-46. doi:10.3102/01623737002002035
Florida State Postsecondary Education Planning Commission. (1995). A review of specialized accreditation. Tallahassee, FL: Author.
Fuller, M. B., & Lugg, E. T. (2012). Legal precedents for higher education accreditation. Journal of Higher Education Management, 27(1). Retrieved from http://www.aaua.org/images/JHEM_-_Vol_27_Web_Edition_.pdf#page=53
Gaston, P. L. (2014). Higher education accreditation: How it's changing, why it must. Sterling, VA: Stylus Publishing.
Gillen, A., Bennett, D. L., & Vedder, R. (2010). The inmates running the asylum? An analysis of higher education accreditation. Washington, DC: Center for College Affordability and Productivity. Retrieved from http://www.centerforcollegeaffordability.org/uploads/Accreditation.pdf
Global University Network for Innovation. (2007). Higher education in the world 2007: Accreditation for quality assurance: What is at stake? New York, NY: Palgrave Macmillan.
Gorman, G. H., Eide, M., & Hisle-Gorman, E. (2010). Wartime military deployment and increased pediatric mental and behavioral health complaints. Pediatrics, 126, 1058-1066. doi:10.1542/peds.2009-2856
Graham, P. A., Lyman, R. W., & Trow, M. (1995). Accountability of colleges and universities. New York, NY: Columbia University.
Gruson, E. S., Levine, D. O., & Lustberg, L. S. (n.d.). Issues in accreditation, eligibility and institutional quality. Cambridge, MA: Sloan Commission on Government and Higher Education.
Hagerty, B. M. K., & Stark, J. S. (1989). Comparing educational accreditation standards in selected professional fields. The Journal of Higher Education, 60(1), 1-20.
Handel, D. D., & Lewis, D. R. (2005). Quality assurance of higher education in transition countries: Accreditation - accountability and assessment. Tertiary Education and Management, 11, 239-258. doi:10.1007/s11233-005-5111-
Harcleroad, F. F., & Dickey, F. G. (1975). Educational auditing and voluntary institutional accrediting (Higher Education Research Report). Washington, DC.
Harcleroad, F. F. (1980). Accreditation: History, process, and problems. Washington, DC: American Association for Higher Education.
Hart Research Associates. (2009). Learning and assessment: Trends in undergraduate education (A survey among members of the Association of American Colleges and Universities). Washington, DC: Author. Retrieved from https://www.aacu.org/membership/documents/2009MemberSurvey_Part1.pdf
Harvey, L. (2004). The power of accreditation: Views of academics. Journal of Higher Education Policy and Management, 26(2), 207-223.
HLC Initial. (2012). Seeking accreditation. Chicago, IL: Higher Learning Commission of the North Central Association. Retrieved from http://www.ncahlc.org/Information-for-Institutions/obtaining-accreditation.html
HLC. (n.d.). Accreditation Liaison Officer. Retrieved from http://www.ncahlc.org/HLC-Institutions/accreditation-liaison-officer
Horton, D. (2005). Consultation with military children and schools: A proposed model. Consulting Psychology Journal: Practice and Research, 57, 259-265. doi:10.1037/1065-9293.57.4.259
Ikenberry, S. O. (2009). Where do we take accreditation? Washington, DC: Council for Higher Education Accreditation.
Jaschik, S., & Lederman, D. (2014). The 2014 Inside Higher Ed survey of college & university presidents. Washington, DC: Inside Higher Ed.
Johnstone, D. B. (2001). The financing and management of higher education: A status report on worldwide reforms. Washington, DC: The World Bank.
Gannett, N. G. (2014, May 11). Service academy nominations often withheld from public. Retrieved from http://www.navytimes.com/article/20140511/NEWS05/305110032
Kells, H. R. (1976). The reform of regional accreditation agencies. Educational Record, 57(1), 24-28.
Kells, H. R., & Kirkwood, R. (1979). Institutional self-evaluation processes. The Educational Record, 60(1), 25-45.
Kennedy, V. C., Moore, F. I., & Thibadoux, G. M. (1985). Determining the costs of self-study for accreditation: A method and a rationale. Journal of Allied Health, 14(2), 175-182.
Kim, A. M., & Yeary, J. (2008). Making long-term separations easier for children and families. Young Children, 63, 32-36. Retrieved from http://search.proquest.com.library.capella.edu/docview/197607072?accountid=27965
Kren, L., Tatum, K. W., & Phillips, L. C. (1993). Separate accreditation of accounting programs: An empirical investigation. Issues in Accounting Education, 8(2), 260-272.
Kuh, G. D. (2010). Risky business: Promises and pitfalls of institutional transparency.
Kuh, G. D., & Ewell, P. T. (2010). The state of learning outcomes assessment in the United States. Higher Education Management and Policy, 22(1), 1-20.
Kuh, G., & Ikenberry, S. (2009). More than you think, less than we need: Learning outcomes assessment in American higher education. Champaign, IL: National Institute for Learning Outcomes Assessment. Retrieved from http://www.learningoutcomeassessment.org/documents/niloafullreportfinal2.pdf
Learned, W. S., & Wood, B. D. (1938). The student and his knowledge: A report to the Carnegie Foundation on the results of the high school and college examinations of 1928, 1930, and 1932. New York, NY: The Carnegie Foundation for the Advancement of Teaching.
Lee, M. B., & Crow, S. D. (1998). Effective collaboration for the twenty-first century: The Commission and its stakeholders (Report and recommendations of the Committee on Organizational Effectiveness and Future Directions). Chicago, IL: North Central Association of Colleges and Schools.
Leef, G. C., & Burris, R. D. (2002). Can college accreditation live up to its promise? Washington, DC: American Council of Trustees and Alumni. Retrieved from https://www.goacta.org/publications/downloads/CanAccreditationFulfillPromise.pdf
Lind, C. J., & McDonald, M. (2003). Creating an assessment culture: A case study of success and struggles. In S. E. Van Kollenburg (Ed.), A collection of papers on self-study and institutional improvement: 3. Promoting student learning and effective teaching (pp. 21-23). (ERIC Document Reproduction Service No. ED 476 673). Retrieved from http://files.eric.ed.gov/fulltext/ED476673.pdf#page=22
Maki, P. L. (2010). Assessing for learning: Building a sustainable commitment across the institution (2nd ed.). Sterling, VA: Stylus Publishing.
Manos, G. H. (2010). War and the military family. Journal of the American Academy of Child & Adolescent Psychiatry, 49(4), 297-299. doi:10.1016/j.jaac.2010.01.005
Maxwell, J. A. (2013). Qualitative research design: An interactive approach (3rd ed.). Thousand Oaks, CA: Sage Publications, Inc.
McCaslin, M., & Wilson, S. K. (2003). The five question method for framing a qualitative research study. The Qualitative Report, 8(3), 447-461. Retrieved October 23, 2014, from http://www.nova.edu/ssss/QR/QR8-3/mccaslin.pdf
McLendon, M. K., Hearn, J. C., & Deaton, R. (2006). Called to account: Analyzing the origins and spread of state performance-accountability policies for higher education. Educational Evaluation and Policy Analysis, 28(1), 1-24.
Merriam, S. B. (1998). Qualitative research and case study applications in education. San Francisco, CA: Jossey-Bass Publishers.
Middaugh, M. F. (2012). Introduction to themed PHE issue on accreditation in higher education. Planning for Higher Education, 40(3), 6-7.
Middle States Commission on Higher Education. (2009, March). Becoming accredited: Handbook for applicants and candidates for accreditation.
Middle States Commission on Higher Education. (2013, June). Accreditation Liaison Officer orientation e-packet.
Miles, J. A. (2012). Jossey-Bass business and management reader: Management and organization theory. Hoboken, NJ: Wiley.
Moltz, D. (2010). Redefining community college success. Inside Higher Ed. Retrieved from http://www.insidehighered.com/news/2011/06/06/u_s_panel_drafts_and_debates_measures_to_gauge_community_college_success
NACIQI Final. (2012, April). Higher Education Act accreditation policy recommendations by the National Advisory Committee on Institutional Quality and Integrity.
National Advisory Committee on Institutional Quality and Integrity. (2012, April). Report to the U.S. Secretary of Education, Higher Education Act reauthorization, accreditation policy recommendations. Retrieved from http://www2.ed.gov/about/bdscomm/list/naciqi-dir/2012-spring/teleconference-2012/naciqi-final-report.pdf
National Center for Education Statistics. (2014). College navigator. Retrieved from http://nces.ed.gov/collegenavigator
Neal, A. D. (2008). Dis-accreditation. Academic Questions, 21(4), 431-445.
New England Association of Schools and Colleges, Commission on Institutions of Higher Education. (n.d.). Accreditation Liaison Officer. Retrieved from http://cihe.neasc.org
North Central Association of Higher Learning Commission. (2014). Criteria and core components. Retrieved from http://www.ncahlc.org/Criteria-Eligibility-and-candidacy
Obama, B. (2013a, February 12). State of the Union address. The White House. Retrieved from http://www.whitehouse.gov/the-press-office/2013/02/12/president-barack-obamas-state-union-address
Obama, B. (2013b, February 12). The President's plan for a strong middle class and a strong America. The White House. Retrieved from http://www.whitehouse.gov/sites/default/files/uploads/sotu_2013_blueprint_embargo.pdf
Obama, B. (2013c, August 22). Fact sheet on the President's plan to make college more affordable: A better bargain for the middle class. The White House. Retrieved from http://www.whitehouse.gov/the-press-office/2013/08/22/fact-sheet-president-s-plan-make-college-more-affordable-better-bargain-
Orlans, H. O. (1974). Private accreditation and public eligibility: Volumes 1 and 2. Retrieved from ERIC database. (ED097858)
Orlans, H. O. (1975). Private accreditation and public eligibility. Washington, DC: U.S. Office of Education.
Perrault, A. H., Gregory, V. L., & Carey, J. O. (2002). The integration of assessment of student learning outcomes with teaching effectiveness. Journal of Education for Library and Information Science, 43(4), 270-282.
Pigge, F. L. (1979). Opinions about accreditation and interagency cooperation: The results of a nationwide survey of COPA institutions. Washington, DC: Committee on Postsecondary Education.
Procopio, C. H. (2010). Differing administrator, faculty, and staff perceptions of organizational culture as related to external accreditation. Academic Leadership Journal, 8(2), 1-15.
Provezis, S. J. (2010). Regional accreditation and learning outcomes assessment: Mapping the territory (Doctoral dissertation, University of Illinois at Urbana-Champaign).
Raessler, K. R. (1970). An analysis of state requirements for college or university accreditation in music education. Journal of Research in Music Education, 18(3), 223-233.
Ratcliff, J. L. (1996). Assessment, accreditation, and evaluation of higher education in the US. Quality in Higher Education, 2(1), 5-19.
Reed, S. C., Bell, J. F., & Edwards, T. C. (2011). Adolescent well-being in Washington state military families. American Journal of Public Health, 101(9), 1676-1682. Retrieved from http://search.proquest.com.library.capella.edu/docview/884707062?accountid=27965
Reidlinger, C. R., & Prager, C. (1993). Cost-benefit analyses of accreditation. New Directions for Community Colleges, 83, 39-47.
Rhodes, T. L. (2012). Show me the learning: Value, accreditation, and the quality of the degree. Planning for Higher Education, 40(3).
Sacks, P., & Whildin, S. (1993). Preparing for accreditation: A handbook for academic librarians. Chicago, IL: American Library Association.
Schermerhorn, J. W., Reisch, J. S., & Griffith, P. J. (1980). Educator perceptions of accreditation. Journal of Allied Health, 9(3), 176-182.
Scriven, M. (2000). Evaluation ideologies. In D. L. Stufflebeam, G. F. Madaus, & T. Kellaghan (Eds.), Evaluation models (pp. 250-278). Boston, MA: Kluwer Academic Publishers.
Shaw, R. (1993). A backward glance: To a time before there was accreditation. North Central Association Quarterly.
Shibley, L. R., & Volkwein, J. F. (2002, June). Comparing the costs and benefits of re-accreditation processes. Paper presented at the annual meeting of the Association for Institutional Research, Toronto, Ontario, Canada.
Smith, V. B., & Finney, J. E. (2008, May/June). Redesigning regional accreditation: An interview with Ralph A. Wolff. Change, 18-24.
Southern Association of Colleges and Schools. (2007). The Quality Enhancement Plan. Retrieved from http://www.sacscoc.org/pdf/081705/QEP%20Handbook.pdf
Spangehl, S. D. (2012). AQIP and accreditation: Improving quality and performance: Merely getting reaccredited is hardly a mark of distinction for well-operated colleges and universities. Planning for Higher Education, 40(3), 29.
Stake, R. E. (1995). The art of case study research. Thousand Oaks, CA: Sage.
Stensaker, B., & Harvey, L. (2006). Old wine in new bottles? A comparison of public and private accreditation schemes in higher education. Higher Education Policy, 65-85.
Sursock, A., & Smidt, H. (2010). Trends 2010: A decade of change in European higher education. Brussels: European University Association.
Trivett, D. A. (1976). Accreditation and institutional eligibility (ERIC/Higher Education Research Report No. 9). Washington, DC: American Association for Higher Education.
Trow, M. (1996). Trust, markets, and accountability in higher education: A comparative perspective. Higher Education Policy, 9(4), 309-324.
Uehling, B. S. (1987a). Accreditation and the institution. North Central Association Quarterly, 62(2), 350-360.
United Nations Educational, Scientific and Cultural Organization. (2005). Guidelines for quality provision in cross-border higher education. Paris: UNESCO.
United States Air Force Academy. (2014). Welcome to admissions. Retrieved from http://www.academyadmissions.com/admissions/the-application-process/nominations
United States Coast Guard Academy. (n.d.). A brief history. Retrieved from http://www.uscg.gov
United States Department of Defense. (2011). Directive Number 1322.22: Service academies. Incorporating Change 1, September 20, 2011.
United States Department of Education. (2006). A test of leadership: Charting the future of U.S. higher education. A report of the commission appointed by Secretary of Education Margaret Spellings. Washington, DC: Author. Retrieved from http://www2.ed.gov/about/bdscomm/list/hiedfuture/reports/final-report.pdf
United States Government Accounting Office. (1975). Academic and military programs of the five service academies. Report to the Congress by the Comptroller General of the United States. Washington, DC: Author.
United States Government Accounting Office. (2003, September). Military education: DOD needs to enhance performance goals and measures to improve oversight of military academies. Report to the Subcommittee on Defense, Committee on Appropriations, House of Representatives. Washington, DC. GAO-03-1000.
United States Government Accounting Office. (2007, January). Military personnel: Strategic plan needed to address Army's emerging officer accession and retention challenges. Report to the Committee on Armed Services, House of Representatives. Washington, DC. GAO-07-224.
United States Government Accounting Office. (2012, February). Military education:
United States Military Academy West Point. (2012). Academic program strategic plan 2013-2018.
Van Damme, D. (2000). Internationalization and quality assurance: Towards worldwide accreditation? European Journal for Education Law and Policy, 4, 1-20.
Van der Ploeg, F., & Veugelers, R. (2008). Towards evidence-based reform of European universities. CESifo Economic Studies, 54(2), 99-120.
Volkwein, J. F., Lattuca, L. R., Harper, B. J., & Domingo, R. J. (2007). Measuring the impact of professional accreditation on student experiences and learning outcomes. Research in Higher Education, 48(2), 251-282.
Walker, J. J. (2010). A contribution to the self-study of the postsecondary accreditation protocol: A critical reflection to assist the Western Association of Schools and Colleges. Paper presented at the WASC Postsecondary Summit, Temecula, CA.
Warner, W. K. (1977). Accreditation influences on senior institutions of higher education in the western accrediting region: An assessment. Oakland, CA: Western Association of Schools and Colleges.
Weissburg, P. (2008). Shifting alliances in the accreditation of higher education: Self-regulatory organizations. Dissertation Abstracts International, DAI-A 70/02, August 2009. ProQuest ID 250811630.
Wellman, J. V. (2001, March/April). Assessing state accountability systems. Change, 47-52.
Wergin, J. F. (2005). Waking up to the importance of accreditation. Change, 37(3), 35-41.
Wergin, J. F. (2012). Five essential tensions in accreditation. In M. LaCelle-Peterson & D. Rigden (Eds.), Inquiry, evidence, and excellence: The promise and practice of quality assurance (pp. 27-38). Washington, DC: Teacher Education Accreditation Council. Retrieved from http://www.teac.org/wp-content/uploads/2012/03/Festschrift-Book.pdf
Westerheijden, D. F., Stensaker, B., & Rosa, M. J. (2007). Quality assurance in higher education: Trends in regulation, translation and transformation. Dordrecht, The Netherlands: Springer.
Western Association of Schools and Colleges. (1998). Eight perspectives on how to focus
Western Association of Schools and Colleges. (2002). Guide to using evidence in the accreditation process: A resource to support institutions and evaluation teams. Retrieved from http://www.wascweb.org/senior/Evidence_Guide.pdf
Western Association of Schools and Colleges' Accrediting Commission for Community and Junior Colleges (WASC-ACCJC). (2011). Retrieved from http://www.accjc.org
White House. (2013, February 12). The President's plan for a strong middle class and a strong America. Retrieved from http://www.whitehouse.gov/the-press-office/2013/02/12/president-s-plan-strong-middle-class-and-strong-america
White House. (2014). Nominations and appointments. Retrieved from http://whitehouse.gov
Wiedman, D. (1992). Effects on academic culture of shifts from oral to written traditions: The case of university accreditation. Human Organization, 51(4), 398-407.
Willis, C. R. (1994). The cost of accreditation to educational institutions. Journal of Allied Health, 23, 39-41.
Wolff, R. A. (1990, June 27-30). Assessment 1990: Accreditation and renewal. Paper presented at the Fifth AAHE Conference on Assessment in Higher Education, Washington, DC.
Wolff, R. A. (2005). Accountability and accreditation: Can reforms match increasing demands? In J. C. Burke (Ed.), Achieving accountability in higher education: Balancing public, academic, and market demands (pp. 78-105). San Francisco, CA: Jossey-Bass.
Wood, A. L. (2006). Demystifying accreditation: Action plans for a national or regional accreditation. Innovative Higher Education, 31(1), 43-62. doi:10.1007/s10755-006-9008-6
Woolston, P. J. (2012). The costs of institutional accreditation: A study of direct and indirect costs (Order No. 3542492, University of Southern California). ProQuest Dissertations and Theses, 313. Retrieved from http://search.proquest.com/docview/1152182950?accountid=14749
World Bank. (2002). Constructing knowledge societies: New challenges for tertiary education. Washington, DC: World Bank.
Wriston, H. M. (1960). The futility of accrediting. The Journal of Higher Education, 31(6), 327-329.
Yung-chi Hou, A. (2014). Quality in cross-border higher education and challenges for the internationalization of national quality assurance agencies in the Asia-Pacific region: The Taiwanese experience. Studies in Higher Education, 39(1), 135-152.
Zis, S., Boeke, M., & Ewell, P. (2010). State policies on the assessment of student learning outcomes: Results of a fifty-state inventory. Boulder, CO: National Center for Higher Education Management Systems (NCHEMS).

Appendix A

Interview Guide

The following are some of the proposed guiding questions that will be asked of the Accreditation Liaison Officers at the respective service academies. The open-ended questions are designed to elicit detailed responses from each ALO.

1. Describe your current role and responsibility as ALO.
2. Describe the organizational relationship among the various stakeholders at your institution (Board of Visitors, Superintendent, Faculty, and Students).
3. What are the stages involved in preparation for accreditation?
4. What are some of the benefits that have resulted from the accreditation process?
5. Describe key concepts or areas in accountability that impact the accreditation process at your academy.
6. What process does your institution have in place for the development and improvement of student learning outcomes?
7. What impact does accreditation have on your institution's student learning outcomes?
8. How do student learning outcomes affect budgetary decisions?
9. Describe the quality of officers produced by your institution compared to other accession programs.
10. What makes your institution different from the other service academies?
11. What similarities does your institution have in common with the other service academies?
12. Is there anyone else that you can recommend that I can speak with to gain more information about the accreditation process at your institution?

Appendix B

Introductory Letter to ALOs

June 10, 2014

Dear ___________________________:

My name is Professor Robert G. Keim, Ed.D. I am an Associate Professor at the University of Southern California's Rossier School of Education. I am writing to introduce one of my current doctoral students, Commander Rufus E. Cayetano, USN. For the last 5 years, I have been chairing a series of dissertation projects in the field of higher education on the topic of accreditation in higher education. Mr. Cayetano is in the process of compiling his dissertation material on the specific topic of accreditation processes and procedures in the United States military academies. This will be a comparative, descriptive study, without assessment of or judgmental comment on those processes and procedures. He is interested in identifying any unique aspects of the accreditation process in the academies compared to what is done in similar, civilian regionally accredited programs. It is anticipated that the results of his research will be a significant contribution to the literature on academic accreditation.

As the chairman of Commander Cayetano's dissertation committee, I have final responsibility for all aspects of his dissertation process. I will ensure any and all confidentiality requested by all respondents to his inquiry. I would be happy to discuss this project in depth with anyone. My contact information is: email rkeim@usc.edu or phone 213-740-6864.

We deeply respect how valuable your time is and are sincerely grateful for your cooperation and any help you can provide. Mr. Cayetano will contact you with further details of his dissertation project in the near future. Thank you very much.

Sincerely Yours,

Professor Robert G. Keim, Ed.D.