PRIORITIES AND PRACTICES: A MIXED METHODS STUDY OF JOURNALISM ACCREDITATION

by

Benedict de la Merced Dimapindan

A Dissertation Presented to the
FACULTY OF THE USC ROSSIER SCHOOL OF EDUCATION
UNIVERSITY OF SOUTHERN CALIFORNIA
In Partial Fulfillment of the Requirements for the Degree
DOCTOR OF EDUCATION

August 2015

Copyright 2015 Benedict de la Merced Dimapindan

ACKNOWLEDGEMENTS

The process of writing a dissertation has taught me as much about myself as about the subject matter of this study. In no way has this journey been a singular one. All along the way, I've had help, and lots of it. There is not enough room to properly recognize all of the people who have played a role in getting me to this point. But that won't stop me from trying. To my cohort, from the Saturday weekend gang, to the ed psych concentration, to the dissertation group, I have been so fortunate to have been surrounded by such thoughtful, kind-hearted, and considerate people, whom I am honored to call my friends. I especially thank Joey, Dinesh, Susan, and Nathan for putting up with my incessant questions and steering me back on course every time I felt lost. To Dr. Keim and the Rossier faculty, I owe you a great deal of thanks for the support over these past years. Because of your guidance, I haven't just learned; I've grown. To my parents, brother, sister, grandmothers, cousins, friends, family, and godchildren, thank you for believing in me. You've brought the joy and balance I've so greatly needed throughout this entire process. And finally, to my beautiful and caring wife, Diana, you make me strive to be my best me every day. And you deserve nothing less. I am humbled to have a truly better half with whom to share this achievement. This would not, and could not, be possible without you. Thank God it's done.

TABLE OF CONTENTS

ACKNOWLEDGEMENTS
LIST OF TABLES
LIST OF FIGURES
ABSTRACT
CHAPTER ONE: OVERVIEW OF THE STUDY
    Introduction
    Journalism and Mass Communications
    Accreditation in U.S. Higher Education
    Definition and Process of Accreditation
    History of Accreditation
    Early Institutional Accreditation
    Regional Accreditation: 1885 to 1920
    Regional Accreditation: 1920-1950
    Accreditation: 1950 to Present
    Current State and Future of Accreditation
    Journalism Accreditation
    Rationale for the Study
    Statement of the Problem
    Purpose of the Study
    Significance of the Study
CHAPTER TWO: LITERATURE REVIEW
    Introduction
    Critical Assessment of Accreditation
    Effects of Accreditation
    Trend toward Learning Assessment
    Framework for Learning Assessment
    Benefits of Accreditation on Learning
    Organizational Effects of Accreditation
    Future Assessment Recommendations
    Challenges to Student Learning Outcomes
    Organization Learning Challenges
    Lack of Faculty Buy-in
    Lack of Institutional Investment
    Difficulty with Integration into Local Practice
    Outcome Equity
    Tension between Improvement and Accountability
    Transparency Challenges
    Future of Outcomes Assessment
    Institutional Costs of Accreditation
    Alternatives to Accreditation
    International Accreditation in Higher Education
    Internationalization of Accreditation
    Specialized Accreditation
    Research on Journalism Accreditation
    Little Observed Difference in Curriculum
    Perceived Value of Journalism Accreditation
    The Internet and Convergence
    Impact on Diversity
    Assessment of Journalism Student Learning
    Assessment of Student Writing
    Critique of Journalism Education
    Evaluating ACEJMC Standards
    Current Journalism Education Landscape
CHAPTER THREE: METHODOLOGY
    Purpose of the Study
    Sample and Population
    Instrumentation
    Survey
    Validity
    Data Collection
    Response Rate
    Limitations
CHAPTER FOUR: RESULTS
    Purpose of the Study
    Response Rate
    Why Pursue Accreditation?
    Ranking the ACEJMC Professional Values and Competencies
    Values/Competencies Seen as Most Important
    Rankings by Accredited vs. Non-Accredited Schools
    Significant Difference in Mean Scores
    Explanation of Curricular Emphasis
    Assessment of Learning
    How is learning assessed?
    No Significant Difference in Assessment Practices
    Rationale for use of measures
    Effectiveness of assessment measures
    Student Outcomes
    Job Placement, Graduation, and Retention Rates
    Accredited vs. Non-Accredited Mean Scores
    Significant Difference in Mean Scores
    Correlations: Competencies and Student Outcomes
    No Significance: Assessments and Student Outcomes
    Conclusion
CHAPTER FIVE: DISCUSSION
    Purpose of the Study
    Discussion of Key Findings and Research Recommendations
    Why Do Schools Pursue Accreditation?
    Which competencies do schools prioritize as most important?
    How do schools assess student learning?
    What are schools' measurable student outcomes?
    Limitations
    Implications for Practice
    Conclusion
REFERENCES
APPENDIX A: SURVEY COVER EMAIL
APPENDIX B: SURVEY INSTRUMENT

LIST OF TABLES

Table 1: ACEJMC professional values/competencies mean scores, by school accreditation status
Table 2: Effectiveness of student learning assessments
Table 3: Student outcome rates, by school accreditation status

LIST OF FIGURES

Figure 1. Accreditation status of journalism programs.
Figure 2. Percentage of respondents who are accreditation liaisons.
Figure 3. Understand and apply principles and laws of freedom of speech and press.
Figure 4. Demonstrate an understanding of the history and role of professionals and institutions in shaping communications.
Figure 5. Demonstrate an understanding of gender, race, ethnicity, sexual orientation and, as appropriate, other forms of diversity in domestic society in relation to mass communications.
Figure 6. Demonstrate an understanding of the diversity of peoples and cultures and of the significance and impact of mass communications in a global society.
Figure 7. Understand concepts and apply theories in the use and presentation of images and information.
Figure 8. Demonstrate an understanding of professional ethics.
Figure 9. Thinking critically, creatively, and independently.
Figure 10. Conduct research appropriate to communications professions.
Figure 11. Writing correctly and clearly.
Figure 12. Critically evaluate own work and that of others for accuracy, fairness, clarity, and style.
Figure 13. Apply basic numerical and statistical concepts.
Figure 14. Apply current tools and technologies appropriate for communications professions.
Figure 15. Methods of student learning assessment.
Figure 16. Job placement rate in 2012.
Figure 17. Job placement rate in 2013.
Figure 18. Job placement rate in 2014.
Figure 19. Graduation rate in 2012.
Figure 20. Graduation rate in 2013.
Figure 21. Graduation rate in 2014.
Figure 22. Retention rate in 2012.
Figure 23. Retention rate in 2013.
Figure 24. Retention rate in 2014.

ABSTRACT

This mixed methods study focused on the curriculum, learning assessments, and student outcomes (job placement, graduation, and retention rates) at undergraduate journalism programs.
The goal was to explore the relationship between curricular priorities/learning assessment practices and measurable student outcomes. The study also sought to determine whether any differences exist between ACEJMC-accredited and unaccredited programs. The study led to several noteworthy findings. The results showed considerable overlap in how accredited and unaccredited programs prioritize professional competencies in their respective curricula, while also revealing statistically significant differences with regard to ethics and numerical/statistical concepts. The study also showed that capstone projects were the most common summative learning assessment used by journalism schools, either as a singular measure or in combination with other measures. Perhaps even more important, no significant difference was found in how accredited versus unaccredited undergraduate programs assess student learning. Moreover, an unexpected pattern emerged among the self-reported student outcomes. In all but one metric, unaccredited undergraduate journalism programs had higher mean scores than their accredited counterparts, and a statistically significant difference was found for graduation rates in 2012 and 2013. In addition, data analysis revealed nine moderate to strong correlations between programs' rankings of professional journalism competencies and their student outcomes. Among them, there were strong, negative correlations between programs' curricular emphasis on numerical concepts and their job placement rates in both 2012 and 2013, and there were strong, positive correlations between curricular emphasis on evaluation skills and retention rates in both 2013 and 2014. There were also moderate, negative correlations between emphasis on gender/other diversity and retention rates in 2012 and 2013.

CHAPTER ONE: OVERVIEW OF THE STUDY

Introduction

Accreditation is a topic of broad importance and much debate in higher education. It is a voluntary, complex process of external review in which colleges and universities, as well as the specific academic programs within them, are scrutinized for quality assurance and improvement. In the United States, accreditation is conducted by private, nonprofit organizations – rather than by the government – and its history dates back more than a century, stemming from concerns to protect public health and serve the public interest (Eaton, 2012). The most commonly acknowledged benefits of accreditation include students' access to federal financial aid funding, legitimacy in the eyes of the public, government accountability, consideration for foundation grants and employer tuition credits, positive reflection among peers, and standards that support student mobility in terms of transferring and seeking a higher degree. In addition, the U.S. accreditation process is more cost-effective than international models, which are far more regulated (Brittingham, 2009). Beginning in the mid-1980s, the focus of higher education accreditation in the U.S. shifted toward greater accountability and student learning assessment (Ewell, 2001; Beno, 2004; Wergin, 2005, 2012). At that time, higher education was portrayed in the media as costly, inefficient, and unresponsive to the public (Bloland, 2001). Two reasons in particular fueled the public's concerns: first, the widespread perception that students were underperforming academically, and second, the needs of the business sector (Ewell, 2001).
Businesses and employers demanded that college graduates enter the workforce with high levels of literacy, problem-solving ability, and collaborative skills in order to support the emerging knowledge economy of the 21st century. As a result, institutions of higher education placed strong emphasis on student learning outcomes as the primary means of gauging effectiveness (Beno, 2004).

Journalism and Mass Communications

Apart from the institutional level, the separate academic programs that comprise colleges and universities may also seek accreditation from an accrediting organization representing a specific field. This process, known as specialized accreditation, is meant to ensure that the quality of specialized training and knowledge needed for professional degrees and careers is adequate. Specialized accrediting organizations exist for a breadth of disciplines including education, law, medicine, social work, and journalism, which is the focus of this study. In the field of journalism, the Accrediting Council on Education in Journalism and Mass Communications (ACEJMC) oversees the external review and grants the accreditation of journalism and mass communications programs. This study seeks to analyze the curricular priorities and learning assessment practices of undergraduate journalism programs nationwide to determine if there are any correlations to student outcomes (namely, job placement rates, graduation rates, and retention rates), and to see whether differences exist between accredited and non-accredited programs. In addition, this study is conducted at a critical point in journalism education, amid declining undergraduate student enrollment and major shifts in employment opportunities, both of which will be discussed in more detail later in the chapter. At the direction of the dissertation chair, Chapter One was written by group authorship, including Nathan Barlow, Rufus Cayetano, Benedict Dimapindan, and Jill Richardson. The first sections of this chapter provide a robust overview of U.S. higher education accreditation. They explain the definition and process of accreditation; its history, which dates back to the mid-17th century (written by Barlow and Cayetano); and the current state and future of accreditation (written by Richardson). The latter portion of the chapter focuses on journalism accreditation specifically. It includes a summary of the process of journalism accreditation as well as an overview of this particular study — including the rationale for the study, statement of the problem, and significance of the study (written by Dimapindan).

Accreditation in U.S. Higher Education

Definition and Process of Accreditation

Accreditation is a process of external quality review created and used by higher education to scrutinize colleges, universities, and programs for quality assurance and quality improvement (Eaton, 2012a). In the U.S., this process is conducted by private, nonprofit organizations. Its origins emanate from concerns to ensure public safety and to protect the public interest. To earn and maintain accreditation, colleges and universities must demonstrate to colleagues from peer institutions that they meet or surpass mutually agreed-upon standards (Middle States Commission on Higher Education, 2009). The process of accreditation is an ongoing cycle of peer review that takes place anywhere from every few years to every 10 years (Eaton, 2012a).
In order to gain or reacquire accreditation, an institution must complete a series of tasks, which include: "preparation of evidence of accomplishment by the institution or program, scrutiny of this evidence and a site visit by faculty and administrative peers and action by the accrediting organization to determine accreditation status" (p. 4). Typically, an institution will engage in a self-study, a written summary of performance related to accrediting criteria. Next, an accreditation team consisting of faculty and administrative peers will review the self-study, and a contingent is sent to the school to conduct a site visit. Afterward, the accrediting organization's decision-making commission – which may include faculty and administrators of peer institutions, as well as members of the public – votes to affirm, reaffirm, or deny accreditation (Eaton, 2012a). If accreditation is conferred, institutions and programs will continue to be reviewed periodically. According to the Council for Higher Education Accreditation (2012), there are four primary functions of accreditation: quality assurance; a means to provide access to federal and state funds; assisting student mobility; and encouraging confidence from the private sector. First, as a quality assurance mechanism, accreditation status informs current and prospective students, as well as the broader public, that an institution meets at least the basic standards of quality. Second, institutional accreditation is a requirement in order for students to access federal – and in some cases, state – financial aid. Third, accreditation helps facilitate the process for students looking to transfer to different schools or pursue graduate school. And fourth, the accreditation status of an institution signals to employers that graduates have the requisite credentials to enter into a profession (CHEA, 2012). In addition, there are four types of accrediting organizations: regional accreditors for public degree-granting, two- and four-year institutions (these include commissions such as WASC, the Western Association of Schools and Colleges); national faith-related accreditors for religiously affiliated institutions; national career-related accreditors, mainly for for-profit, career-focused institutions; and programmatic accreditors, which accredit specialized professional fields such as journalism (Eaton, 2012a).

History of Accreditation

Early Institutional Accreditation

Accreditation has a long lineage among the universities and colleges of the United States, dating back to the self-initiated external review of Harvard in 1642. This external review, conducted only six years after Harvard's founding by peers from universities in Great Britain and Europe, was intended to ascertain the rigor of its courses (Davenport, 2000; Brittingham, 2009). This type of self-study is not only the first example of peer review in America, but it also highlights the need for self and peer regulation in the U.S. educational system given the lack of federal governmental regulation. This lack of federal government intervention in the evaluation of educational institutions is a main reason accreditation in the U.S. developed the way it did (Brittingham, 2009). While the federal government does not directly accredit educational institutions, the first example of an accrediting body came through a state government: in 1784, the New York Board of Regents was established as the first regionally organized accrediting organization.
The Board was set up like a corporate office, with the educational institutions as franchisees. The Board mandated standards that each college or university had to meet if that institution was to receive state financial aid (Blauch, 1959). Not only did Harvard pioneer accreditation in the U.S. with its early external review of its own courses, but the president of Harvard University initiated a national movement in 1892 when he organized and chaired the Committee of Ten, an alliance formed among educators (mostly college and university presidents) to seek standardization of educational philosophies and practices in the U.S. through a system of peer approval (Davis, 1945; Shaw, 1993). Around this same time, various associations and foundations began to undertake accreditation reviews of educational institutions in the U.S. based on their own standards. Associations such as the American Association of University Women, the Carnegie Foundation, and the Association of American Universities would, for a variety of reasons and clienteles (e.g., gender equality, professorial benefits), evaluate various institutions and generate lists of approved or accredited schools. These associations were responding to the desire of their constituents to have accurate information regarding the validity and efficacy of the different colleges and universities (Orlans, 1975; Shaw, 1993).

Regional Accreditation: 1885 to 1920

When these associations declined to broaden or continue their accrediting practices, individual institutions began to unite to form regional accrediting bodies to assess secondary schools' adequacy in preparing students for college (Brittingham, 2009). Colleges were then measured by the quality of students they admitted, based on standards at the secondary school level that were gauged by the accrediting agency. The regional accrediting agencies also began to focus on creating a list of colleges that were good destinations for incoming freshmen. If an institution was a member of the regional accreditation agency, it was considered an accredited college; or, more precisely, the institutions that belonged to an accrediting agency were considered colleges, while those that did not belong were not (Blauch, 1959; Davis, 1932; Ewell, 2008; Orlans, 1974; Shaw, 1993). Regional accrediting bodies were formed in the following years: the New England Association of Schools and Colleges (NEASC) in 1885, the Middle States Association of Colleges and Secondary Schools (MSCSS and Middle States Commission on Higher Education [MSCHE]) in 1887, the North Central Association of Colleges and Schools (NCA) and the Southern Association of Colleges and Schools (SACS) in 1895, the Northwest Commission on Colleges and Universities (NWCCU) in 1917, and finally the Western Association of Schools and Colleges (WASC) in 1924 (Brittingham, 2009). Regional accrediting associations began to create instruments for the purpose of establishing unity and standardization with regard to entrance requirements and college standards (Blauch, 1959). For example, in 1901 MSCHE and MSCSS created the College Entrance Examination Board to standardize college entrance requirements. The NCA also published its first set of standards for its higher education members in 1909 (Brittingham, 2009).
Although there were functioning regional accreditation bodies in most of the states, in 1910 the Department of Education created its own national list of recognized (accredited) colleges. Because of the public's pressure to keep the federal government from controlling higher education directly, President Taft blocked publication of the list of colleges, and the Department of Education discontinued its active pursuit of accrediting schools. Instead, it reestablished itself as a resource for the regional accrediting bodies for data collection and comparison (Blauch, 1959; Ewell, 2008; Orlans, 1975).

Regional Accreditation: 1920-1950

With the regional accrediting bodies in place, ideas of what an accredited college was became more diverse (e.g., vocational colleges, community colleges). Out of the greater differences among schools with regard to school types and institutional purposes, there arose a need to apply more qualitative measures and a focus on high rather than minimum outcomes (Brittingham, 2009). As qualitative standards became the norm, school visits by regional accreditors became necessary once a school demonstrated struggles. The regional organizations began to measure success (and therefore grant accredited status) based on whether an institution met its own standards as outlined in its own mission, rather than a predetermined set of criteria (Brittingham, 2009). In other words, if a school did what it said it would do, it could be accredited. The accreditation process later became a requirement for all member institutions. Self- and peer-reviews, which became a standard part of the accreditation process, were undertaken by volunteers from the member institutions (Ewell, 2008). Accrediting bodies began to be challenged as to their legitimacy in classifying colleges as accredited or not. The Langer Case of 1938 is a landmark case that established the standing of accrediting bodies in the United States. Governor William Langer of North Dakota lost his legal challenge of the NCA's denial of accreditation to North Dakota Agricultural College. This ruling carried over to other legal cases that upheld accreditation as a legitimate and voluntary process (Fuller & Lugg, 2012; Orlans, 1974). In addition to the regional accrediting bodies, there arose other associations meant to regulate the accrediting agencies themselves. The Joint Commission on Accrediting was formed in 1938 to validate legitimate accrediting agencies and discredit questionable or redundant ones. After some changes to its mission and membership, the Joint Commission on Accrediting was renamed the National Commission on Accrediting (Blauch, 1959).

Accreditation: 1950 to Present

The period from 1950 to 1985 has been called the golden age of higher education and was marked by increasing federal regulation. During this period, key developments in the accreditation process included self-study becoming standard, site visits being conducted by colleagues from peer institutions, and institutions being visited regularly on a cycle (Woolston, 2013). With the passage of the Veterans' Readjustment Assistance Act of 1952, the U.S. Commissioner of Education was required to publish a list of recognized accreditation associations (Bloland, 2001).
This act provided education benefits to veterans of the Korean War directly, rather than to the educational institution being attended, increasing the importance of accreditation as a mechanism for recognizing legitimacy (Woolston, 2012). A more "pivotal event" occurred in 1958 with the National Defense Education Act's (NDEA) allocation of funding for NDEA fellowships and college loans (Weissburg, 2008). The NDEA limited participating institutions to those that were accredited (Gaston, 2014). In 1963, the Higher Education Facilities Act was passed by the U.S. Congress. This act required that higher education institutions receiving federal funds through enrolled students be accredited. Arguably the most striking expansion in accreditation's mission coincided with the passage of the Higher Education Act (HEA) in 1965 (Gaston, 2014). Title IV of this legislation expressed the intent of Congress to use federal funding to broaden access to higher education. According to Gaston (2014), having committed to this much larger role in encouraging college attendance, the federal government found it necessary to affirm that institutions benefitting from such funds were worthy of them. Around the same time, the National Committee of Regional Accrediting Agencies (NCRAA) became the Federation of Regional Accrediting Commissions of Higher Education (FRACHE). In 1965, the Higher Education Act was first signed into law. That law strengthened the resources available to higher education institutions and provided financial assistance to students enrolled at those institutions. The law was especially important to accreditation because it forced the U.S. Department of Education (USDE) to determine and list a much larger number of institutions eligible for federal programs (Trivett, 1976). In 1967, the NCA revoked Parsons College's accreditation, citing "administrative weakness" and a $14 million debt. The college appealed, but the courts denied the appeal on the basis that the regional accrediting associations were voluntary bodies (Woolston, 2013). The need to deal with a much larger number of potentially eligible institutions led the U.S. Commissioner of Education to create, within the Office of Education's Bureau of Higher Education, the Accreditation and Institutional Eligibility Staff (AIES), along with an advisory committee. The purpose of the AIES, which was created in 1968, was to administer the federal recognition and review process involving the accrediting agencies (Dickey & Miller, 1972). In 1975, the National Commission on Accrediting and FRACHE merged to form a new organization called the Council on Postsecondary Accreditation (COPA). The newly created national accreditation association encompassed an astonishing array of types of postsecondary education, including community colleges, liberal arts colleges, proprietary schools, graduate research programs, bible colleges, trade and technical schools, and home-study programs (Chambers, 1983). Since 1985, accountability has become the issue of paramount importance in the field of education. According to Woolston (2013), key developments in the accreditation process during this period include rising costs in higher education resulting in high student loan default rates, as well as accreditation enduring increasing criticism for a number of apparent shortcomings, most notably a lack of demonstrable student learning outcomes. At the same time, accreditation has been increasingly and formally defended by various champions of the practice.
For example, congressional hostility reached a crisis stage in 1992 when Congress, in the midst of debates on the reauthorization of the Higher Education Act, threatened to bring to a close the role of the accrediting agencies as gatekeepers for financial aid. During the early 1990s, the federal government grew increasingly intrusive in matters directly affecting the accrediting agencies (Bloland, 2001). As a direct consequence, Subpart 1 of Part H of the Higher Education Act amendments involved an increased role for the states in determining the eligibility of institutions to participate in the student financial aid programs of the aforementioned Title IV. For every state, this meant the creation of a State Postsecondary Review Entity (SPRE) that would review institutions that the USDE secretary had identified as having triggered such review criteria as high default rates on student loans (Bloland, 2001). The SPREs were short-lived and were abandoned in 1994, largely because of a lack of adequate funding. The 1992 reauthorization also created the National Advisory Committee on Institutional Quality and Integrity (NACIQI) to replace the AIES. For several years, the regional accrediting agencies had entertained the idea of pulling out of COPA and forming their own national association. Based on dissatisfaction with the organization, regional accrediting agencies proposed a resolution to terminate COPA by the end of 1993. Following a successful vote on the resolution, COPA was effectively terminated (Bloland, 2001). A special committee, generated by the COPA plan of dissolution of April 1993, created the Commission on Recognition of Postsecondary Accreditation (CORPA) to continue the work of recognizing accrediting agencies (Bloland, 2001). However, CORPA was formed primarily as an interim organization to continue national recognition of accreditation. In 1995, national leaders in accreditation formed the National Policy Board (NPB) to shape the creation and legitimation of a national organization overseeing accreditation. The national leaders in accreditation were adamant that the new organization should reflect higher education's needs rather than those of postsecondary education. Following numerous intensive meetings, a new organization named the Council for Higher Education Accreditation (CHEA) was formed in 1996 as the official successor to CORPA (Bloland, 2001). In 2006, the Spellings Commission "on the future of higher education" delivered the verdict that accreditation "has significant shortcomings" (USDE Test, 2006, p. 7) and accused accreditation of being both ineffective and a barrier to innovation. After the release of the Spellings Commission's report, the next significant event on the subject of accreditation came during President Barack Obama's State of the Union Address on February 12, 2013. In conjunction with the president's address, the White House released a nine-page document titled "The President's Plan for a Strong Middle Class and a Strong America".
The document stated that the President was going to call on Congress to consider value, affordability, and student outcomes in making determinations about which colleges and universities receive access to federal student aid, either by incorporating measures of value and affordability into the existing accreditation system, or by establishing a new, alternative system of accreditation that would provide pathways for higher education models and colleges to receive federal student aid based on performance and results (White House, 2013).

Current State and Future of Accreditation

Accreditation in higher education is at a crossroads. Since the release of the Spellings Report in 2006, which called for more government oversight of accreditation to ensure public accountability, the government and critics have begun scrutinizing a system that had been nongovernmental and autonomous for several decades (Eaton, 2012). The U.S. Congress is currently in the process of reauthorizing the Higher Education Act (HEA), and it is expected that lawmakers will address accreditation head-on. All the while, CHEA and other accreditation supporters have been attempting to convince Congress, the academy, and the public at large of accreditation's current and future relevance to quality higher education. In anticipation of the HEA's reauthorization, NACIQI was charged with providing the U.S. Secretary of Education with recommendations on recognition, accreditation, and student aid eligibility (NACIQI, 2012). The committee advised that accrediting bodies should continue their gatekeeping role for student aid eligibility, but also recommended some changes to the accreditation process. These changes included more communication and collaboration between accreditors, states, and the federal government to avoid overlapping responsibilities; moving away from regional accreditation and toward sector- or mission-focused accreditation; creating an expedited review process and developing more gradations in accreditation decisions; developing more cost-effective data collection and consistent definitions and metrics; and making accreditation reports publicly available (NACIQI, 2012). However, two members of the committee did not agree with the recommendations and submitted a motion to include the Alternative to the NACIQI Draft Final Report, which suggested eliminating accreditors' gatekeeping role; creating a simple, cost-effective system of quality assurance that would revoke financial aid from campuses that are not financially secure; eliminating the current accreditation process altogether as a means of reducing institutional expenditures; breaking the regional accreditation monopoly; and developing a user-friendly, expedited alternative for the re-accreditation process (NACIQI, 2012). The motion failed to pass, and the alternative view was not included in NACIQI's final report. As a result, Hank Brown, the former U.S. Senator from Colorado and founding member of the American Council of Trustees and Alumni, drafted a report seeking accreditation reform and reiterating the alternatives suggested above, because accreditation had "failed to protect consumers and taxpayers" (Brown, 2013, p. 1). The same year the final NACIQI report was released, the American Council on Education's (ACE) Task Force on Accreditation released its own report that identified challenges and potential solutions for accreditation (ACE, 2012). The task force made six recommendations:
The task force made six 14 recommendations: a) increase transparency and communication, b) increase the focus on student success and institutional quality, c) take immediate and noticeable action against failing institutions, d) adopt a more expedited process for institutions with a history of good performance, e) create common definitions and a more collaborative process between accreditors, and f) increase cost-effectiveness (ACE, 2012). They also suggested that higher education “address perceived deficiencies decisively and effectively, not defensively or reluctantly.” (ACE, 2012, p. 8). President Obama has also recently spoken out regarding accountability and accreditation in higher education. In his 2013 State of the Union address, Obama asked Congress to “change the Higher Education Act, so that affordability and value are included in determining which colleges receive certain types of federal aid” (Obama, 2013a, para 39). The address was followed by The President’s Plan for a Strong Middle Class and a Strong America, which suggested achieving the above change to the HEA “either by incorporating measures of value and affordability into the existing accreditation system; or by establishing a new, alternative system of accreditation that would provide pathways for higher education models and colleges to receive federal student aid based on performance and results.” (Obama, 2013b, p. 5). Furthermore, in August 2013, President Obama called for a performance-based rating system that would connect institutional performance with financial aid distributions (Obama, 2013c). Though accreditation was not specifically mentioned in his plan, it is not clear if the intention is to replace accreditation with this new rating system or utilize both systems simultaneously (Eaton, 2013b). The President’s actions over the last year have CHEA and other supporters of nongovernmental accreditation concerned. Calling it the “most fundamental challenge that 15 accreditation has confronted to date,” Eaton (2012) has expressed concern over the standardized and increasingly regulatory nature of the federal government’s influence on accreditation. Astin (2014) also stated that if the U.S. government creates its own process for quality control, the U.S. higher education system is “in for big trouble” (para. 9), like the government-controlled, Chinese higher education system. Though many agree there will be an inevitable increase in federal oversight after the reauthorization of the HEA, supporters of the accreditation process have offered recommendations for minimizing the effect. Gaston (2014) provides six categories of suggestions, which include stages for implementation: consensus and alignment, credibility, efficiency, agility and creativity, decisiveness and transparency, and a shared vision. The categories maintain the aspects of accreditation that have worked well and that are strived for around the world – nongovernmental, peer review – as well as addressing the areas receiving the most criticism. Eaton (2013a) adds that accreditors and institutions must push for streamlining of the federal review of accreditors as a means to reduce federal oversight; better communicate the accomplishments of accreditation and how quality peer-review benefits students; and anticipate any further actions the federal government may take. While the HEA undergoes the process of reauthorization, the future of accreditation remains uncertain. 
There have been many reports and opinion pieces on how accreditation should change and/or remain the same, much of them with overlapping themes. Only time will tell if the accreditors, states, and the federal government reach an acceptable and functional common ground that ensures the quality of U.S. higher education into the future. 16 Journalism Accreditation The focus of this study centers on program-level accreditation, namely schools of journalism. Since 1945, accreditation in this field has been administered by the Accrediting Council on Education in Journalism and Mass Communications (ACEJMC, 2012). Currently, the ACEJMC judges programs across nine accrediting standards: Standard 1. Mission, Governance and Administration Standard 2. Curriculum and Instruction Standard 3. Diversity and Inclusiveness Standard 4. Full-Time and Part-Time Faculty Standard 5. Scholarship: Research, Creative and Professional Activity Standard 6. Student Services Standard 7. Resources, Facilities and Equipment Standard 8. Professional and Public Service Standard 9. Assessment of Learning Outcomes (ACEJMC, 2012) Within Standard 2, Curriculum and Instruction, the accrediting body describes 12 professional values and competencies – listed below – that all students should acquire after having completed the program. As part of this study, participants were asked to rank the importance of each value and competency with regard to their respective programs. The ACEJMC “requires that, irrespective of their particular specialization, all graduates should be aware of certain core values and competencies and be able to” perform the following: understand and apply the principles and laws of freedom of speech and press for the country in which the institution that invites ACEJMC is located, as well as receive instruction in and understand the range of systems of freedom of expression around the 17 world, including the right to dissent, to monitor and criticize power, and to assemble and petition for redress of grievances; demonstrate an understanding of the history and role of professionals and institutions in shaping communications; demonstrate an understanding of gender, race, ethnicity, sexual orientation and, as appropriate, other forms of diversity in domestic society in relation to mass communications; demonstrate an understanding of the diversity of peoples and cultures and of the significance and impact of mass communications in a global society; understand concepts and apply theories in the use and presentation of images and information; demonstrate an understanding of professional ethical principles and work ethically in pursuit of truth, accuracy, fairness and diversity; think critically, creatively and independently; conduct research and evaluate information by methods appropriate to the communications professions in which they work; write correctly and clearly in forms and styles appropriate for the communications professions, audiences and purposes they serve; critically evaluate their own work and that of others for accuracy and fairness, clarity, appropriate style and grammatical correctness; apply basic numerical and statistical concepts; apply current tools and technologies appropriate for the communications professions in which they work, and to understand the digital world (ACEJMC, 2012). 
In addition, much like institutional-level accreditation, the process of acquiring or reacquiring accreditation for journalism schools through the ACEJMC involves completion of a comprehensive self-study report, site visits, and a rigorous review by peer educators and practitioners. Specifically, the ACEJMC outlines four steps:

1. The unit undertakes a self-study, a rigorous and detailed examination of the program by faculty, administrators, and students.
2. A team consisting of educators and professionals visits the campus to assess curriculum, faculty, administration, students, facilities, and resources.
3. The national Accrediting Committee, composed of educators and professionals, each year reviews and discusses the reports of all the site teams and votes whether to recommend each unit to the Accrediting Council for accreditation.
4. The national Accrediting Council reviews the work of the site teams and the recommendations of the Accrediting Committee and takes final action (ACEJMC, n.d.a).

Furthermore, ACEJMC accreditation is valid for a six-year period. If denied, a school or program can reapply after two years. The costs involved in the accreditation process include $1,000 for the application fee, $2,000 for annual dues, and between $3,500 and $5,500 for the site visit, depending upon how many people are on the visiting team, which can range from three to six members (ACEJMC, n.d.b). Presently, there are more than 450 schools or programs offering undergraduate and/or graduate degrees within the field of journalism and mass communication in the U.S. (Association for Education in Journalism and Mass Communication, 2014). Of those schools, 114 nationwide are ACEJMC-accredited (ACEJMC, 2014), which means that about 75% of all schools are not accredited. In the FAQ section of its website, the ACEJMC specifically addresses the question, "Are accredited schools better than non-accredited schools?" (ACEJMC, n.d.c). Its response is, "Not necessarily. Accreditation is entirely voluntary, and many fine schools do not choose to seek it. However, accredited programs may offer scholarships, internships, competitive prizes and other activities not available in non-accredited programs" (ACEJMC, n.d.c). Given that so many journalism schools decide not to pursue accreditation, this study examines what, if any, educational differences may exist between accredited and non-accredited programs, particularly with regard to curriculum, assessment practices, and student outcomes.

Rationale for the Study

About three-quarters of journalism programs in the United States are not accredited by the ACEJMC. Given that so many programs have chosen to refrain from participating in the external peer review accreditation process, it is important to understand what effects, if any, accreditation may have on student outcomes. Also, as explained more fully in the next chapter, previous research has indicated that the curricula of journalism programs – regardless of their accreditation status – are generally similar (Carroll, 1977; Masse & Popovich, 2007; Blom & Davenport, 2012). This study contributes to the existing body of journalism accreditation literature by attempting to identify any differences between accredited and non-accredited programs.

Statement of the Problem

This study is taking place at a pivotal time in journalism education.
Overall, student enrollments in journalism and mass communications programs have declined for three consecutive years, from fall 2011 through fall 2013 (Becker, Vlad, & Simpson, 2014). "Journalism and mass communication as a field of study is dominated by undergraduate enrollments" (p. 352). Even at the undergraduate level, enrollments dropped 1.0% in fall 2013, which followed a 1.5% dip in fall 2012 and a 0.5% decrease in fall 2011. This is the first time in nearly three decades – going back to 1988 – that undergraduate enrollments have declined three years in a row (Becker, Vlad, & Simpson, 2014). This receding pattern stands in contrast to the national trend in higher education, which is experiencing enrollment gains across fields. According to projections, college and university enrollment is expected to continue rising for at least another seven years (Becker, Vlad, & Simpson, 2013). In addition, major shifts are also occurring in the employment landscape. Jurkowitz (2014) noted that there is substantial growth in digital news ventures. Online content outlets are now engaging in original news reporting and, as a result, have generated thousands of jobs in recent years. The 2012 Annual Survey of Journalism & Mass Communication Graduates, which polls students from 82 schools nationwide, found that 73.2% of 2012 graduates had at least one job offer at the time of graduation, which is statistically comparable to and slightly above the 72.5% rate from the previous year (Becker, Vlad, Simpson, & Kalpen, 2013). Other key findings from the report:

65.6% of bachelor's degree recipients reported holding a full-time job six to eight months after graduation.
Among graduates age 20-24, bachelor's degree recipients from journalism and mass communication programs outperformed their fellow college graduates, registering a lower level of unemployment for four consecutive years.
59.7% of bachelor's degree recipients landed a job in communications or a related field within six to eight months after graduation. That figure was 54.8% in 2011 and just over 48% in 2009.
Racial and ethnic minorities who earned a bachelor's degree faced more hardship finding work than their counterparts (Becker, Vlad, Simpson, & Kalpen, 2013).

But despite the expansion of digital news ventures, Jurkowitz (2014) cautions that these new openings have "compensated for only a modest percentage" of the positions lost in the newspaper and magazine business in the past decade (p. 3). From 2003 to 2012, the American Society of News Editors recorded a loss of more than 16,000 full-time editorial jobs. The magazine sector recorded a total job decline of 38,000. And employment cuts continued in 2013 and early 2014 (Jurkowitz, 2014). Given the confluence of these major educational and professional trends, it would be valuable to acquire more data about student learning, outcomes, and curricular priorities as students head into a rapidly evolving work environment. Furthermore, Seamon (2012), in conducting a literature review on the value of journalism accreditation, pointed to an observed void in the research. One key "element missing from the literature is a study that truly compares the effectiveness of accredited and unaccredited programs by measuring the abilities of their graduates" (p. 18). This study, in part, seeks to address that void.
By focusing on programs' curricular emphases, learning assessments, and student outcomes (such as job placement rates, graduation rates, and retention rates), this study will compare and contrast accredited and non-accredited journalism schools along several important measures.

Purpose of the Study

The purpose of this study is to explore the relationship between curricular priorities/learning assessment practices and quantifiable student outcomes at journalism programs. This study surveyed a national sample of accredited and non-accredited journalism program directors and faculty administrators. Individuals from ACEJMC-accredited programs were asked to explain the reasons why their programs chose to pursue accreditation. All survey recipients were asked to prioritize the 12 ACEJMC professional competencies using a Likert-type scale, then to indicate their use – if any – of a summative assessment (e.g., capstone project, comprehensive examination, thesis, portfolio, exit survey), and also to self-report student outcomes (such as graduation rates, retention rates, and job placement rates). The study examines whether the way programs prioritize these competencies and/or their use of summative learning assessments has any correlation to their student outcomes. This study also attempts to indicate whether any differences exist between accredited and non-accredited programs. Guiding this study are the following research questions:

Which competencies do journalism schools prioritize as being most important?
What are the measurable student outcomes at undergraduate journalism schools (i.e., job placement rates, graduation rates, retention rates)?
How do schools assess student learning at the conclusion of the program? (What summative measure is used? How effective is it, in their experience?)
Are there differences between accredited and non-accredited programs?

Significance of the Study

In response to public concerns and the demands of the business sector, accreditation in U.S. higher education, beginning in the 1980s, has placed heavy emphasis on demonstrating accountability through quantifiable student outcomes and assessment of learning (Ewell, 2001; Beno, 2004; Wergin, 2005, 2012). In the field of journalism and mass communications, there is a need for additional empirical data on how accredited and non-accredited programs compare to one another with regard to student learning and outcomes (Seamon, 2012). This study contributes to the expansion of research in the area of journalism accreditation by focusing on the relationship between curricular priorities/learning assessment practices and measurable student outcomes. This study is meant to be of broad interest to several constituencies. First, the data from this study may be useful for the field's accrediting organization – the ACEJMC – as it monitors the effectiveness of its policies and processes. Second, this study may be of value to prospective students who are interested in pursuing a degree in journalism. Third, this study may serve as useful knowledge for the general public by offering programmatic-level statistics on student outcomes. And fourth, the data may help inform future accreditation-related decisions by journalism program administrators. It is important to note one limitation of the study, which is that the student outcomes were self-reported by each respondent.
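As an illustrative aside (not part of the original dissertation, which does not specify the software used for its analysis), the kind of rank-order correlation described in the Purpose of the Study section above – relating a program's Likert-type emphasis on a competency to a self-reported outcome such as its job placement rate – might be computed along the following lines. All variable names and data values in this sketch are hypothetical.

```python
# Hypothetical sketch of the correlation analysis described above.
# The data values are invented for illustration; they are not the study's data.
from scipy import stats

# Per-program Likert-type emphasis (1-5) on "apply basic numerical and
# statistical concepts", paired with the same programs' job placement rates (%).
numeracy_emphasis = [5, 4, 4, 3, 5, 2, 3, 4, 2, 5]
placement_rate_2012 = [62, 70, 68, 75, 60, 82, 77, 66, 80, 58]

# Spearman's rho is a common choice when one variable is an ordinal ranking.
rho, p_value = stats.spearmanr(numeracy_emphasis, placement_rate_2012)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")

# A negative rho with a small p-value would mirror the abstract's reported
# pattern of negative correlations between numerical emphasis and placement.
```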
CHAPTER TWO: LITERATURE REVIEW

Introduction

Accreditation in U.S. higher education has evolved substantially over the course of its nearly 400-year history. The present trend in accreditation places heavy emphasis on accountability, quantifying student outcomes and measuring learning through assessment. As this shift has taken place, researchers likewise have begun to examine the many aspects of accreditation as it pertains to overall effectiveness, student learning, institutional costs versus benefits, and even program-level evaluation. At the direction of the dissertation chair, Chapter Two was written by group authorship, including Jennifer Barczykowski, Benedict Dimapindan, Deborah Hall Kinley, Richard May, Dinesh Payroda, Win Shih, and Kristopher Tesoro. The first portion of this chapter reviews the body of research on higher education accreditation and organizes the literature around seven main themes. First, it offers a critical assessment of accreditation (written by Kinley). Second, it explores the effects of accreditation, addressing the assessment of student learning outcomes as well as the challenges that institutions face in doing so (written by Dimapindan and Shih). Third, it looks at the institutional costs related to the accreditation process (written by Barczykowski). Fourth, it sheds light on alternatives to accreditation (written by Tesoro). Fifth, it provides an overview of the international accreditation process (written by May). Sixth, it briefly explains specialized accreditation, referring to academic programs or departments within a single institution (written by Payroda). The latter part of this chapter reviews scholarly literature and empirical studies related specifically to journalism program accreditation, which serves as the focus of the study (written by Dimapindan). The journalism-centered section organizes the literature around nine main themes:

Little observed difference in curriculum
Perceived value of journalism accreditation
The internet and convergence
Impact on diversity
Assessment of journalism student learning
Assessment of student writing
Critique of journalism education
Evaluating ACEJMC standards
Current journalism education landscape

Critical Assessment of Accreditation

Accreditation, it seems, has evolved from simpler days of semi-informal peer assessment into a burgeoning industry of detailed analysis, student learning outcomes assessment, quality and performance review, financial analysis, public attention, and all-around institutional scrutiny (Bloland, 2001; Burke & Minassians, 2002; McLendon, Hearn, & Deaton, 2006; Zis, Boeke, & Ewell, 2010). Public scrutiny of institutions to establish their worth and their contribution to student learning, along with a progressively regulated demand for institutional proof of success shown by evidence and assessment, has changed accreditation and created a vacuum of knowledge about how accreditation is truly working in practice (Commission on the Future of Higher Education, 2006; Dougherty, Hare, & Natow, 2009; Leef & Burris, 2002). Measures of inputs, outputs, local control versus governmental review, performance funding versus institutional choice, rising demands, and institutional costs make it difficult to understand the trends and movement of regional accreditation in the United States, but nevertheless have a great influence upon the actual implementation of accreditation standards at real-world institutions (Leef & Burris, 2002). There have been calls for increased public transparency of accreditation findings and actions, including full publication of reports by the commission and by the institutions in question.
For example, some institutions are sanctioned for deficiencies and may be given a detailed list of reporting deadlines to show compliance and ongoing quality review for those areas noted to be lacking. Some correspondence between accreditation commissions and the institutions are public, whereas others are private. Therefore, this semi-public nature to accreditation has been a point of contention in the literature on accountability and assessment (Eaton, 2010; Ikenberry, 2009; Kuh, 2010). There is much debate on whether student learning outcomes are the best measure and appropriate to education, whether they violate the purview of faculty members, or are truly in the best interest of students, best practices and learning (Eaton, 2010). Accreditation has evoked emotional opposition since its inception and much has been expressed in very colorful language. Accreditation has been accused of “[benefiting] the small, weak, and uncertain” (Barzun, 1993, p. 60). It is a “pseudo-evaluative process, set up to give the appearance of self-regulation without having to suffer the inconvenience” (Scriven, 2000, p. 272). It is a “grossly unprofessional evaluation” (p. 271), and “it is scarcely surprising that in large areas of accreditation, the track record on enforcement is a farce” (p. 272). Accreditors “[make] the accreditation process a high- wire act for schools” (American Council of Trustees and Alumni, 2007, p. 12). The system of accreditation is “structured in such a way as to subordinate the welfare of the educational institution as an entity and of the general public to the interest of groups representing limited institutional or professional concerns” (American Medical 27 Association, 1971, F-3). It has been stated that “accreditation standards have already fallen to the lowest common denominator” (American Council of Trustees and Alumni, 2007, p. 16), and accreditation is responsible for the “homogenization of education” and the “perseverance in the status quo” (Finkin, 1973, p. 369). “It is an impossible game with artificial counters” which ignores the student (Learned & Wood, 1938, p. 69). It is “a crazy-quilt of activities, processes and structures that is fragmented, arcane, more historical than logical, and has outlived its usefulness” (Dickeson, 2006, p. 1). It “seeks not only to compare apples with grapes, but both with camels and cods” (Wriston, 1960, p. 329). “As a mechanism for the assurance of quality, the private voluntary accreditation agencies are a failure” (Gruson, Levine, & Lustberg, 1979, p. 6). It is “to be tolerated only as a necessary evil” (Blauch, 1959, p. 23). “While failing to protect the taxpayer and the consumer from being ripped off by irresponsible institutions, it has also quashed educational diversity and reform” (Finn, 1975, p. 26). At the same time (and according to the same author) it constitutes a system of “sturdy walls and deep moats around... academic city-states” (Carey, 2009, para. 28), and it is a “tissue-thin layer of regulation” (Carey, 2010, p. 166). “The word ‘accreditation’ is so misunderstood and so abused that it should be abandoned,” (Kells, 1976). According to Gillen, Bennett, and Vedder (2010), “the inmates are running the asylum” (p. i). The renewal of the Higher Education Act in 1992 came during a time of heightened government concern over increasing defaults in student loans. 
Again concerned about the lack of accountability demonstrated by accreditation, this legislation established a new institution: the State Postsecondary Review Entity, or SPRE (Ewell, 2008). The creation of these agencies was intended to shift the review of institutions for federal aid eligibility purposes from regional accreditors to state governments. This direct threat to accreditation led to the dissolution of the 28 Council on Postsecondary Accreditation (COPA) and the proactive involvement of the higher education community resulting in the creation of the Council for Higher Education Accreditation (CHEA). It was the issue of cost that ultimately led to the abandonment of the SPREs when legislation failed to provide funding for the initiative (Ewell, 2008). The governmental concern did not dissipate however, and in 2006 the U.S. Department of Education released a report known as the Spellings Commission which criticized accreditation for being both ineffective and a barrier to innovation (Eaton, 2012b; Ewell, 2008). Other concerns are evident. It is problematic when accreditation is considered a chore to be accomplished as quickly and painlessly as possible rather than an opportunity for genuine self- reflection for improvement, and institutional self-assessment is ineffectual when there is faculty resistance and a lack of administrative incentive (Bardo, 2009; Commission on Regional Accrediting Commissions, n.d.; Driscoll & De Norriega, 2006; Rhodes, 2012; Smith & Finney, 2008; Wergin, 2012). One of the greatest stresses on accreditation is the tension between assessment for the purpose of improvement and assessment for the purpose of accountability, two concepts that operate in irresolvable conflict with each other (American Association for Higher Education, 1997; Burke & Associates, 2005; Chernay, 1990; Ewell, 1984; Ewell, 2008; Harvey, 2004; National Advisory Committee on Institutional Quality and Integrity, 2012; Provezis, 2010; Uehling, 1987b), although some argue that the two can be effectively coordinated for significant positive results (Brittingham, 2012; El-Khawas, 2001; Jackson, Davis, & Jackson, 2010; Walker, 2010; Westerheijden, Stensaker, & Rosa, 2007; Wolff, 1990). Another concern involves the way that being held to external standards undermines institutional autonomy which is a primary source of strength in the American higher education system (Ewell, 1984). 29 The Spellings Commission report detailed a new interest from the U.S. Department of Education in critiquing the status quo of regional accreditation commissions (Commission on the Future of Higher Education, 2006). Ewell (2008) describes the report as a scathing rebuke of inability of regional accreditors to innovate and a hindrance to quality improvement. Others have called for an outright “end… to the accreditation monopoly” (Neal, 2008). There have been increasing calls within the last several years even since the Spellings report of 2006 to reform or altogether replace accreditation as it is currently known (American Council of Trustees and Alumni, 2007; Gillen, Bennett, & Vedder, 2010; Neal, 2008). The American Council on Education (2012) recently convened a task force comprised of national leaders in accreditation to explore the adequacy of the current practice of institutional accreditation. They recognized the difficulty of reaching a consensus on many issues but nevertheless recommended strengthening and reinforcing the role of self-regulation in improving academic excellence. 
The Spellings Commission report signaled federal interest in setting the stage for new accountability measures for higher education, raising the worst fears of some defenders of a more autonomous, peer-regulated industry (Eaton, 2003). Accreditation's emphasis upon value and the enhancement of individual institutions through regional standards was now being pressed to serve accountability roles for the entire sector of U.S. higher education (Brittingham, 2008).
Effects of Accreditation
This section of the literature review will examine the effects of accreditation, focusing primarily on the assessment of student learning outcomes. Specifically, outcome assessment serves two main purposes: quality improvement and external accountability (Bresciani, 2006; Ewell, 2009). Over the years, institutions of higher education have made considerable strides with regard to learning assessment practices and implementation. Yet despite such progress, key challenges remain.
Trend toward Learning Assessment
The shift within higher education accreditation toward greater accountability and student learning assessment began in the mid-1980s (Ewell, 2001; Beno, 2004; Wergin, 2005, 2012). During that time, higher education was portrayed in the media as "costly, inefficient, and insufficiently responsive to its public" (Bloland, 2001, p. 34). The impetus behind the public's concern stemmed from two sources: first, the perception that students were underperforming academically, and second, the demands of the business sector (Ewell, 2001). Employers and business leaders expressed their need for college graduates who could demonstrate high levels of literacy, problem-solving ability, and collaborative skill in order to support the emerging knowledge economy of the 21st century. In response to these concerns, institutions of higher education started emphasizing student learning outcomes as the main means of evaluating effectiveness (Beno, 2004).
Framework for Learning Assessment
Accreditation is widely considered to be a significant driving force behind advances in both student learning and outcomes assessment. According to Rhodes (2012), in recent years accreditation has contributed to the proliferation of assessment practices, lexicon, and even products such as e-portfolios, which are used to show evidence of student learning. Kuh and Ikenberry (2009) surveyed provosts or chief academic officers at all regionally accredited institutions granting undergraduate degrees, and found that student assessment was driven more by accreditation than by external pressures such as government or employers. Another major finding was that most institutions planned to continue their assessment of student learning outcomes despite budgetary constraints. They also found that gaining faculty support and involvement remained a major challenge, an issue that will be examined in more depth later in this section. Additionally, college and university faculty and student affairs practitioners have stressed that students must now acquire proficiency in a wide scope of learning outcomes to adequately address the unique and complex challenges of today's ever-changing, economically competitive, and increasingly globalizing society. In 2007, the Association of American Colleges and Universities published a report focusing on the aims and outcomes of a 21st-century collegiate education, with data gathered through surveys, focus groups, and discussions with postsecondary faculty.
Emerging from the report were four “essential learning outcomes” which include: (1) knowledge of human cultures and the physical and natural world, through study in science and mathematics, social sciences, humanities, history, languages, and the arts; (2) intellectual and practical skills, including inquiry and analysis, critical and creative thinking, written and oral communication, quantitative skills, information literacy, and teamwork and problem-solving abilities; (3) personal and social responsibility, including civic knowledge and engagement, multicultural competence, ethics, and foundations and skills for lifelong learning; and (4) integrative learning, including synthesis and advanced understanding across general and specialized studies (p. 12). With the adoption of such frameworks or similar tools at institutions, accreditors can be well-positioned to connect teaching and learning and, as a result, better engage faculty to improve student learning outcomes (Rhodes, 2012). Benefits of Accreditation on Learning Accreditation and student performance assessment have been the focus of various empirical studies, with several pointing to benefits of the accreditation process. Ruppert (1994) 32 conducted case studies in 10 states – Colorado, Florida, Illinois, Kentucky, New York, South Carolina, Tennessee, Texas, Virginia, and Wisconsin – to evaluate different accountability programs based on student performance indicators. The report concluded that “quality indicators appear most useful if integrated in a planning process designed to coordinate institutional efforts to attain state priorities” (p. 155). Furthermore, research has also demonstrated how accreditation is helping shape outcomes inside college classrooms. Specifically, Cabrera, Colbeck, and Terenzini (2001) investigated classroom practices and their relationship with the learning gains in professional competencies among undergraduate engineering students. The study involved 1,250 students from seven universities. It found that the expectations of accrediting agencies may be encouraging more widespread use of effective instructional practices by faculty. A study by Volkwein, Lattuca, Harper, and Domingo (2007) measured changes in student outcomes in engineering programs, following the implementation of new accreditation standards by the Accreditation Board for Engineering and Technology (ABET). Based on the data collected from a national sample of engineering programs, the authors noted that the new accreditation standards were indeed a catalyst for change, finding evidence that linked the accreditation changes to improvements in undergraduate education. Students experienced significant gains in the application of knowledge of mathematics, science, and engineering; usage of modern engineering tools; use of experimental skills to analyze and interpret data; designing solutions to engineering problems; teamwork and group work; effective communication; understanding of professional and ethical obligations; understanding of the societal and global context of engineering solutions; and recognition of the need for life-long learning. The authors also found accreditation also prompted faculty to engage in professional 33 development-related activity. Thus, the study showed the effectiveness of accreditation as a mechanism for quality assurance (Volkwein et al., 2006). Organizational Effects of Accreditation Beyond student learning outcomes, accreditation also has considerable effects on an organizational level. 
Procopio (2010) noted that the process of acquiring accreditation influences perceptions of organizational culture. According to the study, administrators are more satisfied than staff – and especially more so than faculty – when rating organizational climate, information flow, involvement in decisions, and utility of meetings. “These findings suggest institutional role is an important variable to consider in any effort to affect organizational culture through accreditation buy-in” (p. 10). Similarly, a study by Wiedman (1992) describes how the two-year process of reaffirming accreditation at a public university drives the change of institutional culture. Meanwhile, Brittingham (2009) explains that accreditation offers organizational-level benefits for colleges and universities. The commonly acknowledged benefits include students’ access to federal financial aid funding, legitimacy in the public, consideration for foundation grants and employer tuition credits, positive reflection among peers, and government accountability. However, Brittingham (2009) points out that there are “not often recognized” benefits as well (p. 18). For example, accreditation is cost-effective, particularly when contrasting the number of personnel to carry out quality assurance procedures here in the U.S. versus internationally, where it’s far more regulated. Second, “participation in accreditation is good professional development” because those who lead a self-study come to learn about their institution with more breadth and depth (p. 19). Third, self-regulation by institutions – if done properly – is a better system than government regulation. And fourth, “regional accreditation 34 gathers a highly diverse set of institutions under a single tent, providing conditions that support student mobility for purposes of transfer and seeking a higher degree” (p. 19). Future Assessment Recommendations Many higher education institutions have developed plans and strategies to measure student learning outcomes, and such assessments are already in use to improve institutional quality (Beno, 2004). For future actions, the Council for Higher Education Accreditation, in its 2012 Final Report, recommends to further enhance commitment to public accountability: “Working with the academic and accreditation communities, explore the adoption and implementation of a small set of voluntary institutional performance indicators based on mission that can be used to signal acceptable academic effectiveness and to inform students and the public of the value and effectiveness of accreditation and higher education. Such indicators would be determined by individual colleges and universities, not government” (p. 7). In addition, Brittingham (2012) outlines three developments that have the capacity to influence accreditation and increase its ability to improve educational effectiveness. First, accreditation is growing more focused on data and evidence, which strengthens its value as a means of quality assurance and quality improvement. Second, “technology and open-access education are changing our understanding of higher education” (p. 65). These innovations – such as massive open online courses – hold enormous potential to open up higher education sources. As a result, this trend will heighten the focus on student learning outcomes. Third, “with an increased focus on accountability – quality assurance – accreditation is challenged to keep, and indeed strengthen, its focus on institutional and programmatic improvement” (p. 68). 
This becomes particularly important amid the current period of rapid change.
Challenges to Student Learning Outcomes
Assessment is critical to the future of higher education. As noted earlier, outcome assessment serves two main purposes: quality improvement and external accountability (Bresciani, 2006; Ewell, 2009). The practice of assessing learning outcomes has been widely adopted by colleges and universities since its introduction in the mid-1980s. Assessment is also a requirement of the accreditation process. However, outcomes assessment in higher education is still a work in progress, and a fair number of challenges remain (Kuh & Ewell, 2010).
Organization Learning Challenges
First, there is the issue of organizational culture and learning. Assessment, as clearly stated by the American Association for Higher Education (1992), "is not an end in itself but a vehicle for educational improvement." The process of assessment is not a means unto its own end. Instead, it provides an opportunity for continuous organizational learning and improvement (Maki, 2010). Too often, institutions assemble and report mountains of data just to comply with federal or state accountability policy or an accreditation agency's requirements. However, after the report is submitted, the evaluation team has left, and the accreditation is confirmed, there is little incentive to act on the findings for further improvement. The root causes of identified deficiencies are rarely followed up, and real solutions are never sought (Ewell, 2005; Wolff, 2005). Another concern pointed out by Ewell (2005) is that accreditation agencies tend to emphasize the process of assessment, rather than its outcomes, once the assessment infrastructure is established. Accreditors are satisfied with formal statements and goals of learning outcomes, but do not query further about how, how appropriately, and to what degree these learning goals are applied in the teaching and learning process. As a result, the process tends to produce single-loop learning, where changes reside at a surface level, instead of double-loop learning, where changes are incorporated into practices, beliefs, and norms (Bensimon, 2005).
Lack of Faculty Buy-in
Lack of faculty buy-in and participation is another hurdle in the adoption of assessment practice (Kuh & Ewell, 2010). In a 2009 survey by the National Institute for Learning Outcomes Assessment, two-thirds of all 2,809 surveyed schools noted that more faculty involvement in learning assessment would be helpful (Kuh & Ikenberry, 2009). According to Ewell (1993, 2002, 2005), there are several reasons that faculty are disinclined to be directly involved in the assessment process. First, faculty view teaching and curriculum development as their domain. Assessment of their teaching performance and of student learning outcomes by external groups can be viewed as an intrusion on their professional authority and academic freedom. Second, the extra effort and time required to engage in outcome assessment, along with the unconvincing added value perceived by faculty, can be another deterrent. Furthermore, the compliance-oriented assessment requirements are imposed by external bodies, and most faculty members participate in the process only indirectly. They tend to show a lukewarm attitude and leave the assessment work to administrative staff. In addition, faculty might have a different view of the definitions and measures of "quality" than that of the institution or accreditors (Perrault, Gregory, & Carey, 2002, p. 273).
Finally, the assessment process incurs a tremendous amount of work and resources. To cut costs, the majority of the work is done by the administration at the institution. Faculty consequently perceive assessment as an exercise performed by administration for external audiences, instead of embracing the process.
Lack of Institutional Investment
A shortage of resources and institutional support is another challenge in the implementation of assessment practice. As commented by Beno (2004), "[d]eciding on the most effective strategies for teaching and for assessing learning will require experimentation, careful research, analyses, and time" (p. 67). With continuously dwindling federal and state funding over the last two decades, higher education, particularly at public institutions, has been stripped of the resources to support such an endeavor. A case in point is the recession of the early 1990s. Budget cuts forced many states to abandon the state assessment mandates that originated in the mid-1980s and to switch to process-based performance indicators as a way to gain efficiency in large public institutions (Ewell, 2005). The 2009 National Institute for Learning Outcomes Assessment survey shows that the majority of surveyed institutions were undercapitalized in the resources, tools, and expertise needed for assessment work. Twenty percent of respondents indicated they had no assessment staff, and 65% had two or fewer (Kuh & Ewell, 2010; Kuh & Ikenberry, 2009). The resource issue is further described by Beno (2004): "A challenge for community colleges is to develop the capacity to discuss what the results of learning assessment mean, to identify ways of improving student learning, and to make institutional commitments to that improvement by planning, allocating needed resources, and implementing strategies for improvement" (p. 67).
Difficulty with Integration into Local Practice
Integrating the value of assessment and institutionalizing its practice into daily operations can be another tall order at many institutions. In addition to the redirection of resources, leadership's involvement and commitment, faculty participation, and adequate assessment personnel all contribute to the success of cultivating a sustainable assessment culture and framework on campus (Banta, 1993; Kuh & Ewell, 2010; Lind & McDonald, 2003; Maki, 2010). Furthermore, assessment activities imposed by external authorities tend to be implemented as an addition to, rather than an integral part of, institutional practice (Ewell, 2002). Assessment, like accreditation, is viewed as a special process with its own funding and committee, instead of being part of regular business operations. Finally, the work of assessment, program reviews, self-study, and external accreditation at the institutional and academic program levels tends to be handled by various offices on campus, and coordinating that work can be another challenge (Perrault, Gregory, & Carey, 2002). Colleges also tend to adopt an institutionally isomorphic approach, modeling themselves after peers who appear more legitimate or successful in dealing with similar situations and adopting practices that are widely used in order to gain acceptance (DiMaggio & Powell, 1983). As reported by Ewell (1993), institutions are prone to "second-guess" and adopt the type of assessment practice acceptable to external agencies as a safe approach, instead of adopting or customizing one appropriate to local needs and situations.
Institutional isomorphism offers a safer and more predictable route for institutions to deal with uncertainty and competition, to conform to government mandates or accreditation requirements, or to abide by professional practices (Bloland, 2001). However, the strategy of following the crowd might hinder in-depth inquiry into a unique local situation, as well as the opportunity for innovation and creativity. Furthermore, decision makers may be unintentionally trapped in a culture of doing what everyone else is doing without carefully examining the unique local situation, or the logic, appropriateness, and limitations behind the common practice (Miles, 2012). A lack of assessment standards and clear terminology presents another challenge in assessment and accreditation practice (Ewell, 2001). With no consensus on vocabulary, methods, and instruments, assessment practice and outcomes can have limited value. As reported by Ewell (2005), the absence of outcome metrics makes it difficult for state authorities to aggregate performance across multiple institutions and to communicate the outcomes to the public. The exercise of benchmarking is also impossible. Bresciani (2006) stressed the importance of developing a conceptual definition, framework, and common language at the institutional level.
Outcome Equity
Another area of concern is outcome assessment that focuses on students' academic performance while overlooking equity and disparity within a diverse student population, as well as student engagement and campus climate issues. In discussing local financing of community colleges, Dowd and Grant (2006) stressed the importance of including "outcome equity" in addition to performance-based budget allocation. Outcome equity pays special attention to equal outcomes of educational attainment among populations of different social, economic, and racial groups (Dowd, 2003).
Tension between Improvement and Accountability
The tension between the twin goals of outcomes assessment, quality improvement and external accountability, can be another factor affecting outcome assessment practice. According to Ewell (2008, 2009), assessment practice has evolved over the years into two contrasting paradigms. The first paradigm, assessment for improvement, emphasizes constantly evaluating and enhancing processes and outcomes, while the other paradigm, assessment for accountability, demands conformity to a set of established standards mandated by the state or accrediting agencies. The strategies, instrumentation, methods of gathering evidence, reference points, and ways in which results are used in these two paradigms tend to be at opposite ends of the spectrum (Ewell, 2008, 2009). For example, in the improvement paradigm, assessment is mainly used internally to address deficiencies and enhance teaching and learning. It requires periodic evaluation and formative assessment to track progress over time. In the accountability paradigm, on the other hand, assessment is designed to demonstrate institutional effectiveness and performance to external constituencies and to comply with pre-defined standards or expectations. The process tends to be performed on a set schedule as a summative assessment. The nature of these two constraints can create tension and conflict within an institution. Consequently, an institution's assessment program is unlikely to achieve both objectives.
Ewell (2009) further pointed out that "when institutions are presented with an intervention that is claimed to embody both accountability and improvement, accountability wins" (p. 8).
Transparency Challenges
Finally, for outcome assessment to be meaningful and accountable, the process and information need to be shared and open to the public (Ewell, 2005). Accreditation has long been criticized as mysterious or secretive, with little information to share with stakeholders (Ewell, 2010). In a 2006 survey, the Council for Higher Education Accreditation reported that only 18% of the 66 accreditors surveyed provide information about the results of individual reviews publicly; less than 17% of accreditors provide a summary of student academic achievement or program performance; and just over 33% of accreditors offer a descriptive summary of the characteristics of accredited institutions or programs (Council for Higher Education Accreditation, 2006). In the 2014 Inside Higher Education survey, only 9% of the 846 college presidents indicated that it is very easy to find student outcomes data on their institution's website, and only half of the respondents agreed that it is appropriate for the federal government to collect and publish data on the outcomes of college graduates (Jaschik & Lederman, 2014). With the public disclosure requirements of the No Child Left Behind Act, there is an impetus for higher education and accreditation agencies to be more open to the public and policy makers. It is expected that further openness will contribute to more effective and accountable business practices as well as the improvement of educational quality.
Future of Outcomes Assessment
It has been three decades since the birth of the assessment movement in U.S. higher education, and a reasonable amount of progress has been made (Ewell, 2005). Systematic assessment of student learning outcomes is now a common practice at most institutions, as reported by two nationwide surveys. The 2009 National Institute for Learning Outcomes Assessment survey shows that more than 75% of surveyed institutions have adopted common learning outcomes for all undergraduate students and that most institutions conduct assessments at both the instructional and program levels (Kuh & Ikenberry, 2009). The 2008 survey performed by the Association of American Colleges and Universities also reported that 78% of the 433 surveyed institutions have a common set of learning outcomes for all their undergraduate students and that 68% of the institutions also assess learning outcomes at the departmental level (Hart Research Associates, 2009). As public concern about the performance and quality of American colleges and universities continues to grow, it is more imperative than ever to embed assessment in the everyday work of teaching and to use assessment outcomes to further improve practice, to inform decision makers, to communicate effectively with the public, and to be accountable for preparing the nation's learners for the knowledge economy. With effort, transparency, continuous improvement, and responsiveness to society's demands, higher education institutions will be able to regain the trust of the public.
Institutional Costs of Accreditation
Gaston (2014) discusses the various costs associated with accreditation. Institutions are required to pay annual fees to the accrediting body. If an institution is applying for initial accreditation, it is required to pay an application fee as well as additional fees as it progresses through the process.
The institution seeking accreditation also pays for any on-site reviews. In addition to these "external" costs, there are internal costs that must be calculated as well. These internal costs can include faculty and administrative time invested in the assessment and self-study, volunteer service in accreditation activities, preparation of annual or periodic filings, and attendance at mandatory accreditation meetings (p. 9). Costs of initial accreditation can vary greatly from region to region; however, regardless of the region, the costs are substantial. It can cost an institution $23,500.00 to pursue initial accreditation through the Higher Learning Commission (HLC), regardless of whether the pursuit is successful. This figure does not include the costs associated with the three required on-site visits, nor does it include the dues that must be paid during the application and candidacy period. For example, the applicant and candidacy fees for the Southern Association of Colleges and Schools (SACS) are $12,500 (HLC Initial, 2012). Shibley and Volkwein (2002) claim there has been limited research on the costs of accreditation within the literature. Calculating the cost can be very complex, as institutions must be able to evaluate both the monetary and non-monetary costs of going through the accreditation process. One of the most complex and difficult items to evaluate is time. Reidlinger and Prager (1993) state there are two reasons why thorough cost-based analyses of accreditation have not been pursued. First, there is a belief that voluntary accreditation is preferable to governmental control and that accreditation is worth the cost, despite the price. Second, it is difficult to relate the perceived benefits of accreditation to an actual dollar amount (p. 39). The Council for Higher Education Accreditation (CHEA) began publishing an almanac in 1997 and continues to release a revised version every two years. This almanac examines accreditation practices across the United States at the macro level, presenting data such as the number of volunteers, number of employees, and unit operating budgets of the regional accrediting organizations. Little, if any, information is provided on costs incurred by individual institutions as they go through the accreditation process. In 1998, the North Central Association of Colleges and Schools (NCACS) completed a self-study in which it examined the perception of accreditation costs among the institutions within that region (Lee & Crow, 1998). The study revealed some significant findings, which included the dissimilarity of responses by institutional type. Research and doctoral institutions were less apt to claim that benefits outweighed costs, while also responding less positively than other types of institutions regarding the effectiveness and benefits of accreditation. The study suggested that well-established research and doctoral institutions might already have internal processes in place that serve the traditional function of the accreditation process, in which case a traditional audit system could serve as an appropriate alternative to the formal process conducted by the regional accreditation organization. In looking at the results of all institutional types, the self-study found that 53% of respondents considered the benefits of accreditation to outweigh the costs. Approximately 33% of respondents considered the benefits of accreditation to be equal to the costs.
The remaining 13% of the respondents believed that the costs of accreditation outweighed the benefits. There have been similar case studies done by Warner (1977) on the Western Association of Schools and Colleges (WASC) and by Pigge (1979) on the Committee on Postsecondary Accreditation. In both studies, cost was labeled as a significant concern of the accreditation process. Budget allocations have also been impacted by accreditation results. Warner (1977) found that approximately one third of responding institutions had changed budget allocations based on accreditation results; however, there was no further exploration done. The majority of respondents in the Warner (1977) and Pigge (1979) studies believed that despite the costs of accreditation, the benefits outweighed the costs. There are three stages of preparation that institutions go through when preparing for accreditation. Wood (2006) developed the model, which includes the release time required for the various coordinators of the accreditation review, the monetary costs of training, staff support, materials, and the site visit of the accreditation team. Each of these stages triggers costs for the institution. Willis (1994) also examined these costs but differentiated between direct and indirect costs. Direct costs include items such as accreditation fees, operating expenses (specific to the accreditation process), direct payments to individuals who participate in the process, self-study costs, travel costs, and site visit costs. Indirect costs measure things such as time. Willis (1994) identified indirect costs as "probably many times greater than the direct costs due mainly to the personnel time required at the institution" (p. 40). He suggests that caution be exercised when evaluating these costs, and that they should not be underestimated. He states that, many times, the normal tasks that cannot be completed by individuals with accreditation responsibilities are distributed to other individuals who are not identified as participants in the accreditation process. Kennedy, Moore, and Thibadoux (1985) attempted to establish a methodology for how costs are determined, with particular interest given to monetizing time spent on the accreditation process. They looked at a time frame of approximately 15 months, from the time when the initial planning of the self-study began through the presentation of the study. They used time logs to gather data on time spent by faculty and administrative staff. There was a high return rate for the time logs (79% were fully completed, with a 93% return rate overall). After reviewing the time logs, it was discovered that the time spent by faculty and administrative staff accounted for 94% of the total cost of the accreditation review, over two-thirds of which was attributed to administrative staff. These figures demonstrate that the time required by both faculty and administrative staff is the most significant cost involved in the accreditation process. It was concluded that this cost was not excessive, as there is a seven-year span between each self-study review process. Kells and Kirkwood (1979) conducted a study in the Middle States region, which looked at the direct costs of participating in a self-study. Almost 50% of the respondents reportedly spent under $5,000.00 on the self-study, an amount that was not deemed excessive. It was also determined that a maximum of between 100 and 125 people were directly involved in the self-study.
The majority of participants were faculty (41-50%), followed by staff (21-30%), and very few students. The size of the institution was believed to have had the greatest impact on the composition of the self-study committee (number of faculty vs. staff) as well as on the cost of the self-study itself. Doerr (1983) used a case study to explore the direct costs of accreditation and to examine the benefits received from accreditation when university executives wish to pursue additional programmatic accreditations. Both the financial costs and the opportunity costs of institutional accreditation granted by SACS and of four programmatic accreditations cultivated by the University of West Florida in 1981-1982 were examined. He assigned an average wage per hour to faculty and administrative staff, while also adding in the cost of material supplies. It was estimated that the total direct costs of accreditation for these reviews totaled $50,030.71. It was also projected that there would be additional costs in the following years, particularly membership costs for the accrediting organizations and those costs associated with preparing for additional programmatic reviews. He concluded by looking at the opportunity costs while examining alternative ways this money might have been spent. Shibley and Volkwein (2002) evaluated the benefits of a joint accreditation by conducting a case study of a public institution in the Middle States region. This institution had multiple accrediting relationships, including both institutional and programmatic reviews. They confirmed what Willis (1994) had suggested, which was that "the true sense of burden arose from the time contributed to completing the self-study process rather than from finding the financial resources to support self-study needs" (Shibley & Volkwein, 2002, p. 8). They found that the separate accreditation processes had more benefits for individuals than the joint effort; however, the joint process was less costly and the sense of burden for participants was reduced. There have been several studies released on the expense of accreditation and its value to institutions. Britt and Aaron (2008) distributed surveys to radiology programs without specialized accreditation. These institutions reported that the expense of accreditation was the primary factor in not pursuing accreditation. A secondary consideration was the time required to go through the accreditation process. Many respondents indicated that a decrease in the expense would allow them to consider pursuing accreditation in the future. Bitter, Stryker, and Jens (1999) and Kren, Tatum, and Phillips (1993) looked at specialized accreditation for accounting. Both studies found that non-accredited programs believed that accreditation costs outweighed the benefits. Many programs claim they follow accreditation standards; however, there is no empirical evidence to prove that this is true. Since the programs do not go through the accreditation process, there is no way to verify whether they are actually meeting the established accreditation standards. Cost is frequently cited as a reason why institutions have not pursued accreditation. In addition to the direct costs of accreditation, this includes resources such as time and energy spent. The Florida State Postsecondary Planning Commission (1995) defined costs in a variety of ways and only sometimes included indirect costs as part of its definition.
Benefits could potentially impact up to three groups: students, departments, and the institution. The Commission recommended that institutions who were seeking out accreditation balance the direct and indirect costs of accreditation with the potential benefits to each group before making the decision to pursue accreditation. As a result of the concerns of the higher education community and the research on the costs associated with accreditation, both the National Advisory Committee on Institutional Quality and Integrity (NACIQI) and the American Council on Education (ACE) published reports in 2012. These reports call for a cost-benefit analysis of the accreditation process in an attempt to reduce excessive and unnecessary costs. NACIQI recommends that data gathering be responsive to standardized expectations and that it would only seek out information that is useful and that cannot be found elsewhere (NACIQI Final, 2012, p. 7). The ACE task force calls for an evaluation of required protocols such as the self-study, the extent and frequency of on-site visits, expanded opportunities for the use of technology, greater reliance on existing data, and the evaluation of potential duplication of requirements imposed by different agencies and the federal government (ACE, 2012, pp. 26-27). The cost of accreditation can be defined in many ways, including both time and money. Schermerhorn, Reisch, and Griffith (1980) indicated that the time commitment required by 48 institutions to prepare for accreditation was one of the most significant barriers of the entire process. Due to the limited amount of research in the area on the cost-benefit analysis of the accreditation process, Woolston (2012) conducted a study entitled “The costs of institution accreditation: A study of direct and indirect costs.” This study consisted of distributing a survey to all regionally accredited institutions of higher education in the United States, who grant baccalaureate degrees. The survey was sent via email to the primary regional Accreditation Liaison Officer (ALO) for each institution. It targeted four primary areas, including: demographic information, direct costs, indirect costs, and an open-ended section allowing for possible explanation for the costs. Results showed that one of the most complicated things is to determine the monetary value of the time associated with going through the accreditation process. Through analysis of the open-ended response questions, Accreditation Liaison Officers indicated that two of the biggest benefits of going through accreditation were self-evaluation and university improvement. There were other themes that emerged, such as: campus unity, outside review, ability to offer federal financial aid, reputation, sharing best practices, celebration, and fear associated with not being accredited. While it was agreed that accreditation costs are significant and excessive, many Accreditation Liaison Officers believe that the costs are justified and that the benefits of accreditation outweigh both the direct and indirect costs. Alternatives to Accreditation As the role of accreditation has been thrust into the public spotlight within the United States, it is important to review the alternatives to the current system that have been proposed in previous years. 
49 Generally speaking, the alternatives to accreditation that have been proposed by scholars or administrators in the past revolved around the common theme of increased government involvement (either at the state or federal government level). To illustrate this notion, Orlans (1975) described the development (at the national level) of a Committee for Identifying Useful Postsecondary Schools that would ultimately allow for accrediting agencies to focus on a wider range of schools. This committee was part of Orlan’s greater overall idea that there be an increase in the amount of competition amongst accrediting agencies in order to further the advancement of education (Orlans, 1975). Trivett (1976) demonstrated that there was a triangular relationship between accrediting agencies, state governments, and federal governments. Trivett said: In its ideal form, the states establish minimum legal and fiscal standards, compliance with which signifies that the institutions can enable a student to accomplish his objectives because the institution has the means to accomplish what it claims it will do. Federal regulations are primarily administrative in nature. Accrediting agencies provide depth to the evaluation process in a manner not present in either the state or federal government’s evaluation of an institution by certifying academic standards. (Trivett, 1976, pg. 7) Trivett’s statement speaks to the ever-present relationship between accreditation agencies, state governments, and federal governments. Fred Harcleroad (1976; 1980) identified six different methods for accreditation in his writings; three vouched for an expansion of responsibility for state agencies, one called for an expansion of federal government responsibility, and the remaining two asked for a modification of the present system (by increasing staff members or auditors) or keeping the present system in place. Harcleroad (1976; 1980) wrote that: 50 A combination of the second (present system with modifications) and third options (increased state agency responsibility without regional and national associations) seems the most likely plan for the near future. This possibility will become even more viable if both regional and national associations continue refinements in their process and increase the objectivity of an admittedly subjective activity. (Harcleroad, 1980, pg. 46) These methods proposed by Harcleroad clearly demonstrate a preference for increased state government involvement within the accreditation process. Harcleroad (1976) also spoke about the use of educational auditing and accountability as an internal review to increase both external accountability and internal quality. This concept is modeled after the auditing system developed by the Securities and Exchange Commission (SEC) that was used to accredited financial organizations (Harcleroad, 1976). Another example of internal and external audits was demonstrated by the proposals in the essay produced by three scholars (Graham, Lyman, and Trow, 1995). The essay (also known as the Mellon report) was the result of a grant funding the study of accountability of higher education institutions to their three major constituencies (Students, Government, and the Public) (Bloland, 2001). This essay emphasized the notion that accountability had both an internal and external aspect and the authors suggested that institutions conduct internal reviews (primarily within their teaching and research units) every 5-10 years (Bloland, 2001). 
Once this internal review was completed, an external review would then be conducted in the form of an audit on the procedures of the internal review (Bloland, 2001). Specifically, this external audit would be conducted by regional accrediting agencies while institutional accrediting agencies were encouraged to pay close attention to the internal processes in order to determine if the institution has the ability to learn and address its weaknesses (Bloland, 2001). These concepts surrounding 51 auditing were later explored by other authors and, most recently, have been linked to discussions regarding the future of higher education accreditation (Bernhard, 2011; Burke & Associates, 2005; Dill, Massy, Williams, & Cook, 1996; Ewell, 2012; Ikenberry, 2009; Western Association of Schools and Colleges, 1998; Wolff, 2005). In examining alternatives to accreditation, it is important to note the alternative programs that have been established by regional accreditors as enhancements to current accreditation processes. For example, the Higher Learning Commission (an independent commission within the North Central Association of Colleges and Schools) established (in 2000) an alternative assessment for institutions that have already been accredited: the Academic Quality Improvement Program (AQIP). According to Spangehl (2012), this process instilled the notion of continuous quality improvement through the processes that would ultimately provide evidence for accreditation. An example of AQIP offering continuous improvement for higher educational institutions would be its encouragement of institutions to implement the use of various categories (i.e. the Helping Students Learn, category allows for institutions to continuously monitor their ongoing program and curricular design) to stimulate organizational improvement (Spangehl, 2012). Another example of an alternative program is the use of the Quality Enhancement Plan (QEP) by the Southern Association of Colleges and Schools (Jackson, Davis, & Jackson, 2010; Southern Association of Colleges and Schools, 2007). QEP was adopted in 2001 and defined as an additional accreditation requirement that would help guide institutions to produce measurable improvement in the areas of student learning (Jackson, Davis, & Jackson, 2010). A few common themes of student learning that have been utilized by institutions (through the use of QEP) 52 include student engagement, critical thinking, and promoting international tolerance (Jackson, Davis, & Jackson, 2010). This section has offered a glimpse into the alternatives to accreditation that have been proposed and implemented in the past. It is important to note that while accreditation has been criticized by many, the general thoughts of many is that accreditation is a critical piece of academia and vital to accomplishing the goal of institutional quality assurance and accountability (Bloland, 2001). International Accreditation in Higher Education The United States has developed a unique accreditation process (Brittingham, 2009). The most obvious difference between the U.S. and other countries is in the way education is governed. In the U.S. education is governed at the state level whereas other nations are often governed by a ministry of education (Ewell, 2008; Middaugh, 2012). Dill (2007) outlined three traditional models of accreditation. These include “the European model of central control of quality assurance by state educational ministries, the U.S. 
model of decentralized quality assurance combining limited state control with market competition, and the British model in which the state essentially ceded responsibility for quality assurance to self-accrediting universities” (p. 3). These models have been used in some form by other nations in South America, Africa, and Asia. Historically, direct government regulation (the European model) of higher education has been the most prevalent form of institutional oversight outside of the United States (Dickeson, 2009). Unfortunately the low level of autonomy historically granted to post-secondary institutions has limited their ability to effectively compete against institutions in the United States and other countries (Dewatripont et. al, 2010; Jacobs & Van der Ploeg, 2006; Sursock & 53 Smidt, 2010). Overall, European institutions "suffer from poor governance, are insufficiently autonomous and offer often insufficient incentives to devote time to research," (Dewatripont et al., 2010, p. 3). Many European countries have a “very centralized” system of higher education, such as France, Germany, Italy, and Spain (Van der Ploeg & Veugelers, 2008). In addition, the level of governmental intervention inhibits European universities from innovating and reacting quickly to changing demands (Van der Ploeg and Veugelers, 2008). Institutions in Europe with low levels of autonomy have historically had little to no control in areas including hiring faculty, managing budgets, and setting wages (Aghion, et. al., 2008). Thus, it is difficult for universities with low autonomy to attract and retain the faculty needed to compete for top spots in global ranking indices (Jacobs & Van der Ploeg, 2006; Aghion, et. al, 2008; Dewatripont, et. al., 2010). However, some European nations have conducted serious reform to their higher education systems, including Denmark, Netherlands, Sweden, and the United Kingdom. Not surprisingly, universities with high autonomy in these countries have higher levels of research performance compared to European countries with low levels of institutional autonomy (Dewatripont et. al., 2010). This sentiment is echoed by Aghion et al (2008), who argues research performance (which impacts academic prestige and rankings) is negatively impacted by less institutional autonomy. While research on accreditation’s direct impact on student learning outcomes is sparse, Jacobs and Van der Ploeg (2006) argue the European system of greater regulation has some benefits. They concluded that institutions in continental Europe had better access for students with lower socioeconomic status, better outcomes in terms of student completion, and even lower spending per student. 54 Internationalization of Accreditation Due to globalization, there is an increased focus on how to assure quality of standards in higher education across nations. Assessment frameworks are being initiated and modified to meet these increased demands for accountability (World Bank, 2002). Recent studies have tried to compare these assessment trends across multiple countries. Bernhard (2011) conducted a comparative analysis of such reforms in six countries (Austria, Germany, Finland, the United Kingdom, the United States, and Canada). Stensaker and Harvey (2011) identified a growing trend that nations are relying on forms of accreditation distinctly different from the U.S. accreditation processes. Specifically, they identified the academic audit as an increasingly used alternative in countries such as Australia and Hong Kong. 
Yung-chi Hou (2014) examined challenges the Asia-Pacific region faces in implementing quality standards that cross national boundaries. Another outcome of globalization is the internationalization of the quality assurance process itself. Rather than each nation setting its own assessment frameworks, international accords are attempting to bridge academic quality issues between nations. Student mobility across national borders has driven this need for “international mutual accreditation networks” (Van Damme, 2000, p. 17). There are many loosely or unconnected initiatives that have formed over the last decade. The United Nations Educational, Scientific and Cultural Organization (UNESCO) has begun the discussion on guidelines for international best practices in higher education (UNESCO, 2005). The International Network for Quality Assurance Agencies in Higher Education (INQAAHE) is a network of quality assurance agencies aimed to help ensure cross- border quality assurance measures. Public-policy led initiatives in Europe include the 55 establishment of the “European Standards and Guidelines for quality assurance in higher education (ESG) in the framework of the Bologna Process” (Cremonini et al, 2012, p. 17). The CHEA International Quality Group (CIQG) provides a forum to discuss quality assurance issues in an international context. In conclusion, the U.S. system of accreditation has served as a model for higher education assessment world-wide. Nonetheless, there is considerable difference in how other nations govern quality assurance. While internationalization of the higher education accreditation process will continue to increase, the precise frameworks used to achieve cross-national quality standards remains undetermined. For the immediate future, nations will continue to use their own frameworks for accreditation. International accreditation processes may eventually supersede these existing frameworks, but not anytime soon. Specialized Accreditation Specialized accreditation focuses on the specialized training and knowledge needed for professional degrees and careers. Specialized accrediting bodies include the Accreditation Council for Pharmacy Education (ACPE), Accrediting Council on Education in Journalism and Mass Communications (ACEJMC), Council on Accreditation of Nurse Anesthesia Educational Programs (CoA-NA), Council on Social Work Education Office of Social Work Accreditation (CSWE), and Teacher Education Accreditation Council, Inc. (TEAC). These bodies represent only a small sampling of those noted by the Council for Higher Education Accreditation, which recognizes 60 institutional and programmatic accrediting organizations associated with 3,000 degree-granting colleges and universities (CHEA, 2014). Programmatic accreditation is granted and monitored by national organizations, unlike regional accrediting organizations; for example, the Western Association of Schools and Colleges (WASC), Southern Association of Colleges 56 and Schools (SACS), and North Central Association of Colleges and Schools, which are associated regionally by geographic region (Adelman & Silver, 1990; Eaton, 2009; Hagerty & Stark, 1989). The continuous self-study has become the cornerstone in establishing and maintaining programmatic accreditation. 
The self-study helps ensure that the institution upholds the best interests of the profession, while providing the necessary learning environment, leadership, expertise (such as qualified instructors), and facilities to meet the professional learning goals laid out by the specialized accrediting body (Bloland, 2001; Gaston & Ochoa, 2013). As noted by the Global University Network for Innovation (2007), programmatic accreditation should align with and contribute to an institution’s overall accreditation goals, thereby working hand-in-hand toward institutional success. Originally, professional guidance through program accreditation was meant to safeguard the profession from incompetence; thus, standards were from the start maintained most zealously to benefit practitioners and protect the public (Gaston & Ochoa, 2013). Coordinating institutional accreditation efforts where possible can be cost-effective, since overlap exists between the regional and programmatic accreditation processes. However, the review process and resource allocations can become complicated (Shibley & Volkwein, 2002). Recognition by CHEA affirms that a programmatic accrediting organization’s standards and processes are consistent with the academic quality, improvement, and accountability expectations that CHEA has established. Institutions acknowledge the pressure of meeting not only institutional accreditation requirements but also specialized accreditation of individual programs, in the interest of serving both students and the professions (Bloland, 2001). Specialized program accreditation carries important quality assurance ramifications for each institution, as individual programs are often compared to the entire institution. The credibility of program accreditation review is strengthened by its focus: it examines particular areas of study and is carried out by colleagues from peer institutions who are specialists in the discipline (Ratcliff, 1996). Research on program accreditation suffers from the same lack of volume and rigor as research on institutional accreditation. Strong faculty involvement and instruction have been linked to individual program accreditation (Cabrera, Colbeck, & Terenzini, 2001; Daoust, Wehmeyer, & Eubank, 2006), while other studies focused on measuring student outcome competencies find that program accreditation does not provide enough support for student success (Hagerty & Stark, 1989). Program accreditation outlines the parameters of professional education (Ewell, Wellman, & Paulson, 1997; Hagerty & Stark, 1989) and upholds national professional standards (Bardo, 2009; see, for example, American Accounting Association, 1977; Floden, 1980; Raessler, 1970), all of which calls for further empirical research on specialized accreditation, given its importance to students’ educational and professional achievement. Research on Journalism Accreditation In the United States, there are currently more than 450 schools or programs – offering undergraduate and/or graduate degrees – within the field of journalism and mass communications (Association for Education in Journalism and Mass Communication, 2014). Of that total, only 114 programs nationwide are accredited by the field’s accrediting organization, the Accrediting Council on Education in Journalism and Mass Communication (ACEJMC, 2014). Like any other accrediting body, the ACEJMC aims to set standards that best prepare students to become practitioners.
Given that so many programs choose to refrain from participating in this voluntary process of external peer review, many important questions are raised. Why do some programs decide to pursue accreditation, while most others do not? What are the similarities between accredited and non-accredited programs? How do their curricula differ? Much empirical research has been dedicated to answering such questions. Little Observed Difference in Curriculum Numerous articles have found that journalism news writing curricula – whether at an accredited or non-accredited program – are often similar (Carroll, 1977; Masse & Popovich, 2007; Blom & Davenport, 2012). Nearly 40 years ago, Carroll (1977) studied the curricular language from 60 journalism programs and found no significant differences between accredited and non-accredited schools, “at least not in the printed materials describing the curricula of the news editorial sequences or emphases” (p. 42). A separate study by Stone (1989), who surveyed 173 journalism schools nationwide, found a general consistency in the content of the first news writing course, regardless of accreditation status. Typically, the first part of the course was dedicated to preparing students to write — providing an overview of news values, basic journalistic writing structure, standard style guides, theory of news and the press, and English grammar. The middle portion of the course was devoted to content such as interviewing, news sources, attributions and quotes, and types of news stories. The latter part of the course covered specialized story types and often topics such as legal and ethical concerns. The only significant difference, according to Stone (1989), was that accredited programs spent more class time on ethics (1.6 percent versus 0.1 percent). It should be noted, however, that these two studies were published before the mainstream use of the Internet, which has profoundly shaped the journalism field and journalism education. More recently, researchers have arrived at similar conclusions about the similarity of journalism curricula. Masse and Popovich (2007) analyzed responses about media writing from 376 faculty representing 240 journalism and mass communication institutions. Their analysis revealed strong overlap in writing course structure, approaches to teaching writing, and similarities in faculty qualifications and attitudes about writing among accredited and non-accredited programs. For example, in introductory writing classes, faculty from accredited programs most often chose accident/crime stories (51.7%) and speech coverage stories (51.7%) for hard-news assignments. Their favorite feature writing assignment was a personality profile (58.9%). Faculty from non-accredited programs also chose speech coverage stories (63.5%) and accident/crime stories (59.3%) as their top hard-news assignments, and their favorite feature writing assignment was likewise a personality profile (62.5%). In addition, Seamon (2012), in a literature review covering 17 studies, also noted the lack of curricular distinction among accredited and non-accredited journalism programs. In considering future research, Seamon (2012) suggests contrasting the two types of programs in terms of student outcomes: “Another important element missing from the literature is a study that truly compares the effectiveness of accredited and unaccredited programs by measuring the abilities of their graduates” (p. 18). This study, in part, will attempt to address that void in the research.
By analyzing job placement rates, graduation rates, retention rates, and types of learning assessments used among accredited and non-accredited programs, this study aims to expand the base of research on student outcomes in journalism schools. Perceived Value of Journalism Accreditation According to journalism school administrators, the most important reason for acquiring or maintaining accreditation is reputation enhancement (Blom, Davenport, & Bowe, 2012). In a study of 128 journalism program directors across the United States, Blom, Davenport, and Bowe (2012) found that strengthening reputation was the primary reason behind a school’s choice to 60 pursue accreditation. Many administrators felt that being accredited helps in the recruitment of prospective students and increases credibility, both inside and outside of their respective institutions. Only a small contingent of respondents (18 percent) considered the self- and peer- review process to be most valuable in creating a better program. Interestingly, many program directors – including a few from accredited programs – admitted that they see “little value in being accredited” (p. 400). Blom, Davenport, and Bowe (2012) also discovered that the most common response for not pursuing accreditation – or reaccreditation – was the cap on the number of journalism credits that students can take. Despite the perceptions of accreditation as a reputation enhancer and a recruitment tool, one study showed that graduating from an accredited journalism school does not reliably predict professional success. Becker, Kosicki, Engleman, and Viswanath (1993) polled 2,171 bachelor’s degree recipients from journalism and mass communications programs six to eight months following their graduation. According to their data, accreditation was not “a strong or consistent predictor of success in the job market” (p. 930). The Internet and Convergence One of the most important developments in the journalism field has been the advent of the Internet. Since the early 1990s, the Internet has emerged as an increasingly popular medium of communication and information, and has especially changed how people consume news (Dimmick, Chen, & Li, 2004). Stempel, Hargrove, and Bernt (2000), surveyed a national sample of adults on their media habits from 1990 to 1995, finding enormous gains for Internet usage and declines for TV news and newspapers during that time period. Building on those findings, Dimmick, Chen, and Li (2004) conducted a study to examine the competition between Internet and traditional news media. The researchers indeed found the presence of a displacement effect, 61 in which one medium partially replaces another medium for specific functions and in satisfying audience needs. The largest displacement effect was seen in television usage, as 33.7% of respondents indicated that they watched TV news less often after they started using the Internet for news; and 28% reported using newspapers less (Dimmick, Chen, & Li, 2004). The Internet has also affected journalism education. One notable example is the rise of teaching media convergence. “Convergence journalism has evolved as newsrooms have gone digital, blending media formats” (Kraeplin & Criado, 2005, p. 47). Convergence refers to the confluence of communication, storytelling, and technology, as specialized journalism skills (e.g. print versus broadcast) become less valuable due to blended digital environments (Tanner, 2005). 
For example, practitioners who want to engage in online journalism need to know print reporting, broadcast reporting, photography, infographics, design, editing, audio/visual production, etc. Tanner (2005) surveyed top administrators at ACEJMC-accredited programs (N=53) and TV news directors (N=170) to explore educational trends regarding convergence. The study found that about 80% of news directors and 80% of educators practice or teach convergence in some fashion. The study further reported that, despite the emphasis on convergence, both educators and professionals agree that writing is still key to landing a job. In a later article, Tanner, Forde, Besley, and Weir (2012) noted how “several studies found that most programs maintained specialized tracks (such as broadcast news and print news) while simultaneously emphasizing convergence” (p. 221). Another major impact of the Internet has been the delivery of journalism courses online. Castañeda (2011) cited a report that the University of Memphis and the University of Nebraska- Omaha in 1994 were among the first journalism schools to establish online degrees. Castañeda’s (2011) study polled all ACEJMC programs across the U.S., garnering a 71.6% response rate, and 62 found that 13% of these accredited programs now offer or plan to offer online degrees; 84% offer web-facilitated journalism courses. According to open-ended survey responses, the most common reasons for offering, or planning to offer, online degrees include boosting enrollment and reaching new markets (Castañeda, 2011). Impact on Diversity One notable benefit of journalism accreditation has been its impact on diversity. According to the ACEJMC’s “Diversity and Inclusion” standard, each unit should have a “diversity plan for achieving an inclusive curriculum, a diverse faculty and student population, and a supportive climate for working and learning and for assessing progress toward achievement of the plan,” as well as demonstrate effective initiatives to recruit women and minority faculty members (ACEJMC, 2012). Subervi and Cantrell (2007) surveyed 137 journalism schools – 69 accredited and 68 non- accredited – to assess minority faculty hiring and retention practices. Their data revealed major differences with regard to efforts and policies for recruiting, retaining, and promoting minority faculty, with accredited schools outperforming non-accredited schools. Ross et al. (2007) similarly found that ACEJMC accreditation standards led to an increase in non-white and female faculty and students in journalism and mass communication schools. For example, in 1989, non- whites and females represented 10.9% and 28%, respectively, of journalism and mass communication programs. In 2001, they comprised 15.3% and 38.5%, respectively. Although positive gains have been made in boosting minority representation in journalism schools, Subervi and Cantrell (2007) and Ross et al. (2007) agree that additional strategies and directives are needed to strengthen diversity in this academic field. 63 With regard to curriculum, academic discourse on the significance of diversity-focused coursework and content gathered strong momentum in the early 1990s (Biswas & Izard, 2009). In a study polling journalism and mass communication programs nationwide, Biswas and Izard (2009) reported that 74% of respondents (of which 45 were accredited and 33 non-accredited) offered at least one specialized media diversity course. 
Respondents cited the ACEJMC accreditation standards and their respective school’s multicultural environment as factors facilitating this steady upward trend. Assessment of Journalism Student Learning As noted earlier in this chapter, the current trend in higher education accreditation places strong emphasis on accountability, which involves measuring student learning through assessment. Similarly, in the journalism field, accreditation-related literature has also addressed the topic of assessment. Donald (2006), whose work was published as part of the book Assessing Media Education: A Handbook for Educators and Administrators, focused on the use of portfolios as a direct measure of student learning. Portfolios are a compilation of coursework and/or projects produced by students to demonstrate their proficiency in a subject area; they are also used by faculty to chronicle the growth of students’ knowledge and skills in a particular domain. The use of portfolios is becoming a common practice for assessing student learning, with numerous undergraduate journalism and mass communication programs having adopted it. According to Donald (2006), the advantages with portfolios include formative feedback from faculty; students are better prepared and more confident to compete for jobs; and the ability to identify common weaknesses among students that can lead to content or curricular improvement. 64 In addition, capstone courses are another method of summative assessment and direct measure (Moore, 2006). The capstone course “not only assesses previous cognitive learning in the major, but also provides a forum that allows an instructor to assess the student's overall collegiate experience” (p. 440). The educational advantages of a capstone course include: facilitating independent learning among students; enabling faculty to resolve perceived curricular weaknesses; the flexibility to be tailored to measure outcomes in a variety of specializations within the journalism/communications field; and promoting high-level student performance through analysis, synthesis, and the application of prior knowledge (Moore, 2006). In a separate chapter in the Assessing Media Education handbook, Grady (2006) reviewed indirect measures, such as internships, job placement, and student performance in competitions. These indirect measures are reflective of the success of an academic program, and are largely based on the assessment of student outcomes, Grady (2006) states. The ACEJMC Standard 9 recommends that both direct and indirect measures be woven into a program’s written assessment plan. Grady (2006) views indirect measures as providing “an opportunity to gauge the success or failure of an academic program” (p. 365). Internships, in particular, can serve as a critical method of assessment, as they bridge classroom theory with professional application. Internships may involve objective measures (i.e. number of students completing internships, types, locations) and subjective measures (i.e. supervisor evaluation comments, student journals, and reflective papers) (Grady, 2006). Williams (2010) sought to assess student learning outcomes through journalism internship data. 
In a qualitative case study of an accredited undergraduate journalism program (n=16), nine general workplace competencies – determined by faculty and staff – were evaluated through student internships: ability to work independently; ability to evaluate the work of self and others; understanding of law and workplace issues; effective presentation skills; interpersonal skills; reliability and punctuality; appropriate appearance; ability to accept constructive criticism; and ability to complete work on time (Williams, 2010). The data were collected through supervisor evaluations, student feedback, worksite visits, intern surveys, and descriptive records. The researcher found that the results from the internship evaluations led to steps that enhanced the program’s academic experience and promoted student learning. Specifically, the data were used to strengthen ties with practitioners/internship supervisors; contribute to discussions about a new writing course; revise weekly journalism assignments; and validate classroom instructional approaches (Williams, 2010). Thus, in this case, the internship assessment data were useful for improving curriculum and instruction. Another case study, by Alderman and Milrod (2009), utilized the ACEJMC professional competencies to evaluate interns from the University of Tennessee at Chattanooga’s accredited communications program. Worksite supervisors rated students using an evaluation form provided by the program director. The form covered professional skills and personal work habits along several categories: accuracy, appearance, communication skills, creativity, dependability, sense of initiative, interpersonal skills, pride in work, professional skills, self-confidence, and speed. A key finding of the study was that this internship evaluation form was a reliable measure, given high correlations between the students’ final grades and the indicators. “The data demonstrate that the final internship evaluation form used in this study is a reliable measure of both the students’ professional skills and personal work habits and their ability to meet the criteria set forth in the eleven professional values and competencies” (p. 24). Assessment of Student Writing Within the literature on assessment, a key strand focuses particularly on the assessment of journalism student writing. Lingwall (2010) surveyed 166 journalism programs about the quality of student writing and found that – regardless of whether or not a unit was accredited – all respondents rated their students at a similar, moderate proficiency level and experienced the same writing-related challenges. In addition, all respondents reported using many of the same measures for student writing, encountering mixed success. The results of this study point to two key implications: the importance of understanding students’ writing ability as they start the program, and the need to implement more meaningful measures. In a recent case study, one accredited journalism program attempted to address the issue of assessing student learning through the implementation of a pre-test/post-test strategy. The program sought to measure learning outcomes by administering entry and exit examinations for incoming and graduating students, respectively (Weir, 2010). Presuming the validity of such instruments, this case study may offer a novel approach to demonstrating actual student learning in journalism programs.
Critique of Journalism Education Several scholarly articles have identified perceived gaps in journalism education. In response to these observed gaps, some researchers have made recommendations to enhance journalism curriculum and better prepare students for work in the professional arena. Cusatis and Martin-Kratzer (2009) analyzed the state of math education at undergraduate journalism programs. Math skills are vital in today’s knowledge economy. Cusatis and Martin- Kratzer (2009) cited prior research which asserted that more than half of 21st Century jobs require the ability to use math; and journalists likewise utilize math skills for a wide range of 67 responsibilities, including reports involving science, medicine, and budgets, for example. After surveying journalism chairs at accredited and unaccredited journalism schools, Cusatis and Martin-Kratzer (2009) found that only 12.4% of respondents offered a specialized math course specifically for the journalism major; most relied on general education to develop this competency. Furthermore, 70.2% of journalism chairs rated their students’ math skills as fair or poor. In their conclusion, Cusatis and Martin-Kratzer (2009) argue that an “increase in math education for journalism students could result in higher journalistic numeracy, which could in turn result in fewer math errors in journalism” (p. 372). Therefore, they suggest that journalism programs work to improve the state of math education. In a separate empirical study, Cohn (2013) polled news media editors and/or supervisors who were part of an internship program. Of all the skills they considered very important in order to succeed professionally, four of the top five were soft skills — time management, ability to learn, high ethical standards, and teamwork. Other soft skills such as adaptability/flexibility and interpersonal skills also earned high ratings. However, “respondents in this study perceive students as not graduating with the soft skills needed in order to be competitive in the job market” (p. 177). As a result, Cohn (2013) stated that the ACEJMC should recommend journalism schools to integrate soft skills into the curriculum, given their importance in the workplace according to journalism practitioners. Focusing on media literacy, Christ (2004) notes the difficulty of defining the concept of media literacy as well as developing standards for it. The author cites that media literacy involves the ability to access, analyze, evaluate, and communicate via print and digital media. Christ (2004) further points out that media literacy is not explicitly listed as a competency in the ACEJMC standards, but it is inherent in the standards — in other words, the ACEJMC 68 requirements are meant to produce media literate practitioners. In order to effectively assess whether or not students are media literate, Christ (2004) suggests adopting a student learning- outcomes approach that begins by asking, “What do we teach?” (p. 95). In a case study involving one ACEJMC-accredited university in the southern U.S., Fuse and Lambiase (2010) examined the perceptions of 137 alumni on the ACEJMC’s competencies across three dimensions: their degree of liking; the department’s performance; and the frequency of using these skills or knowledge on the job. 
According to the results, alumni found writing and critical/creative thinking to be the most valued competencies and the most useful on the job, while history and law were found to be their least favorite areas in school and the least useful in the professional arena. Given their findings, the researchers argue that the ACEJMC core competencies may not necessarily align with actual on-the-job demands. They recommend that “journalism and mass communication programs need to work harder to make connections between the large knowledge sets that are taught (e.g., history, diversity, statistics, technology) and the skills-related competencies that alumni perceive as most valuable (e.g., writing)” (p. 52). More and better assessment of student learning is one way to help foster these connections (Fuse & Lambiase, 2010). Singh (2005) surveyed a national sample of faculty from ACEJMC-accredited programs and assessed their perceptions of journalism students’ information literacy with regard to library research. This is not only an important skill for ensuring academic success; it is also highly relevant for practitioners in the journalism and mass communications field. The ability to conduct research and evaluate information is explicitly listed by the ACEJMC as a professional competency, and “the provision of adequate library and information resources” is included among its standards for accreditation (p. 294). Of the 425 survey responses, 97.6% of faculty indicated that they give assignments requiring library research, but only 8.6% made library instruction a regular part of every course. Thus, in light of the frequent use of library research in journalism coursework and its relevant professional application, Singh (2005) advocates integrating information literacy education as a fundamental part of the journalism curriculum. Evaluating ACEJMC Standards Very recently, several researchers have published articles evaluating the ACEJMC standards, or various aspects of them (Reinardy & Crawford, 2013; Christ & Henderson, 2014; Henderson & Christ, 2014). In their study, which sought to critique the nine ACEJMC standards, Reinardy and Crawford (2013) analyzed responses from 68 administrators of accredited journalism and mass communications programs. Six of the nine standards were rated “good as is” by a large majority of respondents. These include: mission, governance, and administration (86%); full-time and part-time faculty (73%); student services (95%); resources, facilities, and equipment (83%); professional and public service (78%); and scholarship: research, creativity, and professional activity (68%). In contrast, the lowest-rated was Standard 2, curriculum and instruction, which 40 percent of respondents said “needs major changes.” Christ and Henderson (2014) assessed the 12 ACEJMC professional values and competencies, breaking down each one and identifying issues with the language used.
According to the ACEJMC, all graduates of accredited journalism and mass communication programs, should be able to: • understand and apply the principles and laws of freedom of speech and press for the country in which the institution that invites ACEJMC is located, as well as receive instruction in and understand the range of systems of freedom of expression around 70 the world, including the right to dissent, to monitor and criticize power, and to assemble and petition for redress of grievances; • demonstrate an understanding of the history and role of professionals and institutions in shaping communications; • demonstrate an understanding of gender, race, ethnicity, sexual orientation and, as appropriate, other forms of diversity in domestic society in relation to mass communications; • demonstrate an understanding of the diversity of peoples and cultures and of the significance and impact of mass communications in a global society; • understand concepts and apply theories in the use and presentation of images and information; • demonstrate an understanding of professional ethical principles and work ethically in pursuit of truth, accuracy, fairness and diversity; • think critically, creatively and independently; • conduct research and evaluate information by methods appropriate to the communications professions in which they work; • write correctly and clearly in forms and styles appropriate for the communications professions, audiences and purposes they serve; • critically evaluate their own work and that of others for accuracy and fairness, clarity, appropriate style and grammatical correctness; • apply basic numerical and statistical concepts; • apply current tools and technologies appropriate for the communications professions in which they work, and to understand the digital world (ACEJMC, 2012). 71 After a thorough analysis, Christ and Henderson (2014) assert that these 12 professional values and competencies actually embody 36 assessment requirements. To remedy this problem, the authors raise two potential solutions. The first would be to “shorten, simplify, clarify, or even eliminate” a few of the competencies, which they admit will be difficult to do politically and practically (p. 10). The alternative solution would be to develop a multi-tiered system within the 12 values and competencies system, in which each unit “would be given the latitude to identify those three or more competencies they most emphasize in their programs” (p. 10). Expanding on that article, Henderson and Christ (2014) also published a recent study that benchmarked the 12 ACEJMC competencies. The authors surveyed 176 journalism program administrators about which competencies were most emphasized in their programs. According to the data overall, the most emphasized were: writing effectively (72.2%); thinking critically, creatively, and independently (51.7%); and applying technology (45.5%). After those, the competencies related to ethics received 30.8% and freedom of speech at 30.1%. Conversely, with regard to programs’ least emphasized priorities, the competencies of history, gender diversity, and numerical/statistical concepts each received 10 percent or less. Even when the data were disaggregated to compare accredited and non-accredited programs, the emphases showed considerable overlap. For accredited units, the top five, in descending order, were: writing effectively; critical/creative thinking and freedom of speech (tied for second); applying technology; and ethics. 
For non-accredited units, the top five were: writing effectively; critical/creative thinking; applying technology; ethics; and theories. Given their findings, Henderson and Christ (2014) once again argue that a tiered system of assessment may be useful and beneficial for journalism schools. “For example, ‘writing,’ ‘thinking,’ and ‘technology’ could be assessed for understanding, application, or even mastery, while ‘free speech’ and ‘ethics’ might be assessed at the level of understanding or application” (p. 9). Similar to Henderson and Christ (2014), this study surveyed a national sample of accredited and non-accredited journalism program directors, asking them to prioritize the ACEJMC competencies. This study asks respondents to prioritize all 12 competencies, and also to self-report student outcomes (such as graduation rates, retention rates, and job placement rates), to determine whether there are any correlations between how they prioritize these competencies and their student outcomes. In addition, this study asks respondents to indicate their use – if any – of a summative assessment (e.g., capstone project, comprehensive examination, thesis, portfolio, exit survey, etc.), and to share their thoughts on how successful this measure is at their respective institutions. Lastly, this study also seeks to determine whether any differences exist between accredited and non-accredited programs concerning these points of emphasis. Current Journalism Education Landscape The Poynter Institute, a leading institution in journalism education, recently conducted a study that garnered 1,800 responses – equally distributed – from professionals and academics nationwide. In its white paper, titled State of Journalism Education 2013, the institute detailed key findings and research from the study. Based on the survey results, the institute argues that there currently exists a disconnect between educators and practitioners with regard to journalism education (Finberg, Krueger, & Klinger, 2013). For example, 96% of educators indicated that a journalism degree is very important to understanding the value of journalism, but only 57% of practitioners agreed. A large majority (98%) of educators stated that a journalism degree is very to extremely important to news-gathering abilities, but only 59% of practitioners agreed. When asked whether they feel journalism education is keeping up with industry changes, 39% of educators responded “not at all” or “a little,” and 48% of newsroom leaders/staff said it is not keeping up. And when asked how important a journalism degree is to hiring, 53% of educators believed the degree is very to extremely important, while 41% of professionals shared that view. Furthermore, three core themes emerge from the report’s articles: 1) experimentation is necessary, in terms of curriculum and delivery platforms; 2) the core values of journalism are still important; and 3) there is a need for increased cooperation and interaction between the academy and those who work in media organizations (Finberg, Krueger, & Klinger, 2013). CHAPTER THREE: METHODOLOGY Purpose of the Study Currently, there are more than 450 schools and programs – offering undergraduate and/or graduate degrees – within the field of journalism and mass communications nationwide (Association for Education in Journalism and Mass Communication, 2014). Of that number, 114 are accredited by the field’s accrediting organization, the ACEJMC (ACEJMC, 2014).
Because such a vast majority of schools have decided not to pursue programmatic accreditation, it is critical to have empirical data that show what impact, if any, accreditation may have on student outcomes. The purpose of this study is to explore the relationship between curricular priorities/learning assessment practices and quantifiable student outcomes at undergraduate journalism programs. This chapter will discuss the methodology employed in the researcher’s data collection process. The study employs a mixed methods research approach. A national sample of accredited and non-accredited journalism program directors was surveyed online, using a survey instrument created and distributed via Qualtrics software. Individuals from ACEJMC-accredited programs were asked to explain the reasons why their programs chose to pursue accreditation. In addition, all survey recipients were asked to prioritize the 12 ACEJMC professional competencies using a Likert-type scale, then indicate their use – if any – of a summative assessment (e.g. capstone project, comprehensive examination, thesis, portfolio, exit survey, etc.), and also to self-report student outcomes (such as graduation rates, retention rates, and job placement rates). Qualitative data were gathered from open-ended questions in the survey. The study examines whether the way they prioritize these competencies and/or their use of summative learning assessments have 75 any correlation to their student outcomes. This study also attempts to indicate whether any differences exist between accredited and non-accredited programs. Guiding this study are the following research questions: Which competencies do journalism schools prioritize as being most important? What are the measurable student outcomes at undergraduate journalism schools (i.e. job placement rates, graduation rates, retention rates)? How do schools assess student learning, at the conclusion of the program? (What summative measure is used? How effective is it, in their experience?) Are there differences between accredited and non-accredited programs? Sample and Population The population of this study is all journalism and mass communications schools nationwide offering an undergraduate journalism degree program, as listed in the Association for Education in Journalism and Mass Communication (AEJMC) 2014 Directory. The population is purposely meant to be broad in order to maximize the generalizability of the data and findings of this study. The survey was sent to 517 program directors and faculty administrators representing a total of 441 undergraduate journalism programs. In the cover email, recipients were asked to have the person most knowledgeable about the journalism program's curriculum to respond to the survey (see Appendix A). Instrumentation As an information collection method, surveys provide a description of trends and opinions of a population by examining a sample of it (Creswell, 2009). According to Fink (2013), the use of surveys is particularly beneficial for evaluating the effectiveness of programs, 76 as well as “to get information about how to guide studies and programs” (p. 2). Given the nature of this study, it seemed appropriate to employ a survey as a means of collecting data. Therefore, in this mixed methods research study, the methodology involves the use of a survey administered online with subsequent statistical analysis using SPSS, as well as qualitative analysis of open-ended responses. 
The researcher-developed survey seeks to gather data from undergraduate journalism program directors and faculty administrators nationwide. The study uses correlation coefficient tests to acquire an understanding of student outcomes as a result of curricular priorities and practices, and explore any possible differences between accredited and non-accredited programs. The data and analysis from this study aim to contribute to filling a gap in the literature that addresses accreditation and student learning and outcomes. Survey The survey (see Appendix B) consisted of 16 questions, and was broken up into four sections. Part I captured Demographic Information, such as accreditation status, whether or not the respondent was the program’s accreditation liaison, and the program’s rationale for choosing to pursue or maintain accreditation (if ACEJMC-accredited). Part II examined Professional Values and Competencies, asking respondents to indicate which of the 12 ACEJMC values and competencies their program emphasize the most, using a five-point Likert-type scale that ranged from least important to most important. The 12 professional values and competencies required by the ACEJMC are: understand and apply the principles and laws of freedom of speech and press for the country in which the institution that invites ACEJMC is located, as well as receive instruction in and understand the range of systems of freedom of expression around the 77 world, including the right to dissent, to monitor and criticize power, and to assemble and petition for redress of grievances; demonstrate an understanding of the history and role of professionals and institutions in shaping communications; demonstrate an understanding of gender, race, ethnicity, sexual orientation and, as appropriate, other forms of diversity in domestic society in relation to mass communications; demonstrate an understanding of the diversity of peoples and cultures and of the significance and impact of mass communications in a global society; understand concepts and apply theories in the use and presentation of images and information; demonstrate an understanding of professional ethical principles and work ethically in pursuit of truth, accuracy, fairness and diversity; think critically, creatively and independently; conduct research and evaluate information by methods appropriate to the communications professions in which they work; write correctly and clearly in forms and styles appropriate for the communications professions, audiences and purposes they serve; critically evaluate their own work and that of others for accuracy and fairness, clarity, appropriate style and grammatical correctness; apply basic numerical and statistical concepts; apply current tools and technologies appropriate for the communications professions in which they work, and to understand the digital world (ACEJMC, 2012). 78 The final question of this section presented an opportunity to explain – in an open-ended response – how their program emphasizes these “most important” values/competencies over others in their curriculum. Part III focused on Assessment of Learning, asking respondents to identify the summative measure they use to assess undergraduate student learning at the conclusion of the program. They are also asked to explain the rationale behind their decision to use such measures and how effective these measures have been. 
Part IV asked respondents to self-report their Student Outcomes, namely job placement rates, graduation rates, and retention rates over the last three years. Validity A faculty administrator from the University of Southern California’s Annenberg School for Communication and Journalism served as an expert reviewer to assist with establishing content validity for the survey. This subject-matter expert conducted a review of the survey instrument and provided feedback and recommendations in October 2014, which were adopted prior to the survey’s launch. Data Collection Following approval by the University of Southern California’s Institutional Review Board (IRB), the survey was distributed via email to 441 undergraduate journalism programs listed in the Association for Education in Journalism and Mass Communication (AEJMC) 2014 Directory. The initial email invitation to take part in the survey was sent on November 25, 2014, through Qualtrics. A follow-up email invitation from the researcher was sent on December 19, 2014. A final follow-up email from the faculty chair was sent on January 25, 2015. The researcher closed the survey on February 7, 2015. Statistical analysis of the results, including correlation coefficient tests, was conducted using SPSS. Response Rate The survey yielded 78 valid responses, which equates to a 17.7% response rate. Of that total, 35 programs indicated that they were accredited by the ACEJMC, while 43 were unaccredited. Limitations With regard to this study, several limitations should be taken into consideration. First, the design of the survey allowed respondents to skip questions and move on to the next section without fully completing the form. As a result, most of the responses had one or more unanswered fields. Second, the design of the survey was not optimal for gathering data on Question 13. In Part III: Assessment of Learning, Question 13 asked, “How do you assess undergraduate student learning at the conclusion of the program?” The possible list of answers included: capstone or professionally focused project; comprehensive examination; thesis; portfolio; exit survey or interview; other; or none. These answers were available in multiple-choice format but did not allow respondents to select more than one option. Several respondents used the “Other” option to provide a written explanation of how they used multiple measures to assess student learning. As a result, the researcher coded each of these individual explanations as a new level prior to running further statistical analysis. Third, the student outcome numbers – job placement rates, graduation rates, and retention rates – were self-reported by each institution. No independent verification was conducted to confirm these statistics. Three schools noted that their student numbers were estimates or approximations; these were included in the data set for subsequent statistical analysis. In addition, only a small number of responses were submitted in the student outcomes section of the survey. Lastly, several respondents commented that the wording of some of the questions was either unclear or not aligned with how they typically track student data. For example, some schools stated that they track graduation and retention rates by entry year, rather than by exit year. Meanwhile, 21 schools stated that, for one or more of the student outcome metrics, their program and/or their university did not track these numbers, or they did not know.
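Although the study’s correlation coefficient tests and group comparisons were run in SPSS, the sketch below illustrates how the same kinds of analyses could be approximated from the raw Qualtrics export. It is illustrative only; the file name and column names (accredited, writing_rating, job_placement_rate) are hypothetical placeholders rather than the actual variable names used in the study.

```python
import pandas as pd
from scipy import stats

# Assumed export: one row per responding program (hypothetical column names).
df = pd.read_csv("survey_responses.csv")

# Pearson correlation between one competency priority (1-5 Likert rating) and one
# self-reported student outcome, dropping respondents who skipped either question.
pair = df[["writing_rating", "job_placement_rate"]].dropna()
r, p = stats.pearsonr(pair["writing_rating"], pair["job_placement_rate"])
print(f"Writing priority vs. job placement: r = {r:.2f}, p = {p:.3f}")

# Independent samples t-test for equality of means between accredited and
# non-accredited programs on the same competency rating.
accredited = df.loc[df["accredited"] == 1, "writing_rating"].dropna()
non_accredited = df.loc[df["accredited"] == 0, "writing_rating"].dropna()
t, p = stats.ttest_ind(accredited, non_accredited)
print(f"Accredited vs. non-accredited: t = {t:.2f}, p = {p:.3f}")
```

Dropping missing values pairwise, as above, mirrors the skipped-question limitation noted earlier: each test uses only the respondents who answered both items involved.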
CHAPTER FOUR: RESULTS The focus of this study is to examine what impact, if any, accreditation may have on undergraduate journalism program curriculum and student outcomes. This chapter will present the results of the survey (see Appendix B) distributed to a national sample of journalism program directors and faculty administrators from both ACEJMC-accredited and non-accredited programs. The chapter will begin by restating the research questions, as well as reviewing the response rate of the survey. The four subsequent sections of the chapter will review the data pertaining to: why ACEJMC-accredited schools chose to pursue accreditation; ranking of the ACEJMC professional values and competencies; assessment of student learning; and student outcomes. Purpose of the Study The purpose of this study is to explore the relationship between curricular priorities/learning assessment practices and quantifiable student outcomes at undergraduate journalism programs. It explores whether the way schools prioritize the 12 ACEJMC values and competencies and/or their use of summative learning assessments have any correlation with their student outcomes. This study also attempts to indicate whether any differences exist between accredited and non-accredited programs. Guiding this study are the following research questions: Which competencies do journalism schools prioritize as being most important? What are the measurable student outcomes at undergraduate journalism schools (i.e., job placement rates, graduation rates, retention rates)? How do schools assess student learning at the conclusion of the program? (What summative measure is used? How effective is it, in their experience?) Are there differences between accredited and non-accredited programs? Response Rate The Qualtrics-produced survey was distributed via email to 441 undergraduate journalism programs listed in the Association for Education in Journalism and Mass Communication (AEJMC) 2014 Directory. The survey yielded 78 valid responses, which equates to a 17.7% response rate. Of that total, 35 programs (44.9%) indicated that they were accredited by the ACEJMC, while 43 programs (55.1%) were unaccredited (see Figure 1). Figure 1. Accreditation status of journalism programs (N=78). In addition, a total of 16 respondents (32.7%) out of 49 indicated that they were the accreditation liaison at their respective program (see Figure 2). Figure 2. Percentage of respondents who are accreditation liaisons (N=49). Why Pursue Accreditation? Because only about one-fourth of all U.S. journalism programs are accredited, it was important to find out, from the perspective of journalism program directors and faculty administrators, what the motivating factors for pursuing accreditation are. As part of the survey, ACEJMC-accredited programs were asked to provide an open-ended explanation of their school’s reasons for pursuing or maintaining accreditation. From the responses of the 35 accredited schools, a total of 15 categories emerged; most schools offered more than one reason. Quality. The most frequent reason (listed by 14 schools) was to ensure program quality, improvement, and/or accountability. As one program noted, “It is the discipline's only national benchmark of program quality” (personal communication, December 19, 2014). Another wrote that accreditation “keeps us honest in terms of pursuing diversity, assessment, etc.” (personal communication, November 29, 2014). Many also agreed that accreditation serves as a mechanism to spur improvement over time.
According to a respondent, “it ensures that we will self-assess in 84 crucial areas regularly” (personal communication, December 1, 2014). While another added, it provides “an outside assessment of the quality of our program and to build in a timeline of regular program improvement and accountability” (personal communication, November 25, 2014). Prestige. Nine respondents named reputation, stature, or prestige as a reason. As one respondent explained, “Faculty believe accreditation is linked to the department's regional and statewide reputation as a worthy program that produces competent and work-force ready graduates” (personal communication, January 1, 2015). Class size. Six schools noted that pursuing ACEJMC accreditation helped justify smaller class sizes. As one program stated, “Keeping student-faculty ratios at manageable levels in skills courses even as the university has moved to significantly increase student enrollment by mandating larger classes” (personal communication, January 5, 2015). Mandate. Six schools pointed out that their decision to undergo accreditation was a university mandate. Among those responses, one noted that its program’s decision to acquire accreditation followed a state mandate, as well as a university one. Belief in standards. Five schools pursued accreditation because of a belief in the ACEJMC standards. According to one respondent, “Faculty have discussed not pursuing accreditation, but, in the end, realize that the program must be evaluated and that ACEJMC guidelines have been effective” (personal communication, December 8, 2014). Recruitment. Five schools pointed out that student recruitment was a motivating factor. As one school put it, “It is a "seal of approval" that helps in recruiting” (personal communication, November 25, 2014). Another school explained very directly that accreditation 85 helps “To distinguish ourselves from non-accredited journalism programs” (personal communication, November 28, 2014). University accountability. Accountability to university administration was listed as a reason by five programs. Two respondents explained that accreditation was also valuable for bureaucratic leverage against university administration. According to one program, part of their reason to pursue accreditation was “to fend off unreasonable demands from school administration” (personal communication, December 21, 2014). Another mentioned how accreditation provides “leverage with University administration on issues surrounding class sizes, support for the program, facilities upgrades, etc.” (personal communication, November 25, 2014). Credential. Four schools noted that accreditation served as a credential, or external validation, of their program. One program wrote, “We have decided to maintain accreditation given it's [sic] value as a credential in a field of professional practice” (personal communication, December 19, 2014). Another respondent added, “It's also validation that the unit is incorporating best practices” (personal communication, November 28, 2014). Leadership’s choice. Three respondents stated that the decision to pursue accreditation was done at the direction of program leadership. As one put it, “our dean is a big proponent” (personal communication, November 25, 2014). Meanwhile, another noted, “Had no choice. The previous chair had started the process” (personal communication, November 25, 2014). Tradition. Three schools listed “tradition” as a reason. But as one program pointed out, “However, tradition only goes so far. 
As a small program, accreditation has provided a mechanism for maintaining high standards in several areas” (personal communication, January 5, 2015). 86 Alumni. Two noted that accreditation was important for alumni support. One program explained that ACEJMC accreditation holds “Worth to graduates” (personal communication, December 20, 2014). Competition. Two schools indicated that being ACEJMC accredited provided a competitive edge. As one program succinctly stated, “most of the top J schools are accredited” (personal communication, November 25, 2014). Student opportunities. Another two respondents listed student opportunities as a reason. According to one respondent, there are “opportunities for students to participate in special programs and contests open only to accredited programs” (personal communication, November 25, 2014). Faculty. One school wrote that having more full-time faculty was a motivating reason, citing that the ACEJMC has a “requirement that majority of classes be taught by full-time faculty” (personal communication, November 25, 2014). Professional connections. And, one school pursued accreditation because of the professional connections it engendered: “We appreciate participation in the national convention each year, the workshops, the standards the organization holds to, and the professional fellowship with other writers and editors” (personal communication, November 28, 2014). In contrast, one respondent – after having explained the reasons why the program sought accreditation – proceeded to offer a rigid critique of journalism accreditation. The respondent wrote that accreditation stifles educational innovation; and that the standards inhibit the type of experimentation which may help advance journalism education: “I'm philosophically opposed to accreditation because it stifles innovation. Programs are forced to fit within a specific box, created by those who are already are within the system 87 and industry. I feel that hinders our ability to try new things that might advance journalism education. Also, the system is quite arbitrary - the top schools are "too big to fail" even when they often are really poor in some of the standards” (personal communication, November 28, 2014). Furthermore, the respondent is weighing the possibility of abandoning accreditation. Ranking the ACEJMC Professional Values and Competencies In Part II of the survey, the journalism directors and faculty administrators were asked which of the 12 ACEJMC professional values and competencies their program emphasizes most. Each respondent was asked to rank the 12 values and competencies along a Likert-type scale from 1 to 5, with 1 being the least important and 5 being the most important. 
According to the ACEJMC, the 12 professional values and competences required of all graduates of an accredited journalism and mass communication programs are: • understand and apply the principles and laws of freedom of speech and press for the country in which the institution that invites ACEJMC is located, as well as receive instruction in and understand the range of systems of freedom of expression around the world, including the right to dissent, to monitor and criticize power, and to assemble and petition for redress of grievances; • demonstrate an understanding of the history and role of professionals and institutions in shaping communications; • demonstrate an understanding of gender, race, ethnicity, sexual orientation and, as appropriate, other forms of diversity in domestic society in relation to mass communications; 88 • demonstrate an understanding of the diversity of peoples and cultures and of the significance and impact of mass communications in a global society; • understand concepts and apply theories in the use and presentation of images and information; • demonstrate an understanding of professional ethical principles and work ethically in pursuit of truth, accuracy, fairness and diversity; • think critically, creatively and independently; • conduct research and evaluate information by methods appropriate to the communications professions in which they work; • write correctly and clearly in forms and styles appropriate for the communications professions, audiences and purposes they serve; • critically evaluate their own work and that of others for accuracy and fairness, clarity, appropriate style and grammatical correctness; • apply basic numerical and statistical concepts; • apply current tools and technologies appropriate for the communications professions in which they work, and to understand the digital world (ACEJMC, 2012). This section of the chapter will first report the overall frequency and descriptive statistics for each of the 12 areas. Then, it will show the differences in means between accredited and non-accreditation schools. Third, it will show the results of an independent samples t-test for equality of means, to determine if there is any statistically significant difference between accredited and unaccredited journalism programs in terms of how their curricula emphasize the ACEJMC professional values/competencies. And finally, it will review the open-ended survey 89 responses addressing how each school’s curriculum emphasized the values/competencies it ranked as most important. Values/Competencies Seen as Most Important Of the 12 ACEJMC professional values and competencies, the three that received the highest mean scores were writing (4.9), critical/creative thinking (4.76), and professional ethics (4.59). Conversely, the three values and competencies garnering the lowest mean scores were history of communications (3.343), numerical and statistical concepts (3.348), and the application of theories in presenting images and information (3.426). In comparison, the earlier study by Henderson and Christ (2014) found that the three most emphasized professional values and competencies were: writing effectively; thinking critically, creatively, and independently; and applying technology. The three least emphasized were history, gender diversity, and numerical/statistical concepts. In the following sub-sections, the scores and descriptive statistics for each of the 12 professional areas collected as part of this current study will be reported in greater detail. 
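For context on the statistics reported in the following sub-sections (frequency counts, mean, standard deviation, skewness, and kurtosis for each 1-to-5 rating), the sketch below shows how such per-item descriptive statistics could be generated from the raw ratings. It is an illustration only, not the study’s SPSS procedure, and the column names for the 12 competency ratings are hypothetical.

```python
import pandas as pd

# Hypothetical export: one row per responding program, one 1-5 rating per competency.
df = pd.read_csv("survey_responses.csv")

competencies = ["free_speech", "history", "gender_diversity", "global_diversity",
                "theories", "ethics", "critical_thinking", "research",
                "writing", "self_evaluation", "numeracy", "technology"]

for item in competencies:
    ratings = df[item].dropna()
    summary = {
        "n": len(ratings),
        "mean": round(ratings.mean(), 2),
        "sd": round(ratings.std(), 3),         # sample standard deviation
        "skewness": round(ratings.skew(), 3),
        "kurtosis": round(ratings.kurt(), 3),  # excess kurtosis
    }
    # Frequency of each rating, 1 = least important ... 5 = most important
    frequencies = ratings.value_counts().sort_index().to_dict()
    print(item, summary, frequencies)
```

Because the kurtosis reported here is excess kurtosis, values near zero indicate an approximately normal-shaped distribution of ratings, while strongly negative skewness indicates responses clustered toward the “most important” end of the scale.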
Freedom of speech and press. For the value/competency related to understanding and applying the principles and laws of freedom of speech and press, the survey yielded 70 total valid responses (Figure 3). The overall mean score was 4.39, with a standard deviation of .804. Forty respondents (57.1%) rated this value/competency as being most important, 18 (25.7%) ranked it as important, 11 (15.7%) marked it as average, and one (1.4%) ranked it as being less important. This competency had the fourth highest average mean score among the 12 areas. Moreover, the data’s distribution had a skewness of -.994 and a kurtosis of -.142. 90 Figure 3. Understand and apply principles and laws of freedom of speech and press. History. For the value/competency related to demonstrating an understanding of the history and role of professionals and institutions in shaping communications, the survey yielded 70 total valid responses (Figure 4). The overall mean score was 3.34, with a standard deviation of .866. Only seven respondents (10%) rated this competency/value as being most important, 21 (30%) ranked it as important, 31 (44.3%) marked it as average, and 11 (15.7%) ranked it as being less important. This competency had the lowest mean score among the 12 areas. Moreover, the data’s distribution had a skewness of .228 and a kurtosis of -.526. 91 Figure 4. Demonstrate an understanding of the history and role of professionals and institutions in shaping communications. Gender and other forms of diversity. For the value/competency related to demonstrating an understanding of gender, race, ethnicity, sexual orientation and other forms of diversity in society, the survey yielded 70 total valid responses (Figure 5). The overall mean score was 3.79, with a standard deviation of .991. Seventeen respondents (24.3%) rated this competency/value as being most important, 30 (42.9%) ranked it as important, 16 (22.9%) marked it as average, five (7.1%) ranked it as being less important, and two (2.9%) rated it least important. This competency had the eighth highest mean score among the 12 areas. Moreover, the data’s distribution had a skewness of -.747 and a kurtosis of .377. 92 Figure 5. Demonstrate an understanding of gender, race, ethnicity, sexual orientation and, as appropriate, other forms of diversity in domestic society in relation to mass communications. Cultural diversity. For the competency/value related to demonstrating an understanding of cultural diversity and the significance and impact of mass communications in a global society, the survey yielded 69 total valid responses (Figure 6). The overall mean score was 3.94, with a standard deviation of .983. Twenty-three respondents (33.3%) rated this competency/value as being most important, 26 (37.7%) ranked it as important, 14 (20.3%) marked it as average, five (7.2%) ranked it as being less important, and one (1.4%) rated it least important. This competency had the seventh highest mean score among the 12 areas. Moreover, the data’s distribution had a skewness of -.741 and a kurtosis of .055. 93 Figure 6. Demonstrate an understanding of the diversity of peoples and cultures and of the significance and impact of mass communications in a global society. Theories. For the competency/value related to applying theories in the use and presentation of images and information, the survey yielded 68 total valid responses (Figure 7). The overall mean score was 3.43, with a standard deviation of 1.04. 
Twelve respondents (17.6%) rated this competency/value as being most important, 19 (27.9%) ranked it as important, 25 (36.8%) marked it as average, 10 (14.7%) ranked it as being less important, and two (2.9%) rated it least important. This competency had the 10th highest mean score among the 12 areas. Moreover, the data’s distribution had a skewness of -.125 and a kurtosis of -.539. 94 Figure 7. Understand concepts and apply theories in the use and presentation of images and information. Ethics. For the competency/value related to understanding professional ethics and working ethically in pursuit of truth, accuracy, fairness and diversity, the survey yielded 71 total valid responses (Figure 8). The overall mean score was 4.59, with a standard deviation of .688. Fifty respondents (70.4%) rated this competency/value as being most important, 13 (18.3%) ranked it as important, and eight (11.3%) marked it as average. None of the respondents rated it as less or least important. This competency had the third highest mean score among the 12 areas. Moreover, the data’s distribution had a skewness of -1.422 and a kurtosis of .635. 95 Figure 8. Demonstrate an understanding of professional ethics. Critical and creative thinking. For the competency/value related to thinking critically, creatively and independently, the survey yielded 71 total valid responses (Figure 9). The overall mean score was 4.76, with a standard deviation of .520. Fifty-seven respondents (80.3%) rated this competency/value as being most important, 11 (15.5%) ranked it as important, and three (4.2%) marked it as average. None of the respondents rated it as less or least important. This competency had the second highest mean score among the 12 areas. Moreover, the data’s distribution had a skewness of -2.135 and a kurtosis of 3.825. 96 Figure 9. Thinking critically, creatively, and independently. Research. For the competency/value related to conducting research and evaluating information, the survey yielded 70 total valid responses (Figure 10). The overall mean score was 3.76, with a standard deviation of .859. Fifteen respondents (21.4%) rated this competency/value as being most important, 27 (38.6%) ranked it as important, 24 (34.3%) marked it as average, and four (5.7%) ranked it as being less important. This competency had the ninth highest mean score among the 12 areas. Moreover, the data’s distribution had a skewness of -.069 and a kurtosis of -.753. 97 Figure 10. Conduct research appropriate to communications professions. Writing. For the competency/value related to writing correctly and clearly in forms and styles appropriate for the communications professions, audiences and purposes, the survey yielded 71 total valid responses (Figure 11). The overall mean score was 4.90, with a standard deviation of .344. Of the 71 responses, 65 (91.5%) rated this competency/value as being most important, five (7%) ranked it as important, and just one (1.4%) marked it as average. None of the respondents rated it as less or least important. This competency had the highest mean score among the 12 areas. Moreover, the data’s distribution had a skewness of -3.764 and a kurtosis of 14.956. 98 Figure 11. Writing correctly and clearly. Evaluate. For the competency/value related to critically evaluating one’s own work and that of others for accuracy, fairness, clarity, style, and grammar, the survey yielded 69 total valid responses (Figure 12). The overall mean score was 4.38, with a standard deviation of .709. 
Thirty-five respondents (50.7%) rated this competency/value as being most important, 25 (36.2%) ranked it as important, and nine (13%) marked it as average. None of the respondents rated it as less or least important. This competency had the fifth highest mean score among the 12 areas. Moreover, the data’s distribution had a skewness of -.694 and a kurtosis of -.717. 99 Figure 12. Critically evaluate own work and that of others for accuracy, fairness, clarity, and style. Numerical concepts. For the competency/value related to applying numerical and statistical concepts, the survey yielded 69 total valid responses (Figure 13). The overall mean score was 3.35, with a standard deviation of 1.10. Eleven respondents (15.9%) rated this competency/value as being most important, 20 (29%) ranked it as important, 25 (36.2%) marked it as average, eight (11.6%) ranked it as being less important, and five (7.2%) rated it least important. This competency had the second lowest mean score among the 12 areas. Moreover, the data’s distribution had a skewness of -.335 and a kurtosis of -.332. 100 Figure 13. Apply basic numerical and statistical concepts. Technology. For the competency/value related to applying current tools and technologies appropriate for communications professions and to understand the digital world, the survey yielded 70 total valid responses (Figure 14). The overall mean score was 4.33, with a standard deviation of .696. Thirty-two respondents (45.7%) rated this competency/value as most important, 29 (41.4%) ranked it as important, and nine (12.9%) marked it as average. None of the respondents rated it as less or least important. This competency had the sixth highest mean score among the 12 areas. Moreover, the data’s distribution had a skewness of -.550 and a kurtosis of -.787. 101 Figure 14. Apply current tools and technologies appropriate for communications professions. Rankings by Accredited vs. Non-Accredited Schools The results of the survey show that there is considerable overlap in how accredited and unaccredited schools rank the professional values and competencies. For example, administrators from both accredited and unaccredited programs agreed that the three ACEJMC values and competencies emphasized most in their curricula are: (1) writing correctly, clearly, and in appropriate styles; (2) thinking critically, creatively and independently; and (3) demonstrating an understanding of professional ethics. In addition, administrators from accredited and unaccredited schools rated the two competencies of applying basic numerical and statistical 102 concepts and demonstrating an understanding of the history and role of professionals and institutions in shaping communications in the bottom three of their rankings. As Table 1 shows, all of the values and competencies were rated quite similarly, irrespective of accreditation status. In fact, the largest difference between means was for gender and other forms of diversity, with a gap of .807. Its mean score was 4.212 among accredited schools was (placing at seventh), while it averaged 3.405 among unaccredited schools (which was 10th). Table 1. 
ACEJMC professional values/competencies mean scores, by school accreditation status ACEJMC Professional Value Accreditation Status N Mean Standard Deviation Standard Error Mean Freedom of speech Yes 33 4.515 .7124 .1240 No 37 4.270 .8708 .1432 History Yes 33 3.364 .8594 .1496 No 37 3.324 .8836 .1453 Gender Diversity Yes 33 4.212 .9604 .1672 No 37 3.405 .8647 .1422 Cultural Diversity Yes 33 4.121 1.0535 .1834 No 36 3.778 .8980 .1497 Theories Yes 32 3.438 1.0140 .1793 No 36 3.417 1.0790 .1798 Ethics Yes 33 4.788 .4846 .0844 No 38 4.421 .7929 .1286 Critical Thinking Yes 33 4.848 .4417 .0769 No 38 4.684 .5745 .0932 Research Yes 33 3.818 .8461 .1473 No 37 3.703 .8777 .1443 Writing Yes 33 4.879 .3314 .0577 No 38 4.921 .3588 .0582 Evaluation Yes 33 4.394 .7044 .1226 No 36 4.361 .7232 .1205 103 Numerical Concepts Yes 33 3.697 1.0454 .1820 No 36 3.028 1.0820 .1803 Technology Yes 33 4.485 .6185 .1077 No 37 4.189 .7393 .1215 Significant Difference in Mean Scores Using SPSS, the researcher conducted independent samples t-tests for equality of means, to determine if there is any statistically significant difference between accredited and unaccredited journalism program in terms of how their curricula emphasize the ACEJMC professional values/competencies. Accreditation status was the independent variable; while curricular emphasis was the dependent variable. This sub-section will review the findings of the analysis. According to the statistical analysis, only two of the 12 professional values/competencies have a statistically significant difference in mean scores: professional ethics (demonstrating an understanding of professional ethics) and numerical concepts (applying basic numerical and statistical concepts). Levene’s test for equality of variances was used for the t-tests, and the researcher decided upon the α level of .05. With regard to ethics, equal variances could not be assumed (at .000). The t-test revealed a statistically significant difference between the mean scores of accredited undergraduate journalism programs (M = 4.79, s = .485) and unaccredited programs (M = 4.42, s = .793) in how they emphasize professional ethics in their respective curriculum, t(62.34) = 2.38, p = .02 (two-tailed), α = .05. For numerical concepts, equal variances could be assumed (at .905). The t-test again revealed a statistically significant difference between the mean scores of accredited undergraduate journalism programs (M = 3.7, s = 1.04) and unaccredited programs (M = 3.03, s 104 = 1.08) in how they emphasize numerical and statistical concepts in their respective curriculum, t(67) = 2.61, p = .011 (two-tailed), α = .05. Explanation of Curricular Emphasis As part of the study, respondents were asked to provide an explanation of how each program emphasized in its curriculum the professional values/competencies it ranked as being most important. This sub-section will review the open-ended survey responses. The survey yielded 52 total valid written responses explaining how programs emphasize top-rated values/competencies over others. Of that amount, 29 were from ACEJMC-accredited programs and 23 were from unaccredited programs. Six general themes emerged from their explanations; and many respondents listed more than one way in which they accomplished their desired curricular emphasis. Course objectives and work. 
Course objectives and coursework – including projects and assignments – focusing heavily on key competencies were the most frequently cited approach in the open-ended explanations (listed by 32 schools). For example, one ACEJMC-accredited program noted that ethics is incorporated into course syllabi. That respondent also wrote: “Writing and critical thinking -- both are emphasized in coursework throughout the program through constant practice and feedback. Freedom of speech/press -- Our students encounter the challenges of a free press serving a community every day in their reporting, editing and producing classes” (personal communication, December 1, 2014). According to an unaccredited program, “Journalism syllabi are designed to encourage enhanced thinking among the students. Writing is a major, major focus on the entire curriculum. While there is a significant focus on using and adapting to technology, it is not at the expense of other important topics” (personal communication, December 19, 2014). 105 Course offerings. Course offerings, which include both required and elective classes centered on important competencies, were the second most common explanation (listed by 18 schools). As one accredited program explained, “Writing courses build upon the fundamental skills established early in the core classes, and students are encouraged to develop from competent, precise writers to more sophisticated, thoughtful and nuanced ones as the curriculum progresses” (personal communication, December 19, 2014). Along similar lines, an unaccredited program added, “Our beginning classes emphasize writing and this is carried through all the curriculum. You cannot survive in the media field without knowing how to use technology, first, just the sheer knowledge of how it works, then secondly, being able to use technology strategically” (personal communication, November 25, 2014). Opportunities. Real-world opportunities to apply classroom learning and/or expand professional networks – whether through study abroad programs, working on a student newspaper or lab, or attending professional conferences – were the third most cited method of curricular emphasis (listed by five schools). According to one accredited program, “We make the students serve as reporters and editors for the school weekly newspaper. We encourage the students to build personal portfolios by marketing their freelance articles. We take students to writers' conferences” (personal communication, November 28, 2014). Evaluations. Four schools listed student learning evaluations as their method of emphasis. From one unaccredited program: “The most important competencies are ones that are taught and assessed across all the courses in the journalism curriculum. They are also some of the competencies on which the department basis its self evaluation each semester” (personal communication, January 6, 2015). Meanwhile, a separate accredited program stated, “We grade 106 and edit all student assignments meticulously and meet with students to explain their writing weaknesses” (personal communication, November 28, 2014). In-class discussions. Three schools specifically mentioned in-class discussions on important professional values/competencies, such as diversity and ethics. As one unaccredited program explained: “Diversity is something we talk about in all our classes, emphasizing how different people communicate and how different people receive communication. 
We want our students to think critically about media, communication and media effects, so they are able to be informed consumers of media and capable producers of media content” (personal communication, November 25, 2014). Faculty hires. And two schools pointed out that they hire faculty who are experts in the areas that they deem as being most important. One unaccredited program wrote, “Every class is infused with writing and technology. Faculty are hired, generally, for their emphases on these issues” (personal communication, November 25, 2014). Furthermore, five specific values/competencies were mentioned repeatedly in the open- ended responses. Not surprisingly, the overall top three-rated values/competencies noted earlier – writing, thinking, and ethics – were among them. Writing was mentioned the most in the explanations, by 12 accredited and six unaccredited programs. Ethics was mentioned second most, by nine accredited and four unaccredited programs. Thinking critically, creatively and/or independently was mentioned by seven accredited and three unaccredited programs. In addition, technology and diversity – although not in the top five of the overall mean scores – were frequently mentioned in the explanations. Technology was explicitly noted 10 times, by five accredited and five unaccredited programs. Diversity was addressed by six accredited programs 107 and three unaccredited programs. Interestingly, the professional competency of numerical and statistical concepts, which was found to have a statistically significant difference between accredited and unaccredited journalism programs, was not addressed at all. Assessment of Learning As noted earlier, since the mid-1980s, the focus of higher education accreditation in the U.S. has shifted toward greater accountability and student learning assessment (Ewell, 2001; Beno, 2004; Wergin, 2005, 2012). In Part III of the survey, the journalism directors and faculty administrators were asked to indicate how they assess undergraduate student learning at the conclusion of their program, and then provide a rationale for their decision to use these types of measures, as well as describe how effective the assessment measures are. This section of the chapter will report on the findings of the statistical and qualitative analysis conducted of the data collected from the survey. How is learning assessed? One of the research questions guiding this study was: what summative measures are used to assess student learning? In response to this question, the survey yielded 68 valid responses from journalism directors and faculty administrators at ACEJMC-accredited and unaccredited programs (see Figure 15). Of those responses, 17 different permutations of summative assessment measures were listed: The most common assessment measure was a capstone or professionally focused project, used by 26 journalism programs (11 accredited; 15 unaccredited) accounting for 38.2% of all responses. The second most common was a portfolio of student work, used by 11 programs (four accredited; seven unaccredited), representing 16.2% of all responses. 108 Exit survey or interview was the third most common assessment measure, used by eight programs (five accredited; three unaccredited) accounting for 11.8%. The combination of a capstone project, portfolio, and exit interview/survey was the fourth most cited assessment measure, used by four programs (two accredited; two unaccredited), representing 5.9% of all responses. 
Two programs (both unaccredited) use a thesis as their summative assessment measure, accounting for 2.9%. Two programs (both ACEJMC-accredited) rely on the combination of a capstone project, portfolio, exit survey, and an exam to measure student learning. Two programs (one accredited; one unaccredited) use a capstone and portfolio to assess student learning. Two programs (both unaccredited) explicitly indicated that they rely on course rubrics to assess learning. One unaccredited program uses a comprehensive examination, which represents 1.5% of the total. One ACEJMC-accredited program uses both a capstone and exam to assess student learning. One unaccredited program employs both a capstone and exit surveys as its measures. One ACEJMC-accredited program relies on a combination of a capstone, internship, writing exam, and pre- and post-testing to assess learning. One ACEJMC-accredited program uses the combination of a capstone, exit exam, portfolio and internship evaluations. One ACEJMC-accredited program assesses learning through an exam and portfolio. One ACEJMC-accredited program uses a course-based assessment of student work conducted by external reviewers. Additionally, one unaccredited school listed “other” on the survey, but did not specify the exact measure. Furthermore, three unaccredited programs indicated that they do not use a summative assessment measure.
Figure 15. Methods of student learning assessment (N=68).
It is important to note that 38 total programs employ capstone projects as a method of summative assessment, whether as the singular measure or in tandem with other measures. This means that 55.9% – well over half of all undergraduate journalism programs in the sample – incorporate capstones.
No Significant Difference in Assessment Practices
Having collected these data, a Pearson chi-square test was run in SPSS to analyze the relationship between the two nominal variables of (1) accreditation status and (2) types of assessment measures used. The results of the test show that there is no significant difference in how accredited versus unaccredited undergraduate journalism programs assess student learning, chi-square = 18.245, df = 16, p = .310. This means that unaccredited journalism schools are engaging in the same or similar assessment practices as the ACEJMC-accredited programs to ensure students’ educational outcomes and professional preparation. This is a very important finding of the study, and will be discussed further in Chapter Five.
Rationale for use of measures
Beyond indicating which learning assessment measures their programs use, the sample of journalism directors and faculty administrators was also asked to explain the rationale behind their program’s decision to use these measures. The vast majority of respondents noted how their assessment measures are designed to focus on real-world application; it is their hope that these practices will help to prepare students to launch their professional careers. Many also feel their assessment measures provide the best approach to determine – and demonstrate – students’ mastery of core competencies. For example, one unaccredited journalism program, which uses an internship as its undergraduate student capstone project, wrote, “It is the best way to assess learning throughout the program. Are the students successful in knowing how to do the work?
Are the students capable of carrying out assignments, with the proper skill and use of technology?” (personal communications, November 25, 2014). Among the schools that use portfolios of student work, one ACEJMC-accredited program preferred this measure because it develops versatility in professional skills. “The students are expected to graduate with a diversity of writing and editing talents, thus their portfolios must consist of reviews, features, interviews, devotions, testimonies, sports items, humor writing, investigative journalism, and editorials” (personal communications, November 28, 2014). An unaccredited school added that “A portfolio allows us to assess and offer feedback to students and gives them a platform for showcasing their work for admission to graduate school or for job applications” (personal communications, December 29, 2014). However, one accredited program contended that although portfolios provide an opportunity to evaluate competencies such as writing, appropriate use of technology, multimedia storytelling, editing, and design, other measures are still needed. The respondent wrote, “additional assignments (final papers, written exams and projects) are needed from specific courses to help adequately assess compentencies [sic] that might not be as evident in a portfolio of work such as ethics, law, copy editing, history, understanding of diversity” (personal communications, January 6, 2015). For exit surveys and interviews, an unaccredited program pointed out that this type of assessment is beneficial for collecting year-over-year data: “it's easy and there's a longitudinal aspect to it in that there are past surveys to compare to” (personal communications, December 23, 2014). As noted earlier in this section, many programs rely on more than one measure to evaluate learning. According to one accredited program that uses a capstone, portfolio, and exit survey/interview, “We use both direct and indirect measures. The capstone/portfolio help them get jobs, the survey and exam help us to gage their understanding of the 12 and what we value” (personal communications, November 25, 2014). Another accredited program using the same combination of three measures added: “Capstone projects allow us to compare data with our introductory writing/reporting course to see how students have progressed, portfolios allow us to help prepare students for job searches and gives us outside perspectives since they are reviewed by professionals, exit interviews give us anecdotal info about what works well and what doesn't within our program instruction” (personal communications, November 25, 2014). Additionally, an accredited school that administers a portfolio and exam to assess learning explained how the portfolio “provides direct measure of skills by professionals in the media fields” while the “learning outcomes posttest provides objective, direct measurement of knowledge” (personal communications, November 25, 2014). One unaccredited program that relies on course rubrics to assess learning felt that “Comprehensive, high stakes exams are counter-productive to higher education” (personal communications, November 25, 2014). Students in this program must complete an internship and portfolio, but neither one is assessed. The respondent further noted, “The whole purpose of a course grade is to assess the level of learning in a class, and we do that on multiple levels in each course.
Employers and graduate schools find our students to be desireable [sic], as indicated by our 86% placement rate” (personal communications, November 25, 2014). 113 Effectiveness of assessment measures Also as part of the survey, journalism directors and faculty administrators were asked to elaborate on how effective these measures have been, in their experiences. The survey yielded 78 total responses; 49 of which were valid and 29 no responses (see Table 2). From analyzing their open-ended answers, five distinct levels of effectiveness emerged: extremely/very effective (by 15 programs), effective (by 19 programs), fairly/moderately effective (by eight programs), marginally effective (by four programs), and not effective (by three programs). Fifteen schools stated their assessment practices were very, highly, or extremely effective. According to one ACEJMC-accredited program, their use of a student capstone project “inspires students to do their best work, since it will be assessed by outside clients and professionals, as well as their instructors. Plus it provides valuable material for their professional portfolios” (personal communication, November 25, 2014). The respondent also noted that the capstone projects are examined by actual practitioners, “either in contests or by professionals invited to campus, which provides neutral third-party assessment of our program” (personal communication, November 25, 2014). Another accredited program noted that portfolios of student work have been their top priority for nearly 20 years, and have found this assessment approach to be very effective. “When students seek internships, practicums, or full-time employment, published manuscripts really show diversity, talent, and a range of skills” (personal communication, November 28, 2014). Meanwhile, one unaccredited program using a capstone and exit survey to assess learning, wrote that the faculty “have made specific programmatic changes in response to shortcomings flagged as we reviewed exit survey data and portfolios” (personal communication, December 19, 2014). Similarly, other programs explained that their assessment practices have helped identify weaknesses in the curriculum and/or enabled them to 114 make changes to courses and assignments as part of an ongoing effort to improve the quality of education. In contrast, three programs – all unaccredited – indicated that their assessment methods were not effective. According to one school, its use of exit surveys/interviews is not effective because it “Relies on self-reported data and students often think they're better than what they are” (personal communication, November 25, 2014). Another respondent contended that the program’s use of a singular assessment method (capstone project) was not sufficient to measure student learning. “I have argued for a second measure in addition to the project reviews” (personal communication, November 25, 2014). Table 2. 
Effectiveness of student learning assessments (N=49) Effectiveness of Measure Frequency % Valid % Cumulative % Extremely/Very 15 19.2 19.2 19.2 Effective 19 24.4 24.4 43.6 Fair/Moderate 8 10.3 10.3 53.8 Marginal 4 5.1 5.1 59.0 Not effective 3 3.8 3.8 62.8 No response 29 37.2 37.2 100.0 Total 78 100.0 100.0 Student Outcomes In Part IV of the survey, the journalism directors and faculty administrators were asked to self-report key metrics of student outcomes, namely: job placement rates (in the journalism/mass communications field) for academic years ending in 2012, 2013, and 2014; graduation rates 115 (bachelor’s degree within six years) for academic years ending in 2012, 2013, and 2014; and retention rates for academic years ending in 2012, 2013, and 2014. This section of the chapter will first report the overall frequency and descriptive statistics for job placement rates, graduation rates, and retention rates. Next, it will show the differences in means between accredited and non-accreditation schools across these variables. Third, it will show the results of an independent samples t-test for equality of means, to determine if there is any statistically significant difference between accredited and unaccredited journalism programs with regard to job placement rates, graduation rates, and retention rates. Fourth, it will show the results of a bivariate Pearson correlation coefficient test to examine whether there is a relationship between programs’ student outcomes and how they rank the ACEJMC professional values/competencies. And finally, it will show the results of a one-way analysis of variance (ANOVA) test to explore the impact of journalism programs’ use of learning assessments on their student outcomes. Job Placement, Graduation, and Retention Rates One of the research questions guiding this study was: what are the measurable student outcomes at undergraduate journalism schools? To answer that question, the sample of journalism directors and faculty administrators were asked to self-report their statistics related to job placement rates (in the journalism/mass communications field), graduation rates (bachelor’s degree within six years), and retention rates. The data collected from their responses are detailed in this sub-section. 2012 job placement. For the overall job placement rate of undergraduate journalism programs in 2012, the survey yielded 17 total valid responses (Figure 16). The overall mean was 116 76.29%, with a standard deviation of 25.35. Additionally, the data’s distribution had a skewness of -1.53 and a kurtosis of 1.94. Figure 16. Job placement rate in 2012. 2013 job placement. For the overall job placement rate of undergraduate journalism programs in 2013, the survey yielded 17 total valid responses (Figure 17). The overall mean was 76.65%, with a standard deviation of 25.88. Additionally, the data’s distribution had a skewness of -1.77 and a kurtosis of 2.67. 117 Figure 17. Job placement rate in 2013. 2014 job placement. For the overall job placement rate of undergraduate journalism programs in 2014, the survey yielded 15 total valid responses (Figure 18). The overall mean was 71.2%, with a standard deviation of 26.39. Additionally, the data’s distribution had a skewness of -1.20 and a kurtosis of .91. 118 Figure 18. Job placement rate in 2014. 2012 graduation. For the overall graduation rate of undergraduate journalism programs in 2012, the survey yielded 26 total valid responses (Figure 19). The overall mean was 68.88%, with a standard deviation of 22.66. 
Additionally, the data’s distribution had a skewness of -.26 and a kurtosis of -.62. Figure 19. Graduation rate in 2012. 2013 graduation. For the overall graduation rate of undergraduate journalism programs in 2013, the survey yielded 24 total valid responses (Figure 20). The overall mean was 72.08%, with a standard deviation of 21.71. Additionally, the data’s distribution had a skewness of -.17 and a kurtosis of -1.07. Figure 20. Graduation rate in 2013. 2014 graduation. For the overall graduation rate of undergraduate journalism programs in 2014, the survey yielded 26 total valid responses (Figure 21). The overall mean was 71.62%, with a standard deviation of 21.31. Additionally, the data’s distribution had a skewness of -.14 and a kurtosis of -1.12. Figure 21. Graduation rate in 2014. 2012 retention. For the overall retention rate of undergraduate journalism programs in 2012, the survey yielded 27 total valid responses (Figure 22). The overall mean was 79.85%, with a standard deviation of 11.56. Additionally, the data’s distribution had a skewness of -.32 and a kurtosis of -1.29. Figure 22. Retention rate in 2012. 2013 retention. For the overall retention rate of undergraduate journalism programs in 2013, the survey yielded 24 total valid responses (Figure 23). The overall mean was 82.17%, with a standard deviation of 9.86. Additionally, the data’s distribution had a skewness of -.29 and a kurtosis of -.94. Figure 23. Retention rate in 2013. 2014 retention. For the overall retention rate of undergraduate journalism programs in 2014, the survey yielded 20 total valid responses (Figure 24). The overall mean was 84.6%, with a standard deviation of 8.91. Additionally, the data’s distribution had a skewness of -.60 and a kurtosis of -.08. Figure 24. Retention rate in 2014.
Accredited vs. Non-Accredited Mean Scores
Despite the low sample size, the data did reveal an emergent pattern. In all but one metric (i.e., job placement rate in 2012), unaccredited undergraduate journalism programs had higher mean scores than ACEJMC-accredited programs. As Table 3 shows, the largest differences in means were found for graduation rates in 2012 and 2013. In 2012, unaccredited programs (N = 14) reported a graduation rate of 77.29%, compared to 59.08% for accredited programs (N = 12), an 18.2 percentage-point difference. In 2013, unaccredited programs (N = 14) reported a 79.79% graduation rate, compared to 61.3% for accredited programs (N = 10), a difference of 18.49 percentage points.
Table 3.
Student outcome rates, by school accreditation status Accreditation Status N Mean Standard Deviation Standard Error Mean Job Rate 2012 Yes 7 77.714 22.4255 8.4761 No 10 75.300 28.3590 8.9679 Job Rate 2013 Yes 7 75.714 26.2152 9.9084 No 10 77.300 27.0393 8.5506 Job Rate 2014 Yes 7 66.714 23.7747 8.9860 No 8 75.125 29.5076 10.4325 Graduation Rate 2012 Yes 12 59.083 22.0514 6.3657 No 14 77.286 20.2918 5.4232 Graduation Rate 2013 Yes 10 61.300 21.1085 6.6751 No 14 79.786 19.2880 5.1549 Graduation Rate 2014 Yes 12 64.667 20.0015 5.7739 No 14 77.571 21.2629 5.6827 Retention Rate 2012 Yes 13 75.923 11.7577 3.2610 No 14 83.500 10.4863 2.8026 Retention Rate 2013 Yes 10 77.700 9.7417 3.0806 No 14 85.357 8.9495 2.3919 Retention Rate 2014 Yes 8 81.125 8.7902 3.1078 No 12 86.917 8.5542 2.4694 Significant Difference in Mean Scores To determine if there is any statistically significant difference between accredited and unaccredited journalism programs related to job placement rates, graduation rates, and retention rates, the researcher conducted an independent samples t-test for equality of means. Accreditation status was the independent variable; while student outcome rates served as the dependent variable. This sub-section will review the findings of the analysis. Among the range of student outcome metrics examined within this study, only two were found to have a statistically significant difference in mean scores: graduation rates in 2012 and in 126 2013. Levene’s test for equality of variances was used for the t-tests, and the researcher decided upon the α level of .05. For the academic year ending in 2012, equal variances could be assumed (at .721). The t- test revealed a statistically significant difference between the mean scores of accredited undergraduate journalism programs (M = 59.08, s = 22.05) and unaccredited programs (M = 77.29, s = 20.29) in their graduation rates for that year, t(24) = -2.19, p = .038 (two-tailed), α = .05. For the academic year ending in 2013, equal variances could also be assumed (at .945). Again, the t-test revealed a statistically significant difference between the mean scores of accredited undergraduate journalism programs (M = 61.3, s = 21.11) and unaccredited programs (M = 79.79, s = 19.29) in their graduation rates for that year, t(22) = -2.23, p = .037 (two-tailed), α = .05. These statistically significant differences in 2012 and 2013 graduation rates could be due to a low sample size; or, they may reflect a broader trend that took place in journalism education at that time. As noted earlier in this study, the higher education field of journalism and mass communication has experienced a decline in student enrollment for three consecutive years, spanning 2011, 2012, and 2013 (Becker, Vlad, & Simpson, 2014). Furthermore, in 2012, a major debate was taking place at the Association for Education in Journalism and Mass Communication (AEJMC) annual conference, in which the foundations that fund journalism schools issued a letter contending that programs were not adapting to the industry’s rapid changes (Basu, 2012). Their letter advocates bringing in industry expert professionals to serve as instructors and encourages schools to consider a teaching hospital-type of model, where students gain a strong foundation of journalistic principles inside the classroom, then have opportunities 127 to apply this knowledge in actual news coverage — and through this process, they serve their respective communities by producing news (Basu, 2012; Newton et al., 2012). 
This issue dominated discussions among journalism professors at the conference, according to Basu (2012). And in 2013, the proliferation of online education was spotlighted as a key trend shaping the journalism field (Barnathan, 2013; Glenn, 2013). Glenn (2013) noted that numerous journalism programs, including nationally esteemed schools, had embraced online educational delivery. For example, the Missouri School of Journalism, University of Washington, University of Florida’s College of Journalism and Communications, American University’s School of Communications, and the Poynter Institute’s NewsU all offer online degrees and/or learning programs for undergraduates, graduate students, and current professionals. In addition, that same year, the University of Texas-Austin School of Journalism’s Knight Center for Journalism in the Americas, which already had an established distance learning program, offered – for the first time ever – a MOOC (massive open online course) in infographics and data visualization, drawing 2,000 for its first class and 5,000 for its second course (Ellis, 2013; Glenn, 2013). It is not clear whether any of these factors may have played a role in producing these statistically significant differences in graduation rates; however, it is important to provide a robust context of the educational landscape during which the differences occurred. Correlations: Competencies and Student Outcomes A primary goal of this study is to explore whether the way schools prioritize the 12 professional journalism values and competencies have any correlation to their student outcomes. Having collected pertinent data via the online survey, the researcher conducted statistical analysis in SPSS to investigate the relationship between these two variables, using a bivariate Pearson correlation coefficient test. The results of the analysis found nine moderate to strong 128 correlations with statistical significance, in which p < .05. For purposes of this study, the following standard guidelines were used to determine the strength of the relationship: small ranging from r = .10 to .29; moderate ranging from r = .30 to .49; and strong ranging from r = .50 to 1.0 (Pallant, 2013). Numerical concepts. When focusing on the competency/value related to applying numerical and statistical concepts, the study led to two key correlational findings. There was a strong, negative correlation between journalism programs’ curricular emphasis on numerical concepts and their job placement rates in 2012, r = -.594, n = 16, p < .015; higher ranking of this professional value/competency was associated with a lower rate of job placement. Similarly, there was a strong, negative correlation between journalism programs’ curricular emphasis on numerical concepts and their job placement rates in 2013, r = -.520, n = 16, p < .039; again, higher ranking of numerical and statistical concepts was associated with a lower rate of job placement. Gender and other diversity. With regard to the value/competency of demonstrating an understanding of gender, race, ethnicity, sexual orientation and other forms of diversity in society, two important correlational findings emerged. There was a moderate, negative correlation between journalism programs’ curricular emphasis on gender/other diversity and their retention rates in 2012, r = -.456, n = 26, p < .019; higher ranking of this professional value/competency was associated with a lower rate of retention. 
There was also a moderate, negative correlation between journalism programs’ curricular emphasis on gender/other diversity and their retention rates in 2013, r = -.438, n = 23, p < .037; again, higher ranking of gender/other diversity was associated with a lower rate of retention. 129 Evaluate. For the value/competency related to critically evaluating one’s own work and that of others for accuracy, fairness, clarity, style, and grammar, the study revealed several correlational findings. There was a strong, positive correlation between journalism programs’ curricular emphasis on evaluation skills and their retention rates in 2013, r = .614, n = 23, p < .002; higher ranking of this professional value/competency was associated with a higher rate of retention. There was a strong, positive correlation between journalism programs’ curricular emphasis on evaluation skills and their retention rates in 2014 as well, r = .574, n = 19, p < .010; again, higher ranking of evaluation skills was associated with a higher rate of retention. In 2012, there was a moderate, positive correlation between journalism programs’ curricular emphasis on evaluation skills and their retention rates, r = .428, n = 26, p < .029. In addition, there was a moderate, positive correlation between journalism programs’ curricular emphasis on evaluation skills and their graduation rates in 2012, r = .410, n = 25, p < .042; higher ranking of this professional value/competency was associated with a higher rate of graduation. Similarly, there was a moderate, positive correlation between programs’ curricular emphasis on evaluation skills and their graduation rates in 2013, r = .438, n = 23, p < .037. No Significance: Assessments and Student Outcomes To explore the impact of journalism programs’ use of learning assessments on their student outcomes, a one-way analysis of variance (ANOVA) test was conducted. The results of the ANOVA produced no statistically significant difference at the p < .05 level. Thus, the researcher could not reject the null hypothesis wherein the population means are equal. Conclusion The journalism program directors and faculty administrators who took part in this study provided a breadth of valuable data and rich explanations about journalism education and 130 programmatic accreditation. Their participation offered insights that will help expand the body of knowledge on, and the broader understanding of, curricular priorities, learning assessments practices, and student outcomes in the journalism field. The study led to several important and noteworthy findings that would be of interest to academics and practitioners alike. For example, the results showed considerable overlap in how accredited and non-accredited programs prioritize professional competencies in their respective curriculum; while also finding a statistically significant difference with regard to ethics and numerical/statistical concepts. The study also showed that capstone projects were the most commonly used summative learning assessment by journalism schools (either as a singular measure or in combination with other measures). Perhaps even more importantly, there was no significant difference found in how accredited versus unaccredited undergraduate journalism programs assess student learning. Moreover, an unexpected pattern emerged among the reported student outcomes. 
In all but one metric (job placement rate in 2012), unaccredited undergraduate journalism programs had higher mean scores than ACEJMC-accredited programs; and a statistically significant difference was found for graduation rates in 2012 and in 2013. Additionally, analysis of the data revealed nine moderate to strong correlations between programs’ rankings of professional journalism values and competencies and their student outcomes. Among them, there were strong, negative correlations between programs’ curricular emphasis on numerical concepts and their job placement rates in 2012 and 2013; and there were strong, positive correlations between programs’ curricular emphasis on evaluation skills and their retention rates in both 2013 and 2014. The study’s results and their implications for practice will be discussed in more detail in the final chapter. 131 CHAPTER FIVE: DISCUSSION Purpose of the Study The purpose of this study is to explore the relationship between curricular priorities/learning assessment practices and quantifiable student outcomes at journalism programs. This study takes place at a pivotal time in the field, amid declining student enrollment (Becker, Vlad, & Simpson, 2014), debates nationally on the utility and direction of journalism education (Basu, 2012), and shifts in the employment landscape (Jurkowitz, 2014). It also contributes toward helping fill a gap in the literature, where according to Seamon (2012), the effectiveness of accredited and non-accredited programs is compared by measuring their graduates’ abilities. This study surveyed a national sample of accredited and non-accredited journalism program directors and faculty administrators. Individuals from ACEJMC-accredited programs were asked to explain the reasons why their programs chose to pursue accreditation. All survey recipients were asked to prioritize the 12 ACEJMC professional competencies using a Likert- type scale, then indicate their use, if any, of a summative assessment (e.g. capstone project, comprehensive examination, thesis, portfolio, exit survey, etc.), and also to self-report student outcomes (such as graduation rates, retention rates, and job placement rates). The study examines whether the way they prioritize these competencies and/or their use of summative learning assessments has any correlation to their student outcomes. This study also attempts to determine whether any differences exist between accredited and non-accredited programs. Guiding this study are the following research questions: Which competencies do journalism schools prioritize as being most important? 132 What are the measurable student outcomes at undergraduate journalism schools (i.e. job placement rates, graduation rates, retention rates)? How do schools assess student learning, at the conclusion of the program? (What summative measure is used? How effective is it, in their experience?) Are there differences between accredited and non-accredited programs? Discussion of Key Findings and Research Recommendations This exploratory study touched on a number of different issues and constructs across journalism education and programmatic accreditation. Through analysis of the data, several important and interesting findings were made which have the potential to inform the decisions of both educators and accreditors in the field of journalism. And due to the broad nature of this inquiry, many paths for future research may be recommended, expanding upon these findings. Why Do Schools Pursue Accreditation? 
A total of 35 ACEJMC-accredited schools participated in the survey, and each of them provided their reasons to pursue or maintain programmatic accreditation. Many of the responses were well within the realm of expectation, such as: ensuring program quality, improvement, and/or accountability; reputation, stature, or prestige; recruitment; external validation; student opportunities; and professional connections. However, two responses in particular, stood out as unique. In the first case, a couple of respondents wrote accreditation served as a form of bureaucratic leverage against university administration, to fight off unreasonable demands and to strengthen their argument to keep class sizes down, upgrade facilities, and gain support for their program. These responses seem to underscore a larger struggle that many colleges and universities face today, in which leaders must balance competing demands for limited financial resources and somehow preserve educational quality. And perhaps even more interesting, one 133 respondent openly stated that the program is considering abandoning journalism accreditation. The person contended that the accreditation standards stifle educational innovation and inhibit the type of experimentation which may help advance journalism education. This position is also reflective of many professionals who feel that journalism education isn’t keeping pace with industry changes (Finberg, Krueger, & Klinger, 2013). Future research. Although one school’s decision certainly does not constitute a trend, this response nonetheless provides insight into what educators see as the shortcomings of the journalism accreditation currently. One possible avenue for future research would be to gather rich, qualitative data from other programs that were once accredited, but no longer are. It would be very valuable to look closely at what, if any, effective educational and curricular innovations these schools have implemented since their decision. Particularly if themes begin emerging, then there may be some aspects that other journalism schools – or maybe even the accrediting body – might want to consider adopting. Which competencies do schools prioritize as most important? Overall, the three ACEJMC professional values and competencies that received the highest mean scores were writing (4.9), critical/creative thinking (4.76), and professional ethics (4.59). Conversely, the three lowest were history of communications (3.343), numerical and statistical concepts (3.348), and the application of theories in presenting images and information (3.426). The top three are certainly understandable, given how essential writing, critical/creative thinking, and ethics are in the professional arena. Although, it is somewhat surprising that, in this study, applying technology was sixth out of 12 competencies, especially since a digital focus has become increasingly important among journalism schools. Even in the earlier Henderson and Christ (2014) study, applying technology was ranked third. With regard to the least 134 prioritized competencies, history of communications and numerical/statistical concepts appear among the bottom three in both this study and in the Henderson and Christ (2014) study. This finding raises questions about their value and whether they truly need to be included in the standards. 
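For readers who wish to trace the ordering summarized above, the fragment below simply sorts the overall mean scores reported in Chapter Four; the dictionary is only a convenient container for those already-reported values, not a data file from the study, and the labels are shorthand for the full ACEJMC wording.

# Overall mean scores reported in Chapter Four (rounded to two decimal places)
overall_means = {
    "writing": 4.90,
    "critical/creative thinking": 4.76,
    "professional ethics": 4.59,
    "freedom of speech and press": 4.39,
    "evaluating one's own and others' work": 4.38,
    "tools and technologies": 4.33,
    "cultural diversity": 3.94,
    "gender and other forms of diversity": 3.79,
    "research": 3.76,
    "theories of images and information": 3.43,
    "numerical and statistical concepts": 3.35,
    "history of communications": 3.34,
}

ranked = sorted(overall_means.items(), key=lambda item: item[1], reverse=True)
print("Most emphasized:", [name for name, score in ranked[:3]])
print("Least emphasized:", [name for name, score in ranked[-3:]])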
In addition, in respondents’ open-ended answers about how their programs emphasize certain values/competencies over others in their curricula, diversity was explicitly addressed by six accredited programs and three unaccredited programs. These programs either offered courses dedicated to the subject or integrated it into their class discussions. But that level of importance is not reflected in their prioritization, as cultural diversity ranked seventh, while gender and other diversity ranked eighth overall. Differences. In addition, the results showed considerable overlap in how accredited and non-accredited programs prioritize the professional competencies in their respective curricula; both classifications of schools were in agreement on the three most important and their order: writing, creative/critical thinking, and ethics. Both also ranked numerical/statistical concepts and history of communications among the bottom three. As previously reported in this study, the literature has shown there is little observed difference between accredited and non-accredited journalism programs (Carroll, 1977; Masse & Popovich, 2007; Blom & Davenport, 2012). Now, the results of this study reveal that there is also much similarity in how programs emphasize professional values and competencies in their respective curricula. Future research. Having established these similarities, future research might examine the costs – monetary, personnel, and time allocation – associated with programmatic accreditation and determine whether programs feel these costs are justified. It would also be valuable to delve deeper into the reasons why so many programs choose to remain unaccredited. Furthermore, only two statistically significant differences were found among the mean scores of accredited and non-accredited programs — ethics and numerical/statistical concepts. This finding also lends itself to further inquiry. Future research can focus on exploring why these two values/competencies differ significantly. What are the factors, curricular or otherwise, that lead to this result?
How do schools assess student learning?
Another focus of the study was on undergraduate journalism programs’ use of summative learning assessments. Respondents indicated that their programs relied on measures such as a capstone project, portfolio of student work, exit survey or interview, thesis, comprehensive examination, writing exam, course rubrics, a course-based assessment of student work by external reviewers, internship evaluations, or some combination of one or more, to assess student learning. The data showed that capstone projects were the most commonly used summative learning assessment — either as a singular measure or in tandem with other measures. Thirteen programs currently use more than one assessment type. Meanwhile, one program noted that it switched from one measure, which it did not find effective, to another approach. Because learning assessment is central to ensuring the preparedness of graduates, this sort of experimentation contributes not only to advancing education but also to promoting program quality. Differences. Even more important, there was no statistically significant difference found in how accredited versus unaccredited undergraduate journalism programs assess student learning. This means that unaccredited journalism schools are engaging in the same or similar assessment practices as the ACEJMC-accredited programs to ensure students’ educational outcomes and professional preparation.
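Both kinds of group comparisons discussed in this section and in Chapter Four (Levene's test followed by an independent samples t-test on competency mean scores, and a Pearson chi-square test of independence on assessment practices) were conducted in SPSS. As a rough, non-authoritative illustration of the underlying procedures, the sketch below reproduces them with scipy on invented data; the variable names and values are assumptions for the example only.

import pandas as pd
from scipy import stats

# Hypothetical 1-5 ratings of one competency, split by accreditation status
accredited = [5, 5, 4, 5, 5, 5, 4, 5, 5, 4]
unaccredited = [5, 4, 3, 5, 4, 5, 3, 4, 5, 3]

# Levene's test (mean-centered, as in SPSS) decides whether equal variances can be assumed
lev_stat, lev_p = stats.levene(accredited, unaccredited, center="mean")
equal_var = lev_p >= .05
t_stat, t_p = stats.ttest_ind(accredited, unaccredited, equal_var=equal_var)
print(f"Levene p = {lev_p:.3f}; t = {t_stat:.2f}, p = {t_p:.3f} "
      f"({'pooled' if equal_var else 'Welch'} t-test, alpha = .05)")

# Pearson chi-square test of independence: accreditation status by type of assessment measure
survey = pd.DataFrame({
    "accredited": ["Yes", "Yes", "Yes", "No", "No", "No", "Yes", "No"],
    "measure": ["capstone", "portfolio", "capstone", "capstone",
                "exit survey", "portfolio", "exit survey", "capstone"],
})
contingency = pd.crosstab(survey["accredited"], survey["measure"])
chi2, p, dof, expected = stats.chi2_contingency(contingency)
print(f"chi-square = {chi2:.3f}, df = {dof}, p = {p:.3f}")

The study's other analyses could be sketched along the same lines: scipy's stats.pearsonr for the bivariate correlations and stats.f_oneway for the one-way ANOVA.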
Future research. The ACEJMC, in its Principles of Accreditation, specifically addresses learning assessment by stating, "The Council seeks to promote student learning and encourages experimentation and innovation. The Council evaluates curricula and instruction in the light of evidence and expects programs seeking accreditation to assess students' attainment of professional values and competencies" (ACEJMC, 2009). Since no significant difference was found, it may be concluded that the accreditation process does not necessarily lead to more effective assessment practices or to more experimentation and innovation in methods of assessment. Future replication studies, perhaps with a larger sample size, would be beneficial in determining whether this major finding holds. Moreover, the widespread use of capstones as a method of assessment seems to merit its own further inquiry. What guidelines do schools use to evaluate projects? Is the evaluation conducted by faculty, practitioners, or a combination of both? Do schools keep track of student performance in capstones? A study that addresses these and similar questions may contribute to producing a set of best practices that other schools may opt to follow.

What are schools' measurable student outcomes?

Another research question guiding this study was: what are the measurable student outcomes at undergraduate journalism schools? Journalism directors and faculty administrators were asked to self-report their statistics for job placement rates (in the journalism/mass communications field), graduation rates (bachelor's degree within six years), and retention rates. The overall mean scores for each metric were:

Job placement rate in 2014: 71.2% (SD = 26.39, N = 15)
Job placement rate in 2013: 76.65% (SD = 25.88, N = 17)
Job placement rate in 2012: 76.29% (SD = 25.35, N = 17)
Graduation rate in 2014: 71.62% (SD = 21.31, N = 26)
Graduation rate in 2013: 72.08% (SD = 21.71, N = 24)
Graduation rate in 2012: 68.88% (SD = 22.66, N = 26)
Retention rate in 2014: 84.6% (SD = 8.91, N = 20)
Retention rate in 2013: 82.17% (SD = 9.86, N = 24)
Retention rate in 2012: 79.85% (SD = 11.56, N = 27)

Differences. These student outcome statistics alone serve as valuable data for educators, administrators, accreditors, students, and even practitioners. A truly unexpected and interesting finding, however, was the pattern that emerged among these reported student outcomes when the data were disaggregated by accreditation status. In all but one metric (job placement rate in 2012), unaccredited undergraduate journalism programs had higher mean scores than ACEJMC-accredited programs; the full results are listed in Table 3 of Chapter Four. In addition, a statistically significant difference was found between accredited and unaccredited programs for graduation rates in 2012 and in 2013.

Future research (differences). These findings present a wide range of directions for potential research. For example, future research could examine which factors account for unaccredited programs outperforming their accredited counterparts nearly across the board. What were the reasons for the statistically significant differences in the 2012 and 2013 graduation rates?
As noted earlier in Chapter Four, these years coincide with enrollment declines (Becker, Vlad, & Simpson, 2014), a major national debate over how journalism education should adapt to the industry's rapid changes (Basu, 2012), and the proliferation of online education (Barnathan, 2013; Glenn, 2013). Although this context was provided, it does not pinpoint the determinant factors, which may include a breadth of reasonable possibilities such as the economic climate. Additionally, a subsequent study could be conducted to corroborate the overall results reported here, given that the sample sizes were relatively low.

Correlations. Statistical analysis of the data was conducted to address the study's primary goal of exploring the relationship between curricular priorities/learning assessment practices and quantifiable student outcomes. The results of a bivariate Pearson correlation test revealed nine moderate to strong correlations between programs' rankings of professional journalism values and competencies and their student outcomes. Most notably, there were strong, negative correlations between journalism programs' curricular emphasis on numerical concepts and their job placement rates in 2012 (r = -.594, n = 16, p < .015) and in 2013 (r = -.52, n = 16, p < .039), meaning that a higher ranking of this professional value/competency was associated with a lower job placement rate. There were also moderate, negative correlations between an emphasis on gender/other diversity and retention rates in 2012 and 2013. The only other strong correlations were positive ones between programs' curricular emphasis on evaluation skills and their retention rates in both 2014 (r = .574, n = 19, p < .010) and 2013 (r = .614, n = 23, p < .002), meaning that higher rankings of this professional value/competency were associated with higher retention rates. In addition, with regard to the relationship between learning assessment practices and quantifiable student outcomes, it is worth reiterating that a one-way analysis of variance (ANOVA) found no statistically significant difference at the p < .05 level.

Future research (correlations). If one were to assume that all 12 ACEJMC professional values and competencies are equally essential components of a sound and effective curriculum, then it would make sense for any correlation to move in the positive direction. Under this presumption, any negative correlation seems odd; after all, why would focusing on any value or competency correspond with a drop in a student outcome rate? One possible explanation stems from the earlier findings on the programs' rankings of the values and competencies. According to the results of this study, not all professional values/competencies are regarded as equal. Individual journalism programs do indeed prioritize and emphasize some values/competencies over others. Thus, if a lower-ranked competency, such as numerical/statistical concepts or gender/other diversity, is given heavier emphasis in a curriculum, it is feasible that the time spent on another, typically higher-ranked competency is truncated to some degree, which may then lower a student outcome rate. Further research is needed to test this hypothesis.
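As an illustration of the kind of bivariate analysis described above, the sketch below computes a Pearson correlation between a hypothetical competency ranking and a program-level retention rate. The values, variable names, and use of SciPy are assumptions made purely for demonstration; they are not the study's data or analysis software.

```python
from scipy import stats

# Hypothetical program-level data: each position pairs a program's
# 1-5 importance rating for one competency with its retention rate (%).
rankings        = [5, 4, 4, 3, 5, 2, 4, 3, 5, 4]
retention_rates = [88, 84, 85, 78, 90, 72, 83, 80, 91, 86]

# Bivariate Pearson correlation: r gives direction and strength,
# p tests whether the correlation differs significantly from zero.
r, p = stats.pearsonr(rankings, retention_rates)
print(f"r = {r:.3f}, n = {len(rankings)}, p = {p:.3f}")
```

A positive r of this sort would correspond to the evaluation-skills finding (greater emphasis, higher retention), while a negative r would mirror the numerical-concepts result for job placement.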
Limitations

Several limitations should be taken into consideration for this study. First, the design of the survey allowed respondents to skip questions and move on to the next section without fully completing the form. As a result, most of the responses had one or more unanswered fields. Second, the design of the survey was not optimally suited to gathering data on Question 13. In Part III: Assessment of Learning, Question 13 asked, "How do you assess undergraduate student learning at the conclusion of the program?" The possible answers included: capstone or professionally focused project; comprehensive examination; thesis; portfolio; exit survey or interview; other; or none. These answers were presented in multiple-choice format but did not allow respondents to select more than one option. Several respondents used the "Other" option to explain in writing that they presently use multiple measures to assess student learning. As a result, the researcher coded each of these individual explanations as a new level prior to running further statistical analysis. Third, the student outcome numbers (job placement rates, graduation rates, and retention rates) were self-reported by each institution. No independent verification was conducted to confirm these statistics. Three schools noted that their student numbers were estimates or approximations; these were included in the data set for subsequent statistical analysis. In addition, relatively few responses were submitted in the student outcomes section of the survey. Lastly, several respondents commented that the wording of some questions was either unclear or not aligned with how they typically track student data. For example, some schools stated that they track graduation and retention rates by entry year rather than by exit year. Meanwhile, 21 schools stated that, for one or more of the student outcome metrics, their program and/or their university did not track these numbers or did not know them.

Implications for Practice

This exploratory study, with its emphasis on curricular priorities, learning assessment practices, and measurable student outcomes, adds to the breadth of research in the area of journalism accreditation. Not only does it seek to enhance understanding of journalism education and programmatic accreditation, but its findings also have implications for practice for different constituencies, at various levels. First, the data from this study may be relevant to the field's accrediting organization, the ACEJMC, as it monitors the effectiveness of its policies and standards. This study has particular implications with regard to the ACEJMC's 12 professional values and competencies. Because journalism programs prioritize some professional values and competencies over others, and subsequently emphasize these values and competencies more strongly in their curricula, it seems logical to at least consider a multi-tiered system of competencies. This notion was first proposed by Henderson and Christ (2014):

"In a multi-tiered system, programs would be given the latitude to identify those three or more competencies they most emphasize in their programs. Those that were emphasized would be assessed for understanding and application and even mastery (perhaps using direct measures), and those that were covered but not emphasized would be assessed for familiarity or awareness (perhaps using indirect measures). This would allow programs to focus on their areas of strength and distinction" (pp. 10-11).
The present study and its findings support the argument put forth by Henderson and Christ (2014). With a tiered system, perhaps students' skill sets would develop in better alignment with the real-world demands of today's professional market. In addition, the measurable student outcome statistics may be of particular interest to the ACEJMC. Given that unaccredited schools reported higher mean scores than accredited schools in job placement rates in 2013 and 2014, in graduation rates in 2012, 2013, and 2014, and in retention rates in 2012, 2013, and 2014, the ACEJMC may want to examine the factors behind this pattern. These findings may also warrant the exploration of new initiatives or efforts to raise schools' performance in these key metrics.

Second, this study certainly has implications for journalism educators from both accredited and unaccredited programs. Although the sample size was low, the statistical analysis of the data yielded nine moderate to strong correlations between programs' rankings of professional journalism values/competencies and their student outcomes. In particular, there were strong, negative correlations between programs' curricular emphasis on numerical concepts and their job placement rates in 2012 and in 2013, and strong, positive correlations between curricular emphasis on evaluation skills and retention rates in 2013 and 2014. These findings may help inform administrators' decisions on the structure and design of journalism curricula. For example, in light of these findings, it may be worthwhile to experiment with course objectives and/or assignments by scaling back a numerical/statistical focus and devoting more time to teaching students to critically evaluate their own work and others' work for accuracy, fairness, clarity, style, and grammar, and then measuring whether the change has an impact on student outcomes. Also, given the relatively low number of responses for the student outcome metrics, these results might prompt educators to implement processes to track these statistics. As noted previously, 21 schools stated that, for one or more of the student outcome metrics, their program and/or their university did not track these numbers or did not know them. There is no question that ascertaining these numbers can be complicated and cumbersome; but there is also no question that these statistics provide great value to programs and students. These numbers are a measurable form of accountability, which can help evaluate the effectiveness of a program and its educational practices.

Finally, this study carries implications for journalism practitioners. As demonstrated at numerous points throughout the literature, the journalism profession is closely tied to journalism education. The profession has a vested interest in the ability of journalism programs to prepare graduates to enter a constantly evolving industry. This study, in a way, serves as a snapshot of undergraduate journalism education. Therefore, these findings offer a vantage point from which practitioners can assess the state of journalism education and determine the proper next steps to help advance the field.

Conclusion

Seventy years ago, the Accrediting Council on Education in Journalism and Mass Communications was established, and a framework for evaluating and accrediting journalism programs was put into place (ACEJMC, 2012).
Today, the field faces unique challenges with declining enrollments (Becker, Vlad, & Simpson, 2014), a rapidly altering and digitizing employment landscape (Jurkowitz, 2014), and debate over the direction and effectiveness of the programs themselves (Basu, 2012; Finberg, Krueger, & Klinger, 2013). The purpose of this study is to explore the relationship between curricular priorities/learning assessment practices and quantifiable student outcomes at undergraduate programs. It seeks to expand the body of research on journalism accreditation and education at this pivotal moment. The study provides needed data on emphases of the curriculum, various uses of learning assessments, and measurable student outcomes. But in addition, it identifies correlations and parses out differences between accredited and unaccredited schools, so that educators, accreditors, and practitioners alike can gain a more robust understanding of the field and have a solid foundation of research-based evidence to inform their decisions going forward. And hopefully, the findings from this study will open up the prospect of new pathways for future and deeper research in these areas. 144 REFERENCES Adelman, C., & Silver, H. (1990). Accreditation: The American experience. London, England: Council for National Academic Awards. Aghion, P., Dewatripont, M., Hoxby, C. M., Mas-Colell, A., & Sapir, A. (2008). Higher aspirations: An agenda for reforming European Universities. Brussels: Bruegel. Accrediting Council on Education in Journalism and Mass Communication. (2012). ACEJMC accrediting standards. Retrieved from: http://www2.ku.edu/~acejmc/PROGRAM/STANDARDS.SHTML Accrediting Council on Education in Journalism and Mass Communication. (2009). Principles of accreditation. Retrieved from: http://www2.ku.edu/~acejmc/PROGRAM/PRINCIPLES.SHTML Accrediting Council on Education in Journalism and Mass Communication. (2014). ACEJMC accreditation status 2013 - 2014. Retrieved from: http://www2.ku.edu/~acejmc/STUDENT/PROGLIST.SHTML Accrediting Council on Education in Journalism and Mass Communication. (N.D.a). Values, objectives, and purposes of accreditation in journalism and mass communications. Retrieved from: http://www2.ku.edu/~acejmc/PROGRAM/ACCREDBASICS.SHTML Accrediting Council on Education in Journalism and Mass Communication. (N.D.b). The mechanisms of accreditation. Retrieved from: https://www2.ku.edu/~acejmc/PROGRAM/MECHANISMS.SHTML Accrediting Council on Education in Journalism and Mass Communication. (N.D.c). Frequently asked questions concerning accreditation. Retrieved from: http://www2.ku.edu/~acejmc/FAQS.SHTML 145 Alderman, B. B., & Milrod, L. (2009). Use of the eleven professional values and competencies to evaluate interns: a case study. ASJMC Insights, 19-24. American Accounting Association, Committee on Consequences of Accreditation. (1977). Report of the Committee on Consequences of Accreditation, 52, 165, 167- 177. American Association for Higher Education. (1992). 9 principles of good practice for assessing student learning. North Kansas City, MO: American Association for Higher Education. American Association for Higher Education. (1997). Assessing impact: Evidence and action. Washington, DC: American Association for Higher Education. American Council of Trustees and Alumni. (2007). Why accreditation doesn’t work and what policymakers can do about it. Washington, DC: American Council of Trustees and Alumni. 
Retrieved from https://www.goacta.org/publications/downloads/Accreditation2007Final.pdf American Council on Education, Task Force on Accreditation. (2012). Assuring Academic Quality in the 21 st Century: Self-regulation in a New Era. Retrieved from http://www.acenet.edu/news-room/Documents/Accreditation-TaskForce-revised- 070512.pdf American Medical Association. (1971). Accreditation of health educational programs. Part I: Staff working papers. Washington, DC: American Medical Association. Association of American Colleges and Universities. (2007). College Learning for the New Global Century. Washington, DC: Association of American Colleges and Universities. Retrieved from: http://www.aacu.org/leap/documents/GlobalCentury_final.pdf Association for Education in Journalism and Mass Communication. (2014). Journalism and mass communication directory. Columbia, S.C.: Association for Education in Journalism 146 and Mass Communication. Astin, A.W. (2014, February 18). Accreditation and autonomy. Inside Higher Ed. Retrieved from http://www.insidehighered.com/views/2014/02/18/accreditation-helps-limit-government- intrusion-us-higher-education-essay Baker, R. L. (2002). Evaluating quality and effectiveness: Regional accreditation principles and practices. The Journal of Academic Librarianship, 28(1), 3-7. Banta, T. W. (1993). Summary and conclusion: Are we making a difference? In T. W. Banta (Ed.), Making a difference: Outcomes of a decade of assessment in higher education (pp. 357-376). San Francisco, CA: Jossey-Bass. Bardo, J. W. (2009). The impact of the changing climate for accreditation on the individual college or university: Five trends and their implications. New Directions for Higher Education, 145, 47-58. Barnathan, J. (2013). ICFJ president names five trends in journalism education. International Center for Journalists. Retrieved from: http://www.icfj.org/news/icfj-president-names- five-trends-journalism-education Barzun, J. (1993). The American university: How it runs, where it is going. Chicago, IL: University of Chicago Press. Basu, K. (2012). Debate grows over journalism education. Inside Higher Ed. Retrieved from: https://www.insidehighered.com/news/2012/08/10/debate-grows-over-journalism- education Becker, L. B., Kosicki, G. M., Engleman, T., & Viswanath, K. (1993). Finding work and getting paid: Predictors of success in the mass communications job market. Journalism & Mass Communication Quarterly, 70(4), 919-933. 147 Becker, L. B., Vlad, T., & Simpson, H. A. (2013). 2012 annual survey of journalism and mass communication enrollments: enrollments decline for second year in a row. Journalism & Mass Communication Educator, 68(4), 305-334. Becker, L. B., Vlad, T., & Simpson, H. A. (2014). 2013 annual survey of journalism and mass communication enrollments: enrollments decline for third consecutive year. Journalism & Mass Communication Educator, 69(4), 349-365. Becker, L. B., Vlad, T., Simpson, H., & Kalpen, K. (2013). 2012 annual survey of journalism & mass communication graduates. James M. Cox Jr. Center for International Mass Communication Training and Research, Grady College of Journalism & Mass Communication, University of Georgia. Retrieved from: http://www.grady.uga.edu/annualsurveys/Graduate_Survey/Graduate_2012/Grdrpt2012m ergedv2.pdf Beno, B. A. (2004). The role of student learning outcomes in accreditation quality review. New Directions for Community College, 236, 65-72. Bensimon, E. M. (2005). 
Closing the achievement gap in higher education: An organizational learning perspective. New Directions for Higher Education, 131, 99-111. Bernhard, A. (2011). Quality assurance in an international higher education area: A case study approach and comparative analysis. Wiesbaden, Germany: VS Verlag für Sozialwissenschaften. Biswas, M., & Izard, R. (2009). 2009 assessment of the status of diversity education in journalism and mass communication programs. Journalism & Mass Communication Educator, 64(4), 378-394. Bitter, M. E., Stryker, J. P, & Jens, W. G. (1999). A preliminary investigation of the choice to 148 obtain AACSB accounting accreditation. Accounting Educators’ Journal, XI, 1-15. Blauch, L. E. (1959). Accreditation in higher education. Washington, DC: United States Government Printing Office. Retrieved from http://babel.hathitrust.org/cgi/ pt?id=mdp.39015007036083;view=1up;seq=1 Bloland, H. G. (2001). Creating the Council for Higher Education Accreditation (CHEA). Phoenix, AZ: Oryx Press. Blom, R., Davenport, L. D., & Bowe, B. J. (2012). Reputation Cycles The Value of Accreditation for Undergraduate Journalism Programs. Journalism & Mass Communication Educator, 67(4), 392-406. Bresciani, M. J. (2006). Outcomes-based academic and co-curricular program review: A compilation of institutional good practice. Sterling, VA: Stylus. Britt, B., & Aaron, L. (2008). Nonprogrammatic accreditation: Programs and attitudes. Radiologic Technology, 80(2), 123-129. Brittingham, B. (2008, September/October). An uneasy partnership: Accreditation and the federal government. Change, 32-38. Brittingham, B. (2009). Accreditation in the United States: How did we get to where we are? New Directions for Higher Education , 145, 7-27. Brittingham, B. (2012). Higher education, accreditation, and change, change, change: What’s teacher education to do? In M. LaCelle-Peterson & D. Rigden (Eds.), Inquiry, evidence, and excellence: The promise and practice of quality assurance (59-75). Washington, DC: Teacher Education Accreditation Council. Brown, H. (2013, September). Protecting Students and Taxpayers: The Federal Government’s 149 Failed Regulatory Approach and Steps for Reform. American Enterprise Institute, Center on Higher Education Reform. Retrieved from http://www.aei.org/files/2013/09/27/- protecting-students-and-taxpayers_164758132385.pdf Burke, J. C. & Associates. (2005). Achieving accountability in higher education: Balancing public, academic, and market demands. San Francisco, CA: Jossey- Bass. Burke, J. C. & Minassians, H. P. (2002). The new accountability: From regulation to results. New Directions for Institutional Research, 2002(116), 5-19. Cabrera, A. F., Colbeck, C. L., & Terenzini, P. T. (2001). Developing performance indicators for assessing classroom teaching practices and student learning: The case of engineering. Research in Higher Education, 42(3), 327-352. Carey, K. (2009, September/October). College for $99 a month. Washington Monthly. Retrieved from http://www.washingtonmonthly.com Carey, K. (2010). Death of a university. In K. Carey & M. Schneider (Eds.), Accountability in American higher education. New York, NY: Palgrave Macmillan. Carroll, B. A. (1977). Accredited, Non-Accredited News Curricula Are Similar. Journalism Educator, 32(1), 42-3. Castañeda, L. (2011). Disruption and innovation: Online learning and degrees at accredited journalism schools and programs. Journalism & Mass Communication Educator, 66(4), 361-373. Chambers, C. M. (1983). "Council on Postsecondary Education." P. 
289-314 in Understanding Accreditation, edited by K. E. Young, C. M. Chambers, and H. R. Kells. San Francisco: Jossey-Bass. Chernay, G. (1990). Accreditation and the role of the Council on Postsecondary Accreditation. 150 Washington, DC: Council on Postsecondary Accreditation. Christ, W. G. (2004). Assessment, media literacy standards, and higher education. American Behavioral Scientist, 48(1), 92-96. Christ, W. G., & Henderson, J. J. (2014). Assessing the ACEJMC professional values and competencies. Journalism & Mass Communication Educator, 1-13, 1077695814525408. Cohn, D. (2013). Practitioners' perception of entry-level and graduating journalists versus academic requirements of ACEJMC (Doctoral dissertation). Wilmington University, Delaware. Commission on the Future of Higher Education (Spellings Commission Report) (2006). U.S. Department of Education. Retrieved from http://www2.ed.gov/about/bdscomm/list/hiedfuture/reports.html. Council for Higher Education Accreditation (2006). CHEA survey of recognized accrediting organizations: Providing information to the public. Washington, DC: Author. Council for Higher Education Accreditation. (2010). The value of accreditation. Washington, DC: Council for Higher Education Accreditation. Retrieved from: http://www.chea.org/pdf/Value%20of%20US%20Accreditation%2006.29.2010_buttons. pdf Council for Higher Education Accreditation. (2012). The CHEA initiative final report. Washington, DC: Council for Higher Education Accreditation. Retrieved from: http://www.chea.org/pdf/TheCHEAInitiative_Final_Report8.pdf 151 Council for Higher Education Accreditation, Directory. (2014). 2013-2014 Directory of CHEA- Recognized Organizations. Retrieved from http://www.chea.org/pdf/2013- 2014_Directory_of_CHEA_Recognized_Organizations.pdf Council for Higher Education Accreditation, Degree Mills. (2014). Degree mills. Retrieved from: http://www.chea.org/degreemills/default.htm Council of Regional Accrediting Commissions. (n.d.). A guide for institutions and evaluators. Retrieved from http://www.sacscoc.org/pdf/handbooks/GuideForInstitutions.pdf Cremonini, L., Epping, E., Westerheijden, D., & Vogelsang, K. (2012). Impact of Quality Assurance on Cross-Border Higher Education. Enschede, Netherlands: Center for Higher Education Policy Studies. Creswell, J. W. (2009). Research design: Qualitative, quantitative, and mixed methods approaches (3rd ed.). Los Angeles, CA: Sage Publications, Inc. Cusatis, C., & Martin-Kratzer, R. (2009). Assessing the State of Math Education in ACEJMC- accredited and Non-accredited Undergraduate Journalism Programs. Journalism & Mass Communication Educator, 64(4), 355-377. Daoust, M. P., Wehmeyer, W., & Eubank, E. (2006). Valuing an MBA: Authentic outcome measurement made easy. Unpublished manuscript. Retrieved from http://www.momentumbusinessgroup.com/resourcesValuingMBA.pdf Davenport, C. A. (2000). Recognition chronology. Retrieved from http://www.aspausa.org/documents/Davenport.pdf Davis, C. O. (1932). The North central association of colleges and secondary schools: aims, organization, activities. Chicago, IL: The Association. Retrieved from 152 http://babel.hathitrust.org/cgi/pt?id=mdp.39015031490645;view=1up;seq=15 Davis, C. O. (1945). A history of the North Central Association of Colleges and Secondary Schools 1895-1945. Ann Arbor, MI: The North Central Association of Colleges and Secondary Schools. 
Retrieved from http://library.usc.edu/uhtbin/cgisirsi/x/0/0/5?Searchdata1=1175460{CKEY} Dewatripont, M., Sapir, A., Van Pottelsberghe, B., & Veugelers, R. (2010). Boosting innovation in Europe. Bruegel Policy Contribution 2010/06. Dickeson, R. C. (2006). The need for accreditation reform. Issue paper (The Secretary of Education’s Commission on the Future of Higher Education). Washington, DC. Retrieved from http://www2.ed.gov/about/bdscomm/list/hiedfuture/reports/dickeson.pdf Dickeson, R. (2009). Recalibrating the accreditation-federal relationship. Washington, D.C.: University of Northern Colorado. Dickey, F., & Miller, J. (1972). Federal involvement in nongovernmental Accreditation. Educational Record 53(2), 139. Dill, D. (2007). Quality Assurance in Higher Education: Practices and Issues. University of North Carolina at Chapel Hill. Dill, D. D., Massy, W. F., Williams, P. R., & Cook, C. M. (1996, September/October). Accreditation and academic quality assurance: Can we get there from here? Change 28(5), 16-24. DiMaggio, P. J., & Powell, W. W. (1983). The iron cage revisited: Institutional isomorphism and collective rationality in organizational fields. American Sociological Review, 48(2), 147- 160. 153 Dimmick, J., Chen, Y., & Li, Z. (2004). Competition between the Internet and traditional news media: The gratification-opportunities niche dimension. The Journal of Media Economics, 17(1), 19-33. Doerr, A. H. (1983). Accreditation: Academic boon or bane. Contemporary Education, 55(1), 6-8. Donald, R. (2006). Direct measures: Portfolios. Assessing media education: A resource handbook for educators and administrators, 421-438. Routledge. Dougherty, K. J., Hare, R., & Natow, R. S. (2009). Performance accountability systems for community colleges: Lessons for the voluntary framework of accountability for community colleges. Community College Research Center. Columbia University, NYC: Teachers College. Dowd, A. C. (2003). From access to outcome equity: Revitalizing the democratic mission of the community college. Annals of the American Academy of Political and Social Science, 586, 92-119. Dowd, A. C., & Grant, J. L. (2006). Equity and efficiency of community college appropriations: The role of local financing. The Review of Higher Education, 29(2), 167-194. Driscoll, A., & De Noriega, D. C. (2006). Taking ownership of accreditation: Assessment processes that promote institutional improvement and faculty engagement. Sterling, VA: Stylus Publishing. Eaton, J. S. (2003a). Is accreditation accountable? The continuing conversation between accreditation and the federal government. Washington, DC: Council for Higher Education Accreditation. 154 Eaton, J. S. (2009). Accreditation in the United States. New Directions for Higher Education, 145, 79-86. doi:10.1002/he.337 Eaton, J. S. (2010). Accreditation and the federal future of higher education. Academe, 96(5), 21-24. Eaton, J. S. (2012a). An overview of U.S. accreditation. Council for Higher Education Accreditation. Retrieved from: http://www.chea.org/pdf/Overview%20of%20US%20Accreditation%202012.pdf Eaton, J. S. (2012b). What future for accreditation: The challenge and opportunity of the accreditation – federal government relationship. In M. LaCelle-Peterson & D. Rigden (Eds.), Inquiry, evidence, and excellence: The promise and practice of quality assurance (77-88). Washington, DC: Teacher Education Accreditation Council. Retrieved from http://www.teac.org/wp- content/uploads/2012/03/Festschrift-Book.pdf Eaton, J.S. (2013a, June 13). 
Accreditation and the next reauthorization of the Higher Education Act. Inside Accreditation with the President of CHEA, 9(3). Retrieved from http://www.chea.org/ia/IA_2013.05.31.html Eaton, J.S. (2013b, November-December). The changing role of accreditation: Should it matter to governing boards? Trustee. Retrieved from http://agb.org/trusteeship/2013/11/changing-role-accreditation-should-it-matter- governing-boards El-Khawas, E. (2001). Accreditation in the USA: Origins, developments and future prospects. Paris, France: International Institute for Educational Planning. Ellis, J. (2013). When j-school goes online: putting journalism education in front of a massive audience. Neiman Journalism Lab, Harvard. Retrieved from: 155 http://www.niemanlab.org/2013/01/when-j-school-goes-online-putting-journalism- education-in-front-of-a-massive-audience/ Ewell, P. T. (1984). The self-regarding institution: Information for excellence. Boulder, CO: National Center for Higher Education Management Systems. Ewell, P. T. (1993). The role of states and accreditors in shaping assessment practice. In T. W. Banta (Ed.), Making a difference: Outcomes of a decade of assessment in higher education (pp. 339-356). San Francisco, CA: Jossey-Bass. Ewell, P. T. (2001). Accreditation and student learning outcomes: A proposed point of departure. Washington, DC: Council for Higher Education Accreditation. Retrieved from http://www.chea.org/award/StudentLearningOutcomes2001.pdf Ewell, P. T. (2002). An emerging scholarship: A brief history of assessment. In T. W. Banta (Ed.), Building a scholarship of assessment (pp. 3-25). San Francisco, CA: Jossey-Bass. Ewell, P. T. (2005). Can assessment serve accountability? It depends on the question. In J. C. Burke (Ed.), Achieving accountability in higher education: Balancing public, academic, and market demands (pp. 78-105). San Francisco, CA: Jossey-Bass. Ewell, P. T. (2008). Assessment and accountability in America today: Background and context. New Directions for Institutional Research, 2008(S1), 7–17. Ewell, P. T. (2008). U.S. accreditation and the future of quality assurance: A tenth anniversary report from the Council for Higher Education Accreditation. Washington, DC: Council for Higher Education Accreditation. Ewell, P. T. (2009). Assessment, accountability, and improvement: Revisiting the tension. Champaign, IL: National Institute for Learning Outcomes Assessment. Retrieved from http://www.learningoutcomeassessment.org/documents/PeterEwell_005.pdf 156 Ewell, P. T. (2010). Twenty years of quality assurance in higher education: What’s happened what’s different? Quality in higher education, 16(2), 173-175. Ewell, P. T. (2012). Disciplining peer review: Addressing some deficiencies in U.S. accreditation practices. In M. LaCelle-Peterson & D. Rigden (Eds.), Inquiry, evidence, and excellence: The promise and practice of quality assurance (89- 105). Washington, DC: Teacher Education Accreditation Council. Retrieved from http://www.teac.org/wp- content/uploads/2012/03/Festschrift-Book.pdf Ewell, P. T., Wellman, J. V., & Paulson, K. (1997). Refashioning accountability: Toward a coordinated system of quality assurance for higher education. Denver, CO: Education Commission of the States. Finberg, H., Krueger, V., & Klinger, L. (2013). State of Journalism Education 2013. Poynter Institute. Retrieved from http://www.newsu.org/course_files/StateOfJournalismEducation2013.pdf Fink, A. (2013). How to conduct surveys: a step-by-step guide (5th ed.). 
Thousand Oaks, CA: Sage Publications, Inc. Finkin, M. W. (1973). Federal reliance on voluntary accreditation: The power to recognize as the power to regulate. Journal of Law and Education, 2(3), 339-375. Finn, Jr. C. E. (1975, Winter). Washington in academe we trust: Federalism and the universities: The balance shifts. Change, 7(10), 24-29, 63. Floden, R. E. (1980). Flexner, accreditation, and evaluation. Educational Evaluation and Policy Analysis, 2(2), 35-46. doi:10.3102/01623737002002035. Florida State Postsecondary Education Planning Commission. (1995). A review of specialized accreditation. Tallahassee, FL: Florida State Postsecondary Education 157 Planning Commission. Fuller, M. B., & Lugg, E. T. (2012). Legal precedents for higher education accreditation Journal of Higher Education Management, 27(1). Retrieved from http://www.aaua.org/images/JHEM_-_Vol_27_Web_Edition_.pdf#page=53 Fuse, K., & Lambiase, J. J. (2010). Alumni perceptions of the ACE JMC's 11 professional values and competencies. Southwestern Mass Communication Journal, 25(2), 41-54. Gaston, P. L. (2014). Higher Education Accreditation: How it’s changing, why it must. Sterling, VA. Stylus Publishing. Gillen, A., Bennett, D. L, & Vedder, R. (2010). The inmates running the asylum?: An analysis of higher education accreditation. Washington, DC: Center for College Affordability and Productivity. Retrieved from http://www.centerforcollegeaffordability.org/uploads/Accreditation.pdf Global University Network for Innovation. (2007). Higher education in the world 2007: Accreditation for quality assurance: What is at stake? New York, NY: Palgrave Macmillan. Glenn, A. A. (2013). Trending in journalism education: online degrees; entrepreneurship; mobile design. PBS Mediashift. Retrieved from: http://www.pbs.org/mediashift/2013/01/trending- in-journalism-education-online-degrees-entrepreneurship-mobile-design008/ Grady, D. A. (2006). Indirect measures: Internships, careers, and competitions. Assessing media education: A resource handbook for educators and administrators, 349-371. Routledge. Graham, P. A., Lyman, R. W., & Trow, M. (1995). Accountability of colleges and universities: An essay. New York, NY: Columbia University. Gruson, E. S., Levine, D. O, & Lustberg, L. S. Issues in accreditation, eligibility and 158 institutional quality. Cambridge, MA: Sloan Commission on Government and Higher Education. Hagerty, B. M. K., & Stark, J. S. (1989). Comparing educational accreditation standards in selected professional fields. The Journal of Higher Education, 60(1), 1-20. Harcleroad, F. F. (1976). Educational auditing and accountability. Washington, DC: The Council on Postsecondary Accreditation. Harcleroad, F. F. (1980). Accreditation: History, process, and problems. Washington, DC: American Association for Higher Education. Hart Research Associates. (2009). Learning and assessment: Trends in undergraduate education (A survey among members of the Association of American College and Universities). Washington, DC: Author. Retrieved from https://www.aacu.org/membership/documents/2009MemberSurvey_Part1.pdf Harvey, L. (2004). The power of accreditation: Views of academics. Journal of Higher Education Policy and Management, 26(2), 207-223. Henderson, J. J., & Christ, W. G. (2014). Benchmarking ACEJMC competencies: what it means for assessment. Journalism & Mass Communication Educator, 1-14, 1077695814525407. Ikenberry, S. O. (2009). Where do we take accreditation? Washington, DC: Council for Higher Education Accreditation. Jackson, R. 
S., Davis, J. H., & Jackson, F. R. (2010). Redesigning regional accreditation: The impact on institutional planning. Planning for Higher Education, 38(4), 9-19. Jacobs, B. & Van der Ploeg, F. (2006, July). How to reform higher education in Europe. Economic Policy, 535-592. 159 Jaschik, S., & Ledgerman, D. (2014). The 2014 Inside Higher Ed survey of college & university presidents. Washington, DC: Inside Higher Ed. Jurkowitz, M. (2014). The growth in digital reporting: what it means for journalism and news consumers. State of the News Media 2014. Pew Research Center. Retrieved from: http://www.journalism.org/2014/03/26/the-growth-in-digital-reporting/ Kells, H. R. (1976). The reform of regional accreditation agencies. Educational Record 57(1), 24-28. Kells, H. R., & Kirkwood, R. (1979). Institutional self-evaluation processes. The Educational Record, 60(1), 25-45. Kennedy, V. C., Moore, F. I., & Thibadoux, G. M. (1985). Determining the costs of self- study for accreditation: A method and a rationale. Journal of Allied Health,14(2), 175-182. Kraeplin, C., & Criado, C. A. (2005). Building a case for convergence journalism curriculum. Journalism & Mass Communication Educator, 60(1), 47-56. Kren, L., Tatum, K. W., & Phillips, L. C. (1993). Separate accreditation of accounting programs: An empirical investigation. Issues in Accounting Education, 8(2), 260- 272. Kuh, G. D. (2010). Risky business: Promises and pitfalls of institutional transparency. Change: The Magazine of Higher Learning, 39(5), 30-35. Kuh, G. D., & Ewell, P. T. (2010). The state of learning outcomes assessment in the United States. Higher education management and policy, 22(1), 1-20. Kuh, G., & Ikenberry, S. (2009). More than you think, less than we need: Learning outcomes assessment in American higher education. Champaign, IL: National Institute for Learning Outcomes Assessment. Retrieved from http://www.learningoutcomeassessment.org/documents/niloafullreportfinal2.pdf 160 Learned, W. S., & Wood, B. D. (1938). The student and his knowledge: A report to the Carnegie Foundation on the results of the high school and college examinations of 1928, 1930, and 1932. New York, NY: The Carnegie Foundation for the Advancement of Teaching. Lee, M. B., & Crow, S. D. (1998). Effective collaboration for the twenty-first century: The Commission and its stakeholders (Report and Recommendations of the Committee on Organizational Effectiveness and Future Directions). Chicago, IL: North Central Association of Colleges and Schools. Leef, G. C., & Burris, R. D. (2002). Can college accreditation live up to its promise? Washington, DC: American Council of Trustees and Alumni. Retrieved from https://www.goacta.org/publications/downloads/CanAccreditationFulfillPromise.pdf Lind, C. J., & McDonald, M. (2003). Creating and assessment culture: a case study of success and struggles. In S. E. Van Kollenburg (Ed.), A collection of papers on selfstudy and institutional improvement, 3. Promoting student learning and effective teaching, pp.21- 23. (ERIC Document Reproduction Service No. ED 476 673). Retrieved from http://files.eric.ed.gov/fulltext/ED476673.pdf#page=22 Lingwall, A. (2010). Rigor or remediation? Exploring writing proficiency and assessment measures in journalism and mass communication programs. Journalism & Mass Communication Educator, 65(3-4), 283-302. Maki, P. L. (2010). Assessing for learning: Building a sustainable commitment across the institution (2nd ed.). Sterling, VA: StylusPublishing. Masse, M. H., & Popovich, M. N. (2007). 
Accredited and nonaccredited media writing programs are stagnant, resistant to curricular reform, and similar. Journalism & Mass 161 Communication Educator, 62(2), 141-160. McLendon, M. K., Hearn, J. C., & Deaton, R. (2006). Called to account: Analyzing the origins and spread of state performance-accountability policies for higher education. Educational Evaluation and Policy Analysis, 28(1), 1-24. Middaugh, M. F. (2012). Introduction to themed PHE issue on accreditation in higher education. Planning for Higher Education, 40(3), 6-7. Middle States Commission on Higher Education. (2009). Highlights from the Commission’s first 90 years. Philadelphia, PA: Middle States Commission on Higher Education. Retrieved from http://www.msche.org/publications/90thanniversaryhistory.pdf Miles, J. A. (2012). Jossey-Bass business and management reader: Management and organization theory. Hoboken, NJ: Wiley. Moltz, D. (2010). Redefining community college success. Inside Higher Ed. Retrieved from http://www.insidehighered.com/news/2011/06/06/u_s_panel_drafts_and_debates_ measures_to_gauge_community_college_success. Moore, R. C. (2006) The capstone course. Assessing media education: A resource handbook for educators and administrators, 349-371. Routledge. National Advisory Committee on Institutional Quality and Integrity. (2012, April). Report to the U.S. Secretary of Education, Higher Education Act Reauthorization, Accreditation Policy Recommendations. Retrieved from http://www2.ed.gov/about/bdscomm/list/naciqi- dir/2012-spring/teleconference-2012/naciqi-final-report.pdf Neal, A. D. (2008). Dis-accreditation. Academic Questions, 21(4), 431-445. Newton, E., Bell, C., Ross, B., Philipps, M., Shoemaker, L., & Haas, D. (2012). An open letter to America's university presidents. Knight Foundation. Retrieved from: 162 http://www.knightfoundation.org/press-room/other/open-letter-americas-university- presidents/ Obama, B. (2013a, February 12). State of the Union Address. The White House. Retrieved from: http://www.whitehouse.gov/the-press-office/2013/02/12/president-barack-obamas-state- union-address Obama, B. (2013b, February 12). The President’s Plan for a Strong Middle Class and a Strong America. The White House. Retrieved from http://www.whitehouse.gov/sites/default/files/uploads/sotu_2013_blueprint_embargo.pdf Obama, B. (2013c, August 22). Fact Sheet on the President’s Plan to Make College More Affordable: A Better Bargain for the Middle Class. The White House. Retrieved from http://www.whitehouse.gov/the-press-office/2013/08/22/fact-sheet-president-s-plan- make-college-more-affordable-better-bargain- Orlans, H. O. (1974). Private accreditation and public eligibility: Volumes 1 and 2. Retrieved from ERIC database. (ED097858) Orlans, H. O. (1975). Private accreditation and public eligibility. Lexington, MA: D.C. Heath and Company. Pallant, J. (2013). SPSS survival manual (5th ed.). New York, NY: McGraw-Hill International. Perrault, A. H.; Gergory, V. L.; & Carey, J. O. (2002). The integration of assessment of student learning outcomes with teaching effectiveness. Journal of Education for Library and Information Science, 43(4), 270-282. Pigge, F. L. (1979).Opinions about accreditation and interagency cooperation: The results of a nationwide survey of COPA institutions. Washington, DC: Committee on Postsecondary Education. 163 Procopio, C. H. (2010). Differing administrator, faculty, and staff perceptions of organizational culture as related to external accreditation. Academic Leadership Journal, 8(2), 1-15. 
Provezis, S. J. (2010). Regional accreditation and learning outcomes assessment: Mapping the territory (Doctoral dissertation, University of Illinois at Urbana-Champaign). Raessler, K. R. (1970). An analysis of state requirements for college or university accreditation in music education. Journal of Research in Music Education, 18(3), 223-233. Ratcliff, J. L. (1996). Assessment, accreditation, and evaluation of higher education in the US. Quality in Higher Education, 2(1), 5-19. Reidlinger, C. R., & Prager, C. (1993). Cost-benefit analyses of accreditation. New Directions for Community Colleges, 83, 39-47. Reinardy, S., & Crawford, J. (2013). Assessing the assessors: JMC administrators critique the nine ACEJMC standards. Journalism & Mass Communication Educator, 68(4), 335-347. Rhodes, T. L. (2012). Show me the learning: Value, accreditation, and the quality of the degree. Planning for Higher Education, 40(3), 36-42. Ross, F. G. J., Stroman, C. A., Callahan, L. F., Dates, J. L., Egwu, C., & Whitmore, E. (2007). Final report of a national study on diversity in journalism and mass communication education, Phase II. Journalism & Mass Communication Educator, 62(1), 9-26. Schermerhorn, J. W., Reisch, J. S., & Griffith, P. J. (1980). Educator perceptions of accreditation. Journal of Allied Health 9(3), 176-182. Scriven, M. (2000). Evaluation ideologies. In D. L. Stufflebeam, G. F. Madaus, & T. Kellaghan (Eds.), Evaluation models (250-278). Boston, MA: Kluwer Academic Publishers. Seamon, M. C. (2010). The value of accreditation: An overview of three decades of 164 research comparing accredited and unaccredited journalism and mass communication programs. Journalism & Mass Communication Educator, 65(1), 9-20. Shaw, R. (1993). A backward glance: To a time before there was accreditation. North Central Association Quarterly, 68(2), 323-335. Shibley, L. R., & Volkwein, J. F. (2002, June). Comparing the costs and benefits of re- accreditation processes. Paper presented at the annual meeting of the Association for Institutional Research, Toronto, Ontario, Canada. Singh, A. B. (2005). A report on faculty perceptions of students’ information literacy competencies in journalism and mass communication programs: The ACEJMC survey. College & Research Libraries, 66(4), 294-311. Smith, V. B., & Finney, J. E. (2008, May/June). Redesigning regional accreditation: An interview with Ralph A. Wolff. Change, 18-24. Southern Association of Colleges and Schools. (2007). The Quality Enhancement Plan. Retrieved from http://www.sacscoc.org/pdf/081705/QEP%20Handbook.pdf Spangehl, S. D. (2012). AQIP and accreditation: Improving quality and performance. Planning for Higher Education, 40(3), 6-7. Stempel, G. H., Hargrove, T., & Bernt, J. P. (2000). Relation of growth of use of the Internet to changes in media use from 1995 to 1999. Journalism & Mass Communication Quarterly, 77(1), 71-79. Stensaker, B., & Harvey, L. (2006). Old wine in new bottles? A comparison of public and private accreditation schemes in higher education. Higher Education Policy, 19, 65-85. Stone, G. (1989). Measurement of excellence in newspaper writing courses. Journalism Educator, 44(4), 4-19. 165 Subervi, F., & Cantrell, T. H. (2007). Assessing Efforts and Policies Related to the Recruitment and Retention of Minority Faculty at Accredited and Non-accredited Journalism and Mass Communication Programs. Journalism & Mass Communication Educator, 62(1), 27-46. Sursock, A. & Smidt, H. (2010). Trends 2010: A decade of change in European higher education. 
Brussels: European University Association. Tanner, A., & Duhé, S. (2005). Trends in mass media education in the age of media convergence: Preparing students for careers in a converging news environment. SIMILE: Studies In Media & Information Literacy Education, 5(3), 1-12. Tanner, A., Forde, K. R., Besley, J. C., & Weir, T. (2012). Broadcast journalism education and the capstone experience. Journalism & Mass Communication Educator, 67(3) 219-233. Trivett, D. A. (1976). Accreditation and institutional eligibility. Washington, DC: American Assocaition for Higher Education. Uehling, B. S. (1987a). Accreditation and the institution. North Central Association Quarterly, 62(2), 350-360. UNESCO (2005). Guidelines for Quality Provision in Cross-border Higher Education. Paris: UNESCO. USDE Test. (2006). A test of leadership: Charting the future of US higher education. A report of the commission appointed by Secretary of Education Margaret Spellings. Washington, DC: USDE. Retrieved from http://www2.ed.gov/about/bdscomm/list/hiedfuture/-reports/final-report.pdf Van Damme, D. (2000). Internationalization and quality assurance: Towards worldwide accreditation? European Journal for Education Law and Policy, 4, 1-20. 166 Van der Ploeg, F. & Veugelers, R., (2008). Towards evidence-based reform of European universities. CESifo Economic Studies, 54(2), 99-120. Volkwein, J. F., Lattuca, L. R., Harper, B. J., & Domingo, R. J. (2007). Measuring the impact of professional accreditation on student experiences and learning outcomes. Research in Higher Education, 48(2), 251-282. Walker, J. J. (2010). A contribution to the self-study of the postsecondary accreditation protocol: A critical reflection to assist the Western Association of Schools and Colleges. Paper presented at the WASC Postsecondary Summit, Temecula, CA. Warner, W. K. (1977). Accreditation influences on senior institutions of higher education in the western accrediting region: An assessment. Oakland, CA: Western Association of Schools and Colleges. Weir, T. (2010). Pretest/posttest assessment: the use of an entry/exit exam as an assessment tool for accredited and non-accredited journalism and mass communication programs. Journalism & Mass Communication Educator, 65(2), 123-141. Weissburg, P. (2008). Shifting alliances in the accreditation of higher education: self- regulatory organizations. Dissertation Abstracts International, DAI-A 70/02, August 2009. ProQuest ID 250811630. Wergin, J. F. (2005). Waking up to the importance of accreditation. Change, 37(3) 35-41. Wergin, J. F. (2012). Five essential tensions in accreditation. In M. LaCelle-Peterson & D. Rigden (Eds.), Inquiry, evidence, and excellence: The promise and practice of quality assurance (27-38). Washington, DC: Teacher Education Accreditation Council. Western Association of Schools and Colleges. (1998). Eight perspectives on how to focus the accreditation process on educational effectiveness. Oakland, CA: Accrediting 167 commission for Senior Colleges and Universities WASC. Western Association of Schools and Colleges (WASC) (2002). Guide to using evidence in the accreditation process: A resource to support institutions and evaluation teams. Retrieved from www.wascweb.org/senior/Evidence_Guide.pdf Western Association of Schools and Colleges. (2009). WASC resource guide for ‘good practices’ in academic program review. 
Retrieved from http://www.wascsenior.org/findit/files/forms/WASC_Program_Review_Resource _Guide_Sept_2009.pdf Western Association of Schools and Colleges’ Accrediting Commission for Community and Junior Colleges (WASC-ACCJC) (2011). Retrieved from http://www.accjc.org. Westerheijden, D. F., Stensaker, B., & Rosa, M. J. (2007). Quality assurance in higher education: Trends in regulation, translation and transformation. Dordrecht, The Netherlands: Springer. White House. (2013, February 12). The President's plan for a strong middle class and a strong America. Accessed from http://www.whitehouse.gov/the-press- office/2013/02/12/president-s-plan-strong-middle-class-and-strong-america Wiedman, D. (1992). Effects on academic culture of shifts from oral to written traditions: The case of university accreditation. Human Organization, 51(4), 398-407. Williams, L. (2010). Assessment of student learning through journalism and mass communication internships. Applied Learning in Higher Education, 2, 23-38. Willis, C. R. (1994). The cost of accreditation to educational institutions. Journal of Allied Health, 23, 39-41. Wolff, R. A. (1990, June 27-30). Assessment 1990: Accreditation and renewal. Paper 168 presented at The Fifth AAHE Conference on Assessment in Higher Education, Washington, DC. Wolff, R. A. (2005). Accountability and accreditation: Can reforms match increasing demands?. In J. C. Burke (Ed.), Achieving accountability in higher education: Balancing public, academic, and market demands (pp. 78-105). San Francisco, CA: Jossey-Bass. Wood, A. L. (2006). Demystifying accreditation: Action plans for a national or regional accreditation. Innovative Higher Education, 31(1), 43-62. doi: 10.1007/s10755-006- 9008-6 Woolston, P. J. (2012). The Costs of Institutional Accreditation: A study of direct and indirect costs. (Doctoral dissertation, University of Southern California). World Bank (2002). Constructing knowledge societies: New challenges for tertiary education. Washington, DC: World Bank. Wriston, H. M. (1960). The futility of accrediting. The Journal of Higher Education, 31(6), 327-329. Yung-chi Hou, A. (2014). Quality in cross-border higher education and challenges for the internationalization of national quality assurance agencies in the Asia-Pacific region: the Taiwanese experience. Studies in Higher Education, 39(1), 135-152. Zis, S., Boeke, M., & Ewell, P. (2010). State policies on the assessment of student learning outcomes: Results of a fifty-state inventory. Boulder, CO: National Center for Higher Education Management Systems (NCHEMS). 169 APPENDIX A SURVEY COVER EMAIL Hello, I am a doctoral student in the University of Southern California’s Rossier School of Education, in the Doctor of Education (Ed.D.) program. I am conducting a research study which aims to explore the relationship between curricular priorities/learning assessment practices and quantifiable student outcomes at undergraduate journalism programs. The study procedure involves an online Qualtrics survey, and is anticipated to take approximately 20 minutes to complete. The purpose of the survey is to gather data from directors and/or faculty administrators at U.S. journalism schools offering undergraduate degree programs. You are eligible to participate in this study if you are a director or a faculty administrator at a U.S. journalism school offering undergraduate degree programs. We ask that the person most knowledgeable about the journalism program's curriculum please respond. Your participation is voluntary. 
Responses will be anonymous; findings will be reported only in aggregate form. If you have any questions about the research study, please contact me at bdimapin@usc.edu.

Thank you for your time,

Benedict Dimapindan
Rossier School of Education
University of Southern California
Doctor of Education Candidate

For more information about the study, please click on the following link: Certified USC Research Information Sheet (https://usc.qualtrics.com/CP/File.php?F=F_e4Fm6W5HtDoBiw5)

Follow this link to the Survey: ${l://SurveyLink?d=Take the Survey}
Or copy and paste the URL below into your internet browser: ${l://SurveyURL}
Follow the link to opt out of future emails: ${l://OptOutLink?d=Click here to unsubscribe}

APPENDIX B

SURVEY INSTRUMENT

Journalism Accreditation Survey

Q20 PART I: Demographic Information

Q2 Is your journalism program accredited by the ACEJMC?
Yes (1)
No (2)

Q3 What degree(s) does your program offer in journalism?
Bachelor's degree (1)
Master's degree (2)
Ph.D./Doctorate (3)
Other (please list) (4) ____________________

Q4 What was your program's undergraduate student enrollment size in academic year 2013-2014?

Q5 Are you the accreditation liaison at your school?
Yes (1)
No (2)

Q6 Your official title:

Q7 Month and year of decision on last ACEJMC accreditation review (if applicable):
Month (1)
Year (2)

Q10 Were you the accreditation liaison at the time of the school's last ACEJMC accreditation review?
Yes (1)
No (2)

Q11 If your program is ACEJMC accredited, what are your reasons for choosing to pursue or maintain accreditation?

Q21 PART II: Professional Values and Competencies

Q12 According to the ACEJMC, all journalism graduates should have acquired 12 core values and competencies as a result of completing the curriculum at their respective schools. Listed below are those 12 professional values and competencies. Please indicate from 1 to 5 (with 5 being the most important, and 1 being the least important) the values and competencies your program emphasizes most:

Scale: Least Important (1), (2), Average (3), (4), Most Important (5)

Freedom of Speech: Understand and apply U.S. principles and laws of freedom of speech and press, as well as receive instruction in and understand the range of systems of freedom of expression around the world, including the right to dissent, to monitor and criticize power, and to assemble and petition for redress of grievances (1)

History: Demonstrate an understanding of the history and role of professionals and institutions in shaping communications (2)

Gender Diversity: Demonstrate an understanding of gender, race, ethnicity, sexual orientation and, as appropriate, other forms of diversity in domestic society in relation to mass communications (3)

Cultural Diversity: Demonstrate an understanding of the diversity of peoples and cultures and of the significance and impact of mass communications in a global society (4)

Theories: Understand concepts and apply theories in the use and presentation of images and information (5)

Ethics: Demonstrate an understanding of professional ethical principles and work ethically in pursuit of truth, accuracy, fairness and diversity (6)

Thinking: Think critically, creatively and independently (7)

Research: Conduct research and evaluate information by methods appropriate to the communications professions in which they work (8)

Writing: Write correctly and clearly in forms and styles appropriate for the communications professions, audiences and purposes they serve (9)

Evaluate: Critically evaluate their own work and that of others for accuracy and fairness, clarity, appropriate style and grammatical correctness (10)

Numbers: Apply basic numerical and statistical concepts (11)

Technology: Apply current tools and technologies appropriate for the communications professions in which they work, and to understand the digital world (12)

Q19 For the values/competencies that you ranked as most important, please explain how your program emphasizes these in your curriculum over others.

Q22 PART III: Assessment of Learning

Q13 How do you assess undergraduate student learning at the conclusion of the program?
Capstone or professionally focused project (1)
Comprehensive examination (2)
Thesis (3)
Portfolio (4)
Exit survey or interview (5)
Other (6) ____________________
None (7)

Q14 Please explain the rationale behind your program's decision to use this measure.

Q15 In your experience, how effective has this measure been? Please elaborate.

Q23 PART IV: Student Outcomes

Q16 What is the job placement rate (in the journalism/mass communications field) for bachelor's degree graduates of your program?
For Academic Year ending 2012 (1)
For Academic Year ending 2013 (2)
For Academic Year ending 2014 (3)
Optional Comments: (4)

Q17 What is the graduation rate (bachelor's degree within 6 years) at your program?
For Academic Year ending 2012 (1)
For Academic Year ending 2013 (2)
For Academic Year ending 2014 (3)
Optional Comments: (4)

Q18 What is the retention rate at your program?
For Academic Year ending 2012 (1)
For Academic Year ending 2013 (2)
For Academic Year ending 2014 (3)
Optional Comments: (4)
ABSTRACT
This mixed methods study focused on the curriculum, learning assessments, and student outcomes (job placement, graduation, and retention rates) at undergraduate journalism programs. The goal was to explore the relationship between curricular priorities/learning assessment practices and measurable student outcomes. The study also sought to determine whether any differences exist between ACEJMC-accredited and unaccredited programs. The study led to several noteworthy findings. The results showed considerable overlap in how accredited and unaccredited programs prioritize professional competencies in their respective curricula.
Asset Metadata

Creator: Dimapindan, Benedict de la Merced (author)
Core Title: Priorities and practices: a mixed methods study of journalism accreditation
School: Rossier School of Education
Degree: Doctor of Education
Degree Program: Education (Leadership)
Publication Date: 06/23/2015
Defense Date: 04/28/2015
Publisher: University of Southern California (original), University of Southern California. Libraries (digital)
Tag: accreditation, curriculum, Education, Journalism, learning assessments, OAI-PMH Harvest, student outcomes
Format: application/pdf (imt)
Language: English
Contributor: Electronically uploaded by the author (provenance)
Advisor: Keim, Robert G. (committee chair), Tobey, Patricia Elaine (committee member), Woolston, Paul J. (committee member)
Creator Email: bdimapin@usc.edu, ben.dimapindan@gmail.com
Permanent Link (DOI): https://doi.org/10.25549/usctheses-c3-576860
Unique Identifier: UC11301283
Identifier: etd-Dimapindan-3509.pdf (filename), usctheses-c3-576860 (legacy record id)
Legacy Identifier: etd-Dimapindan-3509.pdf
Dmrecord: 576860
Document Type: Dissertation
Rights: Dimapindan, Benedict de la Merced
Type: texts
Source: University of Southern California (contributing entity), University of Southern California Dissertations and Theses (collection)
Access Conditions: The author retains rights to his/her dissertation, thesis or other graduate work according to U.S. copyright law. Electronic access is being provided by the USC Libraries in agreement with the a...
Repository Name: University of Southern California Digital Library
Repository Location: USC Digital Library, University of Southern California, University Park Campus MC 2810, 3434 South Grand Avenue, 2nd Floor, Los Angeles, California 90089-2810, USA