AN EMPIRICAL EXAMINATION OF CUSTOMER PERCEPTIONS OF SERVICE QUALITY UTILIZING THE EXTENDED SERVICE QUALITY MODEL UNDER THE CONDITION OF MULTIPLE SUBUNIT SERVICE PROVIDERS

by

Richard Alan Hagy

A Dissertation Presented to the FACULTY OF THE GRADUATE SCHOOL, UNIVERSITY OF SOUTHERN CALIFORNIA, in Partial Fulfillment of the Requirements for the Degree DOCTOR OF PHILOSOPHY (PUBLIC ADMINISTRATION)

August 2001

Copyright 2001 Richard Alan Hagy

UMI Number: 3054744. Copyright 2001 by Hagy, Richard Alan. All rights reserved. UMI Microform 3054744. Copyright 2002 by ProQuest Information and Learning Company. All rights reserved. This microform edition is protected against unauthorized copying under Title 17, United States Code. ProQuest Information and Learning Company, 300 North Zeeb Road, P.O. Box 1346, Ann Arbor, MI 48106-1346.

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.

UNIVERSITY OF SOUTHERN CALIFORNIA
The Graduate School
University Park
Los Angeles, California 90089-1695

This dissertation, written by Richard A. Hagy under the direction of his Dissertation Committee, and approved by all its members, has been presented to and accepted by The Graduate School, in partial fulfillment of requirements for the degree of DOCTOR OF PHILOSOPHY.

Dean of Graduate Studies

Date: August 7, 2001

DISSERTATION COMMITTEE: [signatures] Chairperson

ACKNOWLEDGMENTS

A long-term project such as this could not have been accomplished without the support of a group of very special people. First and foremost, I would like to thank my wife, Karyn, and our two wonderful sons, Brandon and Brett, for their unwavering support and encouragement.
I can only hope that they feel a sense of ownership in this degree because they truly deserve it! To my committee members, Robert Myrtle, Robert Stallings, and William G. Tierney, thank you for providing the academic foundation that enabled me to complete this dissertation. The quality of their scholarship, in all dimensions, makes them exemplars for the professoriate. Special thanks to Bob Myrtle for chairing my committee and for being such a wonderful mentor throughout the Ph.D. process. I also wish to thank Terry Cooper and Richard Chase for their contributions to my academic program and for their intellectual insights reflected in this study. Finally, I wish to thank my colleagues in the USC Business Affairs and Student Affairs divisions, without whom this study would not have been possible. Special thanks to my colleague and friend, A. Bingham Cherrie, for his thoughtful support at all the right moments. And last, but certainly not least, I must extend a heartfelt thank you to Thomas Moran. As a boss, Tom demands critical thinking daily, and this certainly made the Ph.D. program somewhat easier. As a friend and mentor, Tom provided me with invaluable insights into the Ph.D. process itself, which, in the final analysis, helped me to achieve this wonderful goal!

TABLE OF CONTENTS

ACKNOWLEDGMENTS
LIST OF FIGURES
LIST OF TABLES
ABSTRACT

CHAPTER ONE: INTRODUCTION
  Background of the Study
  Defining Characteristics of Service Firms
  Purpose of the Study
  Research Questions
  Significance of the Study
  Delimitations of the Study
  Limitations of the Study

CHAPTER TWO: REVIEW OF THE LITERATURE
  Conceptualization of Service Quality in the Service Firm
  Customer Contact
  Linking Service Practice Drivers to Firm Performance
  Service Quality Measurement in the Public Sector
  Service Quality Measurement in Nonprofit Firms
  The Unique Case of the Nonprofit Higher Education Firm
  Quality and Assessment in Higher Education
  Service Quality Measurement-SERVQUAL
  Conceptual Model of Service Quality
  Extended Service Quality Model
  Future Research Opportunities

CHAPTER THREE: HYPOTHESES

CHAPTER FOUR: RESEARCH METHODOLOGY
  Research Design
  Samples
  Subunit Service Providers
  Instruments
  Methodology for Computation of SERVQUAL Gap Scores
  Pilot Study
  Data Production
  Treatment of Data
  Data Analysis

CHAPTER FIVE: RESULTS
  Demographics and Characteristics of the Samples
  Mean Ratings of SERVQUAL Survey Items-Student Sample
  Mean Ratings of SERVQUAL Survey Items-Manager Sample
  Psychometric Performance of the SERVQUAL Instrument for the Measurement of Students' Perceptions of Service Quality-Gap 5 (Hypotheses 1a, 1b, 1c, 1d)
  Differences in Students' Gap 5 Scores (Hypotheses 2a, 2b, 2c)
  Managers' Understanding of Students' Service Quality Expectations-Gap 1 (Hypothesis 3)
  Gap 1 Relationships to Marketing Research Orientation, Upward Communication, and Levels of Management (Hypotheses 4a, 4b, 4c)
  Gap 1 Relationship to Students' Subunit Satisfaction (Hypothesis 5)
  Relationship Between Subunit Satisfaction and Marketing Research Orientation, Upward Communication, and Levels of Management (Hypotheses 6a, 6b, 6c)
  Gap 5 Relationship to Students' Subunit Satisfaction (Hypothesis 7)
  Relationship Between Service Quality (Gap 5), Value, Satisfaction, and Future Behavioral Intentions Measures (Hypotheses 8a, 8b, 8c, 8d, 8e)
  Findings Related to the Conceptual Interrelationships Between Service Quality, Value, and Satisfaction

CHAPTER SIX: DISCUSSION
  Summary of Results
  Implications of Results
  Limitations and Directions for Future Research

REFERENCES
APPENDIX A
APPENDIX B
LIST OF FIGURES

Figure 2.1 SERVQUAL Conceptual Model of Service Quality
Figure 2.2 Extended Service Quality Model
Figure 4.1 Structural Model A
Figure 4.2 Structural Model B

LIST OF TABLES

Table 5.0 Demographic Characteristics of Student Respondents
Table 5.1 Distribution of Manager Respondents by Subunit Service Provider
Table 5.2 Mean Student Ratings of Expectation, Perception, and Gap 5 Items
Table 5.3 Mean Student SERVQUAL Dimensional Importance Ratings
Table 5.4 Mean SERVQUAL Expectation Ratings and Managers' Gap 1 Scores
Table 5.5 Mean SERVQUAL Importance Ratings-Students Versus Managers
Table 5.6 SERVQUAL Scale Reliability Analysis-Student Instrument
Table 5.7 Factor Analysis of Gap 5 Scores-Student Instrument
Table 5.8 t-Tests of SERVQUAL Student Dimensional Importance Weightings
Table 5.9 Factor Analysis of Weighted Gap 5 Scores-Student Instrument
Table 5.10 Regression Model for Overall Service Quality and Gap 5 Scores
Table 5.11 Analysis of Variance of Student Responses to Service Problem Item
Table 5.12 Analysis of Variance of Student Responses to Service Resolution Item
Table 5.13 Analysis of Variance of Student Responses to Recommend Item
Table 5.14 t-Test of Weighted Overall Gap 5 Scores by Gender
Table 5.15 t-Test of Weighted Dimensional Gap 5 Scores by Gender
Table 5.16 Analysis of Variance of Overall Gap 5 Score by Grade Classification
Table 5.17 Analysis of Variance of Overall Gap 5 Score by Facility Type
Table 5.18 Scale Reliability Analysis-Manager Instrument
Table 5.19 t-Test of Student Expectations versus Managers' Predictions
Table 5.20 t-Test of Dimensional Importance Ratings-Managers versus Students
Table 5.21 Managers' Overall Weighted Gap 1 Scores and Factors by Subunit
Table 5.22 Pearson Correlations for Managers' Gap 1, Marketing Research Orientation, Upward Communication, and Levels of Management
Table 5.23 Pearson Correlations for Managers' Gap 1 and Subunit Satisfaction
Table 5.24 Pearson Correlations for Subunit Satisfaction, Marketing Research Orientation, Upward Communication, and Levels of Management
Table 5.25 Pearson Correlations for Weighted Gap 5 and Subunit Satisfaction
Table 5.26 Pearson Correlations for Weighted Gap 5, Satisfaction, Value, Intent to Remain in Program, Willingness to Recommend Program, and Willingness to Recommend Institution
Table 5.27 Structural Model 'A'-Summary of Effects

ABSTRACT

The dominant conceptualization of service quality in the service management literature has been the specification of service quality as the gap between the customer's expectations for service (what a customer feels a service firm should offer) and the customer's perception of the service performance that was delivered by a firm. This gap between expectations and perceptions is often referred to as the disconfirmation construct of service quality, and is frequently measured by the 22-item SERVQUAL instrument developed by Parasuraman, Zeithaml, and Berry (1985). The Extended Service Quality Model, also developed by Parasuraman et al.
(1988), specifies a set of factors that are theorized to contribute to a series of four organizational gaps, which in turn may influence the size of the service quality gap perceived by customers. This study explored the reliability and validity of the SERVQUAL scale for the measurement of student perceptions of the service quality of a university housing program in a nonprofit higher education firm. Gap 1 of the Extended Service Quality Model, defined as the discrepancy between customer expectations and management's perceptions of customer expectations, was also examined along with its theorized contributing factors (marketing research orientation, upward communication, and levels of management) across five subunit organizations involved in the production of the housing program. The study provided support for the psychometric performance of the SERVQUAL instrument in a nonprofit higher education firm; reported significant differences in student perceptions of service quality according to gender and grade classification, where females, sophomores, and seniors perceived lower service quality; and found that a significant Gap 1 existed between managers and customers. Moderate relationships were found between service quality, subunit satisfaction, overall satisfaction, value, and a willingness to recommend the program. Lastly, a structural model with the causal ordering of service quality and value as antecedent to satisfaction provided a good fit to the data and accounted for 64 percent of the variance in a student's willingness to recommend the housing program to a friend.
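The gap measures described in the abstract reduce to simple arithmetic: Gap 5 is each item's perception rating minus its expectation rating, averaged within a dimension and weighted by that dimension's importance; Gap 1 is the mean difference between what customers expect and what managers predict they expect. The sketch below is purely illustrative — the item codes, dimension groupings, and weights are hypothetical placeholders, not the study's instrument or analysis code.

```python
# Illustrative SERVQUAL gap-score arithmetic (hypothetical items and weights,
# not the study's data or code).

def gap5_score(perceptions, expectations, dimensions, weights):
    """Weighted overall Gap 5 score.

    perceptions / expectations: dicts mapping item id -> 1-7 rating.
    dimensions: dict mapping dimension name -> list of item ids.
    weights: dict mapping dimension name -> importance weight (sums to 1).
    Negative values mean perceptions fall short of expectations.
    """
    total = 0.0
    for dim, items in dimensions.items():
        gaps = [perceptions[i] - expectations[i] for i in items]
        total += weights[dim] * (sum(gaps) / len(gaps))
    return total

def gap1_score(student_expectations, manager_predictions):
    """Mean discrepancy between students' expectations and a manager's
    prediction of those expectations (positive = manager underestimates)."""
    items = student_expectations.keys()
    diffs = [student_expectations[i] - manager_predictions[i] for i in items]
    return sum(diffs) / len(diffs)

# Toy example: two dimensions of two items each (placeholder ids and ratings).
dims = {"tangibles": ["t1", "t2"], "reliability": ["r1", "r2"]}
wts = {"tangibles": 0.4, "reliability": 0.6}
exp = {"t1": 6, "t2": 7, "r1": 7, "r2": 6}
per = {"t1": 5, "t2": 6, "r1": 6, "r2": 6}
mgr = {"t1": 6, "t2": 6, "r1": 6, "r2": 6}

print(gap5_score(per, exp, dims, wts))  # negative: perceptions fall short
print(gap1_score(exp, mgr))             # positive: expectations underestimated
```

In the actual instrument the 22 items are grouped into five dimensions and the importance weights are elicited from respondents; the structure above only shows where each quantity enters the computation.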
CHAPTER ONE

Introduction

Background of the Study

The service sector of the United States economy has grown rapidly throughout the 1980s and 1990s and now accounts for over 70% of the nation's gross national product (Rust, Zahorik, and Keiningham, 1994) and 74% of all jobs (Shugan, 1994). Throughout this period, leading service firms have moved aggressively to continuously redesign complex service delivery systems as one means of gaining competitive advantage and achieving world-class performance (Roth, Chase, and Voss, 1997). While competition is increasingly based on quality constrained by cost, customers are increasingly value oriented, focusing on results and service quality that exceed the price and acquisition costs incurred for a service. In short, to remain competitive in a services- and information-based economy, organizations will need to focus both on technical service quality (what is delivered) and on functional service quality, that is, the complex operations systems through which a service is delivered (Gronroos, 1983).

Defining Characteristics of Service Firms

Service firms differ from their manufacturing counterparts in that the "product" is simultaneously produced and consumed. Service organizations are also faced with the unique challenge of deriving productivity gains not principally through technological innovations (although technology plays an important role in the service delivery process) but through the most effective and efficient application of human resources (Stajkovic and Luthans, 1997).
Other key differences between manufacturing and service organizations include: (1) the definition and assessment of performance outcomes (i.e., a tangible manufactured good that is easily described and directly measured, versus an intangible service that contains implicit attributes which are hard to define in operational terms), and (2) the nature of the task performance and work processes involved in the delivery of performance outcomes (Stajkovic and Luthans, 1997). In the case of service organizations, the service delivery process involves a complex web of dual perceptions between managers and customers (Luthans, 1988, 1995), and is most often the source of mismanagement (Parasuraman, Zeithaml, and Berry, 1985).

Purpose of the Study

SERVQUAL, an instrument to measure customer perceptions of service quality, and the Extended Service Quality Model (ESQM), developed to measure organizational "gaps" in the service delivery process, are customer-centered approaches designed by Parasuraman et al. (1988) to capture and measure the perceptions of customers, managers, and employees. In order to extend service quality theory, Parasuraman et al. (1991a) made a series of recommendations with regard to the Extended Service Quality Model, including conducting research focusing on the individual gaps which constitute the ESQM. Thus, the main purpose of this study is to empirically examine Gap 1 and Gap 5 of the Extended Service Quality Model (see Figure 2.1, p. 26), as operationalized in a university housing program in a nonprofit higher education firm, under the condition of multiple subunit service providers. The second purpose of the study is to examine the interrelationships of the service quality, value, and satisfaction constructs.
The literature has been equivocal on the question of which constructs are antecedent, which are mediating, and which are consequent (Rust and Oliver, 1994); therefore, two competing models will be empirically examined through the use of structural equation modeling techniques to determine which model provides the best fit to the data.

Research Questions

Questions about the Psychometric Performance of the SERVQUAL Instrument-Reliability and Validity (Gap 5)

1. Do the items in the SERVQUAL scale form a cohesive, five-dimension construct for measuring students' perceptions of program service quality (Gap 5) in a nonprofit higher education firm? That is, does the SERVQUAL instrument generalize to the nonprofit higher education setting?

Questions about Student Expectations and Perceptions of Service Quality

2. To what extent do personal factors affect students' Gap 5 SERVQUAL scores, defined as the difference between expectations and perceptions of service quality? That is, do significant differences exist in students' SERVQUAL scores according to gender, grade classification, and type of residential facility?

Questions about Managers' Understanding of Student Expectations for Service Quality (Gap 1)

3. Due to organizational barriers, managers may not always have an accurate understanding of what features connote high quality to students. Specifically, do inadequate market research on students' service quality expectations, inadequate upward communication from student-contact staff, and excessive layers of management contribute to a gap (Gap 1) between students' expectations for service quality and managers' understanding of their expectations?

Questions about the Contributing Factors to Gap 1

4. What is the relationship between the size of Gap 1 and its three theorized contributing factors?
That is, what is the relationship between Gap 1 and marketing research orientation, upward communication, and levels of management?

Questions about Student Satisfaction with Subunit Service Providers and Subunit Managers' Gap 1

5. What is the relationship between student satisfaction with subunit service providers and the size of subunit managers' Gap 1?

Questions about Subunit Satisfaction and the Contributing Factors to Subunit Managers' Gap 1

6. What is the relationship between students' satisfaction with subunit service providers and the three theorized factors contributing to subunit managers' Gap 1? That is, what are the valences of the relationships between subunit satisfaction and a subunit manager's marketing research orientation, upward communication, and levels of management?

Questions about Subunit Satisfaction and Students' Perception of Program Service Quality (Gap 5)

7. What is the relationship between Gap 5 and students' satisfaction with subunit service providers?

Questions about the Conceptual Interrelationships Between Service Quality, Value, and Satisfaction

8. What is the relationship between service quality (Gap 5), satisfaction, value, and future behavioral intentions measures, including institutional commitment and housing program commitment?

9. What are the causal directions of the relationships between students' perceptions of service quality (Gap 5), overall program satisfaction, overall program value, and students' willingness to recommend the program?

Significance of the Study

All organizations, public and private, face growing expectations for performance and accountability.
Public sector organizations generally, and nonprofit higher education institutions in particular, are increasingly turning to private sector approaches for measuring customer expectations and perceptions of service quality. Indeed, Kanter and Brinkerhoff (1981) showed that the client satisfaction approach is one means by which public sector organizations assess performance. The Government Performance and Results Act (GPRA) is yet another example of legislation specifically designed to require goal setting, planning, and performance reporting throughout federal agencies. Voluntary efforts include the modification of the American Customer Satisfaction Index (ACSI; Fornell, 1996) into an ACSI Government Model for use by government agencies. This effort has proven successful, with over 31 federal government agencies having implemented the ACSI model to measure customer expectations, perceived quality, customer satisfaction, and agency outcomes (Felker, 1999). In the nonprofit education sector, the National Institute of Standards and Technology (NIST) recently announced the revised 2001 Education Criteria for its Baldrige National Quality Program. The Baldrige Education Criteria are specifically designed to link with the seven-part framework used in the Baldrige Business Criteria such that cross-sector cooperation can be facilitated. Central to both the ACSI model and the Baldrige framework is the concept of service quality and customer focus. The SERVQUAL instrument and the Extended Service Quality Model developed by Parasuraman et al. (1988) have played a central role in the measurement and conceptualization of service quality.
Thus, the significance of the present study is that it provides new data on the generalizability of SERVQUAL and the ESQM to the higher education sector, specifically, in a university housing program under the condition of multiple subunit service providers. University housing programs play a crucial role in the lives of residential students. The present study provides valuable insight into how students evaluate the service quality of a residential housing program, and how the Extended Service Quality Model can be implemented to identify service quality gaps in the delivery of support services in colleges and universities.

Delimitations of the Study

The present study is limited to one model of service quality, the Extended Service Quality Model developed by Parasuraman et al. (1988). As such, only the five service quality dimensions posited by the SERVQUAL scale were examined. Furthermore, consistent with the developers' recommendations, the study did not attempt to examine the entire model, but rather focused on specific parts of the model, Gap 1 and Gap 5. The study did not examine any other support services outside of the residential housing program, either academic-related or non-academic-related. Customer data were produced through a stratified random sample of the resident student population. Manager data were produced through a census of the subunit manager population.

Limitations of the Study

Quantitative data were produced via self-report surveys distributed by the researcher via mail to the student sample and in person to the manager population. As is the case with all self-report surveys, it was assumed that respondents answered accurately and truthfully, and that they comprehended the survey items.
In the case of the manager survey, the researcher was able to respond to questions during the data collection process, whereas in the case of the customer survey, the researcher was unable to answer questions due to the aforementioned survey distribution method. The responses to survey items by subjects in both the customer and manager groups are subject to personal bias and to the perceptions of each respondent. The motivations driving the responses of the respondents are unknown to the researcher. The samples used in this study were comprised of students resident in the university's housing program, and managers of the subunits that support the program. The study did not survey students resident in non-university-owned housing. As such, the results of this study may not be applicable to students resident in privately owned housing. The study's samples were also limited to a single nonprofit research university (Carnegie Code: Doctoral/Research Universities-Extensive), located in a major urban setting with an enrollment (headcount) of 28,766. The study's findings therefore may not generalize to other types of universities and colleges, and to institutions with smaller or larger enrollments.

CHAPTER TWO

Review of the Literature

Conceptualization of Service Quality in the Service Firm

The defining characteristic of the service firm is the relationship between the customer and the service worker (Whyte, 1946).
Expanding on this initial definition, Gronroos (1990:29) cites the distinctive characteristics of services as the following: "(a) Services are more or less intangible; (b) Services are activities or a series of activities rather than things; (c) Services are at least to some extent produced and consumed simultaneously; (d) The customer participates in the production process at least to some extent." Service quality, therefore, cannot be managed in the traditional manufacturing sense, since services are performances rather than engineered objects (Parasuraman et al., 1985). Initially, a product-oriented (production-line) approach to service quality and management issues can be seen in the literature in studies by Levitt (1972); Sasser, Olsen, and Wyckoff (1978); and Fitzsimmons and Sullivan (1982). However, Chase and Aquilano (1977) provided one of the first moves away from this orientation with their classification scheme, which identified three service types: pure services, mixed services, and quasi-manufacturing services. The assertion was that "the main feature that sets a service system apart from a manufacturing system is the extent to which the customer must be in direct contact" with the service production process (Chase and Aquilano, 1977:17). Chase (1978) advanced the concept by introducing the term "customer contact" and provided its first operational definition as a function of total time the customer is in the system relative to the total time of service creation.

Customer Contact

Chase and Tansik (1983) formally specified the Customer Contact Model (CCM), which incorporated service production dimensions in the design of service systems. Mills, Hill, Leidecker, and Margulies (1983) continued to move away from the product-oriented approach by focusing on the interface between firm and client, which they termed the "seminal element" of service firms.
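Chase's (1978) operational definition of customer contact cited above is a simple ratio: the time the customer spends in the service system relative to the total time of service creation. A minimal, hypothetical sketch — the example durations and the high-contact reading are illustrative, not from the source:

```python
def contact_ratio(customer_time_in_system: float,
                  total_service_creation_time: float) -> float:
    """Chase's (1978) customer-contact measure: fraction of total service
    creation time during which the customer is present in the system."""
    if total_service_creation_time <= 0:
        raise ValueError("total service creation time must be positive")
    return customer_time_in_system / total_service_creation_time

# Hypothetical example: a housing-office visit in which the student is
# present for 30 of the 40 minutes the service takes to create.
ratio = contact_ratio(30, 40)
print(ratio)  # 0.75 -- high contact, toward the "pure service" end of
              # Chase and Aquilano's (1977) pure/mixed/quasi-manufacturing scheme
```

Higher ratios correspond to service systems where the customer is in direct contact with production for most of its duration, which is the dimension Chase and Aquilano's classification scheme turns on.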
The first empirical test of the customer contact construct was performed by Mills and Turk (1986). Kellogg and Chase (1995) advanced a generalized theory of customer contact by adding the essential dimensions of coupling (Weick, 1976), interdependence (Victor and Blackburn, 1987), and information richness (Daft and Lengel, 1984), with empirical support for linking the concepts coming from Child (1987). The Kellogg and Chase (1995) study constructed the first empirically derived model to measure customer contact, while the link between the customer contact dimensions of communication time and intimacy to service quality was established by Soteriou and Chase (1997). Finally, the concept of customer contact has been extended to include a capacity to satisfy customers (CSC) measure (Seargeant and Frenkel, 2000). Drawing on the fact that high correlations have been demonstrated to exist between customers' and contact employees' perceptions of service quality (Schneider and Bowen, 1985), and research which showed the importance of customer contact employees being supported by management, co-workers and effective technology (Gronroos, 1988; Schmit and Allscheid, 1995), Seargeant and Frenkel (2000) found that the capacity to satisfy customers was strongly dependent on a set of mutually supportive variables including job satisfaction, organizational commitment, team support and interdepartmental support.

Linking Service Practice Drivers to Firm Performance

A recent stream of the service quality literature seeks to define the relationships between service practices (defined as the sum of the processes which constitute the service delivery system) and firm performance. What is evident in the literature is that the effort to improve service quality linkages to firm performance is a multi-disciplinary issue (Roth and Van der Velde, 1991).
Roth and Jackson (1995) identified several strategic determinants of service quality and performance in their study of retail banks. Specifically, through the use of structural equation modeling, the bank's people capabilities, technological leadership, and market acuity were found to be major determinants of service quality. These findings led to the formulation of their service management strategy triad: operational capabilities-service quality-performance (C-SQ-P). Adopting a tactical perspective, Bolton and Drew (1991b) found that customer perceived performance had a strong direct effect on quality and value assessments. Rust, Zahorik, and Keiningham (1994) established the relationship between satisfaction and customer retention and market share. And other researchers (Schneider and Bowen, 1984, 1985, 1993; Denison, 1990; Mitchell, Lewin, and Lawler III, 1990; Heskett, Sasser, and Schlesinger, 1997) have employed a human resources management perspective to investigate the relationships between personnel practices and service quality. Specifically, Heskett et al. (1997) proposed the service-profit chain, a seven-stage linkage model that establishes direct and strong relationships between profit, growth, customer loyalty, customer satisfaction, the value of goods and services delivered to customers; and employee capability, satisfaction, loyalty, and productivity. The basic tenet of the model is that satisfied employees deliver high-value services, which in turn lead to satisfied and loyal customers, thereby resulting in higher levels of firm profitability. To date, however, much of the work on service quality conceptualization and measurement has occurred in the private sector.
Researchers must therefore ensure that when instruments are utilized in new settings, their use is preceded by appropriate testing of the psychometric properties of the instrument and that care is taken in its implementation and interpretation (Orwig, Pearson, and Cochran, 1997). Based on extant organizational theory, three factors with potential effects on service quality measurement and the generalizability of SERVQUAL and the ESQM are discussed below.

Service Quality Measurement in the Public Sector

First, as Poister and Henry (1994:155) point out, "It often appears that the perceived performance gap between the public and private sectors pertains to both quality and customer service as well as the efficiency of service delivery." Yet, in a democratic form of government, satisfaction or dissatisfaction with government services is of fundamental importance (Poister and Henry, 1994). Specification and management of service delivery processes in the public sector is difficult due to the multiple performance criteria by which public organizations are evaluated, including accountability, efficiency, equity, and fiscal integrity, which often involve tradeoffs (Wilson, 1989). Furthermore, simple definitional matters, such as the meaning of "responsiveness," can become a difficult issue for the public sector organization. For example, Saltzstein (1992) cites two competing perspectives with regard to bureaucratic responsiveness: the first perspective views public agencies and bureaus as being responsive to the wishes of the public, while the second perspective views the bureaucracy as a representative of the state in interactions with the public. These opposed bureaucratic effects are produced by the U.S.
constitutional system, which fragments authority, encourages intervention, and produces agencies that serve people that are more responsive, and agencies that regulate people that are more adversarial (Wilson, 1994). Trends during the 1990s, however, including the various quality management programs and reengineering processes (Berman and West, 1995; Cohen and Brand, 1993; Davenport, 1994; Hyde, 1995), the reinventing government movement (Osborne and Gaebler, 1992), and the National Performance Review (1993), all reflect considerable consensus on the "premise that improvement in the performance of specific public organizations, and government more generally, requires greater attention to the needs and desires of the client-citizens regarding their preferences and, second, greater freedom by organizational personnel to design and implement service delivery in ways that effectively address these preferences" (Robertson, 1995:4). Yet when government attempts to serve the "customer," four competing approaches can arise which send the government in four different directions: (1) citizens as service recipients, (2) citizens as partners in service provision, (3) citizens as overseers of performance, and (4) citizens as taxpayers (DiIulio, Garvey, and Kettl, 1993). With regard to service quality measurement, one potentially significant difference between public and private sector organizations involves the concept of "consumer sovereignty." As Moore (1995) points out, in the private sector the presumptive value of an enterprise is established by the individual, voluntary choice of a customer to purchase a service. However, in the public sector this link is severed in that organizational funding is derived through the coercive power of taxation. This has the effect of "blotting out" individual preferences, and thus decreases citizen control over public sector production.
The challenge for public sector organizations, therefore, is to interpret the political marketplace of citizens and to decipher individual preferences from the collective decisions derived from representative democratic institutions (Moore, 1995). In the public sector, however, the aim of managerial work is less clear, the level and type of services to be produced are ambiguous, and the measurement of performance and value is difficult (Moore, 1995). As a result, the types of outcomes noted by Roth et al. (1997) are not uncommon; specifically, the public sector lags the private sector on most aspects of service practices and performance, including organizational productivity, external service quality, customer growth/retention, and its value orientation (i.e., efforts to "proactively remove sources of non-value to customers and to create exceptional value through new service design, service quality and cost management" [Roth et al., 1997, p. 12]). And when compared to other service sectors, local governmental organizations employed best practices to a much lower extent and were among the lowest performers in the sample. Furthermore, in a recent study based on a sample of U.S. cities with populations of 25,000 and over, Poister and Streib (1999) found that less than 40 percent of the municipal jurisdictions made any kind of meaningful use of performance measures in their management and decision processes, including service quality measures. Thus, the extant literature provides evidence that sector status (public vs. private) may have a moderating effect on service quality measurement.

Service Quality Measurement in Nonprofit Firms

The second factor that could potentially affect service quality measurement is firm type. Specifically, each type of firm can be viewed as having a unique set of constraints which produce differential effects on firm behavior.
For example, the goal of the for-profit firm is clear: maximize profits for the shareholders of the firm. This is achieved by producing services that earn revenues in excess of the costs of production. Furthermore, the bottom line of the for-profit firm provides an unambiguous measure of the firm's success and the effectiveness of its managers. For the nonprofit firm, however, the "nondistribution constraint" (Hansmann, 1987) is the characteristic which sets it apart from its for-profit counterparts and may have an important effect on service quality. Simply stated, the nondistribution constraint prohibits the distribution of residual profits to those individuals exercising control over the firm, namely, boards of directors or managers. Additionally, the "reasonable compensation constraint" states that nonprofits cannot pay their board members or employees beyond what is considered "reasonable" compensation for similar opportunities in other sectors. Thus, extant theory suggests that nonprofit firms have an explicit incentive to maximize service quality since residual profits cannot be distributed to managers or directors. As Hansmann (1987) noted, in comparison to for-profit firms,

A nonprofit firm, in contrast, offers consumers the advantage that, owing to the nondistribution constraint, those who control the organization are constrained in their ability to benefit personally from providing low-quality services and thus have less incentive to take advantage of their customers than do the managers of a for-profit firm. (p. 29)

Finally, most theories that attempt to model the behavior of nonprofit firms assume that managers will attempt to maximize quality based on the premise that nonprofits attract individuals that are ideologically aligned with the goals and mission of the firm.
Managers in the nonprofit sector therefore derive satisfaction from the nature and quality of their work and thus seek to maximize both output and quality levels.

The Unique Nature of the Nonprofit Higher Education Firm

The third factor with possible effects on service quality measurement involves the unique nature of the nonprofit higher education firm. This uniqueness was described by Keller (1983) when he noted,

American colleges and universities occupy a special, hazardous zone in society, between the competitive profit-making business sector and the government owned and run state agencies. They are dependent yet free; market oriented yet outside cultural and intellectual fashions... They constitute one of the largest industries in the nation but are among the least businesslike and well managed of all organizations. (p. 5)

Indeed, a key question that has been addressed recently in the nonprofit higher education literature is, "Why can't a college or university act more like a for-profit firm?" Two explanations have been advanced by Winston (1997a, 1997b) based on a nonprofit/donative-commercial model of the firm. Key to the understanding of this model is the fact that the price charged by colleges and universities is typically less than production costs, with the difference coming in the form of a student subsidy (Cost = Price + Subsidy). The fact that per-student subsidies vary greatly among the various types of collegiate institutions (public vs. private, research vs. non-research) is a key characteristic of the economic structure of U.S. higher education (Winston, 2000).
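Winston's accounting identity can be illustrated with a toy calculation. The figures below are hypothetical, chosen only to show how the subsidy term is derived from the identity Cost = Price + Subsidy; they are not drawn from Winston's data.

```python
def student_subsidy(cost_per_student: float, price_charged: float) -> float:
    """Winston's identity, Cost = Price + Subsidy, rearranged:
    the subsidy is the portion of the educational cost per student
    that is not covered by the price the student pays."""
    return cost_per_student - price_charged

# Hypothetical institution: $30,000 educational cost per student,
# $18,000 net tuition price, implying a $12,000 per-student subsidy.
print(student_subsidy(30_000, 18_000))  # 12000
```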
As Winston (1996:4) noted, "This sustainable separation of cost and price - the continuing ability of a college to subsidize all of its customers - is surely a defining economic characteristic of higher education, both public and private." Thus, student subsidy levels may affect the capacity of a higher education firm to deliver high service quality. The behavior of the nonprofit higher education firm is also affected by what Winston (1996) has referred to as "customer-input technologies." For colleges and universities, certain key production inputs can only be purchased from the very customers who buy their products, namely, students. Rothschild and White (1995) highlighted the simultaneous nature of this relationship, where the student-as-customer pays a net price for his/her education while at the same time the student-as-supplier-of-input is paid a wage in the form of per-student subsidies (discussed above). The subtle and complex set of issues surrounding student peer effects has been discussed in the literature (Goethals, Winston and Zimmerman, 1999; Goethals, 1999), and most recently, empirical support has been provided by Zimmerman (1999) and Goethals (2000). The higher education literature thus recognizes that student quality is used as a proxy for institutional quality and vice versa, and colleges and universities have long recognized this strategic interplay in their admissions-quality-pricing policies (Turner, 1996; Winston and Zimmerman, 2000). In summary, the unique nature of the nonprofit higher education firm, characterized by customer subsidies and customer-input technologies, may have an effect on service quality conceptualization and measurement within the nonprofit higher education firm.
Quality and Assessment in Higher Education

In the competitive higher education market, quality is the key dimension upon which universities compete: quality of faculty and students, quality of academic and research programs, and quality of educational, research and residential facilities (Clotfelter, 1996). Clotfelter (1996) refers to this phenomenon as the "pursuit of excellence," wherein managers strive to maintain or improve the quality of educational services such that the institution's relative quality ranking (vis-a-vis its peers) can be maximized. Throughout the 1980s and 1990s, quality indicators such as student satisfaction and assessment measures have figured prominently in the higher education market. Assessment can be defined as the collection and use of information in order to: (1) demonstrate accountability to external audiences, (2) improve the functioning of the institution's programs, and (3) provide continuous feedback and interpretation about the life of an institution (Braskamp, 1991). The dominant model for higher education assessment is Astin's input-environment-outcomes (I-E-O) model (1970a, 1970b, 1972, 1977, 1993). The model assesses the student's customer-input characteristics (I), the environmental educational and co-curricular programs that the student is exposed to (E), and outcomes (O), that is, how students change as a result of exposure to the college experience. With regard to the environment, the two variables with the strongest positive effect on student satisfaction are the research orientation of the institution and average faculty salaries (Astin, 1993). One explanation for this finding is that highly paid faculty are concentrated in major research universities as a result of their demand for modern facilities including laboratories, libraries and computers. Students thus benefit from access to such tangibles.
Another finding critical to the issue of service quality in university housing programs is that living in a university residence hall has direct effects on the following three outcomes: (1) attainment of the bachelor's degree, (2) satisfaction with faculty, and (3) willingness to re-enroll in the same college (Astin, 1993). Pascarella and Terenzini (1991) also found that living on campus is associated with higher levels of integration in the academic and social systems of an institution, which may affect the values, attitudes and psychosocial orientations of students. Collectively, these findings suggest the importance of building and maintaining high student satisfaction with residential housing services, which may in turn have key implications for student satisfaction and retention. While the I-E-O assessment model plays a key role in the accreditation process, quality indicators have also been analyzed in the popular press, which has seen the proliferation of college rankings publications. Unfortunately, a gap has emerged around the issue of academic quality, as judged by the professoriate, versus educational service delivery quality, as judged by students, parents and state and federal governments. As a result, higher education is increasingly turning to market-driven models to assess student satisfaction with the quality of co-curricular and non-academic support services such as housing programs.
As Delene and Bunda (1991:4) point out, "The congruence between many service industries in the business sector and higher education suggests the applicability of research for the assessment of service quality to higher education." The move toward a customer service model, with its focus on customer expectations and needs, administrative commitment to the training, development and recognition of frontline employees, and service quality evaluation, has not necessarily been an easy transition for higher education. Indeed, until very recently, the mere mention of the word "customer" has elicited negative reactions from many in academe who fail to see how the concept applies to higher education given the complex web of relationships with multiple, often conflicting, constituencies. However, the competitive market of the late 1990s has driven rethinking on this topic. As Chaffee (1998) noted,

At a minimum, students are daily consumers of our services. In this sense, they are inarguably our customers...[who] are increasingly aware of and vocal about their expectations. To the extent that this orientation comes to prevail, institutions that take little interest in student expectations do so at their own peril. If we do not meet student expectations, someone else will. Consistent failure to meet expectations leads to institutional decline. (pp. 24-25)

The SERVQUAL disconfirmation construct and its underlying Extended Service Quality Model discussed below is one such approach for measuring student expectations and perceptions of service quality, and thus warrants exploration in the nonprofit higher education context; furthermore, it is a proposition supported by
Keller (1983:118): "Colleges and universities across the land are realizing that they must manage themselves as most other organizations in society do; they are different and special but not outside the organization world...The time has arrived for college and university leaders to pick up management's new tools and use them."

Service Quality Measurement: SERVQUAL

Since the mid 1980s, service quality has been measured from a customer's attitudinal perspective. Parasuraman et al. (1985), through extensive qualitative research involving numerous customer focus-group interviews and in-depth executive interviews, developed a conceptual model that defined service quality from the customer's standpoint and identified criteria which customers use to judge service quality. Parasuraman et al. (1988) subsequently consolidated ten dimensions of service quality into the following five dimensions: (1) tangibles: the appearance of physical facilities, equipment, personnel, and communication materials; (2) reliability: the ability to perform the promised services accurately and dependably; (3) responsiveness: the willingness to help customers and ability to provide prompt service; (4) assurance: the knowledge and courtesy of employees and their ability to convey trust and confidence; and (5) empathy: the caring, individualized attention provided to the customer. Based on the above five dimensions, and the formal specification of service quality as the "gap" between the customer's expectation of service versus the customer's actual experience, Parasuraman et al. (1988) developed the 22-item SERVQUAL instrument designed to measure customer perceptions of service quality.
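The gap computation itself is mechanically simple: each item's perception rating minus its expectation rating, averaged within each dimension. The sketch below follows the instrument's standard item-to-dimension grouping; the sample responses are invented purely for illustration, and real administrations add importance weights and aggregate across respondents.

```python
# Standard SERVQUAL item-to-dimension grouping (22 items).
DIMENSIONS = {
    "tangibles":      range(1, 5),    # items 1-4
    "reliability":    range(5, 10),   # items 5-9
    "responsiveness": range(10, 14),  # items 10-13
    "assurance":      range(14, 18),  # items 14-17
    "empathy":        range(18, 23),  # items 18-22
}

def gap5_scores(expectations: dict, perceptions: dict) -> dict:
    """Gap 5 per dimension: mean of (perception - expectation)
    over the items belonging to that dimension. Negative scores
    mean the service fell short of expectations."""
    scores = {}
    for dim, items in DIMENSIONS.items():
        gaps = [perceptions[i] - expectations[i] for i in items]
        scores[dim] = sum(gaps) / len(gaps)
    return scores

# Invented 7-point Likert responses for a single respondent.
E = {i: 6 for i in range(1, 23)}   # uniformly high expectations
P = {i: 5 for i in range(1, 23)}   # slightly lower perceptions
print(gap5_scores(E, P))           # every dimension gap = -1.0
```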
While the methodology and dimensionality of the SERVQUAL instrument have been debated in the literature (e.g., Babakus and Boller, 1992; Babakus and Mangold, 1992; Cronin and Taylor, 1992, 1994; Parasuraman et al., 1993, 1994; Teas, 1993, 1994), the instrument itself has been widely tested in both the public and private sectors and has been shown to be quite generalizable. Nonetheless, Cronin and Taylor (1992) introduced a modified version of the instrument, SERVPERF, a performance-based measure of service quality aimed strictly at measuring the customer's perceptions of firm performance (versus the gap between customer perceptions and expectations). Subsequently, in a healthcare setting, Cronin and Taylor (1994) confirmed that patients' perceptions of healthcare providers' performance are directly related to overall satisfaction and judgement of service quality. These findings support the suggestion of Carman (1990) that it may not be necessary to capture baseline expectations as measured in the SERVQUAL instrument. The trade-off question that continues to be debated in the service quality literature, however, is whether the performance-based SERVPERF sacrifices important diagnostic information obtained by SERVQUAL, specifically with regard to the importance weightings (expectations) customers place on the various service quality dimensions.

Conceptual Model of Service Quality

Parasuraman et al.
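The contrast between the two scoring approaches can be shown in a few lines. Both functions below operate on simple per-item response lists, an illustrative simplification of the actual 22-item instruments; the ratings are invented.

```python
def servqual_score(expectations, perceptions):
    """Disconfirmation measure: mean of (P - E) across items."""
    gaps = [p - e for p, e in zip(perceptions, expectations)]
    return sum(gaps) / len(gaps)

def servperf_score(perceptions):
    """Performance-only measure (Cronin and Taylor, 1992):
    mean perception rating, ignoring expectations entirely."""
    return sum(perceptions) / len(perceptions)

E = [7, 6, 6, 5]   # invented expectation ratings
P = [5, 5, 6, 4]   # invented perception ratings
print(servqual_score(E, P))   # -1.0
print(servperf_score(P))      # 5.0
```

The example makes the trade-off concrete: SERVPERF reports a respectable 5.0 performance while SERVQUAL reveals the service fell a full point short of expectations, which is exactly the diagnostic information at issue.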
(1985) originally developed a conceptual model of service quality (see Figure 2.1, next page) with ten hypothesized determinants of perceived service quality (later reduced to five dimensions as discussed above) based on the results of a three-stage exploratory investigation comprised of: (1) management-level interviews and customer focus groups conducted across four private sector service categories (retail banking services, securities brokerage services, credit card services, and product repair and maintenance services); (2) a comprehensive case study of a national bank involving 36 branches in three regions; and (3) systematic group interviews involving senior-level managers of three additional industries (designed to confirm the findings of stages 1 and 2). The model specifies a series of gaps within an organization which lead to service quality deficiencies perceived by customers (Gap 5). The five theorized gaps are as follows:

Gap 1: Marketing Information Gap - discrepancy between customer expectations and management perceptions of customers' service expectations.

Gap 2: Standards Gap - discrepancy between management perceptions of customer expectations and service quality specifications.

Gap 3: Service Performance Gap - discrepancy between service quality specifications and the service actually delivered.

Gap 4: Communication Gap - discrepancy between communications to customers describing the service and the service actually delivered.

Gap 5: Service Quality Gap - discrepancy between customer service expectations and perceptions.

Figure 2.1
SERVQUAL Conceptual Model of Service Quality

[Figure not legible in this reproduction; recoverable labels include word-of-mouth communications, service delivery (including pre- and post-contacts), external communications to customers, and translation of perceptions into service quality specifications.]

Source: Parasuraman, A., Berry, L. and Zeithaml, V. (1991a).
"Perceived service quality as a customer-based performance measure: An empirical examination of organizational barriers using an extended service quality model," Human Resource Management, 30:335-364.

The Extended Service Quality Model

Following the initial exploratory study, the authors "extended" the conceptual model of service quality by developing a set of theoretical constructs (contributing factors) potentially affecting the magnitude and direction of Gap 1 through Gap 4 within the organization (Parasuraman et al., 1988). Specific antecedent variables were developed for each of the factors contributing to Gaps 1-4 based on the extant organizational behavior literature (see Figure 2.2, next page). As seen in Figure 2.2, the size of Gap 1 (marketing research gap) is hypothesized to be a function of marketing research orientation, upward communication, and the number of management levels. Gap 2 (service standards gap) is hypothesized to be a function of management commitment to service quality, goal setting, task standardization, and perception of feasibility. Gap 3 (the service performance gap) is proposed to be a function of employee teamwork, employee-job fit, technology-job fit, employee perceived control, supervisory control systems, role conflict, and role ambiguity. Finally, Gap 4 (external communication gap) is specified as a function of horizontal communication among departments (e.g., sales, marketing, customer contact) and the propensity of the firm to promise more than it can deliver.
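The antecedent structure just described can be captured as a simple mapping, which is useful when organizing survey items or regression predictors by gap. This is only an organizational sketch of the relationships stated above, not an artifact from the original studies.

```python
# Hypothesized antecedents of Gaps 1-4 in the Extended Service
# Quality Model (Parasuraman et al., 1988), keyed by gap number.
ESQM_ANTECEDENTS = {
    1: ["marketing research orientation",
        "upward communication",
        "levels of management"],
    2: ["management commitment to service quality",
        "goal setting",
        "task standardization",
        "perception of feasibility"],
    3: ["teamwork", "employee-job fit", "technology-job fit",
        "employee perceived control", "supervisory control systems",
        "role conflict", "role ambiguity"],
    4: ["horizontal communication",
        "propensity to overpromise"],
}

# The service performance gap (Gap 3) carries the most antecedents.
print(len(ESQM_ANTECEDENTS[3]))  # 7
```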
Figure 2.2
Extended Service Quality Model

[Figure not legible in this reproduction; recoverable construct labels include tangibles, responsiveness, reliability, role conflict, role ambiguity, employee perceived control, propensity to overpromise, horizontal communication, supervisory control systems, upward communication, marketing research orientation, levels of management, perception of feasibility, task standardization, management commitment to service quality, and goal setting.]

Source: Parasuraman, A., Berry, L., and Zeithaml, V. (1988). "Communication and control processes in the delivery of service quality," Journal of Marketing, 52:35-48.

The next significant contribution to service quality theory development was the empirical testing of the Extended Service Quality Model (ESQM). The critical questions examined by Parasuraman et al. (1991a) were: (1) what are the main theoretical constructs responsible for the size of the four service quality gaps, (2) whether the size of the service quality Gap 5 is a function of Gaps 1-4, and (3) which of the four service quality gaps is (are) most critical in explaining service quality variation. This first empirical test of the Extended Service Quality Model produced mixed results. Although the overall regression model for the contributing factors to Gap 1 did not achieve significance, statistically significant results for two of the antecedent variables (face-to-face interaction with managers, and levels of management) were attained. Statistically significant results were attained for the regression models for Gaps 2-4, and included statistically significant findings for the following contributing factors: management commitment to service quality, goal setting, task standardization, teamwork, employee-job fit, technology-job fit, employee perceived control, and horizontal communication. With regard to the Gap 1 findings, Parasuraman et al.
(1991a) hypothesized that insufficient variation in the mean Gap 1 scores was a plausible explanation for the regression model's lack of significance. The researchers also hypothesized that the small Gap 1 scores may have been attributable to the five companies whose managers participated in the study, as all of the firms were known to conduct extensive customer research. It was therefore recommended that managers in other companies be sampled to determine if similarly small Gap 1 scores exist in different settings. Finally, Gap 3 and Gap 4 yielded statistically significant correlations with Gap 5.

Future Research Opportunities

Extended Service Quality Model. Despite the fact that the SERVQUAL instrument has received extensive treatment in the service management literature, the fully specified Extended Service Quality Model has received limited empirical treatment (e.g., Chism, 1997; Parasuraman et al., 1991a). Thus, an opportunity exists to extend service quality theory by empirically examining the relationships specified in the model. Indeed, because of the broad nature of the model, Parasuraman et al. (1991a) recommend that one avenue for future research is to focus on the individual gaps specified in the model.

Service Quality, Value, and Satisfaction Interrelationships. The concepts of service quality, value, and satisfaction are interrelated, yet they are treated as three distinct constructs in the literature. The conceptualization of service quality represented by the SERVQUAL scale is based on a process of disconfirmation whereby (1) customers entertain expectations of performance across five theorized dimensions based on an ideal company referent, (2) experience an actual service performance, and (3) form performance perceptions based on the magnitude of the disconfirmation that was perceived.
This conceptualization captures the customer's global perceptions of a firm's service quality. As discussed in the previous sections, service quality management involves service product design (technical service quality), service process design, and service delivery (functional service quality). Customer satisfaction, on the other hand, is generally described as a post-choice evaluative judgement of a specific service incident (Oliver, 1977, 1980). Satisfaction is viewed as having two dimensions: transaction-specific and cumulative. The cumulative dimension of satisfaction is an evaluation based on the customer's long-term purchase and consumption experience with a service firm (Fornell, 1992; Johnson and Fornell, 1991). In both instances, customer satisfaction is purely experiential (Rust and Oliver, 1994). Value involves the customer's calculus between quality and price; however, the manner in which the two variables combine to form the customer's perception of value is not well understood (Rust and Oliver, 1994). For example, Zeithaml (1987) found that what constitutes value in the mind of the customer is highly personal and idiosyncratic, as seen in the following customer definitions: (1) value is low price; (2) value is whatever I want in a product; (3) value is the quality I get for the price I pay; and (4) value is what I get for what I give. Thus, with regard to the interrelationships between the three constructs, there is consensus in the literature that more research is needed to determine which constructs are antecedent, which are mediating, and which are consequent. The position taken by Cronin and Taylor (1992) and by Rust and Oliver (1994) is that satisfaction is superordinate to quality, that is, service quality is one of the variables
seen as affecting service satisfaction at the encounter-specific level; individual service encounter satisfaction is likely aggregated over time into global perceptions of service quality and cumulative satisfaction. It is this conceptualization of the interrelationships between service quality, value, and satisfaction that formed the basis for the alternative models explored in this study.

CHAPTER THREE

Hypotheses

The research questions to be addressed in this study, along with their formally specified hypotheses, are presented below:

Research Question #1. Do the items in the SERVQUAL scale form a cohesive, five-dimension construct for measuring students' perceptions of program service quality (Gap 5) in a nonprofit higher education firm?

Hypothesis 1a: The SERVQUAL instrument forms a cohesive scale when operationalized in a university housing program.

Hypothesis 1b: The students' Gap 5 scores (perception-minus-expectation SERVQUAL scores) for the 22 items contained on the SERVQUAL instrument produce a five-dimension structure, with items 1-4 loading on the tangibles factor, items 5-9 loading on the reliability factor, items 10-13 loading on the responsiveness factor, items 14-17 loading on the assurance factor, and items 18-22 loading on the empathy factor.

Hypothesis 1c: The students' weighted Gap 5 scores for tangibles, reliability, responsiveness, assurance, and empathy are significantly related to students' evaluation of overall program service quality (OSQ).

Hypothesis 1d: There are significant differences in weighted Gap 5 scores between students who do not experience a problem, who have problems resolved satisfactorily, and who are eager to recommend the program to a friend, versus
students who do experience a problem, who do not have problems resolved satisfactorily, and who are not willing to eagerly recommend the program to a friend.

Research Question #2. Do significant differences exist in students' Gap 5 scores according to gender, grade classification, and type of residential facility?

Hypothesis 2a: There are no significant differences in students' weighted Gap 5 scores based on gender.

Hypothesis 2b: There are no significant differences in students' weighted Gap 5 scores based on grade classification.

Hypothesis 2c: There are no significant differences in students' weighted Gap 5 scores based on type of facility.

Research Question #3. Do managers have an inaccurate understanding of students' expectations for service quality?

Hypothesis 3: There are significant differences between students' expectations for service quality and managers' understanding of those expectations when evaluated according to the weighted overall expectation score.

Research Question #4. What are the relationships between the size of Gap 1 and its theorized contributing factors: marketing research orientation, upward communication, and levels of management?

Hypothesis 4a: The size of Gap 1 is negatively related to marketing research orientation.

Hypothesis 4b: The size of Gap 1 is negatively related to upward communication.

Hypothesis 4c: The size of Gap 1 is positively related to levels of management.

Research Question #5. What is the relationship between student satisfaction with subunit service providers and the size of subunit managers' weighted Gap 1?

Hypothesis 5: Student satisfaction with subunit service is negatively related to the size of subunit managers' weighted Gap 1.

Research Question #6. What is the relationship between subunit service satisfaction and marketing research orientation, upward communication, and levels of management?
Hypothesis 6a: Subunit service satisfaction is positively related to marketing research orientation.

Hypothesis 6b: Subunit service satisfaction is positively related to upward communication.

Hypothesis 6c: Subunit service satisfaction is negatively related to levels of management.

Research Question #7. What is the relationship between service quality (Gap 5) and satisfaction with subunit service providers?

Hypothesis 7: Students' weighted Gap 5 scores are negatively related to students' satisfaction with subunit service providers.

Research Question #8. What is the relationship between service quality (Gap 5), satisfaction, value, and future behavioral intentions measures, including institutional commitment and housing program commitment?

Hypothesis 8a: Students' weighted Gap 5 scores are negatively related to students' perception of overall program satisfaction.

Hypothesis 8b: Students' weighted Gap 5 scores are negatively related to students' perception of overall program value.

Hypothesis 8c: Students' weighted Gap 5 scores are negatively related to students' intent to remain in university housing.

Hypothesis 8d: Students' weighted Gap 5 scores are negatively related to students' willingness to recommend university housing to a friend.

Hypothesis 8e: Students' weighted Gap 5 scores are negatively related to students' willingness to recommend the institution to a friend.

CHAPTER FOUR

Research Methodology

Research Design

The study was both exploratory and explanatory in nature. The overall purpose was to test Gap 1 and Gap 5 of the Extended Service Quality Model where multiple subunit service providers are involved in the production of a service.
A relatively complex research design was required to test the various hypotheses of the study; it included two different samples of respondents (managers and students) and two different levels of analysis (individual respondents and organizational entities).

Samples

The sampling frame was a nonprofit research university (Carnegie Code: Doctoral/Research Universities-Extensive) located in a major urban setting. Samples were produced from a population of 5,648 students resident in the university's housing system and from 107 managers from five subunit service provider organizations involved in the production of services which constitute the university's housing program. The SERVQUAL instrument was mailed to 1,671 students selected at random, while the Gap 1 manager instrument (sections 1 and 2 of the SERVQUAL instrument, along with a multiple-item instrument designed to measure the factors influencing Gap 1) was administered to all managers in each of the subunit service provider organizations.

Subunit Service Providers

The five subunit service providers involved in the production of services which constitute the university housing program included: (1) the Office of Residential and Greek Life; (2) the Housing and Residence Halls department; (3) the Access Card Services department; (4) the Department of Public Safety; and (5) the Facilities Management Services department.
The Office of Residential and Greek Life provides the programmatic elements of the housing program, including residential advisory services; the Housing and Residence Halls department provides the registration and assignment, custodial, customer service desk, residential mail, and telecommunication services; the Access Card Services department provides the point-of-entry electronic access control services; the Department of Public Safety provides 24-hour security and monitoring services; and the Facilities Management Services department provides residential repair and maintenance services.

The subunits in this study operate under a governance system typical of universities, wherein administrative subunits are subject to hierarchical authority while academic subunits operate under an ideological claim to guild rule (Clark, 1977). Mintzberg (1988) classifies universities as a professional form of bureaucracy, in contrast to the mechanistic form. Thus, the subunits in this study are typical of a university setting in that they are part of a centralized, bureaucratic nonacademic community which exists side by side with a decentralized, bureaucratic academic community (Hardy, 1990).

Instruments

Customer Instrument

The customer instrument was a modified version of the SERVQUAL instrument developed by Parasuraman et al. (1988, 1991b). Minor changes were made to the instrument, primarily changing survey item wording from "customer" to "student" in order to reflect the higher education context. The instrument contains 22 statements designed to capture customers' expectations of excellent service providers in general, and a corresponding 22 statements designed to measure customers' perceptions of the service quality of a specific company.
The two sections are separated by a section designed to ascertain customers' assessment of the relative importance of the five dimensions hypothesized to constitute the service quality construct. In addition, immediately following the perceptions section were: (1) a satisfaction section (a series of questions designed to measure students' satisfaction with the housing program's subunit service providers); (2) an overall performance section, which included individual items for overall service quality, overall satisfaction, and overall value, and multiple items for future behavioral intentions; (3) a multiple-item institutional commitment section; and (4) a personal information section.

Reliability of the survey instrument was assessed through an analysis of Cronbach's coefficient alphas and item-to-total correlations. Face validity of the instrument was assessed a priori through discussions with senior administrators in the university housing program. The dimensionality of the instrument was assessed through factor analysis utilizing the principal axis factoring method, with the number of factors constrained a priori to five. After oblique rotation, factor loadings were analyzed.

Manager Instrument

In order to measure the marketing information gap (Gap 1), the expectations section and the dimensional importance section of the SERVQUAL instrument were administered to managers, who were asked to respond to the items as if they were a student filling out the instrument. In other words, managers were asked to predict students' expectations for service quality and dimensional importance. Immediately following the first two sections was a series of items designed to measure the factors theorized to contribute to the marketing information gap (Gap 1).
The Manager Gap 1 instrument was scored on the same seven-point Likert-type scale (strongly disagree to strongly agree) found on the customer SERVQUAL instrument. Negatively worded items were reverse-scored prior to data analysis.

Methodology for Computation of SERVQUAL Gap 5 Scores

Gap 5 Scores (SERVQUAL Scores). Service quality (Gap 5) was measured by computing the difference between the ratings students assigned to the paired expectation/perception statements (i.e., Gap 5 Score = Perception Score - Expectation Score). For each student, a Gap 5 score was computed for each of the five dimensions by calculating the mean of the student's scores on the items that make up each dimension. An overall measure of service quality (overall Gap 5 score) was calculated by computing the mean of the dimensional Gap 5 scores.

Weighted Gap 5 Scores. A weighted Gap 5 score takes into account the student's relative importance weightings and was calculated for each student as follows: (1) a Gap 5 score was computed for each dimension, (2) the Gap 5 score for each dimension was multiplied by the importance weight assigned by the student to that dimension, and (3) an overall weighted Gap 5 score was computed by summing the weighted Gap 5 scores across the five dimensions.

Gap 1 Scores. The marketing information gap (Gap 1) was operationalized as the difference between students' expectations and managers' perceptions of students' expectations. The Gap 1 score was computed by subtracting, for each quality dimension, the mean student expectation score from the mean manager expectation score.

Subunit Gap 1 Scores. A Gap 1 score was computed for each subunit service provider by subtracting, for each quality dimension, the mean student expectation score from the mean subunit manager expectation score.

Weighted Gap 1 Scores.
A weighted Gap 1 score was also computed by following the weighting procedure outlined above. The weighted Gap 1 scores thus captured not only discrepancies between students and managers for each of the quality dimensions but also differences in the relative dimensional importance assigned by the two groups.

Pilot Study

The customer instrument was tested with a group of 52 students who responded from a stratified random sampling of 400 students from the housing program. The Cronbach's alpha reliability coefficients for the five dimensions of the SERVQUAL scale ranged from .79 for tangibles to .93 for reliability. These results fall within the range of .53 to .93 found in the service quality literature (Parasuraman et al., 1991b) and thus indicated high internal consistency among items within each dimension. Slight modifications were made to the final instrument based on respondent input, including changing the wording of "housing service" to "housing program" and the addition of three items dealing with service failure and recovery.

Data Production

Permission to conduct the study was requested from and approved by the vice president for business affairs and the assistant vice president for student affairs. A stratified (by residential facility type) random sample of 1,671 students resident in the university housing program was drawn from the resident population of 5,648. Instruments were mailed via U.S. Mail, accompanied by a cover letter from the executive director for business affairs. Data for the manager sample were produced through researcher-proctored survey administrations of the manager instrument on a subunit-by-subunit basis.
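The scoring rules set out under "Methodology for Computation of SERVQUAL Gap 5 Scores" can be sketched in code. This is a minimal, hypothetical illustration and not the study's SPSS procedure: the function names and sample ratings are invented, and the item-to-dimension mapping follows the theorized structure in Hypothesis 1b (items 1-4 tangibles, 5-9 reliability, 10-13 responsiveness, 14-17 assurance, 18-22 empathy).

```python
# Hypothetical sketch of the Gap 5 / Gap 1 scoring rules described in the text.
# Item indices are 0-based positions of survey items 1-22; ratings are invented.

DIMENSIONS = {
    "tangibles": range(0, 4),        # survey items 1-4
    "reliability": range(4, 9),     # survey items 5-9
    "responsiveness": range(9, 13), # survey items 10-13
    "assurance": range(13, 17),     # survey items 14-17
    "empathy": range(17, 22),       # survey items 18-22
}

def gap5_scores(expectations, perceptions):
    """Dimensional Gap 5 = mean of (perception - expectation) over a dimension's items."""
    gaps = [p - e for e, p in zip(expectations, perceptions)]
    return {dim: sum(gaps[i] for i in items) / len(items)
            for dim, items in DIMENSIONS.items()}

def overall_gap5(dim_gaps):
    """Overall Gap 5 = mean of the five dimensional Gap 5 scores."""
    return sum(dim_gaps.values()) / len(dim_gaps)

def weighted_gap5(dim_gaps, weights):
    """Weighted Gap 5 = sum over dimensions of (dimensional gap x importance weight),
    where weights are the student's 100-point allocation expressed as fractions."""
    return sum(dim_gaps[d] * weights[d] for d in dim_gaps)

def gap1_score(student_expectation_mean, manager_expectation_mean):
    """Gap 1 = mean manager prediction minus mean student expectation, per dimension."""
    return manager_expectation_mean - student_expectation_mean
```

Under this sign convention, a negative Gap 5 means perceptions fell short of expectations, and a negative Gap 1 means managers underestimated student expectations.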
Treatment of Data

Prior to data analysis, the sample data sets were evaluated for potential problems related to accuracy and missing data. Surveys were visually scanned for out-of-range values before and after data input. Additionally, variables in the data sets were electronically queried to check for extreme values. Out-of-range data entry errors were compared to the respondent's original survey instrument and corrected. With regard to the dimensional importance section of the instruments, 12 respondents allocated either less than or more than 100 points. These errors were corrected by calculating the respondents' allocation percentages based on the total number of points that were allocated and then applying these percentages to a 100-point scale.

Missing data were handled in two ways. First, instruments were initially reviewed for incomplete sections; a total of 22 customer surveys were deemed unusable. All of the manager instruments were deemed usable, the result of the proctored survey administration procedure wherein the researcher was able to verify the completeness of the instruments upon return. The second approach to handling missing data involved respondents' omissions on the multiple-item sections pertaining to the five dimensions of the SERVQUAL instrument. Rather than utilizing pairwise or listwise methods to exclude cases, the decision was made to average the respondent's scores on the other items pertaining to the dimension and then input the dimensional average for the missing item; a total of six cases were handled in this manner.
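The two data-cleaning rules just described (rescaling mis-allocated importance points to a 100-point base, and imputing a missing item with the respondent's dimensional average) can be sketched as follows. This is a hypothetical illustration with invented names, not the cleaning procedure actually scripted in the study.

```python
def rescale_to_100(points):
    """Rescale an importance allocation that does not sum to 100: convert each
    answer to its percentage of the points actually allocated, then apply that
    percentage to a 100-point scale (the correction described in the text)."""
    total = sum(points)
    if total <= 0:
        raise ValueError("respondent allocated no points")
    return [p * 100.0 / total for p in points]

def impute_dimension_mean(item_scores):
    """Replace a missing item (None) with the mean of the respondent's answered
    items within the same dimension, rather than dropping the case pairwise
    or listwise."""
    answered = [s for s in item_scores if s is not None]
    mean = sum(answered) / len(answered)
    return [mean if s is None else s for s in item_scores]
```

For example, an allocation of 30, 30, 20, 10, and 20 points (110 in total) rescales proportionally so the five dimensions again sum to exactly 100.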
Data analysis was conducted using SPSS for Windows and the AMOS (analysis of moment structures) software packages to employ analysis of variance, correlation, factor analysis, multiple regression, and structural equation modeling techniques for the testing of hypotheses and the validation of theoretical constructs. Specifically, factor analysis was performed on the instruments to test the theorized dimensionality; Pearson's correlation coefficients were analyzed to test the hypothesized relationships between relevant variables; and analysis of variance and independent-sample t-tests were performed to test for equality of means between groups. Multiple regression analysis was performed to test the relationship between overall service quality (dependent variable) and the weighted Gap 5 scores for the five service quality dimensions (independent variables). Finally, structural equation modeling was employed to generate standardized parameter estimates for two alternative structural models containing hypothesized causal relationships between program service quality (overall weighted Gap 5), overall value, overall satisfaction, and future behavioral intentions. Model fit was evaluated through multiple measures, including: (1) the conventional chi-square statistic (Joreskog, 1969); (2) the goodness-of-fit index (GFI; Joreskog and Sorbom, 1984; Tanaka and Huba, 1985); (3) the root mean square error of approximation (RMSEA; Browne and Cudeck, 1993; Steiger and Lind, 1980); and (4) the comparative fit index (CFI; Bentler, 1990). When evaluating the fit of a model, the aim of the researcher is not to reject the null hypothesis (the proposed model), since the researcher has specified its structural relations a priori.
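Of the fit measures listed, RMSEA has a simple closed form that can serve as a worked illustration. The formula below is the standard point estimate associated with Steiger and Lind (1980) and Browne and Cudeck (1993); the function name is invented and the numbers are not AMOS output from this study.

```python
import math

def rmsea(chi_square, df, n):
    """Point estimate of the root mean square error of approximation:
    sqrt(max(chi2 - df, 0) / (df * (N - 1))). The estimate is 0 whenever the
    model chi-square does not exceed its degrees of freedom; values near
    .05-.08 are conventionally read as close to reasonable fit."""
    return math.sqrt(max(chi_square - df, 0.0) / (df * (n - 1)))
```

For example, with a sample of 339 respondents, a model yielding a chi-square of 100 on 50 degrees of freedom gives an RMSEA of roughly .05.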
CHAPTER FIVE

Results

Introduction

The results of this study are organized into the following sections: (1) demographics and characteristics of the customer and manager samples; (2) mean survey results for student service quality expectations, perceptions, Gap 5 scores, and dimensional importance weightings; (3) mean survey results for manager perceptions of student service quality expectations and dimensional importance weightings; (4) findings related to the psychometric performance of the SERVQUAL scale; (5) findings related to student perceptions of service quality and whether differences existed based on gender, grade classification, and type of residential facility; (6) findings related to the marketing information gap (Gap 1); (7) findings related to the relationship between Gap 1 and marketing research orientation, upward communication, and levels of management; (8) findings related to the relationship between Gap 1 and students' satisfaction with subunit service; (9) findings on the relationship between subunit satisfaction and marketing research orientation, upward communication, and levels of management; (10) findings related to the relationship between Gap 5 and subunit satisfaction; and (11) findings related to the fit of alternative models of the interrelationships between service quality, value, satisfaction, and future behavioral intentions.

Demographics and Characteristics of the Samples

Customer Sample

A total of 361 students returned surveys, yielding a response rate of 21.6%. Of the surveys returned, 22 were deemed unusable due to substantially incomplete portions of the instrument. Thus, 339 surveys were included in the analysis, providing a final response rate of 20.3%. Table 5.0 contains a summary of the characteristics of the student respondents.
Thirty-seven and one-half percent of the respondents were male (n = 127) and 62.5% were female (n = 212). The proportion of female respondents in the sample was higher than in the known residential population, which was 51.5% male and 48.5% female. The mean age of the respondents was 20.5 years, with a standard deviation of 4.0 years. Eighty-six and four-tenths percent of the respondents fell within the traditional college age range of 17-22 years. With respect to grade classification, 42.8% were freshmen, 24.5% sophomores, 13.0% juniors, 9.4% seniors, and 10.3% graduate students; this sample composition compared favorably with the population grade classification of 46.7%, 24.1%, 13.8%, 7.3%, and 8.1% for freshman, sophomore, junior, senior, and graduate students respectively. Finally, Table 5.0 displays the distribution of the sample by type of residential facility: 28.6% resided in residence halls, 57.5% in apartments, and 13.9% in residential colleges. The distribution of the population was as follows: residence halls (20.6%), apartments (65.7%), and residential colleges (13.7%).

Table 5.0
Demographic Characteristics and Facility Type of Student Respondents

Student Characteristics      Number of Respondents   Percentage of Respondents
Gender
  Male                       127                     37.5
  Female                     212                     62.5
  Total                      339                     100.0
Class Standing
  Freshman                   145                     42.8
  Sophomore                  83                      24.5
  Junior                     44                      13.0
  Senior                     32                      9.4
  Graduate                   35                      10.3
  Total                      339                     100.0
Residential Facility Type
  Residence Hall             97                      28.6
  Apartment                  195                     57.5
  Residential College        47                      13.9
  Total                      339                     100.0

Manager Sample

A response rate of 100.0% resulted in a manager sample of 107 completed surveys. Table 5.1 contains a summary of the distribution of manager respondents across the five subunit service provider organizations.
Table 5.1
Distribution of Manager Respondents by Subunit Service Provider

Subunit                                 Number of Manager   Percentage of Manager
                                        Respondents         Respondents
Office of Residential and Greek Life    25                  23.4
Housing and Residence Halls             20                  18.7
Access Card Services                    10                  9.3
Department of Public Safety             31                  29.0
Facilities Management Services          21                  19.6
Total                                   107                 100.0

Mean Ratings of SERVQUAL Survey Items: Student Sample

Table 5.2 contains a summary of the students' mean expectation, perception, and difference scores for the five service quality dimensions and individual service quality characteristics. With regard to service quality expectations, mean ratings ranged from a high of 6.46 for the reliability dimension to a low of 5.62 for the tangibles dimension. Mean service quality perceptions ranged from a high of 4.70 for the assurance dimension to a low of 4.02 for the reliability dimension. The rank ordering of the means for each dimension was consistent with the results obtained in the pilot study. The last column of Table 5.2 displays the mean service quality Gap 5 scores, calculated by subtracting service quality expectations from service quality perceptions. The largest service quality gaps (Gap 5 scores) were reported for the

Table 5.2
Mean Student Ratings of Expectation Items,
Perception Items, and Service Quality Gaps (Gap 5 Scores)

Service Quality                                        Expected    Perceived   Service
Dimension        Survey Item                           Service     Service     Quality
                                                       Quality(1)  Quality(2)  Gap(3)
Tangibles        Modern equipment                      6.20        4.03        -2.17
                 Visually appealing facilities         5.94        3.82        -2.12
                 Neat-appearing employees              5.38        5.17        -0.21
                 Attractive printed materials          4.97        5.02        0.05
                 Tangibles Mean                        5.62        4.51        -1.11
Reliability      Meets promised deadlines              6.65        3.94        -2.71
                 Staff willing to solve problems       6.57        3.94        -2.63
                 Services performed correctly          6.47        4.09        -2.38
                 Services available when promised      6.59        4.03        -2.56
                 Error-free records                    5.99        4.09        -1.91
                 Reliability Mean                      6.46        4.02        -2.44
Responsiveness   Told when services will be provided   6.20        3.66        -2.54
                 Prompt service provided               6.38        3.96        -2.42
                 Staff seem willing to help            6.45        4.47        -1.98
                 Staff not too busy to help            6.03        4.09        -1.95
                 Responsiveness Mean                   6.27        4.05        -2.22
Assurance        Employees who instill confidence      5.78        4.22        -1.57
                 Safe and secure facilities            6.68        4.90        -1.78
                 Courteous employees                   6.30        5.07        -1.23
                 Knowledgeable employees               6.21        4.62        -1.59
                 Assurance Mean                        6.24        4.70        -1.54
Empathy          Attentive to individual needs         5.66        4.06        -1.59
                 Office hours convenient to students   6.30        3.99        -2.31
                 Staff provides personal attention     5.54        4.24        -1.30
                 Committed to students' best interest  6.34        4.01        -2.32
                 Understands student needs             6.02        3.95        -2.07
                 Empathy Mean                          5.98        4.05        -1.93
Overall                                                6.11        4.26        -1.85

(1) Mean expectation ratings (survey items 1-22)
(2) Mean perception ratings (survey items 23-44)
(3) Obtained by subtracting mean expectation ratings from mean perception ratings

reliability and responsiveness dimensions, with mean Gap 5 scores of -2.44 and -2.22 respectively, followed by empathy (-1.93), assurance (-1.54), and tangibles (-1.11). The Gap 5 results were also consistent with the results obtained from the pilot study.
The SERVQUAL construct predicts that larger Gap 5 scores represent lower perceptions of service quality.

Table 5.3 contains a summary of the direct importance weights for each of the five SERVQUAL dimensions. Students allocated a total of 100 points across the five dimensions, and as seen in Table 5.3, the tangibles and reliability dimensions received the heaviest weightings with 28.25 and 24.80 points respectively, followed by the responsiveness, assurance, and empathy dimensions with 20.18, 14.16, and 12.61 points respectively.

Table 5.3
Mean Student SERVQUAL Dimensional Importance Ratings

Service Quality Dimension   Student Importance Ratings   SD
Tangibles                   28.25                        17.16
Reliability                 24.80                        10.12
Responsiveness              20.18                        8.49
Assurance                   14.16                        7.05
Empathy                     12.61                        6.78

Note. N = 339

Mean Ratings of SERVQUAL Survey Items: Manager Sample

The instructions for the manager instrument asked managers to predict how they thought students responded to the 22 SERVQUAL expectation items. Table 5.4 contains a summary of the managers' mean expectation scores for the five service quality dimensions.

Table 5.4
Mean SERVQUAL Expectation Ratings and Managers' Gap 1 Scores

Service Quality   Student        Manager Predictions of   Gap 1
Dimension         Expectations   Student Expectations     Score
Tangibles         5.62           5.60                     -0.02
Reliability       6.46           6.01                     -0.45
Responsiveness    6.27           6.02                     -0.25
Assurance         6.24           6.13                     -0.11
Empathy           5.98           5.79                     -0.19
Overall           6.11           5.91                     -0.20

Note. Gap 1 Score obtained by subtracting mean student expectations from mean manager expectations.

Mean manager ratings ranged from a high of 6.13 for the assurance dimension to a low of 5.60 for the tangibles dimension. As seen in Table 5.4, managers underestimated student expectations for service quality across all five dimensions. The fourth column of Table 5.4 displays the Gap 1 score, which is the difference between manager and student expectation ratings.
The largest Gap 1 scores were found for the reliability and responsiveness dimensions, followed by empathy, assurance, and tangibles.

Managers were also asked to predict how students responded on the dimensional importance section of the SERVQUAL instrument. The results are displayed in Table 5.5. Managers once again exhibited inaccurate understandings of the relative importance students assigned to each dimension, underestimating the importance of the tangibles and reliability dimensions and overestimating the importance of the empathy, assurance, and responsiveness dimensions.

Table 5.5
Mean SERVQUAL Dimensional Importance Ratings: Student Ratings Versus Manager Ratings

Service Quality   Student Importance   Manager Predictions of   Difference
Dimension         Ratings              Student Importance
Tangibles         28.25                25.17                    -3.08
Reliability       24.80                22.45                    -2.35
Responsiveness    20.18                21.21                    1.03
Assurance         14.16                16.20                    2.04
Empathy           12.61                14.96                    2.35

Note. Students (n = 339); Managers (n = 107)

Findings Related to the Psychometric Performance of the SERVQUAL Instrument for the Measurement of Gap 5: Hypotheses 1a, 1b, 1c, and 1d

One of the major goals of the present study was to test the psychometric performance of the SERVQUAL instrument for the measurement of Gap 5 when administered in a nonprofit higher education housing program. As discussed in Chapter 2, the instrument's high reliability has generally been confirmed in the service quality literature, while mixed results have been reported with respect to its convergent validity, that is, the extent to which scale items converge on a five-dimensional structure. The study's findings related to the SERVQUAL instrument's reliability and construct validity are presented below.

Hypothesis 1a.
SERVQUAL Reliability

Reliability refers to the instrument's ability to demonstrate overall consistency as well as internal consistency among items within each of the five theorized dimensions. Based on extant research which has demonstrated reliability coefficients for the SERVQUAL scale ranging from .53 to .93 (Parasuraman et al., 1991b), Hypothesis 1a predicted that the SERVQUAL instrument would form a cohesive scale. Total scale reliability coefficients for the expectation items, perception items, and the perception-minus-expectation Gap 5 scores are presented in Table 5.6. As seen in Table 5.6, the expectation items, perception items, and Gap 5 items each attained a Cronbach alpha reliability coefficient of .93. With regard to the expectation items, the reliability coefficients for each of the five dimensions ranged from .70 to .88. The responsiveness dimension achieved the highest alpha (.88), followed by reliability (.86), empathy (.84), assurance (.78), and tangibles (.70).

Table 5.6
SERVQUAL Scale Reliability Analysis (Student Instrument)
Cronbach's Alpha Reliability Coefficients

                                    Reliability Coefficients (Alphas)
Service Quality   Number of   Expected Service   Perceived Service   Service Quality
Dimension         Items       Quality            Quality             Gap
Tangibles         4           0.704              0.683               0.655
Reliability       5           0.859              0.885               0.881
Responsiveness    4           0.880              0.837               0.846
Assurance         4           0.779              0.796               0.806
Empathy           5           0.844              0.852               0.851
Total Scale
Reliability       22          0.926              0.931               0.930

The reliability coefficients for each of the five dimensions for the perception items ranged from .89 to .68. The reliability dimension achieved the highest alpha (.89), followed by empathy (.85), responsiveness (.84), assurance (.80), and tangibles (.68). Lastly, the reliability coefficients for the Gap 5 scores ranged from .88 to .66.
The reliability dimension achieved the highest alpha (.88), followed by empathy (.85), responsiveness (.85), assurance (.81), and tangibles (.66). The results from this study fall within the range of .53 to .93 that has been reported in the service quality literature and within the range of .65 to .93 found in the pretest of the instrument in the pilot study. Furthermore, since the total scale reliability coefficients of .93 for the expectation, perception, and Gap 5 scores exceeded the minimum accepted level recommended by Nunnally (1978), Hypothesis 1a was supported; that is, the SERVQUAL instrument formed a cohesive scale.

SERVQUAL Construct Validity

Construct validity occurs when inferences can be made about underlying (latent) variables through the measurement of observed indicators (Heck, 1998). According to Cronbach and Meehl (1955), construct validation occurs when investigators develop instruments to reflect a particular construct and attach meanings to the construct (in the present study, service quality), which in turn allow for the generation of specific testable hypotheses that provide the means for the confirmation or disconfirmation of the claim. The first step toward establishing construct validity is the assessment of the face validity of the instrument. Face validity is a subjective criterion which reflects the extent to which scale items are meaningful and appear to represent the construct being measured (Parasuraman et al., 1991b). In the present study, face validity of the SERVQUAL instrument was established in the pilot study phase of the research through discussions with senior administrators in the university housing program, in which instrument items were discussed and agreement was reached that the items were relevant and useful for the measurement of service quality in the housing program.
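The Cronbach's alpha coefficients reported above can be computed directly from item-level data using the standard formula alpha = (k / (k - 1)) * (1 - sum of item variances / variance of the total score). The sketch below is a hypothetical illustration with invented names and synthetic data, not the SPSS reliability procedure or study data.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for k items measured on the same respondents.
    `items` is a list of k columns, each a list of the respondents' scores
    on one item, aligned by respondent."""
    k = len(items)
    n = len(items[0])

    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Total score per respondent across the k items.
    totals = [sum(col[i] for col in items) for i in range(n)]
    return (k / (k - 1)) * (1.0 - sum(sample_var(col) for col in items)
                            / sample_var(totals))
```

Perfectly parallel items yield an alpha of 1; the .93 total-scale values reported above indicate items that covary strongly relative to their individual variances.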
The second step toward construct validation was the assessment of the instrument’s reliability. Findings pertaining to the instrument’s high reliability were Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 57 reported in the previous section. While the establishment of instrument reliability and face validity are necessary for the establishment of construct validity, additional analysis measures are required (Churchill, 1979). Hypotheses lb. lc. and Id. Construct Validity of the SERVQUAL Instrument Hypothesis lb. Hypothesis lb, in conjunction with lc and Id, was designed to assess the construct validity of the SERVQUAL instrument. In order to confirm the five-factor dimensionality of the construct predicted by Hypothesis lb, the perception-minus-expectation Gap 5 scores were factor analyzed. The factor analysis was constrained a priori to five factors due to the theorized five-factor structure of the instrument. The principal axis factoring method was utilized to extract the factors. The five-factor solution was then subjected to oblique rotation since it is the preferred method when factors are intercorrelated. Table 5.7 (next page) provides the rotated Gap 5 items with the factors (pattern matrix). The five factors in Table 5.7 account for 58.49% of the common variance among the observed variables. The mean for the final estimates of the communalities was .585, and all factors displayed eigenvalues greater than 1.0 with the exception of factor 4 (.883). The visual representation of the solution in the Scree plot of eigenvalues also supported a five-factor solution. Finally, the mean pairwise interfactor correlations was .350. Reproduced with permission of the copyright owner. Further reproduction prohibited without permission. 
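The eigenvalue and scree diagnostics cited above come from the correlation matrix of the item scores (the extraction and oblimin rotation themselves are typically run in a dedicated factor analysis routine). A minimal numpy sketch of the eigenvalue step, using a small synthetic correlation matrix rather than the study's 22-item data:

```python
import numpy as np

# Synthetic correlation matrix: four items forming two correlated pairs.
# A real analysis would build R with np.corrcoef on the Gap 5 item columns.
R = np.array([
    [1.0, 0.8, 0.0, 0.0],
    [0.8, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.8],
    [0.0, 0.0, 0.8, 1.0],
])

eigenvalues = np.sort(np.linalg.eigvalsh(R))[::-1]    # descending, as in a scree plot
n_factors_kaiser = int((eigenvalues > 1.0).sum())     # Kaiser criterion: eigenvalues > 1
pct_variance = 100 * eigenvalues / eigenvalues.sum()  # percent of variance per factor
```

Here the two correlated pairs produce exactly two eigenvalues above 1.0, mirroring how the eigenvalue and scree criteria were read against the five-factor solution in the text.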
Table 5.7
Factor Analysis of Gap 5 Scores
Pattern Matrix*

Survey Item   Factor 1   Factor 2   Factor 3   Factor 4   Factor 5
Item 16       0.754
Item 12       0.534
Item 17       0.518
Item 14       0.506
Item 15       0.449
Item 13       0.358
Item 2                   0.823
Item 1                   0.820
Item 8                              -0.952
Item 5                              -0.784
Item 7                              -0.744
Item 11                             -0.734
Item 10                             -0.667
Item 6                              -0.652
Item 9                              -0.378
Item 3                                         0.763
Item 4                                         0.428
Item 20                                                   -0.892
Item 18                                                   -0.792
Item 19                                                   -0.517
Item 21                                                   -0.448
Item 22                                                   -0.381
Note. Extraction Method: Principal Axis Factoring. Rotation Method: Oblimin with Kaiser Normalization.
*Rotation converged in 13 iterations.

Factor Correlation Matrix

Factor   1       2       3      4       5      Pct. of Variance   Cum. Pct.
1        1.00                                  41.07              41.07
2        0.21    1.00                          6.43               47.49
3        -0.55   -0.32   1.00                  6.19               53.68
4        0.22    0.33    0.09   1.00           2.66               56.34
5        -0.68   0.25    0.57   -0.27   1.00   2.15               58.49

As seen in Table 5.7, however, the factor structure differs from the theorized five-dimension structure predicted by Hypothesis 1b in two ways. First, the tangibles items split into two distinct dimensions, with items 1 and 2 (modern furniture, fixtures and appliances / visually appealing residential facilities) loading on factor 2, while tangibles items 3 and 4 (neat-appearing employees / attractive printed materials) loaded on factor 4. It should be noted that an identical tangibles split was reported by Parasuraman et al. (1991b). The second difference was that the responsiveness dimension split, with items 10 and 11 (employees will tell students exactly when services will be performed / employees will give prompt service to students) loading on a distinct reliability dimension (factor 3), while items 12 and 13 (employees will always be willing to help students / employees will never be too busy to respond to student requests) loaded on a distinct assurance dimension (factor 1).
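The blanks in the pattern matrix reflect the usual practice of suppressing small loadings and reading each item's dimension off its dominant loading. That bookkeeping can be sketched as follows (toy loadings for four items and two factors, not the values from Table 5.7):

```python
import numpy as np

# Toy pattern matrix: rows = items, columns = factors.
loadings = np.array([
    [0.75,  0.10],
    [0.52, -0.05],
    [0.08, -0.81],
    [0.12, -0.47],
])

CUTOFF = 0.35  # a common display threshold for suppressing small loadings

# Blank out loadings below the cutoff (NaN stands in for the blank cells).
display = np.where(np.abs(loadings) >= CUTOFF, loadings, np.nan)

# Assign each item to the factor with the largest absolute loading.
assigned_factor = np.abs(loadings).argmax(axis=1)
```

Note that the sign of a loading is irrelevant to this assignment; oblique rotations can flip the sign of an entire factor, which is why factors 3 and 5 in Table 5.7 carry uniformly negative loadings.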
The only dimension to load as predicted (no splits or overlap) was the empathy dimension, where items 18-22 all loaded on factor 5. In sum, the items theorized to constitute the empathy, reliability, and assurance dimensions each loaded as predicted on separate factors. On the other hand, the tangibles dimension split into two distinct dimensions, and the responsiveness dimension split and overlapped with the assurance and reliability dimensions. Thus, due to the failure of the SERVQUAL Gap 5 items to load on a five-factor solution as predicted, Hypothesis 1b was not supported. As noted above, similar mixed findings with regard to the dimensionality of the SERVQUAL instrument have been reported in the literature (Carman, 1990; Babakus and Boller, 1991; Orwig et al., 1997). The decision was therefore made to further assess the dimensionality of the instrument. Paired-sample t-tests comparing the points students allocated (importance weightings) across the five SERVQUAL dimensions were performed to determine whether students distinguished between the five dimensions. As seen in Table 5.8, results indicated that pairings across all five dimensions were significantly different from each other at the p<.01 level.

Table 5.8
Paired Sample t-Tests for Equality of Means of Student SERVQUAL Dimensional Importance Weightings
                             First Item         Second Item
Dimension Pair               Mean      SD       Mean      SD      t        Sig. (2-tailed)
Tangibles-Reliability        28.25     17.16    24.80     10.12   2.612    0.009
Tangibles-Responsiveness     28.25     17.16    20.18     8.49    6.405    0.000
Tangibles-Assurance          28.25     17.16    14.16     7.05    11.971   0.000
Tangibles-Empathy            28.25     17.16    12.61     6.78    13.770   0.000
Reliability-Responsiveness   24.80     10.12    20.18     8.49    6.729    0.000
Reliability-Assurance        24.80     10.12    14.16     7.05    15.458   0.000
Reliability-Empathy          24.80     10.12    12.61     6.78    17.489   0.000
Responsiveness-Assurance     20.18     8.49     14.16     7.05    10.637   0.000
Responsiveness-Empathy       20.18     8.49     12.61     6.78    12.872   0.000
Assurance-Empathy            14.16     7.05     12.61     6.78    3.331    0.001
Note. N = 339; df = 338.

Finally, factor analysis was performed on the weighted Gap 5 scores. Results are presented in Table 5.9. As seen in the table, all weighted Gap 5 items loaded as predicted on five separate factors. The amount of common variance accounted for by the five factors increased from 58.49% to 64.51%. Collectively, these findings provided additional indirect support for the dimensionality of the SERVQUAL instrument.

Hypothesis 1c. Another assessment of construct validity involved the examination of the relationship between a separate measure of overall program service quality (OSQ) and the weighted Gap 5 scores along the five hypothesized dimensions. Hypothesis 1c predicted a significant relationship between mean weighted Gap 5 scores for tangibles, reliability, responsiveness, assurance, and empathy (independent variables) and the students' overall evaluation of program service quality (dependent variable). Findings from the regression model are displayed in Table 5.10. As seen in Table 5.10, the adjusted R-square for the model was .335. The model also indicated that at least one of the independent variables in the regression was statistically significant (F-value = 35.08, p<.005).
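The adjusted R-square and standardized betas reported for this model can be computed directly from an ordinary least squares fit. A minimal numpy sketch (hypothetical noise-free data, with two predictors standing in for the five weighted gap scores):

```python
import numpy as np

# Hypothetical data: y depends exactly on x1 and x2.
x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
x2 = np.array([1.0, 0.0, 1.0, 0.0, 1.0, 0.0])
y = 2.0 * x1 + 1.0 * x2

X = np.column_stack([np.ones_like(x1), x1, x2])  # add intercept column
coef, *_ = np.linalg.lstsq(X, y, rcond=None)     # OLS estimates

resid = y - X @ coef
n, p = len(y), 2                                 # p = number of predictors
ss_res = (resid ** 2).sum()
ss_tot = ((y - y.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)    # penalize for predictor count

# Standardized beta: unstandardized slope scaled by sd(x) / sd(y).
beta_x1 = coef[1] * x1.std(ddof=1) / y.std(ddof=1)
```

The multicollinearity check reported later in this section follows the same machinery: the VIF for each predictor is 1 / (1 - R-square) from regressing that predictor on the remaining predictors.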
Upon examination of the t-statistics and standardized beta coefficients of the independent variables, it was concluded that the tangibles, reliability, responsiveness, assurance, and empathy dimensions were all statistically significant at the .01 level or better. The empathy dimension displayed the strongest beta coefficient (β = .24), followed by the responsiveness dimension (β = .21), the tangibles dimension (β = .17), the reliability dimension (β = .15), and the assurance dimension (β = .14).

Table 5.9
Factor Analysis of Weighted Gap 5 Scores
Pattern Matrix*

Survey Item   Factor 1      Factor 2        Factor 3           Factor 4      Factor 5
              (Tangibles)   (Reliability)   (Responsiveness)   (Assurance)   (Empathy)
Item 1        0.871
Item 2        0.984
Item 3        0.379
Item 4        0.225
Item 5                      0.925
Item 6                      0.727
Item 7                      0.836
Item 8                      0.878
Item 9                      0.493
Item 10                                     -0.695
Item 11                                     -0.813
Item 12                                     -0.785
Item 13                                     -0.847
Item 14                                                        0.650
Item 15                                                        0.688
Item 16                                                        0.782
Item 17                                                        0.852
Item 18                                                                      0.855
Item 19                                                                      0.647
Item 20                                                                      0.915
Item 21                                                                      0.853
Item 22                                                                      0.860
Note. Extraction Method: Principal Axis Factoring. Rotation Method: Oblimin with Kaiser Normalization.
*Rotation converged in 8 iterations.

Factor Correlation Matrix

Factor   1       2       3      4       5      Pct. of Variance   Cum. Pct.
1        1.00                                  34.10              34.10
2        0.31    1.00                          11.33              45.43
3        0.00    0.16    1.00                  8.49               53.91
4        0.30    0.51    0.10   1.00           5.90               59.82
5        -0.58   -0.39   0.05   -0.36   1.00   4.70               64.51

Table 5.10
Regression Model for Overall Service Quality (OSQ) and Weighted Gap 5 Scores

ANOVA
Model     Adjusted R Square   F        Significance
Model 1   0.335               35.080   0.000
Predictors: (Constant), Weighted Tangibles Gap, Weighted Reliability Gap, Weighted Responsiveness Gap, Weighted Assurance Gap, Weighted Empathy Gap.
Dependent Variable: Overall Service Quality

Model                         Standardized Beta Coefficients   t   Sig.
Model 1 (Constant)                                     54.278   0.000
Weighted Tangibles Gap        0.171                    3.835    0.000
Weighted Reliability Gap      0.146                    2.614    0.009
Weighted Responsiveness Gap   0.209                    3.557    0.000
Weighted Assurance Gap        0.138                    2.626    0.009
Weighted Empathy Gap          0.235                    4.369    0.000

Several tests were performed to determine whether assumptions relating to ordinary least squares regression were violated. First, multicollinearity diagnostics were run on the model and indicated that multicollinearity was not a threat, since all variance inflation factors (VIFs) were less than 10 and tolerance statistics were greater than .10. Second, individual plots between the independent and dependent variables were inspected and linearity confirmed. Third, inspection of a normal probability plot confirmed that the residuals were from a normal distribution. The constant variance of errors assumption was confirmed through a Levene's test where the null hypothesis of equal variance of errors was accepted (F = 1.734, p>.10). And finally, examination of predicted residuals against the model revealed several outliers (observations 68, 215, and 323). These cases were excluded from a second dataset, and the regression model was rerun to determine the effects of excluding the outliers. Exclusion of the outliers improved the performance of the model slightly by increasing the adjusted R-square to .358 (F = 38.330, p<.005); all independent variables remained significant. Thus, Hypothesis 1c (weighted Gap 5 scores are significant predictors of overall service quality) was accepted.

Hypothesis 1d.
A final assessment of construct validity was conducted by examining whether weighted Gap 5 scores were associated with the responses students gave to the following three questions: (1) whether they had experienced a problem with the housing program (item 56); (2) if they had experienced a problem, whether the problem was resolved satisfactorily (item 57); and (3) whether they would eagerly recommend the housing program to a friend (item 62). Hypothesis 1d predicted that there would be significant differences in the weighted Gap 5 scores of the two groups of students formed by their responses to each of the three questions. One-way analysis of variance was used to analyze potential differences between the respondent groups. The results for the "Have you experienced a problem?" question are presented in Table 5.11.

Table 5.11
Analysis of Variance of Student Responses to Item 56: "Have you experienced a problem?"

                     Sum of Squares   df    Mean Square   F        Sig.
Tangibles (Mean "No" = -0.345; Mean "Yes" = -0.362)
  Between Groups     0.024            1     0.024         0.079    0.779
  Within Groups      100.875          336   0.300
  Total              100.899          337
Reliability (Mean "No" = -0.364; Mean "Yes" = -0.763)
  Between Groups     12.479           1     12.479        63.395   0.000
  Within Groups      66.139           336   0.197
  Total              78.618           337
Responsiveness (Mean "No" = -0.298; Mean "Yes" = -0.569)
  Between Groups     5.729            1     5.729         35.089   0.000
  Within Groups      54.858           336   0.163
  Total              60.587           337
Assurance (Mean "No" = -0.150; Mean "Yes" = -0.265)
  Between Groups     1.043            1     1.043         17.344   0.000
  Within Groups      20.201           336   0.060
  Total              21.243           337
Empathy (Mean "No" = -0.162; Mean "Yes" = -0.309)
  Between Groups     1.690            1     1.690         22.337   0.000
  Within Groups      25.415           336   0.076
  Total              27.105           337

The results displayed in Table 5.11 indicated that, with the exception of the tangibles dimension, statistically significant differences (p<.005) existed between the students who experienced a problem ('yes' group) versus students who did not report a problem ('no' group).
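One-way ANOVAs like those in Tables 5.11 through 5.13 partition the variance of the weighted gap scores into between-group and within-group components; with scipy this reduces to a single call (the group values below are hypothetical):

```python
import numpy as np
from scipy.stats import f_oneway

# Hypothetical weighted Gap 5 scores for the two response groups on one dimension.
no_problem = np.array([1.0, 2.0, 3.0])
had_problem = np.array([4.0, 5.0, 6.0])

result = f_oneway(no_problem, had_problem)
# result.statistic is the F ratio (mean square between / mean square within);
# result.pvalue is its significance.
```

With two groups the F ratio equals the square of the corresponding independent-samples t statistic, so the ANOVA and t-test approaches used in this chapter are interchangeable for these comparisons.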
Upon inspection of the means for the weighted Gap 5 scores, larger Gap 5 scores (i.e., lower perceived service quality) were consistently reported in the 'yes' group, as was hypothesized. The results for the "If you experienced a problem, was the problem resolved satisfactorily?" question are presented in Table 5.12. This time, statistically significant differences were found at the p<.05 level or better across all five dimensions of service quality.

Table 5.12
Analysis of Variance of Student Responses to Item 57: "If you experienced a problem, was it resolved?"

                     Sum of Squares   df    Mean Square   F        Sig.
Tangibles (Mean "No" = -0.425; Mean "Yes" = -0.281)
  Between Groups     1.125            1     1.125         4.878    0.028
  Within Groups      49.794           216   0.231
  Total              50.918           217
Reliability (Mean "No" = -0.844; Mean "Yes" = -0.639)
  Between Groups     2.260            1     2.260         9.169    0.003
  Within Groups      53.245           216   0.247
  Total              55.506           217
Responsiveness (Mean "No" = -0.673; Mean "Yes" = -0.420)
  Between Groups     3.429            1     3.429         19.754   0.000
  Within Groups      37.490           216   0.174
  Total              40.919           217
Assurance (Mean "No" = -0.299; Mean "Yes" = -0.214)
  Between Groups     0.387            1     0.387         6.236    0.013
  Within Groups      13.388           216   0.062
  Total              13.774           217
Empathy (Mean "No" = -0.347; Mean "Yes" = -0.253)
  Between Groups     0.470            1     0.470         5.164    0.024
  Within Groups      19.678           216   0.091
  Total              20.149           217

Students who had problems resolved satisfactorily ('yes' group) consistently perceived service quality to be higher, as represented by smaller Gap 5 scores, than the students who experienced a problem yet felt that the problem was not resolved satisfactorily ('no' group). Finally, the results for the "Would I eagerly recommend the housing program to a friend?" question are presented in Table 5.13 below.
Table 5.13
Analysis of Variance of Student Responses to Item 62: "Would you eagerly recommend the housing program to a friend?"

                     Sum of Squares   df    Mean Square   F        Sig.
Tangibles (Mean "No" = -0.432; Mean "Yes" = -0.280)
  Between Groups     1.956            1     1.956         6.663    0.010
  Within Groups      98.952           337   0.294
  Total              100.908          338
Reliability (Mean "No" = -0.745; Mean "Yes" = -0.491)
  Between Groups     5.486            1     5.486         25.251   0.000
  Within Groups      73.220           337   0.217
  Total              78.707           338
Responsiveness (Mean "No" = -0.596; Mean "Yes" = -0.344)
  Between Groups     5.352            1     5.352         32.552   0.000
  Within Groups      55.410           337   0.164
  Total              60.762           338
Assurance (Mean "No" = -0.291; Mean "Yes" = -0.156)
  Between Groups     1.537            1     1.537         26.275   0.000
  Within Groups      19.707           337   0.058
  Total              21.244           338
Empathy (Mean "No" = -0.345; Mean "Yes" = -0.167)
  Between Groups     2.674            1     2.674         36.758   0.000
  Within Groups      24.518           337   0.073
  Total              27.192           338

Since this item was scored on a seven-point scale anchored by strongly disagree (1) and strongly agree (7), students who responded with 1, 2, 3, or 4 were collapsed into a 'No' group, while students who responded with 5, 6, or 7 were collapsed into a 'Yes' group. The decision to cut the scale at 5 for the 'Yes' group was made due to the word 'eagerly' contained in the question. Once again, statistically significant differences (p=.01 or better) were found between the two groups across all five SERVQUAL dimensions. Thus, the statistically significant findings presented in Tables 5.11-5.13 supported Hypothesis 1d that significant differences exist in perceived service quality between students who experienced a problem, had a problem resolved satisfactorily, and were eager to recommend the housing program to a friend, versus students who answered these questions in the opposite manner.

Summary of Findings Related to the Psychometric Performance of the SERVQUAL Instrument: Hypotheses 1a, 1b, 1c,
and 1d

The findings presented above on the psychometric performance of the SERVQUAL instrument provide support for its construct validity. This study confirmed the high reliability of the overall SERVQUAL scale (Hypothesis 1a). Although factor analysis of the Gap 5 scores failed to support the predicted dimensionality of the instrument (Hypothesis 1b), the results reported here were consistent with findings in the literature, namely, the existence of two distinct tangibles dimensions and the overlap of the responsiveness dimension onto the assurance and reliability dimensions. However, paired sample t-tests of the responses to the dimensional importance items provided statistically significant results that students perceived five distinct dimensions. Moreover, direct support for the dimensionality of the construct was provided by factor analytic results which showed that weighted Gap 5 scores loaded as predicted on all five original SERVQUAL dimensions. Finally, indirect support for the SERVQUAL scale's construct validity was provided by the findings in connection with Hypotheses 1c and 1d. Statistically significant results were obtained when an independent measure of service quality (OSQ) was regressed on the five weighted Gap 5 scores (Hypothesis 1c); and statistically significant differences were demonstrated to exist in the level of perceived service quality (as measured by weighted Gap 5 scores) between the group of students who had experienced a service failure, who had not had service problems resolved satisfactorily, and who were not eager to recommend the program to a friend, versus the group of students who had not experienced a service failure and who were eager to recommend the program to a friend (Hypothesis 1d).

Findings Related to Differences in Students' SERVQUAL Gap 5 Scores According to Gender,
Grade Classification, and Type of Residential Facility: Hypotheses 2a, 2b, and 2c

An important question for higher education program managers is whether significant differences exist in the service quality expectations and perceptions of students when segmented on the basis of demographic characteristics. In the case of university housing programs, it is also important for program managers to understand whether significant differences exist in students' perceptions of service quality based on the type of residential facility. The following section reports the findings in connection with these research questions as formally presented in Hypotheses 2a, 2b, and 2c.

Hypothesis 2a. Gap 5 Differences According to Gender

Hypothesis 2a predicted no significant differences in students' weighted Gap 5 scores based on gender. An independent t-test was performed to test for significant mean differences in the weighted Gap 5 scores. The Levene's test for equality of variances was also performed to determine whether the separate-variance or pooled-variance t-test model would be utilized. The null hypothesis in the Levene's test assumes that the two population variances are equal; failure to reject (a high p value) indicates that the pooled-variance t-test should be used. Results are presented in Table 5.14. As seen in Table 5.14, the mean overall weighted Gap 5 score for males was -1.71 as compared to -2.04 for females. The F value for the Levene's test did not achieve significance (p=.333); therefore the pooled-variance t-test was used. The t-value achieved significance (p<.05); thus, Hypothesis 2a was rejected in favor of the alternative hypothesis which assumed that significant differences do indeed exist between men and women.
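The Levene-then-t-test decision procedure described above maps directly onto scipy's `levene` and `ttest_ind` with its `equal_var` flag. A sketch with hypothetical male/female weighted gap scores (not the study's data):

```python
import numpy as np
from scipy.stats import levene, ttest_ind

# Hypothetical weighted overall Gap 5 scores for two groups.
male = np.array([-1.2, -1.8, -1.5, -2.1, -1.6, -1.4])
female = np.array([-2.2, -2.6, -1.9, -2.4, -2.8, -2.3])

lev = levene(male, female)
# A high Levene p-value means we fail to reject equal variances,
# so the pooled-variance (equal_var=True) t-test is appropriate.
equal_var = lev.pvalue > 0.05
result = ttest_ind(male, female, equal_var=equal_var)
```

When `equal_var=False`, scipy runs the separate-variance (Welch) form used for the responsiveness dimension in Table 5.15.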
In the present case, the results indicated that women have significantly larger Gap 5 scores, an indication of lower perceived service quality.

Table 5.14
t-Test for Equality of Means in Weighted Overall Gap 5 Scores by Gender

Weighted Overall Gap 5   N     Mean    SD      t       Sig. (2-tailed)
Male                     127   -1.71   1.326   2.412   0.016
Female                   212   -2.04   1.177

To further explore this significant finding, an independent sample t-test was performed to determine if significant differences existed for each of the five dimensions of service quality between men and women. Pooled-variance t-tests were performed for the weighted Gap 5 scores for the tangibles, reliability, assurance, and empathy dimensions, and a separate-variance t-test was performed for the responsiveness dimension. Results are presented in Table 5.15. Based on significant t-statistics for the responsiveness (p<.005) and assurance dimensions (p<.05), it was concluded that women perceived lower service quality in the responsiveness and assurance dimensions.

Table 5.15
t-Test for Equality of Means in Weighted Gap 5 Scores by Gender and Dimension

Weighted Gap 5    Mean    SD      t        Sig. (2-tailed)
Tangibles
  Male            -0.37   0.613   -0.266   0.791
  Female          -0.35   0.500
Reliability
  Male            -0.56   0.504   1.659    0.098
  Female          -0.65   0.468
Responsiveness
  Male            -0.36   0.378   3.847    0.000
  Female          -0.53   0.438
Assurance
  Male            -0.18   0.232   2.361    0.019
  Female          -0.25   0.256
Empathy
  Male            -0.24   0.296   0.680    0.497
  Female          -0.26   0.260
Note. Male (n = 127); Female (n = 212).

Hypothesis 2b. Gap 5 Differences According to Grade Classification

Hypothesis 2b assumed that there would be no differences in weighted Gap 5 scores according to grade classification (freshman, sophomore, junior, senior, graduate). One-way analysis of variance was conducted to test for significant differences between each of the groups. As seen in Table 5.
16, the F ratio achieved significance (p<.05); thus, Hypothesis 2b was rejected and it was concluded that significant differences existed in weighted overall Gap 5 service quality scores based on grade classification. Seniors reported the largest weighted Gap 5 scores, followed by sophomores, juniors, freshmen, and graduate students.

Table 5.16
Analysis of Variance of Weighted Overall Gap 5 Scores by Grade Classification

Weighted Overall Gap 5   N     Mean    SD
Freshman                 145   -1.75   1.08
Sophomore                83    -2.18   1.43
Junior                   44    -1.92   1.39
Senior                   32    -2.30   1.15
Graduate                 35    -1.66   1.14
Total                    339   -1.92   1.24

ANOVA            Sum of Squares   df    Mean Square   F       Sig.
Between Groups   16.792           4     4.198         2.772   0.027
Within Groups    505.887          334   1.515
Total            522.678          338

Hypothesis 2c. Gap 5 Differences According to Residential Facility Type

Hypothesis 2c assumed that there would be no differences in weighted Gap 5 scores according to residential facility type (apartment, residential college, residence hall). One-way analysis of variance was conducted to test for significant differences between each of the groups. As seen in Table 5.17 below, the F ratio did not achieve significance (p>.05). Hypothesis 2c was therefore accepted and it was concluded that there were no significant differences in students' perception of service quality (Gap 5) across the three types of residential facilities.

Table 5.17
Analysis of Variance of Weighted Overall Gap 5 Scores by Residential Facility Type

Weighted Overall Gap 5   N     Mean    SD
Residence Hall           97    -1.81   1.24
Apartment                195   -2.00   1.31
Residential College      47    -1.83   0.97
Total                    339   -1.92   1.24

ANOVA            Sum of Squares   df    Mean Square   F       Sig.
Between Groups   2.846            2     1.423         0.920   0.400
Within Groups    519.833          336   1.547
Total            522.678          338

Findings Related to Managers' Understanding of Students' Expectations for Service Quality (Gap 1): Hypothesis 3

Manager Instrument Scale Reliability

The second major goal of the study was to determine whether a gap existed between students' expectations for service quality and managers' understanding of those expectations. In order to test Hypothesis 3, data were collected through the administration of the manager survey instrument. As discussed in Chapter 3, the manager instrument contained the first two sections from the customer SERVQUAL instrument along with multiple items designed to measure contributing factors to Gap 1. Table 5.18 contains the Cronbach's alpha reliability coefficients for the manager instrument.

Table 5.18
Scale Reliability Analysis (Manager Instrument)
Cronbach's Alpha Reliability Coefficients

Expected Service Quality
Service Quality Dimension        Number of Items   Reliability Coefficients (Alphas)
Tangibles                        4                 0.695
Reliability                      5                 0.851
Responsiveness                   4                 0.827
Assurance                        4                 0.740
Empathy                          5                 0.804
Total Scale Reliability          22                0.914

Gap 1
Gap 1 Contributing Factor        Number of Items   Reliability Coefficients (Alphas)
Marketing Research Orientation   4                 0.695
Upward Communication             4                 0.663
Levels of Management             1                 --
Total Scale Reliability          9                 0.774

The total scale reliability for the managers' SERVQUAL expectations section was .914. Reliability coefficients for the five service quality dimensions ranged from .695 for tangibles to .851 for the reliability dimension.
The scale reliability results were generally consistent with the customer expectation reliability coefficients reported earlier; the managers' tangibles reliability coefficient was slightly lower when compared to the customer instrument (.695 versus .704), as was the responsiveness coefficient (.827 versus .880). Alpha reliability coefficients for the scale items designed to measure Gap 1 contributing factors are also displayed in Table 5.18. The total Gap 1 scale reliability coefficient was .774, while the marketing research orientation alpha was .695 and the upward communication alpha was .663. The reliability coefficients for this part of the manager instrument were lower and are reflective of the more exploratory nature of the research with regard to Gap 1. However, the alpha reliability coefficients are within the range (.50 to .60) considered acceptable by Nunnally (1967:226): "In the early stages of research on predictor tests or hypothesized measures of a construct, one saves time and energy by working with instruments that have only modest reliability, for which purpose reliabilities of .60 or .50 will suffice."

Hypothesis 3. Managers' Understanding of Students' Expectations for Service Quality (Gap 1)

An independent samples t-test was conducted to test for significant differences in the weighted overall expectation scores of the manager and student samples. The results are displayed in Table 5.19. The weighted overall expectation score was analyzed because it not only captured potential differences across the 22 expectation items but also differences in the relative importance students and managers assigned to each of the five service quality dimensions. The F value for the Levene's test did not achieve significance (p=.606); therefore the
pooled-variance t-test was used. The t-test of the students' mean weighted expectation score of 6.14 compared to the managers' mean score of 5.96 achieved significance (t-value = 2.44, p<.05). Thus, Hypothesis 3 was supported, and it was concluded that students' overall expectations for service quality were significantly higher than managers' predictions of the students' expectations.

Table 5.19
t-Test for Equality of Means Between Student Expectations for Service Quality and Managers' Predictions of Student Expectations

Weighted Expectations   Mean   SD      t        Sig. (2-tailed)
Overall
  Student               6.14   0.689   2.441    0.015
  Manager               5.96   0.641
Tangibles
  Student               1.62   1.050   1.860    0.064
  Manager               1.43   0.879
Reliability
  Student               1.60   0.681   3.555    0.000
  Manager               1.36   0.591
Responsiveness
  Student               1.27   0.571   -0.311   0.756
  Manager               1.29   0.525
Assurance
  Student               0.89   0.466   -2.196   0.029
  Manager               1.00   0.460
Empathy
  Student               0.76   0.445   -2.275   0.023
  Manager               0.88   0.513
Note. Students (n = 339); Managers (n = 107).

To further explore this significant finding, an independent samples t-test was performed to examine differences in service quality expectations according to the five SERVQUAL dimensions. These results are also displayed in Table 5.19 above. The t-test for equality of means confirmed that students had significantly different expectations from managers on three of the five service quality dimensions (p<.05 level or better). Specifically, managers significantly underestimated the students' reliability expectations, whereas they overestimated the students' assurance and empathy expectations. Further support for this finding can be found in Table 5.20 below, which displays the results from an independent samples t-test for differences between the two groups on their dimensional importance ratings.
Table 5.20
t-Test for Equality of Means in Dimensional Importance Ratings Between Managers and Students

Dimension                   Mean    SD       t        Sig. (2-tailed)
Tangibles Importance
  Student                   28.25   17.155   1.831    0.069
  Manager                   25.17   14.497
Reliability Importance
  Student                   24.80   10.115   2.170    0.031
  Manager                   22.45   8.575
Responsiveness Importance
  Student                   20.18   8.491    -1.125   0.261
  Manager                   21.21   7.737
Assurance Importance
  Student                   14.16   7.046    -2.622   0.009
  Manager                   16.20   6.867
Empathy Importance
  Student                   12.61   6.779    -3.013   0.003
  Manager                   14.96   7.883
Note. Students (n = 339); Managers (n = 107).

As seen in Table 5.20, managers placed significantly less emphasis on the reliability dimension than did students, and managers significantly overrated the importance of the assurance and empathy dimensions (at the p<.05 level or better).

Findings Related to the Relationship Between Marketing Research Orientation, Upward Communication, and Levels of Management and the Size of Gap 1: Hypotheses 4a, 4b, and 4c

Gap 1 of the Extended Service Quality Model was operationalized as the difference between students' expectations for service quality and managers' predictions of the students' expectations for service quality across the 22 SERVQUAL items. Thus, to the extent that students' expectations exceeded management's perceptions of them, larger Gap 1 scores resulted, thereby indicating managers' inaccurate understanding of students' expectations for service quality. As seen in the results above, a statistically significant Gap 1 was demonstrated to exist in the housing program. Research question #4 theorized that three factors may contribute to the size of Gap 1. Table 5.21 below displays the means for the overall weighted Gap 1 score, marketing research orientation, upward communication, and levels of management for each of the subunit service providers, as well as for the overall manager sample.
Table 5.21
Managers' Scores for Overall Weighted Gap 1 and Factors Pertaining to Gap 1 by Subunit

                                     Weighted       Marketing Research   Upward          Too Many Levels
Subunit Service Provider             Gap 1 Score    Orientation          Communication   of Management
Office of Residential and
  Greek Life                         -0.07          4.16                 4.98            4.32
Housing and Residence Halls          -0.25          4.66                 4.96            2.45
Access Card Services                 -0.03          4.83                 5.53            1.90
Department of Public Safety          -0.33          4.39                 4.46            3.00
Facilities Management Services       -0.06          3.52                 3.92            3.81
All Subunits                         -0.18          4.26                 4.67            3.26
Note. N = 107.

Hypothesis 4a. Marketing Research Orientation and Gap 1

Hypothesis 4a stipulated that the size of Gap 1 would be negatively related to marketing research orientation. Marketing research orientation was operationalized as the mean of items 23, 24, 25, and 26 on the manager's instrument. As seen in Table 5.22, the Pearson correlation coefficient is quite low, not statistically significant, and the valence of the coefficient is not in the hypothesized direction. Thus, Hypothesis 4a was not supported.

Table 5.22
Pearson Correlation Coefficients for Relationship Between Managers' Gap 1 and Marketing Research Orientation, Upward Communication, and Levels of Management

                                               Marketing Research   Upward          Levels of
Variable                         Gap 1 Score   Orientation          Communication   Management
Gap 1 Score                      1.00
Marketing Research Orientation   0.06          1.00
Upward Communication             -0.05         0.59                 1.00
Levels of Management             -0.14         -0.34                -0.40           1.00
Note. Items in bold: correlation is significant at the .01 level (2-tailed); N = 107.

Hypothesis 4b. Upward Communication and Gap 1

Hypothesis 4b stipulated that the size of Gap 1 would be negatively related to upward communication.
Upward communication was operationalized as the mean of items 27, 28, 29, and 30 on the manager's instrument. As seen in Table 5.22, the Pearson correlation coefficient was again quite low and not statistically significant; however, the valence of the coefficient was in the direction hypothesized. Thus, although the valence of the relationship between Gap 1 and upward communication was correct, due to the lack of significance in the relationship, Hypothesis 4b was not supported.

Hypothesis 4c. Levels of Management and Gap 1

Hypothesis 4c predicted that the size of Gap 1 would be positively related to levels of management. While the relationship is somewhat stronger than the previous two factors, the valence of the relationship was not in the hypothesized direction, and the Pearson correlation coefficient failed to achieve statistical significance. Consequently, Hypothesis 4c was not supported.

Findings Related to Student Satisfaction with Subunit Service Providers and the Size of Subunit Managers' Weighted Gap 1: Hypothesis 5

Hypothesis 5 predicted that the size of a subunit manager's weighted Gap 1 would be negatively related to students' satisfaction with the subunit's service. Students directly indicated their perceptions of subunit service satisfaction on customer survey items 45 through 55. Satisfaction with the Office of Residential and Greek Life (subunit 1) was indicated by item 53, Housing and Residence Halls (subunit 2) was represented by the mean of items 45-49, 54, and 55, Access Card Services (subunit 3) was indicated by item 51, the Department of Public Safety (subunit 4) was indicated by item 52, and Facilities Management Services (subunit 5) was indicated by item 50. Table 5.23 contains the Pearson correlation coefficient for the relationship between managers' weighted Gap 1 score and subunit satisfaction.
Table 5.23
Pearson Correlation Coefficients for Relationship Between Managers' Weighted Gap 1 Scores and Subunit Satisfaction

Variable               Weighted Gap 1 Score   Subunit Satisfaction
Weighted Gap 1 Score   1.00
Subunit Satisfaction   0.09                   1.00

Note. Items in bold: correlation is significant at the .01 level (2-tailed); N = 107

As seen in Table 5.23 above, the valence of the relationship was not in the hypothesized direction, and the Pearson correlation coefficient failed to achieve significance. Therefore, Hypothesis 5 was not supported.

Findings Related to the Relationship Between Subunit Service Satisfaction and Marketing Research Orientation, Upward Communication, and Levels of Management: Hypotheses 6a, 6b, and 6c

Hypotheses 6a, 6b, and 6c predicted that subunit service satisfaction would be positively related to marketing research orientation, positively related to upward communication, and negatively related to levels of management. Table 5.24 contains the Pearson correlation coefficients for the relationships.

Table 5.24
Pearson Correlation Coefficients for Relationship Between Subunit Satisfaction and Marketing Research Orientation, Upward Communication, and Levels of Management

Variable                         Subunit        Marketing Research   Upward          Levels of
                                 Satisfaction   Orientation          Communication   Management
Subunit Satisfaction              1.00
Marketing Research Orientation    0.28          1.00
Upward Communication              0.17          0.59                 1.00
Levels of Management             -0.14         -0.34                -0.40            1.00

Note. Items in bold: correlation is significant at the .01 level (2-tailed); N = 107

Hypothesis 6a. Subunit Satisfaction and Marketing Research Orientation

The results presented in Table 5.24 above indicated that, as hypothesized, marketing research orientation was positively related to subunit satisfaction. The
Pearson correlation coefficient (r = .28) achieved significance at the p = .01 level; thus, Hypothesis 6a was supported.

Hypotheses 6b and 6c. Subunit Satisfaction and Upward Communication and Levels of Management

As displayed in Table 5.24 above, the relationships between subunit satisfaction and upward communication and levels of management both failed to achieve statistical significance. Although the valences of the relationships were as hypothesized, due to the lack of significance, both Hypothesis 6b and Hypothesis 6c were not supported.

Findings Related to the Relationship Between Students' Perception of Service Quality (Gap 5) and Students' Satisfaction with Subunit Service Providers: Hypothesis 7

Hypothesis 7 predicted that students' weighted overall Gap 5 scores would be negatively related to satisfaction with subunit service providers. The findings with regard to Hypothesis 7 are presented in Table 5.25. The Pearson correlation coefficients in Table 5.25 indicated statistically significant relationships (at the p = .01 level) for each of the subunit service providers. Further, the valences for each of the relationships were negative, as Hypothesis 7 predicted. The Facilities Management Services subunit displayed the strongest Pearson correlation coefficient (r = -.48), followed by Housing and Residence Halls (r = -.42), the Department of Public Safety (r = -.30), Access Card Services (r = -.29), and Residential and Greek Life (r = -.16). Thus, Hypothesis 7 was supported.

Table 5.25
Pearson Correlation Coefficients for Relationship Between Weighted Gap 5 Scores and Subunit Service Satisfaction

Variable                         Weighted   Residential &   Housing &         Access Card   Dept. of        Facilities
                                 Overall    Greek Life      Residence Halls   Services      Public Safety   Mgmt. Services
                                 Gap 5
Weighted Overall Gap 5            1.00
Residential & Greek Life         -0.16      1.00
Housing & Residence Halls        -0.42      0.32            1.00
Access Card Services             -0.29      0.31            0.44              1.00
Department of Public Safety      -0.30      0.35            0.41              0.36          1.00
Facilities Management Services   -0.48      0.11            0.39              0.34          (illegible)     1.00

Note. Items in bold: correlation is significant at the .01 level (2-tailed); N = 339

Findings Related to the Relationship Between Service Quality (Gap 5), Satisfaction, Value, and Future Behavioral Intentions Measures: Hypotheses 8a, 8b, 8c, 8d, 8e

One of the most important and intriguing issues facing researchers is the interplay between the quality, satisfaction, and value constructs (Rust and Oliver, 1994) and the effects these constructs may have on future behavioral intentions measures. Hypotheses 8a through 8e were designed to provide insight into these relationships in the context of a university housing program. Specifically, Hypothesis 8a predicted that weighted Gap 5 scores would be negatively related to students' perception of overall program satisfaction; Hypothesis 8b predicted that Gap 5 would be negatively related to students' perception of overall program value; Hypothesis 8c predicted that Gap 5 would be negatively related to students' intent to remain in university housing; Hypothesis 8d predicted that Gap 5 would be negatively related to students' willingness to recommend university housing to a friend; and Hypothesis 8e predicted that Gap 5 would be negatively related to students' willingness to recommend the institution (university) to a friend. Findings related to the above hypotheses are presented in Table 5.26.

Table 5.26
Pearson Correlation Coefficients for Relationship Between Weighted Gap 5 Scores and Overall Satisfaction, Overall Value, Intent to Remain in Housing Program, Willingness to Recommend University Housing Program, and Willingness to Recommend University

Variable                              Weighted   Overall        Overall   Intent to   Willingness to   Willingness to
                                      Overall    Satisfaction   Value     Remain in   Recommend        Recommend
                                      Gap 5                               Housing     Housing          University
Weighted Overall Gap 5                 1.00
Overall Satisfaction                  -0.48       1.00
Overall Value                         -0.52       0.69          1.00
Intent to Remain in Housing            0.08       0.01          0.00       1.00
Willingness to Recommend Housing      -0.45       0.28          0.67       0.10        1.00
Willingness to Recommend University   -0.07       0.26          0.22      -0.06        0.30             1.00

Note. Items in bold: correlation is significant at the .01 level (2-tailed); N = 339

Hypotheses 8a and 8b. Weighted Overall Gap 5, Overall Satisfaction, and Overall Value

Hypotheses 8a and 8b expected overall satisfaction and overall value to be inversely related to the weighted overall Gap 5 scores. The results displayed in Table 5.26 indicate that the weighted overall Gap 5 scores were significantly related to the students' perceptions of overall satisfaction (r = -.48) and overall value (r = -.52) at the p = .01 level. Further, the valences of the Pearson correlation coefficients supported the hypothesized inverse nature of the relationships; thus, Hypotheses 8a and 8b were supported.

Hypothesis 8c. Weighted Overall Gap 5 and Students' Intent to Remain in the Housing Program

Hypothesis 8c predicted that large Gap 5 scores, an indicator of low perceived service quality, would be negatively related to students' desire to remain in the university housing program. The Pearson correlation coefficient displayed in Table 5.26 failed to achieve significance; furthermore, the valence of the coefficient was not in the hypothesized direction. Hypothesis 8c was therefore not supported.

Hypothesis 8d.
Weighted Overall Gap 5 and the Students' Willingness to Eagerly Recommend the Housing Program

Hypothesis 8d predicted that weighted overall Gap 5 scores would be inversely related to students' willingness to recommend the housing program. Table 5.26 results show that the Pearson correlation coefficient (r = -.45) for this relationship achieved significance at the p = .01 level, and the sign of the coefficient was in the predicted direction. Hypothesis 8d, that high perceived service quality would be related to a willingness to recommend the program, was therefore supported.

Hypothesis 8e. Weighted Overall Gap 5 and the Students' Willingness to Recommend the Institution

Finally, Hypothesis 8e expected that students' weighted overall Gap 5 scores would be negatively related to a willingness to recommend the university. Although the Pearson correlation coefficient displayed in Table 5.26 was in the hypothesized direction, it failed to achieve significance. Thus, Hypothesis 8e was not supported.

Summary of Findings Related to Hypotheses 8a, 8b, 8c, 8d, 8e

The relationships between weighted Gap 5 (an indicator of perceived service quality) and overall satisfaction, overall value, and a willingness to recommend the housing program all achieved significance at the p = .01 level. The Gap 5 relationships with intent to remain in the housing program and with a willingness to recommend the university did not achieve significance. Table 5.26 also indicates that students' intent to remain in the housing program was not significantly related to overall satisfaction, overall value, or a willingness to recommend the university. On the other hand, a willingness to eagerly recommend the housing program to a friend was significantly related to a willingness to recommend the university (r = .30).

Findings Related to the Conceptual Interrelationships Between Service Quality,
Value, and Satisfaction: Research Question #9

The literature and hypotheses in this study provided the starting point for the assessment of two alternative models designed to capture the causal relationships between the service quality, value, and satisfaction constructs. The technique used to evaluate the fit of these alternative models and the findings for each of the models are discussed in the next sections.

Analysis Technique and Model Evaluation

Structural equation modeling was the technique adopted to evaluate the fit of the two alternative models. Referred to as causal modeling, latent variable structural equation modeling (Joreskog and Sorbom, 1993), or analysis of covariance structures, structural equation modeling is a relatively recent innovation that allows researchers to graphically represent, estimate, and test a theoretical network of linear relations between variables (Rigdon, 1998). Structural equation modeling is particularly well suited for theory testing and empirical model building (Bollen and Long, 1993; Fornell and Larcker, 1981) due to its simultaneous estimation of multiple interrelated dependence relationships (Sergeant and Frenkel, 2000). And since standardized regression coefficients are produced for each of the relationships hypothesized by the model, indirect causal effects can be analyzed by multiplying the standardized beta coefficients of compound paths connecting two variables via intervening variables (Bohrnstedt and Knoke, 1994). In order to provide a range of statistical evidence for the validity of a model, four measures of model fit were used to assess the alternative models in this study: (1) the conventional chi-square statistic (Joreskog, 1969); (2) the goodness-of-fit index (GFI; Joreskog and Sorbom, 1984; Tanaka and Huba, 1985); (3) the root mean square error of
approximation (RMSEA; Browne and Cudeck, 1993; Steiger and Lind, 1980); and (4) the comparative fit index (CFI; Bentler, 1990). In the context of structural equation modeling, the chi-square statistic tests (within sampling error) whether the sample covariance matrix is equivalent to the model-implied covariance matrix. Since the proposed model represents the null hypothesis, the aim of the researcher is to not reject the null hypothesis, as indicated by p-values larger than .05 or .10 (Rigdon, 1998). A limitation of the chi-square statistic is that its behavior is a function of sample size: larger samples generate larger chi-square statistics, which favor rejection of a model. The goodness-of-fit index (GFI) is bounded by zero (0) and unity (1), where unity indicates perfect fit. Williams and Hazer (1986) have suggested that GFIs above .85 represent good model fit, and Tanaka and Huba (1985) have stated that models with GFI values less than .80 should be rejected. The RMSEA attempts to mitigate the sample size effect by shifting the focus from exact model fit to approximate model fit. Since RMSEA has in its denominator both sample size and degrees of freedom, the statistic removes sample size effects and produces a measure of nonfit per degree of freedom. Browne and Cudeck (1993) stated that RMSEA values between 0 and .05 are indicators of good fit, while values above .10 indicate poor fit. Finally, the comparative fit index (CFI) compares the proposed model to the independence (worst fitting) baseline model. In the independence model, each measure is modeled as being uncorrelated with every other measure. Since the CFI is a comparison to this poorly fitting baseline model, CFI values range from 0 to 1, where values near 1 imply that the model does a good job of explaining covariation among measures.
The most well-established decision rule for the CFI states that values equal to or greater than .90 indicate adequate fit (Rigdon, 1998).

Alternative Model Evaluation

The alternative models that were examined in this study are presented in Figures 5.1 and 5.2. The difference between the two models is the causal ordering of the overall satisfaction and overall weighted Gap 5 variables in the causal chain. As seen in Figure 5.1, Model A predicts that service quality (Gap 5) and overall value are antecedents of overall satisfaction; overall satisfaction in turn is hypothesized to have a direct effect on students' willingness to eagerly recommend the housing program to a friend. Empirical support for this perspective is provided by Cronin and Taylor (1992), and Anderson, Fornell and Lehmann (1993). As seen in Figure 5.2, in Model B the overall satisfaction variable is moved back in the causal chain and is replaced by the overall weighted Gap 5 variable. Model B predicts that overall satisfaction and overall value have direct effects on students' perception of service quality as represented by the size of the weighted Gap 5 variable; the weighted Gap 5 variable is then predicted to have a direct effect on students' willingness to recommend the program to a friend. The conceptual support for Model B is provided by Bolton and Drew (1991b), and Bitner (1990).

Figure 5.1
Structural Model 'A' (standardized estimates). Fit statistics: Chi-Square = 1.069, df = 1, p = .301, GFI = .998, RMSEA = .014, CFI = 1.000. Paths: WGAP5 -> OSAT = -.17; OVAL -> OSAT = .60; OVAL -> HRECOMM = .26; OSAT -> HRECOMM = .60; covariance WGAP5-OVAL = -.52; squared multiple correlations: OSAT = .49, HRECOMM = .64.

Figure 5.2
Structural Model 'B' (standardized estimates). Fit statistics: Chi-Square = 134.556, df = 1, p = .000, GFI = .859, RMSEA = .629, CFI = .803. Variables: OSAT, OVAL, WGAP5, HRECOMM.
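The fit statistics reported in Figures 5.1 and 5.2 can be screened against the cutoff conventions reviewed above. The helper below is only an illustrative sketch, not part of the study's analysis; the thresholds are the ones cited in the text (chi-square p > .05, GFI >= .85, RMSEA <= .05, CFI >= .90).

```python
# Apply the rule-of-thumb fit cutoffs discussed in the text to one model.
def assess_fit(chi2_p, gfi, rmsea, cfi):
    return {
        "chi2_p_ok": chi2_p > 0.05,   # fail to reject the proposed model
        "gfi_ok": gfi >= 0.85,        # Williams and Hazer (1986)
        "rmsea_ok": rmsea <= 0.05,    # Browne and Cudeck (1993)
        "cfi_ok": cfi >= 0.90,        # Rigdon (1998)
    }

# Fit statistics reported for Model A and Model B (Figures 5.1 and 5.2):
model_a = assess_fit(chi2_p=0.301, gfi=0.998, rmsea=0.014, cfi=1.000)
model_b = assess_fit(chi2_p=0.000, gfi=0.859, rmsea=0.629, cfi=0.803)
print(all(model_a.values()), all(model_b.values()))  # True False
```

Model A passes every screen; Model B clears only the GFI cutoff, consistent with its rejection in the text.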
In the models, direct effects are represented by straight, single-headed arrows; covariation is represented by curved, double-headed arrows. The numbers displayed next to the single-headed arrow paths are the resulting standardized regression coefficients (β) for the relationships between dependent and independent variables. The number next to the double-headed arrow path represents the estimated covariance between independent variables, and the number adjacent to each dependent variable is the squared multiple correlation (SMC). The SMC may be interpreted as the proportion of total variance in the dependent variable that is explained by all variables that the dependent variable relies on (Bacon, 2001). All β coefficient estimates in Model A and Model B were significant at the p < .005 level. The magnitudes of the standardized coefficients provide guidance as to the relative influence between the independent and dependent measures.

Model Fit

The hypothesized path models in Figures 5.1 and 5.2 were formally tested using the structural equation modeling technique. Model fit statistics are shown in each of the figures. Model A achieved a good level of fit. The chi-square statistic of 1.069 (p > .10) implies that the model is appropriately representative of the sample data. Furthermore, the goodness-of-fit index (GFI = .998) indicated excellent fit (values in excess of .85 indicate good fit), the root mean square error of approximation (RMSEA = .014) indicated a reasonable error of approximation (values less than .08 are considered acceptable), and the comparative fit index (CFI = 1.000) indicated that the model achieved good fit when compared to the independence model (values above .90 are considered acceptable).
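Model A's indirect and total effects (summarized in Table 5.27) follow from the compound-path rule described in the Analysis Technique section: multiply the standardized betas along a path through an intervening variable, then add the direct and indirect components. A quick arithmetic check using the Figure 5.1 estimates (the results match Table 5.27 to within rounding):

```python
# Verify Model A's effect decomposition from its standardized path estimates:
# WGAP5 -> OSAT = -0.17, OVAL -> OSAT = 0.60,
# OVAL -> HRECOMM = 0.26, OSAT -> HRECOMM = 0.60.
b_gap_osat, b_oval_osat = -0.17, 0.60
b_oval_rec, b_osat_rec = 0.26, 0.60

indirect_gap_rec = b_gap_osat * b_osat_rec       # -0.102, reported as -0.10
indirect_oval_rec = b_oval_osat * b_osat_rec     # 0.36
total_oval_rec = b_oval_rec + indirect_oval_rec  # 0.26 + 0.36 = 0.62
print(round(indirect_gap_rec, 2), round(indirect_oval_rec, 2), round(total_oval_rec, 2))
```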
Thus, based on the chi-square statistic in addition to the global fit indices, Model A was accepted as providing reasonable fit to the sample data. Model B, on the other hand, did not achieve a reasonable fit to the sample data. As seen in Figure 5.2, the chi-square statistic of 134.556 was significant (p < .005), indicating that the sample covariance matrix was not equivalent to the Model B-implied covariance matrix. Further, the global fit indices collectively indicated poor model fit (GFI = .859, RMSEA = .629, CFI = .803) based on generally accepted index values. Model B was therefore rejected in favor of Model A; that is, the causal ordering of service quality and value as antecedents of satisfaction provided superior fit and accounted for 64% of the variance in the students' willingness to eagerly recommend the program to a friend. Table 5.27 below displays a summary of the direct, indirect, and total effects for each relationship in Model A.

Table 5.27
Model A: Summary of Effects

Effect of   On        Direct Effect   Indirect Effect   Total Effect
WGAP5       OSAT      -0.17           --                -0.17
OVAL        OSAT       0.60           --                 0.60
WGAP5       HRECOMM   --              -0.10             -0.10
OVAL        HRECOMM    0.26            0.36              0.62
OSAT        HRECOMM    0.60           --                 0.60

Note. WGAP5 = Weighted Gap 5 Score; OVAL = Overall Value; OSAT = Overall Satisfaction; HRECOMM = Willingness to Eagerly Recommend Housing Program to a Friend; N = 339

CHAPTER SIX
Discussion

The primary purpose of this study was to empirically examine the Extended Service Quality Model in a nonprofit higher education firm. Following the recommendation of the developers of the model that research be undertaken which focuses on the individual gaps specified in the model, this study examined Gap 5, the service quality gap, and Gap 1, the marketing information gap.
The results obtained from the study also provide data on the generalizability and validity of the SERVQUAL construct for measuring student perceptions of service quality in a university housing program. Lastly, the study examined the model under the increasingly common condition in which multiple subunits are involved in the production and delivery of a service. The first section of this chapter provides a summary of the results obtained in connection with the research hypotheses. In the second section, the practical implications of these findings are discussed in the context of nonprofit higher education and public sector organizations. The third section concludes with a discussion of some of the limitations of the study and suggested directions for future research.

Summary of Results

SERVQUAL Psychometric Performance

Research question #1 pertained to the psychometric performance of the SERVQUAL instrument. Overall, the SERVQUAL instrument performed well. Hypothesis 1a regarding the instrument's reliability was supported by high Cronbach's alpha reliability coefficients for the overall scale (.926 for the expectation items, .930 for the perception items, and .930 for the Gap 5 items). Strong reliability coefficient values were also reported for the reliability, responsiveness, assurance, and empathy dimensions (ranging from .885 to .779), while somewhat lower alphas were reported for the tangibles dimension (.704 to .655). The SERVQUAL expectation items on the manager instrument also performed well, with alpha values ranging from .914 to .695. With regard to convergent validity, mixed results were reported. Hypothesis 1b expected that the 22 Gap 5 items would load as specified onto five separate factors (service quality dimensions).
While the five-factor solution accounted for 58.49 percent of the common variance, the tangibles dimension split into two distinct dimensions, and the responsiveness items loaded on the reliability and assurance dimensions. However, when the weighted Gap 5 items were factor analyzed, all 22 items loaded as predicted on the five separate service quality dimensions. Additional support for the overall construct validity of the instrument was provided in the results for Hypotheses 1c and 1d. The regression model with the five weighted service quality dimensions as independent variables was effective in accounting for variation in a separate indicator of overall service quality (OSQ) (Hypothesis 1c). Further, when the students' mean Gap 5 responses were analyzed according to how they responded to a series of questions pertaining to service failure, service recovery, and a willingness to recommend the housing program, significant differences were found to exist (Hypothesis 1d). Specifically, students who had encountered service failures reported larger Gap 5 scores, indicative of lower perceptions of service quality. Collectively, these findings provided support for the reliability and construct validity of the SERVQUAL instrument in the context of a university housing program.

Student Segmentation Results

Research question #2 posited that there would be no significant differences in students' perceptions of service quality according to gender, grade classification, or type of residential facility (Hypotheses 2a, 2b, 2c). Significant findings were reported for gender: females expected higher service on the reliability and responsiveness dimensions and reported lower perceptions of overall service quality, as well as lower perceptions of quality on the responsiveness and assurance dimensions.
Additionally, significant differences were found in weighted Gap 5 scores segmented by grade classification, where seniors' and sophomores' perceptions of service quality were the lowest.

Gap 1 Results

Significant findings were reported in connection with Gap 1. Hypothesis 3 expected to find significant differences between students' expectations for service quality and managers' understanding of students' expectations. Not only did managers significantly underestimate students' overall service quality expectations and reliability expectations, but they also overestimated students' assurance and empathy expectations.

Contributing Factors to Gap 1

The overall findings with regard to the Extended Service Quality Model's theorized contributing factors to Gap 1 were disappointing. Hypotheses 4a, 4b, and 4c were all rejected; that is, no significant relationships were found to exist between the size of Gap 1 and marketing research orientation, upward communication, and levels of management. The analysis was also rerun to determine whether any of the individual items of each of the factors were significantly related to Gap 1, and this too produced insignificant results. It may be, as suggested by Parasuraman et al. (1991a), that insignificant variation in the Gap 1 scores could have contributed to these results.

Gap 1 and Subunit Satisfaction

Gap 1 scores were also analyzed in the context of subunit satisfaction ratings. No significant relationship was found between Gap 1 and subunit satisfaction ratings; thus, Hypothesis 5 was not supported. Research question #6 predicted that subunit
satisfaction would be correlated with the Gap 1 factors; subunit satisfaction was found to be significantly related to marketing research orientation (r = .28); however, upward communication and levels of management failed to achieve significant correlations with subunit satisfaction.

Summary of Results for Gap 5 and Subunit Satisfaction, Overall Value, Overall Satisfaction, and Future Behavioral Intentions Measures

The final two research questions shifted focus back to the students' global perceptions of program service quality, Gap 5, and the relationship with subunit satisfaction. Research question #7 predicted that Gap 5 would be significantly correlated with subunit satisfaction. Statistically significant relationships were reported for each of the subunits at the p < .01 level or better, thus supporting Hypothesis 7. Results were also reported for the relationships between service quality (Gap 5), overall satisfaction, overall value, intent to remain in housing, willingness to recommend the housing program, and willingness to recommend the university. Significant correlations were found between service quality and satisfaction, value, and willingness to recommend the housing program. The valences of the relationships were as hypothesized. Thus, Hypotheses 8a, 8b, and 8d were supported. The intent to remain in the housing program and the willingness to recommend the institution were not significantly correlated with Gap 5. Hypotheses 8c and 8e were therefore not supported. Significant correlations (p < .005) were found to exist, however, between institutional commitment (willingness to recommend the university) and overall satisfaction (r = .26), overall value (r = .22), and the students' willingness to eagerly recommend the housing program to a friend (r = .30).
Finally, the strong interrelationships between service quality, value, satisfaction, and willingness to recommend the housing program were examined to determine which causal ordering of the variables provided the best fit to the data. Chi-square tests and global fit statistics supported Model A, which positioned service quality and value as antecedents of satisfaction, with satisfaction directly affecting the students' willingness to recommend the program. Overall, Model A accounted for 64 percent of the total variance in the students' willingness to recommend the housing program.

Implications of Results

Nonprofit Higher Education Organizations

American universities exist in a highly competitive and stratified market where they compete for faculty, private donative resources, research funds, and students. Although the higher education firm is a very unusual economic institution, as demonstrated by Winston (2000), it is most like a typical firm when it comes to the need to provide high quality service to its customers. The implication of the results from the present study is that the SERVQUAL instrument, which has been thoroughly tested across a wide range of settings in the for-profit sector, does indeed generalize to a setting in a nonprofit higher education firm. In order to provide high service quality to customers, managers must understand the expectations of the customer. As Keith (1998:164) noted, "The word customer may sound overly commercial to members of academe. However, the word calls our attention to the constituencies we serve and reminds us that we need to meet their needs and expectations if we are to succeed." A key strength of the SERVQUAL instrument is that it does provide managers with clear, straightforward data on customer expectations across five different service quality dimensions.
By examining the mean expectation scores and dimensional importance ratings, and by comparing the customers' perception ratings to their expectations, managers can target the specific areas with the largest service quality gaps (Gap 5 scores). In the present study, students rated the tangibles dimension most important, followed by the reliability dimension. The reliability mean Gap 5 score was the largest of the five dimensions; thus, the reliability of the housing program services is an area that ought to be examined closely by management. On the other hand, the empathy dimension was the least important dimension to students. Evidently, students were more concerned about unreliable services than they were about caring, empathetic employees who could attend to their personal needs. In the context of a university housing program, this makes intuitive sense. In other administrative support settings across the university, the empathy dimension may play a more important role vis-a-vis the other service quality dimensions. An example of one such area would be student advisement and career counseling services. The SERVQUAL construct also provides university managers with key customer segmentation data. The findings from this study indicated that student expectations for service quality were high (mean overall rating = 6.11 on a 7-point Likert-type scale), and service quality perceptions did not vary significantly based on type of residential facility. However, significant differences in weighted Gap 5 scores were found based on class standing and gender. Thus, university managers need to be cognizant of such differences when designing and managing housing programs.
Specifically, managers should note that female students expected higher service quality than their male counterparts and perceived significantly lower service quality, as reflected in their larger overall Gap 5 score (-2.0 versus -1.71 for males). Sophomore and senior students' perceptions of service quality were also lower, as indicated by their larger overall Gap 5 scores.

The findings in connection with Gap 1 of the Extended Service Quality Model are also important for managers in the higher education sector. This study demonstrated that managers had an inaccurate understanding of students' expectations for service quality (statistically significant differences for the overall expectation score as well as for the reliability, assurance and empathy dimensions); managers also displayed an inaccurate understanding of the relative importance of the service quality dimensions. While this study was unable to demonstrate the direct association of the theorized contributing factors to Gap 1, the finding that marketing research orientation was correlated with mean subunit satisfaction (r=.28, p<.005) does provide evidence that the use of market research data is important in the design and delivery of services that satisfy students.

The final implication of the results of the study for the higher education sector is that subunit service satisfaction plays an important role in the student's global perception of program service quality. Students are served by a variety of subunits each day; thus, it is important for university management to understand the effect individual subunit satisfaction has on global perceptions of program service quality. In the present study, significant subunit satisfaction correlations with overall Gap 5 scores ranged from r=-.48 to r=-.16.
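The subunit correlations just cited are ordinary Pearson coefficients between paired series of ratings. A minimal pure-Python sketch of the computation follows; the paired values are invented for illustration (the study's reported coefficients came from its own survey responses, and the sign of the toy result carries no substantive meaning):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs) *
                    sum((y - my) ** 2 for y in ys))
    return num / den

# Hypothetical paired observations: per-student subunit satisfaction
# (7-point scale) and overall Gap 5 score.
subunit_satisfaction = [6, 5, 7, 4, 5, 6, 3, 5]
overall_gap5         = [-0.5, -1.2, -0.1, -2.0, -1.0, -0.4, -2.5, -1.1]

print(round(pearson_r(subunit_satisfaction, overall_gap5), 2))
```

In practice a statistics package (the study used SPSS) also supplies the significance test; the hand computation above only illustrates the coefficient itself.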
Thus, in terms of student satisfaction, some subunits take on a more important role in the student's view, and management should be aware of the interrelationships that exist at the subunit level.

Lastly, Astin (1993:278) noted that, "There appears to be a direct association between satisfaction and retention in college." The university housing program is but one component in the student's overall satisfaction assessment; however, it is an important one, as Astin (1993) found: "The three effects that are directly attributable to living in a campus residence hall are positive effects on attainment of the bachelor's degree, satisfaction with faculty, and willingness to re-enroll in the same college" (p. 367). This study confirms the above finding in that the student's intent to re-enroll was significantly correlated (at the p=.01 level) with overall satisfaction (r=.27), overall value (r=.24), overall service quality (r=.17), and a willingness to eagerly recommend the housing program (r=.26). The SERVQUAL service quality construct provides the necessary information for managers to assess and monitor service quality in a university housing program and is thus a tool to help drive student satisfaction.

Public Sector Organizations

The implication of the results of this study for public sector organizations is that it helps to validate the SERVQUAL service quality construct for use in settings other than the private sector. That is, evidence from this study supports the generalizability of the SERVQUAL instrument. Public managers are well aware of the need to assess the service quality perceptions of the constituencies they serve. Not unlike the higher education sector, the public sector has not typically viewed the citizens who use its services as customers.
However, this view is changing, as noted by Kline (2001): governments in the United Kingdom, Australia, and the United States have moved to put in place customer standards, customer relationship management systems, and "e-government" technologies to improve service delivery to their customers and to engage the public in more interactive and effective ways. Groups such as the Government Division of the American Society for Quality and the Government Performance Coalition are actively advocating for such advancements. And as noted in the first chapter, the American Customer Satisfaction Index (ACSI) was adapted for use by public sector organizations. The customer's service quality expectations and perceptions play a prominent role in the ACSI model and can easily be measured by the SERVQUAL instrument.

Ultimately, the aim of public managers ought to be to create public value, similar to how managers in the private sector aim to create private value (Moore, 1995). As demonstrated in this study, the SERVQUAL approach is a tool public managers can utilize to understand customer expectations and then implement service delivery programs which make the most efficient use of taxpayer dollars and which improve the overall effectiveness of public organizations.

Limitations and Directions for Future Research

In this concluding section, the limitations of the study are discussed, followed by recommendations for future service quality research. The first limitation, at the methodological level, involved the use of survey data as the sole source of information for customer expectations and perceptions of service quality across the five dimensions, as well as for satisfaction ratings on subunit service providers. Focus group sessions, structured interviews, and other supplemental sources of data could have strengthened the research design.
The same limitation applies to the manager sample, wherein similar data collection strategies could have been employed to provide richer data about the managers' understanding of customers' expectations and about the characteristics of their subunit organizations.

The second limitation, not uncommon in service quality research, was the reliance on self-reported information to draw conclusions about the quality of service that was delivered. Since the delivery of service is a performance by the customer-contact employee, key insights can be provided by these employees concerning their perceptions of the level of service quality. Indeed, Schneider and Bowen (1985) demonstrated that customer-contact employees can accurately predict customer expectations and perceptions of service quality.

The third limitation of the study was that the research focused on a single university housing program rather than on a broad cross-section of housing programs situated in universities with different Carnegie code classifications. Such a large-scale research design was beyond the capacity of this study; however, if such a study were to be pursued, it would likely require the assistance of a national professional housing officers association such as the Association of College and University Housing Officers-International.

Turning to future service quality research opportunities, one important avenue is the relationship between service quality and customer satisfaction, since there still seems to be a lack of consensus in the literature as to the causal direction of the relationship. Teas (1993) suggests that both service quality and customer satisfaction could potentially be viewed from a specific transaction basis as well as from a global firm perspective.
Since the SERVQUAL instrument was designed to measure customers' global perceptions of a firm's service quality, Parasuraman et al. (1994) suggest that the instrument be modified to assess transaction-level perceptions, which could then be aggregated and compared to global firm service quality perceptions.

With respect to future research on the SERVQUAL instrument, it might be fruitful to explore the split in the tangibles dimension that was noted in this and other studies. It may indeed be the case that customers perceive the items related to the appearance of the physical facilities quite differently from the items related to the appearance of the customer-contact employees, as was suggested by Parasuraman et al. (1991b).

Finally, with the evolution of electronic commerce, research is needed on how the five core dimensions of service quality may change when customers interact with organizations in an electronic medium. This is particularly important for universities as more administrative support services are moved to the World Wide Web. The challenge for managers remains the same, however; only through an accurate understanding of customer expectations can organizations design and deliver services which meet or exceed customer expectations, thus resulting in perceptions of high service quality.

REFERENCES

Anderson, E., Fornell, C., and Lehmann, D. (1993). Economic Consequences of Providing Quality and Customer Satisfaction. Cambridge: Marketing Science Institute.

Arbuckle, J. (1997). Amos 4.0 (Computer software). Chicago, IL: SPSS Inc.

Arbuckle, J. and Wothke, W. (1999). Amos 4.0 User's Guide. Chicago, IL: SmallWaters Corporation.

Astin, A. (1970a). "The methodology of research on college impact (I)." Sociology of Education, 43, 223-254.

Astin, A. (1970b).
"The methodology of research on college impact (II)." Sociology of Education, 43, 437-450.

Astin, A. (1972). "The measured effects of higher education." Annals of the Academy of Political and Social Sciences, 404, 1-20.

Astin, A. (1973). "Measurements and determinants of the outputs of higher education." In L. Solmon & P. Taubman (Eds.), Does College Matter? Some Evidence on the Impacts of Higher Education. New York: Academic Press.

Astin, A. (1977). Four Critical Years. San Francisco: Jossey-Bass.

Astin, A. (1990). Assessment for Excellence: The Philosophy and Practice of Assessment and Evaluation in Higher Education. New York: Macmillan.

Astin, A. (1993). What Matters in College? Four Critical Years Revisited. San Francisco: Jossey-Bass.

Babakus, E., and Boller, G. (1992). "An empirical assessment of the SERVQUAL scale." Journal of Business Research, 24(3), 253-268.

Babakus, E., and Mangold, W.G. (1992). "Adapting the SERVQUAL scale to hospital services: An empirical investigation." Health Services Research, 26(6), 767-780.

Bacon, L. (2001). "Using Amos for structural equation modeling in market research." SPSS white paper. Chicago, IL: Lynd Bacon and Associates, Ltd., and SPSS Inc.

Bentler, P. (1990). "Comparative fit indices in structural models." Psychological Bulletin, 107(3), 238-246.

Berman, E. and West, J. (1995). "Municipal commitment to total quality management: A survey of recent progress." Public Administration Review, 55(1), 57-66.

Bitner, M.J. (1990). "Evaluating service encounters: The effects of physical surroundings and employee responses." Journal of Marketing, 54, 69-82.

Bollen, K. and Long, J. (1993). Testing Structural Equation Models. London: Sage.

Bolton, R. and Drew, J. (1991b). "A multistage model of customers' assessments of service quality and value." Journal of Consumer Research, 17, 375-384.

Bohrnstedt, G. and Knoke, D. (1994). Statistics for Social Data Analysis (3rd ed.).
Itasca, IL: F.E. Peacock Publishers, Inc.

Braskamp, L. (1991). "Purposes, issues and principles of assessment." NCA Quarterly, 66, 417-429.

Browne, M. and Cudeck, R. (1989). "Single sample cross-validation indices for covariance structures." Multivariate Behavioral Research, 24(4), 445-455.

Carman, J. (1990). "Consumer perceptions of service quality: An assessment of the SERVQUAL dimensions." Journal of Retailing, 66(1), 33-55.

Chaffee, E. (1998). "Listening to the people we serve." In W.G. Tierney (Ed.), The Responsive University: Restructuring for High Performance. Baltimore, MD: The Johns Hopkins University Press.

Chase, R. (1978). "Where does the customer fit in a service operation?" Harvard Business Review, 56, 157-159.

Chase, R. and Aquilano, N. (1977). Production and Operations Management. Homewood: Richard D. Irwin, Inc.

Chase, R. and Tansik, D. (1983). "The customer contact model for organizational design." Management Science, 29, 1037-1050.

Chase, R. and Bowen, B. (1991). "Service quality and the service delivery system." In S. Brown et al. (Eds.), Service Quality: Multi-Disciplinary and Multi-National Perspectives. Lexington: Lexington Books.

Child, J. (1987). "Information technology, organization, and response to strategic choices." California Management Review, 30, 33-50.

Chism, S. (1997). "Measuring customer perceptions of service quality in human services." UMI Microform 9822566. Ann Arbor, MI: UMI.

Churchill, G. (1979). "A paradigm for developing better measures of marketing constructs." Journal of Marketing Research, 16(1), 64-73.

Clark, B. (1977). Academic Power in Italy: Bureaucracy and Oligarchy in a National University System. Chicago: University of Chicago Press.

Clark, B. (1984). Perspectives on Higher Education: Eight Disciplinary and Comparative Views. Los Angeles: University of California Press, Ltd.

Clark, B. (1995).
Places of Inquiry: Research and Advanced Education in Modern Universities. Los Angeles: University of California Press, Ltd.

Clotfelter, C. (1996). Buying the Best: Cost Escalation in Elite Higher Education. Princeton: Princeton University Press.

Cohen, S. and Brand, R. (1993). Total Quality Management in Government: A Practical Guide for the Real World. San Francisco: Jossey-Bass.

Cronbach, L. and Meehl, P. (1955). "Construct validity in psychological tests." Psychological Bulletin, 52, 281-302.

Cronin, J. and Taylor, S. (1992). "Measuring service quality: A re-examination and extension." Journal of Marketing, 56(3), 55-69.

Cronin, J. and Taylor, S. (1994). "Modeling patient satisfaction and service quality." Journal of Healthcare Marketing, 14(1), 34-44.

Daft, R. and Lengel, R. (1984). "Information richness: A new approach to managerial behavior and organizational design." Research on Organizational Behavior, 6, 191-225.

Davenport, T. (1994). "Managing in the new world of process." Public Productivity & Management Review, 18, 133-147.

Delene, L. and Bunda, M. (1991). The Assessment of Service Quality in Higher Education. Kalamazoo, MI: Western Michigan University. (ERIC Document Reproduction Service No. HE 024 725).

Denison, D. (1990). Corporate Culture and Organizational Effectiveness. New York: John Wiley and Sons.

DiIulio, J., Garvey, G., and Kettl, D. (1993). Improving Government Performance: An Owner's Manual. Washington, D.C.: Brookings Institution.

Felker, T. (1999, Fall). "ACSI government model is proving effective." Public Sector Network News, 5(1), 12-13.

Fitzsimmons, J. and Sullivan, R. (1982). Service Operations Management. New York: McGraw-Hill Book Company.

Fornell, C. (1992). "A national customer satisfaction barometer: The Swedish experience." Journal of Marketing (January), 1-18.

Fornell, C. (1996).
"The American Customer Satisfaction Index: Nature, purpose, and findings." Journal of Marketing, 60, 7-18.

Fornell, C. and Larcker, D. (1981). "Evaluating structural equation models with unobservable variables and measurement error." Journal of Marketing Research, 18, 39-50.

Gaither, N. (1992). Production & Operations Management. Orlando, FL: Dryden Press.

Goethals, G. (1999). "Peer influences among college students: The perils and potentials." Williams Project on the Economics of Higher Education, Discussion Paper No. 51. Williamstown, MA: Williams College.

Goethals, G. (2000). "Social comparison and peer effects at an elite college." Williams Project on the Economics of Higher Education, Discussion Paper No. 55. Williamstown, MA: Williams College.

Goethals, G., Winston, G., and Zimmerman, D. (1999). "Students educating students: The emerging role of peer effects in higher education." Williams Project on the Economics of Higher Education, Discussion Paper No. 50. Williamstown, MA: Williams College.

Gronroos, C. (1982). "A service-oriented approach to marketing services." European Journal of Marketing, 12, 588-601.

Gronroos, C. (1983). Strategic Management and Marketing in the Service Sector. Cambridge, MA: Marketing Science Institute.

Gronroos, C. (1988). "Strategy, quality, and resource management in the service sector." International Journal of Operations and Product Management, 3, 9-19.

Gronroos, C. (1990). Service Management and Marketing: Managing the Moments of Truth in Service Competition. Lexington, MA: Lexington Books.

Hansmann, H. (1980). "The role of nonprofit enterprise." Yale Law Journal, 89, 835-901.

Hansmann, H. (1987). "Economic theories of nonprofit organization." In The Nonprofit Sector: A Research Handbook. New Haven: Yale University Press.

Hardy, C. (1990). "Putting power into university governance." In J. Smart (Ed.), Higher Education: Handbook of Theory and Research VI. New York: Agathon Press.

Heck, R. (1998).
"Factor analysis: Exploratory and confirmatory approaches." In G. Marcoulides (Ed.), Modern Methods for Business Research. Mahwah, NJ: Lawrence Erlbaum Associates.

Heskett, J., Sasser, E., and Schlesinger, L. (1997). The Service Profit Chain. New York: The Free Press.

Hunter, J. and Schmidt, F. (1990). Methods of Meta-Analysis. Newbury Park: Sage Publications.

Hyde, A. (1995). "Quality, reengineering, and performance: Managing change in the public sector." In A. Halachmi and G. Bouckaert (Eds.), The Enduring Challenges in Public Management: Surviving and Excelling in a Changing World. San Francisco: Jossey-Bass.

Johnson, M., and Fornell, C. (1991). "A framework for comparing customer satisfaction across individuals and product categories." Journal of Economic Psychology, 12(2), 267-286.

Joreskog, K. (1969). "A general approach to confirmatory maximum likelihood factor analysis." Psychometrika, 34, 183-202.

Joreskog, K. and Sorbom, D. (1984). LISREL-VI User's Guide (3rd ed.). Mooresville, IN: Scientific Software.

Joreskog, K. and Sorbom, D. (1993). LISREL 8: User's Reference Guide. Chicago: Scientific Software.

Kanter, R. and Brinkerhoff, D. (1981). "Organizational performance: Recent developments in measurement." Annual Review of Sociology, 7, 321-349.

Keith, K. (1998). "The responsive university in the twenty-first century." In W.G. Tierney (Ed.), The Responsive University. Baltimore, MD: The Johns Hopkins University Press.

Keller, G. (1983). Academic Strategy: The Management Revolution in American Higher Education. Baltimore: The Johns Hopkins University Press.

Kellogg, D. and Chase, R. (1995). "Constructing an empirically derived measure for customer contact." Management Science, 41(11), 1734-1748.

Kline, J. (2001, Spring). "Quality customer service." Government Division News, 6(3), 9-10.

Levitt, T. (1972). "Production-line approach to service." Harvard Business Review, 50, 41-52.

Luthans, F.
(1988). "The exploding service sector: Meeting the challenge through behavioral management." Journal of Organizational Change Management, 1(1), 18-22.

Luthans, F. (1995). Organizational Behavior (7th ed.). New York: McGraw-Hill.

Mills, P. and Turk, T. (1986). "A preliminary investigation into the influence of customer-firm interface on information processing and task activities in service organizations." Journal of Management, 12, 91-104.

Mills, P., Hill, J., Leidecker, J. and Margulies, N. (1983). "Flexiform: A model for professional service organizations." Academy of Management Review, 8, 118-131.

Mintzberg, H. (1988). "The professional bureaucracy." In J. Quinn, H. Mintzberg, and R. James (Eds.), The Strategy Process. Englewood Cliffs: Prentice-Hall.

Mitchell, D., Lewin, D., and Lawler III, E. (1990). "Alternative pay systems, firm performance, and productivity." In A. Blinder (Ed.), Paying for Productivity: A Look at the Evidence. Washington, DC: The Brookings Institution.

Moore, M. (1995). Creating Public Value. Cambridge: Harvard University Press.

National Performance Review (1993). From Red Tape to Results: Creating a Government that Works Better and Costs Less. Washington, DC: USGPO.

Nelson, E. (1992). "Do patient perceptions of quality relate to hospital financial performance?" Journal of Health Care Marketing, 12(4), 6-13.

Nunnally, J. (1967). Psychometric Theory. New York: McGraw-Hill.

Nunnally, J. (1978). Psychometric Theory (2nd ed.). New York: McGraw-Hill.

O'Connor, S. (1992). "A model of service quality perceptions and health care consumer behavior." Journal of Hospital Marketing, 6(1), 69-92.

O'Connor, S. and Shewchuk, R. (1995). "Service quality revisited: Striving for a new orientation." Hospital & Health Services Administration, 40(4), 535-552.

Oliver, R. (1977).
"Effect of expectation and disconfirmation on postexposure product evaluations: An alternative interpretation." Journal of Applied Psychology, 62, 480-486.

Oliver, R. (1980). "A cognitive model of the antecedents and consequences of satisfaction decisions." Journal of Marketing Research, 17, 460-469.

Orwig, R., Pearson, J., and Cochran, D. (1997). "An empirical investigation into the validity of SERVQUAL in the public sector." Public Administration Quarterly, Spring, 54-68.

Osborne, D. and Gaebler, T. (1992). Reinventing Government: How the Entrepreneurial Spirit is Transforming the Public Sector. Reading, MA: Addison-Wesley.

Parasuraman, A., Zeithaml, V., and Berry, L. (1985). "A conceptual model of service quality and its implications for future research." Journal of Marketing, 49, 41-50.

Parasuraman, A., Zeithaml, V., and Berry, L. (1986). SERVQUAL: A Multiple-Item Scale for Measuring Perceptions of Service Quality. Cambridge, MA: Marketing Science Institute.

Parasuraman, A., Zeithaml, V., and Berry, L. (1988). "Communication and control processes in the delivery of service quality." Journal of Marketing, 52, 35-48.

Parasuraman, A., Zeithaml, V., and Berry, L. (1991a). "Perceived service quality as a customer-based performance measure: An empirical examination of organizational barriers using an extended service quality model." Human Resource Management, 30, 335-364.

Parasuraman, A., Zeithaml, V., and Berry, L. (1991b). "Refinement and reassessment of the SERVQUAL scale." Journal of Retailing, 67, 420-450.

Parasuraman, A., Zeithaml, V., and Berry, L. (1993). "More on improving service quality measurement." Journal of Retailing, 67(4), 420-450.

Parasuraman, A., Zeithaml, V., and Berry, L. (1994). "Reassessment of expectations as a comparison standard in measuring service quality: Implications for further research." Journal of Marketing, 58, 111-124.

Pascarella, E. and Terenzini, P. (1991). How College Affects Students. San Francisco: Jossey-Bass, Inc.

Poister, T. and Henry, G. (1994). "Citizen ratings of public and private service quality: A comparative perspective." Public Administration Review, 54, 155-160.

Poister, T. and Streib, G.
(1999). "Performance measurement in municipal government: Assessing the state of the practice." Public Administration Review, 59(4), 325-335.

Rigdon, E. (1998). "Structural equation modeling." In G. Marcoulides (Ed.), Modern Methods for Business Research. Mahwah, NJ: Lawrence Erlbaum Associates.

Robertson, P. (1995). "Bringing the public in: Client-citizen participation in public organization governance." Paper presented to the annual meeting of the Academy of Management, Public and Nonprofit Sector Division.

Roth, A., Chase, R., and Voss, C. (1997). Service in the US. London: Severn Trent Plc.

Roth, A. and Jackson, W. (1995). "Strategic determinants of service quality and performance: Evidence from the banking industry." Management Science, 41(11), 1720-1733.

Roth, A., Johnson, S., and Short, N. (1996). "Strategic deployment of technology in hospitals: Evidence for reengineering." In E. Geisler and O. Heller (Eds.), Managing Technology in Healthcare. Boston: Kluwer Academic Publishers.

Roth, A. and van der Velde, M. (1991). "Operations as marketing: A competitive service strategy." Journal of Operations Management, 10, 303-328.

Rothschild, M. and White, L. (1995). "The analytics of pricing in higher education and other services in which customers are inputs." Journal of Political Economy, 103, 573-586.

Ruby, C. (1996). "Assessment of student satisfaction with selected student support services using the SERVQUAL model of customer satisfaction." UMI Microform 9639754. Ann Arbor, MI: UMI.

Rust, R. (1998). "What is the domain of service research?" Journal of Service Research, 1(3), 107.

Rust, R. and Oliver, R. (1994). "Service quality: Insights and managerial implications from the frontier." In R. Oliver and R. Rust (Eds.), Service Quality: New Directions in Theory and Practice. Thousand Oaks: Sage Publications, Inc.

Rust, R., Oliver, R., and Keiningham, L. (1996). Service Marketing.
New York: HarperCollins College.

Rust, R., Zahorik, A., and Keiningham, T. (1994). Return on Quality: Measuring the Financial Impact of Your Company's Quest for Quality. Chicago: Probus Publishing Company.

Saltzstein, G. (1992). "Bureaucratic responsiveness: Conceptual issues and current research." Journal of Public Administration Research and Theory, 1, 63-68.

Sasser, W., Olsen, R., and Wyckoff, D. (1978). Management of Service Operations: Text, Cases and Readings. Boston: Allyn and Bacon, Inc.

Schmit, M. and Allscheid, S. (1995). "Employee attitudes and customer satisfaction: Making theoretical and empirical connections." Personnel Psychology, 48, 521-536.

Schneider, B. and Bowen, D. (1984). "New services design, development and implementation and the employee." In W. George and C. Marshall (Eds.), Developing New Services. Chicago: American Marketing Association.

Schneider, B. and Bowen, D. (1985). "Employee and customer perceptions of service in banks: Replication and extension." Journal of Applied Psychology, 70(3), 423-433.

Schneider, B. and Bowen, D. (1993). "The service organization: Human resources management is crucial." Organizational Dynamics, 21(4), 39-52.

Sergeant, A. and Frenkel, S. (2000). "When do customer contact employees satisfy customers?" Journal of Service Research, 3(1), 18-34.

Shugan, S. (1994). "Explanations for the growth of services." In R. Rust and R. Oliver (Eds.), Service Quality: New Directions in Theory and Practice. Thousand Oaks: Sage Publications.

Stajkovic, A. and Luthans, F. (1997). "A meta-analysis of the effects of organizational behavior modification on task performance, 1975-1995." Academy of Management Journal, 40(5), 1122-1149.

Steiger, J. and Lind, J. (1989). "Statistically-based tests for the number of common factors." Handout presented at the spring meeting of the Psychometric Society, Iowa City.

Soteriou, A. and Chase, R. (1997).
"Linking the customer contact model to service quality." Unpublished manuscript, University of Southern California.

Soteriou, A. and Chase, R. (1997). "A robust approach to improving service quality." Unpublished manuscript, University of Southern California.

SPSS Inc. (1999). SPSS Base 10.0 Applications Guide. Chicago, IL: SPSS Inc.

SPSS Inc. (2000). SPSS 10.0 (Computer software). Chicago, IL: SPSS, Inc.

Tanaka, J. and Huba, G. (1985). "A fit index for covariance structure models under arbitrary GLS estimation." British Journal of Mathematical and Statistical Psychology, 38, 197-201.

Taylor, S. and Cronin, J. (1994). "Modeling patient satisfaction and service quality." Journal of Health Care Marketing, 14(1), 34-44.

Teas, R. (1993). "Expectations, performance evaluation, and consumers' perceptions of quality." Journal of Marketing, 57, 18-34.

Teas, R. (1994). "Expectations as a comparison standard in measuring service quality: An assessment of a reassessment." Journal of Marketing, 58, 132-139.

Thompson, J. and Ingraham, P. (1996). "Organization redesign in the public sector." In D. Kettl and H. Milward (Eds.), The State of Public Management. Baltimore: The Johns Hopkins University Press.

Turner, S. (1996). "A note on changes in the returns to college quality." Mimeo, University of Michigan, April 1.

Victor, B. and Blackburn, R. (1987). "Interdependence: An alternative conceptualization." Academy of Management Review, 12, 486-498.

Weick, K. (1976). "Educational organizations as loosely coupled systems." Administrative Science Quarterly, 21, 1-19.

Whyte, W. (1946). "When workers and customers meet." In W. Whyte (Ed.), Industry and Society. New York: McGraw-Hill Book Company.

Williams, L. and Hazer, J. (1986). "Antecedents and consequences of satisfaction and commitment in turnover models: A reanalysis using latent variable structural equation models." Journal of Applied Psychology, 71(2), 219-231.
Wilson, J.Q. (1989). Bureaucracy: What Government Agencies Do and Why They Do It. New York: Basic Books.

Wilson, J.Q. (1994). "Reinventing public administration." PS, Political Science & Politics, 27(4), 667-677.

Winston, G. (1996). "The economic structure of higher education: Subsidies, customer-inputs, and hierarchy." Williams Project on the Economics of Higher Education, Discussion Paper No. 40. Williamstown, MA: Williams College.

Winston, G. (1997a). "Why can't a college be more like a firm?" Williams Project on the Economics of Higher Education, Discussion Paper No. 42. Williamstown, MA: Williams College.

Winston, G. (1997b). "College costs: Subsidies, intuition, and policy." Williams Project on the Economics of Higher Education, Discussion Paper No. 45. Williamstown, MA: Williams College.

Winston, G. (2000). "The positional arms race in higher education." Williams Project on the Economics of Higher Education, Discussion Paper No. 45. Williamstown, MA: Williams College.

Winston, G. and Zimmerman, D. (2000). "Where is aggressive price competition taking higher education?" Williams Project on the Economics of Higher Education, Discussion Paper No. 56. Williamstown, MA: Williams College.

Zeithaml, V. (1987). Defining and Relating Price, Perceived Quality, and Perceived Value. Cambridge, MA: Marketing Science Institute.

Zeithaml, V., Parasuraman, A., and Berry, L. (1990). Delivering Quality Service: Balancing Customer Perceptions and Expectations. New York: The Free Press.

Zimmerman, D. (1999). "Peer effects and academic outcomes: Evidence from a natural experiment." Williams Project on the Economics of Higher Education, Discussion Paper No. 52. Williamstown, MA: Williams College.

APPENDIX A

Customer Instrument
April 17, 2000

Dear Resident:

As a resident in USC's housing program, your opinion of the quality of the services you have received is highly valued. Knowing how you and other students view the importance of the various services offered to you is vital to enhancing program quality. The enclosed survey seeks your opinions and will help us to provide you and future students with better housing services. Completing the survey will take less than 15 minutes.

As a thank you for your participation, we will be conducting an appreciation prize drawing for 1st, 2nd and 3rd place gift certificates of $300, $200, and $100 respectively at the University Bookstore. In order to be entered in the drawing and so that we may identify the winners, we will need only your first name and a telephone number where you may be reached at the time of the drawing on May 5th. This information will only be used for purposes of the drawing, and all survey information will be treated as strictly confidential.

The first section of the survey asks about your expectations of the service you would want to see from an excellent (model) university housing program. The second section asks about your perceptions of the service you have received from USC's housing program. You are one of a small number of people randomly selected to provide your opinion. For the results of this study to be truly representative of the thinking of other residents in university housing, it is important that you complete this questionnaire. You can be assured of complete confidentiality. So that your opinions may be included in our analysis, please return the completed survey no later than Friday, April 28th. Enclosed for your convenience is a self-addressed return envelope that you can drop in campus mail. We appreciate your time and participation!

Sincerely,

Richard A.
Hagy
Executive Director, Business Affairs

Enclosure

Survey of Student Satisfaction with the Service Quality of USC's Housing Program

You are being asked to voluntarily participate in a survey of student satisfaction with the level of service quality provided by USC's housing program. The purpose of this study is twofold: (1) to determine what level of service quality students expect from an excellent (model) university housing program, and (2) to determine the level of service quality students perceive to exist in USC's housing program. It will take you approximately 10-15 minutes to complete this survey. If you wish to be entered in the appreciation prize drawing, please provide us with your first name and telephone number at the end of this survey. Your cooperation is greatly appreciated and your responses are strictly confidential.

EXPECTATIONS SECTION - Directions: Based on your experiences as a customer of a university housing program, please think about the kind of university housing program that you would be pleased with, and in this section (items 1-22), show us the extent to which you think such a program should possess the feature described by each statement. If you feel a feature is not at all essential for an excellent university housing program, such as the one you have in mind, then circle the number "1". If you feel a feature is absolutely essential to an excellent housing program, circle "7". If your feelings are less strong, circle one of the numbers in the middle. There are no right or wrong answers; all we are interested in is a number that truly reflects your feelings regarding university housing programs.

How important are these features for excellent university housing programs? (Each item is rated on a seven-point scale: 1 = Strongly Disagree ... 7 = Strongly Agree.)

1. Excellent university housing programs will have modern furniture, fixtures, and appliances.
2. The residential facilities at excellent university housing programs will be visually appealing.
3. Employees of excellent university housing programs will be neat-appearing.
4. Materials associated with an excellent university housing program (such as applications, pamphlets, or statements) will be visually appealing.
5. When excellent housing programs promise to do something by a certain time, they will do so.
6. When students have a problem, excellent university housing programs will show a sincere interest in solving it.
7. Excellent university housing programs will perform the service right the first time.
8. Excellent university housing programs will provide their services at the time they promise to do so.
9. Excellent university housing programs will insist on error-free records.
10. Employees of excellent university housing programs will tell students exactly when services will be performed.
11. Employees of excellent university housing programs will give prompt service to students.
12. Employees of excellent university housing programs will always be willing to help students.
13. Employees of excellent university housing programs will never be too busy to respond to student requests.
14. The behavior of employees in excellent university housing programs will instill confidence in students.
15. Excellent university housing programs will provide students with an environment that is safe and secure.
16. Employees of excellent university housing programs will be consistently courteous with students.
17. Employees of excellent university housing programs will have the knowledge to answer student questions.
18. Excellent university housing programs will give students individual attention.
19. Excellent university housing programs will have operating hours convenient to all their students.
20. Excellent university housing programs will have employees who give students personal attention.
21. Excellent university housing programs will have the students' best interests at heart.
22. The employees of excellent university housing programs will understand the specific needs of their students.

DIMENSIONAL IMPORTANCE - Directions: Listed below are five features pertaining to university housing programs. We would like to know how important each of these features is to you when you evaluate the quality of a university's housing program. Please allocate a total of 100 points among the five features according to how important each feature is to you; the more important a feature is to you, the more points you should allocate to it. Please ensure that the points you allocate to the five features add up to 100.

IN GENERAL, how important are each of the features listed below? Please distribute a total of 100 points:

A. The appearance of a university housing program's facilities, equipment, personnel, and communications materials. ____ points
B. The ability of a university housing program to perform the promised service dependably and accurately. ____ points
C. The willingness of a university housing program to help students and provide prompt service. ____ points
D. The knowledge and courtesy of a university housing program's employees and their ability to convey trust and confidence. ____ points
E. The caring, individualized attention a university housing program provides its students. ____ points

TOTAL points allocated: ____ points

Which one feature among the above five is most important to you? (please enter the feature's letter) ____
Which feature is second most important to you? ____
Which feature is least important to you? ____

PERCEPTIONS SECTION - Directions: Items 23-44 relate to your feelings about USC's housing program. For each statement,
circling a "1" means that you strongly disagree that USC's housing program has that feature, and circling a "7" means that you strongly agree. You may circle any of the numbers in the middle that show how strong your feelings are. There are no right or wrong answers; all we are interested in is a number that best shows your perceptions about the quality of USC's housing program.

How well do these characteristics describe USC's housing program? (1 = Strongly Disagree ... 7 = Strongly Agree)

23. USC's housing program has modern furniture, fixtures, and appliances.
24. USC's housing program's residential facilities are visually appealing.
25. USC's housing program's employees are neat-appearing.
26. Materials associated with USC's housing program (such as applications, pamphlets, or statements) are visually appealing.
27. When USC's housing program promises to do something by a certain time, it does so.
28. When you have a problem, USC's housing program shows a sincere interest in solving it.
29. USC's housing program performs the service right the first time.
30. USC's housing program provides its services at the time it promises to do so.
31. USC's housing program insists on error-free records.
32. Employees of USC's housing program tell you exactly when services will be performed.
33. Employees of USC's housing program give you prompt service.
34. Employees of USC's housing program are always willing to help you.
35. Employees of USC's housing program are never too busy to respond to your requests.
36. The behavior of employees of USC's housing program instills confidence in you.
37. USC's housing program provides you with an environment that is safe and secure.
38.
Employees of USC's housing program are consistently courteous with you.
39. Employees of USC's housing program have the knowledge to answer your questions.
40. USC's housing program gives you individual attention.
41. USC's housing program has operating hours convenient to all its students.
42. USC's housing program has employees who give you personal attention.
43. USC's housing program has your best interests at heart.
44. Employees of USC's housing program understand your specific needs.

OVERALL PERFORMANCE SECTION - Directions: Please indicate your opinion for each of the individual USC housing program service features below.

How satisfied are you with the following USC housing program services? (1 = Strongly Disagree ... 7 = Strongly Agree)

45. I am satisfied with the residential cable TV service.
46. I am satisfied with the residential telephone service.
47. I am satisfied with the residential Internet service.
48. I am satisfied with the residential mail service.
49. I am satisfied with the residential custodial service.
50. I am satisfied with the residential repair and maintenance service.
51. I am satisfied with the residential ACCESS service.
52. I am satisfied with the residential protection service.
53. I am satisfied with the residential advisor service.
54. I am satisfied with the customer service desk.
55. I am satisfied with the Returning Resident Renewal (R*) process.
56. Have you experienced a problem with any of USC's housing program's services? No ___ Yes ___
57. If you experienced a problem, was the problem resolved satisfactorily? No ___ Yes ___
58.
If you experienced a problem, please indicate which service(s) (enter the service's number from items 45 through 55): ____

OVERALL PERFORMANCE SECTION (continued) (1 = Strongly Disagree ... 7 = Strongly Agree)

59. Overall, I feel I have received excellent service quality from USC's housing program.
60. Overall, I am satisfied with the value of USC housing when I compare the cost to the service quality.
61. Overall, I am satisfied with my USC housing experience this year.
62. Based on my overall experience with the USC housing program this year, I would eagerly recommend it to friends who are considering university housing next year.
63. Do you intend to return to USC next year? No ___ Yes ___
64. If you are returning, do you intend to live in USC housing next year? No ___ Yes ___
65. If you intend to live in USC housing next year, what type of housing do you desire? Residence Hall ___ Apartment ___ Residential College ___
66. If you are returning, but do not intend to live in USC housing, please indicate the reason below:
___ Not satisfied with USC housing program's service quality
___ Not satisfied with USC housing program's value
___ Housing of choice not available in university housing
___ Prefer to live in off-campus private housing
Please explain: ____

Please indicate your degree of commitment to USC. (1 = Strongly Disagree ... 7 = Strongly Agree)

67. Based on my experience at USC, I would eagerly recommend it to friends who are considering a college education.
68. If I were beginning my college education all over again, I would choose to attend this institution again.
69. After graduation, I would like to support this institution with my financial contributions.

Please provide the following information. It will not be used to identify individuals.

70. Gender: Male ___ Female ___
71. Your Age: ___
72.
Grade classification: Freshman ___ Sophomore ___ Junior ___ Senior ___ Graduate ___
73. What type of housing do you live in? Residence Hall ___ Apartment ___ Residential College ___
74. Which specific building do you live in? ____
75. What type of room do you live in? Single ___ Double ___ Triple ___ Quad ___ Other ___

This survey is based on the SERVQUAL instrument developed by Parasuraman, Zeithaml, and Berry, "SERVQUAL: A Multiple-Item Scale for Measuring Consumer Perceptions of Service Quality," Journal of Retailing, 64(1), 12-40.

Thank You For Your Participation!

IN ORDER TO BE ENTERED IN THE APPRECIATION PRIZE DRAWING FOR THE $100, $200, OR $300 BOOKSTORE GIFT CERTIFICATE, PLEASE RETURN YOUR SURVEY IN THE ENCLOSED ENVELOPE VIA CAMPUS MAIL BY FRIDAY, APRIL 28TH, AND PROVIDE US WITH:

YOUR FIRST NAME: ____________ TELEPHONE #: ____________

APPENDIX B

Manager Instrument

UNIVERSITY HOUSING PROGRAM - MANAGERS' SURVEY

PART I - EXPECTATIONS SECTION - Directions: This portion of the survey deals with how you think our students feel about a university housing program that, in their view, delivers excellent quality of service. Please indicate the extent to which our students feel that excellent university housing programs would possess the feature described by each statement. If our students are likely to feel a feature is not at all essential for excellent university housing programs, circle the number 1. If our students are likely to feel a feature is absolutely essential, circle 7. If our students' feelings are likely to be less strong, circle one of the numbers in the middle.
Remember, there are no right or wrong answers; we are interested in what you think our students' feelings are regarding university housing programs that would deliver excellent quality service.

How important are these features to our students? (Each item is rated on a seven-point scale: 1 = Strongly Disagree ... 7 = Strongly Agree.)

1. Excellent university housing programs will have modern furniture, fixtures, and appliances.
2. The residential facilities at excellent university housing programs will be visually appealing.
3. Employees of excellent university housing programs will be neat-appearing.
4. Materials associated with an excellent university housing program (such as applications, pamphlets, or statements) will be visually appealing.
5. When excellent housing programs promise to do something by a certain time, they will do so.
6. When students have a problem, excellent university housing programs will show a sincere interest in solving it.
7. Excellent university housing programs will perform the service right the first time.
8. Excellent university housing programs will provide their services at the time they promise to do so.
9. Excellent university housing programs will insist on error-free records.
10. Employees of excellent university housing programs will tell students exactly when services will be performed.
11. Employees of excellent university housing programs will give prompt service to students.
12. Employees of excellent university housing programs will always be willing to help students.
13. Employees of excellent university housing programs will never be too busy to respond to student requests.
14. The behavior of employees in excellent university housing programs will instill confidence in students.
15. Excellent university housing programs will provide students with an environment that is safe and secure.
16. Employees of excellent university housing programs will be consistently courteous with students.
17. Employees of excellent university housing programs will have the knowledge to answer student questions.
18. Excellent university housing programs will give students individual attention.
19. Excellent university housing programs will have operating hours convenient to all their students.
20. Excellent university housing programs will have employees who give students personal attention.
21. Excellent university housing programs will have the students' best interests at heart.
22. The employees of excellent university housing programs will understand the specific needs of their students.

PART II - DIMENSIONAL IMPORTANCE - Directions: Listed below are five features pertaining to university housing programs in general. We would like to know how important you think our students feel each of these features is when they evaluate the quality of a university's housing program. Please allocate a total of 100 points among the five features according to how important each feature is to our students; the more important a feature is to our students, the more points you should allocate to it. Please ensure that the points you allocate to the five features add up to 100.

IN GENERAL, how important are each of the features listed below to our students? Please distribute a total of 100 points:

A. The appearance of a university housing program's facilities, equipment, personnel, and communications materials. ____ points
B. The ability of a university housing program to perform the promised service dependably and accurately. ____ points
C. The willingness of a university housing program to help students and provide prompt service. ____ points
D. The knowledge and courtesy of a university housing program's employees and their ability to convey trust and confidence. ____ points
E. The caring, individualized attention a university housing program provides its students.
____ points

TOTAL points allocated: ____ points

Which one feature among the above five is most important to our students? (please indicate the feature's letter) ____
Which feature is second most important to our students? ____
Which feature is least important to our students? ____

PART III - MANAGERS' PERCEPTIONS SECTION - Directions: Listed below are a number of statements intended to measure your perceptions about your individual department. Please indicate the extent to which you disagree or agree with each statement. If you strongly disagree, circle 1. If you strongly agree, circle 7. If your feelings are not strong, circle one of the numbers in the middle. There are no right or wrong answers. Please tell us honestly how you feel.

How well do these characteristics describe your department? (1 = Strongly Disagree ... 7 = Strongly Agree)

23. We regularly collect information about the needs of our students.
24. We rarely use marketing research information that is collected about our students.
25. We regularly collect information about the service-quality expectations of our students.
26. The managers in our department rarely interact with students.
27. The student-contact personnel in our department frequently communicate with management.
28. Managers in our department rarely seek suggestions about serving students from student-contact personnel.
29. The managers in our department frequently have face-to-face interactions with student-contact personnel.
30. The primary means of communication in our department between student-contact personnel and upper-level managers is through memos.
31.
Our department has too many levels of management between student-contact personnel and top management.

PART IV - STUDENT CONTACT SECTION - Directions: Please think about your typical work week and indicate the amount of time you spend communicating directly with students, whether via the telephone, face-to-face communication, electronic mail, or regular mail.

32. In general, what percentage of time do you spend each work week in direct student contact, either by phone, by mail, or in person? Please indicate a percentage between 0 and 100. ____ %

Thank You for Your Participation!

This survey is based on the SERVQUAL instrument and the Extended Service Quality Model developed by Parasuraman, Zeithaml, and Berry (1991a, 1991b).
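The scoring implied by the customer instrument is mechanical: each perception item (23-44) pairs with the expectation item 22 places earlier, the gap is perception minus expectation, and the A-E point allocations weight the five dimensions. The Python sketch below is illustrative only (it is not taken from the dissertation): the item-to-dimension grouping follows the standard SERVQUAL layout (tangibles 1-4, reliability 5-9, responsiveness 10-13, assurance 14-17, empathy 18-22), and the function name and example data are hypothetical.

```python
# Hypothetical item-to-dimension mapping, following the standard SERVQUAL
# layout; the dissertation's own analysis may group items differently.
DIMENSIONS = {
    "Tangibles": range(1, 5),
    "Reliability": range(5, 10),
    "Responsiveness": range(10, 14),
    "Assurance": range(14, 18),
    "Empathy": range(18, 23),
}

def servqual_score(expectations, perceptions, weights):
    """Weighted SERVQUAL gap score for one respondent.

    expectations, perceptions: dicts mapping item number (1-22) to a 1-7
    rating, where perception item k corresponds to survey item k + 22.
    weights: dict mapping dimension name to the respondent's point
    allocation from the dimensional-importance section (sums to 100).
    """
    total = 0.0
    for dim, items in DIMENSIONS.items():
        # Mean perception-minus-expectation gap for this dimension.
        gaps = [perceptions[i] - expectations[i] for i in items]
        total += weights[dim] * (sum(gaps) / len(gaps))
    # Divide by the 100 allocated points once, at the end.
    return total / 100.0

# Example: perceptions trail expectations by one point on every item,
# so the weighted gap score is -1 regardless of the weights used.
e = {i: 6 for i in range(1, 23)}
p = {i: 5 for i in range(1, 23)}
w = {"Tangibles": 10, "Reliability": 30, "Responsiveness": 25,
     "Assurance": 20, "Empathy": 15}
print(servqual_score(e, p, w))  # → -1.0
```

A negative score indicates perceived service falling short of expectations; weighting by the allocated points lets the dimensions a respondent cares most about dominate the overall score.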