USC Digital Library / University of Southern California Dissertations and Theses
THE IDENTIFICATION, VALIDATION, AND MODELING OF CRITICAL PARAMETERS IN LEAN SIX SIGMA IMPLEMENTATIONS

by Arthur J. Dhallin

A Dissertation Presented to the FACULTY OF THE USC GRADUATE SCHOOL, UNIVERSITY OF SOUTHERN CALIFORNIA, In Partial Fulfillment of the Requirements for the Degree DOCTOR OF PHILOSOPHY (INDUSTRIAL AND SYSTEMS ENGINEERING)

May 2011

Copyright 2011 Arthur J. Dhallin

Dedication

This dissertation is dedicated to my wife Giselle, whose support and encouragement allowed me to survive the long process; my parents, who kept me focused on completing the project; and my grandmother, who always reminded me that education was the most important thing a person could have.

Table of Contents

Dedication
List of Tables
List of Figures
Abstract
Chapter 1: Executive Summary
Chapter 2: Introduction
Chapter 3: Motivation of Research
  Academic Motivation
  Industry Motivation
Chapter 4: Research Question
Chapter 5: Literature Review
  Definitions
  Quality Management
  Six Sigma
  Lean
  Lean Six Sigma
  Literature Review Conclusions
Chapter 6: Research Hypothesis
Chapter 7: Research Methodology
Chapter 8: Identification of Constructs and Factors
  Category #1: Personnel
  Category #2: Process
  Category #3: Customer/Product
  Category #4: Information Processing
  Category #5: Environment Factors
Chapter 9: Validation of Constructs and Factors
  Factor Analysis
  Analysis of Survey Results
Chapter 10: Modeling of Variables and Factors
  Coding of Data
  Generalized Linear Regression
  Assessment of Coded Data
  Correlation
  Transformation of Data
  Model Results
  Analysis of Predictive Model Results
  Caveats
Chapter 11: Conclusion and Next Steps
References
Appendices
  Appendix A: List of Interviews
  Appendix B: Pilot NAVSEA Survey
  Appendix C: SPSS Output for Factor Correlation and Reliability Analysis
  Appendix D: EQS Output for Exploratory Factor Analysis
  Appendix E: SPSS Exploratory Factor Analysis Results
  Appendix F: SPSS Correlation Matrix Results for Model Variables
  Appendix G: Results of Predictive Model Development Using Linear Regression (Backwards Elimination)

List of Tables

Table 1: Sample Lean Six Sigma Implementation Methodology
Table 2: Garvin's Dimensions of Quality
Table 3: Definition of DMAIC Phases (Wortman, 2001)
Table 4: Types of Waste
Table 5: Comparison of Lean, Six Sigma, and Lean Six Sigma (Upton and Cox, 2004)
Table 6: Research Methodology Steps
Table 7: Definitions for Management Theory Terms
Table 8: Personnel Factors
Table 9: Process Factors
Table 10: Customer/Product Factors
Table 11: Information Factors
Table 12: Environmental Factors
Table 13: Descriptive Statistics
Table 14: Reliability Computations
Table 15: Exploratory Factor Analysis Loadings
Table 16: Scales and Explanations for Number of People
Table 17: Scales and Explanations for Role Types
Table 18: Customer Construct Scales and Explanation
Table 19: Scales and Explanations for Activity Complexity
Table 20: Scales and Explanations for Process Execution
Table 21: Scales and Explanations for Documentation
Table 22: Scales and Explanations for Tool Usage
Table 23: Scales and Explanations for Teaming
Table 24: Summary of Initial Linear Regression Model Predictor Variables
Table 25: Summary of Coded Data
Table 26: Distributions of Coded Variables
Table 27: Summary of Correlation Matrix

List of Figures

Figure 1: Overlap between Quality and Management Theory Literatures
Figure 2: Six Sigma DMAIC Methodology
Figure 3: Visual Representation of Constructs
Figure 4: Scatterplot Matrix with Box-Cox Transformation
Figure 5: Linear Regression Results
Figure 6: Test for Non-constant Variance
Figure 7: Test for Curvature
Figure 8: Assessment of Leverages
Figure 9: Assessment of Residuals
Figure 10: Assessment of Cook's Distances
Figure 11: Split Sample Validation Results
Figure 12: Linear Regression Results Using Investment Variable

Abstract

The objective of the research project discussed in this document was to develop a theoretical and empirical model that could be used to predict the results of Lean Six Sigma implementation efforts in a knowledge-intensive environment. Some previous research had attempted to develop a theoretical model for quality management. However, the results were narrowly focused around specific tools and emphasized manufacturing environments. This research project developed a generalized manner in which any process can be modeled using the people, process activities, customer, and information-sharing elements that describe it. Processes modeled in this manner can then be assessed with respect to Lean Six Sigma implementations and the results of the implementation hypothesized. To identify potential constructs and the variables that operationalize them, a comprehensive literature survey was conducted across a range of academic disciplines. These items were then posed to a series of expert practitioners and evaluated using factor analysis techniques, resulting in statistically validated theoretical constructs that describe a process's susceptibility to process improvement. Finally, historical data from Lean Six Sigma implementations was used to create a valid model from which the return on investment for a proposed Lean Six Sigma project could be predicted.

Chapter 1: Executive Summary

The research presented in this Ph.D. dissertation is focused on providing insight into the key parameters associated with Lean Six Sigma ("LSS") implementation efforts in a knowledge-intensive environment.
This research has resulted in an empirically validated theoretical model for process improvement implementation efforts, and a statistically significant predictive model that can provide guidance with respect to maximizing return on investment for a potential process improvement event.

The motivation for the research arose from both academia and industry. Currently, multiple gaps exist in the quality literature. Although some previously published work has attempted to empirically validate quality constructs, no research has focused on developing constructs that provide insight into how a process can be generally modeled such that the results of a specific process improvement event can be assessed. In addition, no quantitative model for process improvement implementations exists from which a predicted return on investment can be calculated. Finally, there is a dearth of insight into how process improvement implementation efforts should change based upon the nature of work done in a process, particularly with respect to knowledge-intensive enterprises.

The initial research focused on surveying the existing academic literature, not only the quality management literature, but also the management theory, social science, and other associated domains. From this, it was possible to identify four general constructs that could be used to model a process to predict potential process improvement results: People, Activities, Customers, and Information. Each of these constructs was operationalized, and numerous potential parameters that could be used to measure the constructs were identified. The theorized constructs and parameters were statistically analyzed using exploratory and confirmatory factor analysis based upon a survey given to industry practitioners. The resulting analysis validated the hypothesized model, identifying the importance of the theorized constructs. In addition, new insight was obtained through the emergence of multiple information sharing constructs.
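The exploratory factor analysis step described above can be sketched in miniature. The dissertation itself used SPSS and EQS; the sketch below is an illustration only, with simulated survey responses (the respondent count, item structure, and two hypothetical latent constructs are assumptions, not the study's data), using a principal-axis-style eigendecomposition of the item correlation matrix and the Kaiser (eigenvalue > 1) criterion to retain factors:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated survey: 500 respondents, 6 items driven by two hypothetical
# latent constructs (say, "Personnel" and "Information"), plus item noise.
n = 500
latent = rng.normal(size=(n, 2))
items = np.column_stack(
    [latent[:, 0] + 0.3 * rng.normal(size=n) for _ in range(3)]
    + [latent[:, 1] + 0.3 * rng.normal(size=n) for _ in range(3)]
)

# Principal-axis-style EFA: eigendecompose the item correlation matrix.
R = np.corrcoef(items, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]          # largest eigenvalues first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

n_factors = int((eigvals > 1).sum())       # Kaiser criterion
loadings = eigvecs[:, :n_factors] * np.sqrt(eigvals[:n_factors])
print(n_factors)                           # 2
print(np.round(loadings, 2))
```

Because the two simulated constructs each drive three items strongly, exactly two eigenvalues exceed one, and the loading matrix separates the two item blocks cleanly.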
The resulting model is the first empirically validated model for the implementation of process improvement efforts. Using a unique data set provided by the U.S. NAVSEA Lean Six Sigma College, over 200 LSS events were analyzed to develop a predictive model for process improvement. The data was reviewed, normalized, and coded to validated scales. A detailed assessment of the observed correlations was conducted, and a statistically significant general linear regression model was developed.

The resulting insight suggests that, to maximize return on investment, practitioners should evaluate three key areas: the investment required, the number of customers, and the level-of-service requirements within the process. The higher the investment required, the lower the expected return on investment, suggesting that implementation efforts be focused on a larger number of small-scale efforts rather than on large transformations. The number of customers was positively correlated with the return on investment, indicating that processes with a higher number of customers have the opportunity to generate economies of scale through process improvement events. Finally, processes with high level-of-service requirements were representative of higher complexity and uncertainty; these processes provided a higher return for a given investment. Thus, future implementation efforts should focus on complex processes, rather than on more easily understood corporate or overhead processes.

Many other significant observations were recorded as part of the research. Although the data set is from a single, large organization, it does provide unique insight into a previously unknown area. Furthermore, threats to internal and external validity have been appropriately mitigated. Numerous future research questions have been identified over the course of the research, enabling a wide array of future contributions.
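The regression step summarized above (linear regression with backwards elimination, per Appendix G) can be sketched as follows. This is a hedged illustration on simulated data: the predictor names merely echo the findings above and are not the study's coded variables, and a |t|-statistic cutoff stands in for the p-value criterion a statistics package's backwards elimination would use:

```python
import numpy as np

def backward_eliminate(X, y, names, t_cut=2.0):
    """Fit OLS repeatedly, dropping the predictor with the smallest |t|
    until every remaining predictor has |t| >= t_cut (a rough stand-in
    for p-value-based backwards elimination)."""
    keep = list(range(X.shape[1]))
    while keep:
        Xk = np.column_stack([np.ones(len(y)), X[:, keep]])   # intercept + kept predictors
        beta, *_ = np.linalg.lstsq(Xk, y, rcond=None)
        resid = y - Xk @ beta
        sigma2 = resid @ resid / (len(y) - Xk.shape[1])        # residual variance
        se = np.sqrt(np.diag(sigma2 * np.linalg.inv(Xk.T @ Xk)))[1:]
        t = beta[1:] / se                                      # t-statistics, intercept excluded
        worst = int(np.argmin(np.abs(t)))
        if abs(t[worst]) >= t_cut:
            break                                              # all survivors significant
        keep.pop(worst)
    return [names[i] for i in keep]

# Simulated LSS-event data: ROI driven negatively by investment and
# positively by number of customers; the third predictor is pure noise.
rng = np.random.default_rng(7)
n = 200
X = rng.normal(size=(n, 3))
y = -1.5 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(size=n)
kept = backward_eliminate(X, y, ["investment", "customers", "noise"])
print(kept)  # the two real predictors survive; "noise" is typically dropped
```

With 200 simulated events, the genuine predictors carry very large t-statistics and always survive elimination, mirroring how the study's significant predictors were retained while weak ones fell out.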
Overall, the results are a significant contribution to the existing body of knowledge and provide significant insight to both academic researchers and industry practitioners.

Chapter 2: Introduction

Process improvement methodologies have become a necessary component of business practices for private and public companies. According to recent surveys, 75% of companies currently employ some type of process improvement strategy (Rayner, 2007). The application of these methodologies has become a common theme in business publications and an object of study for academic researchers. These efforts are often characterized as Lean, Six Sigma, or Lean Six Sigma. Although details of the various methodologies may vary, the process improvement methodologies are generally quite similar (Upton and Cox, 2004), relying upon the identification of activities performed within a process and their resulting relationships (also known as process mapping), statistical tools, and other industrial engineering methods.

Despite the interest, clear gaps exist within the existing bodies of literature. The most obvious is the historical emphasis on manufacturing examples, leaving several questions about the applicability of the techniques to functions involving traditional engineering disciplines, such as design, analysis, software development, configuration management, etc. In addition, the return on investment ("ROI"), or expected savings in both time and cost, from implementation events is a much-debated topic (Westphal, et al., 1997). The result is that managers of knowledge-intensive processes must prioritize improvement efforts with little more than heuristics and conventional wisdom. An improved model through which to predict the output of improvement events would enable enterprises to better leverage their scarce resources and ensure consistent delivery of cost and time savings. In addition, these methodologies would be expected to improve performance.
However, for the purposes of the research described below, the metric used to assess the impact of process improvement methodologies will focus on cost savings.

This pressure is particularly intense within the defense industry. The cost pressures and the need to ensure that the war-fighter is provided with the next generation of equipment and weapon systems have necessitated that each of the armed services adopt some form of process improvement: Task Force Lean at Naval Sea Systems Command ("NAVSEA"), Airspeed at Naval Air Systems Command ("NAVAIR"), AFSO21 for the Air Force, etc. The results have already been observable through initiatives such as NAVAIR's Airspeed campaign, in which maintenance requirements have been lowered, enabling cost savings that can be used towards the purchase of additional planes. In general, most of the defense methodologies utilize a form of LSS. The specific methodologies will be discussed in later sections.

The implementation of LSS has required a sustained and costly effort by the various commands, with overall positive results, as evidenced by the reported cost savings. However, NAVSEA has recognized the need to accelerate its implementation rate and has been actively seeking assistance. As a result, they approached the author with a request to determine how they could better manage their portfolio of improvement projects. Since each project is estimated to cost a minimum of $25,000, the investment required to continue the LSS program is significant. A means of selecting, monitoring, and managing the implementation efforts such that those projects with the best ROI are consistently selected would be of great value to the NAVSEA enterprise. The focus of this research is to identify and validate a methodology that will assist in predictive modeling and selection of LSS implementation projects.
This effort will provide insight into an issue that is of interest to both academia and business, while simultaneously contributing the first predictive model associated with quality management.

Chapter 3: Motivation of Research

The research question explored throughout this proposal has been motivated by both an academic need and industry requests, most notably from NAVSEA, but also a recognized need throughout the defense industry. The completed research has extended the understanding of quality management and process improvement topics through the validation of key variables and the development of a validated statistical model. Moreover, it will provide government and industry with a tool to assist in the benchmarking of their implementation efforts.

Academic Motivation

Upon reviewing the past twenty years of published academic literature on quality topics, it is apparent that most of the literature can be separated into two categories: (1) quality control, and (2) quality management. Quality control literature is focused on what could be classified as lower-level topics; it is generally concerned with methods and techniques for the statistical management of processes, primarily manufacturing processes (Evans and Lindsay, 2001). Conversely, quality management topics are focused primarily upon macro-level issues. Quality management literature tends to consist of enterprise constructs and management philosophies (Dean and Bowen, 1994). As noted by numerous authors, much of the prescriptive quality management literature has been based largely on anecdotal evidence and the recommendations of various gurus and quality experts (Evans and Lindsay, 2001; Sila and Maling, 2002; Black and Porter, 1996; Curkovic, et al., 2000; Ahire, et al., 1996). Thus, the field has evolved from empirical practice into the academic realm. This has resulted in a body of literature that is largely subjective in nature.
Although there are ample case studies on the topic, a "unified theory of quality management" does not exist (Dean and Bowen, 1994). In addition, no published predictive models for quality have been rigorously defined and tested. Some authors (Sila and Maling, 2002; Black and Porter, 1996; Curkovic, et al., 2000; Ahire, et al., 1996; Anderson, et al., 1995; Saraph, et al., 1989) have attempted to identify constructs and their associated causality, but very little has been done to link these operationalized constructs to measurable outcomes.

Another challenge with the existing literature is that most of the quality research has been conducted in a manufacturing environment. Although the results provide useful suggestions, the application of the quality techniques in a knowledge-intensive environment, where the primary resources are people rather than material and machines, requires further research to validate. Industry and government are now struggling with the application of existing quality techniques in non-manufacturing sectors (Mann and Dhallin, 2003a). Very little academic research has been conducted to provide guidance in this area. Due to the increased importance of the knowledge sector, this topic has become of vital interest.

This research seeks to close the gap between the lower-level quality control and the higher-level quality management areas discussed above. The research explores decisions relevant to executives and managers that intend to implement enhanced quality business processes within organizations. It provides actionable insight that can be used to measure and implement improvements. Furthermore, the research uses a rigorous statistical methodology to assess the critical predictors for quality improvement. This enables both more detailed prescriptions and a higher level of confidence. The historical
The historical 9 data collected as part of this effort has resulted in the first predictive model for quality implementation efforts, albeit for a specialized instantiation. Furthermore, the research has been conducted primarily in an engineering and logistics environment, providing insight into how quality improvement can be maximized in a knowledge-intensive environment. Industry Motivation Quality methodologies and improvements have been adopted throughout most sectors of industry. The most popular programs include Lean, Six Sigma, and Lean Six Sigma (Murman, et al., 2002), and previous quality programs include Total Quality Management (“TQM”) and Business Process Reengineering (“BPR”). As later discussed, most of these methodologies, although slightly different, share certain core concepts. The aerospace and defense industry has unequivocally embraced these methodologies and clearly perceive them to be of value. The adoption of quality practices within industry is discussed by numerous articles in both the academic and popular press. Studies have shown that quality management implementation can enhance competitiveness (Powell, 1995), improve stock price (Daniels, 2002; Rajan and Tamimi, 1999), lower costs (George, 2002; Hendricks and Singhal, 1997), and reduce cycle-times. Thus, the impact of quality implementation efforts is clearly demonstrated. Industry’s interest in this research is different from that of academia, as industry seeks to obtain a tool to enable better management of quality implementations. The value or return on investment of the initiatives has proven difficult to determine, particularly prior to the launch of the improvement efforts in which the opportunity selection is 10 determined. As described by an executive within the Department of Defense, often the prioritization of implementation projects is done by the “best guess” method. 
In November 2005, the Naval Surface Warfare Center, Port Hueneme Division ("NSWC PHD"), initiated a discussion about the possibility of building a model that would enable the prioritization of improvement opportunities. NSWC PHD desired a more robust method of identifying the opportunities most likely to yield cost savings and cycle-time reductions. The current means of allocating resources was based solely upon perceived importance, rather than demonstrated results.

Research resulting in a tool for improvement prioritization and implementation would be extremely significant for both industry and government. This would enable the optimization of resource allocation towards improving business processes and provide management metrics from which the improvement process could be monitored and evaluated. Currently, NSWC PHD expects that all LSS projects will each result in a minimum $50,000 reduction in costs. This number is simply a heuristic derived from practice, with no validation. The challenge, however, is that some improvement efforts result in a higher cost of implementation than cost savings achieved. Although the value of the implementation, as defined by performance improvements, customer satisfaction, time savings, or cost savings, may ultimately exceed the costs of implementation, the Navy is primarily concerned with cost savings. Therefore, a tool that enables the determination of expected cost savings would be greatly desired.

In June 2006, approval was granted by NAVSEA to conduct research into previous NAVSEA LSS implementation efforts. It was agreed that the historical data associated with all improvement events at the Naval Surface Warfare Centers would be provided for use. In addition, the improvement professionals were to be available for interviews and questions. The successful conclusion of the research has been highly anticipated by NAVSEA. In addition, other organizations have embraced these topics.
In particular, the aerospace and defense industries have begun widespread adoption. For historical and practical reasons, the implementation of the quality methodologies is proving to be challenging, yet necessary. Research able to assist in such implementation for this industry would be highly welcomed.

The author's relationship with NAVSEA is solely that of academic researcher. The author was previously a consultant with the organization, but has not worked with it for several years. However, personal contacts and past performance have enabled the author to convince Navy personnel of the importance and validity of the research, and provide the level of trust required to conduct the effort. No payment is being received for this effort.

Chapter 4: Research Question

As discussed above, the purpose of this research is to bridge the gap in the existing quality literature and provide insight into the implementation efforts of quality methodologies. The research was conducted so that it can statistically test hypotheses regarding the applicability of identified variables in predicting the results of improvement efforts. This has enabled the creation of a statistically robust model that allows predictive modeling of the improvement effort prior to the start of the implementation. Therefore, the research question is proposed below:

Given a standard Lean Six Sigma implementation methodology, what are the factors that will predict the resulting effort and cycle-time savings achieved in a business process characterized as a knowledge-intensive environment?

This research question has two main parts. The first is the identification and validation of the critical factors that impact the implementation of a Lean Six Sigma methodology. The second part of the research is the testing and modeling of the identified critical factors in order to predict the associated savings in a useful manner.
Each of these research components is discussed in detail in the Methodology section below. It is also useful to define some of the terms within the research question in order to ensure a common understanding of the research. These terms are discussed below.

The standard Lean Six Sigma implementation methodology referenced in this proposal pertains to the NAVSEA Standard Work Guide for Lean Six Sigma Implementations (Gooden, et al., 2005). Although there are slight differences among various practitioners, the NAVSEA methodology was constructed as part of the Task Force Lean ("TFL") initiative to incorporate the major elements of Lean, Six Sigma, and Theory of Constraints. The specifics of the LSS methodology will be discussed in more detail in following sections. However, a brief summary of the methodology is provided in Table 1. Although some of the specific techniques and practices may vary between organizations, the NAVSEA methodology is representative of most major implementation efforts seen throughout industry and incorporates the best practices of multiple quality improvement methodologies (Upton and Cox, 2004).

Table 1: Sample Lean Six Sigma Implementation Methodology

Define: Identify what is important to the customer
  • Project selection
  • Charter development
  • SIPOC creation
  • Project planning

Measure: Determine what to measure and validate the measurement system. Quantify current performance and estimate improvement target.
  • Develop data collection plan
  • Value Stream As-Is map
  • Validate measurement system
  • Evaluate normality, stability, and capability

Analyze: Identify the causes of the variation and defects. Provide statistical evidence that causes are real. Commit to improvement targets.
  • Identify process constraints
  • Organize potential causes
  • Perform FMEA
  • Conduct hypothesis testing
  • Develop Future State map

Improve: Determine solutions, including operating levels and tolerances. Install solutions and provide statistical evidence that the solution works.
  • Generate, evaluate, and select solution
  • Conduct design of experiments
  • Pilot and debug solution
  • Plan the implementation
  • Implement the solution

Control: Put controls in place to maintain improvement over time
  • Monitor the system
  • Establish visual controls
  • Manage process performance
  • Hand-off to process owner

(Source: NAVSEA Lean Six Sigma DMAIC Project Roadmap)

"Lean Six Sigma" refers to a methodology that combines the major aspects of Lean and Six Sigma. These are discussed in more detail below, but a brief explanation is provided here for clarity. "Lean" methodologies focus on the maximization of value provided to a customer while eliminating waste. "Six Sigma" refers to a methodology that is focused on minimizing variation within a process. LSS combines these two methods, such that an emphasis is placed on the provision of customer value through the elimination of waste, while seeking to minimize variation within internal processes. Hence, it attempts to combine the aspects of both philosophies into one cohesive methodology (George, 2002). Most of the actual methods used for the implementation of LSS are the same as those observed in Lean-only or Six Sigma-only methodologies. This will be discussed in much more detail in later sections.

A further consideration is that the data used to calibrate the predictive model was collected from NAVSEA facilities. Due to the presence of a common baseline methodology, it is possible to confidently predict that the methods used to complete the implementation efforts are reasonably consistent. To prevent confusion, most of the LSS methodology discussions will be tailored to NAVSEA practices. However, this should not threaten the generalizability of the findings. The NAVSEA version of LSS was developed by Bescorp, under the leadership of Michael Wahl.
In discussing the genesis of his methodology, he clearly stated that NAVSEA followed a traditional LSS perspective, utilizing tools from both methodologies as appropriate. This is further reinforced by the fact that the required materials include Michael George's Lean Six Sigma book (George, 2002), Womack's Lean Thinking (one of the seminal Lean works), and statistical training material approved by the American Society for Quality ("ASQ"). In fact, the NAVSEA black belts are highly encouraged to pass the ASQ Certified Six Sigma Black Belt exam, the gold standard for Six Sigma certification.

Given the above circumstances, it is reasonable to conclude that the NAVSEA LSS methodology can be considered a standard Lean Six Sigma methodology of the kind currently deployed in numerous organizations.

The effort and cycle-time savings are the metrics used to measure the success of the implementation effort. Each project is required to report cost savings and cycle-time savings. "Cost savings" are defined as the amount of money saved through the changes identified in the process. These savings are normally reported as FTEs or man-hours. This is advantageous for this research because it eliminates the need to adjust for the different overhead rates that may be present at different Navy installations. The collection of man-hours reduced from the process enables the calculation of an ROI (man-hours saved ÷ man-hours spent) and a percentage cycle-time reduction (1 − new cycle-time ÷ old cycle-time). By having the data reported in this format, it is automatically normalized across various locations with different cost structures dependent upon local factors. This actually increases the generalizability of the proposed model and will enable direct comparison across different facilities that may use different billing rates.
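The two normalized metrics just defined can be computed directly. The following sketch is illustrative only: the helper names and the sample figures are hypothetical, not drawn from the NAVSEA data.

```python
def roi(man_hours_saved: float, man_hours_spent: float) -> float:
    """ROI as defined above: man-hours saved divided by man-hours spent."""
    if man_hours_spent <= 0:
        raise ValueError("man_hours_spent must be positive")
    return man_hours_saved / man_hours_spent

def cycle_time_reduction(old_cycle_time: float, new_cycle_time: float) -> float:
    """Percentage cycle-time reduction: 1 - (new cycle-time / old cycle-time)."""
    if old_cycle_time <= 0:
        raise ValueError("old_cycle_time must be positive")
    return 1.0 - new_cycle_time / old_cycle_time

# Hypothetical event: 1200 man-hours saved for 400 man-hours invested,
# and cycle time cut from 30 days to 21 days.
print(roi(1200, 400))                          # 3.0
print(round(cycle_time_reduction(30, 21), 4))  # 0.3, i.e. a 30% reduction
```

Because both metrics are ratios of like units, the overhead and billing rates of individual installations cancel out, which is exactly the normalization property the text relies on.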
Further, the effort and cycle-time savings will be normalized and reported as a percentage improvement, in order to compare the results across processes of different sizes. It should be noted that the performance improvements for the process will not be studied. The primary metric of interest is cost. It is assumed that the process outputs will meet the specified customer requirements and that the requirements are stable. The purpose of the process improvement is to meet those customer requirements as quickly and cheaply as possible. Thus, an improvement to the process will be an ability to provide a product that meets the same minimum requirements at either a reduced cost or in a faster manner. Although the cost and time metrics do not assess the process output quality, they do address the issue of primary concern to the customers.

The research question also references the existence of a process. Although the importance of processes is almost universally recognized, there is not a single, definitive definition that stands above the others. For the purpose of this research, the following EIA-632 definition will be used: "A set of interdependent tasks transforming input elements into products." The research is focused on implementation opportunities that involve business processes that are observable and repeatable, in order to optimize ongoing activities. The EIA definition was utilized due to its acceptance within the aerospace and defense environments and its congruence with LSS methodologies.

Another definition required for this research question is that of a knowledge-intensive ("KI") environment. "Knowledge-intensive environments" can be defined as those processes in which the primary inputs are people and their knowledge, which are used to perform activities. The actual process of producing is intangible in nature (Drucker, 2003). These processes employ human and social capital, rather than physical capital, to produce the output.
The concepts of human and social capital will be discussed in much more detail in following sections. The result of a KI process is not a widget that can be touched; rather, it is information or knowledge that can be utilized. A classic example of such a process is systems engineering, from which the output is a better designed, integrated, and usable product; there is no actual systems engineering widget.

KI environments were chosen as part of the research question because most of the work accomplished at the Naval Surface Warfare Centers ("NSWCs") has KI characteristics. NSWCs do not manufacture or create new items. Instead, they are responsible for the maintenance, modernization, and configuration management of existing assets. The average person employed is involved in engineering, logistics, or administrative duties. The research question is elaborated and discussed further in the Central Hypothesis section below.

Chapter 5: Literature Review

The past two decades have witnessed an explosion of interest in the topics of quality management, process control, and continuous improvement. From Deming's early work (Deming, 1986) on quality management and statistical control, process improvement and quality management have seen a proliferation of ideas and methodologies. These have attracted the attention of both managers and academic theorists under numerous fads and trends. The result is that quality is now considered a business necessity rather than a differentiator. The specific methodologies and techniques have continued to evolve and change, depending upon the preferred management keywords. Some examples include business process reengineering (Hammer and Champy, 2006), total quality management (Evans and Lindsay, 2001), Baldrige, Six Sigma (Pande, et al., 2000), the Toyota Production System (Liker, 2004), and Lean (Womack and Jones, 2003).
The specific content of these methodologies has differed slightly and evolved throughout the years, which will be discussed in more detail below. However, the importance of improving processes and the consequent business results have remained consistent. In addition to the management attention associated with quality topics, the impact of introducing improved business processes and total quality can be dramatic. Several studies have demonstrated that Baldrige companies have consistently outperformed the S&P 500 (Daniels, 2002; Rajan and Tamimi, 1999). Similarly, numerous studies have documented cost savings, value enhancement, and cycle-time reductions associated with Six Sigma and Lean (Leitner, 2005; Oppenheim, 2004). Thus, there is increasing pressure on companies to implement high-quality business processes in order to enhance competitiveness. Conversely, there is also an inherent danger in implementing quality methodologies in a poor manner. Baldrige-winning companies have failed since winning the prestigious quality award. Other organizational failures resulting from the misapplication of Six Sigma, TQM, or BPR are easy to find (Hammer, 2002). Therefore, it is necessary to fully understand the ramifications of the implementation efforts and identify some way to predict their success. Unfortunately, academic literature is largely silent on this topic. Despite the attention that has been focused on these issues, rigorous academic literature has lagged behind industry acceptance. Where these topics have been researched, most of the literature is of a qualitative, rather than quantitative, nature. This has resulted in a recognition of the importance of the topic, but a relative lack of formalized tools and models that can provide insight into the basic nature and implementation of the concepts. In addition, there appears to be a disconnect between the quality literature and general management theorists on several topics (Dean and Bowen, 1994).
This is illustrated in Figure 1, below, in which the overlap between the total quality and management theory literatures is shown, along with the similarity of prescription between the two fields. The purpose of this research has been to investigate the implementation of quality management programs in order to obtain a more thorough understanding of the critical predictors of the improvement effort. This research assesses the identified predictors to determine significance and the relative weightings. Moreover, these variables have been used to create a predictive model of quality management implementation efforts in terms of expected cost savings and cycle-time reductions. The research significantly extends the body of knowledge on quality management implementation and provides the first quantitative tool to predict the expected quantitative benefits resulting from quality management implementation efforts.

Figure 1: Overlap between Quality and Management Theory Literatures (Dean and Bowen, 1994)

This section will provide the reader with a general understanding of the previous literature on quality and process management topics. The general characteristics and methodologies will be presented and discussed. The analysis and applicability of the relevant methodologies to the proposed research will be analyzed in greater detail in the following section. Each of the following sub-sections is a summary of relevant topic areas and is not meant to be an all-inclusive discussion.

Definitions

One of the most common questions with respect to quality and quality management is, "How is quality defined?" This topic has been discussed in numerous industry and academic forums. Some people have argued that quality should be defined as cost, schedule, and performance, making it consistent with standard project management literature. Others have provided a much more nuanced view of quality.
Garvin (1984) argued that there are seven major dimensions on which quality could be assessed. These are listed in Table 2 below.

Table 2: Garvin's Dimensions of Quality

Performance: The ability of the product to meet its specified performance requirements.
Features: The unique aspects of a product that make it distinctive.
Reliability: The probability that a product will meet its performance specifications.
Conformance: The degree to which a product adheres to established standards and expectations.
Durability: The amount that a product can be used before it begins to exhibit some type of deterioration.
Serviceability: The ability to repair and maintain a product.
Aesthetics: The impact that the product has on the senses.

It should be noted that the above definitions were developed primarily for a manufacturing orientation. As a result, their applicability to service- or knowledge-centric environments is not necessarily straightforward. Thus, Garvin's dimensions, while presented, will not be considered for the proposed research. Another common quality definition utilizes the Kano model, in which customer satisfaction is based upon three types of customer requirements: Dissatisfiers, Satisfiers, and Exciters/Delighters (Evans and Lindsay, 2001). Dissatisfier requirements are those that are expected or considered standard. Failure to include these will inevitably lead to a perceived lack of quality. Satisfiers are requirements that most customers say they want; fulfilling them leads to customer satisfaction. Exciter/Delighter requirements are new or innovative features that customers do not expect, but once they see them they like the features. Inclusion of these may lead to a high perception of quality.

Quality Management

The topic of quality management covers a broad and diverse literature with numerous streams of research.
The field has been heavily influenced by a series of quality gurus, such as Deming, Juran, Crosby, Taguchi, Ishikawa, and Feigenbaum (Evans and Lindsay, 2001). Each of the gurus advocated various practices and prescriptions. What is rather unique about this body of literature is that it was developed by practitioners and later migrated into academic discourse. Consequently, there is no unified theory of quality; most quality research has been descriptive or anecdotal in nature (Cole and Scott, 2000). The quality literature can be segregated into two distinct streams: (1) quality control and (2) quality management. Quality control literature is focused on specific tools and techniques for ensuring quality and reliability in operations. Quality control topics tend to be heavily manufacturing oriented and are generally applicable at very low levels of the organizational hierarchy. Examples of quality control topics are discussions of statistical process control ("SPC") tools and techniques. In contrast, quality management literature is focused on enterprise-wide issues and practices that enable the implementation of systems that ensure quality outputs at all levels of the enterprise. However, most of the quality management literature has been descriptive and philosophical in nature, with only high-level constructs offered with respect to implementation (Hackman and Wageman, 1995). When discussing quality in an enterprise or strategic context, the perspective of the quality management literature is more appropriate. In addition, as the nature of work has become more knowledge-centric, the application of SPC techniques has become problematic due to the necessity of incorporating people into the analysis of the improvement effort. Additionally, it is the higher-level quality management topics that have been validated as being strategically important and leading to enhanced performance.
Thus, due to the nature of the research question, the remainder of this discussion will focus on the quality management literature. Quality management can best be understood as a collection of principles, practices, and techniques. The principles are the underlying core concepts common to all TQM implementations; they are shared throughout the literature and form the basis of what the practices and techniques are designed to support. As enumerated by Dean and Bowen (1994), the principles are customer focus, continuous improvement, and teamwork. Customer focus refers to the organizational emphasis given to meeting customer demands and providing products and services that fulfill customer needs. Continuous improvement requires a constant improvement of processes, products, and services that enables the fulfillment of customer demands. The teamwork principle incorporates the idea that customer focus and continuous improvement are best obtained by a collaborative effort throughout the organization, in conjunction with customers and suppliers. The principles described above are implemented through various practices, which are groups of activities that support the principles. Examples include process analysis, customer data collection, and employee involvement. The practices are, in turn, implemented through specific techniques, such as Pareto analysis, surveys, and 360-degree team reviews. The quality control literature previously mentioned focuses on techniques. Other methodologies, such as Six Sigma and ISO, include aspects of practices and techniques. Lean methodologies tend to address all three levels, although there is a higher emphasis on principles, with an assumption that practices and techniques will be adopted as necessary to support the principles. Quality management implementations were largely associated with the adoption of Total Quality Management ("TQM") practices in the early 1990s (Evans and Lindsay, 2001).
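Of the techniques listed above, Pareto analysis is the most mechanical: defect causes are tallied, ranked by frequency, and the "vital few" causes accounting for the bulk of all defects are selected for attention. A minimal sketch follows; the defect categories, counts, and the conventional 80% cutoff are illustrative assumptions, not data from this research.

```python
# Pareto analysis: rank defect causes and select the "vital few" that
# account for a chosen share (conventionally 80%) of all observed defects.
def pareto_vital_few(defect_counts, cutoff=0.80):
    total = sum(defect_counts.values())
    ranked = sorted(defect_counts.items(), key=lambda kv: kv[1], reverse=True)
    vital, cumulative = [], 0
    for cause, count in ranked:
        vital.append(cause)
        cumulative += count
        if cumulative / total >= cutoff:
            break
    return vital

# Hypothetical tally of 100 defects by cause.
counts = {"wrong part": 48, "missing weld": 27, "scratch": 12,
          "misalignment": 8, "paint blemish": 5}
print(pareto_vital_few(counts))
# -> ['wrong part', 'missing weld', 'scratch']  (first causes to reach 80%)
```

The same routine supports any cutoff; lowering it to 0.5, for example, isolates only the top two causes in this hypothetical tally.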
TQM represented a broad methodology that incorporated the writings of Deming, Juran, Crosby, and other quality gurus. In addition, a significant body of industry, consulting, and academic work was done to further define the various components. Despite variations that could be found with respect to specific TQM implementations, the general principles discussed above are universally recognized as the core components of TQM. The practical implementation was also associated with an emphasis on structured problem-solving methodologies that employed various practices and techniques depending upon the implementation. A common tool for implementing TQM was the Malcolm Baldrige National Quality Award ("MBNQA"), which provided assessment criteria around the generally recognized areas necessary for TQM implementation. The MBNQA has been credited with improving the quality of American manufacturing (Curkovic, et al., 2000). However, numerous critics have also shown that MBNQA winners rarely sustain their performance. In fact, there are also numerous instances in which the companies failed. Academic research into quality management has identified a series of constructs (i.e., practices in the nomenclature above) that support the quality management principles. Numerous quality management constructs have been identified (Saraph, et al., 1989; Flynn, et al., 1994; Anderson, et al., 1995; Ahire, et al., 1996). In an analysis of hundreds of survey-based quality management studies, Sila and Ebrahimpour (2002) identified at least twenty-five constructs that were operationalized in multiple studies. However, there is some agreement about the most important constructs. As validated by Black and Porter (1996) and Curkovic, et al. (2000), the MBNQA constructs used for award assessment include the major components of most quality guru writings and researcher studies. In addition, the constructs have been used to discuss quality management in the context of management theory.
These constructs are leadership, strategic planning, customer and market focus, information and analysis, human resources management, and process management. Despite agreement on identified quality management constructs, there is little in the way of prescriptive solutions for their implementation. Most of the survey research has focused on determining whether the constructs were significant, rather than how to implement them. The focus of this research is to fill this gap. In addition, the empirical studies have adopted the constructs as universal, but have not attempted to examine how the construct significance or the factors underlying their implementation may vary in different organizational contexts (Sitkin, et al., 1994). Furthermore, the bulk of the empirical evidence validating the identified constructs has been collected in manufacturing industries, raising doubts about the applicability of existing quality management research in other domains (Bailey, et al., 1999). As a result, although the identified TQM constructs and the research supporting their implementation are a useful starting point, they provide little insight into how best to implement the quality improvement methodologies at a process level. Furthermore, there is no research in this domain that allows the prediction of results from process improvement implementation efforts, especially in a knowledge-intensive environment. The retreat (perhaps failure) of TQM initiatives can be largely attributed to the fact that the efforts were unsustainable due to unfocused implementation efforts. Although TQM raised the visibility of quality efforts and communicated the importance of quality, it eventually developed a reputation for having more style than substance and a heavy focus on cost savings, rather than quality improvement. The result was a movement away from TQM and into other process improvement methodologies such as Six Sigma and Lean.
Six Sigma

Six Sigma, Lean, and Lean Six Sigma are process improvement methodologies that have become popularized within both the academic and business literature. Each of them has a somewhat different orientation. However, the general results are similar, and they are often deployed together. Six Sigma is a methodology that uses a structured decision-making process to reduce variability within a process. The methodology was developed by Motorola in the late 1980s and successfully used to improve process capabilities and quality (Wortman, 2001). In fact, many observers believe that the Six Sigma efforts were the key factor contributing to Motorola's improvement that resulted in a Malcolm Baldrige National Quality Award in 1988. Its focus is to minimize variability within a process in order to maximize the quality of the output. The Six Sigma name represents a quality level of, at most, 3.4 defects per million opportunities (ppm). The underlying theory of Six Sigma is that statistical tools can be deployed to minimize the variability within a process, resulting in large gains in quality and improvements in profitability. One of the key assumptions is that an improvement in process capability is required to minimize the variability and improve the results. "Process capability" is defined as "the range over which the natural variation of a process occurs as determined by the system of common causes" (Evans and Lindsay, 2001). Thus, process capabilities are indicative of the expected result of the process based upon systemic and random variation. An important aspect of Six Sigma is that it examines defects as they occur with respect to the process. Thus, the total number of defects is important only in the context of the total amount of output. The 3.4 ppm represents a situation in which 34 or 340 defects could be present in a sample size of 10 million or 100 million, respectively, but mean exactly the same thing statistically.
From a Six Sigma perspective, the importance is on the over-arching process trend itself. A key assumption of the Six Sigma methodology is that the process and its results can be represented by a normal distribution with a measured mean and standard deviation. In this situation, 99.73% of the measurements will fall within plus or minus three standard deviations (or sigmas) of the mean. Similarly, based upon the normal distribution, 99.99966% of the measurements will fall below a one-sided limit 4.5 sigma above the mean (Feigenbaum, 1961). Although normal distributions are generally considered to be the most common and are applied in numerous situations, this is a dangerous assumption that must be verified. If a process follows a different type of distribution, such as the Weibull, beta, or exponential, the underlying statistics would be quite different. Although Six Sigma has been popularized in business literature, the normality assumption is often ignored, to the detriment of proper statistical analysis. Process capabilities are often expressed as an index, Cp, in which the ratio of the process specifications to the natural tolerance of the process is measured. The process specifications are generally used to refer to the design limits of the process, such as specified measurements for parts. However, they can also be used to specify the service requirements in non-manufacturing processes. It is important to note that limits may be exceeded by being either above or beneath the limit. This is reflected in the fact that both the Upper Specification Limit ("USL") and Lower Specification Limit ("LSL") are used to calculate Cp, as shown in Equation 1 below (Wortman, 2001). Note that the difference between the USL and the LSL is the design tolerance.

Equation 1: Cp = (USL - LSL) / (6 * standard deviation)

It should be recognized that Equation 1 is not solely a Six Sigma equation. Rather, it comes from a long history of process control literature.
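Equation 1 can be computed directly. In the sketch below, the specification limits and standard deviation are illustrative values only, not measurements from this research.

```python
def process_capability(usl, lsl, sigma):
    """Cp index per Equation 1: design tolerance over the natural
    process spread of six standard deviations."""
    return (usl - lsl) / (6 * sigma)

# Illustrative: a dimension specified at 10.0 +/- 0.3 units, produced
# by a process with a standard deviation of 0.05 units.
cp = process_capability(usl=10.3, lsl=9.7, sigma=0.05)
print(cp)  # 2.0, i.e. Cp = k/3 with k = 6: a "six sigma" capability
```

Halving the process spread doubles Cp; conversely, a standard deviation of 0.1 units in the same example would yield Cp = 1.0, the bare three-sigma capability.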
The 6 * sigma in the denominator is due to the fact that 99.73% of the measurements will fall within plus or minus 3 standard deviations (Feigenbaum, 1961). In general, any level of quality could be defined (e.g., 3 sigma, 5 sigma) using Equation 2, in which k is the defined level.

Equation 2: k * standard deviation = tolerance / 2

Thus, a k-sigma quality level would produce a Cp of (2k * sigma) / (6 * sigma), or k/3. This allows for the calculation of other levels of quality. The Six Sigma quality level introduced by Motorola expanded this such that it was defined as a process level in which the variation is equal to half of the design tolerance, while allowing the mean to shift by as much as 1.5 standard deviations (Harry, 1998). The allowance for a shift in the mean was the result of an observation by Motorola that field studies suggested that processes shifted by this amount on average over time. This also corroborates well-known observations in the quality control literature that it is extremely difficult for processes to be maintained in exact control (Wortman, 2001). Furthermore, common statistical process control methods often only allow for the detection of shifts greater than two sigma. As a result, it is feasible for the process mean to shift and not be noticed by the process engineer. Consequently, Motorola incorporated the 1.5 sigma shift in order to better model the process (Harry, 1998). It should be noted that the 3.4 defects per million can be achieved in numerous ways, depending upon the capability index of the process and the amount of shift allowed. For instance, a 6 sigma process with a 1.5 sigma shift will produce 3.4 ppm defects on average. However, so will a 5 sigma process allowing a 0.5 sigma shift and a 5.5 sigma process allowing a 1 sigma shift (Evans and Lindsay, 2001). Thus, the 3.4 number should not be held as the most important metric. This represents another underlying assumption.
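The equivalence of these sigma-level and shift combinations can be checked numerically: each leaves the nearer specification limit 4.5 standard deviations from the shifted mean, and the one-sided normal tail beyond 4.5 sigma is about 3.4 per million. The sketch below uses only the standard library; the one-sided-tail simplification follows the convention of the Six Sigma literature cited above.

```python
import math

def dpmo(sigma_level, shift=1.5):
    """Defects per million opportunities for a normally distributed
    process whose mean has drifted `shift` standard deviations toward
    the nearer specification limit. Computes the one-sided tail
    P(Z > sigma_level - shift) via the complementary error function."""
    z = sigma_level - shift
    tail = 0.5 * math.erfc(z / math.sqrt(2))
    return tail * 1_000_000

# All three combinations leave the limit 4.5 sigma away, hence ~3.4 ppm:
print(round(dpmo(6.0, 1.5), 1))  # 3.4
print(round(dpmo(5.0, 0.5), 1))  # 3.4
print(round(dpmo(5.5, 1.0), 1))  # 3.4
```

The same function makes the sensitivity to the shift assumption explicit: holding the 6 sigma capability fixed but letting the mean drift more or less than 1.5 sigma changes the defect rate accordingly.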
If the process were to shift less or more, the resulting failures per million would be fewer or greater, respectively. Hence, it is possible that improper decisions could be made with respect to controlling the process or with respect to continuing investment in improving it. Six Sigma is generally noticed for its statistical contributions (i.e., 3.4 ppm defects). Arguably, the emergence of the DMAIC methodology to support Six Sigma implementation efforts is a much more important contribution (Hammer, 2002). DMAIC stands for Define, Measure, Analyze, Improve, and Control. Definitions for each phase are in the table below. The methodology is designed to provide users with a structured means of approaching any problem and applying rigorous statistical methodologies to improve the process. It has gained in popularity and become quite common in many industries.

Table 3: Definition of DMAIC Phases (Wortman, 2001)

Define: Select the appropriate response to be improved.
Measure: Gather the data needed to analyze the problem.
Analyze: Identify the root cause of defects and assess the causes of the variation.
Improve: Reduce variability or eliminate the cause.
Control: Monitor the process to sustain the improvements.

Six Sigma methodology is best conceptualized as a toolbox that enables the practitioner to utilize numerous different tools, depending upon the circumstance. Examples of the specific tools and techniques applied during the Six Sigma phases are shown in Figure 2. Most applications of DMAIC rely upon significant statistics, making the applications quite similar to the statistical process control ("SPC") topics in the earlier quality control literature. The advantage of this focus is that it provides great rigor to the analysis of a problem or process. Despite the SPC orientation, Six Sigma methodology has been deployed throughout numerous organizations, including functional areas such as engineering, manufacturing, etc.
In these instances, the focus continues to be to reduce variability. However, tools and techniques are adapted as necessary to deal with less quantitative process controls. Six Sigma also relies upon the creation of a highly trained cadre of improvement experts (Green Belts or Black Belts) that apply the improvement tools. Applying DMAIC methodology is problematic, as it can be used inappropriately. The best illustration of this is that DMAIC methods can be used to minimize the variability within a sub-process; however, the sub-process may not be value-added. As a result, the inclusion of the sub-process does not help the process or contribute to customer satisfaction. Thus, a significant amount of time and effort could be spent on reducing the variability of something that could have been simply eliminated.

Figure 2: Six Sigma DMAIC Methodology

Although Six Sigma has proven to be effective in numerous situations, an improper application of the principles can be costly. As noted by Hammer (2002), "Six Sigma success is not business success." Six Sigma must be deployed in a more general process management effort and used as part of a larger, project-oriented, problem-solving regimen. One major drawback of the methodology is that Six Sigma will optimize the process, even though the process itself may add no value to the company. Furthermore, there have been significant criticisms of Six Sigma, such that an improper application of the methodologies will result in decreased innovation (Hammer, 2002). Numerous companies, such as Kodak, Xerox, and Polaroid, have demonstrated business reversals after adopting Six Sigma. Even Motorola has had difficulty in the past couple of years, despite its adherence to Six Sigma applications. Hammer notes that a review of Six Sigma implementation efforts suggests that it can lead to higher quality and lower costs, but that it is not effective at generating dramatic improvements in business performance. This can be attributed to the fact that the methodology seeks to reduce variability and improve the performance of existing processes. However, this can also stifle innovation and prevent new processes and methods from being nurtured. Another problematic area for the implementation of Six Sigma is in knowledge-intensive processes. These processes are characterized by relying upon people to create products based upon their own skills and knowledge rather than raw materials and tools (Prusak, 1997).
The resulting process is based upon intangible characteristics and is much more difficult to measure and standardize. For example, it is impossible to completely standardize the manner in which someone might create a new design or write a report. Thus, although Six Sigma is applicable to the degree that it can provide structured methodologies to assess problems and encourage people to look for root causes, the statistical focus is much less applicable. It is simply impossible to expect humans to eliminate variability from their thinking processes entirely. The achievement of the 3.4 ppm metric is of limited relevance for the purposes of this research. This research is essentially an investigation of process management and modeling, using a standard methodology to baseline the improvement efforts in order to identify those variables within a process that may help predict its likelihood of being improved. The purpose of the research is to identify and validate the variables that can be used to generally characterize a process. These variables have been used to create a statistical model that can predict expected ROI from improvement events based upon historical results. Although the achievement of a 3.4 ppm process capability would most likely demonstrate significant process maturity, it is only one of many characteristics that could be used to predict whether future improvements would be useful. As stated earlier, the minimization of variability does not guarantee business results. Thus, process maturity will be assessed qualitatively and used as a variable in the regression analysis. In a similar vein, the discussion of specific methodologies deployed to improve the process is likely somewhat distracting. Modern process improvement methodologies have shown increasing convergence (George, 2002). All of them now incorporate some use of statistical tools and structured problem solving.
The specifics may vary slightly, but the use of Lean Six Sigma, Lean, or Six Sigma methods will most likely yield similar results on the same process. Thus, although the research uses the NAVSEA Lean Six Sigma methodologies, it is hoped that the research can be generalized beyond a specific implementation methodology and used for a more general process management purpose. In many ways, Six Sigma is an extension of the fundamental TQM principles. Key principles that are required for the implementation of Six Sigma improvement efforts include committed executive leadership; integration with existing initiatives and business strategy; process thinking; disciplined customer and market knowledge; results orientation; and training (Blakeslee, 1999). Thus, although Six Sigma has proved to be quite popular in implementing quality improvement initiatives, it represents a set of tools that can be applied to a particular process and does not provide much direction in recommending which process should be improved or the expected results of improving the process. Consequently, the Six Sigma methodology, while recognized as having value, will have limited applicability to this research effort.

Lean

Lean is the name commonly used to refer to the Toyota Production System ("TPS"). Initially researched by the International Motor Vehicle Program at the Massachusetts Institute of Technology ("MIT"), the techniques and results of the TPS were popularized by "The Machine that Changed the World" (Womack, et al., 1990). Since this publication, lean concepts and techniques have been promulgated by numerous authors, academics, consultants, and industry. Although there are sometimes slight variations, Lean philosophy and concepts can be summarized in the following five steps:

1. Specify value as defined by the customer
2. Identify the value stream that transforms inputs to outputs
3. Make value flow through the process
4. Let customers pull value (Just-In-Time production)
5. Pursue perfection through continuous improvement

Hallum (2003) notes that the Lean name has been used to describe at least four aspects of a company: operating philosophy, tools, activities, and the state of the manufacturer. This has resulted in confusion with respect to what the application of Lean actually constitutes, and a lack of understanding about the requirements for the enterprise transformation required to implement the TPS. The Lean concept (or, more accurately, the Toyota Production System) was pioneered by Taiichi Ohno in post-war Japan. Due to the constraints facing his country, Ohno developed a production methodology that could survive in a fragmented domestic market, with a limited workforce, scarce supplies, and little capital investment. The result was a system for building cars that was quite different from traditional mass production. The system he designed had to be able to produce only what was absolutely necessary to meet customer demand and requirements, allowing highly varied and affordable products with small production runs (Murman, et al., 2002). This was achieved by producing only in response to actual orders placed by customers, enabling customer pull in which the production system could operate with minimum stock. Another key development was the conceptualization of production flows in reverse, allowing for the creation of just-in-time production. This also relied upon a novel approach at that time, in which the subcontractors and suppliers were considered part of the extended enterprise and incorporated into the business and production planning. Each of the links in the process was synchronized such that the production process was leveled, minimizing waiting and work-in-process ("WIP") (Ohno, 1988). Another important aspect of the original TPS was the radical reduction in set-up time.
This was accomplished by Shingo, a contemporary and associate of Ohno, and led to the development of the idea of autonomation, or Jidoka, loosely defined as 'automation with a human touch' (Wortman, 2001). Jidoka focuses on the interaction between humans and machines to achieve perfect coordination in the production cycle. In addition, the TPS places an emphasis on the relentless elimination of waste. It is through the elimination of waste that costs are reduced and productivity improved. Seven types of waste have been identified: overproduction, waiting, transport, extra processing, inventory, motion, and defects (Table 4). Each of these contributes to unnecessary work and does not add value to the final product. A commonly claimed type of non-value-added activity is testing or inspection. However, this claim should be applied carefully because, in some instances, testing could be considered high value-add, depending upon the requirements of the customer. The TPS is also known for its emphasis on employee empowerment. Toyota actively seeks out worker suggestions in continuously improving the process. This has led to the institutionalization of the kaizen approach, in which there is a continuous focus on improving the quality of the process. Common techniques include poka-yoke (error-proofing) and andon (visual displays of production status). The kaizen process relies upon workers to implement improvements and focuses on a structured problem-solving approach and work standardization. The impact of implementing Lean philosophies has been conclusively shown to produce positive results (Swank, 2003). However, a survey of Lean literature quickly reveals that most of the research has focused on the techniques and application of Lean manufacturing. Efforts at applying Lean techniques in KI environments are relatively recent, with few results to discuss (Oppenheim, 2004). Other research has focused on developing assessments of Lean applications (Sawhney and Chason, 2005).
Table 4: Types of Waste

Overproduction
• Result of operations continuing after they should be stopped
• Results in excess quantities and products made before the customer needs them

Waiting
• Periods of inactivity in downstream processes
• Occurs because upstream activities do not deliver on time

Transport
• Unnecessary movement of materials
• Should be minimized because it adds time to the process and goods can be damaged during transport

Extra Processing
• Rework, reprocessing, handling, and storage
• Inspection and non-value added testing

Inventory
• Excess inventory not directly required for current customer orders
• Includes work-in-process (“WIP”) and raw materials
• Incurs costs to store and/or create the inventory

Motion
• Extra steps by employees to accommodate inefficient layout, defects, or inventory
• Adds time and no value to the customer

Defects
• Products that do not conform to customer expectations
• Causes customer dissatisfaction and hidden costs

One notable example of an assessment methodology for Lean is the Shingo Prize. Its application is similar to that of the MBNQA, in that it is an award given by a non-profit group that recognizes Lean enterprise management. The Shingo Prize assesses companies on eleven different areas, grouped into five major categories which are remarkably similar to the MBNQA criteria. Like the MBNQA, it does not prescribe recommended solutions; rather, it highlights elements that may be incorporated to meet the objectives (www.shingoprize.org). Despite intense interest from industry, no attempts have been made to create a predictive model of Lean implementation efforts. Thus, executives who must manage implementations have almost no ability to predict expected results or manage ongoing efforts; they must accept on faith that the Lean implementations will improve business results. 
The nature of this research question does present some potential commonalities with the research conducted by the Lean Aerospace Initiative (“LAI”). LAI is a research group at MIT that manages a consortium of government and industry partners that collaborate on research in mutual areas of interest in Lean enterprise management. Although LAI nominally addresses all elements of the aerospace industry, most of its research has been focused around organizations involved with aircraft manufacturing and maintenance. The result has been positive applications of Lean principles in this area. However, the larger issue of how to address purely knowledge-intensive environments is addressed only tangentially. LAI has also developed three tools of interest to this research: the Transition to Lean Roadmap (“TTL”), the Lean Enterprise Self-Assessment Tool (“LESAT”), and the Government LESAT (“GLESAT”). TTL is a model of Lean implementation that assists companies in implementing Lean (Lean Aerospace Initiative, 2000). In contrast, LESAT and GLESAT are assessment tools that attempt to rate how well a Lean enterprise transformation is doing with respect to the set of factors considered most important in transitioning to a Lean enterprise (Lean Aerospace Initiative, 2001). In this respect, LESAT and GLESAT are similar to the MBNQA and Shingo Prize assessments. The LESAT and GLESAT provide another means to assess the enterprise perspective of quality improvement efforts. However, after discussing the development of the tool with the LAI researchers, it is apparent that the assessment methodologies were developed by qualitative methods. Furthermore, there has been no research attempting to show the specific benefits from incremental improvements in LESAT scores. Thus, not only does the question of ROI for a given implementation effort remain unanswered, but a methodology for prioritizing improvement events is also not addressed. 
Lean Six Sigma

Many enterprises are currently implementing a methodology called Lean Six Sigma. This nominally combines the philosophy of Lean with the statistical focus of Six Sigma. The concept of LSS was introduced and popularized by the work of Michael George (George, 2002), and seeks to combine the best aspects of both methodologies into a single, integrated perspective. In fact, the tag line of George’s book was “Six Sigma Quality with Lean Production Speed.” Thus, LSS is explicitly an amalgam of the Lean and Six Sigma methodologies. Lean methodologies are based on the Toyota Production System. Lean is a management philosophy that emphasizes five principles: (1) Value is specified by the customer, (2) Identify the value stream of a process, (3) Make value flow through the process, (4) Let customer demand pull the process, and (5) Pursue continuous improvement (Liker, 2004). In order to improve a process, Lean focuses on maximizing value to the customer while minimizing non-value added components that are classified as waste. Most of the Lean tools are used to analyze process flows and delay times in order to maximize the velocity of the process (Womack, 2003). Lean improvements are made through a series of small projects, or kaizen events, that focus on improving individual aspects of the process. Statistical tools and data analysis are used in an ad hoc manner to support the improvement efforts. In contrast, Six Sigma is a structured problem solving methodology that focuses on minimizing the variability within a process. Six Sigma uses a very rigorously defined process called DMAIC: Define, Measure, Analyze, Improve, and Control. The DMAIC process relies upon data-driven analysis and a comprehensive set of statistical tools. Specific tools and techniques used at each phase can be found in the Certified Six Sigma Black Belt materials (Wortman, 2001). Most of the tools are familiar to statistical process control (“SPC”) professionals. 
Six Sigma emphasizes the need to recognize opportunities and eliminate defects within the process. Another important aspect of Six Sigma is that it provides a highly prescriptive cultural infrastructure in which experts (i.e., Black Belts and Green Belts) are trained to do the improvement work. Lean Six Sigma, as defined in the literature (George, 2002), combines aspects of each of the two elements. The emphasis for LSS is to achieve the increased process velocity (through flow and customer pull) inherent to Lean, but with the reduction in variability achieved by Six Sigma. Since neither of these is mutually exclusive, the combination of the two methodologies is relatively straightforward. The table below (Upton and Cox, 2004) explains elements of this combination in more detail. Nine characteristics are used to compare the three methodologies. The infrastructure refers to how the implementation methodologies are advocated throughout the enterprise. In actuality, there is little difference between the three methods. However, Six Sigma and LSS do provide well-structured programs for enterprise training and implementation. The incentives discussed below refer to what motivates employees to participate. In this perspective, Lean is more of an organic method that grows from use; Six Sigma and LSS are much more structured and often used as a means of career development. Similarly, Lean participants can include anybody in the company, while Six Sigma and LSS participants tend to be both dedicated and non-dedicated process improvement resources. The use of statistics is often cited as an important differentiator between Lean and Six Sigma. Lean Six Sigma opts to use whichever statistical methods are most appropriate. However, in actuality, Lean implementers will quite often use very complex statistics as necessary. The perceived gap has narrowed dramatically over the past decade. 
Despite this, the DMAIC methodology does have a distinct bias towards incorporating statistical tools into the analysis process. This is also evident in the use of tools. Lean uses a portfolio of tools, with limited structure. In contrast, Six Sigma uses a very structured DMAIC process with prescribed tools. LSS modifies the DMAIC process slightly to incorporate more of the Lean tools (such as value stream mapping) and uses the DMAIC process as appropriate. Both Lean and Six Sigma are process-centric. However, due to their different result focuses, Six Sigma tends to be product-centric (i.e., reduce the variability of the product), while Lean tends to focus more on the systems of processes and the identification of non-value added work. The systems-of-processes focus is a legacy of value stream mapping, one of the fundamental tools used in Lean implementations. This difference has also narrowed as value stream mapping has become integrated into most improvement events. The Six Sigma equivalent, SIPOC, captures very similar information. Project identification can be quite different between Lean and Six Sigma, in that Lean emphasizes input from the workers and incorporates popular suggestions. Six Sigma relies more upon strategic initiatives and uses the Define phase to ensure the value of the effort. Similarly, Six Sigma processes use gated reviews after each phase, whereas Lean efforts tend to be broadcast when they are done. LSS follows the Six Sigma approach to add rigor to the review process. 
Table 5: Comparison of Lean, Six Sigma, and Lean Six Sigma (Upton and Cox, 2004)

Infrastructure
• Lean: Senior Leaders, Sensei
• Six Sigma: Champions, Sponsors, Green/Black/Master Black Belts
• Lean Six Sigma: Champions, Sponsors, Green/Black/Master Black Belts

Incentives
• Lean: Haphazard incentive or career development
• Six Sigma: Some incentive, frequent career development
• Lean Six Sigma: Some incentive, frequent career development

Use of Statistics
• Lean: Basic data analysis
• Six Sigma: Advanced statistical analysis preferred
• Lean Six Sigma: Basic or advanced statistical analysis as applicable

Participants
• Lean: Part of everybody’s job
• Six Sigma: Dedicated resources (Black Belts) and non-dedicated resources (Yellow/Green Belts)
• Lean Six Sigma: Dedicated resources (Black Belts) and non-dedicated resources (Yellow/Green Belts)

Focus of Efforts
• Lean: Process centered with some system-of-process thinking
• Six Sigma: Process centered with some product-centric considerations
• Lean Six Sigma: Process centered with both product-centric and some system-of-process thinking

Project Identification
• Lean: Popular and strategic project identification
• Six Sigma: Strategy-driven project identification
• Lean Six Sigma: Popular and strategic project identification

Tool Structure
• Lean: Portfolio of tools, some structure
• Six Sigma: Structured tool use through DMAIC
• Lean Six Sigma: Structured tool use through modified DMAIC

Review Process
• Lean: Updates during Kaizens, communication at the end
• Six Sigma: Gated reviews at the end of each DMAIC phase
• Lean Six Sigma: Gated reviews at the end of each DMAIC phase

Focus of Improvement
• Lean: Focus on reduction of cycle time and WIP
• Six Sigma: Focus on reduction of cost and variation
• Lean Six Sigma: Focus on value improvement, cost, cycle time, variation, and WIP reduction

It is apparent from the above table that LSS combines aspects of both methodologies. In addition, LSS tends to pick the most applicable parts of both methodologies as appropriate for the project at hand. There is almost nothing novel with respect to tools or methods, other than the manner in which they are combined. 
Given the above observations and the increasing popularity of LSS, it is fair to state that the process improvement methodologies have converged over the past several years. Most experts and consultants recognize the value of both methodologies and will deploy those that are most useful for a particular situation. A useful heuristic supporting this observation is that the majority of consultants and websites reference both Lean and Six Sigma. For example, the material for the Six Sigma Black Belt Certification has an extensive module on Lean (Wortman, 2001). Conversely, the LAI Lean Academy® has an extensive module on reducing variability through statistical tool use. A valid concern would be the generalizability of the research, given that it is focused on the NAVSEA implementation methodology. However, based upon the above analysis and discussions with NAVSEA experts and other consultants, it is apparent that there is little that differentiates it from a variety of LSS initiatives. In addition, the methodology deployed by NAVSEA would be recognized by almost any type of process improvement expert as similar to a methodology they are deploying. As part of the research, the basic NAVSEA method was compared with Raytheon’s Six Sigma, Boeing’s Lean+, Northrop Grumman’s Six Sigma, and the Air Force’s AFSO-21 programs. The difference in methods is very slight. Any predictive results obtained at the Navy should translate to other organizations. Slight modifications or calibrations may be required, but this is a standard practice for modeling efforts. There is the possibility that a new methodology may emerge and supplant LSS. However, if previous experience with TQM is any guide, new methodologies will leverage existing efforts. Thus, LSS will most likely become a standard way of doing business. 
A radical change in methodologies will also most likely rely upon LSS methods to some degree, enabling the results of this research to still be of use, albeit possibly more limited.

Literature Review Conclusions

The preceding literature review is meant to demonstrate that a significant amount of work has been done on topics related to quality and process improvement. In addition, numerous other methodologies, such as business process reengineering, ISO, and Capability Maturity Models, have been examined in a similar fashion. Upon deeper analysis, it is possible to theorize that, although the specific tools and techniques behind the various quality methodologies may vary, the underlying constructs are quite similar (Sila and Ebrahimpour, 2002). This hypothesis extends into the assessment methodologies, such that a high correlation is expected between the assessment scores produced by any of the major tools. What the existing literature does not address is providing actionable insight into how improvement efforts can be prioritized at a process level. Most of the theoretical and almost all of the empirical research has been done from an enterprise perspective. These results have verified and validated the enterprise applications of the methodologies. However, for the average manager attempting to improve a local process, there is little relevant research. In addition, all of the previous quality methodologies assume, either implicitly or explicitly, that implementing enhanced quality is a beneficial task. There has been almost no effort to quantify the return on investment associated with the overall quality implementation, much less local implementations. This clearly demonstrates the central point of this dissertation: research into modeling the expected ROI from local implementations would fill an existing gap within the body of literature. 
Chapter 6: Research Hypothesis

As discussed in Section 4.0, this research has focused on identifying, validating, and modeling the parameters associated with a Lean Six Sigma implementation effort for a knowledge-intensive process. The intended outcome of the research is to use these parameters to assess their relative impact on the improvement effort and determine a predictive model for future process improvement efforts in a similar environment. Thus, the central hypothesis of the research is stated below:

There exists a set of parameters from which it is possible to classify any given process in a knowledge-intensive environment that can be used to create a model for the prediction of expected return on investment resulting from a future Lean Six Sigma implementation event.

The above hypothesis is designed to conform to best practices in qualitative and quantitative research design (Miller and Salkind, 2002). Key aspects of a well-defined hypothesis are that it has an empirical referent, it is specific, it is falsifiable, and it is testable. In this instance, the first criterion is satisfied because no value judgments are involved; the entire hypothesis is based upon observable and measurable facts. The specificity criterion is satisfied because the hypothesis is directed at a specific methodology (Lean Six Sigma) in a specific environment (knowledge-intensive) with specific outcome criteria (return on investment or cycle-time savings). The proposed hypothesis is also falsifiable and testable. The falsifiability is evident in that a corresponding null hypothesis would be that it is not possible to predict expected return on investment or cycle time from a set of parameters associated with a knowledge-intensive process. 
The testability is evident in that the identified parameters will be regressed against the dependent variables (return on investment or cycle-time savings), allowing for the use of two-tailed hypothesis tests to test for significance. It should be noted that the hypothesis is constructed in such a way that the research will focus on the modeling of a general process, irrespective of size. As a result, it may be necessary to transform some of the variables to accommodate this scaling issue. Another important caveat is that this research is somewhat exploratory. As a result, the expected predictive statistical accuracy may be somewhat less than that expected of other cost models. The COSYSMO model provides an adequate baseline due to its similarity of breadth and the fact that systems engineering work is a major portion of the processes being studied. COSYSMO strives to predict within 30% of the actual totals 50% of the time (Valerdi, 2005). The model proposed as part of this research will strive for a similar level of accuracy. Additional fidelity will be incorporated based upon the amount of historical data available and the resulting analysis. However, since this is the first time that a predictive model has been attempted for quality management purposes, a predictive power of within 50% of the actual totals 50% of the time should be considered acceptable. Further refinement will be left for future research. In addition to the central hypothesis, the research question suggests two sub-hypotheses. These are as follows:

Sub-Hypothesis #1: There exists a set of constructs that will include the significant parameters associated with a process that can be used to predict the results of Lean Six Sigma implementation efforts.

Sub-Hypothesis #2: There will exist a strong correlation (greater than 0.70) between the expected return on investment and percentage cycle-time savings for Lean Six Sigma implementation events. 
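Both quantitative criteria discussed in this chapter, the COSYSMO-style PRED accuracy target and the 0.70 correlation posited in Sub-Hypothesis #2, are straightforward to compute once project data exist. The following sketch is illustrative only; the function names and sample data are hypothetical and do not come from the dissertation's dataset:

```python
import numpy as np

def pred(actual, predicted, tolerance=0.30):
    """Fraction of predictions within `tolerance` (relative error) of the
    actual values. The COSYSMO target cited above corresponds to
    pred(..., tolerance=0.30) >= 0.50."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.mean(np.abs(predicted - actual) / actual <= tolerance))

def sh2_check(roi, cycle_time_savings, threshold=0.70):
    """Pearson correlation between ROI and cycle-time savings;
    SH2 posits a correlation greater than `threshold`."""
    r = float(np.corrcoef(roi, cycle_time_savings)[0, 1])
    return r, r > threshold

# Hypothetical data: four projects' actual vs. predicted savings.
print(pred([100, 100, 100, 100], [120, 135, 95, 100]))  # 0.75
```

The relaxed criterion of "within 50% of actuals 50% of the time" would simply use `tolerance=0.50` with the same acceptance threshold of 0.50.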
Sub-Hypothesis #1 (“SH1”) will be discussed more extensively in the following sections. The purpose of SH1 is to verify that the parameters hypothesized to impact the LSS implementation effort are grounded in the previous empirical research and have a solid basis for inclusion in the model. As a result, it is expected that the identified parameters will be able to be grouped into logically consistent constructs to ensure internal validity. It will be necessary to accept this hypothesis prior to the creation of a predictive model. The currently hypothesized model has two potential dependent variables: return on investment and cycle-time savings. Since the costs associated with knowledge-intensive enterprises result primarily from the activities of people, rather than capital investment or materials, savings in time can be directly translated into cost savings. As a result, a correlation is expected between these two metrics. Unfortunately, as discussed in Section 10, the available historical data did not enable the testing of this sub-hypothesis. Despite this, the acceptance of Sub-Hypothesis #2 (“SH2”) is not a prerequisite for the success of the predictive model, leaving SH2 to future research efforts. The methodologies used to test and assess the central and sub-hypotheses are discussed in detail in Section 8.0. The next section discusses the proposed constructs in more detail and identifies the parameters that are expected to be included in the predictive model.

Chapter 7: Research Methodology

As discussed above, the ultimate purpose of this research is to provide insight into how a portfolio of Lean Six Sigma implementation efforts could be managed in a more effective manner. Thus, the research focused on identifying the critical characteristics and parameters of existing processes that could be used to assess an individual process and provide insight into the potential impact of a process improvement event. 
Once the identified parameters were validated, the research used previously completed projects as an empirical baseline to determine a statistically significant model. As stated previously, the output of the research is unique in that it attempts to move towards a predictive model for quality improvement, something that has been lacking within the academic literature for over twenty years. Furthermore, it provides guidance with respect to implementations in a truly knowledge-intensive environment. The nature of the research question and the central hypothesis required a methodology that is able to accommodate both qualitative and quantitative analyses. Qualitative methods are defined as those methods that seek to assess behavior and actions through research observation and participation, largely focused around activities such as grounded theory, participant observation, and case studies (Miller and Salkind, 2002). In contrast, quantitative methods use statistics and modeling to answer specific research hypotheses. The qualitative perspective was used to assess the existing literature for the identification of potential constructs and variables. Quantitative methods were used to develop a survey to assess and validate the identified constructs, and to create a predictive model based upon the significant factors. As a result, the proposed methodology can be considered a mixed-methods approach, but with a quantitative emphasis. In order to accomplish the proposed research, a seven-step plan was executed. The steps and their outputs are summarized in the table below. Note that steps 1 through 3 were completed as part of the qualification exams and steps 4 through 7 were completed as part of the final dissertation work. 
Table 6: Research Methodology Steps

Step 1: Review literature and obtain expert opinion to identify potential constructs and factors. Output: List of potential constructs and factors for consideration.
Step 2: Develop and pilot a survey to use expert opinion to validate the identified constructs and items. Output: Pilot survey.
Step 3: Statistically analyze the survey results to validate the identified constructs and items. Output: Validation of constructs and items through factor analysis techniques.
Step 4: Create a data collection template for the coding of historical implementation project data. Output: Data collection instrument for historical data.
Step 5: Collect historical project data. Output: Coded historical data.
Step 6: Statistically analyze the data to create a predictive model of implementation efforts. Output: Significant statistical model.
Step 7: Interpret and validate the predictive model. Output: Completed dissertation.

Step 1 relies upon the qualitative analysis of the existing literature to identify the characteristics and factors that could be associated with a general process and used to predict the outcome of process improvement events. In addition, the results from the literature were corroborated through discussions with experts knowledgeable in process improvement. These experts were primarily from the government, aerospace, defense, academic, or consulting arenas. The output of this stage was a list of potential constructs and items for consideration in the predictive model. Steps 2 and 3 involve the creation and deployment of a survey instrument to test expert opinion on the identified constructs and factors from Step 1. The survey was designed in accordance with U.S. Navy protocol and piloted at the U.S. Navy Lean Six Sigma conference in May 2007. The survey was completed by over 200 certified Navy Black Belts, all of whom are recognized for their process improvement expertise. The results have been tabulated and analyzed using factor analysis. 
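The factor-analytic validation in Step 3 can be illustrated in miniature. The sketch below simulates Likert-style survey responses driven by two latent constructs and counts the factors retained under the Kaiser criterion (eigenvalues of the item correlation matrix greater than one); all data, dimensions, and names are hypothetical and do not reflect the actual Navy survey, which was analyzed in SPSS and EQS (Appendices C and D):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical number of survey respondents

# Simulate two latent constructs, each driving three items (plus noise).
f1 = rng.normal(size=n)
f2 = rng.normal(size=n)
items = np.column_stack([
    f1 + 0.3 * rng.normal(size=n),
    f1 + 0.3 * rng.normal(size=n),
    f1 + 0.3 * rng.normal(size=n),
    f2 + 0.3 * rng.normal(size=n),
    f2 + 0.3 * rng.normal(size=n),
    f2 + 0.3 * rng.normal(size=n),
])

# Eigendecomposition of the item correlation matrix; the Kaiser
# criterion retains factors with eigenvalue > 1.
corr = np.corrcoef(items, rowvar=False)
eigenvalues = np.linalg.eigvalsh(corr)[::-1]
n_factors = int(np.sum(eigenvalues > 1.0))
print(n_factors)  # 2 (matching the two simulated constructs)
```

The sketch conveys only the factor-retention logic; full exploratory factor analysis adds extraction methods, rotation, and loading interpretation, as reported in the appendices.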
This analysis provided the means of validating the hypothesized factors, consequently providing a theoretical foundation for the desired predictive model. Steps 4 and 5 focus on the creation of a standardized data collection instrument (“DCI”) for the collection of historical project implementation data. This involved the final determination of parameters to be collected and the development of item scales, as required. The historical data were then collected and coded in accordance with the standardized DCI. The output of these stages provided the data required for the development and statistical analysis of a predictive model. The statistical analysis in Step 6 involved the use of generalized linear regression to determine an appropriate model that can predict, with statistical significance, the expected ROI or cycle-time savings to be achieved from a given process after an LSS implementation effort. The final stage, Step 7, interpreted and validated the model to ensure that it is theoretically consistent. The completion of the outlined methodology resulted in high-quality research that fills a critical gap in the quality improvement literature and provides distinctive value to both academia and industry. Each step is discussed in more detail below.

Chapter 8: Identification of Constructs and Factors

Based upon the surveyed literature discussed above, it is evident that the various quality improvement methodologies are inextricably intertwined. Each of them may have slightly different emphases or implications, but the general conclusions and recommendations tend to converge. As a result, although there are numerous candidate variables that may be selected as predictive variables for process improvement efforts, these can be grouped into the general categories highlighted by the literature review. Each category can then be operationalized to test for relevance and impact on the performance of process improvement events. 
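The generalized linear regression referenced in Step 6 of the preceding chapter reduces, with an identity link and Gaussian errors, to ordinary least squares. The sketch below uses simulated data and invented predictor names (team experience, process complexity) purely to show the mechanics of fitting coefficients and computing the two-tailed test statistics; it is not the dissertation's actual model:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 60  # hypothetical number of coded historical projects

# Invented predictors and response (percentage cycle-time savings).
experience = rng.uniform(1, 5, n)
complexity = rng.uniform(1, 5, n)
savings = 5 + 4 * experience - 2 * complexity + rng.normal(0, 2, n)

# Ordinary least squares (a GLM with identity link and Gaussian errors).
X = np.column_stack([np.ones(n), experience, complexity])
beta, *_ = np.linalg.lstsq(X, savings, rcond=None)

# Two-tailed t-statistics for coefficient significance.
residuals = savings - X @ beta
sigma2 = residuals @ residuals / (n - X.shape[1])
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
t_stats = beta / se
print(beta, t_stats)
```

In practice, a non-identity link or non-Gaussian error family would be fitted with iteratively reweighted least squares rather than the closed-form solution shown here.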
An enhanced understanding of the nature of the variables that operationalize each construct, their relative weights to each other, and their causal relationships will enable researchers and practitioners to understand the trade-offs implied in selecting one improvement event over another. Although the topics of constructs and factors are common in management theory and social science, their use in engineering research is somewhat limited. As a result, a brief discussion of these topics is provided in order to clarify their use. Table 7, below, provides definitions for common terms used throughout the remaining sections. In general academic literature, a “construct” is defined as a theoretical concept that has been invented or adopted for scientific purposes and that cannot be measured directly (Kerlinger and Lee, 1999). The purpose of a construct is to represent a higher-level, theoretical entity. A common example from the social sciences is an addiction construct. Addiction cannot be measured directly, but must instead be inferred from characteristics that are symptomatic of addiction.

Table 7: Definitions for Management Theory Terms

Construct: A construct is defined by Webster’s dictionary as “a theoretical entity.” For the purposes of this research, it is used as a latent variable that cannot be measured directly. The constructs specified in this proposal represent groupings of variables.

Operationalize: Operationalization refers to the assignment to theoretical constructs of variables (or parameters) that can be measured directly. The variables may either be numeric or based on a qualitative scale. It is through the operationalization of the constructs that it is possible to statistically analyze the significance of the hypothesized constructs.

Social Capital: Social capital refers to the interactions between people and the ability of these interactions to allow information to flow and influence actions. 
Human Capital: Human capital refers to individual characteristics associated with people. Attributes such as tenure, experience, and education are considered types of human capital.

In this research, the constructs are those entities representing the impact of people, activities, customers, and information flows on a process. These constructs were identified based upon the literature review and were hypothesized to be a valid representation of how a process could be modeled. A more detailed discussion of each is provided in the following sections. The identified constructs are high-level and cannot be measured directly to assess the impact of each construct on the process. As a result, it is necessary to define more discrete elements that are representative of the constructs, but able to be measured. Since constructs cannot be measured directly, they must be operationalized with variables. These variables are characteristics of a specific construct that can be measured directly and are used as a means of evaluating the underlying construct. In fact, one definition of a variable is a “measure of a construct” (Kerlinger and Lee, 1999). After a construct has been theorized, the researcher will attempt to identify variables that are valid measures of the construct. This process is called “operationalization.” The identified variables are tested and either validated or rejected as measures of the construct. In this context, variables are often referred to as items in the academic literature. Research involving constructs often references item development and testing. Thus, the words variable and item are used interchangeably. The variables identified in this research are enumerated in the following sections. They were identified based upon the literature and have been hypothesized to adequately represent the constructs. 
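A standard check that a group of items coherently measures a single construct is an internal-consistency statistic such as Cronbach's alpha (the reliability analysis in Appendix C reports statistics of this kind). A minimal computation, assuming the input matrix holds one respondent per row and one item per column:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance
    of the total score), where k is the number of items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variance
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scores
    return float((k / (k - 1)) * (1 - item_vars.sum() / total_var))
```

Values of roughly 0.70 or higher are conventionally taken as acceptable internal consistency, though the cutoff is a rule of thumb rather than a formal test.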
The goal of the research is to identify a parsimonious group of variables that can describe the theoretical construct in a significant manner. Additional variables could be added to increase the fidelity of the construct measurement. However, similar to linear regression, the goal is to have the maximum amount of explanatory power with the minimum number of items (Miller and Salkind, 2002). The identified variables are theorized to be adequate representations of the theorized constructs. The factor analysis discussed in the sections below tends to support this hypothesis, to the extent that specific constructs were clearly identified and present. Some confusion may arise from the use of the word “factor.” The technique of factor analysis refers to the statistical method of data reduction that identifies constructs (also known as “latent variables”) based upon measured data (Gould, 1996). In this sense, the factors are those constructs that emerge from the data analysis. In addition, when describing the operationalized components of the constructs, the words “factor” and “variable” are often used interchangeably. This is based upon previous work with linear regression in which a factor can be a variable within the equation. To help clarify, the following discussion will use the word “construct” when referring to any latent variables that are only measured through other variables. Similarly, the word “variable” or “item” will be used to refer to measurable parameters that define a construct. Any unavoidable use of the word factor will be in the sense of a variable (i.e., the group of process variables may be called process factors). Given the empirical research already completed to validate the underlying constructs (Sila and Ebrahimpour, 2002; Black and Porter, 1996; Curkovic, et al., 2000; Ahire, et al. 
1996; Anderson, et al., 1995; Saraph, et al., 1989), it is proposed that five general categories based upon these constructs be used to test the ability to predict implementation results. Each construct will be operationalized into measurable components that can be individually tested for significance. A given implementation project can then be scored according to the identified variables, and the resulting cycle-time and cost savings improvements can be used as dependent variables to test for predictive value. The proposed methodology for analysis and validation of the model is discussed in the following sections. Based upon the surveyed literature, the proposed constructs, representing general categories of variables, are as follows: People, Process, Customer/Product, Information, and Environment. Since this research is focused on local processes rather than an enterprise perspective, issues such as enterprise leadership commitment and strategic planning are deemed constant for a single organization. Furthermore, many previous researchers and assessment tools have included a business results construct. Due to the limitations of associating results with an intermediary process, this construct has been treated as an enterprise topic. An important assumption is that an improvement in a local process will not be detrimental to enterprise results. Although this assumption is not always true, it is management’s responsibility to ensure that the overall enterprise is not sub-optimized for the sake of a local process. A diagram of the proposed hypothetical model is shown in Figure 3 below.

Figure 3: Visual Representation of Constructs

The proposed constructs are identified in numerous literature sources as either potential constructs impacting the implementation of quality efforts or empirically validated constructs (Sila and Ebrahimpour, 2002). 
In addition, most of the assessment tools such as Baldrige, Shingo, LESAT, GLESAT, and CMM have similar groups of variables representing the corresponding constructs. In many instances, the exact construct definition may vary slightly, depending upon the source of the study. For example, the human resource management construct has been identified as employee involvement, employee fulfillment, human resource management, training, employee satisfaction, and teaming. However, the overall groupings of the important constructs have remained consistent for the past decade and continue to be reflected in the most recent literature and major assessment tools (Sila and Ebrahimpour, 2002). Another interesting point is the inter-disciplinary nature of the identified constructs. The constructs theorized to model quality efforts have numerous overlaps with other disciplines. As shown in Figure 1 (Dean and Bowen, 1994), above, there is a distinct overlap in the literature domains between management theory and quality management. It is enlightening to note that, in some instances, the prescriptions of the two bodies of literature are very similar. However, there are also instances in which the prescriptive recommendations of the two domains differ considerably (Dean and Bowen, 1994). In particular, an important distinction is that the prescriptions for process management and for information and analysis vary significantly. This suggests that there is ambiguity with respect to how the quality of a given process or organization can best be improved. As a result, a detailed examination of the various items that constitute each construct would help clarify the existing literature. The nature of the hypothesized model also suggests that the resulting constructs and factors will be inter-disciplinary as well.
One of the outcomes of the research should be clarification on this issue. The following sub-sections detail each of the identified constructs. Note that the sub-sections attempt to operationalize the constructs based upon the available literature. The actual validation of the constructs and the items will be based upon the statistical methodology discussed in later sections. In addition, relevant scale development issues are addressed, with most of the scale development relying upon previously published literature and interviews with practitioners. In addition to the literature review, extensive interviews and discussions were held with numerous process improvement experts to obtain their insight into the critical variables associated with Lean Six Sigma implementation efforts. Most of these experts were industry executives and practitioners who had experience in some type of process improvement methodology and implementation. Their inputs were invaluable in helping focus the initial research thrust and determine the topics of interest to industry. They also provided insight into variables that had gone unstudied in previous research. Although the interviews helped shape the initial literature review and construct operationalization, the research literature was the primary basis for the final decisions as to the factors and variables of interest. The interviewees should be acknowledged for their input, and are listed in Appendix A. Their time and efforts are greatly appreciated.

Category #1: Personnel

A fundamental aspect of quality improvement efforts in all of the established methodologies is the focus on the people working in the process. This is vividly illustrated in the empirical literature by the importance placed upon constructs associated with human resource management, employee empowerment, employee fulfillment, employee satisfaction, training, and teamwork (Sila and Ebrahimpour, 2002).
Furthermore, the recent Lean literature that attempts to move beyond a manufacturing environment recognizes the centrality of people in determining how well a process functions (Oppenheim, 2004; Swaney, 2005). As a result, to adequately characterize a process, it will be necessary to have metrics that measure the impact that people have on the process. The proposed personnel factors are discussed below and summarized in Table 8. The management theory body of literature provides insight into how the impact of people on a process can be characterized and measured. In particular, researchers in the social capital and human capital literatures have conducted in-depth studies of how people can impact their process and vice versa. What these bodies of literature do not address is how the variables may drive the results of process improvement efforts. Despite extensive literature searches, no research was found examining the relationship between social capital and process improvement. In addition, these bodies of literature tend to be descriptive rather than prescriptive. Although social capital techniques have been used to enhance process quality (Mann and Dhallin, 2003a; Mann and Dhallin, 2003b), there has been very little research into how specific social or human capital characteristics can be expected to impact quality improvement efforts. The source of social capital lies in the structure and content of relationships (Adler and Kwon, 2002). Numerous studies have shown the impact that social capital has in organizations (Hansen, 1999; Cross, 2002; Burt, 2000; Krackhardt, 1993). Specifically, Tsai and Ghoshal (1998) demonstrated the role of intra-firm networks in value creation. Despite the acknowledged importance of social capital, to date very little research has been done into how to build and manage social capital (Dhallin, 2005).
Some case studies and research have demonstrated that the simultaneous management of social capital, organizational processes, and technology can greatly improve enterprise performance (Mann and Dhallin, 2003a; Mann and Dhallin, 2005). Other case studies have shown the importance of social capital in enabling information transfer and information searching (Uzzi, 1997). Social capital and its impact is an intangible asset with an amorphous characterization. However, it is reasonable to suggest that the degree of social capital within a given process can be at least obliquely measured from the number of people in the process, the degree of communication present within the process, and the quality of the communications. This is consistent with the organizational theory literature on network theory (Burt, 2000). The number of people working within a given process will have a major impact on process improvement efforts. From the social capital perspective traditionally employed by network theorists, the addition of each person to a population of n adds n-1 potential interactions within the process. Thus, larger populations will have higher potential coordination costs for a given process. Further, basic change management recognizes that additional people create greater organizational inertia, suggesting that change will occur more slowly but may potentially enable higher returns from improvement events. It should be noted that the assumption of n-1 potential interactions for each additional person is a simplifying assumption. In reality, the addition of each person multiplies the number of potential communication topologies by 2^(n-1). However, the network theorists measure only individual interactions rather than complete network topologies. Accordingly, this research will use the same methodology as that defined in the social science literature in order to compare the results consistently. Another major issue associated with characterizing a process is the associated cost.
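The simplifying assumption can be made concrete with a short sketch contrasting the dyad count used by network theorists with the number of complete network topologies. The team sizes below are arbitrary examples, not data from the study.

```python
# Pairwise interactions (dyads) versus full network topologies for a
# team of n people, illustrating the simplifying assumption discussed
# above.

def dyads(n):
    """Possible pairwise ties among n people: n*(n-1)/2."""
    return n * (n - 1) // 2

def topologies(n):
    """Possible undirected communication networks on n people: each of
    the n*(n-1)/2 ties is either present or absent."""
    return 2 ** dyads(n)

# Adding the n-th person creates n-1 new potential dyads, but it
# multiplies the number of potential topologies by 2**(n-1).
growth = [(n, n - 1, dyads(n), topologies(n)) for n in range(2, 6)]
# e.g. for n=5: 4 new dyads, 10 dyads in total, 1024 possible topologies
```

The contrast shows why the dyad convention is preferred for comparison: dyads grow quadratically with team size, while complete topologies grow exponentially and quickly become impractical to enumerate.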
However, in a knowledge-intensive process, the value created in the process is, by definition, created by the people working in the process and the tools that they use. Thus, the number of people also acts as a proxy for the process cost. A subtle distinction is the number of distinct people involved in a process versus the number of full-time equivalents ("FTE"). The higher the ratio of FTE to number of distinct people, the more dedicated the people are to a given process. The implication is that a higher percentage of FTE would yield a more focused process, assuming the competencies are equal.

Table 8: Personnel Factors

  Factor             Definition
  # of people        # of distinct individuals who work on the process
  FTE                # of full-time equivalents who work on the process
  Degree of teaming  The amount of interaction and coordination required to complete tasks
  # of roles         The number of functional roles who work on the process
  Tenure             The average tenure of the workforce in the process
  Diversity          The diversity of the personnel working on the process
  Multitasking       The amount of workload fragmentation an average user has on an average day

Likewise, teamwork is considered a fundamental aspect of all quality systems (Dean and Bowen, 1994; Liker, 2004). Although the exact measure of teamwork may vary, a qualitative evaluation of the amount of teaming will provide a basic metric for measuring the coordination costs associated with completing the process. It is expected that this factor will exhibit behavior such that both low and high teaming are indicative of opportunities for improvement. This is similar to some of the network theory research and mirrors network metrics such as centrality and betweenness (Krackhardt, 1993). Human capital factors can also be considered important when assessing the impact that personnel have on the process. Human capital refers to characteristics and traits possessed by the individual workers.
There are a host of human capital factors that could be considered variables for measuring the capability of a process. Common factors include sex, age, social background, educational background, socioeconomic status, and work history. Although these factors may impact the manner in which people perform their work, for the purposes of this research we are concerned with those factors that could reasonably be expected to have a significant impact on the capabilities of a process. As a result, many of the more sociological factors are not of interest for this project. Where possible, the sociological factors will be included to test for significance, but there will not be a focused effort to capture all factors that are not hypothesized to be significant. A candidate factor that has been shown to impact work is the average tenure of the workforce. The longer someone has been working in a job, the more likely they are to have become an expert on that particular facet. The underlying assumption is that there is a strong correlation between competency and tenure. However, this assumption may be weak. In addition, it can be hypothesized that longer-tenured workers may be less inclined to change their methods. As a result, the tenure variable may be hypothesized to be either significant or insignificant. Diversity in the workforce is generally acknowledged as a positive. In this instance, diversity is defined as a holistic measure of perceived sociological characteristics (e.g., age, race, sex). The rationale is the concept of homophily, which implies that a lack of diversity does not allow individuals to learn new information outside of a small, well-known realm (Brass, 1994). Thus, it is expected that lower diversity would also be suggestive of improvement opportunities, because people may not be aware of alternative methods.
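The diversity factor is treated here as a qualitative, holistic measure. For readers who want a concrete operationalization, one measure widely used in the organizational demography literature is Blau's heterogeneity index, 1 − Σ p_i². The sketch below applies it to hypothetical category labels; it is not the scale used in the study.

```python
from collections import Counter

def blau_index(categories):
    """Blau's heterogeneity index: 1 - sum(p_i**2), where p_i is the
    share of members in category i.  0 means a perfectly homogeneous
    group; the value approaches 1 as membership spreads over many
    categories."""
    counts = Counter(categories)
    n = sum(counts.values())
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

# Hypothetical work groups, coded on one attribute (here, functional
# background; any perceived sociological characteristic could be used):
homogeneous = ["eng"] * 8
mixed = ["eng", "eng", "eng", "eng", "ops", "ops", "qa", "qa"]

# homogeneous -> 0.0; mixed -> 1 - (0.5**2 + 0.25**2 + 0.25**2) = 0.625
```

Under the homophily argument above, a low index value would flag a group that may be unaware of alternative methods, i.e., a candidate for improvement opportunities.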
Similarly, exposure to additional functional roles will sometimes enable personnel to understand the process in a holistic manner. Therefore, the number of functional roles represented in the process could be a significant predictor of improvement opportunities. It is interesting to note that the quality literature is almost silent on the possibility that prescriptive recommendations could differ depending upon the role of the individual within a process (Swaney, 2005). In addition, it is realistic to expect that a process containing multiple functional roles may operate at a higher efficiency due to the synergies commonly found in integrated product teams. The final personnel factor for consideration is the degree of multitasking with which the average user must cope. Highly multitasked personnel are burdened with higher switching costs and lower efficiency (Oppenheim, 2004). This has also been demonstrated empirically in a variety of research projects. Although the exact relationship between work fragmentation and efficiency is debated, the overall finding is that the higher the fragmentation, the lower the efficiency. Conversely, an alternative interpretation is that multitasking could increase efficiency if the tasks are related. Thus, the manner in which this variable will impact the process is indeterminate based upon the literature.

Category #2: Process

The nature of the process and the activities involved within the process are of obvious importance for this research. Characterizing the process will involve understanding the type of work that is accomplished, the capabilities of the process, and the way in which the work is completed. A summarization of the variables proposed for operationalizing the process construct is shown in Table 9.
Table 9: Process Factors

  Factor                 Definition
  # of activities        Number of discrete activities identified in the value stream
  Activity Complexity    Perceived average complexity of the identified activities
  Training Requirements  The average amount of training required for an entry-level engineer to work in the process
  Process Type           The enterprise value stream to which the process is attached
  Process Maturity       The maturity of the process, as defined by capability and age
  Documentation          The degree to which useful documentation is available in the process to assist the worker in completing tasks
  Homogeneity            The variation of activities that are completed throughout the process
  Parallel or serial     The degree to which activities proceed in parallel or serial fashion (expressed as a variable between 0 and 1)

Using the definition of a process given in EIA 632, one of the key components of a process is the set of tasks required to complete the work. Thus, adequately describing the tasks and their relationships will be a major driver in being able to generally model any given process. From this perspective, the first item of interest is the number of activities involved in the process to be improved. It is recognized that the number of activities may vary based upon the level of analysis associated with an improvement event. Therefore, a consistent metric is the number of activities identified in a value stream. This will enable a normalization of the activity counting. It is hypothesized that the greater the number of activities, the more likely there are opportunities for improvement. Associated with the number of activities is the complexity of the activities. The activity complexity is an inherently qualitative assessment, but it will enable a calibration of the model to accept varying types of work.
The scale used to assess activity complexity is based upon the COSYSMO model (Valerdi, 2005) due to its rigorous development in a technical environment and general industry acceptance. It should also be noted that the activity complexity will assist in mitigating any issues associated with identifying processes with different numbers of activities, because the complexity dimension should serve as a means of weighting the impact of the activities. As another means to assess activity complexity, the amount of training required to work in the process will also be measured. This is a quantifiable variable that will be easier to assess. However, it is expected to be highly correlated with activity complexity and could present issues of multi-collinearity in the statistical analysis. This topic is discussed in more detail in later sections. Process type will be used as a control variable to determine whether the enterprise value stream is correlated with the type of work being completed. Similar to the previous discussion of functional roles, there is very little discussion in the literature that attempts to prescribe varying solutions based upon the nature of the work being done. In fact, most of the literature (Hallam, 2003) argues that the implementation methods should be the same regardless of the nature of the work. However, it is reasonable to expect that the rate of improvement may depend upon the nature of the value stream. Although the data used for the analysis will not encompass every type of potential value stream, it will be valuable to note whether there are obvious statistical differences. Despite this, it is expected that activity complexity and number of activities will dominate the impact of process type. Process maturity can be hypothesized to impact improvement efforts. The more mature the process, the more likely it is to be understood. Moreover, a mature process would also be more amenable to process standardization.
The process maturity was assessed based upon the COSYSMO definition (Valerdi, 2005), for the same reason as the complexity definition. In addition, this scale incorporates general CMM standards and should serve as a common measure of maturity. An important component of maturity is the degree of documentation associated with a process. This is included in the CMM process maturity models and has been identified by numerous experts as a good metric. Furthermore, documentation can be rather easily assessed and is understood by practitioners. As such, it is included as a predictor for the process construct. The more documentation that has been created, the more likely the process operates at an efficient level. An important caveat is that the documentation must be of high enough quality to be useful to the process participants. Thus, an assessment of documentation must include an inherently qualitative analysis of the documentation as well. It should also be noted that this variable may be dominated by the process maturity variable previously discussed. Another form of uncertainty is the variety of the activities that must be completed. It is quite easy to conceive of a process in which the activities vary based upon customer or product specifics. In these instances, the process could be difficult to optimize due to the degree of complexity, training, and control required to produce the outputs. In addition, more varied activities will most likely involve more interfaces, providing opportunities for increased improvement but also increased problems (Rechtin, 1991). Therefore, it is likely that processes with a wide range of activities are better candidates for improvement efforts. The final process factor to be assessed is whether activities are completed in a sequential or parallel fashion. Sequential activities are more likely to be well understood and optimized for the inputs and outputs associated with each activity.
In addition, parallel activities will increase the number of interfaces throughout the process and increase the likelihood of sub-optimization. Thus, a metric will be developed to assess the degree to which the process is serial and/or parallel. This will most likely be some type of index based upon the value stream mapping activities. This metric should be straightforward to assess and should provide insight into how the interfaces between activities impact the process efficiency. Although this variable assesses the process, its inclusion is strongly suggested by some of the organizational network theory (Mann and Dhallin, 2003b).

Category #3: Customer/Product

A customer-focused organization is considered the primary characteristic of a quality organization (Dean and Bowen, 1994; Liker, 2004; Womack and Jones, 2003). In addition, most of the previously conducted empirical studies on quality constructs have included some measure of customer results or customer satisfaction (Sila and Ebrahimpour, 2002). However, these constructs have largely been treated as dependent variables that are heavily influenced by the other quality constructs. For the purposes of this research, the focus is slightly different. Rather than treating the customer or business results construct and its measurable items as dependent variables, it is hypothesized that the nature of the customer has a direct impact on the process and will serve as a predictor of the success of the process improvement implementation effort. The nature of a knowledge-intensive organization adds an additional layer of complexity in assessing the impact that customers or products may have on any given process. This results from the fact that the products of the process are intangible and difficult to measure (Mann and Dhallin, 2003b). Therefore, the products delivered and the customers tend to be much more highly correlated.
Due to the difficulty of separating factors associated with product and customer, they are treated as a single construct for the purposes of this research. This is not inconsistent with the previous empirical research in TQM (Curkovic, et al., 2000). The customer/product factors are shown in Table 10, below. The characteristics of the customer base can be hypothesized to be quite important to the manner in which the process and associated personnel operate. The number and concentration of customers will impact the way in which work is done throughout the process. In some ways, this is analogous to supplier concentration. As shown by the ubiquitous Porter model, concentration of suppliers can greatly impact an industry's market dynamics (Porter, 1981). Similarly, it can be hypothesized that the more dependent a process is on a single customer, the more impact that customer will have on the process and any effort to improve it. Furthermore, the greater the number of customers, the higher the coordination and customization costs that can be expected and the greater the likelihood of inefficiencies. In addition, the higher the concentration of product in a few customers, the more likely the process can be standardized. It is likely that these variables will exhibit behavior similar to that predicted for the Degree of Teaming factor, in which extremely high and low concentrations and numbers of customers represent opportunities for improvement. A major feature of any process is the inherent uncertainty associated with the work. In a knowledge-intensive environment, this could have a very significant impact due to the lack of standardization and the importance of the innovation process. Therefore, the better the understanding that the worker has of the requirements, the more likely the process is to be efficient. This item has been shown to be significant in other modeling applications
(Valerdi, 2005).

Table 10: Customer/Product Factors

  Factor                         Definition
  Number of Customers            The number of customers associated with a given process
  Customer Concentration         The degree to which a single customer dominates production
  Requirements Understanding     Degree to which the requirements are accurately communicated and understood
  Level of Service Requirements  Expected level of service to be delivered to the client after the product has been produced
  Type of Customer               Whether the work product is for an internal or external customer
  Product Criticality            Importance of the product or service to the customer
  Accountability Requirements    Degree to which individuals are held accountable for the products
  Diversity of Customers         The observed diversity of the customers (not the products)
  Customer Involvement           The degree to which the customers are involved in the value creation

The nature of the product should also impact the efficiency of the process. A product that requires a high level of service will most likely have a higher degree of documentation and better understanding. Given a fixed cost, this should result in a better process, because higher productivity is required. Similarly, the type of customer can be hypothesized to impact the process. Organizations tend to focus on external customers and often do not understand the interfaces associated with internal customers (Mann and Dhallin, 2003b). It is also reasonable to expect that internal organizational interfaces may be unknown or ignored, suggesting significant opportunities for improvement when studied. The diversity of the customer base may also impact the performance of the process. It could be hypothesized that the more diverse the customer base, the more effort is required by the process to accommodate customer demands.
Previous literature examining network effects among firms in an industry has also shown information transfer benefits that result from numerous firms working together (Powell, 1989). As a result, it is somewhat difficult to identify the exact impact that this factor may have on the process improvement effort. The criticality of, and accountability associated with, a work product can also be expected to impact process improvement efforts. Highly critical and visible products are more likely to have higher quality due to the added attention from the customer. Conversely, this may also lead to inefficiencies due to additional reviews and iterations that do not necessarily add value but are used only to minimize potential risk. Accountability requirements have a similar impact, in that individuals who are held accountable for output have a vested interest in ensuring higher quality. Although it is hypothesized that these two variables will be significant in predicting improvement opportunities, it is difficult to hypothesize the direction of the impact they will have. Finally, customer involvement is likely to be a strong predictor of improvement opportunities. Highly involved customers will most likely encourage the process to focus on value-added products only, reducing opportunities for waste. There is ample evidence in the Lean literature that firms exhibit superior performance when they closely involve their customers (Murman, et al., 2002). Taken at a more micro level, it is reasonable to suggest that the benefit of this involvement continues at all levels of an organization. However, if the process already has high customer involvement, then the benefits of that involvement may have already been realized, suggesting that the percentage return on the process improvement effort may not be as significant as for a process in which the customer is not as heavily involved.
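The dissertation does not fix a scale for the customer concentration variable. One natural quantitative analogue, in the spirit of the supplier-concentration comparison above, is a Herfindahl-style sum of squared customer shares; the sketch below is illustrative only, with hypothetical demand figures.

```python
# Herfindahl-style concentration sketch for the Customer Concentration
# factor.  The customer demand figures are hypothetical examples, not
# data from the study.

def concentration_index(volumes):
    """Sum of squared customer shares.  1.0 means a single customer;
    the value approaches 1/N as demand spreads evenly over N
    customers."""
    total = sum(volumes)
    return sum((v / total) ** 2 for v in volumes)

# Hypothetical annual demand, by customer, for two processes:
dominated = [90, 5, 3, 2]     # one customer dominates production
dispersed = [25, 25, 25, 25]  # demand evenly spread over four customers

# dominated -> 0.8138; dispersed -> 0.25
```

Under the hypotheses above, both extremes are informative: a value near 1.0 suggests a process that could be standardized around its dominant customer, while a low value suggests coordination and customization costs across many customers.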
Category #4: Information Processing

Information infrastructure is a critical element in all quality methodologies. This construct evaluates the ability of the organization to gather, use, and process information and to review performance. However, the quality management literature does not fully operationalize the construct, relying upon the use of statistical tools and structured methodologies as proxies. Researchers who have evaluated the causal relationships between the various identified quality constructs (Anderson, et al., 1995; Curkovic, et al., 2000) have determined that the information processing and analysis construct is the foundation upon which the other quality constructs depend. Thus, the prescriptions for this construct have a significant impact on the implementation of the other quality constructs. This has led several critics of quality management to describe it as nothing more than scientific management revisited (Spencer, 1994). Management theory topics that overlap with this construct include decision-making and information processes. However, the similarity of prescriptions between the two fields is quite mixed. Quality management emphasizes the need to collect and process as much data as possible in order to facilitate data-based decision-making. Some streams of management theory agree with the notion of linking information processes and organizational performance (Galbraith, 1977). The reliance on information processing and utilization conforms to rational models of decision-making (Scott, 2003). Other theories of information processing are less sanguine about the importance of information processing and analysis. The common assumption of bounded rationality (March and Simon, 1958) suggests that people can only assimilate and process a limited, finite amount of information.
Institutional isomorphism, defined as the concept that companies often adopt traits and characteristics because their peers have adopted them, suggests that people may analyze information in order to gain legitimacy rather than to make decisions (DiMaggio and Powell, 1983). Furthermore, information analysis may be used to justify predetermined conclusions (Pfeffer, 1981). In addition, the rational model of decision-making has been shown to exhibit lower performance in situations of high task uncertainty (Scott, 2003). Upon comparing the prescriptions for information processing and analysis advocated by quality management and management theory, it appears that there is an inherent gap. The management literature offers few prescriptive solutions for this construct (Eisenhardt and Zbaracki, 1992). Conversely, the quality management literature does not fully operationalize the construct, relying upon the use of statistical tools and structured methodologies as proxies. This is most likely because most previous empirical studies of the construct were performed in a manufacturing environment. Thus, an analysis of how information processing occurs and facilitates quality management efforts at all levels of the enterprise would enhance both the understanding of quality management efforts and management theory. Despite this, it is imperative that the information requirements and communication channels be captured in order to assess the opportunities for process improvement. As Oppenheim (2004) notes, Lean engineering is largely based on the proper flow of information. Table 11 lists the variables that can be used to assess the information infrastructure of a process. The degree of automation of a process is a measure of how standardized the process is. In a knowledge-intensive environment, automation is likely to be incorporated through the automated flow of information and knowledge sharing.
This often occurs through information systems and may be exhibited through actions such as automated design mechanisms or modeling from historical data. The higher the degree of automation, the more efficient the process is likely to be. This relies upon the simplifying assumption that the automation is well designed; however, for the purposes of this research, the assessment of the automation itself is beyond the scope of the effort. This hypothesis corresponds to the knowledge-value-added methodologies developed by Housel (Rodgers and Housel, 2003). A correlated factor is the importance of various tools in completing the job. If the workers are highly reliant upon knowledge management systems or other tools, it may be expected that tools are well embedded in the process, implying more standardization and consistency.

Table 11: Information Factors

  Factor                      Definition
  Automation                  The percentage of the work that is automated through information systems
  Importance of Tools         The degree to which personnel rely upon tools to complete their tasks
  Communication requirements  The quantity of communication required to satisfactorily accomplish a task
  Communication quality       The average quality of communication between individuals
  Embeddedness                The degree to which information is available from sources outside of the process
  Availability of Feedback    The degree to which information is provided to previous steps in the process to help in future decision making
  Availability of Controls    The degree to which controls are built into the process steps to ensure quality

Communication can be expected to play a critical role in information sharing. Management theorists commonly accept the assumption of bounded rationality (March and Simon, 1958) and suggest that people can only assimilate and process a limited, finite amount of information. As a result, people can overcome or aggravate this limitation through communication with others.
Thus, communication is fundamental for knowledge transfer and value creation. This is also consistent with much of the knowledge management literature (Prusak, 1997). However, too much communication can also be detrimental, leading to increased overhead costs from coordination and non-value-added activities or, in the extreme, confusion and errors. Therefore, it is likely that processes with a high degree of communication requirements will have opportunities for waste elimination (Oppenheim, 2003). A caveat to the above is the quality of the communication. Not only should the quantity of communication be measured, but the quality of the communications should also be assessed. In some instances, it may be appropriate to have large quantities of communication if the corresponding quality of information is significant. However, in instances in which the communication quality is not high, there is likely to be significant potential for improvement efforts. The measurement scale for the quality of communication will have to be a qualitative assessment based upon pre-defined criteria; the exact scale is yet to be determined. It is expected that both of these factors will be significant in predicting the potential of a process improvement effort. Another factor for consideration in the information factors category is the concept of embeddedness. This is borrowed from network theory and is defined as the types of ties that individuals have to different parts of an organization or process (Uzzi, 1997). In this instance, embeddedness will be used to define the ability for information from sources outside the process to be accessed, searched, and incorporated. This is consistent with previous empirical research (Uzzi, 1997). Another important dimension of information sharing and analysis will be the use of feedback and controls within the process. Feedback loops are an important concept in complex systems (Senge, 2006).
Without feedback, there is a limited ability to control the system. Similarly, much of the quality control literature emphasizes the necessity of feedback for process control (Deming, 1986). Thus, it is reasonable to expect that well-performing systems will exhibit some type of built-in feedback loop. A similar concept could also be theorized at a more micro level. Whereas feedback mechanisms provide information about the output of the process, controls are implemented into individual process steps in order to ensure that each step of the process produces quality output. This is most easily observed in the poka-yoke techniques deployed by traditional quality control applications. It is reasonable to expect that the presence of controls could also impact knowledge-intensive organizations. Although this variable is of interest, its impact may be difficult to measure due to the nature of the data set under study.

Category #5: Environment Factors

The final category for consideration includes all variables related to the external environment of the process. Most of these are relatively straightforward and can be considered control variables. The list of identified environmental variables is shown in Table 12.

Table 12: Environmental Factors

Factor                 Definition
# of Organizations     Number of distinct organizations that report through different chains
Managerial oversight   The degree of oversight that management exerts on an ongoing basis for a given process
# of Locations         The number of independent sites involved with the process
Date                   The date the implementation was completed
Personnel              The specific personnel leading the effort

The number of distinct organizations involved is a measure of the organizational interfaces that must be mediated within the process. In this instance, an organization is defined as an entity that reports through a different chain of command.
It can be hypothesized that an increase in the number of organizations involved with a given process will increase the number of organizational interfaces and, hence, the complexity of the process (Scott, 2003). Previous literature and experience have shown that many problems associated with a process can be strongly correlated to a breakdown in organizational interfaces (Mann and Dhallin, 2003a; Mann and Dhallin, 2003b; Podolny, 1997; Burt, 2000). As a result, it would be expected that this is a significant variable in determining the potential for process improvement efforts. From a corporate governance perspective, the degree of managerial oversight can be expected to focus productivity efforts. This variable is somewhat analogous to the accountability variables discussed in the Customer/Product section. However, it is slightly different in that it examines the implementation of general management practices. This can be considered an environmental variable because it would be expected to be constant across the major organizational units, rather than vary with a given process. It would be expected that a better-managed process would exhibit better performance, thus providing a lower return on process improvement efforts. The location of production often has a significant impact in many endeavors. The difference can often be attributed to personnel, supplier, customer, or other factors. In this research, the organization under study has several satellite facilities, some in different states. As a result, a given process may involve more than one location. Since the involvement of these other locations is often mandated by Department of Defense orders or Congressional mandate, the presence of multiple locations should be considered an environmental variable for the purposes of the research project. It is expected that multiple sites will decrease efficiency due to increased coordination costs.
The date of the implementation effort and the personnel conducting the implementation event are included to test for significance. If these are significant, it may suggest that improvement results are indicative of a cultural transformation. For a similar reason, the lead implementers of the improvement events will be tracked in order to assess for a correlation between project lead and project results. This may suggest that a separate mechanism within the implementation event, rather than the process, is driving the results of the improvement effort. Finally, the various types of waste present in a process will be collected (Oppenheim, 2004). This will allow correlations with types of waste and process characteristics. It may also suggest pre-emptive measures that could be taken in order to improve general process performance without conducting a full implementation effort. At a minimum, it will be useful to note how the traditional measures of waste translate into a knowledge-intensive environment.

Chapter 9: Validation of Constructs and Factors

As explained in Section 7.0, once the potential constructs of interest had been identified and theoretically operationalized, it was necessary to develop a means of empirically validating them. As discussed in the literature review, previous research has followed a similar methodology to validate the theoretical constructs for TQM (Sila and Ebrahimpour, 2002). However, the current research is significantly different because it attempts to validate the theoretically-derived constructs for a Lean Six Sigma environment. Although it is hypothesized that these models will be similar, it cannot be assumed a priori. Another constraint on the use of previous research is the heavy emphasis on manufacturing environments. Most of the previous literature is focused on applying or validating quality methodologies in a manufacturing environment. This research is specifically focused on a knowledge-intensive environment.
These environments are quite different, exhibiting different characteristics, management, value streams, etc. As a result, it is absolutely necessary to validate any hypothesized constructs to ensure that they are applicable for a knowledge-intensive environment. Given the above-stated necessity of validating hypothesized constructs, it was determined that some type of survey and analysis would be required to assess and validate the latent variables and constructs prior to the development of a predictive model. This type of research is quite familiar to management theorists and social psychologists. Within the engineering domains, however, this is a rather unique approach, since most items of interest are measurable directly. Therefore, it was necessary to draw upon non-traditional engineering statistics and methods. The chosen methodology was the development and deployment of a survey to assess the significance of the constructs and their operationalized variables. The survey was designed to be given to process improvement experts (i.e., black belts and green belts). The survey template is contained in Section 14.0 (Appendix B). This survey was designed based upon accepted best practices (Bradburn, 2004) and in accordance with previous peer-reviewed research in a similar domain (Curkovic, et al., 2000). The survey was reviewed and piloted with a small number of Navy Master Black Belts to ensure participant understanding and response. The author was asked by NAVSEA to provide the survey for the Navy Lean Six Sigma Conference in May, 2007. The survey was provided to all participants, each of whom was at least a green belt and had completed at least one process improvement project. The respondents represented all of the U.S. Navy commands, including NAVAIR, NAVSEA, and the supply depots. A total of 206 surveys were collected; of those, a total of 188 were complete.
The partially complete surveys were removed from the sample so as to not skew the results. This effort was expected to be a smaller-scale effort in which the survey could be more thoroughly piloted and assessed. However, given the number of respondents, it was decided to analyze the survey results more rigorously and determine if any statistically significant results could be obtained. Any issues associated with statistical significance and reliability could be addressed through a second round of surveys. The following sub-sections detail the analysis and results of the surveys. However, before the results can be presented, the statistical methodology used to analyze the surveys is discussed.

Factor Analysis

Factor analysis techniques are methods of data reduction in which the covariance among a set of variables is assessed in order to better understand the underlying latent variables (i.e., constructs). The term "factor analysis" can refer to many different types of specific techniques, such as exploratory factor analysis, principal component analysis, and confirmatory factor analysis. In addition, this type of analysis can generally be thought of as a subset of structural equation modeling (Kline, 2005; Byrne, 1994). Factor analysis techniques were chosen as the means of statistically analyzing the survey results because of their general applicability to this domain and their widespread acceptance for the analysis of qualitative survey data (Dunteman, 1989). The initial concept of factor analysis is to transform a set of variables into a much smaller set of uncorrelated variables that represent the same information displayed in the original set. In particular, factor analysis is an excellent technique for reducing a complex system of correlations into a more manageable model with fewer dimensions. Each variable in a data set represents a dimension.
However, when trying to identify underlying constructs, it is reasonable to expect that the variables will be related such that they can be attributed to underlying constructs (Kline, 2005; Byrne, 1994). An important caveat to factor analysis is that the identified constructs must always be interpreted to ensure that they are meaningful and based upon theory. This is consistent with other statistical methods (e.g., regression analysis) in that the results must always be interpreted rather than blindly accepted. As stated above, "factor analysis" is a generic term that encompasses both principal component analysis ("PCA") and confirmatory factor analysis ("CFA"). Although these techniques are mathematically similar and used for data reduction purposes, they employ different assumptions. CFA decomposes the variance associated with a variable into a common variance shared with other variables and a unique variance associated with a specific variable (the error term). In contrast, PCA examines the total variance of the variables. The result is similar in that both will reduce the original set of variables into a more parsimonious set of variables that can explain the same information (Collar, 2005). Despite the similarity of the two techniques, they are used for different reasons. PCA is used on data in which the relationships between the observed and latent variables are unknown or uncertain. As a result, it is used in more of an exploratory fashion, providing the researcher with guidance into how the observed variables relate to the underlying constructs. This allows a researcher to hypothesize a parsimonious model. Conversely, CFA is used in situations in which a model is hypothesized and the researcher is attempting to test the hypothesized linkages between the observed variables and underlying factors (Byrne, 1994). To accomplish data reduction into identified constructs, the variance-covariance matrix is used for the calculation.
The methodology will identify a number of principal components (or constructs), such that the first principal component accounts for as much of the variability in the data as possible. Subsequent components will account for as much of the remaining variability as possible. Initially, factor analysis is calculated such that each of the components is orthogonal to all others (Johnson, 1998). Mathematically, the first principal component is defined as y_1 = a_1'(x - u), where u is the mean vector and a_1 is chosen such that the variance of a_1'(x - u) is maximized over all vectors a_1 satisfying a_1'a_1 = 1. The maximum value of the variance of a_1'(x - u) among all such vectors is equal to λ_1, the largest eigenvalue of the variance-covariance matrix, Σ. This maximum occurs when a_1 is an eigenvector of Σ corresponding to λ_1 and satisfying a_1'a_1 = 1. In a similar manner, the second principal component is calculated as y_2 = a_2'(x - u), where a_2 is chosen such that the variance of a_2'(x - u) is maximized over all linear combinations of x that are uncorrelated with the first principal component and satisfy a_2'a_2 = 1. The maximum value of this variance is equal to λ_2, the second largest eigenvalue of Σ, and occurs when a_2 is an eigenvector of Σ corresponding to λ_2 and satisfying a_2'a_2 = 1. In this manner, the computation of principal components continues for all calculated eigenvalues. However, the explanatory value of each of the subsequent principal components will continue to decrease. Thus, it is up to the researcher to identify the number of components that should be used to explain the total variance observed. Two methods are provided for this. The first is a SCREE plot in which each of the eigenvalues is plotted on a graph. When the points of the graph tend to level off, the eigenvalues are most likely measuring random noise and can be disregarded.
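The derivation above can be sketched numerically. The snippet below (using simulated data in place of the survey responses) extracts the principal components from a sample variance-covariance matrix and confirms that the variance of the first component equals the largest eigenvalue:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(188, 5))              # simulated responses, not the survey data
mu = X.mean(axis=0)                        # the mean vector u

S = np.cov(X, rowvar=False)                # variance-covariance matrix, Sigma
eigvals, eigvecs = np.linalg.eigh(S)       # eigendecomposition of Sigma
order = np.argsort(eigvals)[::-1]          # sort eigenvalues, largest first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

a1 = eigvecs[:, 0]                         # eigenvector satisfying a1'a1 = 1
y1 = (X - mu) @ a1                         # first principal component, y1 = a1'(x - u)

# var(y1) equals lambda_1, the largest eigenvalue of Sigma
print(np.isclose(np.var(y1, ddof=1), eigvals[0]))
```

Each subsequent column of `eigvecs` gives the next component, with variance equal to the next-largest eigenvalue, mirroring the sequential maximization described above.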
Another heuristic for determining which eigenvalues should be included is provided by the empirical social science literature. In this instance, the guidelines suggest that all eigenvalues greater than 1.0 should be included; eigenvalues less than this should be discarded. This heuristic is generally accepted in the academic literature and will be used for the purposes of this research (Curkovic, et al., 2000). However, for comparison, SCREE plots have been included for each of the identified principal components and can be compared. An analysis suggested that the 1.0 limit was adequate for the data set collected (Byrne, 1994). Once the principal components have been identified, the factor loadings of the individual variables are also examined. The factor loadings are usually interpreted as regression coefficients for each of the variables on a particular construct. An example of a factor loading matrix is shown in Table 15. The factor loadings demonstrate the degree to which each of the identified variables explains an identified construct. When a variable loads on a factor, it means that part of its variance is explained by a specific construct; the higher the loading, the more that the variable can be isolated to that construct. Therefore, a desired outcome is that the hypothesized variables for a specific construct all load on that construct. Note that each construct will have multiple loadings. However, it is expected that those variables that are theoretically hypothesized to operationalize the construct will load primarily on that construct (Kline, 2005). Each variable may potentially load on more than one construct. In these instances, the measured variable is not an optimum measure, but can still be used as a means of assessing the underlying construct. Split loadings may also be indicative of a high degree of correlation between the latent constructs, perhaps even indicating a higher-order construct relating them.
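The eigenvalue-greater-than-1.0 heuristic is easy to mechanize. The sketch below applies it to the eigenvalues of a correlation matrix computed from simulated 1-to-5 survey responses; the data are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.integers(1, 6, size=(188, 6)).astype(float)   # simulated 1-5 Likert items
R = np.corrcoef(X, rowvar=False)                      # correlation matrix of items
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]        # eigenvalues, largest first

retained = eigvals[eigvals > 1.0]   # the >1.0 heuristic: keep these components
# the eigenvalues of a p x p correlation matrix always sum to p
print(np.isclose(eigvals.sum(), 6.0))
```

The same eigenvalue vector, plotted against component number, is the SCREE plot referenced in the text.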
The generally accepted value for determining whether a variable has loaded significantly on a construct is 0.40 (either positive or negative). Values greater than 0.40 in magnitude are indicative of significant loadings, and the variable should be assessed as being a predictor of the construct (Kline, 2005). Along with the identification of principal components and their associated loadings is the concept of rotation. As in general matrix manipulation, in which a system can be rotated for simplification, principal components can be rotated in order to maximize the explained variance. Convention states that the principal components will be orthogonal to each other (also known as the varimax technique). However, this assumption may or may not be true. In instances in which it is not true, an oblique rotation may be done in which the assumption is relaxed. This relaxation of the orthogonality constraint is appropriate when it can be hypothesized that the constructs are not independent of each other. This is appropriate for this research because it can logically be expected that people, activities, customers, and information flows can all impact each other. As a result, the validation of the constructs used an oblique rotation, resulting in a statistically significant result.

Analysis of Survey Results

The survey results were collected and transcribed into a spreadsheet in order to facilitate data analysis. This section presents the analysis of the survey results and discusses the implications of that analysis. The descriptive statistics associated with the survey results are shown in Table 13. Note that of the 206 surveys, 188 were actually used for analysis, because not all surveys were completed. In order to be conservative and ensure the integrity of the data analysis, those surveys with missing data were removed from the analysis results. A brief analysis of the summary statistics shows that there is significant range within the surveys.
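For illustration, an orthogonal (varimax) rotation can be implemented directly in a few lines; the function below is a standard numerical sketch rather than the statistical-package procedure used in the study. The check at the end confirms that an orthogonal rotation redistributes variance across components without changing each variable's communality:

```python
import numpy as np

def varimax(L, tol=1e-6, max_iter=100):
    """Varimax rotation of a p x k loading matrix L via the SVD algorithm."""
    p, k = L.shape
    R = np.eye(k)          # accumulated orthogonal rotation
    d = 0.0
    for _ in range(max_iter):
        Lr = L @ R
        u, s, vt = np.linalg.svd(
            L.T @ (Lr ** 3 - (1.0 / p) * Lr @ np.diag((Lr ** 2).sum(axis=0)))
        )
        R = u @ vt
        if s.sum() < d * (1 + tol):   # converged: criterion no longer improving
            break
        d = s.sum()
    return L @ R

L = np.random.default_rng(2).normal(size=(8, 2))   # hypothetical unrotated loadings
Lr = varimax(L)
# communalities (row sums of squared loadings) are preserved by orthogonal rotation
print(np.allclose((L ** 2).sum(axis=1), (Lr ** 2).sum(axis=1)))
```

An oblique rotation such as oblimin relaxes the orthogonality of `R`, which is why it can better fit correlated constructs.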
Each variable had a minimum of 1 and a maximum of 5. Furthermore, the mean and standard deviation varied significantly from variable to variable.

Table 13: Descriptive Statistics (N = 188 for each variable; valid N listwise = 188)

Variable        Minimum  Maximum  Mean  Std. Deviation
NoPeople        1        5        3.77  1.012
Multitasking    1        5        3.44  1.009
Teaming         1        5        4.02  0.936
NoRoles         1        5        3.62  1.003
Training        1        5        3.38  1.081
WorkforceDiv    1        5        3.07  1.124
NoActivities    1        5        3.44  1.060
Complexity      1        5        3.68  1.078
WorkType        1        5        2.88  1.160
Documentation   1        5        3.44  1.056
DegreeSerial    1        5        3.52  1.037
Requirements    1        5        4.57  0.821
CustSupport     1        5        3.78  1.086
Criticality     1        5        4.09  1.103
NoCusts         1        5        3.32  1.154
CustDiversity   1        5        2.99  1.237
CustInvolve     1        5        4.08  1.084
Automation      1        5        2.71  1.091
Tooluse         1        5        2.72  1.103
CommQuant       1        5        3.83  1.081
CommQual        1        5        3.92  0.981
NoOrgs          1        5        3.61  1.101

The first step in analyzing the survey results was to verify the reliability of the results. In this context, "reliability" refers to the degree to which the scores are free from random measurement errors. Reliability is measured as one minus the proportion of observed variance that is due to random error. Since there are different types of errors, there are multiple means of calculating reliability. The most common method of measurement is Cronbach's alpha, which measures internal-consistency reliability. This metric assesses the degree to which the responses to variables hypothesized as being associated with a single construct are consistent. There is no required level for the reliability coefficient; however, scores around 0.70 are generally considered adequate (Kline, 2005). The correlation matrices between the variables for each of the hypothesized constructs and the calculations of Cronbach's alpha for each are shown in Section 14.0 (Appendix C).
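Cronbach's alpha can be computed directly from the item scores. The helper below is a generic sketch using simulated data rather than the survey responses; the sanity check shows that two perfectly consistent "items" yield an alpha of 1.0:

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) array of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()     # sum of individual item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars / total_var)

x = np.random.default_rng(3).integers(1, 6, size=100).astype(float)
duplicated = np.column_stack([x, x])   # two perfectly consistent items
print(np.isclose(cronbach_alpha(duplicated), 1.0))
```

Dropping one item at a time and recomputing alpha, as described below for each construct, amounts to calling this helper on each column subset.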
A review of the correlation matrices determined that the variables within the hypothesized constructs exhibited a high degree of correlation, supporting the validity of the hypotheses. Almost all of the correlations were flagged at a significance level of p = 0.05 or better. In addition, the computed reliability coefficients were close to the generally expected threshold of 0.70. Although somewhat lower than desired in traditional survey research, the numbers are consistent with previous quality studies and very close to or above the 0.70 threshold. It should also be noted that a reliability coefficient was calculated for each construct in which each of the variables was eliminated in turn and the coefficient re-computed. The results indicated that the best calculation of reliability was for instances in which all of the variables were included. Thus, it is concluded that the survey exhibited an adequate degree of reliability.

Table 14: Reliability Computations

Construct     Cronbach's Alpha
People        0.67
Activity      0.73
Customer      0.70
Information   0.67

After the reliability of the survey was ensured, the next step was to use factor analytic techniques to validate the hypothesized model and variables. This was done in a two-step process. The first was an exploratory factor analysis computed on the variables associated with each of the proposed constructs. This was done in order to verify that the variables loaded primarily upon a single construct. The second step was a full CFA in which the entire model was validated. The exploratory factor analysis results are shown in Section 15.0 (Appendix D). The exploratory factor analysis for both the people and activity constructs exhibited expected behavior. Each showed a single eigenvalue greater than 1.0, indicating that the variables loaded on a single construct.
However, the exploratory factor analysis for the customer and information constructs was somewhat surprising in that each showed two eigenvalues over 1.0, indicating that multiple constructs were present in the analysis. In order to assess this, additional models were developed in which the variables associated with multiple eigenvalues were removed. In each instance, the removal of a variable (customer diversity for the customer construct and communication quality for the information construct) reduced the number of significant eigenvalues to one. This indicates that the customer diversity and communication quality variables may not be correctly specified in the model. Despite this, the variables were retained and analyzed in the CFA, but particular attention was paid to ensuring their significance. The final step was the completion of a PCA. The complete output is shown in Section 17.0 (Appendix E). For summary purposes, the final constructs and factor loadings are shown in Table 15 below. Two different rotations were performed, varimax and oblique. As can be seen from the detailed output, the oblique rotation seemed to fit the data better. Also, an oblique rotation can be theoretically supported because it is reasonable to expect that the people, activity, customer, and information aspects of a general process are not completely independent of each other. Thus, the orthogonality constraint can be relaxed. The analysis of the PCA factor loadings was quite interesting. The analysis suggests that two factors that were not initially hypothesized appear to have been identified in the factor analysis. However, there does appear to be clear evidence of people, activity, and customer constructs (Components 1, 4, and 2, respectively).
Table 15: Exploratory Factor Analysis Loadings

                 Factors (Identified Constructs)
Variable         1      2      3      4      5      6
NoPeople        .557  -.016  -.031   .035  -.036   .063
Multitasking    .128  -.189   .530  -.173  -.175   .108
Teaming         .380   .186  -.015  -.087  -.496   .175
NoRoles         .416   .029   .003  -.311  -.188  -.083
Training        .750   .097  -.030  -.169   .023  -.077
WorkforceDiv    .507   .028   .254   .119  -.177  -.150
NoActivities   -.030  -.051  -.013  -.833  -.045  -.052
Complexity      .046   .076  -.035  -.873   .104  -.011
WorkType        .243  -.013   .031  -.355   .046  -.421
Documentation  -.148   .140  -.148  -.148  -.151  -.779
DegreeSerial   -.094   .006   .143  -.566  -.265  -.060
Requirements   -.023   .686  -.112  -.034  -.332   .048
CustSupport     .328   .668  -.125   .152   .084  -.340
Criticality     .093   .737   .168  -.244   .122   .093
NoCusts         .038   .408   .650  -.196   .204   .103
CustDiversity  -.138   .097   .816   .063  -.110  -.033
CustInvolve    -.297   .558   .247   .118  -.279  -.078
Automation      .144  -.116   .516  -.059   .075  -.565
Tooluse         .277  -.243   .447   .023   .034  -.508
CommQuant       .146  -.078   .151  -.023  -.739   .026
CommQual       -.033   .106  -.074  -.103  -.719  -.238

Extraction Method: Principal Component Analysis. Rotation Method: Oblimin with Kaiser Normalization. Rotation converged in 14 iterations.

An analysis of the other components suggests that they have something to do with the information construct, since each of them shows significant loadings on two variables associated with that construct. Component 3 appears to combine the degree of automation and IT tool use with multitasking, the number of customers, and the customer diversity. This suggests that this construct is concerned with the coordination of information with respect to different customer demands. Similarly, component 5 demonstrates significant loadings on the two variables associated with communication and the people variable associated with teaming. Thus, this construct appears to be associated with the coordination of information between people.
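The interpretation step can be mechanized with the 0.40 rule of thumb. The sketch below uses a few rows excerpted from Table 15 and assigns each variable to the component with the largest absolute loading, also flagging any component loaded above |0.40|:

```python
import numpy as np

# rows excerpted from Table 15 (oblimin-rotated loadings on six components)
loadings = {
    "NoPeople":   [0.557, -0.016, -0.031,  0.035, -0.036,  0.063],
    "Training":   [0.750,  0.097, -0.030, -0.169,  0.023, -0.077],
    "Complexity": [0.046,  0.076, -0.035, -0.873,  0.104, -0.011],
    "CommQuant":  [0.146, -0.078,  0.151, -0.023, -0.739,  0.026],
}

for var, row in loadings.items():
    row = np.array(row)
    salient = list(np.flatnonzero(np.abs(row) > 0.40) + 1)   # components above |0.40|
    dominant = int(np.argmax(np.abs(row))) + 1               # strongest component
    print(f"{var}: dominant component {dominant}, salient {salient}")
```

Applied to these rows, the rule reproduces the assignments discussed in the text: NoPeople and Training on component 1 (people), Complexity on component 4 (activity), and CommQuant on component 5 (communication between people).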
Finally, component 6 exhibits loadings associated with automation and IT tool use along with the degree of documentation and the type of work being performed. This suggests that this construct is coordinating the flow of information between activities. Based upon the above analysis, it appears that the originally hypothesized information construct can actually be separated into three sub-constructs: customer coordination, people coordination, and activity coordination. It is also interesting to note that the two constructs that explain the greatest degree of the model are the people and customer constructs. This corresponds well with the literature review. Furthermore, the analysis clearly shows the importance of information within the process, albeit the analysis suggests separate mechanisms for people, customer, and task coordination. Overall, the survey results strongly support the hypothesized model. The significance of the model appears to justify the identified constructs and variables; thus, the goal of validating the hypothetical model appears to be achieved. This allows for the creation of a predictive model with a theoretical basis. Additional survey data would further refine the model and the associated validations. However, the validation results are quite strong based upon the significance of the identified constructs and the associated expected loading of items within the hypothesized constructs.

Chapter 10: Modeling of Variables and Factors

The final part of the research involved the modeling of Lean Six Sigma implementation efforts. Using the validated factors, historical project data was assessed, normalized, and coded for use in a generalized linear regression. The resulting statistical analysis provides a model from which predictions can be made about the effectiveness of future LSS implementation efforts for similar environments.
The design and use of the predictive model is similar in form and functionality to other models, such as COSYSMO or COCOMO, which enjoy widespread usage in engineering applications. The model uses standardized parameters to input information and produce a statistically valid result that provides guidance for decision-making. The development of the predictive model builds off the work discussed above in order to provide a theoretical basis for the model parameters. The initial research question hypothesized 36 potential variables that could be used to model a generalized process. These variables were identified from the existing literature and expert opinion. Moreover, the underlying model and constructs that these variables represent have been validated through the use of factor analysis. The result is a group of validated variables that is hypothesized to be able to predict the outcome of process improvement initiatives based upon process characteristics. The next step of the modeling process analyzed the specified variables, with appropriate modifications, on a historical data set to determine if there was a statistically significant relationship with the outcome of process improvement efforts. The variables that were validated using the factor analysis results were evaluated, and appropriate scales were developed to assess the variables in the context of the historical data. Furthermore, the validated variables were reviewed, and those with suspected multicollinearity were assessed in an effort to ensure the generalized linear regression would be parsimonious, a key objective for good models. The result of this effort was the construction of a survey that can be used to assess individual projects. For this model, the dependent variable is the return on investment, expressed in dollars. The original intent of the research was to also assess the percentage cycle-time savings.
However, the available historical data did not consistently capture this metric, so dollar savings served as the primary dependent variable. The final form of the model identified a group of independent variables and their relationship to a dependent variable. The independent variables consist of the identified variables that are determined to be statistically significant. These variables are weighted based upon the end result of the statistical analysis. The dependent variable is expressed in terms of return on investment. The ROI metric was used because it normalizes for differences associated with various locations and assists in the generalization of the results. It will also allow for better decision-making capabilities, as alluded to in the research question, in which a more expensive implementation effort may yield higher returns but a lower ROI. Thus, the ROI metric is highly preferred. Where possible, actual costs were collected so that an analysis of the percentage cost savings achieved could be done in the future. The database created for this analysis consisted of 241 projects. These projects included strategic value stream analyses, rapid improvement events, LSS black belt projects, and Do Its. However, upon review, not all of the projects were process-centric. Hence, an initial review was completed to identify those projects that did not meet the intent of the research. In particular, the Workforce Balancing Rapid Improvement Events ("WBRIEs") were eliminated from the sample set. WBRIEs are conducted when a member of a department leaves the organization and are used to re-distribute the workload so as to eliminate the need to hire a replacement. Although this is reported as a cost saving to the enterprise, the result of a WBRIE may or may not be an actual process improvement. Thus, for consistency, the WBRIEs were removed from the data set used for the model. Future research may investigate these efforts.
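The final model form, coded process factors predicting ROI, can be sketched with ordinary least squares (a special case of the generalized linear model with an identity link). All data below are simulated, and the variable count is illustrative rather than the study's final specification:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 96                                              # projects retained after screening
X = rng.integers(1, 6, size=(n, 4)).astype(float)   # coded process factors (1-5 scales)
beta = np.array([0.8, -0.3, 0.5, 0.1])              # assumed "true" factor weights
roi = 1.5 + X @ beta + rng.normal(scale=0.25, size=n)   # simulated ROI outcomes

A = np.column_stack([np.ones(n), X])                # design matrix with intercept
coef, *_ = np.linalg.lstsq(A, roi, rcond=None)      # fitted weights
predicted = A @ coef                                # fitted ROI for each project
print(coef.shape)
```

The fitted `coef` vector plays the role of the weighted independent variables described above: given a new project's coded factors, the model produces a predicted ROI to guide the selection of improvement efforts.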
After the initial assessment, 96 total projects were identified as being process-centric and having enough background data to support the required coding efforts. The coding is discussed in detail below.

Coding of Data

Prior to the creation of the generalized linear regression model and its analysis, it was necessary to review each of the hypothesized variables and develop a scale on which they could be normalized and compared. In addition, best practices in modeling require that the model be as parsimonious as possible; i.e., maximize predictive power with the fewest number of variables. Consequently, the hypothesized variables were reviewed in the context of the factor analysis. Those variables that had significant overlap or were perceived to have weaker significance were reviewed in detail. Where possible, variables were combined or eliminated. This is discussed in more detail below. Once the set of variables with which the model would be created was identified, a scale was developed for each of the variables. Where possible, actual and objective numbers were used to evaluate and code the variable. However, because the coding was applied to historical data, many of the variables were assessed subjectively. Real-time coding of the data would enable more precise numbers to be generated. The purpose of the research was to develop an initial model with the assumption that future research would improve its fidelity. The random error of the subjective assessment was minimized by creating well-defined criteria for assessment such that a reviewer could consistently complete the normalization. Each of the variables that were identified as candidates for the generalized linear regression, and their resulting scales, is discussed in detail below.
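The normalization just described, in which well-defined criteria let any reviewer code a project consistently, can be sketched as a simple lookup. The variable names and levels below mirror the scales developed later in this chapter, but the function itself is a hypothetical illustration:

```python
# Illustrative normalization criteria: each subjective variable gets a
# fixed set of ordinal codes so different reviewers code consistently.
SCALES = {
    "documentation": {"Low": 0, "Nominal": 1, "High": 2},
    "teaming": {"Low": 0, "Nominal": 1, "High": 2},
}

def code_project(raw: dict) -> dict:
    """Map a project's raw qualitative assessments to ordinal codes."""
    coded = {}
    for var, level in raw.items():
        scale = SCALES[var]
        if level not in scale:
            raise ValueError(f"{level!r} is not a defined level for {var}")
        coded[var] = scale[level]
    return coded

coded = code_project({"documentation": "Low", "teaming": "Nominal"})
# coded == {"documentation": 0, "teaming": 1}
```

Rejecting undefined levels outright is one way to enforce the "well-defined criteria" requirement: a reviewer cannot silently invent a rating outside the agreed scale.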
As discussed in Table 8, the literature suggested several potential variables that could model the impact of personnel in a process: the number of people who work in the process, the number of full-time equivalents associated with the process, the degree of teaming, the number of roles necessary to complete the work, the diversity of the workforce, and the amount of multitasking present within the process. Nevertheless, once the survey data had been collected and analyzed, the factor analysis suggested that the personnel construct could be explained primarily by the number of people, the number of roles, and the diversity of the workforce. The factor analysis additionally suggested that the amount of training required to complete the work was also part of the personnel construct. From a linear regression perspective, it is appropriate that the number of people and FTE be aggregated into a single metric, especially since FTE is calculated from the number of people involved. Failure to do so would raise severe multi-collinearity concerns. The impacts of multitasking and teaming were observed most significantly on the information constructs. Upon deeper analysis, this can be explained by the fact that multitasking and teaming are primarily associated with information flows and activity completion. Hence, although a person will be impacted by these two factors, they are more appropriately associated with a different construct. The initial inclination for modeling the number of people associated with a process was to use the exact number of people identified during the value stream mapping. However, upon a review of the various projects, it became evident that it was not useful to try to arrive at a precise number, because the number of people for a given process would change on a regular basis depending upon availability, type of work, etc.
Rather than a precise number, it was decided that a good metric for the number of people involved with a given process was the size of the organization responsible for the process execution. This also had the added benefit of simplifying the coding for each project, enabling a clear and consistent criterion on which the projects could be compared. Another variable considered to be of potential importance was the number of organizations involved with the process, which was expected to be strongly correlated with the number of people. The impact of multiple organizations and the requisite coordination was hypothesized to affect the efficiency of the process. Moreover, the number of organizations involved with a process was quite easy to capture from the historical project data.

Table 16: Scales and Explanations for Number of People
  Team: Small group, not more than 15 people. Can be drawn from various organizations.
  Department: Organizational entity responsible for execution. Less than 50 people.
  Division: More than one department; can consist of several hundred people.

The factors involving the number of roles within a given process and the workforce diversity were analyzed in detail. Various literatures suggest that increased diversity yields higher performance. This is also supported by the human capital and social capital literatures. Unfortunately, it was not possible to analyze specific aspects of human capital diversity because the historical project data did not record specific details on each participant in the process. Furthermore, including personnel information would have prompted serious human resources concerns due to a lack of anonymity. Consequently, the data analysis focused on the diversity of the roles that individuals play within a given process. Role diversity is a very interesting topic in the quality literature due to the dearth of research that has been conducted about it.
Quality management and quality control prescriptions are generally treated as universal applications. As a result, there is extremely limited insight into how the results of a quality implementation may differ based upon the type of work done. As discussed above, the higher the number of roles in a given process, the more interfaces will exist and the greater the opportunity for identifying potential cost savings. In addition, the type of roles could reasonably be expected to impact the potential for cost savings. Consequently, based upon the available data, the following categories of work were identified, as shown in Table 17, below.

Table 17: Scales and Explanations for Role Types
  Administrative: Performs administrative tasks requiring little specialized knowledge.
  Contracts: Focus on the oversight and development of contractual relationships.
  Logistics: Non-engineering tasks associated with Navy logistics functions.
  Engineering: All types of engineering, including design, test, verification, etc.
  Management: Supervisory activities that are not directly tied to a customer product.
  Maintenance: Maintenance of machinery and equipment necessary for ongoing support.

Using the above categories, the number of roles observed in each of the projects was counted. This was represented as a cardinal number in the model. Note that the number of roles observed was independent of the number of people who performed each role. Although Training was identified as a significant component in the factor model, upon review it was determined that inclusion of both the Training variable and the Role Type variable had strong potential for multi-collinearity. Several of the experts queried expressed the sentiment that Training requirements were defined by the Role Type; thus, no additional predictive power would be gained by modeling each variable separately. The second set of variables analyzed and assessed was those that addressed the Customer construct.
All of the quality literature clearly indicated this was a significant construct. Its importance was reinforced during the assessment of the factor analysis results. The factor analysis identified several variables that could be expected to measure various aspects of customer involvement and product results. These were the level of requirements understanding, the expected level of support for the product, the criticality of the product to the customer, the number of customers, and the involvement of the customer with the given process. The modeling of customer requirements, product support, and product criticality has previously been accomplished in the COCOMO family of models. Of particular interest is the modeling of these parameters within COSYSMO. Given the scale development used to validate those models and their wide acceptance in academia and industry, those scales were used to assess the project data for this model. The scales utilized for this assessment are shown in Table 18, below. The number of customers is an objective measure that counts the number of organizations that are customers of a single product. From the perspective of the research, a customer organization refers to a funding program office. The rationale is that only organizations that contribute funds will be able to influence requirements.

Table 18: Customer Construct Scales and Explanations
Requirements Understanding: The level of understanding of the system requirements by all stakeholders including systems, software, hardware, customers, team members, users, etc.
  Very Low: Poor (emergent requirements or unprecedented system)
  Low: Minimal (many undefined areas)
  Nominal: Reasonable (some undefined areas)
  High: Strong (few undefined areas)
  Very High: Full understanding of requirements; familiar system
Level of Service: The difficulty of satisfying the ensemble of level of service requirements, such as security, safety, response time, interoperability, maintainability, the “ilities,” etc.
  Very Low: Simple
  Low: Low difficulty, coupling
  Nominal: Moderately complex, coupled
  High: Difficult, coupled KPPs
  Very High: Very complex, tightly coupled KPPs
Criticality of Requirements: The criticality of satisfying the ensemble of level of service requirements.
  Very Low: Slight inconvenience
  Low: Easily recoverable losses
  Nominal: Some loss
  High: High financial loss
  Very High: Risk to warfighter

Customer involvement is the degree to which a customer is actively engaged in the given process. Because there were limited metrics available to measure customer involvement, each project was assessed as having either “Limited” or “Significant” involvement. “Limited” involvement was associated with customers that provided only funding and requirements, having little impact on the overall process and concerned only with the product. Customers with “Significant” involvement were actively engaged in all aspects of the process, often with embedded representatives who exerted direct influence. Since each process had to have a funding source, there was not an option for “No” involvement. Conversely, given the available data set, it was impossible to distinguish extremely high levels of involvement from other significant efforts. Of the hypothesized variables that could reflect the Activity construct, the factor analysis identified three significant factors: the number of activities, the complexity of activities, and the degree to which the activities are completed in parallel or in serial.
Each of these factors exhibited very strong loadings during the factor analysis and was judged by the experts who completed the surveys to be a key determinant of a project's success. Measuring the number of activities within a given process is extremely difficult because of the need to ensure that the activity count is consistent with the activity level. Since historical data could not be adequately normalized to provide the required consistency, activity complexity was used as the primary means for assessing the nature of the work. Future research conducted in real time during process improvement events will be able to capture this metric more precisely. Activity complexity is an extremely difficult metric to establish objectively. No current research provides a single means of measuring complexity. All previously developed models have measured activity complexity as a function of a subjective assessment. Because this model leverages previously created scales, activity complexity was modeled on the scale developed for COSYSMO. Other aspects of complexity, such as work type, personnel interactions, etc., will be captured by other variables and will likely be observed in the interaction effects between various variables.

Table 19: Scales and Explanations for Activity Complexity
  Low: Simple to execute; few variations or options; uncoupled; standardized; little training required.
  Nominal: Familiar; some variation and interpretation; loosely coupled; some training required.
  High: Difficult; high degree of customization and variation; highly coupled; significant training and experience required.

The degree to which process activities are conducted in parallel or in serial can have a distinct impact on the various Lean Six Sigma implementation efforts. Therefore, this was hypothesized as a variable of interest. The factor analysis confirmed this.
This concept can best be thought of in terms of process execution, primarily how the process is executed with respect to the majority of its activities. Although this could be calculated as a single percentage, doing so would add a degree of precision that is unwarranted. For instance, a single activity could be done in parallel with the rest of the process activities; the same process could then be represented either as having one parallel activity or as having all activities in parallel. A better metric was assessed to be a qualitative measure of the degree of process execution. Given that all of the quality experts were trained in the theory of constraints and the lean literature, assessing processes for parallel versus serial activities was straightforward. Based upon discussions with the various practitioners, the following scale was developed. The factor models identified three constructs associated with information sharing. These three constructs are hypothesized to be associated with the coordination of information with customers, the information shared within teams, and the coordination of activity information. The customer and activity information flows relied heavily upon the variables associated with Tool Automation and Tool Usage. In addition, the activity coordination construct also incorporated work type and documentation. Based upon the discussion above, Work Type was not included as a separate variable due to its similarity to Role Type. Documentation was expected to be of limited significance due to the expected multi-collinearity with Role Type, but was included.
Table 20: Scales and Explanations for Process Execution
  Serial: The majority of activities are serial in nature, requiring that preceding activities be fully completed.
  Mixed: The process consists of multiple serial and parallel activities.
  Parallel: The majority of activities can be conducted in parallel with at least one other activity.

Table 21: Scales and Explanations for Documentation
  Low: Minimal or no specified documentation.
  Nominal: Documentation available for standardized and general tasks.
  High: Extensive documentation available to support most potential activities.

The assessment of tool usage is focused on the degree to which the process requires the use of information technology. Similarly, Tool Automation refers to the amount of automation that is expected within the process. After reviewing the available project data, it became evident that there were limited instances in which a process had a significant amount of automation. This is reasonable considering the knowledge-intensive nature of the environment. Consequently, the focus of the information sharing construct was on the usage of IT tools to accomplish the work. Leveraging previous literature, the scale used to assess IT tool usage is shown in Table 22, below. The teaming construct identified during the factor analysis reflected the degree to which individuals are able to work cooperatively on a given process. This included the quality and quantity of the communication paths among team members. Based upon the available data, it was not possible to construct network maps like those seen in social network analysis to assess communication quality and quantity. Nonetheless, the concept of teaming and team behavior was extremely familiar to all the quality experts; thus, a subjective measure of teaming was required to assess team performance.
Table 22: Scales and Explanations for Tool Usage
  Low: No IT tools used, or limited to those commonly available (i.e., Microsoft Office).
  Nominal: Additional use of advanced capabilities of commonly available tools, or common technical IT packages (i.e., MATLAB, SPSS).
  High: Advanced toolsets, including customized IT applications, databases, and decision-support systems.

Previous studies have measured teaming with respect to culture, trust, or cohesion. Because the historical data was associated with a single organization, culture is expected to have been consistent. Furthermore, the data suggested that the impact of teaming resulted from the need for individuals in the process to work cooperatively to produce a product. As a result, the following scale was developed and tested with the quality experts as a means of measuring the degree of teaming.

Table 23: Scales and Explanations for Teaming
  Low: Limited interactions required among participants.
  Nominal: Regular interactions required to complete assigned tasks.
  High: Intensive interactions and discussions required to accomplish assigned tasks.

In addition to the variables hypothesized and tested in the factor analysis, several environmental variables were also included. These variables control for the temporal, organizational, and execution aspects of process improvement implementation efforts. A summary of all of the initial model predictor variables is shown in Table 24, below.
Table 24: Summary of Initial Linear Regression Model Predictor Variables
  Number of People; Number of Organizations; Role Type; Level of Requirements Understanding; Level of Service Requirements; Criticality of Product; Number of Customers; Customer Involvement; Complexity of Activities; Process Execution Type; Documentation; Tool Usage; Degree of Teaming; Date of Event; Event Leader

Generalized Linear Regression

Once the scales were developed, the historical project data was coded with respect to the normalization criteria. The resulting data was used to create a generalized linear regression. The statistical analysis and creation of the generalized linear regression followed standard best practices (Cook and Weisberg, 1999). This methodology employed a standard Ordinary Least Squares (“OLS”) regression, but also incorporated categorical predictor variables, which necessitated the transformation of many of the coded variables. OLS rests on four basic assumptions: (1) ample data is available, (2) no outliers exist, (3) predictor variables are not correlated, and (4) predictors are either all continuous or all discrete (Griffiths, Hill, et al., 1993). The first assumption is easily met and has been discussed above. The second assumption was assessed using a scatterplot matrix in order to identify significant outliers. In instances in which an outlier was of marginal quality, the data point was assessed for removal in order to create a better calibrated model. Once the data set had been finalized, a Box-Cox transformation was applied to determine whether any of the identified variables should be transformed. Where appropriate, the transformations were calculated and added as potential variables to the regression. The third assumption is assessed through the analysis of multi-collinearity, as discussed below. The final assumption has already been addressed in the identification of the potential factors.
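The regression machinery just described can be sketched with nothing more than a least-squares solve and a hand-rolled Box-Cox transform. This is a minimal illustration of the method, not the dissertation's actual fitting code, and the toy data is fabricated:

```python
import numpy as np

def box_cox(y, lam):
    """Box-Cox power transform; lam = 0 gives the log transform."""
    y = np.asarray(y, dtype=float)
    return np.log(y) if lam == 0 else (y**lam - 1.0) / lam

def ols_fit(X, y):
    """OLS coefficients via least squares, with an intercept column prepended."""
    X = np.column_stack([np.ones(len(X)), np.asarray(X, dtype=float)])
    beta, *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)
    return beta

# Toy data: a response driven exactly by one coded predictor.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 + 3.0 * x
beta = ols_fit(x.reshape(-1, 1), y)   # recovers intercept 2 and slope 3
```

In practice a Box-Cox search would scan candidate values of lam and keep the one maximizing the profile likelihood; here only the transform itself is shown.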
The general model form is expected to be as follows: E(ROI) = f(identified variables). The exact nature of the function was determined based upon an analysis of the data. However, it is expected that a general linear regression will be possible, resulting in a model similar to the following:

Equation 3: E(ROI) = β₀ + β₁x₁ + β₂x₂ + … + βₙxₙ

where x₁, x₂, …, xₙ represent variables used to characterize the process. In addition, the higher-level effects (x₁², x₂², …, xₙ²) and interaction effects (x₁x₂, x₁x₃, etc.) will be examined for significance. The final model form may be significantly different than that proposed; that decision will have to be based upon the analyzed data. Because no previous research has identified a predictive model for quality implementation efforts, a p-value of 0.10 was considered sufficient. Variables that exhibited p-values between 0.05 and 0.10 were analyzed carefully and considered for inclusion in the model, given the uncertainty associated with a modeling effort that has not previously been conducted. The overall model was evaluated based upon a standard F-test and tested for significance at a p-value of 0.10. The model was analyzed iteratively using backwards and forwards elimination. In this process, variables are removed and their impact on the model significance is assessed. The goal of the backwards elimination is to create a model with a high degree of significance, but in a parsimonious manner. Additionally, the model was checked to confirm homoscedasticity, or the presence of constant variance. If homoscedasticity is not present, the data must be re-weighted or transformed in order to meet this fundamental assumption of linear regression. The Cook's distance and leverages of the individual data points were considered in order to validate the assumptions associated with residual error.
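The backwards elimination loop described above can be sketched as follows. Here `p_values` stands in for a hypothetical routine that refits the regression on the remaining variables and returns their p-values; in this illustration it is mocked with fixed numbers:

```python
def backward_eliminate(variables, p_values, alpha=0.10):
    """Repeatedly drop the least significant variable until all pass alpha.

    `p_values` is a callable: given the current variable list, it refits
    the model and returns a {variable: p-value} mapping.
    """
    current = list(variables)
    while current:
        pvals = p_values(current)
        worst = max(current, key=lambda v: pvals[v])
        if pvals[worst] <= alpha:
            break  # every remaining variable is significant at alpha
        current.remove(worst)
    return current

# Mocked p-values; a real implementation would refit the OLS on each pass,
# since dropping a variable changes the remaining variables' p-values.
fixed = {"teaming": 0.02, "documentation": 0.40, "tool_usage": 0.08}
kept = backward_eliminate(list(fixed), lambda vs: {v: fixed[v] for v in vs})
# kept == ["teaming", "tool_usage"]
```

Note how the 0.10 threshold from the text keeps `tool_usage` at p = 0.08, a variable a conventional 0.05 cutoff would have discarded.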
The presence of curvature was tested using Tukey's test. It was expected that some curvature would be present, especially with respect to the interactions between variables. The final step of the model building was to assess for multi-collinearity. “Multi-collinearity” refers to independent variables that are so closely correlated that their individual impacts on the dependent variable are difficult to determine. Due to the correlations present within the CFA, it was expected that this could be an issue. Since the desire is to create a parsimonious model, effort had to be spent to resolve the multi-collinearity issues associated with strong correlation.

Assessment of Coded Data

Once the raw data had been coded based upon the criteria discussed above, it was reviewed to assess overall consistency and variability. As part of this analysis, general counts of the qualitative variables and averages of the quantitative variables were computed. These are shown in the tables below. In general, categorical variables are problematic for general linear regressions because one of the linear regression assumptions is that the data is continuous. Since respondents may not consider each point on a Likert scale to be equidistant, the validity of any numerical operation may be questioned. However, some research has indicated that ordinal Likert data is valid in some situations, such as factor analysis, if certain criteria are met (Lubke and Muthen, 2004). Other research has also validated that F-tests in ANOVA can accurately calculate p-values on categorical variables under certain conditions (Glass, et al., 1972).
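A standard way to quantify the multi-collinearity concern raised above is the variance inflation factor, obtained by regressing each predictor on all the others; a VIF near 1 means a predictor is nearly independent of the rest. This numpy sketch is an illustration of the diagnostic, not the dissertation's own code:

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of X (n rows, k >= 2 columns)."""
    X = np.asarray(X, dtype=float)
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(len(X)), others])   # intercept + other predictors
        yhat = A @ np.linalg.lstsq(A, y, rcond=None)[0]
        ss_res = np.sum((y - yhat) ** 2)
        ss_tot = np.sum((y - y.mean()) ** 2)
        r2 = 1.0 - ss_res / ss_tot
        out.append(1.0 / (1.0 - r2) if r2 < 1.0 else np.inf)
    return out

x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
x2 = np.array([1.0, -1.0, 1.0, -1.0, 1.0])   # uncorrelated with x1
factors = vif(np.column_stack([x1, x2]))     # both near 1.0: no inflation
```

Pairs like Number of People and FTE, flagged earlier in the chapter, would produce very large VIFs and so justify aggregating them into a single metric.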
Table 25: Summary of Coded Data (average, with standard deviation in parentheses)
  Number of Organizations: 2.27 (2.45)
  Number of Customers: 2.39 (2.77)
  Investment: $25,579.41 ($42,668.04)
  3 Year Savings: $401,444.53 ($733,077.82)
  Return on Investment: 36.76 (61.25)

Table 26: Distributions of Coded Variables (counts from lowest to highest level)
  Level of Requirements Understanding (Very Low to Very High): 0, 0, 6, 56, 34
  Level of Service Required (Very Low to Very High): 3, 39, 30, 23, 1
  Criticality of Requirements (Very Low to Very High): 2, 44, 38, 12, 0
  Customer Involvement: 62, 22, 5
  Activity Complexity: 49, 43, 4
  Documentation: 55, 23, 18
  Tool Usage: 1, 72, 9, 14
  Teaming: 1, 59, 27, 9
  Number of People (Team, Department, Division): 12, 61, 23
  Process Execution (Serial, Mixed, Parallel): 8, 35, 53

In instances where categorical variables cannot be used with a parametric procedure, the variables must be re-coded into binary variables. This presents a different challenge in that the interpretation can be extremely difficult. Similar data from the COCOMO and COSYSMO family of models have used similar scales to generate statistically significant results. In those instances, the ordinal values were transformed into continuous values based upon expert input in a Delphi assessment. Based upon a review of the coded data, it became apparent that several assumptions could be made to simplify the model. This simplification also supports the effort to create a parsimonious model. The first simplification involved the variables identifying the number of people, number of organizations, and number of customers. The enterprise is organized in a matrix format based upon customer and product. For instance, a single department will conduct all in-service installation and engineering for a single product; if the same function is performed for another customer, it will be done in a different department. The result is that the number of customers identified for a given process is generally the same as the number of organizations identified in the process.
Consequently, an assumption was made to focus on the number of customers as a predictor due to the ease of verification. The distribution of results for the number of people variable is also of significant interest. Based upon the data, the projects were skewed towards department-level processes. Upon further investigation, this result was justified by the nature of the process improvement efforts. Since process improvement methodologies focus on a process that produces a product, and the enterprise is a matrix based on product and customer, it is expected that most of the processes studied would be at the department level. In addition, of those processes that were identified at the team level, the majority of the work was done within a single department. Conversely, the projects that were identified as division-level focused on overhead activities that spanned many departments. Additional data was obtained through an organizational survey to determine the average department and division size (15 and 211, respectively). Although this would allow the number of people variable to be transformed into a single continuous variable, it is preferred to continue to treat it as a categorical variable. To accomplish this, the variable was transformed to a 0 or 1 variable in which 1 represents division-level projects and 0 represents all others (department and team projects). The categorizations for role type were also simplified upon review. Although the initial coding hypothesized six distinct roles, the actual data often represented multiple roles in the same process. In addition, roles such as engineering, maintenance, and logistics were often indistinguishable due to the nature of the systems and work performed. As a result, the data coding was simplified to represent technical tasks (primarily engineering, but also some logistics and software) versus overhead tasks. The overhead tasks include activities such as general management, administrative work, contracting, etc.
A 0 or 1 variable was created to represent each of these categories. The distribution of data for the level of requirements understanding was significantly higher than expected. Approximately 60% of the results indicated that the process requirements were Highly understood, and the majority of the remaining projects were Very Highly understood. This result was investigated further and validated based upon the nature of the work performed by the organization. Since the organization is primarily a maintenance organization, with extremely limited development work, there is very little work in which the requirements are not at least Highly understood. In this instance, it is more appropriate to segment the data into processes with some undefined requirements versus processes in which all requirements are well-defined. A bimodal distribution also emerged for product criticality, in which approximately half of the projects were considered Low or Very Low on the respective scales. These projects were often associated with items such as technical documentation, management reviews, or technical data capture and analysis. Conversely, since most of the work is maintenance and logistics, there are relatively few highly critical requirements. This variable was transformed into a categorical 0 or 1 to represent Low criticality or not. Similarly, bimodal distributions were observed for the customer involvement and activity complexity variables. For customer involvement, the function of the enterprise is to perform activities that require little customer input; thus, limited customer involvement is expected. For those instances in which involvement was not Low, the project primarily dealt with in-service engineering or another similar function. With respect to activity complexity, the explanation is more obscure. It is likely that there is a self-selection bias to conduct process improvement efforts on processes in which the complexity is relatively low.
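The 0/1 recoding applied throughout this section can be expressed as a small helper. The level names follow the scales defined earlier in the chapter, while the function itself is a hypothetical illustration of the simplification rule, not code from the study:

```python
def recode_binary(level, positive_levels):
    """Collapse an ordinal level into a 0/1 indicator."""
    return 1 if level in positive_levels else 0

# Division-level projects vs. all others (department and team), as above:
size_code = recode_binary("Division", {"Division"})        # 1
# Low or Very Low product criticality vs. not:
crit_code = recode_binary("Nominal", {"Very Low", "Low"})  # 0
```

Collapsing sparse ordinal levels this way trades resolution for interpretability and keeps the eventual regression parsimonious, consistent with the simplification goals stated above.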
Highly complex activities are difficult for a traditional process improvement methodology to model. Furthermore, the requirement to report verifiable cost reductions necessitates that the savings be repeatable, and highly complex processes would be expected to have more variation. Despite these challenges, it is still useful to model the impact of processes with low complexity versus processes with at least Nominal complexity. The observed variance in the data for the level of service requirements was as expected, although it was slightly skewed towards products with low levels of service requirements. Since only 1 project was identified as needing Very High and 3 projects as needing Very Low levels of service, each of these was combined into the appropriate adjacent grouping to create a 3-level categorical variable. The distribution of results for the process execution variable exhibited a bias towards serial processes. As with activity complexity, it is reasonable to believe that there is a self-selection bias in the projects that were conducted. Traditional value-stream mapping activities encourage the process to be displayed in a serial format. In addition, the bureaucratic nature of the organization, with numerous approvals and reviews for most functions, also suggests a bias towards serial processes. Thus, this value was also re-coded as being serial or not. The final three variables, degree of available documentation, usage of IT tools, and degree of teaming, exhibited similar distributions. Overall, most processes had little documentation, although some did possess nominal or high levels. One observation is that the single most common product resulting from a process improvement event was the creation of standardized documentation. Thus, it is reasonable to hypothesize that low levels of documentation would suggest opportunities for improvement. The majority of projects also required little tool usage or teaming.
While this was expected for tool usage, a more robust distribution was expected for the teaming variable. However, it is hypothesized that this variable also demonstrates a self-selection bias, in that processes with a greater degree of teaming are considered more complex and more difficult to model, thus creating an incentive not to pursue traditional process improvement methodologies. Regardless, this variable continues to be of interest for the model. Each of these was also coded as a 0 or 1 binary variable, with Low as 0. Regarding the dependent variable, return on investment, it should be noted that its standard deviation is very large. Although the initial hypothesis was to examine the model results from both a cost and a time perspective, only the cost perspective was constructed. This was because each project was required to report actual costs, based on hours saved and other material costs, while time saved was not consistently reported. Future research will have to evaluate whether the hypothesized strong correlation between cost and time is maintained.

Correlation

A Pearson correlation was computed for all of the variables listed above to determine significant correlations. The purpose of this analysis is to identify those variables that appear to be dependent upon each other. The complete matrix can be found in Appendix F. However, a summary of the significant relationships and their relative strengths is listed in Table 27, below. An analysis of the correlation matrix results reveals significant insights. Most interesting is the fact that there is a negative correlation between the return on investment for a given project and the initial investment. Although the correlation is weak, it is significant. An application of this correlation would be that process improvement efforts should focus on smaller processes that require limited investment.
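The Pearson coefficient used here is the standard product-moment formula; a compact numpy sketch with toy numbers (illustrative values, not the project data):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson product-moment correlation of two equal-length samples."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# A perfect negative relationship, analogous in sign (though not in
# magnitude) to ROI falling as investment grows:
r = pearson_r([1, 2, 3, 4], [8, 6, 4, 2])
```

A coefficient of -1 indicates an exact inverse linear relationship; the observed ROI-versus-investment correlation of -.209 is the same sign but far weaker.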
Stated differently, to optimize ROI, it is better to complete numerous small process improvements instead of a single large process improvement.

Table 27: Summary of Correlation Matrix Variables
(*values computed at the 0.01 significance level)

Variables                                                         Correlation Coefficient
ROI and Investment                                                -.209
Date and Number of People                                          .237
Date and Level of Requirements Understanding                      -.338*
Date and Number of Customers                                       .207*
Number of People and Number of Customers                           .595*
Number of People and Degree of Documentation                      -.238
Number of People and Investment                                    .320
Engineering Tasks and High Service Levels                          .309*
Engineering Tasks and Low Service Levels                          -.373*
Engineering Tasks and Product Criticality                          .397*
Engineering Tasks and Customer Involvement                         .407*
Engineering Tasks and Activity Complexity                          .433*
Engineering Tasks and Teaming                                      .260
Overhead Tasks and Level of Requirements Understanding             .380*
Overhead Tasks and High Service Levels                            -.271*
Overhead Tasks and Low Service Levels                              .266*
Overhead Tasks and Product Criticality                            -.247
Overhead Tasks and Activity Complexity                            -.494*
Overhead Tasks and Serial Process                                  .215
Overhead Tasks and Tool Usage                                      .344*
Level of Requirements Understanding and High Service Requirement  -.226
Level of Requirements Understanding and Low Service Requirements   .258
Level of Requirements Understanding and Product Criticality       -.292*
Level of Requirements Understanding and Customer Involvement      -.412*
Level of Requirements Understanding and Activity Complexity       -.507*
Level of Requirements Understanding and Serial Process             .229
Level of Reqs Understanding and Tool Usage                         .299*
Level of Reqs Understanding and Teaming                           -.394*
Customer Involvement and Product Criticality                       .492*
Activity Complexity and Customer Involvement                       .538*
Activity Complexity and Serial Process                            -.249
Teaming and Activity Complexity                                    .447*
Teaming and Serial Process                                        -.427*
Investment and Number of People                                    .320

However, the fact that the only factor that had a significant correlation with the return on
investment for a given process improvement was the level of investment does raise concerns about the ability to model the process in a predictive manner. Despite this, the research is still valid. Because of its seminal, exploratory nature, the original research statement did not require finding a model with a p-value of 0.05, and it is reasonable to expect that a looser p-value threshold would identify more significant correlations. Furthermore, the nature of some of the variables could be argued to support one-tailed hypothesis testing; when this is done, several additional variables become significant at the 0.05 p-value. The impact of time on the process improvement activities could not be hypothesized during the initial phase of the research. Nevertheless, upon analysis of the correlation matrix, there is a statistically significant, albeit rather weak, positive correlation between the number of people and the date of a given process improvement effort. A similar relationship was observed for the number of customers. This suggests that as the process improvement efforts became institutionalized, the projects began to focus on larger processes. This is also reinforced by the fact that the level of requirements understanding decreased as the date increased. Upon deeper investigation, the most plausible explanation is that the easiest projects were completed first, leaving larger, more difficult projects for future activities. By extension, if the Lean Six Sigma methodologies continued to be employed, the data suggests that the projects would continue to evolve and focus on larger, more complex projects. The number of people was strongly correlated with the number of customers in a given process. This can be intuitively supported because a larger number of customers generally requires additional people to meet the requirements. There are numerous bodies of organizational theory that would corroborate this finding.
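As an aside on the one-tailed testing noted above, the adjustment is mechanical: when the observed effect has the hypothesized sign, the one-tailed p-value is half the two-tailed value. A minimal sketch (the 0.06 input is a hypothetical p-value, used only for illustration):

```python
def one_tailed_p(p_two_tailed: float, effect: float, expected_sign: int) -> float:
    """Convert a two-tailed p-value to a one-tailed p-value for a directional test.

    If the observed effect matches the hypothesized direction, the one-tailed
    p-value is half the two-tailed value; otherwise it is 1 - p/2.
    """
    if effect * expected_sign > 0:
        return p_two_tailed / 2
    return 1 - p_two_tailed / 2

# A correlation of -.226 with a hypothetical two-tailed p of 0.06 would fail a
# two-tailed 0.05 test, but passes one-tailed if a negative sign was predicted
print(one_tailed_p(0.06, -0.226, expected_sign=-1))  # 0.03
```

This is why variables whose direction was predicted in advance can cross the 0.05 threshold under one-tailed testing.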
The fact that the number of people also correlates with the amount of investment required is supported in this context because investments in process changes will be higher if there are additional people. Stated differently, overcoming organizational inertia to effect actual change management will require additional resources as the number of people increases. A more surprising result is that the number of people employed in a process decreases as the degree of available documentation increases. This is significant because it suggests that documentation facilitates the transfer of information, enabling the work to be done with fewer individuals. A review of the project results showed that almost every project produced standardized documentation or procedures that could be employed to reduce costs. Thus, this finding seems to suggest that those processes with limited documentation should be prioritized targets for improvement. An assessment of the nature of the work led to unsurprising findings. Engineering tasks were strongly correlated with tasks that require high service levels, significant product criticality, high activity complexity, and high teaming. Conversely, overhead tasks were correlated with the opposite. Given that the nature of engineering work requires complex problem solving in uncertain situations and new applications, these findings are not surprising. Also, the fact that overhead tasks are associated with lower service requirements, lower complexity, and lower criticality is not surprising. Overhead tasks also seem to be correlated with serial processes and IT usage. Given that many overhead tasks are considered “non-value added” tasks, it makes sense that these be automated through tools as much as possible. Moreover, tool automation appears to encourage a serial process due to the ease of implementation. Level of requirements understanding demonstrated a significant correlation with numerous variables.
A high level of requirements understanding was correlated with a low service requirement, a serial process, and tool usage, while inversely correlated with product criticality, customer involvement, teaming, and activity complexity. The resulting analysis suggests that a high level of requirements understanding emerges from processes in which uncertainty has been reduced as much as possible and predictability is maximized. Although the correlation of Level of Requirements Understanding with ROI was not flagged as significant at the p = 0.05 level, it is hypothesized that it will emerge as significant in the generalized linear regression. This would also suggest that process improvement efforts provide a benefit by reducing uncertainty and increasing predictability, a central tenet of most quality methodologies. Therefore, processes in which the level of requirements understanding was low should be prioritized candidates for the application of Lean Six Sigma methods. Another interesting aspect of the Level of Requirements Understanding correlation is the magnitude of the inverse correlation with customer involvement, teaming, and activity complexity. From the latter perspective, it is reasonable to assume that more complex activities have higher uncertainty, leading to less understanding. However, most quality methodologies consistently prescribe more customer involvement. This data suggests that customers become more involved when processes are more uncertain. The data does appear to support the systems engineering perspective that customers will be more involved until the requirements are finalized. Thus, it could be hypothesized that this is really a quadratic function in which customer involvement will increase and then decrease over time. Other correlations were not surprising based upon the hypotheses discussed above.
The number of customers was correlated with the degree of tool usage and is theorized to represent an attempt to standardize the process outputs and achieve some type of economy of scale. Customer involvement was strongly correlated with activity complexity and teaming. This observation is theoretically explained by the need to reduce uncertainty by including additional customers and people in the process to accomplish more difficult tasks. Similarly, activity complexity is inversely correlated with the degree to which activities are serial and with tool usage, while positively correlated with teaming. This further establishes that complex activities are those where additional uncertainty is present, requiring more input from teams and customers to complete.

Transformation of Data

Prior to constructing a general linear regression model, it is necessary to review the data for extreme outliers and variables that are candidates for transformation. Transformation is the means by which the data can more appropriately approximate a linear relationship. To accomplish this, a scatterplot matrix was developed and analyzed. Note that the binary variables are removed from the transformation assessment. Consequently, the variables examined include Date, Investment amount, and ROI. The data was reviewed for outliers that may skew the data set. Four potential cases were identified. When these were analyzed in detail, it became evident that they were special cases. One of the outliers was the first Rapid Improvement Event conducted by PHD. When reviewed, the documentation explicitly stated that the return on the investment was low because this was the first RIE and the organization was expending additional resources to use it as a learning experience. Two other outliers were identified as being focused on strategic, rather than operational, issues. As such, the investment was made irrespective of the potential savings.
Finally, the fourth outlier was ascertained not to be process-centric. The initial documentation had appeared to be process-centric, but a discussion with the principals indicated that it had more to do with work-force balancing than with improving the process. Once the outliers were removed, a Box-Cox transformation was conducted on the variables, as shown in Figure 4 below. The results suggested that both the ROI and Investment variables should be transformed into log variables. Given the wide disparity in the values for both fields, this was easily supported. The resulting scatterplot matrix is shown below.

Figure 4: Scatterplot Matrix with Box-Cox Transformation

Model Results

Based on the coding and transformation discussed above, a generalized linear regression was conducted on the data set. A backwards elimination methodology was utilized with an identity kernel mean function. The complete backwards elimination and all associated intermediate steps can be seen in Appendix G. It should be noted that the initial model using all variables resulted in an overall p-value of 0.0120, within the desired range, and included replicate observations that validated the model fit. In addition, the R-Squared statistic was 0.3192. Although this is much lower than many other engineering models, it is comparable with management, financial, and social science models. Although the R-Squared is a concern, it does not invalidate the fact that the ROI can be modeled; instead, it suggests that future research should identify additional variables that would capture the uncertainty. The final model that was determined through the backwards elimination is shown in Figure 5, below. This was accomplished by iteratively eliminating all variables for which the p-value was greater than 0.10. The final model identified three variables as the primary predictors of the log(ROI). These variables were log(Investment), the number of customers, and the level of service requirements.
Of these, only the number of customers had a positive coefficient.

Figure 5: Linear Regression Results

Data set = Dissertation, Name of Fit = L13
Deleted cases are (15 19 22 95)
Normal Regression
Kernel mean function = Identity
Response = log[ROI]
Terms = (log[Invest] NoCust {F}Service)

Coefficient Estimates
Label          Estimate    Std. Error  t-value  p-value
Constant        7.31491    0.816790     8.956   0.0000
log[Invest]    -0.468128   0.0863921   -5.419   0.0000
NoCust          0.0798979  0.0453253    1.763   0.0815
{F}Service[2]  -0.504114   0.321743    -1.567   0.1208
{F}Service[3]  -0.744789   0.309228    -2.409   0.0181

R-Squared: 0.280712
Sigma hat: 1.16312
Number of cases: 96
Number of cases used: 92
Degrees of freedom: 87

Summary Analysis of Variance Table
Source       df   SS       MS        F     p-value
Regression    4   45.9334  11.4833   8.49  0.0000
Residual     87  117.698    1.35285
Lack of fit  75  102.676    1.36901  1.09  0.4630
Pure Error   12   15.0223   1.25185

Once a statistically significant model had been identified, it was necessary to validate the various linear regression assumptions. Since the p-value for the lack-of-fit test was 0.46, the null hypothesis that the model fits cannot be rejected, indicating that the model specified above is an appropriate fit. In addition, a test for non-constant variance and Tukey's test for curvature were conducted, as shown in the figures below. In both instances, the calculated p-values indicated that neither non-constant variance nor curvature was present. Therefore, the linear regression assumptions are valid.

Figure 6: Test for Non-constant Variance

As a final model check, the various outliers, leverages, and Cook's Distances were also examined. It was determined that the included data points were within acceptable ranges for each of these metrics. In addition, when a sensitivity analysis was conducted by removing the largest leverages and Cook's Distances, the linear model did not change substantially. Thus, it was concluded that the model was robust.
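The leverage and Cook's Distance screening described above can be reproduced with the standard formulas. The sketch below fits a two-predictor model to synthetic data (the coefficients and values are illustrative, not the study's) and flags cases exceeding common rule-of-thumb cutoffs.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 92, 3  # cases used and number of fitted terms (including intercept)

# Synthetic stand-ins for the retained predictors (values are illustrative)
log_invest = rng.normal(9.0, 1.2, n)
no_cust = rng.poisson(3, n).astype(float)
log_roi = 7.31 - 0.468 * log_invest + 0.08 * no_cust + rng.normal(0, 1.16, n)

# Ordinary least squares fit and residuals
X = np.column_stack([np.ones(n), log_invest, no_cust])
beta, *_ = np.linalg.lstsq(X, log_roi, rcond=None)
resid = log_roi - X @ beta

h = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)      # leverages (hat-matrix diagonal)
mse = resid @ resid / (n - p)
cooks = (resid**2 / (p * mse)) * h / (1 - h) ** 2  # Cook's distances

# Common rule-of-thumb cutoffs: leverage > 2p/n, Cook's D > 4/n
flagged = np.where((h > 2 * p / n) | (cooks > 4 / n))[0]
print(f"max leverage = {h.max():.3f}, max Cook's D = {cooks.max():.3f}")
print(f"cases flagged for review: {flagged.size}")
```

A sensitivity check of the kind described above would then refit the model with the flagged cases removed and compare coefficients.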
Figure 7: Test for Curvature

Figure 8: Assessment of Leverages

Figure 9: Assessment of Residuals

Figure 10: Assessment of Cook's Distances

In addition to the above model creation and validation process, the model was also analyzed using a split-sample method. In this instance, the 92 data points were randomly split into two separate data sets, and a generalized linear regression model was then created for each. The purpose of this analysis was to verify that the factors that had been identified as significant in the full model were similarly found to be significant in the two split samples. This is done by analyzing the p-values of the individual variables and by assessing the R-Squared of each model. Each of the variables identified in the full model should also be identified as significant in the split samples. Further, the R-Squared for each model should not differ by more than 5 percentage points from the R-Squared of the complete model. The results of this analysis are shown below. The analysis of the split-sample models validates the relevance of the investment and number of customers variables. However, the variables associated with level of service requirements were not found to be significant in the split samples, as they were in the linear regression specified by the complete model. It should be noted that the level of service requirements variable was somewhat marginal in the complete model, as illustrated by the 0.12 p-value for the second level of service requirement factor. Detailed analysis of the split samples suggests that there was not enough variability within each sample to enable this variable to be significant. Despite not including the level of service requirements variable, the split-sample analysis does validate the model predicted by the full data set. The R-Squared values of .318 and .268 are well within the 5 percentage point threshold of the complete model's .281.
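The split-sample procedure described above can be sketched as follows. The data is synthetic, with a two-predictor design mirroring the retained terms (log investment and number of customers); all numbers are illustrative.

```python
import numpy as np

def split_sample_check(X, y, seed=0):
    """Fit the same OLS model on two random halves and return each R-squared."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    r2 = []
    for half in np.array_split(idx, 2):
        Xh, yh = X[half], y[half]
        beta, *_ = np.linalg.lstsq(Xh, yh, rcond=None)
        resid = yh - Xh @ beta
        sst = (yh - yh.mean()) @ (yh - yh.mean())
        r2.append(1 - resid @ resid / sst)
    return r2

# Synthetic data standing in for the 92 coded projects (values are illustrative)
rng = np.random.default_rng(2)
n = 92
X = np.column_stack([np.ones(n), rng.normal(9.0, 1.2, n), rng.poisson(3, n)])
y = 7.3 - 0.47 * X[:, 1] + 0.08 * X[:, 2] + rng.normal(0, 1.16, n)

r2_a, r2_b = split_sample_check(X, y)
print(f"split-half R-squared: {r2_a:.3f} vs {r2_b:.3f}")
```

The validation step then compares each half's R-squared and per-variable p-values against those of the full-sample fit.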
Additionally, the two most significant predictors were validated as statistically significant in each of the sub-models. Therefore, it is reasonable to conclude that the model is valid. Although it would be preferred that all three variables identified in the complete model be validated in the split samples, a larger data set would likely demonstrate this. Given that this is a seminal project, future research should focus on increasing the data set to further refine the model.

Figure 11: Split Sample Validation Results

Kernel mean function = Identity
Response = log[ROI]
Terms = (log[Invest] NoCust)

Coefficient Estimates
Label        Estimate    Std. Error  t-value  p-value
Constant      6.68021    0.945646     7.064   0.0000
log[Invest]  -0.488563   0.111061    -4.399   0.0001
NoCust        0.154439   0.0671828    2.299   0.0264

R-Squared: 0.317583
Sigma hat: 1.08895
Number of cases: 46
Degrees of freedom: 43

Summary Analysis of Variance Table
Source      df  SS       MS       F      p-value
Regression   2  23.7298  11.8649  10.01  0.0003
Residual    43  50.9903   1.18582

Kernel mean function = Identity
Response = log[ROI]
Terms = (log[Invest] NoCust)

Coefficient Estimates
Label        Estimate    Std. Error  t-value  p-value
Constant      8.03440    1.41678      5.671   0.0000
log[Invest]  -0.583581   0.149508    -3.903   0.0003
NoCust        0.100164   0.0625835    1.600   0.1168

R-Squared: 0.268167
Sigma hat: 1.2294
Number of cases: 46
Degrees of freedom: 43

Summary Analysis of Variance Table
Source       df  SS       MS        F     p-value
Regression    2  23.8149  11.9075   7.88  0.0012
Residual     43  64.9914   1.51143
Lack of fit  41  62.3467   1.52065  1.15  0.5733
Pure Error    2   2.64471  1.32235

Analysis of Predictive Model Results

An analysis of the predictive model results identified three variables with significant predictive power: dollars invested in the process improvement effort, number of customers involved with the process, and the level of service required. As noted above, the first and the third exhibited negative coefficients, indicating inverse relationships, while the second produced a positive coefficient.
Of these variables, the most significant is the dollars invested in a process. The p-values for this variable were consistent in each iteration of the model. In addition, it has the largest coefficient. Furthermore, the strong significance of this variable has major implications for the application of process improvement methodologies.

Figure 12: Linear Regression Results Using Investment Variable

Normal Regression
Kernel mean function = Identity
Response = log[ROI]
Terms = (log[Invest])

Coefficient Estimates
Label        Estimate    Std. Error  t-value  p-value
Constant      6.61612    0.795643     8.315   0.0000
log[Invest]  -0.423249   0.0855235   -4.949   0.0000

R-Squared: 0.213918
Sigma hat: 1.19549
Number of cases: 92
Degrees of freedom: 90

Summary Analysis of Variance Table
Source       df   SS       MS        F      p-value
Regression    1   35.0037  35.0037   24.49  0.0000
Residual     90  128.628    1.4292
Lack of fit  55   92.4938   1.68171   1.63  0.0631
Pure Error   35   36.1339   1.0324

From a model perspective, it is informative to examine the degree to which the amount of investment impacts the model. This is accomplished by creating a version of the model with log(Investment) as the sole predictor of log(ROI), as shown above. As can be observed, the R-Squared for the version of the model using investment alone is .214. Since the full model had an R-Squared of .281, it is clear that the majority of the model's predictive power is due to this variable. However, the other two variables do explain approximately 7% of the total variation within the complete data set. Consequently, although the investment variable does dominate the predictive power of the model, the analysis clearly demonstrates the value of including the other two variables in the complete model. The fact that the dollars invested is inversely related to the return on investment indicates that, in order to maximize ROI, it is best to conduct numerous small improvement efforts rather than large, costly efforts.
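The variance attribution just described is simple arithmetic on the two reported R-squared values:

```python
# Incremental variance explained when adding NoCust and the service-level factor
# to the investment-only model, using the R-squared values reported above
r2_full = 0.280712         # log[Invest] + NoCust + {F}Service
r2_invest_only = 0.213918  # log[Invest] alone

incremental = r2_full - r2_invest_only
share_from_invest = r2_invest_only / r2_full
print(f"additional variance explained: {incremental:.1%}")        # 6.7%
print(f"share of explained variance due to investment: {share_from_invest:.1%}")  # 76.2%
```

This is what grounds the statement that the investment term dominates while the other two variables still contribute roughly 7 percentage points of explained variation.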
From a theoretical perspective, this validates the fundamental tenets of Lean methodologies, in which projects were conceived and executed from the factory floor, with limited application to larger enterprise processes. This finding is extremely significant in that it is counter to recent process improvement literature that focuses on Enterprise Transformation. In the past several years, industry practitioners have encouraged larger and more complex process improvement events based upon the argument that large-scale events have more potential for significant impact. However, these prescriptions are based exclusively upon anecdotal and qualitative observations. The fact that this research uses a rigorously documented quantitative methodology strongly suggests that these recommendations be revisited. In addition, the finding to focus on small investments potentially repudiates the Business Process Re-engineering (“BPR”) methodologies, in which practitioners were encouraged to eliminate processes wholesale and re-design them in their entirety. Furthermore, the model strongly indicates that as investment increases, the corresponding increase in the return on investment flattens. The significance of the number of customers was surprising. It was hypothesized that the greater the number of customers, the greater the uncertainty and the more potential for improvement and standardization. The resulting model validated this hypothesis: as the number of customers increases, the potential ROI from a process improvement also increases. To interpret this parameter, it should be noted that the number of customers was strongly correlated with the number of people in the process and with tool usage. Further, the organizational structure of the enterprise meant that the number of customers was also strongly correlated with the number of organizations involved.
Thus, as the number of customers increases, the process would naturally exhibit larger information coordination costs and organizational coordination issues, and have a significantly higher potential for non-value added tasks, such as coordination and reviews. An additional explanation for this observation is that as the number of customers increases, so does the opportunity for standardizing delivered products across the customer base. A qualitative review of the Lean Six Sigma projects indicates that this was a very common outcome of the improvement events. Thus, the number of customers can essentially be thought of as an economy of scale metric: the more customers, the more potential there is to achieve an economy of scale and reap process improvements. The final significant variable was the categorical variable level of service requirement. The behavior exhibited by this variable suggests that the higher the level of the service requirement, the greater the potential ROI from a process improvement event. Upon review of the data, it is possible that this variable also explains elements of activity complexity and uncertainty. Thus, the more difficult the task, the better it would be to improve it. High levels of service requirements were strongly correlated with engineering tasks, products with significant criticality, high customer involvement, and high activity complexity. Thus, processes that exhibited high service requirements were generally categorized as the most difficult technically and the most significant from a customer's perspective. Similarly, low levels of service requirements were associated with overhead tasks that exhibited the opposite characteristics. A theoretical rationale for the behavior of this variable is that the higher the level of service requirements, the more uncertainty associated with the process. Uncertainty leads to unpredictability and additional costs.
Furthermore, uncertainty enables non-value added tasks to creep into the process. This potential impact of uncertainty and complexity correlates well with several bodies of research in management and organizational theory. This finding also corresponds with the Lean literature, in that process improvement tasks should focus on maximizing value while seeking to minimize waste. By focusing on the most complex tasks with high criticality and customer involvement, improvement efforts would naturally be maximizing customer value. Another significant finding based upon this model is that activities associated with high service requirements, in this context primarily engineering activities, yielded a higher return than processes associated with administrative and management activities. Thus, this research provides an indication of which types of processes should be prioritized. A future extension of this work would be to develop focused methodologies depending upon process type. Just as important as the parameters that were validated as significant are those that were not: of particular interest are the number of people, the degree of documentation, and the degree of teaming. Each of these was identified during the model validation stage as a potentially significant factor. Most difficult to explain is the fact that the number of people was not a significant variable in the model. It was strongly correlated with the number of customers due to the organizational structure of the enterprise. However, it was expected that the change management aspects of the process improvement efforts would be significantly impacted by the number of people. In addition, a larger number of people would also necessitate a larger investment, which is negatively correlated with the ROI. Thus, it is hypothesized that the interaction between number of customers and investment is sufficient to model the number of people involved in a process.
Despite this, future research on the impact of population size on process improvement efforts is clearly warranted. Similarly, the degree of teaming was also expected to be a significant variable based upon the factor analysis. The various quality methodologies stress the importance of teaming. In addition, the creation of teams impacts the flow of information and coordination. The lack of its inclusion is doubly puzzling when traditional management theory suggests that uncertainty and complexity in processes are mitigated through the use of teams. One potential explanation is the nature of the organization. Since the enterprise is a large government bureaucracy, there is little incentive to team for the increased performance that a traditional profit-making enterprise would reward; rather, incentives are based on the performance of individuals. This suggests that teaming may be de-emphasized in this application, but relevant in other enterprises. Regardless, the impact of teaming should be studied in future research. The final variable whose exclusion was a surprise is the degree of documentation available. The single most common output of a process improvement effort was the creation of standardized documentation and procedures. Thus, it is reasonable to expect that the degree of available documentation would be inversely related to the potential return on investment. This result was also suggested by the previous factor analysis. However, the predictive model did not recognize degree of documentation as a significant predictor. The best hypothesis for this result is that documentation is largely available for more mature, more certain processes. Thus, the nature of the process, rather than the available documentation, is more significant in predicting performance. Future research should examine this topic in more detail due to its importance in methodologies such as CMMI.
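Taken together, the three retained predictors can be used to score candidate projects. The sketch below applies the point estimates reported in Figure 5; the scenario values are hypothetical, and the naive exponentiation of the predicted log(ROI) ignores retransformation bias, so it should be read as illustrative rather than as a calibrated forecast.

```python
import math

# Point estimates from the final fitted model (Figure 5); the service-level
# factor enters with level 1 as the baseline
COEF = {
    "const": 7.31491,
    "log_invest": -0.468128,
    "no_cust": 0.0798979,
    "service": {1: 0.0, 2: -0.504114, 3: -0.744789},
}

def predicted_roi(investment: float, n_customers: int, service_level: int) -> float:
    """Naive point prediction of ROI for a candidate project (illustrative only)."""
    log_roi = (COEF["const"]
               + COEF["log_invest"] * math.log(investment)
               + COEF["no_cust"] * n_customers
               + COEF["service"][service_level])
    return math.exp(log_roi)

# Hypothetical comparison: one $100k effort versus one $10k effort
big = predicted_roi(100_000, n_customers=5, service_level=2)
small = predicted_roi(10_000, n_customers=5, service_level=2)
print(f"predicted ROI, $100k effort: {big:.1f}")
print(f"predicted ROI, $10k effort:  {small:.1f}")
```

Because the investment coefficient is negative, the smaller effort is predicted to return a substantially higher ROI, which is the core finding discussed above.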
Caveats

Despite the success of the research results and the significance of the findings discussed above, a couple of caveats are also appropriate. Foremost among these is the application of the model that was developed. The predictive model is a statistically significant way to optimize return on investment, based upon a three-year cost savings report. However, there may be numerous instances in which other considerations warrant the selection of a project with a sub-optimal ROI, such as implementation of business strategy, market positioning, or workforce development. As such, caution should be taken to ensure that the model is used in an appropriate manner; it is simply one of several tools and techniques that can assist organizations with selecting a portfolio of process improvement projects. There are also some concerns about the generalizability of the research. While the methodology is fundamentally sound, all data was collected from a single organization, raising some question as to its external validity. Although the nature of the organization presented a unique opportunity to examine the application of process improvement methodologies in a knowledge-based environment, caution is recommended before generalizing to a larger context. Future research should extend the same methodology to other enterprises to improve the external validity of the results and determine if they are applicable across multiple organizations. Similarly, although all data collected was validated in multiple manners, a larger sample population would also be welcome to verify the results. Despite this, the research result is a significant contribution and provides a basis upon which future refinements could be made. From an internal validity perspective, the coding of the project data was done diligently with as much detail as available. However, it represents a review of historical data.
Future research should improve the fidelity of the research by collecting the data in real time. This would allow the collection of a wider set of variables, providing a more robust model with richer insights. As discussed above, although improvements could be made to the internal validity of the research, the results remain a valuable contribution to the body of knowledge and provide a basis upon which future research can be conducted. Upon review, the concerns about external and internal validity have been appropriately addressed throughout the research design, data collection, and data analysis. Future research may refine this further, but the results are appropriate, verifiable, and significant. Thus, concerns with respect to validity have been mitigated.

Chapter 11: Conclusion and Next Steps

The research presented in this document fills a distinct knowledge gap in the existing academic literature and significantly extends the body of knowledge for the engineering application of quality methodologies. By aggregating the various quality methodologies into a single, academically rigorous theoretical model, it has been possible to identify similarities, explain differences, and provide a solid rationale for how process improvement activities occur. The factor analysis conducted validates this model in a statistically significant manner. Although previous research has conducted exploratory factor analysis on various quality instruments, this research presents a much more detailed perspective, drawing upon numerous bodies of literature and not relying upon a single instrument. The results significantly enhance the understanding of how process improvement methodologies can be expected to impact a process and generate a return on investment. The final aspect of the research, a predictive model for process improvement implementation, is a truly seminal study; no prior models in the academic or professional literature have attempted this.
The statistically validated results have the potential to greatly influence future process improvement implementation efforts. The amount of investment required, the level of service requirements, and the number of customers were found to be the most significant predictors of return on investment for process improvement activities. Each of these was analyzed in detail and accompanied by a solid theoretical explanation supporting the statistical result. Future enhancements to this research should focus on refining the predictive model so that its parameters better predict the associated return on investment. Associated with this effort is the need to identify additional variables that may significantly affect the implementation of process improvement efforts. The result of this research agenda, while leaving room for improvement, is a significant product that will be of use to both academia and industry.

Appendix A: List of Interviews

Name | Organization | Title
Haig Armaghanian | Haig Barrett Incorporated | Managing Partner
Dr. Kirk Bozdogan | Massachusetts Institute of Technology ("MIT") | Principal Research Associate
Linda Chan | Northrop Grumman Space Technologies | Master Black Belt
Chris Cool | Northrop Grumman Mission Systems | VP, Operational Excellence
Dr. Heidi Davidz | The Aerospace Corporation | Sr. Member of the Technical Staff
Michael Engle | Naval Surface Warfare Center, Port Hueneme ("NSWC PHC") | Black Belt
Darrell Gooden | Naval Surface Warfare Center, Port Hueneme | Transformation Manager
Rick Hefner | NSWC PHC
Dan Jarmel | Northrop Grumman Space Technologies | Master Black Belt
Catherine Keller | Raytheon Space and Airborne Systems | Director, Lean
Dr. Michael Mann | EnCompass Knowledge Systems | Chairman
Jan Martinson | Boeing | Director, Lean
Ted Mayeshiba | University of Southern California | Adjunct Faculty
Thomas Mellring | NSWC PHC | Black Belt
Dr. Deborah Nightingale | MIT | LAI-MIT School of Engineering Co-Director
James Ogonowski | Boeing | Director, Lean Engineering
Dr. Bohdan Oppenheim | Loyola Marymount University | Professor and Graduate Director of Mechanical Engineering
Dr. Donna Rhodes | MIT | Principal Research Associate
Kraig Scheyer | Northrop Grumman | VP, Administrative Services
Ron Smith | Northrop Grumman Space Technologies | VP, Six Sigma
Ed Spaulding | Northrop Grumman | Director, Process Excellence
Jayakanth Srinivasan | MIT | Research Associate
Dr. Ricardo Valerdi | MIT | Research Associate
Charles Volk | Northrop Grumman, Electronic Systems | VP, Technology Development

Appendix B: Pilot NAVSEA Survey

Lean Six Sigma College Expert Opinion Survey

The purpose of this survey is to help the Navy determine the best way to identify and select candidate improvement opportunities.
To accomplish this, we are asking for your opinion on the factors that you consider important in predicting the success of a Rapid Improvement Event/Kaizen. We appreciate you taking the time to answer this survey!

Instructions: For each of the factors listed below, please circle how important you believe each of the following categories is in predicting the success of an RIE/Kaizen before the event.

1. The number of people involved with the process, how they work together, and the characteristics of the workforce.
   No Impact / Marginal / Useful / Very Important / Critical

2. The type of work performed within the process, the complexity of the activities, and the maturity of the process.
   No Impact / Marginal / Useful / Very Important / Critical

3. The nature of the customer and the type of product that results from the process.
   No Impact / Marginal / Useful / Very Important / Critical

4. The amount and nature of information technology and infrastructure dedicated to the process.
   No Impact / Marginal / Useful / Very Important / Critical

5. Organizational or environmental factors (such as location, department, or date of the event) within which the process operates.
   No Impact / Marginal / Useful / Very Important / Critical

Please rank order each of the categories in order of importance in predicting the success of an RIE (5 = most important, 1 = least important).

Category / Rank
People and Workforce _______
Activities and Process Characteristics _______
Customer and Product _______
Information and Technology Usage _______
Organizational and Environmental Factors _______

Please indicate how important each of the following is in predicting the success of an RIE.

1  2  3  4  5

The number of people who do any work within the process.
The amount of multi-tasking observed within the process
The average amount of interaction and coordination required to complete tasks
The number of functional roles that work together in the process
The training and skill requirements to work in the process
The diversity of the workforce in terms of education, tenure, age, and experience
The number of activities performed within the process
The complexity of the activities within the process
The type of work performed in the process (i.e., software development, engineering, etc.)
The amount of documentation available for reference
The degree to which individual activities are dependent upon previous activities
The degree to which the customer requirements are understood
The expected level of support once the customer has the product
The criticality of the product for the customer
The number of customers that the process serves
The diversity of the customers
The involvement of the customer in the process
The amount of automation associated with the process
The degree to which IT and engineering tools are used
The quantity of communication required to complete tasks
The observed quality of the communication between individuals
The number of organizations involved in the process

Appendix C: SPSS Output for Factor Correlation and Reliability Analysis

Pearson correlations (N = 188 throughout; two-tailed significance in parentheses). Only the lower triangle of each symmetric matrix is shown; diagonal entries are 1.

Correlations: People variables

              NoPeople      Multitasking  Teaming       NoRoles       Training
Multitasking  .064 (.384)
Teaming       .158* (.031)  .268** (.000)
NoRoles       .176* (.016)  .264** (.000) .345** (.000)
Training      .239** (.001) .194** (.008) .362** (.000) .430** (.000)
WorkforceDiv  .184* (.012)  .186* (.011)  .212** (.003) .279** (.000) .401** (.000)

* Correlation is significant at the 0.05 level (2-tailed). ** Correlation is significant at the 0.01 level (2-tailed).

Correlations: Process variables

               NoActivities  Complexity    WorkType      Documentation
Complexity     .625** (.000)
WorkType       .368** (.000) .406** (.000)
Documentation  .257** (.000) .197** (.007) .318** (.000)
DegreeSerial   .400** (.000) .411** (.000) .273** (.000) .233** (.001)

** Correlation is significant at the 0.01 level (2-tailed).

Correlations: Customer variables

               Requirements  CustSupport   Criticality   NoCusts       CustDiversity
CustSupport    .349** (.000)
Criticality    .409** (.000) .454** (.000)
NoCusts        .167* (.022)  .181* (.013)  .435** (.000)
CustDiversity  .127 (.082)   .026 (.722)   .165* (.023)  .486** (.000)
CustInvolve    .489** (.000) .201** (.006) .316** (.000) .219** (.003) .284** (.000)

** Correlation is significant at the 0.01 level (2-tailed). * Correlation is significant at the 0.05 level (2-tailed).

Correlations: Information variables

               Documentation Automation    Tooluse       CommQuant
Automation     .334** (.000)
Tooluse        .252** (.000) .729** (.000)
CommQuant      .099 (.176)   .217** (.003) .189** (.009)
CommQual       .308** (.000) .193** (.008) .143 (.051)   .547** (.000)

** Correlation is significant at the 0.01 level (2-tailed).
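The correlation and reliability statistics in this appendix were produced with SPSS, but the same quantities are straightforward to compute directly. The sketch below uses synthetic 5-point responses (hypothetical stand-ins for the 188 survey respondents) to compute a Pearson correlation with an approximate two-tailed significance, and Cronbach's alpha for a multi-item scale; the p-value uses a normal approximation to the t distribution rather than SPSS's exact test.

```python
import math
import numpy as np

def pearson_r_p(x: np.ndarray, y: np.ndarray):
    """Pearson r with an approximate two-tailed p-value.

    Uses a normal approximation to the t distribution, which is
    adequate for samples as large as the N = 188 used here."""
    n = len(x)
    r = np.corrcoef(x, y)[0, 1]
    t = r * math.sqrt((n - 2) / (1 - r * r))
    p = math.erfc(abs(t) / math.sqrt(2))  # two-tailed tail probability
    return r, p

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Synthetic 5-point responses standing in for the 188 respondents:
# six items driven by one shared latent factor plus item-level noise.
rng = np.random.default_rng(0)
latent = rng.normal(0, 1, 188)
scale = np.clip(np.round(3 + latent[:, None] + rng.normal(0, 0.8, (188, 6))), 1, 5)

r, p = pearson_r_p(scale[:, 0], scale[:, 1])
alpha = cronbach_alpha(scale)
print(f"r = {r:.3f}, p (2-tailed) = {p:.4g}")
print(f"Cronbach's alpha = {alpha:.3f}")
```

With real survey responses loaded as a NumPy array, the same two functions reproduce the kind of item correlations and scale alphas tabulated above.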
Item covariance matrix (lower triangle; diagonal entries are variances; shown in two column blocks):

(columns: NoPeople Multitas Teaming NoRoles Training WFDivers NoActivi Complexi WorkType Document)
NoPeople 1.0251
Multitas 0.0652 1.0173
Teaming 0.1494 0.2527 0.8766
NoRoles 0.1785 0.2672 0.3236 1.0063
Training 0.2612 0.2117 0.3661 0.4663 1.1681
WFDivers 0.2088 0.2103 0.2231 0.3150 0.4867 1.2626
NoActivi 0.1721 0.2633 0.1832 0.3688 0.3133 0.1996 1.1242
Complexi 0.0734 0.2118 0.1995 0.3795 0.3870 0.2151 0.7145 1.1615
WorkType 0.2398 0.2545 0.1522 0.3352 0.5424 0.4306 0.4524 0.5073 1.3445
Document 0.1253 0.0791 0.1831 0.2853 0.1776 0.1886 0.2877 0.2242 0.3888 1.1142
SerialDe 0.0638 0.2580 0.2562 0.3209 0.2699 0.2258 0.4399 0.4588 0.3287 0.2553
Requirem 0.0443 -0.0220 0.2604 0.1326 0.0997 0.1365 0.1171 0.1179 0.0676 0.1514
CustSupp 0.0597 0.0177 0.1652 0.2414 0.4015 0.2615 0.0338 0.1089 0.2304 0.3612
Critical 0.0266 0.0887 0.2815 0.2001 0.2165 0.1434 0.2384 0.4038 0.2085 0.1042
NoCusts 0.0965 0.2932 0.1803 0.2335 0.2515 0.2880 0.2825 0.3447 0.1766 0.1739
CustDiv 0.0296 0.4004 0.0965 0.1350 0.1057 0.3911 0.1865 0.1249 0.2875 0.1010
CustInvo 0.0241 0.0987 0.2122 0.0784 -0.0788 0.0479 0.0613 0.0100 0.0896 0.2159
Automati 0.1356 0.3559 0.1024 0.3508 0.4047 0.3943 0.2971 0.2913 0.4421 0.3842
ITTools 0.1756 0.3459 0.1770 0.3121 0.3953 0.4363 0.2175 0.2306 0.4915 0.2939
QuantCom 0.1685 0.2458 0.3085 0.3471 0.2580 0.3648 0.2618 0.2012 0.1939 0.1130
QualComm 0.0240 0.1152 0.3440 0.2157 0.2393 0.2783 0.2435 0.2467 0.2687 0.3188

(columns: SerialDe Requirem CustSupp Critical NoCusts CustDiv CustInvo Automati ITTools QuantCom)
SerialDe 1.0744
Requirem 0.1749 0.6736
CustSupp 0.0903 0.3108 1.1798
Critical 0.2842 0.3702 0.5444 1.2164
NoCusts 0.3247 0.1579 0.2268 0.5539 1.3308
CustDiv 0.2837 0.1291 0.0350 0.2256 0.6933 1.5293
CustInvo 0.1240 0.4352 0.2372 0.3778 0.2739 0.3805 1.1754
Automati 0.3270 -0.0854 0.2136 0.1919 0.4237 0.4782 0.0926 1.1898
ITTools 0.2359 -0.0916 0.1037 0.0947 0.2545 0.3981 -0.0634 0.8773 1.2172
QuantCom 0.3459 0.1464 0.1757 0.2240 0.1830 0.3297 0.2169 0.2557 0.2254 1.1687
QualComm 0.3092 0.2867 0.2548 0.2746 0.1593 0.2131 0.1882 0.2069 0.1543 0.5799

(column: QualComm)
QualComm 0.9615

Cronbach's alpha = 0.850

Reliability analysis. For every scale below, the Case Processing Summary was identical: 188 valid cases (100.0%), 0 excluded, 188 total, with listwise deletion based on all variables in the procedure.

Scale | Cronbach's Alpha | N of Items
All People Variables | .668 | 6
People Variables Minus Number of People | .674 | 5
People Variables Minus Multitasking | .659 | 5
People Variables Minus Teaming | .617 | 5
People Variables Minus Number of Roles | .596 | 5
People Variables Minus Training | .573 | 5
People Variables Minus Workforce Diversity | .624 | 5
All Process Variables | .728 | 5
Process Variables Minus Number of Activities | .639 | 4
Process Variables Minus Complexity | .640 | 4
Process Variables Minus Type of Work | .687 | 4
Process Variables Minus Documentation | .736 | 4
Process Variables Minus Degree Serial | .694 | 4
All Customer Variables | .699 | 6
Customer Variables Minus Requirements Understanding | .656 | 5
Customer Variables Minus Customer Support | .685 | 5
Customer Variables Minus Criticality | .618 | 5
Customer Variables Minus Number of Customers | .642 | 5
Customer Variables Minus Customer Diversity | .695 | 5
Customer Variables Minus Customer Involvement | .653 | 5
All Information Variables | .671 | 4
All Information Variables Plus Documentation | .684 | 5
All Information Variables Minus Automation | .547 | 3
All Information Variables Minus Tool Use | .579 | 3
All Information Variables Minus Quantity of Communication | .636 | 3
All Information Variables Minus Quality of Communication | .648 | 3
All Information Variables Minus Quality of Communication Plus Documentation | .638 | 4

Appendix D: EQS Output for Exploratory Factor Analysis

Factor Analysis (All People)

EQS 6.1 for Windows, Tue Nov 06 23:18:53 2007

FACTOR ANALYSIS
6 variables are selected from file c:\documents and settings\arthur dhallin\my documents\phd\data\eqs_scrubbed_20071106.ess
Number of cases in data file are ........... 188
Number of cases used in this analysis are ..
188

(variables: NoPeople Multitas Teaming NoRoles Training WFDivers)
NoPeople 1.0000
Multitas 0.0638 1.0000
Teaming 0.1576 0.2676 1.0000
NoRoles 0.1758 0.2641 0.3445 1.0000
Training 0.2387 0.1942 0.3618 0.4301 1.0000
WFDivers 0.1835 0.1856 0.2121 0.2795 0.4008 1.0000

Eigenvalues
1  2.306
2  0.969
3  0.810
4  0.754
5  0.644
6  0.517

Number of factors selected are ....... 1
Constant for non-selected eigenvalues = 0.739
Sorting is performed based on the information produced by factor rotations. Factor loading is sorted by the order of factors.

COMPONENT MATRIX (PRINCIPAL COMPONENTS)

FACTOR 1
Training 0.755
NoRoles 0.710
Teaming 0.650
WFDivers 0.619
Multitas 0.498
NoPeople 0.423

Communal. Prop. Cum.Prop.
Training 0.570 0.247 0.247
NoRoles 0.504 0.219 0.466
Teaming 0.422 0.183 0.649
WFDivers 0.383 0.166 0.815
Multitas 0.248 0.107 0.922
NoPeople 0.179 0.078 1.000

Variance Explained by Each Factor: FACTOR 1 = 2.306 (Total: 2.306)

COMPONENT MATRIX (ADJUSTED COMPONENTS) [Used in calculations below]

FACTOR 1
Training 0.622
NoRoles 0.585
Teaming 0.536
WFDivers 0.510
Multitas 0.410
NoPeople 0.349

Communal. Prop. Cum.Prop.
_________________________________________ Training 0.387 0.247 0.247 NoRoles 0.343 0.219 0.466 Teaming 0.287 0.183 0.649 WFDivers 0.261 0.166 0.815 Multitas 0.168 0.107 0.922 NoPeople 0.122 0.078 1.000 _________________________________________ Variance Explained by Each Factor: FACTOR 1 _____________________ 1.568 _____________________ Total: 1.568 FACTOR SCORE COEFFICIENTS 166 FACTOR 1 _____________________ NoPeople -0.151 Multitas -0.178 Teaming -0.232 NoRoles -0.254 Training -0.270 WFDivers -0.221 _____________________ Factor Analysis without Multitasking EQS 6.1 for Windows Tue Nov 06 23:16:37 2007 Page 9 FACTOR ANALYSIS 5 Variables are selected from file c:\documents and settings\arthur dhallin\my documents\phd\data\eqs_scrubbed_20071106.ess Number of cases in data file are ........... 188 Number of cases used in this analysis are .. 188 NoPeople Teaming NoRoles Training WFDivers NoPeople 1.0000 Teaming 0.1576 1.0000 NoRoles 0.1758 0.3445 1.0000 Training 0.2387 0.3618 0.4301 1.0000 WFDivers 0.1835 0.2121 0.2795 0.4008 1.0000 Eigenvalues 1 2.150 2 0.884 3 0.794 4 0.645 5 0.527 EQS 6.1 for Windows Tue Nov 06 23:16:45 2007 Page 10 Number of factors selected are ....... 1 Constant for non-selected eigenvalues= 0.712 167 Sorting is performed based on the information produced by factor rotations. Factor loading is sorted by the order of factors. EQS 6.1 for Windows Tue Nov 06 23:16:45 2007 Page 11 COMPONENT MATRIX (PRINCIPAL COMPONENTS) FACTOR 1 _____________________ Training 0.785 NoRoles 0.711 Teaming 0.640 WFDivers 0.638 NoPeople 0.461 _____________________ Communal. Prop. Cum.Prop. 
_________________________________________ Training 0.616 0.287 0.287 NoRoles 0.505 0.235 0.521 Teaming 0.410 0.191 0.712 WFDivers 0.407 0.189 0.901 NoPeople 0.212 0.099 1.000 _________________________________________ Variance Explained by Each Factor: FACTOR 1 _____________________ 2.150 _____________________ Total: 2.150 EQS 6.1 for Windows Tue Nov 06 23:16:45 2007 Page 12 COMPONENT MATRIX (ADJUSTED COMPONENTS) [Used in calculations below] 168 FACTOR 1 _____________________ Training 0.642 NoRoles 0.581 Teaming 0.524 WFDivers 0.521 NoPeople 0.377 _____________________ Communal. Prop. Cum.Prop. _________________________________________ Training 0.412 0.287 0.287 NoRoles 0.338 0.235 0.521 Teaming 0.274 0.191 0.712 WFDivers 0.272 0.189 0.901 NoPeople 0.142 0.099 1.000 _________________________________________ Variance Explained by Each Factor: FACTOR 1 _____________________ 1.438 _____________________ Total: 1.438 FACTOR SCORE COEFFICIENTS FACTOR 1 _____________________ NoPeople -0.175 Teaming -0.244 NoRoles -0.270 Training -0.298 WFDivers -0.243 _____________________ 169 Factor Analysis (All Activities) EQS 6.1 for Windows Tue Nov 06 23:28:32 2007 Page 17 FACTOR ANALYSIS 5 Variables are selected from file c:\documents and settings\arthur dhallin\my documents\phd\data\eqs_scrubbed_20071106.ess Number of cases in data file are ........... 188 Number of cases used in this analysis are .. 188 NoActivi Complexi WorkType Document SerialDe NoActivi 1.0000 Complexi 0.6252 1.0000 WorkType 0.3680 0.4059 1.0000 Document 0.2570 0.1971 0.3177 1.0000 SerialDe 0.4002 0.4107 0.2735 0.2333 1.0000 Eigenvalues 1 2.432 2 0.894 3 0.707 4 0.601 5 0.366 170 EQS 6.1 for Windows Tue Nov 06 23:28:43 2007 Page 18 Number of factors selected are ....... 1 Constant for non-selected eigenvalues= 0.642 Sorting is performed based on the information produced by factor rotations. Factor loading is sorted by the order of factors. 
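The principal-components extraction shown in these listings can be reproduced from the printed correlation matrices. A minimal sketch (assuming numpy is available; not part of the original analysis) using the five-variable activity correlation matrix above:

```python
import numpy as np

# Printed correlation matrix for NoActivi, Complexi, WorkType,
# Document, SerialDe (n = 188), symmetrized from the lower triangle.
R = np.array([
    [1.0000, 0.6252, 0.3680, 0.2570, 0.4002],
    [0.6252, 1.0000, 0.4059, 0.1971, 0.4107],
    [0.3680, 0.4059, 1.0000, 0.3177, 0.2735],
    [0.2570, 0.1971, 0.3177, 1.0000, 0.2333],
    [0.4002, 0.4107, 0.2735, 0.2333, 1.0000],
])

# Eigenvalues in descending order; only the first (approx. 2.432)
# exceeds the Kaiser criterion of 1.0, matching the single retained factor.
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]
print(np.round(eigvals, 3))

# Unrotated loadings on the first principal component: the leading
# eigenvector scaled by the square root of its eigenvalue.
w, v = np.linalg.eigh(R)
loading = v[:, -1] * np.sqrt(w[-1])
loading *= np.sign(loading.sum())  # fix the eigenvector's arbitrary sign
print(np.round(loading, 3))  # approx. the FACTOR 1 column of the listing
```

Because the listing rounds the correlations to four decimals, the recomputed eigenvalues and loadings agree with the printed ones only to about three decimal places.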
COMPONENT MATRIX (PRINCIPAL COMPONENTS)
          FACTOR 1
Complexi  0.800
NoActivi  0.799
WorkType  0.672
SerialDe  0.664
Document  0.511

          Communal.  Prop.  Cum.Prop.
Complexi  0.640      0.263  0.263
NoActivi  0.639      0.263  0.526
WorkType  0.452      0.186  0.711
SerialDe  0.441      0.181  0.893
Document  0.261      0.107  1.000

Variance Explained by Each Factor:
FACTOR 1
2.432
Total: 2.432

COMPONENT MATRIX (ADJUSTED COMPONENTS) [Used in calculations below]
          FACTOR 1
Complexi  0.686
NoActivi  0.686
WorkType  0.577
SerialDe  0.570
Document  0.438

          Communal.  Prop.  Cum.Prop.
Complexi  0.471      0.263  0.263
NoActivi  0.470      0.263  0.526
WorkType  0.333      0.186  0.711
SerialDe  0.325      0.181  0.893
Document  0.192      0.107  1.000

Variance Explained by Each Factor:
FACTOR 1
1.790
Total: 1.790

FACTOR SCORE COEFFICIENTS
          FACTOR 1
NoActivi  0.282
Complexi  0.282
WorkType  0.237
Document  0.180
SerialDe  0.234

Factor Analysis (All Customers)

FACTOR ANALYSIS
6 Variables are selected from file c:\documents and settings\arthur dhallin\my documents\phd\data\eqs_scrubbed_20071106.ess
Number of cases in data file are ........... 188
Number of cases used in this analysis are .. 188

          Requirem  CustSupp  Critical  NoCusts   CustDiv   CustInvo
Requirem  1.0000
CustSupp  0.3487    1.0000
Critical  0.4090    0.4544    1.0000
NoCusts   0.1668    0.1810    0.4353    1.0000
CustDiv   0.1272    0.0261    0.1654    0.4860    1.0000
CustInvo  0.4891    0.2014    0.3159    0.2190    0.2838    1.0000

Eigenvalues
 1  2.466
 2  1.216
 3  0.916
 4  0.555
 5  0.464
 6  0.383

Number of factors selected are ....... 2
Constant for non-selected eigenvalues= 0.579
Sorting is performed based on the information produced by factor rotations. Factor loading is sorted by the order of factors.

COMPONENT MATRIX (PRINCIPAL COMPONENTS)
          FACTOR 1  FACTOR 2
Requirem  0.685     0.378
Critical  0.758     0.169
CustSupp  0.576     0.480
CustInvo  0.662     0.055
CustDiv   0.495    -0.724
NoCusts   0.637    -0.537

          Communal.  Prop.  Cum.Prop.
Requirem  0.612      0.166  0.166
Critical  0.603      0.164  0.330
CustSupp  0.562      0.153  0.483
CustInvo  0.442      0.120  0.603
CustDiv   0.769      0.209  0.811
NoCusts   0.694      0.189  1.000

Variance Explained by Each Factor:
FACTOR 1  FACTOR 2
2.466     1.216
Total: 3.682

COMPONENT MATRIX (ADJUSTED COMPONENTS) [Used in calculations below]
          FACTOR 1  FACTOR 2
Requirem  0.599     0.273
Critical  0.663     0.122
CustSupp  0.504     0.347
CustInvo  0.579     0.040
CustDiv   0.433    -0.524
NoCusts   0.557    -0.388

          Communal.  Prop.  Cum.Prop.
Requirem  0.434      0.172  0.172
Critical  0.454      0.180  0.352
CustSupp  0.375      0.148  0.501
CustInvo  0.337      0.134  0.634
CustDiv   0.462      0.183  0.817
NoCusts   0.462      0.183  1.000

Variance Explained by Each Factor:
FACTOR 1  FACTOR 2
1.886     0.637
Total: 2.523

FACTOR LOADINGS (KAISER VARIMAX SOLUTION)
Converge after 2 iterations
          FACTOR 1  FACTOR 2
Requirem  0.648     0.120
Critical  0.613     0.280
CustSupp  0.612     0.005
CustInvo  0.497     0.300
CustDiv   0.054     0.677
NoCusts   0.233     0.638

          Communal.  Prop.  Cum.Prop.
Requirem  0.434      0.172  0.172
Critical  0.454      0.180  0.352
CustSupp  0.375      0.148  0.501
CustInvo  0.337      0.134  0.634
CustDiv   0.462      0.183  0.817
NoCusts   0.462      0.183  1.000

Variance Explained by Each Factor:
FACTOR 1  FACTOR 2
1.474     1.049
Total: 2.523

FACTOR TRANSFORMATION MATRIX
          FACTOR 1  FACTOR 2
FACTOR 1  -0.819
FACTOR 2  -0.574    0.819

FACTOR SCORE COEFFICIENTS
          FACTOR 1  FACTOR 2
Requirem   0.328   -0.045
CustSupp   0.331   -0.116
Critical   0.278    0.072
NoCusts    0.002    0.391
CustDiv   -0.104    0.453
CustInvo   0.211    0.108

Factor Analysis (w/o Customer Diversity)

FACTOR ANALYSIS
5 Variables are selected from file c:\documents and settings\arthur dhallin\my documents\phd\data\eqs_scrubbed_20071106.ess
Number of cases in data file are ........... 188
Number of cases used in this analysis are ..
188

          Requirem  CustSupp  Critical  NoCusts   CustInvo
Requirem  1.0000
CustSupp  0.3487    1.0000
Critical  0.4090    0.4544    1.0000
NoCusts   0.1668    0.1810    0.4353    1.0000
CustInvo  0.4891    0.2014    0.3159    0.2190    1.0000

Eigenvalues
 1  2.313
 2  0.943
 3  0.827
 4  0.484
 5  0.433

Number of factors selected are ....... 1
Constant for non-selected eigenvalues= 0.672
Sorting is performed based on the information produced by factor rotations. Factor loading is sorted by the order of factors.

COMPONENT MATRIX (PRINCIPAL COMPONENTS)
          FACTOR 1
Critical  0.792
Requirem  0.732
CustInvo  0.654
CustSupp  0.645
NoCusts   0.553

          Communal.  Prop.  Cum.Prop.
Critical  0.627      0.271  0.271
Requirem  0.536      0.232  0.503
CustInvo  0.428      0.185  0.688
CustSupp  0.416      0.180  0.868
NoCusts   0.306      0.132  1.000

Variance Explained by Each Factor:
FACTOR 1
2.313
Total: 2.313

COMPONENT MATRIX (ADJUSTED COMPONENTS) [Used in calculations below]
          FACTOR 1
Critical  0.667
Requirem  0.617
CustInvo  0.551
CustSupp  0.543
NoCusts   0.466

          Communal.  Prop.  Cum.Prop.
Critical  0.445      0.271  0.271
Requirem  0.380      0.232  0.503
CustInvo  0.304      0.185  0.688
CustSupp  0.295      0.180  0.868
NoCusts   0.217      0.132  1.000

Variance Explained by Each Factor:
FACTOR 1
1.642
Total: 1.642

FACTOR SCORE COEFFICIENTS
          FACTOR 1
Requirem  0.267
CustSupp  0.235
Critical  0.288
NoCusts   0.202
CustInvo  0.238

Factor Analysis (Information)

FACTOR ANALYSIS
4 Variables are selected from file c:\documents and settings\arthur dhallin\my documents\phd\data\eqs_scrubbed_20071106.ess
Number of cases in data file are ........... 188
Number of cases used in this analysis are .. 188

          Automati  ITTools   QuantCom  QualComm
Automati  1.0000
ITTools   0.7290    1.0000
QuantCom  0.2168    0.1890    1.0000
QualComm  0.1934    0.1426    0.5471    1.0000

Eigenvalues
 1  2.021
 2  1.258
 3  0.453
 4  0.269

Number of factors selected are ....... 2
Constant for non-selected eigenvalues= 0.361
Sorting is performed based on the information produced by factor rotations. Factor loading is sorted by the order of factors.

COMPONENT MATRIX (PRINCIPAL COMPONENTS)
          FACTOR 1  FACTOR 2
ITTools   0.777     0.514
Automati  0.805     0.464
QualComm  0.603    -0.645
QuantCom  0.638    -0.602

          Communal.  Prop.  Cum.Prop.
ITTools   0.868      0.265  0.265
Automati  0.862      0.263  0.528
QualComm  0.779      0.238  0.765
QuantCom  0.769      0.235  1.000

Variance Explained by Each Factor:
FACTOR 1  FACTOR 2
2.021     1.258
Total: 3.278

COMPONENT MATRIX (ADJUSTED COMPONENTS) [Used in calculations below]
          FACTOR 1  FACTOR 2
ITTools   0.704     0.434
Automati  0.729     0.392
QualComm  0.546    -0.545
QuantCom  0.578    -0.508

          Communal.  Prop.  Cum.Prop.
ITTools   0.684      0.268  0.268
Automati  0.685      0.268  0.536
QualComm  0.595      0.233  0.768
QuantCom  0.592      0.232  1.000

Variance Explained by Each Factor:
FACTOR 1  FACTOR 2
1.660     0.897
Total: 2.557

FACTOR LOADINGS (KAISER VARIMAX SOLUTION)
Converge after 2 iterations
          FACTOR 1  FACTOR 2
ITTools   0.821     0.097
Automati  0.815     0.146
QualComm  0.090     0.766
QuantCom  0.137     0.757

          Communal.  Prop.  Cum.Prop.
ITTools   0.684      0.268  0.268
Automati  0.685      0.268  0.536
QualComm  0.595      0.233  0.768
QuantCom  0.592      0.232  1.000

Variance Explained by Each Factor:
FACTOR 1  FACTOR 2
1.365     1.192
Total: 2.557

FACTOR TRANSFORMATION MATRIX
          FACTOR 1  FACTOR 2
FACTOR 1  -0.784
FACTOR 2  -0.621    -0.784

FACTOR SCORE COEFFICIENTS
          FACTOR 1  FACTOR 2
Automati   0.476   -0.020
ITTools    0.487   -0.054
QuantCom  -0.027    0.494
QualComm  -0.057    0.507

Factor Analysis (w/o Communication Quality)

FACTOR ANALYSIS
3 Variables are selected from file c:\documents and settings\arthur dhallin\my documents\phd\data\eqs_scrubbed_20071106.ess
Number of cases in data file are ........... 188
Number of cases used in this analysis are .. 188

          Automati  ITTools   QuantCom
Automati  1.0000
ITTools   0.7290    1.0000
QuantCom  0.2168    0.1890    1.0000

Eigenvalues
 1  1.828
 2  0.901
 3  0.270

Number of factors selected are ....... 1
Constant for non-selected eigenvalues= 0.586
Sorting is performed based on the information produced by factor rotations. Factor loading is sorted by the order of factors.

COMPONENT MATRIX (PRINCIPAL COMPONENTS)
          FACTOR 1
Automati  0.907
ITTools   0.900
QuantCom  0.443

          Communal.  Prop.  Cum.Prop.
Automati  0.823      0.450  0.450
ITTools   0.809      0.443  0.893
QuantCom  0.196      0.107  1.000

Variance Explained by Each Factor:
FACTOR 1
1.828
Total: 1.828

COMPONENT MATRIX (ADJUSTED COMPONENTS) [Used in calculations below]
          FACTOR 1
Automati  0.748
ITTools   0.742
QuantCom  0.365

          Communal.  Prop.  Cum.Prop.
Automati  0.560      0.450  0.450
ITTools   0.550      0.443  0.893
QuantCom  0.133      0.107  1.000

Variance Explained by Each Factor:
FACTOR 1
1.243
Total: 1.243

FACTOR SCORE COEFFICIENTS
          FACTOR 1
Automati  0.409
ITTools   0.406
QuantCom  0.200

Factor Analysis (All)

FACTOR ANALYSIS
21 Variables are selected from file c:\documents and settings\arthur dhallin\my documents\phd\data\eqs_scrubbed_20071106.ess
Number of cases in data file are ........... 188
Number of cases used in this analysis are ..
188

          NoPeople  Multitas  Teaming   NoRoles   Training  WFDivers  NoActivi  Complexi  WorkType  Document
NoPeople  1.0000
Multitas  0.0638    1.0000
Teaming   0.1576    0.2676    1.0000
NoRoles   0.1758    0.2641    0.3445    1.0000
Training  0.2387    0.1942    0.3618    0.4301    1.0000
WFDivers  0.1835    0.1856    0.2121    0.2795    0.4008    1.0000
NoActivi  0.1604    0.2462    0.1845    0.3467    0.2734    0.1676    1.0000
Complexi  0.0673    0.1948    0.1977    0.3510    0.3322    0.1776    0.6252    1.0000
WorkType  0.2043    0.2176    0.1402    0.2882    0.4328    0.3305    0.3680    0.4059    1.0000
Document  0.1172    0.0743    0.1852    0.2694    0.1557    0.1590    0.2570    0.1971    0.3177    1.0000
SerialDe  0.0608    0.2468    0.2640    0.3086    0.2409    0.1939    0.4002    0.4107    0.2735    0.2333
Requirem  0.0533   -0.0265    0.3389    0.1610    0.1124    0.1481    0.1345    0.1333    0.0710    0.1748
CustSupp  0.0543    0.0162    0.1625    0.2215    0.3420    0.2143    0.0293    0.0931    0.1829    0.3150
Critical  0.0238    0.0797    0.2726    0.1809    0.1816    0.1158    0.2039    0.3397    0.1630    0.0895
NoCusts   0.0826    0.2520    0.1670    0.2018    0.2017    0.2222    0.2310    0.2773    0.1320    0.1428
CustDiv   0.0236    0.3210    0.0833    0.1088    0.0791    0.2815    0.1422    0.0937    0.2005    0.0774
CustInvo  0.0220    0.0903    0.2091    0.0721   -0.0673    0.0393    0.0533    0.0085    0.0713    0.1887
Automati  0.1228    0.3235    0.1003    0.3206    0.3433    0.3217    0.2569    0.2478    0.3496    0.3337
ITTools   0.1572    0.3108    0.1714    0.2820    0.3315    0.3520    0.1860    0.1940    0.3842    0.2524
QuantCom  0.1539    0.2254    0.3048    0.3201    0.2209    0.3003    0.2284    0.1727    0.1547    0.0990
QualComm  0.0242    0.1165    0.3747    0.2193    0.2258    0.2526    0.2342    0.2334    0.2363    0.3080

          SerialDe  Requirem  CustSupp  Critical  NoCusts   CustDiv   CustInvo  Automati  ITTools   QuantCom
SerialDe  1.0000
Requirem  0.2056    1.0000
CustSupp  0.0802    0.3487    1.0000
Critical  0.2486    0.4090    0.4544    1.0000
NoCusts   0.2716    0.1668    0.1810    0.4353    1.0000
CustDiv   0.2213    0.1272    0.0261    0.1654    0.4860    1.0000
CustInvo  0.1103    0.4891    0.2014    0.3159    0.2190    0.2838    1.0000
Automati  0.2892   -0.0954    0.1803    0.1595    0.3367    0.3545    0.0783    1.0000
ITTools   0.2062   -0.1012    0.0865    0.0778    0.2000    0.2918   -0.0530    0.7290    1.0000
QuantCom  0.3087    0.1650    0.1496    0.1879    0.1467    0.2466    0.1850    0.2168    0.1890    1.0000
QualComm  0.3042    0.3563    0.2392    0.2539    0.1408    0.1757    0.1770    0.1934    0.1426    0.5471

          QualComm
QualComm  1.0000

Eigenvalues
  1  5.407
  2  2.154
  3  1.567
  4  1.396
  5  1.274
  6  1.082
  7  0.980
  8  0.899
  9  0.774
 10  0.681
 11  0.669
 12  0.628
 13  0.595
 14  0.537
 15  0.456
 16  0.439
 17  0.375
 18  0.322
 19  0.297
 20  0.267
 21  0.201

Number of factors selected are ....... 6
Constant for non-selected eigenvalues= 0.541
Sorting is performed based on the information produced by factor rotations. Factor loading is sorted by the order of factors.

COMPONENT MATRIX (PRINCIPAL COMPONENTS)
          FACTOR 1  FACTOR 2  FACTOR 3  FACTOR 4  FACTOR 5  FACTOR 6
Training  0.599     0.192     0.334     0.247     0.075     0.347
WFDivers  0.534     0.153    -0.037     0.342    -0.114     0.170
NoPeople  0.264     0.142     0.212     0.238    -0.076     0.310
NoRoles   0.605     0.095     0.252     0.022    -0.077     0.115
ITTools   0.540     0.534    -0.215     0.262     0.064    -0.125
Teaming   0.514    -0.257     0.181     0.092    -0.352     0.204
WorkType  0.583     0.263     0.185     0.028     0.203    -0.130
CustSupp  0.410    -0.374     0.124     0.438     0.419     0.038
Automati  0.612     0.446    -0.301     0.176     0.171    -0.194
QuantCom  0.525    -0.130    -0.008     0.071    -0.588    -0.072
CustInvo  0.289    -0.583    -0.355     0.030     0.021    -0.159
Multitas  0.446     0.249    -0.291    -0.158    -0.262     0.151
Complexi  0.578     0.091     0.295    -0.544     0.203     0.032
NoActivi  0.567     0.128     0.260    -0.537     0.057    -0.080
Critical  0.489    -0.483    -0.092    -0.115     0.351     0.248
QualComm  0.547    -0.303     0.123     0.085    -0.380    -0.340
NoCusts   0.514    -0.094    -0.478    -0.196     0.243     0.284
Document  0.463    -0.028     0.138     0.186     0.248    -0.610
SerialDe  0.577    -0.012     0.046    -0.380    -0.116    -0.135
CustDiv   0.440     0.035    -0.697    -0.069    -0.091     0.027
Requirem  0.365    -0.725     0.053     0.048     0.020     0.003

          Communal.  Prop.  Cum.Prop.
Training  0.694      0.054  0.054
WFDivers  0.470      0.036  0.090
NoPeople  0.293      0.023  0.113
NoRoles   0.458      0.036  0.149
ITTools   0.711      0.055  0.204
Teaming   0.536      0.042  0.245
WorkType  0.502      0.039  0.284
CustSupp  0.692      0.054  0.338
Automati  0.761      0.059  0.397
QuantCom  0.649      0.050  0.448
CustInvo  0.576      0.045  0.492
Multitas  0.462      0.036  0.528
Complexi  0.767      0.060  0.588
NoActivi  0.703      0.055  0.642
Critical  0.679      0.053  0.695
QualComm  0.673      0.052  0.747
NoCusts   0.680      0.053  0.800
Document  0.702      0.055  0.855
SerialDe  0.511      0.040  0.894
CustDiv   0.695      0.054  0.948
Requirem  0.665      0.052  1.000

Variance Explained by Each Factor:
FACTOR 1  FACTOR 2  FACTOR 3  FACTOR 4  FACTOR 5  FACTOR 6
5.407     2.154     1.567     1.396     1.274     1.082
Total: 12.880

COMPONENT MATRIX (ADJUSTED COMPONENTS) [Used in calculations below]
          FACTOR 1  FACTOR 2  FACTOR 3  FACTOR 4  FACTOR 5  FACTOR 6
Training  0.568     0.166     0.270     0.193     0.057     0.245
WFDivers  0.507     0.133    -0.030     0.268    -0.087     0.120
NoPeople  0.250     0.123     0.171     0.186    -0.057     0.219
NoRoles   0.574     0.082     0.204     0.017    -0.059     0.081
ITTools   0.512     0.462    -0.174     0.205     0.049    -0.088
Teaming   0.487    -0.222     0.146     0.072    -0.267     0.144
WorkType  0.553     0.228     0.150     0.022     0.154    -0.092
CustSupp  0.389    -0.324     0.100     0.343     0.318     0.027
Automati  0.580     0.386    -0.244     0.138     0.130    -0.137
QuantCom  0.498    -0.113    -0.006     0.056    -0.446    -0.051
CustInvo  0.274    -0.505    -0.287     0.023     0.016    -0.113
Multitas  0.423     0.215    -0.235    -0.123    -0.198     0.107
Complexi  0.548     0.079     0.239    -0.426     0.154     0.023
NoActivi  0.538     0.111     0.210    -0.420     0.043    -0.057
Critical  0.464    -0.418    -0.074    -0.090     0.267     0.175
QualComm  0.519    -0.262     0.099     0.066    -0.288    -0.240
NoCusts   0.488    -0.082    -0.386    -0.153     0.184     0.201
Document  0.439    -0.024     0.112     0.146     0.188    -0.431
SerialDe  0.547    -0.010     0.037    -0.297    -0.088    -0.095
CustDiv   0.418     0.030    -0.564    -0.054    -0.069     0.019
Requirem  0.346    -0.627     0.043     0.038     0.015     0.002

          Communal.  Prop.  Cum.Prop.
Training  0.524      0.054  0.054
WFDivers  0.369      0.038  0.093
NoPeople  0.193      0.020  0.113
NoRoles   0.388      0.040  0.153
ITTools   0.558      0.058  0.211
Teaming   0.405      0.042  0.253
WorkType  0.413      0.043  0.296
CustSupp  0.485      0.050  0.346
Automati  0.599      0.062  0.408
QuantCom  0.465      0.048  0.457
CustInvo  0.426      0.044  0.501
Multitas  0.347      0.036  0.537
Complexi  0.569      0.059  0.596
NoActivi  0.527      0.055  0.651
Critical  0.505      0.052  0.703
QualComm  0.493      0.051  0.754
NoCusts   0.492      0.051  0.806
Document  0.448      0.047  0.852
SerialDe  0.406      0.042  0.894
CustDiv   0.502      0.052  0.946
Requirem  0.517      0.054  1.000

Variance Explained by Each Factor:
FACTOR 1  FACTOR 2  FACTOR 3  FACTOR 4  FACTOR 5  FACTOR 6
4.865     1.612     1.026     0.855     0.733     0.540
Total: 9.632

FACTOR LOADINGS (KAISER VARIMAX SOLUTION)
Converge after 6 iterations
          FACTOR 1  FACTOR 2  FACTOR 3  FACTOR 4  FACTOR 5  FACTOR 6
Training  0.643     0.100     0.082     0.252     0.083     0.154
WFDivers  0.458     0.075     0.283     0.034     0.201     0.179
NoPeople  0.431    -0.010     0.019     0.043     0.071     0.007
NoRoles   0.417     0.092     0.115     0.341     0.233     0.148
ITTools   0.351    -0.158     0.469     0.102     0.029     0.423
Teaming   0.332     0.250     0.054     0.162     0.448    -0.051
WorkType  0.332     0.032     0.156     0.365     0.048     0.377
CustSupp  0.305     0.551    -0.054    -0.029     0.007     0.292
Automati  0.272    -0.045     0.528     0.167     0.013     0.466
QuantCom  0.193     0.080     0.203     0.117     0.604     0.043
CustInvo -0.182     0.535     0.193    -0.046     0.249     0.066
Multitas  0.165    -0.080     0.479     0.213     0.194    -0.023
Complexi  0.163     0.116     0.088     0.715     0.037     0.093
NoActivi  0.115     0.034     0.112     0.685     0.135     0.116
Critical  0.115     0.635     0.156     0.253     0.019    -0.008
QualComm  0.097     0.233     0.044     0.168     0.590     0.227
NoCusts   0.087     0.370     0.539     0.232    -0.048    -0.019
Document  0.062     0.180     0.006     0.182     0.153     0.596
SerialDe  0.059     0.111     0.219     0.498     0.286     0.112
CustDiv  -0.032     0.157     0.672     0.038     0.141     0.054
Requirem  0.013     0.638    -0.072     0.077     0.315     0.003

          Communal.  Prop.  Cum.Prop.
Training  0.524      0.054  0.054
WFDivers  0.369      0.038  0.093
NoPeople  0.193      0.020  0.113
NoRoles   0.388      0.040  0.153
ITTools   0.558      0.058  0.211
Teaming   0.405      0.042  0.253
WorkType  0.413      0.043  0.296
CustSupp  0.485      0.050  0.346
Automati  0.599      0.062  0.408
QuantCom  0.465      0.048  0.457
CustInvo  0.426      0.044  0.501
Multitas  0.347      0.036  0.537
Complexi  0.569      0.059  0.596
NoActivi  0.527      0.055  0.651
Critical  0.505      0.052  0.703
QualComm  0.493      0.051  0.754
NoCusts   0.492      0.051  0.806
Document  0.448      0.047  0.852
SerialDe  0.406      0.042  0.894
CustDiv   0.502      0.052  0.946
Requirem  0.517      0.054  1.000

Variance Explained by Each Factor:
FACTOR 1  FACTOR 2  FACTOR 3  FACTOR 4  FACTOR 5  FACTOR 6
1.668     1.803     1.779     1.857     1.369     1.155
Total: 9.632

FACTOR TRANSFORMATION MATRIX
          FACTOR 1  FACTOR 2  FACTOR 3  FACTOR 4  FACTOR 5  FACTOR 6
FACTOR 1  0.454
FACTOR 2  0.339    -0.822
FACTOR 3  0.432     0.290     0.836
FACTOR 4  0.492     0.130    -0.358     0.767
FACTOR 5  0.369    -0.308    -0.092    -0.070    -0.825
FACTOR 6  0.338     0.214    -0.060    -0.381     0.359    -0.750

FACTOR SCORE COEFFICIENTS
          FACTOR 1  FACTOR 2  FACTOR 3  FACTOR 4  FACTOR 5  FACTOR 6
NoPeople   0.260   -0.020    -0.035    -0.045     0.001    -0.083
Multitas   0.020   -0.094     0.219     0.039     0.079    -0.125
Teaming    0.149    0.044    -0.042    -0.011     0.214    -0.142
NoRoles    0.158   -0.018    -0.040     0.085     0.058    -0.016
Training   0.332    0.016    -0.059     0.018    -0.058    -0.039
WFDivers   0.212   -0.009     0.079    -0.113     0.053     0.009
NoActivi  -0.068   -0.048    -0.039     0.341    -0.001    -0.011
Complexi  -0.026    0.008    -0.053     0.361    -0.086    -0.037
WorkType   0.072   -0.026    -0.026     0.107    -0.061     0.176
Document  -0.114    0.038    -0.101     0.018     0.035     0.421
SerialDe  -0.101   -0.026     0.032     0.213     0.107    -0.005
Requirem  -0.030    0.265    -0.083    -0.016     0.108    -0.026
CustSupp   0.148    0.268    -0.098    -0.111    -0.117     0.160
Critical   0.023    0.297     0.035     0.078    -0.134    -0.087
NoCusts   -0.014    0.168     0.260     0.053    -0.156    -0.122
CustDiv   -0.107    0.037     0.347    -0.068     0.028    -0.040
CustInvo  -0.165    0.224     0.090    -0.079     0.093     0.045
Automati   0.018   -0.060     0.196    -0.036    -0.072     0.234
ITTools    0.089   -0.113     0.170    -0.070    -0.040     0.202
QuantCom   0.021   -0.070     0.040    -0.047     0.354    -0.055
QualComm  -0.068    0.005    -0.068    -0.012     0.329     0.114

Factor Analysis (All, w/Oblique)

FACTOR ANALYSIS
21 Variables are selected from file c:\documents and settings\arthur dhallin\my documents\phd\data\eqs_scrubbed_20071106.ess
Number of cases in data file are ........... 188
Number of cases used in this analysis are ..
188

          NoPeople  Multitas  Teaming   NoRoles   Training  WFDivers  NoActivi  Complexi  WorkType  Document
NoPeople  1.0000
Multitas  0.0638    1.0000
Teaming   0.1576    0.2676    1.0000
NoRoles   0.1758    0.2641    0.3445    1.0000
Training  0.2387    0.1942    0.3618    0.4301    1.0000
WFDivers  0.1835    0.1856    0.2121    0.2795    0.4008    1.0000
NoActivi  0.1604    0.2462    0.1845    0.3467    0.2734    0.1676    1.0000
Complexi  0.0673    0.1948    0.1977    0.3510    0.3322    0.1776    0.6252    1.0000
WorkType  0.2043    0.2176    0.1402    0.2882    0.4328    0.3305    0.3680    0.4059    1.0000
Document  0.1172    0.0743    0.1852    0.2694    0.1557    0.1590    0.2570    0.1971    0.3177    1.0000
SerialDe  0.0608    0.2468    0.2640    0.3086    0.2409    0.1939    0.4002    0.4107    0.2735    0.2333
Requirem  0.0533   -0.0265    0.3389    0.1610    0.1124    0.1481    0.1345    0.1333    0.0710    0.1748
CustSupp  0.0543    0.0162    0.1625    0.2215    0.3420    0.2143    0.0293    0.0931    0.1829    0.3150
Critical  0.0238    0.0797    0.2726    0.1809    0.1816    0.1158    0.2039    0.3397    0.1630    0.0895
NoCusts   0.0826    0.2520    0.1670    0.2018    0.2017    0.2222    0.2310    0.2773    0.1320    0.1428
CustDiv   0.0236    0.3210    0.0833    0.1088    0.0791    0.2815    0.1422    0.0937    0.2005    0.0774
CustInvo  0.0220    0.0903    0.2091    0.0721   -0.0673    0.0393    0.0533    0.0085    0.0713    0.1887
Automati  0.1228    0.3235    0.1003    0.3206    0.3433    0.3217    0.2569    0.2478    0.3496    0.3337
ITTools   0.1572    0.3108    0.1714    0.2820    0.3315    0.3520    0.1860    0.1940    0.3842    0.2524
QuantCom  0.1539    0.2254    0.3048    0.3201    0.2209    0.3003    0.2284    0.1727    0.1547    0.0990
QualComm  0.0242    0.1165    0.3747    0.2193    0.2258    0.2526    0.2342    0.2334    0.2363    0.3080

          SerialDe  Requirem  CustSupp  Critical  NoCusts   CustDiv   CustInvo  Automati  ITTools   QuantCom
SerialDe  1.0000
Requirem  0.2056    1.0000
CustSupp  0.0802    0.3487    1.0000
Critical  0.2486    0.4090    0.4544    1.0000
NoCusts   0.2716    0.1668    0.1810    0.4353    1.0000
CustDiv   0.2213    0.1272    0.0261    0.1654    0.4860    1.0000
CustInvo  0.1103    0.4891    0.2014    0.3159    0.2190    0.2838    1.0000
Automati  0.2892   -0.0954    0.1803    0.1595    0.3367    0.3545    0.0783    1.0000
ITTools   0.2062   -0.1012    0.0865    0.0778    0.2000    0.2918   -0.0530    0.7290    1.0000
QuantCom  0.3087    0.1650    0.1496    0.1879    0.1467    0.2466    0.1850    0.2168    0.1890    1.0000
QualComm  0.3042    0.3563    0.2392    0.2539    0.1408    0.1757    0.1770    0.1934    0.1426    0.5471

          QualComm
QualComm  1.0000

Eigenvalues
  1  5.407
  2  2.154
  3  1.567
  4  1.396
  5  1.274
  6  1.082
  7  0.980
  8  0.899
  9  0.774
 10  0.681
 11  0.669
 12  0.628
 13  0.595
 14  0.537
 15  0.456
 16  0.439
 17  0.375
 18  0.322
 19  0.297
 20  0.267
 21  0.201

Number of factors selected are ....... 6
Constant for non-selected eigenvalues= 0.541
Sorting is performed based on the information produced by factor rotations. Factor loading is sorted by the order of factors.

COMPONENT MATRIX (PRINCIPAL COMPONENTS)
          FACTOR 1  FACTOR 2  FACTOR 3  FACTOR 4  FACTOR 5  FACTOR 6
Complexi  0.578     0.091     0.295    -0.544     0.203     0.032
NoActivi  0.567     0.128     0.260    -0.537     0.057    -0.080
SerialDe  0.577    -0.012     0.046    -0.380    -0.116    -0.135
WorkType  0.583     0.263     0.185     0.028     0.203    -0.130
NoRoles   0.605     0.095     0.252     0.022    -0.077     0.115
Critical  0.489    -0.483    -0.092    -0.115     0.351     0.248
NoCusts   0.514    -0.094    -0.478    -0.196     0.243     0.284
Training  0.599     0.192     0.334     0.247     0.075     0.347
Multitas  0.446     0.249    -0.291    -0.158    -0.262     0.151
Document  0.463    -0.028     0.138     0.186     0.248    -0.610
CustSupp  0.410    -0.374     0.124     0.438     0.419     0.038
QualComm  0.547    -0.303     0.123     0.085    -0.380    -0.340
CustInvo  0.289    -0.583    -0.355     0.030     0.021    -0.159
Teaming   0.514    -0.257     0.181     0.092    -0.352     0.204
WFDivers  0.534     0.153    -0.037     0.342    -0.114     0.170
Automati  0.612     0.446    -0.301     0.176     0.171    -0.194
CustDiv   0.440     0.035    -0.697    -0.069    -0.091     0.027
NoPeople  0.264     0.142     0.212     0.238    -0.076     0.310
ITTools   0.540     0.534    -0.215     0.262     0.064    -0.125
Requirem  0.365    -0.725     0.053     0.048     0.020     0.003
QuantCom  0.525    -0.130    -0.008     0.071    -0.588    -0.072

          Communal.  Prop.  Cum.Prop.
Complexi  0.767      0.060  0.060
NoActivi  0.703      0.055  0.114
SerialDe  0.511      0.040  0.154
WorkType  0.502      0.039  0.193
NoRoles   0.458      0.036  0.228
Critical  0.679      0.053  0.281
NoCusts   0.680      0.053  0.334
Training  0.694      0.054  0.388
Multitas  0.462      0.036  0.424
Document  0.702      0.055  0.478
CustSupp  0.692      0.054  0.532
QualComm  0.673      0.052  0.584
CustInvo  0.576      0.045  0.629
Teaming   0.536      0.042  0.671
WFDivers  0.470      0.036  0.707
Automati  0.761      0.059  0.766
CustDiv   0.695      0.054  0.820
NoPeople  0.293      0.023  0.843
ITTools   0.711      0.055  0.898
Requirem  0.665      0.052  0.950
QuantCom  0.649      0.050  1.000

Variance Explained by Each Factor:
FACTOR 1  FACTOR 2  FACTOR 3  FACTOR 4  FACTOR 5  FACTOR 6
5.407     2.154     1.567     1.396     1.274     1.082
Total: 12.880

COMPONENT MATRIX (ADJUSTED COMPONENTS) [Used in calculations below]
          FACTOR 1  FACTOR 2  FACTOR 3  FACTOR 4  FACTOR 5  FACTOR 6
Complexi  0.578     0.091     0.295    -0.544     0.203     0.032
NoActivi  0.567     0.128     0.260    -0.537     0.057    -0.080
SerialDe  0.577    -0.012     0.046    -0.380    -0.116    -0.135
WorkType  0.583     0.263     0.185     0.028     0.203    -0.130
NoRoles   0.605     0.095     0.252     0.022    -0.077     0.115
Critical  0.489    -0.483    -0.092    -0.115     0.351     0.248
NoCusts   0.514    -0.094    -0.478    -0.196     0.243     0.284
Training  0.599     0.192     0.334     0.247     0.075     0.347
Multitas  0.446     0.249    -0.291    -0.158    -0.262     0.151
Document  0.463    -0.028     0.138     0.186     0.248    -0.610
CustSupp  0.410    -0.374     0.124     0.438     0.419     0.038
QualComm  0.547    -0.303     0.123     0.085    -0.380    -0.340
CustInvo  0.289    -0.583    -0.355     0.030     0.021    -0.159
Teaming   0.514    -0.257     0.181     0.092    -0.352     0.204
WFDivers  0.534     0.153    -0.037     0.342    -0.114     0.170
Automati  0.612     0.446    -0.301     0.176     0.171    -0.194
CustDiv   0.440     0.035    -0.697    -0.069    -0.091     0.027
NoPeople  0.264     0.142     0.212     0.238    -0.076     0.310
ITTools   0.540     0.534    -0.215     0.262     0.064    -0.125
Requirem  0.365    -0.725     0.053     0.048     0.020     0.003
QuantCom  0.525    -0.130    -0.008     0.071    -0.588    -0.072

          Communal.  Prop.  Cum.Prop.
Complexi  0.767      0.060  0.060
NoActivi  0.703      0.055  0.114
SerialDe  0.511      0.040  0.154
WorkType  0.502      0.039  0.193
NoRoles   0.458      0.036  0.228
Critical  0.679      0.053  0.281
NoCusts   0.680      0.053  0.334
Training  0.694      0.054  0.388
Multitas  0.462      0.036  0.424
Document  0.702      0.055  0.478
CustSupp  0.692      0.054  0.532
QualComm  0.673      0.052  0.584
CustInvo  0.576      0.045  0.629
Teaming   0.536      0.042  0.671
WFDivers  0.470      0.036  0.707
Automati  0.761      0.059  0.766
CustDiv   0.695      0.054  0.820
NoPeople  0.293      0.023  0.843
ITTools   0.711      0.055  0.898
Requirem  0.665      0.052  0.950
QuantCom  0.649      0.050  1.000

Variance Explained by Each Factor:
FACTOR 1  FACTOR 2  FACTOR 3  FACTOR 4  FACTOR 5  FACTOR 6
5.407     2.154     1.567     1.396     1.274     1.082
Total: 12.880

FACTOR LOADINGS (DIRECT OBLIMIN SOLUTION)
Converge after 17 iterations
          FACTOR 1  FACTOR 2  FACTOR 3  FACTOR 4  FACTOR 5  FACTOR 6
Complexi  0.749     0.065    -0.027     0.039    -0.070     0.017
NoActivi  0.714    -0.043    -0.007    -0.018     0.044     0.048
SerialDe  0.488     0.006     0.124    -0.067     0.223     0.054
WorkType  0.318    -0.015     0.056     0.218    -0.036     0.332
NoRoles   0.283     0.029     0.014     0.344     0.164     0.078
Critical  0.218     0.626     0.135     0.064    -0.063    -0.064
NoCusts   0.177     0.347     0.542     0.027    -0.139    -0.081
Training  0.169     0.087    -0.010     0.611    -0.001     0.075
Multitas  0.156    -0.155     0.446     0.110     0.148    -0.081
Document  0.136     0.108    -0.080    -0.086     0.114     0.607
CustSupp -0.110     0.563    -0.084     0.270    -0.047     0.273
QualComm  0.099     0.093    -0.047    -0.010
0.587 0.198 CustInvo -0.097 0.472 0.208 -0.239 0.243 0.063 Teaming 0.090 0.168 -0.017 0.303 0.422 -0.118 WFDivers -0.079 0.027 0.232 0.423 0.153 0.126 Automati 0.069 -0.105 0.474 0.150 -0.066 0.435 CustDiv -0.045 0.083 0.691 -0.101 0.099 0.020 NoPeople -0.014 -0.008 -0.022 0.449 0.039 -0.038 ITTools 0.000 -0.210 0.415 0.256 -0.036 0.393 Requirem 0.037 0.585 -0.100 -0.027 0.296 -0.025 QuantCom 0.032 -0.058 0.130 0.127 0.605 -0.006 _______________________________________________________________________ 206 Communal. Prop. Cum.Prop. _________________________________________ Complexi 0.572 0.069 0.069 NoActivi 0.516 0.062 0.131 SerialDe 0.311 0.037 0.169 WorkType 0.264 0.032 0.200 NoRoles 0.233 0.028 0.228 Critical 0.470 0.057 0.285 NoCusts 0.473 0.057 0.342 Training 0.416 0.050 0.392 Multitas 0.288 0.035 0.427 Document 0.425 0.051 0.478 CustSupp 0.486 0.059 0.536 QualComm 0.404 0.049 0.585 CustInvo 0.396 0.048 0.633 Teaming 0.321 0.039 0.671 WFDivers 0.279 0.034 0.705 Automati 0.456 0.055 0.760 CustDiv 0.506 0.061 0.821 NoPeople 0.205 0.025 0.845 ITTools 0.437 0.053 0.898 Requirem 0.443 0.053 0.951 QuantCom 0.403 0.049 1.000 _________________________________________ FACTOR CORRELATION MATRIX FACTOR 1 FACTOR 2 FACTOR 3 FACTOR 4 FACTOR 5 FACTOR 6 FACTOR 1 1.000 FACTOR 2 0.156 1.000 FACTOR 3 0.300 0.091 1.000 FACTOR 4 0.327 0.039 0.205 1.000 FACTOR 5 0.273 0.276 0.209 0.198 1.000 FACTOR 6 0.246 0.116 0.194 0.309 0.145 1.000 207 FACTOR SCORE COEFFICIENTS FACTOR 1 FACTOR 2 FACTOR 3 FACTOR 4 FACTOR 5 FACTOR 6 _______________________________________________________________________ NoPeople -0.015 -0.016 -0.024 0.237 0.009 -0.040 Multitas 0.064 -0.091 0.212 0.041 0.078 -0.094 Teaming 0.026 0.057 -0.018 0.144 0.217 -0.100 NoRoles 0.106 -0.004 -0.014 0.165 0.070 0.020 Training 0.055 0.024 -0.033 0.312 -0.028 0.019 WFDivers -0.053 -0.004 0.086 0.205 0.065 0.049 NoActivi 0.302 -0.033 -0.010 -0.026 0.011 0.001 Complexi 0.318 0.020 -0.021 0.006 -0.056 -0.017 WorkType 0.118 -0.016 
-0.003 0.095 -0.042 0.186 Document 0.032 0.051 -0.075 -0.074 0.042 0.382 SerialDe 0.202 -0.012 0.053 -0.059 0.112 0.002 Requirem 0.003 0.269 -0.057 -0.035 0.145 -0.016 CustSupp -0.072 0.265 -0.074 0.124 -0.056 0.176 Critical 0.084 0.289 0.053 0.013 -0.058 -0.055 NoCusts 0.071 0.155 0.254 -0.009 -0.091 -0.086 CustDiv -0.026 0.028 0.327 -0.082 0.050 -0.026 CustInvo -0.053 0.220 0.094 -0.152 0.124 0.035 Automati 0.008 -0.058 0.194 0.049 -0.054 0.239 ITTools -0.020 -0.110 0.167 0.112 -0.035 0.213 QuantCom -0.000 -0.051 0.054 0.044 0.325 -0.036 QualComm 0.022 0.027 -0.040 -0.034 0.309 0.107 _______________________________________________________________________ 208 Factor Analysis (w/o Customer Diversity) EQS 6.1 for Windows Tue Nov 06 23:47:57 2007 Page 50 FACTOR ANALYSIS 20 Variables are selected from file c:\documents and settings\arthur dhallin\my documents\phd\data\eqs_scrubbed_20071106.ess Number of cases in data file are ........... 188 Number of cases used in this analysis are .. 188 209 NoPeople Multitas Teaming NoRoles Training WFDivers NoActivi Complexi WorkType Document NoPeople 1.0000 Multitas 0.0638 1.0000 Teaming 0.1576 0.2676 1.0000 NoRoles 0.1758 0.2641 0.3445 1.0000 Training 0.2387 0.1942 0.3618 0.4301 1.0000 WFDivers 0.1835 0.1856 0.2121 0.2795 0.4008 1.0000 NoActivi 0.1604 0.2462 0.1845 0.3467 0.2734 0.1676 1.0000 Complexi 0.0673 0.1948 0.1977 0.3510 0.3322 0.1776 0.6252 1.0000 WorkType 0.2043 0.2176 0.1402 0.2882 0.4328 0.3305 0.3680 0.4059 1.0000 Document 0.1172 0.0743 0.1852 0.2694 0.1557 0.1590 0.2570 0.1971 0.3177 1.0000 SerialDe 0.0608 0.2468 0.2640 0.3086 0.2409 0.1939 0.4002 0.4107 0.2735 0.2333 Requirem 0.0533 -0.0265 0.3389 0.1610 0.1124 0.1481 0.1345 0.1333 0.0710 0.1748 CustSupp 0.0543 0.0162 0.1625 0.2215 0.3420 0.2143 0.0293 0.0931 0.1829 0.3150 Critical 0.0238 0.0797 0.2726 0.1809 0.1816 0.1158 0.2039 0.3397 0.1630 0.0895 NoCusts 0.0826 0.2520 0.1670 0.2018 0.2017 0.2222 0.2310 0.2773 0.1320 0.1428 CustInvo 0.0220 0.0903 0.2091 0.0721 
-0.0673 0.0393 0.0533 0.0085 0.0713 0.1887 Automati 0.1228 0.3235 0.1003 0.3206 0.3433 0.3217 0.2569 0.2478 0.3496 0.3337 ITTools 0.1572 0.3108 0.1714 0.2820 0.3315 0.3520 0.1860 0.1940 0.3842 0.2524 QuantCom 0.1539 0.2254 0.3048 0.3201 0.2209 0.3003 0.2284 0.1727 0.1547 0.0990 QualComm 0.0242 0.1165 0.3747 0.2193 0.2258 0.2526 0.2342 0.2334 0.2363 0.3080 210 SerialDe Requirem CustSupp Critical NoCusts CustInvo Automati ITTools QuantCom QualComm SerialDe 1.0000 Requirem 0.2056 1.0000 CustSupp 0.0802 0.3487 1.0000 Critical 0.2486 0.4090 0.4544 1.0000 NoCusts 0.2716 0.1668 0.1810 0.4353 1.0000 CustInvo 0.1103 0.4891 0.2014 0.3159 0.2190 1.0000 Automati 0.2892 -0.0954 0.1803 0.1595 0.3367 0.0783 1.0000 ITTools 0.2062 -0.1012 0.0865 0.0778 0.2000 -0.0530 0.7290 1.0000 QuantCom 0.3087 0.1650 0.1496 0.1879 0.1467 0.1850 0.2168 0.1890 1.0000 QualComm 0.3042 0.3563 0.2392 0.2539 0.1408 0.1770 0.1934 0.1426 0.5471 1.0000 211 Eigenvalues 1 5.245 2 2.153 3 1.399 4 1.294 5 1.206 6 1.081 7 0.973 8 0.856 9 0.754 10 0.672 11 0.665 12 0.622 13 0.594 14 0.510 15 0.451 16 0.423 17 0.331 18 0.300 19 0.270 20 0.201 EQS 6.1 for Windows Tue Nov 06 23:48:25 2007 Page 51 Number of factors selected are ....... 6 Constant for non-selected eigenvalues= 0.544 Sorting is performed based on the information produced by factor rotations. Factor loading is sorted by the order of factors. 
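The six-factor solution reported here is consistent with the Kaiser criterion (retain components whose eigenvalues exceed 1.0). A short check against the 20-variable eigenvalues listed above (an illustrative sketch, not the EQS source code):

```python
# Eigenvalues reported by EQS for the 20-variable analysis (w/o Customer
# Diversity). The Kaiser criterion keeps components with eigenvalue > 1.
eigenvalues = [5.245, 2.153, 1.399, 1.294, 1.206, 1.081,
               0.973, 0.856, 0.754, 0.672, 0.665, 0.622, 0.594,
               0.510, 0.451, 0.423, 0.331, 0.300, 0.270, 0.201]

n_factors = sum(ev > 1.0 for ev in eigenvalues)
print(n_factors)  # 6, matching "Number of factors selected are ....... 6"
```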
EQS 6.1 for Windows Tue Nov 06 23:48:25 2007 Page 52 212 COMPONENT MATRIX (PRINCIPAL COMPONENTS) FACTOR 1 FACTOR 2 FACTOR 3 FACTOR 4 FACTOR 5 FACTOR 6 _______________________________________________________________________ Training 0.616 -0.200 0.181 0.132 0.346 -0.324 WFDivers 0.528 -0.153 0.336 0.144 0.032 -0.158 NoPeople 0.273 -0.146 0.185 0.231 0.267 -0.271 NoRoles 0.619 -0.101 -0.024 0.167 0.098 -0.112 WorkType 0.588 -0.267 -0.005 -0.056 0.319 0.162 CustSupp 0.424 0.369 0.423 -0.259 0.327 -0.029 Teaming 0.526 0.252 0.051 0.365 -0.115 -0.216 ITTools 0.530 -0.532 0.311 -0.174 -0.210 0.099 Automati 0.599 -0.443 0.247 -0.324 -0.250 0.162 CustInvo 0.271 0.589 0.099 -0.193 -0.266 0.143 Complexi 0.592 -0.098 -0.581 -0.107 0.240 -0.016 QuantCom 0.521 0.130 0.056 0.492 -0.334 0.057 NoActivi 0.576 -0.133 -0.574 0.012 0.167 0.099 QualComm 0.553 0.300 0.050 0.381 -0.129 0.336 Critical 0.491 0.482 -0.077 -0.391 0.020 -0.263 Document 0.474 0.023 0.169 -0.143 0.196 0.623 Multitas 0.430 -0.245 -0.098 0.012 -0.498 -0.196 NoCusts 0.485 0.103 -0.091 -0.472 -0.274 -0.312 Requirem 0.368 0.723 0.035 0.030 0.100 0.009 SerialDe 0.578 0.010 -0.380 0.047 -0.141 0.124 _______________________________________________________________________ 213 Communal. Prop. Cum.Prop. 
_________________________________________ Training 0.695 0.056 0.056 WFDivers 0.462 0.037 0.093 NoPeople 0.328 0.027 0.120 NoRoles 0.444 0.036 0.156 WorkType 0.548 0.044 0.200 CustSupp 0.670 0.054 0.254 Teaming 0.536 0.043 0.297 ITTools 0.745 0.060 0.358 Automati 0.809 0.065 0.423 CustInvo 0.559 0.045 0.468 Complexi 0.767 0.062 0.530 QuantCom 0.648 0.052 0.582 NoActivi 0.717 0.058 0.640 QualComm 0.673 0.054 0.695 Critical 0.702 0.057 0.751 Document 0.700 0.057 0.808 Multitas 0.541 0.044 0.852 NoCusts 0.649 0.052 0.904 Requirem 0.671 0.054 0.958 SerialDe 0.515 0.042 1.000 _________________________________________ Variance Explained by Each Factor: FACTOR 1 FACTOR 2 FACTOR 3 FACTOR 4 FACTOR 5 FACTOR 6 _______________________________________________________________________ 5.245 2.153 1.399 1.294 1.206 1.081 _______________________________________________________________________ Total: 12.378 EQS 6.1 for Windows Tue Nov 06 23:48:25 2007 Page 53 214 COMPONENT MATRIX (ADJUSTED COMPONENTS) [Used in calculations below] FACTOR 1 FACTOR 2 FACTOR 3 FACTOR 4 FACTOR 5 FACTOR 6 _______________________________________________________________________ Training 0.583 -0.173 0.142 0.101 0.256 -0.228 WFDivers 0.500 -0.133 0.263 0.110 0.024 -0.111 NoPeople 0.258 -0.126 0.145 0.176 0.198 -0.191 NoRoles 0.586 -0.088 -0.019 0.127 0.072 -0.079 WorkType 0.556 -0.231 -0.004 -0.043 0.236 0.114 CustSupp 0.402 0.319 0.331 -0.197 0.242 -0.021 Teaming 0.498 0.217 0.040 0.278 -0.085 -0.152 ITTools 0.502 -0.460 0.243 -0.132 -0.156 0.070 Automati 0.567 -0.383 0.193 -0.247 -0.186 0.114 CustInvo 0.256 0.509 0.077 -0.147 -0.197 0.101 Complexi 0.561 -0.085 -0.454 -0.081 0.178 -0.011 QuantCom 0.493 0.112 0.044 0.374 -0.247 0.040 NoActivi 0.546 -0.115 -0.448 0.009 0.123 0.070 QualComm 0.523 0.259 0.039 0.290 -0.096 0.237 Critical 0.465 0.417 -0.060 -0.298 0.015 -0.185 Document 0.449 0.020 0.132 -0.108 0.145 0.439 Multitas 0.407 -0.212 -0.076 0.009 -0.369 -0.138 NoCusts 0.459 0.089 -0.071 -0.359 -0.203 
-0.220 Requirem 0.348 0.625 0.027 0.023 0.074 0.007 SerialDe 0.547 0.008 -0.297 0.036 -0.104 0.087 _______________________________________________________________________ 215 Communal. Prop. Cum.Prop. _________________________________________ Training 0.518 0.057 0.057 WFDivers 0.361 0.040 0.096 NoPeople 0.210 0.023 0.120 NoRoles 0.379 0.042 0.161 WorkType 0.434 0.048 0.209 CustSupp 0.470 0.052 0.260 Teaming 0.405 0.044 0.305 ITTools 0.569 0.062 0.367 Automati 0.613 0.067 0.434 CustInvo 0.402 0.044 0.479 Complexi 0.566 0.062 0.541 QuantCom 0.461 0.051 0.591 NoActivi 0.532 0.058 0.650 QualComm 0.492 0.054 0.704 Critical 0.517 0.057 0.760 Document 0.444 0.049 0.809 Multitas 0.372 0.041 0.850 NoCusts 0.442 0.049 0.898 Requirem 0.519 0.057 0.955 SerialDe 0.407 0.045 1.000 _________________________________________ Variance Explained by Each Factor: FACTOR 1 FACTOR 2 FACTOR 3 FACTOR 4 FACTOR 5 FACTOR 6 _______________________________________________________________________ 4.701 1.609 0.855 0.750 0.662 0.536 _______________________________________________________________________ Total: 9.112 EQS 6.1 for Windows Tue Nov 06 23:48:25 2007 Page 54 216 FACTOR LOADINGS (KAISER VARIMAX SOLUTION) Converge after 5 iterations FACTOR 1 FACTOR 2 FACTOR 3 FACTOR 4 FACTOR 5 FACTOR 6 _______________________________________________________________________ Training 0.628 0.106 0.238 0.106 0.195 0.085 WFDivers 0.452 0.093 0.046 0.218 0.294 0.112 NoPeople 0.448 -0.024 0.049 0.074 0.035 0.003 NoRoles 0.384 0.107 0.325 0.256 0.206 0.085 WorkType 0.357 0.029 0.376 0.043 0.233 0.329 CustSupp 0.314 0.530 -0.048 0.020 0.018 0.296 Teaming 0.291 0.253 0.140 0.478 0.059 -0.064 ITTools 0.270 -0.074 0.099 0.059 0.651 0.232 Automati 0.191 0.048 0.166 0.047 0.686 0.272 CustInvo -0.173 0.538 -0.033 0.263 0.030 0.109 Complexi 0.164 0.129 0.710 0.051 0.107 0.064 QuantCom 0.164 0.086 0.118 0.621 0.164 0.025 NoActivi 0.125 0.040 0.688 0.140 0.112 0.097 QualComm 0.103 0.203 0.165 0.588 0.035 0.258 Critical 
0.093 0.663 0.240 0.061 0.080 -0.019 Document 0.081 0.155 0.177 0.134 0.152 0.584 Multitas 0.070 0.014 0.210 0.242 0.485 -0.168 NoCusts 0.046 0.454 0.248 0.003 0.398 -0.117 Requirem 0.060 0.588 0.079 0.319 -0.223 0.111 SerialDe 0.034 0.137 0.497 0.305 0.205 0.071 _______________________________________________________________________ 217 Communal. Prop. Cum.Prop. _________________________________________ Training 0.518 0.057 0.057 WFDivers 0.361 0.040 0.096 NoPeople 0.210 0.023 0.120 NoRoles 0.379 0.042 0.161 WorkType 0.434 0.048 0.209 CustSupp 0.470 0.052 0.260 Teaming 0.405 0.044 0.305 ITTools 0.569 0.062 0.367 Automati 0.613 0.067 0.434 CustInvo 0.402 0.044 0.479 Complexi 0.566 0.062 0.541 QuantCom 0.461 0.051 0.591 NoActivi 0.532 0.058 0.650 QualComm 0.492 0.054 0.704 Critical 0.517 0.057 0.760 Document 0.444 0.049 0.809 Multitas 0.372 0.041 0.850 NoCusts 0.442 0.049 0.898 Requirem 0.519 0.057 0.955 SerialDe 0.407 0.045 1.000 _________________________________________ Variance Explained by Each Factor: FACTOR 1 FACTOR 2 FACTOR 3 FACTOR 4 FACTOR 5 FACTOR 6 _______________________________________________________________________ 1.502 1.776 1.835 1.463 1.688 0.847 _______________________________________________________________________ Total: 9.112 FACTOR TRANSFORMATION MATRIX FACTOR 1 FACTOR 2 FACTOR 3 FACTOR 4 FACTOR 5 FACTOR 6 FACTOR 1 0.439 FACTOR 2 0.375 0.747 FACTOR 3 0.504 -0.142 -0.820 FACTOR 4 0.408 0.305 0.030 -0.715 FACTOR 5 0.430 -0.522 0.233 0.382 0.584 FACTOR 6 0.249 -0.022 0.318 0.139 -0.365 0.827 218 FACTOR SCORE COEFFICIENTS FACTOR 1 FACTOR 2 FACTOR 3 FACTOR 4 FACTOR 5 FACTOR 6 _______________________________________________________________________ NoPeople 0.287 -0.041 -0.036 -0.003 -0.065 -0.059 Multitas -0.065 -0.024 0.025 0.116 0.253 -0.214 Teaming 0.121 0.042 -0.028 0.229 -0.041 -0.141 NoRoles 0.149 -0.022 0.075 0.063 -0.003 -0.028 Training 0.345 0.001 0.013 -0.054 -0.032 -0.046 WFDivers 0.214 -0.005 -0.105 0.058 0.077 -0.005 NoActivi -0.042 
-0.061 0.348 -0.011 -0.067 0.015 Complexi -0.006 0.001 0.361 -0.090 -0.071 -0.022 WorkType 0.116 -0.050 0.121 -0.079 -0.005 0.191 Document -0.081 0.002 0.018 0.004 -0.000 0.442 SerialDe -0.115 -0.016 0.209 0.110 0.030 -0.010 Requirem 0.004 0.234 -0.012 0.103 -0.161 0.041 CustSupp 0.164 0.246 -0.123 -0.117 -0.048 0.169 Critical -0.002 0.322 0.063 -0.108 0.014 -0.101 NoCusts -0.064 0.238 0.054 -0.116 0.216 -0.186 CustInvo -0.177 0.238 -0.076 0.102 0.046 0.058 Automati -0.044 -0.004 -0.047 -0.055 0.330 0.132 ITTools 0.031 -0.065 -0.080 -0.025 0.305 0.104 QuantCom -0.003 -0.069 -0.049 0.359 0.028 -0.052 QualComm -0.056 -0.025 -0.012 0.315 -0.061 0.152 _______________________________________________________________________ 219 Appendix E: SPSS Exploratory Factor Analysis Results Factor Analysis: Varimax rotation [DataSet1] C:\Documents and Settings\Arthur Dhallin\My Documents\PhD\Data\Scrubbed_Data_Variables_20071106.sav Communalities Initial Extraction NoPeople 1.000 .293 Multitasking 1.000 .462 Teaming 1.000 .536 NoRoles 1.000 .458 Training 1.000 .694 WorkforceDi v 1.000 .470 NoActivities 1.000 .703 Complexity 1.000 .767 WorkType 1.000 .502 Documentati on 1.000 .702 DegreeSerial 1.000 .511 Requirement s 1.000 .665 CustSupport 1.000 .692 Criticality 1.000 .679 NoCusts 1.000 .680 CustDiversit y 1.000 .695 CustInvolve 1.000 .576 Automation 1.000 .761 Tooluse 1.000 .711 CommQuant 1.000 .649 CommQual 1.000 .673 Extraction Method: Principal Component Analysis. 
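Each extraction communality above is the sum of squared unrotated loadings for that variable across the six retained components. A quick arithmetic check (not SPSS code) using the loadings for Complexity as printed in the unrotated Component Matrix in this appendix:

```python
# Quick check: an extraction communality is the sum of squared unrotated
# loadings across the six retained components. The six values below are
# Complexity's loadings from the Component Matrix in this appendix.
loadings_complexity = [0.578, 0.091, 0.295, -0.544, 0.203, 0.032]

communality = sum(l ** 2 for l in loadings_complexity)
print(round(communality, 3))  # 0.768, matching the reported .767 to rounding
```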
Total Variance Explained

           Initial Eigenvalues           Extraction Sums of Sq. Loadings   Rotation Sums of Sq. Loadings
Component  Total  % of Variance  Cum. %  Total  % of Variance  Cum. %      Total  % of Variance  Cum. %
 1         5.407  25.746   25.746       5.407  25.746   25.746            2.375  11.307  11.307
 2         2.154  10.256   36.002       2.154  10.256   36.002            2.303  10.967  22.274
 3         1.567   7.463   43.466       1.567   7.463   43.466            2.204  10.493  32.767
 4         1.396   6.648   50.114       1.396   6.648   50.114            2.167  10.321  43.088
 5         1.274   6.068   56.182       1.274   6.068   56.182            1.944   9.256  52.345
 6         1.082   5.150   61.332       1.082   5.150   61.332            1.887   8.988  61.332
 7          .980   4.666   65.998
 8          .899   4.280   70.279
 9          .774   3.686   73.964
10          .681   3.243   77.207
11          .669   3.186   80.393
12          .628   2.988   83.382
13          .595   2.833   86.214
14          .537   2.558   88.772
15          .456   2.173   90.946
16          .439   2.089   93.034
17          .375   1.787   94.822
18          .322   1.535   96.356
19          .297   1.413   97.769
20          .267   1.274   99.043
21          .201    .957  100.000
Extraction Method: Principal Component Analysis.

[Scree Plot: eigenvalue versus component number for components 1-21]

Component Matrix(a)
               Component
                1      2      3      4      5      6
NoPeople       .264  -.142  -.212   .238  -.076   .310
Multitasking   .446  -.249   .291  -.158  -.262   .151
Teaming        .514   .257  -.181   .092  -.352   .204
NoRoles        .605  -.095  -.252   .022  -.077   .115
Training       .599  -.192  -.334   .247   .075   .347
WorkforceDiv   .534  -.153   .037   .342  -.114   .170
NoActivities   .567  -.128  -.260  -.537   .057  -.080
Complexity     .578  -.091  -.295  -.544   .203   .032
WorkType       .583  -.263  -.185   .028   .203  -.130
Documentation  .463   .028  -.138   .186   .248  -.610
DegreeSerial   .577   .012  -.046  -.380  -.116  -.135
Requirements   .365   .725  -.053   .048   .020   .003
CustSupport    .410   .374  -.124   .438   .419   .038
Criticality    .489   .483   .092  -.115   .351   .248
NoCusts        .514   .094   .478  -.196   .243   .284
CustDiversity  .440  -.035   .697  -.069  -.091   .027
CustInvolve    .289   .583   .355   .030   .021  -.159
Automation     .612  -.446   .301   .176   .171  -.194
Tooluse        .540  -.534   .215   .262   .064  -.125
CommQuant      .525   .130   .008   .071  -.588  -.072
CommQual       .547   .303  -.123   .085  -.380  -.340
Extraction Method: Principal Component Analysis.
a 6 components extracted.

Rotated Component Matrix(a)
               Component
                1      2      3      4      5      6
NoPeople       .016  -.018   .004   .537   .060  -.008
Multitasking   .220  -.136   .561   .184   .213  -.017
Teaming        .153   .241   .056   .407   .524  -.111
NoRoles        .363   .078   .084   .474   .249   .161
Training       .243   .109   .041   .769   .050   .168
WorkforceDiv  -.022   .064   .288   .533   .218   .224
NoActivities   .810   .011   .087   .086   .128   .123
Complexity     .847   .118   .064   .154  -.005   .088
WorkType       .395   .032   .095   .330   .029   .476
Documentation  .186   .208  -.109  -.038   .195   .757
DegreeSerial   .576   .085   .222   .018   .327   .123
Requirements   .074   .718  -.080   .011   .367  -.046
CustSupport   -.079   .660  -.115   .342  -.023   .345
Criticality    .279   .737   .192   .137  -.032  -.041
NoCusts        .245   .419   .649   .097  -.114  -.014
CustDiversity  .007   .148   .796  -.069   .153   .103
CustInvolve   -.072   .599   .235  -.246   .302   .073
Automation     .140  -.061   .531   .241  -.006   .631
Tooluse        .061  -.194   .464   .350   .018   .575
CommQuant      .096   .020   .217   .206   .741   .029
CommQual       .163   .212  -.002   .056   .731   .254
Extraction Method: Principal Component Analysis.
Rotation Method: Varimax with Kaiser Normalization.
a Rotation converged in 10 iterations.

Component Transformation Matrix
Component    1      2      3      4      5      6
1          .486   .328   .404   .448   .378   .385
2         -.121   .813  -.243  -.271   .327  -.290
3         -.359   .108   .843  -.377  -.088   .001
4         -.772   .107  -.126   .466   .085   .389
5          .150   .436  -.092  -.034  -.804   .363
6         -.035   .131   .208   .604  -.299  -.696
Extraction Method: Principal Component Analysis.
Rotation Method: Varimax with Kaiser Normalization.
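The Rotated Component Matrix above comes from a varimax rotation of the extracted components. As an illustrative sketch (not SPSS's internal routine, which additionally applies Kaiser normalization of the rows), the varimax criterion can be implemented in a few lines of numpy:

```python
import numpy as np

def varimax(L, max_iter=100, tol=1e-6):
    """Rotate a loading matrix L (variables x factors) toward the varimax
    criterion using the standard SVD-based iteration. Returns rotated L."""
    p, k = L.shape
    R = np.eye(k)           # accumulated orthogonal rotation
    var_old = 0.0
    for _ in range(max_iter):
        Lr = L @ R
        # Gradient-like target for the varimax criterion (gamma = 1)
        u, s, vt = np.linalg.svd(
            L.T @ (Lr ** 3 - Lr @ np.diag((Lr ** 2).sum(axis=0)) / p))
        R = u @ vt          # nearest orthogonal matrix to the target
        var_new = s.sum()
        if var_new < var_old * (1 + tol):
            break
        var_old = var_new
    return L @ R

# Tiny hypothetical example: rotate a 2-factor pattern for four variables
L = np.array([[0.7, 0.5], [0.6, 0.4], [0.5, -0.6], [0.4, -0.5]])
print(np.round(varimax(L), 3))
```

Because the rotation is orthogonal, it only redistributes variance among the components; each variable's communality is unchanged, which is why the Communalities table is reported once for both the unrotated and rotated solutions.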
Factor Analysis: Oblique Rotation
[DataSet1] C:\Documents and Settings\Arthur Dhallin\My Documents\PhD\Data\Scrubbed_Data_Variables_20071106.sav

Communalities
               Initial  Extraction
NoPeople        1.000     .293
Multitasking    1.000     .462
Teaming         1.000     .536
NoRoles         1.000     .458
Training        1.000     .694
WorkforceDiv    1.000     .470
NoActivities    1.000     .703
Complexity      1.000     .767
WorkType        1.000     .502
Documentation   1.000     .702
DegreeSerial    1.000     .511
Requirements    1.000     .665
CustSupport     1.000     .692
Criticality     1.000     .679
NoCusts         1.000     .680
CustDiversity   1.000     .695
CustInvolve     1.000     .576
Automation      1.000     .761
Tooluse         1.000     .711
CommQuant       1.000     .649
CommQual        1.000     .673
Extraction Method: Principal Component Analysis.

Total Variance Explained

           Initial Eigenvalues           Extraction Sums of Sq. Loadings   Rotation Sums of Sq. Loadings(a)
Component  Total  % of Variance  Cum. %  Total  % of Variance  Cum. %      Total
 1         5.407  25.746   25.746       5.407  25.746   25.746            2.759
 2         2.154  10.256   36.002       2.154  10.256   36.002            2.495
 3         1.567   7.463   43.466       1.567   7.463   43.466            2.758
 4         1.396   6.648   50.114       1.396   6.648   50.114            3.247
 5         1.274   6.068   56.182       1.274   6.068   56.182            2.506
 6         1.082   5.150   61.332       1.082   5.150   61.332            2.355
 7          .980   4.666   65.998
 8          .899   4.280   70.279
 9          .774   3.686   73.964
10          .681   3.243   77.207
11          .669   3.186   80.393
12          .628   2.988   83.382
13          .595   2.833   86.214
14          .537   2.558   88.772
15          .456   2.173   90.946
16          .439   2.089   93.034
17          .375   1.787   94.822
18          .322   1.535   96.356
19          .297   1.413   97.769
20          .267   1.274   99.043
21          .201    .957  100.000
Extraction Method: Principal Component Analysis.
a When components are correlated, sums of squared loadings cannot be added to obtain a total variance.
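Each "% of Variance" entry in the Total Variance Explained table is simply the component's eigenvalue divided by the number of variables (21), times 100. A quick arithmetic check against the reported values (a sketch, not SPSS code; small discrepancies come from rounding of the printed eigenvalues):

```python
# Eigenvalues of the six retained components, as reported above.
eigenvalues = [5.407, 2.154, 1.567, 1.396, 1.274, 1.082]
n_vars = 21  # 21 survey variables enter the analysis

pct = [100 * ev / n_vars for ev in eigenvalues]
cumulative = sum(pct)
print(round(pct[0], 3), round(cumulative, 3))
# 25.748 61.333 -- versus the reported 25.746 and 61.332 (rounding)
```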
[Scree Plot: eigenvalue versus component number for components 1-21]

Component Matrix(a)
               Component
                1      2      3      4      5      6
NoPeople       .264  -.142  -.212   .238  -.076   .310
Multitasking   .446  -.249   .291  -.158  -.262   .151
Teaming        .514   .257  -.181   .092  -.352   .204
NoRoles        .605  -.095  -.252   .022  -.077   .115
Training       .599  -.192  -.334   .247   .075   .347
WorkforceDiv   .534  -.153   .037   .342  -.114   .170
NoActivities   .567  -.128  -.260  -.537   .057  -.080
Complexity     .578  -.091  -.295  -.544   .203   .032
WorkType       .583  -.263  -.185   .028   .203  -.130
Documentation  .463   .028  -.138   .186   .248  -.610
DegreeSerial   .577   .012  -.046  -.380  -.116  -.135
Requirements   .365   .725  -.053   .048   .020   .003
CustSupport    .410   .374  -.124   .438   .419   .038
Criticality    .489   .483   .092  -.115   .351   .248
NoCusts        .514   .094   .478  -.196   .243   .284
CustDiversity  .440  -.035   .697  -.069  -.091   .027
CustInvolve    .289   .583   .355   .030   .021  -.159
Automation     .612  -.446   .301   .176   .171  -.194
Tooluse        .540  -.534   .215   .262   .064  -.125
CommQuant      .525   .130   .008   .071  -.588  -.072
CommQual       .547   .303  -.123   .085  -.380  -.340
Extraction Method: Principal Component Analysis.
a 6 components extracted.
Pattern Matrix(a)
               Component
                1      2      3      4      5      6
NoPeople       .557  -.016  -.031   .035  -.036   .063
Multitasking   .128  -.189   .530  -.173  -.175   .108
Teaming        .380   .186  -.015  -.087  -.496   .175
NoRoles        .416   .029   .003  -.311  -.188  -.083
Training       .750   .097  -.030  -.169   .023  -.077
WorkforceDiv   .507   .028   .254   .119  -.177  -.150
NoActivities  -.030  -.051  -.013  -.833  -.045  -.052
Complexity     .046   .076  -.035  -.873   .104  -.011
WorkType       .243  -.013   .031  -.355   .046  -.421
Documentation -.148   .140  -.148  -.148  -.151  -.779
DegreeSerial  -.094   .006   .143  -.566  -.265  -.060
Requirements  -.023   .686  -.112  -.034  -.332   .048
CustSupport    .328   .668  -.125   .152   .084  -.340
Criticality    .093   .737   .168  -.244   .122   .093
NoCusts        .038   .408   .650  -.196   .204   .103
CustDiversity -.138   .097   .816   .063  -.110  -.033
CustInvolve   -.297   .558   .247   .118  -.279  -.078
Automation     .144  -.116   .516  -.059   .075  -.565
Tooluse        .277  -.243   .447   .023   .034  -.508
CommQuant      .146  -.078   .151  -.023  -.739   .026
CommQual      -.033   .106  -.074  -.103  -.719  -.238
Extraction Method: Principal Component Analysis.
Rotation Method: Oblimin with Kaiser Normalization.
a Rotation converged in 14 iterations.
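For an oblique rotation, the Structure Matrix is the Pattern Matrix post-multiplied by the Component Correlation Matrix (Structure = Pattern x Phi). A numpy check on the NoPeople row, with the pattern loadings and Phi transcribed from the tables in this appendix:

```python
import numpy as np

# Pattern-matrix row for NoPeople and the Component Correlation Matrix (Phi),
# both transcribed from the SPSS oblimin output in this appendix.
pattern_nopeople = np.array([.557, -.016, -.031, .035, -.036, .063])
phi = np.array([
    [1.000,  .037,  .165, -.248, -.147, -.224],
    [ .037, 1.000,  .063, -.120, -.196, -.077],
    [ .165,  .063, 1.000, -.224, -.158, -.147],
    [-.248, -.120, -.224, 1.000,  .201,  .179],
    [-.147, -.196, -.158,  .201, 1.000,  .100],
    [-.224, -.077, -.147,  .179,  .100, 1.000],
])

structure_nopeople = pattern_nopeople @ phi
print(np.round(structure_nopeople, 3))
# Reproduces the reported Structure Matrix row
# (.534 .001 .048 -.091 -.097 -.053) to within rounding of the inputs.
```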
Structure Matrix
               Component
                1      2      3      4      5      6
NoPeople       .534   .001   .048  -.091  -.097  -.053
Multitasking   .252  -.103   .590  -.316  -.265  -.033
Teaming        .440   .294   .131  -.269  -.586   .012
NoRoles        .541   .125   .184  -.471  -.326  -.254
Training       .805   .144   .145  -.369  -.143  -.276
WorkforceDiv   .580   .094   .362  -.130  -.288  -.299
NoActivities   .191   .060   .180  -.835  -.201  -.193
Complexity     .247   .161   .159  -.867  -.089  -.169
WorkType       .423   .064   .205  -.487  -.105  -.538
Documentation  .066   .232   .008  -.264  -.242  -.777
DegreeSerial   .122   .136   .305  -.640  -.395  -.188
Requirements   .031   .744  -.020  -.144  -.448  -.022
CustSupport    .358   .663  -.026  -.026  -.079  -.411
Criticality    .170   .749   .252  -.352  -.103  -.041
NoCusts        .156   .426   .678  -.341  -.013  -.047
CustDiversity  .008   .160   .807  -.125  -.228  -.129
CustInvolve   -.206   .609   .263  -.002  -.368  -.098
Automation     .354  -.043   .617  -.282  -.073  -.667
Tooluse        .445  -.174   .542  -.201  -.076  -.610
CommQuant      .276   .083   .288  -.227  -.771  -.101
CommQual       .144   .272   .099  -.279  -.768  -.318
Extraction Method: Principal Component Analysis.
Rotation Method: Oblimin with Kaiser Normalization.

Component Correlation Matrix
Component    1      2      3      4      5      6
1         1.000   .037   .165  -.248  -.147  -.224
2          .037  1.000   .063  -.120  -.196  -.077
3          .165   .063  1.000  -.224  -.158  -.147
4         -.248  -.120  -.224  1.000   .201   .179
5         -.147  -.196  -.158   .201  1.000   .100
6         -.224  -.077  -.147   .179   .100  1.000
Extraction Method: Principal Component Analysis.
Rotation Method: Oblimin with Kaiser Normalization.

Appendix F: SPSS Correlation Matrix Results for Model Variables

Correlations
Variables (in order): Date, #People, Engr, Over, LvlReqsUnd, LSR1, LSR2, LSR3, ProdCrit, #Cust, CustInvolv, ActCompl, ProcExecType, Document, ToolUsg, Teaming, Invest, ROI
Date Pearson Correlation 1 .237* -.056 .119 -.338** -.069 -.045 .103 -.073 .207* .184 -.115 -.009 .130 -.064 .084 .080 .109 Sig.
(2- tailed) .020 .59 0 .25 0 .001 .50 7 .66 5 .31 8 .478 .04 3 .073 .264 .928 .206 .533 .413 .438 .28 9 N 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 #People Pearson Correlat ion .23 7 * 1 - .10 7 .19 4 -.058 - .04 2 .06 9 - .02 9 .099 .59 5 ** .095 -.013 .113 -.238 * .085 .019 .320 ** - .09 1 Sig. (2- tailed) .02 0 .29 9 .05 8 .571 .68 3 .50 4 .78 0 .339 .00 0 .359 .902 .273 .020 .409 .855 .001 .38 0 N 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 Engr Pearson Correlat ion - .05 6 -.107 1 - .23 4 * -.085 .30 9 ** .10 6 - .37 3 ** .397 ** - .17 7 .407 ** .433 ** -.154 -.009 -.107 .260 * .061 .14 1 Sig. (2- tailed) .59 0 .299 .02 2 .411 .00 2 .30 4 .00 0 .000 .08 5 .000 .000 .135 .934 .299 .011 .554 .17 0 N 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 Over Pearson Correlat ion .11 9 .194 - .23 4 * 1 .380 ** - .27 1 ** - .03 0 .26 6 ** -.247 * .08 8 -.154 -.494 ** .215 * .162 .344 ** -.143 - .008 - .10 8 Sig. (2- tailed) .25 0 .058 .02 2 .000 .00 8 .77 1 .00 9 .015 .39 5 .134 .000 .035 .114 .001 .165 .940 .29 4 N 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 231 LvlReqsU nd Pearson Correlat ion - .33 8 ** -.058 - .08 5 .38 0 ** 1 - .22 6 * - .06 2 .25 8 * - .292 ** - .03 2 -.412 ** -.507 ** .229 * .109 .299 ** - .394 ** - .046 .03 1 Sig. (2- tailed) .00 1 .571 .41 1 .00 0 .02 7 .55 1 .01 1 .004 .75 4 .000 .000 .025 .290 .003 .000 .656 .76 5 N 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 LSR1 Pearson Correlat ion - .06 9 -.042 .30 9 ** - .27 1 ** -.226 * 1 - .40 8 ** - .48 8 ** .361 ** - .10 7 .327 ** .445 ** -.206 * .134 -.211 * .14 9 .169 .19 7 Sig. (2- tailed) .50 7 .683 .00 2 .00 8 .027 .00 0 .00 0 .000 .30 0 .001 .000 .044 .194 .039 .147 .099 .05 4 N 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 LSR2 Pearson Correlat ion - .04 5 .069 .10 6 - .03 0 -.062 - .40 8 ** 1 - .59 8 ** .236 * .06 1 .262 ** .147 .059 -.164 .121 .183 - .076 - .10 3 Sig. 
(2- tailed) .66 5 .504 .30 4 .77 1 .551 .00 0 .00 0 .021 .55 2 .010 .152 .566 .111 .241 .075 .460 .31 9 N 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 LSR3 Pearson Correlat ion .10 3 -.029 - .37 3 ** .26 6 ** .258 * - .48 8 ** - .59 8 ** 1 - .543 ** .03 5 -.538 ** -.532 ** .124 .039 .070 - .306 ** - .076 - .07 5 Sig. (2- tailed) .31 8 .780 .00 0 .00 9 .011 .00 0 .00 0 .000 .73 4 .000 .000 .229 .705 .497 .002 .464 .46 6 N 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 ProdCrit Pearson Correlat ion - .07 3 .099 .39 7 ** - .24 7 * -.292 ** .36 1 ** .23 6 * - .54 3 ** 1 .01 3 .492 ** .647 ** .059 -.015 -.048 .226 * .110 .05 1 Sig. (2- tailed) .47 8 .339 .00 0 .01 5 .004 .00 0 .02 1 .00 0 .89 9 .000 .000 .571 .885 .644 .027 .285 .62 5 N 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 232 #Cust Pearson Correlat ion .20 7 * .595 * * - .17 7 .08 8 -.032 - .10 7 .06 1 .03 5 .013 1 .062 .045 .164 -.113 .302 ** -.007 .132 .03 4 Sig. (2- tailed) .04 3 .000 .08 5 .39 5 .754 .30 0 .55 2 .73 4 .899 .546 .667 .110 .272 .003 .947 .201 .74 2 N 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 CustInvol v Pearson Correlat ion .18 4 .095 .40 7 ** - .15 4 -.412 ** .32 7 ** .26 2 ** - .53 8 ** .492 ** .06 2 1 .538 ** -.165 .065 -.058 .326 ** .194 .07 3 Sig. (2- tailed) .07 3 .359 .00 0 .13 4 .000 .00 1 .01 0 .00 0 .000 .54 6 .000 .108 .528 .571 .001 .059 .47 7 N 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 ActCompl Pearson Correlat ion - .11 5 -.013 .43 3 ** - .49 4 ** -.507 ** .44 5 ** .14 7 - .53 2 ** .647 ** .04 5 .538 ** 1 -.249 * -.003 -.257 * .447 ** .110 .13 9 Sig. (2- tailed) .26 4 .902 .00 0 .00 0 .000 .00 0 .15 2 .00 0 .000 .66 7 .000 .014 .976 .012 .000 .285 .17 7 N 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 ProcExec Type Pearson Correlat ion - .00 9 .113 - .15 4 .21 5 * .229 * - .20 6 * .05 9 .12 4 .059 .16 4 -.165 -.249 * 1 -.112 .260 * - .427 ** - .021 - .17 5 Sig. 
(2- tailed) .92 8 .273 .13 5 .03 5 .025 .04 4 .56 6 .22 9 .571 .11 0 .108 .014 .279 .010 .000 .835 .08 8 N 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 Document Pearson Correlat ion .13 0 - .238 * - .00 9 .16 2 .109 .13 4 - .16 4 .03 9 -.015 - .11 3 .065 -.003 -.112 1 .058 -.147 - .117 .08 2 Sig. (2- tailed) .20 6 .020 .93 4 .11 4 .290 .19 4 .11 1 .70 5 .885 .27 2 .528 .976 .279 .574 .153 .258 .42 6 N 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 233 ToolUsg Pearson Correlat ion - .06 4 .085 - .10 7 .34 4 ** .299 ** - .21 1 * .12 1 .07 0 -.048 .30 2 ** -.058 -.257 * .260 * .058 1 - .284 ** .064 - .08 0 Sig. (2- tailed) .53 3 .409 .29 9 .00 1 .003 .03 9 .24 1 .49 7 .644 .00 3 .571 .012 .010 .574 .005 .538 .43 9 N 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 Teaming Pearson Correlat ion .08 4 .019 .26 0 * - .14 3 -.394 ** .14 9 .18 3 - .30 6 ** .226 * - .00 7 .326 ** .447 ** -.427 ** -.147 - .284 ** 1 .090 .02 5 Sig. (2- tailed) .41 3 .855 .01 1 .16 5 .000 .14 7 .07 5 .00 2 .027 .94 7 .001 .000 .000 .153 .005 .385 .80 6 N 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 Invest Pearson Correlat ion .08 0 .320 * * .06 1 - .00 8 -.046 .16 9 - .07 6 - .07 6 .110 .13 2 .194 .110 -.021 -.117 .064 .090 1 - .20 9 * Sig. (2- tailed) .43 8 .001 .55 4 .94 0 .656 .09 9 .46 0 .46 4 .285 .20 1 .059 .285 .835 .258 .538 .385 .04 1 N 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 ROI Pearson Correlat ion .10 9 -.091 .14 1 - .10 8 .031 .19 7 - .10 3 - .07 5 .051 .03 4 .073 .139 -.175 .082 -.080 .025 - .209 * 1 Sig. (2- tailed) .28 9 .380 .17 0 .29 4 .765 .05 4 .31 9 .46 6 .625 .74 2 .477 .177 .088 .426 .439 .806 .041 N 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 96 *. Correlation is significant at the 0.05 level (2-tailed). **. Correlation is significant at the 0.01 level (2-tailed). 
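Appendix G, which follows, develops the predictive model by backwards elimination: the full term set is fit, the least significant predictor is dropped, and the model is refit until every remaining term passes the retention test. A minimal numpy sketch of the procedure (the function name and the |t|-based stopping rule are illustrative; the dissertation's fits rank terms by p-value, which orders terms the same way as |t| at a common degrees of freedom):

```python
import numpy as np

def backward_eliminate(X, y, names, t_keep=2.0):
    """Greedy backwards elimination for OLS: repeatedly refit and drop the
    predictor with the smallest |t| until all remaining |t| >= t_keep.
    Returns the surviving names, coefficients (intercept first), and t-values."""
    names = list(names)
    while True:
        Xc = np.column_stack([np.ones(len(y)), X])      # add intercept
        beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
        resid = y - Xc @ beta
        dof = len(y) - Xc.shape[1]
        sigma2 = resid @ resid / dof                     # residual variance
        cov = sigma2 * np.linalg.inv(Xc.T @ Xc)
        t = beta[1:] / np.sqrt(np.diag(cov)[1:])         # skip intercept
        worst = int(np.argmin(np.abs(t)))
        if abs(t[worst]) >= t_keep or X.shape[1] == 1:
            return names, beta, t
        X = np.delete(X, worst, axis=1)
        del names[worst]

# Hypothetical demo: one real predictor (x1) plus one pure-noise column (x2)
rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = rng.normal(size=200)
y = 3.0 * x1 + rng.normal(size=200)
kept, beta, t = backward_eliminate(np.column_stack([x1, x2]), y, ["x1", "x2"])
print(kept)
```

In the appendix, this corresponds to ToolUse (p = 0.9915) leaving between fits L1 and L2, CustInvolv (p = 0.9974) between L2 and L3, and Document (p = 0.9616) between L3 and L4.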
Appendix G: Results of Predictive Model Development Using Linear Regression (Backwards Elimination)

Data set = Dissertation, Name of Fit = L1
Deleted cases are (15 19 22 95)
Normal Regression
Kernel mean function = Identity
Response = log[ROI]
Terms = (ActComplex CustInvolv Date Document Engr log[Invest] LvlReqsUnd NoCust Overhead People ProcExecType ProdCrit Teaming ToolUse {F}Service)

Coefficient Estimates
Label          Estimate       Std. Error    t-value   p-value
Constant       -22.9449       31.5217       -0.728    0.4689
ActComplex     0.157169       0.503291      0.312     0.7557
CustInvolv     -0.00160006    0.382354      -0.004    0.9967
Date           0.000771077    0.000807553   0.955     0.3427
Document       -0.0142149     0.298194      -0.048    0.9621
Engr           0.0600066      0.326708      0.184     0.8548
log[Invest]    -0.458099      0.0999142     -4.585    0.0000
LvlReqsUnd     0.0451418      0.387280      0.117     0.9075
NoCust         0.102293       0.0657286     1.556     0.1238
Overhead       0.147869       0.348845      0.424     0.6729
People         -0.419884      0.435905      -0.963    0.3385
ProcExecType   -0.0921076     0.323596      -0.285    0.7767
ProdCrit       0.315777       0.390877      0.808     0.4217
Teaming        -0.391908      0.352559      -1.112    0.2699
ToolUse        0.00389224     0.364051      0.011     0.9915
{F}Service[2]  -0.410287      0.364968      -1.124    0.2645
{F}Service[3]  -0.599576      0.427095      -1.404    0.1645

R-Squared: 0.319216
Sigma hat: 1.21873
Number of cases: 96
Number of cases used: 92
Degrees of freedom: 75

Summary Analysis of Variance Table
Source        df   SS        MS        F      p-value
Regression    16   52.2338   3.26461   2.20   0.0120
Residual      75   111.398   1.4853
Lack of fit   69   102.043   1.47888   0.95   0.6015
Pure Error     6   9.35474   1.55912

Data set = Dissertation, Name of Fit = L2
Deleted cases are (15 19 22 95)
Normal Regression
Kernel mean function = Identity
Response = log[ROI]
Terms = (ActComplex CustInvolv Date Document Engr log[Invest] LvlReqsUnd NoCust Overhead People ProcExecType ProdCrit Teaming {F}Service)

Coefficient Estimates
Label          Estimate       Std. Error    t-value   p-value
Constant       -22.9098       31.1441       -0.736    0.4642
ActComplex     0.156387       0.494659      0.316     0.7528
CustInvolv     -0.00123910    0.378346      -0.003    0.9974
Date           0.000770155    0.000797640   0.966     0.3373
Document       -0.0141188     0.296091      -0.048    0.9621
Engr           0.0602991      0.323412      0.186     0.8526
log[Invest]    -0.458020      0.0989883     -4.627    0.0000
LvlReqsUnd     0.0452226      0.384651      0.118     0.9067
NoCust         0.102556       0.0605682     1.693     0.0945
Overhead       0.148786       0.335904      0.443     0.6591
People         -0.420859      0.423464      -0.994    0.3235
ProcExecType   -0.0920401     0.321399      -0.286    0.7754
ProdCrit       0.316189       0.386406      0.818     0.4158
Teaming        -0.392581      0.344596      -1.139    0.2582
{F}Service[2]  -0.409721      0.358719      -1.142    0.2570
{F}Service[3]  -0.599372      0.423850      -1.414    0.1614

R-Squared: 0.319215
Sigma hat: 1.21069
Number of cases: 96
Number of cases used: 92
Degrees of freedom: 76

Summary Analysis of Variance Table
Source        df   SS        MS        F      p-value
Regression    15   52.2336   3.48224   2.38   0.0073
Residual      76   111.398   1.46576
Lack of fit   70   102.043   1.45776   0.93   0.6109
Pure Error     6   9.35474   1.55912

Data set = Dissertation, Name of Fit = L3
Deleted cases are (15 19 22 95)
Normal Regression
Kernel mean function = Identity
Response = log[ROI]
Terms = (ActComplex Date Document Engr log[Invest] LvlReqsUnd NoCust Overhead People ProcExecType ProdCrit Teaming {F}Service)

Coefficient Estimates
Label          Estimate       Std. Error    t-value   p-value
Constant       -22.8923       30.4820       -0.751    0.4549
ActComplex     0.156152       0.486267      0.321     0.7490
Date           0.000769705    0.000780600   0.986     0.3272
Document       -0.0141736     0.293693      -0.048    0.9616
Engr           0.0601040      0.315807      0.190     0.8496
log[Invest]    -0.458050      0.0979429     -4.677    0.0000
LvlReqsUnd     0.0454022      0.378244      0.120     0.9048
NoCust         0.102542       0.0600184     1.709     0.0916
Overhead       0.148630       0.330320      0.450     0.6540
People         -0.420812      0.420468      -1.001    0.3200
ProcExecType   -0.0919271     0.317458      -0.290    0.7729
ProdCrit       0.316013       0.380138      0.831     0.4084
Teaming        -0.392575      0.342346      -1.147    0.2550
{F}Service[2]  -0.409757      0.356212      -1.150    0.2536
{F}Service[3]  -0.599023      0.407563      -1.470    0.1457

R-Squared: 0.319215
Sigma hat: 1.2028
Number of cases: 96
Number of cases used: 92
Degrees of freedom: 77

Summary Analysis of Variance Table
Source        df   SS        MS        F      p-value
Regression    14   52.2336   3.73097   2.58   0.0043
Residual      77   111.398   1.44673
Lack of fit   71   102.043   1.43723   0.92   0.6201
Pure Error     6   9.35474   1.55912

Data set = Dissertation, Name of Fit = L4
Deleted cases are (15 19 22 95)
Normal Regression
Kernel mean function = Identity
Response = log[ROI]
Terms = (ActComplex Date Engr log[Invest] LvlReqsUnd NoCust Overhead People ProcExecType ProdCrit Teaming {F}Service)

Coefficient Estimates
Label          Estimate       Std. Error    t-value   p-value
Constant       -22.5122       29.2577       -0.769    0.4440
ActComplex     0.151647       0.474160      0.320     0.7500
Date           0.000759743    0.000747983   1.016     0.3129
Engr           0.0617024      0.312050      0.198     0.8438
log[Invest]    -0.458135      0.0972986     -4.709    0.0000
LvlReqsUnd     0.0421654      0.369862      0.114     0.9095
NoCust         0.102503       0.0596280     1.719     0.0896
Overhead       0.144722       0.318187      0.455     0.6505
People         -0.415409      0.402689      -1.032    0.3055
ProcExecType   -0.0888457     0.308975      -0.288    0.7745
ProdCrit       0.315332       0.377440      0.835     0.4060
Teaming        -0.388828      0.331283      -1.174    0.2441
{F}Service[2]  -0.407038      0.349472      -1.165    0.2477
{F}Service[3]  -0.597052      0.402911      -1.482    0.1424

R-Squared: 0.319194
Sigma hat: 1.19508
Number of cases: 96
Number of cases used: 92
Degrees of freedom: 78

Summary Analysis of Variance Table
Source        df   SS        MS        F      p-value
Regression    13   52.2302   4.01771   2.81   0.0024
Residual      78   111.401   1.42822
Lack of fit   72   102.047   1.41731   0.91   0.6292
Pure Error     6   9.35474   1.55912

Data set = Dissertation, Name of Fit = L5
Deleted cases are (15 19 22 95)
Normal Regression
Kernel mean function = Identity
Response = log[ROI]
Terms = (ActComplex Date log[Invest] LvlReqsUnd NoCust Overhead People ProcExecType ProdCrit Teaming {F}Service)

Coefficient Estimates
Label          Estimate       Std. Error    t-value   p-value
Constant       -23.0132       28.9700       -0.794    0.4294
ActComplex     0.172906       0.458993      0.377     0.7074
Date           0.000773607    0.000740146   1.045     0.2991
log[Invest]    -0.459331      0.0965181     -4.759    0.0000
LvlReqsUnd     0.0608262      0.355437      0.171     0.8646
NoCust         0.100576       0.0584673     1.720     0.0893
Overhead       0.139114       0.314988      0.442     0.6599
People         -0.414208      0.400187      -1.035    0.3038
ProcExecType   -0.0898839     0.307046      -0.293    0.7705
ProdCrit       0.324035       0.372578      0.870     0.3871
Teaming        -0.382299      0.327623      -1.167    0.2468
{F}Service[2]  -0.411879      0.346487      -1.189    0.2381
{F}Service[3]  -0.607876      0.396740      -1.532    0.1295

R-Squared: 0.318853
Sigma hat: 1.18779
Number of cases: 96
Number of cases used: 92
Degrees of freedom: 79

Summary Analysis of Variance Table
Source        df   SS        MS        F      p-value
Regression    12   52.1744   4.34786   3.08   0.0013
Residual      79   111.457   1.41085
Lack of fit   73   102.102   1.39866   0.90   0.6377
Pure Error     6   9.35474   1.55912

Data set = Dissertation, Name of Fit = L6
Deleted cases are (15 19 22 95)
Normal Regression
Kernel mean function = Identity
Response = log[ROI]
Terms = (Date log[Invest] LvlReqsUnd NoCust Overhead People ProcExecType ProdCrit Teaming {F}Service)

Coefficient Estimates
Label          Estimate       Std. Error    t-value   p-value
Constant       -21.0373       28.3379       -0.742    0.4600
Date           0.000723908    0.000724376   0.999     0.3206
log[Invest]    -0.454876      0.0952758     -4.774    0.0000
LvlReqsUnd     0.0266339      0.341805      0.078     0.9381
NoCust         0.105554       0.0566480     1.863     0.0661
Overhead       0.106707       0.301383      0.354     0.7242
People         -0.434977      0.394239      -1.103    0.2732
ProcExecType   -0.113318      0.299062      -0.379    0.7058
ProdCrit       0.400064       0.311493      1.284     0.2027
Teaming        -0.349372      0.314050      -1.112    0.2693
{F}Service[2]  -0.435159      0.339098      -1.283    0.2031
{F}Service[3]  -0.645243      0.382074      -1.689    0.0952

R-Squared: 0.317629
Sigma hat: 1.1814
Number of cases: 96
Number of cases used: 92
Degrees of freedom: 80

Summary Analysis of Variance Table
Source        df   SS        MS        F      p-value
Regression    11   51.9742   4.72492   3.39   0.0007
Residual      80   111.657   1.39572
Lack of fit   74   102.303   1.38247   0.89   0.6453
Pure Error     6   9.35474   1.55912

Data set = Dissertation, Name of Fit = L7
Deleted cases are (15 19 22 95)
Normal Regression
Kernel mean function = Identity
Response = log[ROI]
Terms = (Date log[Invest] NoCust Overhead People ProcExecType ProdCrit Teaming {F}Service)

Coefficient Estimates
Label          Estimate       Std. Error    t-value   p-value
Constant       -20.1007       25.5045       -0.788    0.4329
Date           0.000700127    0.000652906   1.072     0.2868
log[Invest]    -0.455258      0.0945643     -4.814    0.0000
NoCust         0.105667       0.0562810     1.877     0.0641
Overhead       0.115211       0.279200      0.413     0.6810
People         -0.436549      0.391300      -1.116    0.2679
ProcExecType   -0.111323      0.296130      -0.376    0.7080
ProdCrit       0.395545       0.304163      1.300     0.1971
Teaming        -0.356304      0.299332      -1.190    0.2374
{F}Service[2]  -0.433753      0.336534      -1.289    0.2011
{F}Service[3]  -0.642911      0.378555      -1.698    0.0933

R-Squared: 0.317577
Sigma hat: 1.17413
Number of cases: 96
Number of cases used: 92
Degrees of freedom: 81

Summary Analysis of Variance Table
Source        df   SS        MS        F      p-value
Regression    10   51.9657   5.19657   3.77   0.0003
Residual      81   111.666   1.37859
Lack of fit   74   101.309   1.36903   0.93   0.6157
Pure Error     7   10.3573   1.47962

Data set = Dissertation, Name of Fit = L8
Deleted cases are (15 19 22 95)
Normal Regression
Kernel mean function = Identity
Response = log[ROI]
Terms = (Date log[Invest] NoCust Overhead People ProdCrit Teaming {F}Service)

Coefficient Estimates
Label          Estimate       Std. Error    t-value   p-value
Constant       -20.0029       25.3693       -0.788    0.4327
Date           0.000697984    0.000649454   1.075     0.2857
log[Invest]    -0.460720      0.0929507     -4.957    0.0000
NoCust         0.103091       0.0555692     1.855     0.0672
Overhead       0.0959966      0.273041      0.352     0.7261
People         -0.426537      0.388343      -1.098    0.2753
ProdCrit       0.367031       0.293008      1.253     0.2139
Teaming        -0.309489      0.270768      -1.143    0.2564
{F}Service[2]  -0.454635      0.330175      -1.377    0.1723
{F}Service[3]  -0.663598      0.372567      -1.781    0.0786

R-Squared: 0.316387
Sigma hat: 1.16797
Number of cases: 96
Number of cases used: 92
Degrees of freedom: 82

Summary Analysis of Variance Table
Source        df   SS        MS        F      p-value
Regression     9   51.7709   5.75232   4.22   0.0002
Residual      82   111.861   1.36415
Lack of fit   75   101.503   1.35338   0.91   0.6237
Pure Error     7   10.3573   1.47962

Data set = Dissertation, Name of Fit = L9
Deleted cases are (15 19 22 95)
Normal Regression
Kernel mean function = Identity
Response = log[ROI]
Terms = (Date log[Invest] NoCust People ProdCrit Teaming {F}Service)

Coefficient Estimates
Label          Estimate       Std. Error    t-value   p-value
Constant       -20.4062       25.2092       -0.809    0.4206
Date           0.000708373    0.000645347   1.098     0.2755
log[Invest]    -0.457812      0.0920918     -4.971    0.0000
NoCust         0.101130       0.0549957     1.839     0.0695
People         -0.396965      0.377118      -1.053    0.2956
ProdCrit       0.354040       0.289130      1.225     0.2242
Teaming        -0.311976      0.269243      -1.159    0.2499
{F}Service[2]  -0.437175      0.324691      -1.346    0.1818
{F}Service[3]  -0.636685      0.362689      -1.755    0.0829

R-Squared: 0.315356
Sigma hat: 1.16179
Number of cases: 96
Number of cases used: 92
Degrees of freedom: 83

Summary Analysis of Variance Table
Source        df   SS        MS        F      p-value
Regression     8   51.6022   6.45028   4.78   0.0001
Residual      83   112.029   1.34975
Lack of fit   76   101.672   1.33779   0.90   0.6319
Pure Error     7   10.3573   1.47962

Data set = Dissertation, Name of Fit = L10
Deleted cases are (15 19 22 95)
Normal Regression
Kernel mean function = Identity
Response = log[ROI]
Terms = (Date log[Invest] NoCust ProdCrit Teaming {F}Service)

Coefficient Estimates
Label          Estimate       Std. Error    t-value   p-value
Constant       -15.4989       24.7904       -0.625    0.5335
Date           0.000587685    0.000635489   0.925     0.3577
log[Invest]    -0.480932      0.0894917     -5.374    0.0000
NoCust         0.0705223      0.0467103     1.510     0.1349
ProdCrit       0.330525       0.288451      1.146     0.2551
Teaming        -0.309210      0.269403      -1.148    0.2543
{F}Service[2]  -0.453492      0.324529      -1.397    0.1660
{F}Service[3]  -0.649226      0.362726      -1.790    0.0771

R-Squared: 0.306217
Sigma hat: 1.16253
Number of cases: 96
Number of cases used: 92
Degrees of freedom: 84

Summary Analysis of Variance Table
Source        df   SS        MS        F      p-value
Regression     7   50.1067   7.1581    5.30   0.0001
Residual      84   113.525   1.35149
Lack of fit   77   103.168   1.33984   0.91   0.6310
Pure Error     7   10.3573   1.47962

Data set = Dissertation, Name of Fit = L11
Deleted cases are (15 19 22 95)
Normal Regression
Kernel mean function = Identity
Response = log[ROI]
Terms = (log[Invest] NoCust ProdCrit Teaming {F}Service)

Coefficient Estimates
Label          Estimate       Std. Error    t-value   p-value
Constant       7.41182        0.889538      8.332     0.0000
log[Invest]    -0.492510      0.0885360     -5.563    0.0000
NoCust         0.0807954      0.0453314     1.782     0.0783
ProdCrit       0.320308       0.287994      1.112     0.2692
Teaming        -0.288256      0.268220      -1.075    0.2855
{F}Service[2]  -0.456942      0.324231      -1.409    0.1624
{F}Service[3]  -0.627384      0.361648      -1.735    0.0864

R-Squared: 0.299153
Sigma hat: 1.16154
Number of cases: 96
Number of cases used: 92
Degrees of freedom: 85

Summary Analysis of Variance Table
Source        df   SS        MS        F      p-value
Regression     6   48.9509   8.15848   6.05   0.0000
Residual      85   114.681   1.34918
Lack of fit   76   101.16    1.33105   0.89   0.6466
Pure Error     9   13.521    1.50233

Data set = Dissertation, Name of Fit = L12
Deleted cases are (15 19 22 95)
Normal Regression
Kernel mean function = Identity
Response = log[ROI]
Terms = (log[Invest] NoCust ProdCrit {F}Service)

Coefficient Estimates
Label          Estimate       Std. Error    t-value   p-value
Constant       7.10101        0.841971      8.434     0.0000
log[Invest]    -0.471415      0.0864106     -5.456    0.0000
NoCust         0.0785368      0.0453235     1.733     0.0867
ProdCrit       0.298784       0.287555      1.039     0.3017
{F}Service[2]  -0.458967      0.324518      -1.414    0.1609
{F}Service[3]  -0.560167      0.356519      -1.571    0.1198

R-Squared: 0.28963
Sigma hat: 1.16259
Number of cases: 96
Number of cases used: 92
Degrees of freedom: 86

Summary Analysis of Variance Table
Source        df   SS        MS        F      p-value
Regression     5   47.3926   9.47852   7.01   0.0000
Residual      86   116.239   1.35162
Lack of fit   76   102.694   1.35124   1.00   0.5507
Pure Error    10   13.5449   1.35449

Data set = Dissertation, Name of Fit = L13
Deleted cases are (15 19 22 95)
Normal Regression
Kernel mean function = Identity
Response = log[ROI]
Terms = (log[Invest] NoCust {F}Service)

Coefficient Estimates
Label          Estimate       Std. Error    t-value   p-value
Constant       7.31491        0.816790      8.956     0.0000
log[Invest]    -0.468128      0.0863921     -5.419    0.0000
NoCust         0.0798979      0.0453253     1.763     0.0815
{F}Service[2]  -0.504114      0.321743      -1.567    0.1208
{F}Service[3]  -0.744789      0.309228      -2.409    0.0181

R-Squared: 0.280712
Sigma hat: 1.16312
Number of cases: 96
Number of cases used: 92
Degrees of freedom: 87

Summary Analysis of Variance Table
Source        df   SS        MS        F      p-value
Regression     4   45.9334   11.4833   8.49   0.0000
Residual      87   117.698   1.35285
Lack of fit   75   102.676   1.36901   1.09   0.4630
Pure Error    12   15.0223   1.25185
Abstract
The objective of the research project discussed in this document was to develop a theoretical and empirical model that could be used to predict the results of Lean Six Sigma implementation efforts in a knowledge-intensive environment. Some previous research had attempted to develop a theoretical model for quality management; however, the results were narrowly focused on specific tools and emphasized manufacturing environments. This research project developed a generalized manner in which any process can be modeled using the people, process-activity, customer, and information-sharing elements that describe it. Processes modeled in this manner can then be assessed with respect to a Lean Six Sigma implementation, and the results of that implementation hypothesized.
Asset Metadata
Creator: Dhallin, Arthur James (author)
Core Title: The identification, validation, and modeling of critical parameters in lean six sigma implementations
School: Viterbi School of Engineering
Degree: Doctor of Philosophy
Degree Program: Industrial and Systems Engineering
Publication Date: 05/04/2011
Defense Date: 02/14/2011
Publisher: University of Southern California (original); University of Southern California. Libraries (digital)
Tag: lean six sigma, OAI-PMH Harvest, process modeling, quality constructs, quality management
Language: English
Contributor: Electronically uploaded by the author (provenance)
Advisor: Settles, F. Stan (committee chair); Friedman, George J. (committee member); Kumar, K. Ravi (committee member); Meshkati, Najmedin (committee member); Moore, James Elliott, II (committee member)
Creator Email: artdhallin@gmail.com, dhallin@usc.edu
Permanent Link (DOI): https://doi.org/10.25549/usctheses-m3898
Unique identifier: UC1204267
Identifier: etd-Dhallin-4546 (filename); usctheses-m40 (legacy collection record id); usctheses-c127-473315 (legacy record id); usctheses-m3898 (legacy record id)
Legacy Identifier: etd-Dhallin-4546.pdf
Dmrecord: 473315
Document Type: Dissertation
Rights: Dhallin, Arthur James
Type: texts
Source: University of Southern California (contributing entity); University of Southern California Dissertations and Theses (collection)
Repository Name: Libraries, University of Southern California
Repository Location: Los Angeles, California
Repository Email: cisadmin@lib.usc.edu